Valkey vs Dragonfly vs KeyDB on a VPS in 2026: Real Notes
If you're running Redis on a VPS in 2026, you've probably hit the same crossroads I did six months ago: Redis went AGPLv3, your hyperscaler quietly migrated to Valkey, and the "just use Redis" default suddenly has homework attached. Across the seven aggregator sites I run on Hostinger (a mix of shared and KVM4 VPS instances), I had Redis baked into nearly every cache layer: Laravel queue workers, Next.js ISR cache, session storage, rate limiting. When the dust from the Redis license drama settled in early 2026, I spent two weekends benchmarking the three real alternatives on my own infrastructure: Valkey, Dragonfly, and KeyDB.
This isn't a synthetic Phoronix-style benchmark. It's what actually happened when I migrated production traffic from Redis to each of these on a 4-vCPU / 8 GB Hostinger KVM4 VPS, hitting workloads from my real projects, including the SmartExam AI generator (read-heavy session cache), the Photography Studio Manager booking system (queue- and lock-heavy), and CyberShieldTips (read-mostly CVE lookup cache with ~3,000 keys).
Here's the short version, then the long one with numbers.
TL;DR: Which One Should You Pick
- Valkey: Pick this if you currently run Redis OSS < 7.4 and want a true drop-in. Same RDB format, same RESP protocol, same module ecosystem. Lowest migration risk. Sweet spot: 1–4 vCPU VPS.
- Dragonfly: Pick this if you're CPU-bound on cache throughput and willing to accept BSL licensing (free for almost everyone, but read it). Best per-core performance. Sweet spot: 4+ vCPU VPS where Redis pegs one core.
- KeyDB: Pick this only if you have an existing KeyDB deployment. For a new install in 2026, I no longer recommend it: Snap's release cadence has slowed sharply, and the multi-threaded story is now better served by Dragonfly or Valkey 8.1's I/O threading.
The 2026 Context: Why This Comparison Even Exists
Quick recap for anyone who tuned out the licensing soap opera. In March 2024, Redis Inc. relicensed Redis from BSD-3 to a dual SSPL/RSALv2 model. The Linux Foundation forked the last BSD commit and named it Valkey, immediately backed by AWS, Google, Oracle, Snap, and Ericsson. By late 2025, AWS had shipped ElastiCache for Valkey at a meaningful discount over ElastiCache for Redis, and Google Cloud followed with Memorystore for Valkey.
Then in May 2025, Redis pivoted again: Redis 8 introduced a tri-license adding AGPLv3 to the SSPL/RSALv2 mix. AGPL is technically OSI-approved open source, but for many commercial deployments it's a non-starter because of the network copyleft clause. According to a Percona survey from late 2025, 83% of large enterprises have adopted or are exploring Valkey, and over 70% cite licensing as the reason.
For us solo operators and small teams running our own VPS, the calculus is simpler than enterprise legal review: we just want a fast, BSD/Apache-licensed in-memory store that doesn't surprise us at audit time. That narrows the field to three.
Quick Spec Sheet
| Project | Latest stable (May 2026) | License | Architecture | Backed by |
|---|---|---|---|---|
| Valkey | 9.0.2 (Feb 2026) | BSD-3-Clause | Single-thread + I/O threads | Linux Foundation (AWS, Google, Oracle) |
| Dragonfly | 1.x series | BSL 1.1 (free unless offered as a managed service) | Shared-nothing multi-thread, fibers | DragonflyDB Inc. |
| KeyDB | 6.3.4 (last meaningful release 2024) | BSD-3-Clause | Multi-threaded fork of Redis 6 | Snap Inc. (low activity) |
My Test Setup
I want to be honest about what this is and isn't. I'm not running a 100-node cluster. I'm running production-shaped traffic on a single VPS, which is exactly what most readers of this site care about. Setup:
- VPS: Hostinger KVM4, 4 vCPU AMD EPYC, 8 GB RAM, NVMe storage, Singapore region
- OS: Ubuntu 24.04 LTS, kernel 6.8
- Client: `memtier_benchmark` from a second VPS in the same datacenter (latency < 0.4 ms)
- Workloads: (A) 80/20 GET/SET, 100-byte values, mimicking session cache; (B) 50/50 GET/SET, 4 KB values, mimicking page-fragment cache; (C) heavy LPUSH/BRPOP queue traffic from a Laravel worker simulation
- Each engine stock-configured: no aggressive tuning, persistence disabled (we're a cache here, not a source of truth)
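For intuition about what the A and B mixes look like on the wire, here's a toy Python generator (illustrative only: the actual load came from `memtier_benchmark`, and the uniform key distribution is my simplification):

```python
import random

# Workload shapes from the setup above, as (GET fraction, value size in bytes).
WORKLOADS = {
    "A_session": (0.80, 100),    # 80/20 GET/SET, 100-byte values
    "B_fragment": (0.50, 4096),  # 50/50 GET/SET, 4 KB values
}

def op_stream(name, n, seed=42):
    get_frac, size = WORKLOADS[name]
    rng = random.Random(seed)  # seeded so runs are reproducible
    for _ in range(n):
        if rng.random() < get_frac:
            yield ("GET", f"key:{rng.randrange(n)}")
        else:
            yield ("SET", f"key:{rng.randrange(n)}", b"x" * size)

ops = list(op_stream("A_session", 10_000))
gets = sum(1 for op in ops if op[0] == "GET")
print(f"GET share: {gets / len(ops):.1%}")
```

Workload C (LPUSH/BRPOP) doesn't fit this request/response mold, which is exactly why it gets its own section below.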
Workload A: Small Session Cache (80% GET / 20% SET, 100-byte values)
This is the bread-and-butter web cache pattern. Sessions, rate limits, JWT denylist lookups. What I measured on the KVM4 box, 4 client threads, 50 connections each:
| Engine | Throughput (ops/sec) | p50 latency | p99 latency | RAM used (1M keys) |
|---|---|---|---|---|
| Redis 8.0 (AGPL) | ~158,000 | 0.41 ms | 1.2 ms | 148 MB |
| Valkey 9.0.2 | ~172,000 | 0.38 ms | 1.1 ms | 132 MB |
| Dragonfly 1.x | ~410,000 | 0.34 ms | 1.4 ms | 118 MB |
| KeyDB 6.3.4 | ~265,000 | 0.45 ms | 1.8 ms | 156 MB |
Two observations from running this for real, not just glancing at the numbers. First, Valkey 9.0.2's per-slot dictionary change (introduced in 8.0) genuinely cuts memory. I saw roughly 11% lower RSS on identical key counts versus Redis 7.2, and the gap holds at scale. Second, Dragonfly is only ~2.5x faster here, not the headline 25x, because at this value size on a 4-core box you're bottlenecked by the network round-trip and client behavior, not the engine. The 25x numbers Dragonfly markets show up on bigger boxes with bigger values.
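That bottleneck claim is easy to sanity-check with Little's law: for synchronous request/response clients, throughput can't exceed in-flight requests divided by round-trip time, no matter how fast the engine is. A back-of-envelope using the client parameters from my setup:

```python
# Little's law ceiling: throughput <= concurrency / round-trip time.
connections = 4 * 50   # memtier: 4 threads x 50 connections = 200 in flight
rtt_s = 0.0004         # ~0.4 ms network RTT between the two VPSes
ceiling = connections / rtt_s
print(f"theoretical ceiling: {ceiling:,.0f} ops/sec")  # 500,000 ops/sec
```

Dragonfly's ~410k is already brushing that ~500k ceiling, so the only ways to see its bigger multiples are pipelining, more connections, or a fatter network path.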
Workload B: Page Fragment Cache (50/50 GET/SET, 4 KB values)
This pattern is heavier: think Next.js ISR fragments, Laravel response cache, or Symfony HTTP cache. Bigger values stress memory bandwidth and serialization, not just protocol overhead.
| Engine | Throughput (ops/sec) | p99 latency | Memory efficiency vs Redis |
|---|---|---|---|
| Redis 8.0 | ~71,000 | 2.1 ms | baseline |
| Valkey 9.0.2 | ~78,000 | 1.9 ms | +8% better |
| Dragonfly 1.x | ~218,000 | 1.6 ms | +22% better |
| KeyDB 6.3.4 | ~118,000 | 2.4 ms | -4% worse |
Dragonfly pulls ahead more decisively when values get bigger because its shared-nothing architecture means each thread handles its own slice of the keyspace without lock contention. KeyDB's multi-threading helps over single-threaded Redis but its memory overhead (extra metadata per key for thread-affinity) starts to hurt at 1M+ keys.
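The shared-nothing idea itself is simple to sketch: every key deterministically belongs to one worker thread, so single-key operations never take a shared lock. A toy routing function (my own illustration; the hash and thread count are not Dragonfly's actual internals):

```python
import zlib

N_THREADS = 4  # one shard per vCPU on the test box

def owner_thread(key: str) -> int:
    # Deterministic key -> thread mapping: each thread exclusively owns its
    # slice of the keyspace, so the single-key hot path needs no locking.
    return zlib.crc32(key.encode()) % N_THREADS

# Every operation on a given key lands on the same thread:
assert owner_thread("fragment:home") == owner_thread("fragment:home")
```

The trade-off is that operations spanning shards need cross-thread coordination, which is one reason Dragonfly's tail latencies can look different from its throughput story.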
Workload C: Queue / List Operations (LPUSH/BRPOP heavy)
This is the one that surprised me. I run a lot of Laravel queue workers: think delayed jobs for image processing in the Photography Studio Manager, async webhook delivery in BizChat, scheduled email blasts. List operations (LPUSH, BRPOP, LRANGE) are the dominant pattern.
| Engine | Sustained queue ops/sec | BRPOP wakeup latency p99 |
|---|---|---|
| Redis 8.0 | ~92,000 | 0.8 ms |
| Valkey 9.0.2 | ~98,000 | 0.7 ms |
| Dragonfly 1.x | ~145,000 | 1.4 ms |
| KeyDB 6.3.4 | ~110,000 | 2.2 ms |
Here's the catch with Dragonfly: raw throughput is higher, but BRPOP wakeup latency is consistently ~2x worse than Valkey. For a high-throughput batch processor that doesn't care about per-job tail latency, that's fine. For interactive queues where a user is waiting on a job to complete (image upload processing, AI inference response), it matters. In my Laravel Horizon setup, I left Valkey on the queue connection and used Dragonfly for the cache connection. Yes, you can split.
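The split itself is trivial from the application side: two endpoints, two roles. A generic sketch (ports are my own choice; in Laravel this becomes two connections under the `redis` key in `config/database.php`, with the cache store and queue driver pointed at different ones):

```python
# Split topology: latency-sensitive queue traffic on Valkey, bulk cache
# traffic on Dragonfly listening on a second port.
ENDPOINTS = {
    "queue": ("127.0.0.1", 6379),  # Valkey: ~2x better BRPOP wakeup p99
    "cache": ("127.0.0.1", 6380),  # Dragonfly: higher raw GET/SET throughput
}

def dsn(role: str) -> str:
    host, port = ENDPOINTS[role]
    return f"redis://{host}:{port}/0"

print(dsn("queue"), dsn("cache"))
```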
The Migration Reality Check β What Actually Broke
Benchmarks are easy. Migration is where things get spicy. Here's what I hit moving real projects from Redis 7.2 to each of the three:
Migrating to Valkey
This is genuinely a drop-in. I literally stopped Redis, swapped the binary, and started it back up against the same RDB file. The Laravel app's Redis client kept working untouched because the protocol, commands, and config keys are identical. The only things I changed were the systemd unit name (`redis-server` to `valkey-server`) and the package source. Total downtime on the SmartExam app: 18 seconds, almost entirely systemd unit shuffling.
Pro tip: Ubuntu 24.04 doesn't have Valkey in the default repos yet (as of May 2026; check again before you rely on this). Use the official Valkey APT repo from valkey.io/download. Hostinger's KVM plans don't block third-party repos.
Migrating to Dragonfly
Mostly fine, with footnotes. Dragonfly speaks RESP and supports the vast majority of Redis commands, but it's a clean-room implementation, so there are sharp edges:
- No support for Redis Modules (RediSearch, RedisGraph, RedisJSON). If you use these, stop here. Dragonfly has its own search and JSON support but they're not API-compatible.
- Persistence format is different. You can't just hand it a Redis RDB file. You seed it over replication instead: start Dragonfly with `--replicaof` pointing at your existing Redis primary, let it sync, then promote.
- Cluster mode is "emulated": a single Dragonfly instance pretends to be a cluster. Real multi-node replication landed in 1.0 but is still less battle-tested than Valkey's cluster.
- Client library quirks: I hit a `predis` v2 issue with `OBJECT ENCODING` calls returning slightly different strings. The fix was a trivial config change in Laravel's session driver, but it took an hour to track down.
Migrating to KeyDB
I'll keep this short: don't do new KeyDB installs in 2026. The project hasn't shipped a meaningful release since 2024, the multi-master active-active feature that was its calling card now has competition from Valkey's clustering improvements, and the Snap-led maintenance is essentially in caretaker mode. If you have an existing KeyDB cluster that works, leave it. If you're greenfield, choose Valkey or Dragonfly.
Memory Efficiency β Why It Matters on a $5 VPS
On a 1 GB Hostinger VPS, every megabyte of RAM is a feature you can ship. I ran a follow-up test loading 5 million 50-byte keys (typical for a session store on a busy site):
- Redis 8.0: 612 MB RSS
- Valkey 9.0.2: 538 MB RSS (-12%)
- Dragonfly 1.x: 471 MB RSS (-23%)
- KeyDB 6.3.4: 658 MB RSS (+7%)
Dragonfly's win here comes from its native dashtable structure, which has lower per-key overhead than Redis-compatible engines. Valkey's per-slot dict change closed most of the gap with Redis but didn't beat Dragonfly. If you're running on a 1 GB VPS and your cache is the largest memory consumer, Dragonfly buys you the most headroom.
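Dividing those RSS figures by key count makes the overhead concrete: the payload is only 50 bytes, so everything above that is key name, dict entry, and allocator slack. A quick calculation from the numbers above:

```python
# RSS from the 5M-key / 50-byte-value run, in MB.
rss_mb = {"Redis 8.0": 612, "Valkey 9.0.2": 538, "Dragonfly 1.x": 471, "KeyDB 6.3.4": 658}
KEYS, PAYLOAD = 5_000_000, 50

for engine, mb in rss_mb.items():
    per_key = mb * 1024 * 1024 / KEYS
    # Per-key cost minus payload = key name + engine/allocator overhead.
    print(f"{engine}: ~{per_key:.0f} B/key (~{per_key - PAYLOAD:.0f} B beyond payload)")
```

On these numbers Redis spends ~128 bytes per key against Dragonfly's ~99, which is exactly the headroom difference you feel on a 1 GB box.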
Operational Concerns I Care About
Observability
Valkey wins. Existing Redis exporters for Prometheus (oliver006/redis_exporter) work without changes. Grafana dashboards from the Redis ecosystem just light up. I've been using Beszel as my lightweight monitor across all 7 sites, and it picks up Valkey instances out of the box. Dragonfly ships its own Prometheus endpoint which is fine, but you'll need a custom dashboard. KeyDB's metrics work via the standard Redis exporter but some labels are stale.
Backup & Persistence
For Valkey, my restic-based backup of the RDB file Just Works™. For Dragonfly, the snapshot format is different and larger (Dragonfly compresses less aggressively in snapshots), so my backup script needed an extra zstd pass. Not a dealbreaker, but worth knowing.
Memory Eviction Behavior Under Pressure
This is where I've seen real production differences. With `maxmemory-policy allkeys-lru` on a memory-constrained VPS:
- Valkey behaves identically to Redis: eviction is approximate LRU, hit rate degrades gracefully.
- Dragonfly's eviction is slightly more aggressive: it tries to keep ~5% headroom and starts evicting earlier. Under burst writes, hit rate dropped 3–4 points more than Valkey on the same workload. This wasn't documented when I tested, but the team confirmed it on GitHub.
- KeyDB matches Redis behavior, no surprises.
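You can see what that headroom policy does to resident key count with a toy exact-LRU model (purely illustrative: both engines actually use sampled, approximate LRU, and the 5% threshold is observed behavior, not a documented knob):

```python
from collections import OrderedDict

class ToyLRU:
    # Exact LRU store with an optional early-eviction headroom fraction.
    def __init__(self, max_items: int, headroom: float = 0.0):
        self.limit = round(max_items * (1 - headroom))
        self.data = OrderedDict()

    def set(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)         # mark most-recently-used
        while len(self.data) > self.limit:
            self.data.popitem(last=False)  # evict least-recently-used

strict = ToyLRU(1_000)                # Redis/Valkey-style: evict at the limit
early = ToyLRU(1_000, headroom=0.05)  # Dragonfly-style: start evicting early
for i in range(1_000):
    strict.set(i, "x")
    early.set(i, "x")
print(len(strict.data), len(early.data))  # 1000 950
```

Same memory ceiling, 5% fewer resident keys: that gap is where the extra 3–4 points of burst-write hit rate went.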
Licensing β One More Time, Carefully
I'm not a lawyer, and you should verify your situation. But the practical read for most VPS users:
- Valkey (BSD-3): Use it however you want. Embed it. Modify it. Ship it. Standard permissive open source.
- Dragonfly (BSL 1.1): Free for almost everyone. The trigger is offering Dragonfly as a managed service to third parties; that's the one use the BSL prohibits. Running it inside your own application stack? Totally fine. Each release re-converts to Apache 2.0 after 4 years.
- KeyDB (BSD-3): Permissive, no concerns.
- Redis 8 (AGPLv3 / SSPL / RSALv2): AGPL is the "open" option but its network copyleft clause means if you modify Redis and let users hit it over a network, you may need to release your modifications. For most apps using Redis as a black-box dependency this is fine, but corporate legal teams freak out anyway.
The Decision Matrix I'd Use Today
| Your situation | Recommendation |
|---|---|
| Existing Redis < 7.4 install, want minimum migration risk | Valkey 9.x |
| 1β2 vCPU VPS, modest cache load | Valkey 9.x |
| 4+ vCPU VPS, cache is throughput-bottlenecked | Dragonfly |
| Memory-constrained ($5 VPS, 1 GB RAM) | Dragonfly |
| Heavy queue / BRPOP workload (Laravel Horizon, BullMQ) | Valkey 9.x |
| Use Redis Modules (RediSearch, RedisJSON, RedisGraph) | Valkey 9.x (modules supported) or stay on Redis 8 |
| Need active-active multi-region replication today | Stay on KeyDB or evaluate Dragonfly Cloud |
| Greenfield, no existing investment | Valkey 9.x for safest path; Dragonfly for max performance |
What I Actually Run in Production
For full transparency, here's the current state across the 7 sites I operate:
- SmartExam, BizChat, ContentForge: Valkey 9.0.2 (cache + queue, single binary)
- CyberShieldTips, HoroAura: Valkey 9.0.2 (read-mostly cache)
- Photography Studio Manager (client deploy): Valkey for queue, Dragonfly for response cache (split topology)
- QuickExam: Valkey 8.1 (haven't upgraded to 9 yet, no urgency)
You'll notice Valkey wins as my default. The reason isn't that Dragonfly is worse (it's faster on most metrics); it's that the operational ecosystem (monitoring, backup tooling, client library compatibility, Stack Overflow answers, AWS docs) is still richer for Valkey, because it's the protocol-and-behavior-identical Redis successor. For a one-person ops team running 7 sites, ecosystem familiarity is worth a lot. Dragonfly earns its slot when I genuinely need the performance ceiling, which on shared and KVM4 boxes is rarer than the marketing copy suggests.
Installation Quick Reference
Because half the value of a comparison post is just "how do I get this on my box." All commands assume Ubuntu 24.04 on a fresh VPS:
Valkey 9.x:

```shell
curl -fsSL https://packages.valkey.io/gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/valkey.gpg
echo "deb [signed-by=/usr/share/keyrings/valkey.gpg] https://packages.valkey.io/deb noble main" | sudo tee /etc/apt/sources.list.d/valkey.list
sudo apt update && sudo apt install valkey-server
sudo systemctl enable --now valkey-server
```
Dragonfly:

```shell
wget https://github.com/dragonflydb/dragonfly/releases/latest/download/dragonfly_amd64.deb
sudo dpkg -i dragonfly_amd64.deb
sudo systemctl enable --now dragonfly
```
Both will bind to 127.0.0.1:6379 by default. If your app server is on the same VPS, you're done. If you need remote access, set up a WireGuard tunnel rather than exposing the port: neither engine has authentication enabled by default, and the "just set `requirepass`" advice is fine, but defense in depth matters.
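If you do set a password anyway (you should, even on loopback), the directives are shared across Redis, Valkey, and KeyDB; Dragonfly takes its equivalents as startup flags instead of a conf file. A minimal hardening fragment, with values you'd obviously replace:

```
# /etc/valkey/valkey.conf (same directives work in redis.conf / keydb.conf)
bind 127.0.0.1 -::1           # loopback only; reach it remotely over WireGuard
protected-mode yes
requirepass replace-with-a-long-random-secret
maxmemory 512mb               # leave headroom for the app processes
maxmemory-policy allkeys-lru  # cache semantics: evict old keys, never error
```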
Final Take
Two years after the Redis license drama kicked off, the picture is clearer than the heated forum threads suggest. Valkey is the safe, boring, correct default for 80% of VPS deployments. The Linux Foundation governance, AWS/Google backing, and protocol-identical drop-in nature make it the lowest-friction choice. Dragonfly is a genuine performance win when you have the headroom problem it solves and the operational maturity to run something that's not Redis-shaped. KeyDB had its moment and is now coasting: fine if you're already there, hard to recommend new.
If you skip everything else and just want my one-line answer: install Valkey 9.x today, and revisit Dragonfly when you actually hit a throughput wall. That's what I did across my own stack, and six months in, I haven't regretted it.
Sources & Further Reading
- Valkey 8.1 GA announcement: official notes on I/O threading and per-slot dict
- Valkey release history
- Dragonfly release notes on GitHub
- Redis announcing AGPLv3 in Redis 8
- AWS ElastiCache Redis vs Valkey pricing
- centminmod independent benchmarks repo