Caddy vs Nginx vs Traefik in 2026: Which Reverse Proxy Wins on a Real VPS?
Three years ago I picked Nginx for everything because it was the default in every Laravel deployment guide I had ever read. Today I am running a mix of Caddy, Nginx, and Traefik across my seven aggregator sites and roughly fifteen client production servers, and I genuinely cannot recommend a single winner anymore. The right reverse proxy in 2026 depends on what you are deploying, how often the topology changes, and how much patience you have for editing config files at 2 AM when a TLS certificate expires.
This is not a synthetic benchmark from a 64-core bare-metal box. This is what I learned after migrating production traffic across all three on Hostinger VPS plans, a few Hetzner CX22 nodes, and a couple of Contabo VPS-S boxes that I use for staging. If you have been wondering whether Caddy is finally ready, whether Traefik is worth the complexity, or whether Nginx is still the safe pick, keep reading.
Why I Stopped Defaulting to Nginx in 2024
For the first eight years of my career I shipped Nginx config the same way: copy the previous server block, swap the domain name, regenerate the Let's Encrypt cert with Certbot, reload, done. That worked. It still works. The reason I started looking around was not performance (Nginx is still the fastest of the three for raw throughput on a small VPS); it was operational drag.
By 2024 I was juggling Laravel apps, Vue dashboards, a couple of Next.js sites, and a small fleet of Go services on the wardigi.com infrastructure. Every new subdomain meant another vhost file, another Certbot run, another reload, and another thing I had to remember to back up. When I added the seventh aggregator site to my Hostinger setup, I realized I had forty-three Nginx server blocks across various boxes and I could not tell you what half of them did without grepping.
That was the point I started testing Caddy seriously, and a few months later I added Traefik to the rotation for one specific client deployment that was running everything in Docker. Three years and a lot of broken configs later, here is the breakdown.
The Three Contenders, Honestly
Nginx
Still the de-facto standard. Written in C, with an event-driven worker-process model, a tiny memory footprint, and a decade and a half of hardening from internet-scale deployments. The official mainline release in 2026 is 1.27.x, with QUIC and HTTP/3 stable since the 1.25 series. On my Hostinger KVM-2 box (2 vCPU, 8 GB RAM) a fresh Nginx install idles at around 8 to 12 MB of resident memory with three vhosts loaded. That is not a typo.
What you give up: there is no automatic HTTPS. You bolt on Certbot. The configuration language is its own dialect that you have to keep relearning. Dynamic backends require either the commercial Plus version or a hack involving the resolver directive plus a variable in proxy_pass, neither of which is fun.
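For the curious, that resolver-plus-variable workaround looks roughly like this (a sketch; the upstream hostname and resolver address are illustrative). Putting the backend in a variable forces Nginx to re-resolve it at request time instead of caching the IP once at startup:

```nginx
# Re-resolve the backend hostname periodically instead of once at boot.
resolver 127.0.0.11 valid=10s;   # e.g. Docker's embedded DNS

server {
    listen 80;
    server_name example.com;

    location / {
        # A variable in proxy_pass defers DNS resolution to request time.
        set $backend "http://api-service:3000";
        proxy_pass $backend;
    }
}
```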
Caddy
Caddy 2 hit a stable rhythm a while ago and the v2.8.x line in 2026 feels like the most polished release the project has shipped. It is written in Go, ships as a single static binary, and the killer feature is still the same as day one: automatic HTTPS via Let's Encrypt or ZeroSSL with no extra tooling. You write a Caddyfile, point a domain at your server, and Caddy provisions the cert on first request. No cron jobs, no certbot.timer, no renewal failures at 3 AM.
On the same Hostinger box, idle Caddy with the same three sites sits around 28 to 35 MB of resident memory. Heavier than Nginx, but for what you get back, I will pay that tax every day.
Traefik
Traefik v3.x is the Docker-native option. It watches the Docker socket (or Kubernetes API, or Consul, or a static file) and reconfigures itself when containers come and go. You add labels to your container, Traefik picks them up, requests a cert, and starts routing traffic. There is no config to edit when you deploy a new app β you just spin up the container.
The cost is real. Idle Traefik on the same hardware sits around 75 to 110 MB depending on how many providers you have wired in. The dashboard is informative but the routing rules use their own DSL that takes a week to feel natural. And when something goes wrong, the logs are denser than Nginx error logs by a wide margin.
What I Actually Measured on a $9 VPS
I wanted real numbers from gear I already pay for, not from someone else's c5.4xlarge in us-east-1. So I ran the same test on my Hostinger KVM-2 box: 2 vCPU, 8 GB RAM, NVMe storage, AlmaLinux 9. Each proxy was put in front of an identical PHP-FPM 8.3 backend serving a small Laravel route that returned a 2 KB JSON response. The load tool was wrk, run from a separate Hetzner CX22 in the same datacenter region to keep latency honest.
Test command: wrk -t4 -c100 -d60s https://test.cloudhostreview-lab.example/api/ping
| Reverse Proxy | Requests/sec | p50 latency | p99 latency | Idle RAM |
|---|---|---|---|---|
| Nginx 1.27.4 | 14,820 | 6.1 ms | 22 ms | 11 MB |
| Caddy 2.8.4 | 11,940 | 7.8 ms | 31 ms | 32 MB |
| Traefik 3.3.1 | 9,610 | 9.6 ms | 44 ms | 96 MB |
Two takeaways from those numbers. First, Nginx is still measurably faster: roughly 24 percent ahead of Caddy and 54 percent ahead of Traefik on this exact workload. Second, and this is the part that matters, none of the sites I run behind Caddy or Traefik come anywhere close to saturating that throughput. My busiest aggregator site does about 380 requests per minute at peak. Even Traefik on a $4 Contabo VPS-S would handle that without sweating.
If you are running a SaaS that does ten million requests per day on a single node, the gap matters. For most small-business projects, including six of my seven aggregator sites, it does not.

The Config Comparison Nobody Wants to Show You
The fairest way to compare these is to set up the same task in each: serve a Laravel app at example.com with HTTPS, a static React frontend at app.example.com, and proxy /api to a Node service on port 3000. Here is what the config looks like in each tool.
Nginx (with Certbot already run)
server {
    listen 443 ssl;
    http2 on;
    server_name example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    root /var/www/laravel/public;
    index index.php;

    location / { try_files $uri $uri/ /index.php?$query_string; }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php8.3-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

server {
    listen 443 ssl;
    http2 on;
    server_name app.example.com;
    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    root /var/www/react/dist;

    location / { try_files $uri /index.html; }
    location /api/ { proxy_pass http://127.0.0.1:3000/; }
}
Plus a separate port 80 server block for HTTP-to-HTTPS redirects, plus the Certbot setup, plus the renewal cron. That is 40-plus lines of config before you have done anything interesting.
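For completeness, that redirect block is short, but it is still one more thing to maintain:

```nginx
# HTTP-to-HTTPS redirect for both hosts.
server {
    listen 80;
    server_name example.com app.example.com;
    return 301 https://$host$request_uri;
}
```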
Caddy
example.com {
    root * /var/www/laravel/public
    php_fastcgi unix//var/run/php/php8.3-fpm.sock
    file_server
}

app.example.com {
    handle /api/* {
        reverse_proxy 127.0.0.1:3000
    }
    handle {
        root * /var/www/react/dist
        try_files {path} /index.html
        file_server
    }
}
That is the entire config. Automatic HTTPS. Automatic HTTP-to-HTTPS redirects. Renewal handled by the same process, and gzip or zstd compression is a single encode directive away if you want it. One detail worth knowing: the /api routes go in their own handle block because Caddy runs try_files before handle in its default directive order, so a site-level try_files would rewrite API requests to /index.html before they ever reached the proxy. The first time I shipped a Caddyfile to production I assumed I was missing something, but no, this really does work out of the box.
Traefik (with Docker labels)
Traefik does not have a standalone config in the same sense. You write a static file for the entrypoints and ACME setup, then attach labels to your Docker containers:
services:
  laravel:
    image: my-laravel:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.laravel.rule=Host(`example.com`)"
      - "traefik.http.routers.laravel.tls=true"
      - "traefik.http.routers.laravel.tls.certresolver=letsencrypt"

  react-app:
    image: my-react:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.react.rule=Host(`app.example.com`)"
      - "traefik.http.routers.react.tls.certresolver=letsencrypt"
If you live in Docker Compose, this is wonderful: a new app means adding a service with five labels, and Traefik wires up TLS plus routing without you touching anything. If you do not live in Docker, Traefik is the wrong tool.
Where Each One Actually Wins
Pick Nginx when
- You are running on a tiny VPS (a $4-per-month Contabo or a Hostinger KVM-1) and every megabyte of RAM matters because you are also running PHP-FPM, MySQL, and Redis on the same box.
- You need extreme tuning for static asset delivery, where the C-level micro-optimizations and aggressive caching directives still beat Go-based proxies.
- Your team already knows Nginx. The cost of retraining is real, and Nginx will do anything Caddy or Traefik can do; it just takes more lines.
- You are deploying a single big monolith that rarely changes shape. Long-lived static config is Nginx's home turf.
Pick Caddy when
- You are tired of Certbot. This was my reason. After Caddy provisioned a wildcard cert via DNS-01 with three lines of config, I stopped looking back.
- You run a small fleet of sites on bare metal or VPS without containers. My seven aggregator sites all sit behind a single Caddy instance now and the entire Caddyfile is under 90 lines.
- You need to terminate HTTPS for a Laravel app, a static SPA, a couple of Node services, and maybe a websocket, and you want all of it in one readable config file.
- You want ops-friendly defaults. Modern cipher suites and sensible logging out of the box, with gzip and zstd compression one encode directive away (Brotli needs a plugin).
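The wildcard-via-DNS-01 setup mentioned above looks roughly like this, assuming a Caddy build that includes a DNS provider plugin (Cloudflare is used here purely as an illustration, and the token environment variable is yours to define):

```caddyfile
*.example.com {
    tls {
        # DNS-01 challenge; requires the caddy-dns/cloudflare plugin build
        dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy 127.0.0.1:8080
}
```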
Pick Traefik when
- You are deploying with Docker Compose, Docker Swarm, or Kubernetes and the topology changes weekly.
- You want a real-time dashboard showing routing tables, middleware chains, and circuit breakers without bolting on extra tooling.
- You need advanced features like canary releases, traffic mirroring, or weighted load balancing that would otherwise push you toward HAProxy or a service mesh.
- Your team is comfortable with declarative infra and you have memory to spare on the proxy host.
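As an illustration of the canary point, a weighted traffic split in Traefik's file provider is a few lines of dynamic config (a sketch; the service names and ports are hypothetical):

```yaml
# dynamic.yml (file provider): send ~10% of traffic to a canary build.
http:
  services:
    app-stable:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:3001"
    app-canary:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:3002"
    app-weighted:
      weighted:
        services:
          - name: app-stable
            weight: 9
          - name: app-canary
            weight: 1
```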
The Operational Surprises Nobody Warns You About
Three things bit me hard enough to be worth flagging.
Caddy and ACME rate limits. The first time I migrated a fleet of sites to Caddy, I rebuilt the config and restarted the process roughly forty times in an hour while debugging routing rules. Caddy dutifully tried to issue a fresh certificate every restart. Let's Encrypt rate-limits to fifty certs per registered domain per week. Guess who hit that limit and locked themselves out for six days. Set email globally, use the staging endpoint while iterating, and only flip to production when the config is stable.
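The fix is a couple of lines in the Caddyfile's global options block; a sketch, with an illustrative email address:

```caddyfile
{
    email ops@example.com
    # Point at the Let's Encrypt staging CA while iterating;
    # remove this line once the config is stable.
    acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
```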
Traefik and Docker socket security. By default, Traefik mounts the Docker socket so it can watch for label changes. This means Traefik effectively has root on your host. On any production box I now run a Docker socket proxy in front of it, exposing only the read-only endpoints Traefik actually needs. This is documented but not loud enough in the getting-started guides.
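A minimal sketch of that pattern in Compose, using one commonly deployed socket-proxy image (the image choice and its permission flags are an assumption, not the only option):

```yaml
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy:latest
    environment:
      CONTAINERS: 1          # expose only read-only container listing
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  traefik:
    image: traefik:v3.3
    command:
      # Talk to the proxy instead of mounting the raw socket.
      - "--providers.docker.endpoint=tcp://socket-proxy:2375"
      - "--providers.docker.exposedByDefault=false"
    depends_on:
      - socket-proxy
```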
Nginx and the silent reload trap. When you run nginx -s reload with a syntax error, Nginx logs the error and continues serving with the old config. That is great for uptime β and a disaster when you assume a deploy succeeded because the site is still responding. Always run nginx -t before reload. Always.
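The habit is easy to bake into a deploy step so a broken config can never be reloaded:

```shell
# Validate the config first; reload only if it parses cleanly.
if nginx -t; then
    nginx -s reload
else
    echo "nginx config test failed; old config still active" >&2
    exit 1
fi
```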
HTTP/3 and QUIC in 2026: Where Each One Stands
HTTP/3 is no longer experimental. All three proxies support it, but the experience is different.
- Caddy ships HTTP/3 enabled by default. You do not flip a switch. You point a browser at it and it negotiates QUIC.
- Nginx requires the quic parameter on the listen line (plus an http3 on; directive) and a UDP opening on port 443 in the firewall. Stable since 1.25 but still feels bolted on.
- Traefik supports HTTP/3 in v3 via an entrypoint flag. It works, but I have hit edge cases with intermittent connection upgrades that pushed me back to HTTP/2 in production for one client.
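For reference, the Nginx side is only a few lines once you have a QUIC-capable build (a sketch):

```nginx
server {
    listen 443 quic reuseport;   # HTTP/3 over UDP
    listen 443 ssl;              # TCP fallback for HTTP/1.1 and HTTP/2
    http2 on;
    http3 on;
    server_name example.com;

    # Advertise HTTP/3 to browsers over the TCP connection.
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```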
For a typical aggregator site or marketing page, HTTP/3 is a small win β single-digit percent improvement in time-to-first-byte from cold connections, more on lossy mobile networks. Not a deciding factor, but Caddy gives it to you for free.
What I Actually Run in Production
Here is the honest inventory across my own infra and a few client setups:
- The seven aggregator sites: all behind a single Caddy instance on a Hostinger KVM-2. Caddy was the right call because adding a new site is one block in the Caddyfile and the cert is automatic.
- SmartExam AI Generator (client SaaS, Laravel + Vue): Nginx on a Hetzner CX32. The traffic patterns are predictable, the topology is stable, and the team that took over operations was already fluent in Nginx.
- An internal dashboard for one client (six microservices, all in Docker Compose): Traefik. The whole stack lives in one compose file, services come and go, and Traefik picks up new ones automatically. This is the only place I run Traefik happily.
- Photography Studio Manager production (Laravel + PostgreSQL): Nginx, fronted by Cloudflare. The Cloudflare layer handles edge caching and DDoS, Nginx handles app routing.
Notice the pattern. Caddy when I want zero-friction HTTPS for a fleet of small sites. Nginx when the workload is predictable and performance per dollar matters. Traefik when the application lives in containers and I want labels, not config files.
The Decision Matrix I Actually Use
When a client asks me which one to pick, I run them through three questions.
- Are you deploying with Docker Compose or Kubernetes? If yes: Traefik. If no, move on.
- Is your team already fluent in Nginx, and is the config likely to stay stable for years? If yes: Nginx. The migration cost is rarely worth it.
- Anything else? Caddy. Especially for greenfield projects, internal tools, multi-site setups, and anything where you want your TLS configuration to handle itself.
That algorithm has not steered me wrong in three years. The one place I would tweak it: if you are running on a $3-per-month VPS where every megabyte of RAM is precious and you can tolerate the operational overhead, Nginx is still the right answer regardless of the answer to question one.
Frequently Asked Questions
Is Caddy really production-ready in 2026?
Yes. I have been running it in production since 2023 across multiple sites and have not had a single proxy-side outage caused by Caddy itself. The only incidents I have logged were ACME rate limits I caused and one config bug where I accidentally exposed an internal endpoint; both my fault, not Caddy's. The codebase is mature, the release cadence is sane, and the documentation is the best of the three.
Does Nginx still beat Caddy on performance?
On raw throughput, yes: by about 20 to 30 percent on the workloads I have measured. For 99 percent of small to mid-scale deployments this gap is theoretical. If you are saturating a $9 VPS at peak, you have other problems first.
Why is Traefik so much heavier than the others?
It maintains an in-memory representation of the entire routing graph and watches multiple providers (Docker, file, Kubernetes API) for changes. That state has a cost. On a memory-constrained VPS the difference matters; on a server with 8 GB of RAM it does not.
Can I run all three on the same box?
Technically yes, on different ports, but I would not. Pick one and commit. The mental overhead of keeping three different config languages in your head when you debug a routing issue at 2 AM is not worth it.
What about HAProxy and Nginx Proxy Manager?
HAProxy is excellent for L4 load balancing and high-volume HTTP, but its config language is the steepest of any of these and it lacks built-in ACME. Nginx Proxy Manager is a UI on top of Nginx β fine for a homelab, not what I would put in front of a paying customer.
Final Recommendation
If you take one thing from this article: stop defaulting to Nginx out of habit. The reverse proxy ecosystem in 2026 is more diverse than it was when most of us learned it, and the right tool for a specific deployment is rarely the same one we used five years ago.
For new projects on a single VPS with a handful of sites, Caddy is the answer I give nine times out of ten. For container-native deployments where the topology shifts week to week, Traefik earns its place despite the memory cost. For everything else, and especially for performance-critical static-asset workloads, Nginx is still the safe and fast choice it has always been.
Whichever one you pick, the most important thing is to actually use the same tool across your fleet. Three years of mixing all three taught me that operational consistency is worth more than any 20 percent throughput improvement on a synthetic benchmark. Pick one, learn it deeply, and move on to the actual problem you were trying to solve.