Uptime Kuma vs Gatus vs Statping-ng: Self-Hosted Monitoring (2026)
I have been running self-hosted uptime monitoring on a $4.50/month Hetzner CX22 VPS for the last 18 months: initially to keep tabs on seven aggregator sites I operate (CloudHostReview included), and later because BetterStack's pricing for 30 monitors crept past $24/month. After cycling through Uptime Kuma, Gatus, and Statping-ng on the same box, I have a strong opinion on which one fits which workflow. This post is that opinion, with the numbers behind it.
If you only have ten seconds: Uptime Kuma wins for solo operators who want a beautiful UI with zero YAML, Gatus wins for teams that already version-control everything in Git, and Statping-ng is mostly coasting on legacy goodwill; I would not recommend it for a fresh deployment in 2026.
Why bother self-hosting in 2026?
The hosted SaaS landscape (BetterStack, UptimeRobot Pro, Pingdom, Checkly) has moved up-market. Free tiers still exist, but they have been gradually clipped: UptimeRobot dropped the 1-minute interval from its free plan in 2024, and BetterStack's "Hobby" plan caps you at 10 monitors. For my situation (7 aggregator sites with around 4 endpoints each: homepage, sitemap, RSS, admin, plus a handful of client status checks), I was paying $24/month for what amounts to "ping a URL every minute and tell me if it breaks."
Across the 50+ projects we have shipped at wardigi.com, the pattern repeats: clients ask for a public status page they can point partners to. Self-hosted gets you that for the cost of a small VPS. Three reasons it matters in 2026:
- Cost compression. A Hetzner CX22 (2 vCPU, 4 GB RAM) runs €3.79/month. That single box hosts Uptime Kuma plus a Caddy reverse proxy plus PostgreSQL for unrelated workloads, with headroom to spare.
- Privacy of endpoint URLs. If your endpoints include staging environments, internal admin URLs, or webhook receivers, sending those to a third-party SaaS is a real disclosure surface. Self-hosted keeps the list inside your perimeter.
- Status page branding. The public status page is often the one piece of your infrastructure customers see during an incident. Owning the layout, custom domain, and copy without paying $50/month for a "branded" tier is meaningful.
That said, self-hosting a monitor that monitors itself is a known footgun. I will cover the dual-region trick later; short version: run a second tiny monitor on a different provider so you find out when your primary monitor box dies.
Quick comparison table
| Feature | Uptime Kuma | Gatus | Statping-ng |
|---|---|---|---|
| Language | Node.js (TypeScript) | Go | Go |
| GitHub stars (Apr 2026) | ~85,600 | ~10,700 | ~3,400 |
| Docker image size | ~280 MB | ~35 MB | ~20 MB |
| Idle RAM (my measurement) | 240-320 MB | 28-35 MB | 55-70 MB |
| Configuration | Web UI | YAML file | Web UI |
| Database | SQLite, MariaDB | SQLite, PostgreSQL, in-memory | SQLite, MySQL, Postgres |
| Monitor types | 20+ (HTTP, TCP, DNS, ping, push, Steam, Docker) | HTTP, TCP, DNS, ICMP, SSH, WebSocket, STARTTLS | HTTP, TCP, ICMP, gRPC |
| Public status page | Built-in, multiple pages | Built-in, single endpoint | Built-in, polished |
| Notification channels | 90+ (Telegram, Slack, Gotify, email, webhook) | ~20 (Slack, Discord, email, webhook, ntfy) | ~15 (Slack, Discord, email) |
| Last release (as of writing) | v1.24 series, active | v5.x, active (Feb 2026) | v0.96, last commit June 2025 |
| License | MIT | Apache-2.0 | GPL-3.0 |
The RAM numbers are from docker stats on my Hetzner box, all three running concurrently with 30 HTTP monitors each on a 60-second interval. Your numbers will vary, but the relative ordering is what matters: Gatus is roughly an order of magnitude lighter than Uptime Kuma at idle.
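For reference, the measurement was nothing fancier than a one-shot docker stats snapshot (the container names below are my own; the command is a harmless no-op on a machine without Docker or these containers):

```shell
# One-shot memory snapshot for the three monitoring containers.
# .Name and .MemUsage are standard docker stats format fields.
FORMAT="{{.Name}}\t{{.MemUsage}}"
docker stats --no-stream --format "$FORMAT" \
  uptime-kuma gatus statping-ng 2>/dev/null || true
```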
Uptime Kuma β the popular choice for a reason
Uptime Kuma is the project that turned self-hosted monitoring from "Nagios-flavored YAML hell" into "click three buttons and you are monitoring." It is built on Node.js with a Vue.js frontend, ships as a single Docker image, and the default SQLite backend is fine for hundreds of monitors. The interface is genuinely nice β clean dark-mode dashboard, real-time ping graphs, color-coded status, and a notification matrix that lets you wire each monitor to a different channel without copy-pasting config.
What I actually use it for
On my CloudHostReview ops box, Uptime Kuma watches:
- Homepage 200 status with 30s interval and a keyword check for the latest article slug (catches CDN cache poisoning and PHP-FPM crashes).
- Sitemap.xml: keyword check for <urlset> (catches the "empty sitemap" bug Hostinger's shared PHP throws when MySQL connections are exhausted).
- Push monitor for daily cron jobs: the cron pings a unique URL on success, and Uptime Kuma alerts when the ping does not arrive within the expected window. This caught a broken cron three times last quarter.
- TLS certificate expiry, with 30-day and 14-day warning thresholds.
- HTTP(s) JSON Query for the WordPress REST API on one client site.
The push-monitor pattern is what locked me into Uptime Kuma initially. Gatus added equivalent "external endpoint" support in v5.6 (Feb 2026), but Uptime Kuma had it polished years earlier.
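As a concrete sketch of the push-monitor pattern (the push token below is made up; Uptime Kuma generates a real one per monitor, shown as the "Push URL" in the monitor's settings):

```shell
# Hypothetical push URL -- copy the real one from the monitor's
# "Push URL" field in the Uptime Kuma UI.
PUSH_URL="https://status.example.com/api/push/Hk3r9sLmQa"

nightly_backup() { true; }   # stand-in for the real cron job

# Only ping on success. If the job fails, the ping never arrives and
# Uptime Kuma alerts after the grace period. Network errors are
# swallowed here so the sketch runs cleanly offline.
if nightly_backup; then
  curl -fsS -m 10 --retry 3 "${PUSH_URL}?status=up&msg=OK" >/dev/null || true
fi
```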
Where it hurts
The Node.js process is not free. On my box, baseline RAM hovers around 240 MB with 30 monitors. Push it to 200 monitors and you are looking at 600-800 MB resident. There is a known issue (louislam/uptime-kuma#5654) where virtual memory balloons even though resident memory stays bounded, which is annoying on a memory-constrained VPS like the $5 DigitalOcean droplet.
The other gripe: configuration is web-only. There is no kuma.yaml you can commit to Git. If your dev box dies and you forgot to back up /app/data/kuma.db, you are rebuilding 30 monitor configs by hand. There is a JSON export, but it is manual; there is no API-driven sync. For a one-person operation that is fine. For a team with three devops engineers and an "infrastructure-as-code or it didn't happen" culture, this is a hard no.
Quick install
docker run -d --restart=always \
-p 3001:3001 \
-v uptime-kuma:/app/data \
--name uptime-kuma \
louislam/uptime-kuma:1
Then put a Caddy reverse proxy in front and you are live in three minutes. Caddy will handle TLS via Let's Encrypt automatically β I will share my Caddyfile later in the production setup section.
Gatus β config-as-code for people who like it that way
Gatus comes from the opposite philosophy: every monitor lives in a YAML file, alerts are declarative, and the whole thing is a single Go binary that boots in under 200 ms. The web UI is read-only β you cannot add or edit a monitor through it. If you change a monitor, you edit YAML and redeploy.
This sounds annoying until you have managed monitors across multiple environments, then it becomes obviously correct. I keep a gatus.yaml in the same repo as my Terraform for one client, and a CI job re-pushes the config to their Gatus instance whenever the file changes. New service goes live, monitor goes live in the same PR. No "oh, we forgot to add a check" post-mortems.
Configuration shape
storage:
  type: postgres
  path: "postgres://gatus:secret@db/gatus?sslmode=disable"

alerting:
  slack:
    webhook-url: "https://hooks.slack.com/services/..."
  ntfy:
    topic: "gatus-alerts"
    url: "https://ntfy.sh"

endpoints:
  - name: cloudhostreview-home
    url: "https://cloudhostreview.com/"
    interval: 60s
    conditions:
      - "[STATUS] == 200"
      - "[RESPONSE_TIME] < 1500"
      - "[BODY] == pat(*CloudHostReview*)"
    alerts:
      - type: slack
        failure-threshold: 3
        send-on-resolved: true

  - name: cloudhostreview-tls
    url: "https://cloudhostreview.com/"
    interval: 1h
    conditions:
      - "[CERTIFICATE_EXPIRATION] > 240h"
Two things stand out the moment you actually use it: (1) the condition DSL is genuinely powerful ([CERTIFICATE_EXPIRATION], JSON-path checks on [BODY], [IP], all directly in the YAML), and (2) failure thresholds are first-class. "Alert me only after 3 consecutive failures, send on resolve" is one line. Uptime Kuma has the same feature, but you click through four settings panels to get there.
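For example, a JSON-query check in Gatus is just another condition. The endpoint and field names here are illustrative, not from my config:

```yaml
endpoints:
  - name: api-health
    url: "https://api.example.com/health"
    interval: 60s
    conditions:
      - "[STATUS] == 200"
      - "[BODY].status == UP"       # JSON-path into the response body
      - "len([BODY].checks) > 0"    # built-in len() over JSON arrays
```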
What Gatus does not do
The trade-offs are real. There is no incident management: you cannot post a "we are investigating, ETA 30 minutes" note on the status page like you can with Statping-ng. There is no maintenance-window mode that shows on the public page (you can suppress alerts via cron-style maintenance config, but the public page does not say "scheduled maintenance"). The notification list is shorter than Uptime Kuma's 90+: Gatus covers the major ones (Slack, Discord, email, ntfy, Telegram, webhook, PagerDuty, Opsgenie), but for a long-tail channel like Steam Game Server you may be writing a webhook bridge.
RAM is the win. On the same box, Gatus sits at around 30 MB resident with 30 monitors and barely moves the CPU graph. That matters when you are running a $5 VPS and the monitor is sharing space with three other services.
Statping-ng β the polished status page that stopped polishing
Statping-ng forked the original Statping project in 2020 after upstream development stopped, and it kept things alive for a while. The selling point has always been the public status page: out-of-the-box, it is the prettiest of the three. Big colored bars, a service catalog, group-by-category, and a polished "incidents" tab with manual incident creation β exactly what you want when a customer screenshots your status page during an outage.
I deployed it for two months early last year. The public-facing UI was a clear win. The admin UI felt dated next to Uptime Kuma. And then I checked the GitHub repo: last meaningful release was June 2025, and the issue tracker is full of unresolved bug reports going back to 2023. As of April 2026, that has not changed.
Why I would not pick it for a fresh deployment in 2026
- Maintenance has stalled. The "ng" in Statping-ng was the community keeping the lights on after the original Statping went dormant. The lights are flickering. Open issues have grown faster than they get closed for at least 18 months.
- Security responsiveness is slow. A Go binary is fine until it depends on github.com/mattn/go-sqlite3 versions that have CVEs and nobody has rebuilt in a while. I would not run an unmaintained Go web app on the public internet without strong faith in the maintainer's response time, and that faith is not currently warranted.
- Feature parity has slipped. Push monitors, JSON-query monitors, and modern notification channels (ntfy, Pushover v2, native PagerDuty Events API v2) are missing or half-implemented.
If you already have Statping-ng running and it works for your team, there is no urgent reason to rip it out: it is not broken, it is just not progressing. But for a new deployment in 2026, the choice is between Uptime Kuma and Gatus. Statping-ng is a third option that nobody actually picks once they compare honestly.
Decision matrix β which one for your situation?
I find these matrices useful only when they are concrete, so here is the actual logic I would apply:
| Your situation | Pick | Reason |
|---|---|---|
| Solo operator, 5β50 services, want it running today | Uptime Kuma | Easiest setup, best UX, most monitor types, push monitors built-in |
| Devops team with GitOps culture, infra in Terraform | Gatus | YAML-as-code matches the rest of your stack, Git diff = audit trail |
| Memory-constrained VPS ($3-$5/month tier) | Gatus | 30 MB vs 240 MB makes a real difference on a 1GB box |
| Public status page is the primary deliverable to customers | Uptime Kuma | Statping-ng UI is prettier but the project is stagnating; Kuma's status page is good enough and actively improving |
| 500+ monitors | Gatus + Postgres backend | Uptime Kuma SQLite struggles past ~500 monitors; Gatus + PostgreSQL scales further |
| Need integration with PagerDuty / Opsgenie / Atlassian Statuspage | Uptime Kuma | Native integrations exist; Gatus has webhook + PagerDuty native |
| Push-based "did this cron run?" monitoring is critical | Uptime Kuma | Most polished push-monitor implementation |
| You already run Statping-ng and it works | Stay (for now) | Plan a migration in 6-12 months if upstream remains quiet |
Production setup tips (from running it for 18 months)
The default Docker run command gets you running in 60 seconds. Here is what I added on top after learning the hard way.
1. Caddy in front of everything
Caddy 2 handles TLS for free, and the config is three lines. My block for Uptime Kuma:
status.example.com {
    reverse_proxy localhost:3001
    encode gzip zstd
    log {
        output file /var/log/caddy/status.log
        format json
    }
}
That is the entire config. Let's Encrypt cert is provisioned on first request, auto-renewed, and you do not touch it for the next two years. I covered the Caddy-vs-Nginx-vs-Traefik trade-off in a previous article; for a single-service status page, Caddy is overwhelmingly the right pick.
2. The "monitor the monitor" problem
If your monitor lives on the same VPS as your apps, a server-wide outage takes down both, and you find out from a customer email at 2 AM. The fix is cheap: a second monitoring instance on a different provider. I run a tiny Gatus container on a Fly.io free tier (or use Hetzner if Fly cuts you off) that does one job: ping the Uptime Kuma URL on my Hetzner box every 60 seconds. If it goes silent, ntfy pings my phone.
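The watchdog config is tiny. Mine boils down to something like this (the hostname and ntfy topic are placeholders):

```yaml
endpoints:
  - name: primary-monitor
    url: "https://status.example.com/"   # the Uptime Kuma box on Hetzner
    interval: 60s
    conditions:
      - "[STATUS] == 200"
    alerts:
      - type: ntfy
        failure-threshold: 3
        send-on-resolved: true

alerting:
  ntfy:
    url: "https://ntfy.sh"
    topic: "my-watchdog-topic"
```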
Total monthly cost of the redundancy layer: $0. Total prevented outages I would have missed: 2 (in 18 months).
3. Backup the database
SQLite is fine until your VPS dies. I run a nightly cron that runs sqlite3 /app/data/kuma.db ".backup /tmp/kuma-$(date +%F).db" and rsyncs the result to a Hetzner Storage Box (€3.81/month for 1 TB). Restore is a file copy. Do this on day one; the worst time to discover you have no backups is when you need them.
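Sketched as a script (paths match my layout, the rsync target is a placeholder, and it assumes the sqlite3 CLI is installed on the host):

```shell
#!/bin/sh
# Nightly Uptime Kuma backup sketch.
DB=/app/data/kuma.db
STAMP=$(date +%F)                # e.g. 2026-04-18
OUT="/tmp/kuma-$STAMP.db"

# .backup takes a consistent snapshot even while Kuma is writing;
# guard so the sketch is a no-op on machines without the DB.
if command -v sqlite3 >/dev/null 2>&1 && [ -f "$DB" ]; then
  sqlite3 "$DB" ".backup '$OUT'"
fi

# Ship it off-box (Storage Box user/host are placeholders):
# rsync -a "$OUT" u123456@u123456.your-storagebox.de:backups/
```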
4. Notification channels β pick at least two
Single-channel alerting fails the day Slack has an outage during a cascading regional incident (this happens more often than you think). I wire every Sev1-equivalent monitor to both Slack and ntfy. ntfy push notifications are free, self-hostable, and work from a single curl call. Belt and suspenders.
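The ntfy side really is one curl call. The topic name is whatever you pick; on the public ntfy.sh server it doubles as a shared secret, so make it unguessable:

```shell
TOPIC="chr-alerts-k3x9v2"   # placeholder topic name

# -m 5 bounds the call; network errors are swallowed so a dead
# network does not wedge the calling script.
curl -s -m 5 \
  -H "Title: Monitor down" \
  -H "Priority: urgent" \
  -d "Uptime Kuma on the Hetzner box stopped responding" \
  "https://ntfy.sh/$TOPIC" >/dev/null || true
```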
5. Tune the failure threshold
The default "alert on first failure" is wrong for HTTP monitors. Transient network blips happen. I set every HTTP monitor to "3 consecutive failures" before alerting, with the interval at 30 seconds. That is 90 seconds of confirmed downtime before my phone buzzes, which filters 95% of the noise without meaningfully delaying real-incident notification.
What I actually run today
For full transparency: my main box runs Uptime Kuma, and I am genuinely happy with that choice. I tested Gatus seriously and the YAML config-as-code is appealing, but for a one-person ops setup the click-to-add ergonomics of Uptime Kuma win. I do not need a Git PR review for "add a new keyword check on a blog post"; I need to add it in 30 seconds and move on.
If I were running this for a team with shared on-call rotation and a real Terraform setup, I would switch to Gatus tomorrow. The ability to grep git log for "when did we add monitoring on the new payment service" is genuinely useful in a team context, and the memory savings stack up if you are running monitoring in many regions.
Statping-ng I would only consider if I needed its specific status-page styling and I was willing to fork it myself. Otherwise, the lack of upstream momentum is a deal-breaker.
FAQ
Can I migrate from Uptime Kuma to Gatus?
Not automatically. Uptime Kuma exports a JSON dump, but Gatus expects YAML, so you would write a small script to convert. For 30 monitors that is an afternoon. For 300, plan a day. The conditions translate cleanly: HTTP status code, response time, and body keyword checks all map directly.
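A minimal sketch of that conversion script, using jq. The monitorList field name matches the v1 JSON export as I understand it; verify against your own export before trusting the mapping:

```shell
# Stand-in for a real Uptime Kuma export (Settings -> Backup -> Export).
cat > kuma-backup.json <<'EOF'
{"monitorList":[
  {"type":"http","name":"home","url":"https://example.com/","interval":60},
  {"type":"ping","name":"gw","hostname":"192.0.2.1","interval":60}
]}
EOF

# Emit Gatus-style endpoint YAML for the HTTP monitors only; other
# monitor types need their own mapping.
jq -r '
  .monitorList[]
  | select(.type == "http")
  | "- name: \(.name)\n  url: \"\(.url)\"\n  interval: \(.interval)s\n  conditions:\n    - \"[STATUS] == 200\""
' kuma-backup.json
```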
What about Healthchecks.io self-hosted?
Healthchecks (the OSS one, healthchecks/healthchecks) is a fourth option I did not include in the main comparison because it is purpose-built for cron-job / push monitoring, not URL pinging. If your problem is "did my backup script run last night," Healthchecks is the right tool, not Uptime Kuma. If your problem is both URL pinging and cron monitoring, Uptime Kuma covers both reasonably; otherwise run Healthchecks alongside.
Can these tools alert via SMS?
All three support Twilio webhooks, which means SMS at Twilio's per-message cost. Uptime Kuma also has direct integrations with various SMS gateways including international carriers. None of them have a "free SMS" tier β that is a SaaS feature, not an open-source one. For free push, ntfy is the best answer in 2026.
What about Prometheus + Alertmanager + Grafana?
Different category. Prometheus is for metrics-driven alerting at infrastructure scale (latency percentiles, request rates, JVM heap pressure). Uptime Kuma and Gatus are for binary up/down monitoring of endpoints. They complement each other; I run both on my main box. Use Prometheus for "P95 latency on the API has crossed 800ms." Use Uptime Kuma or Gatus for "the marketing site returns 200 with the homepage keyword."
Will any of these handle 1,000+ monitors?
Gatus with the PostgreSQL backend, comfortably. Uptime Kuma with MariaDB, yes, but the UI gets sluggish past ~500. Statping-ng: I would not push it. At 1,000+ monitors you are also in territory where you should consider whether the right architecture is actually a Prometheus blackbox-exporter setup with proper SRE tooling, but that is another article.
Is there a hosted version of Uptime Kuma?
Yes β Elestio offers a managed Uptime Kuma starting around $10/month. That defeats the cost-savings argument for small deployments but makes sense if you do not want to run a VPS. The other angle is "self-hosted as a service" providers like PikaPods, which run open-source apps on shared infrastructure for around $1.40/month for Uptime Kuma specifically. Worth knowing both exist.
Bottom line
Self-hosted uptime monitoring is one of the easier wins in the homelab/small-ops playbook in 2026. The tooling has matured to the point where Uptime Kuma "just works" for solo operators, Gatus is the obvious choice for GitOps teams, and there is no compelling reason to deploy Statping-ng on a fresh box anymore. Pick based on whether you prefer clicking buttons or editing YAML β both will catch your outages, and both are free forever.
The decision I would not stress about: which one to start with. Both Uptime Kuma and Gatus support importing/exporting their config, and migrating between them is a small project, not a rebuild. Pick one, get it running this weekend, and you will spend more time tuning thresholds than regretting the choice.