Beszel vs Netdata vs Glances 2026: Lightweight VPS Monitoring Compared

By Fanny Engriana · 12 min read

Last December I shipped a small change to the way our 7 aggregator sites talk to MySQL, and within an hour two of them started returning 502s. The problem wasn't the code; it was that I had no real-time visibility into per-process memory on the Hostinger VPS we use as a reporting box. By the time I SSH'd in, the OOM killer had already done its work. That night I rebuilt the monitoring stack from scratch, and the question I had to answer was the same one most VPS operators are asking in 2026: do you go heavy and feature-rich (Netdata), terminal-first and surgical (Glances), or lightweight and multi-host friendly (Beszel)?

This comparison is from someone who actually runs these tools in production, not from someone who installed them in a sandbox for an afternoon. I'll cover footprint numbers I've measured, the specific failure modes each tool hides or exposes, and a clear decision matrix at the end. If you're picking a monitoring stack for a single $5 VPS, a homelab, or a fleet of small instances powering things like the SmartExam AI Generator we built at Warung Digital Teknologi, this guide is for you.

Why lightweight monitoring matters more in 2026

The cheap-VPS market in 2026 is brutal. You can rent an Ampere ARM instance with 4 GB of RAM for the price of two coffees, but the moment your monitoring agent eats 400 MB of that RAM you've lost 10% of your useful capacity to watching the machine instead of using it. I've seen teams spend more on monitoring overhead than on the workloads themselves, usually because they reached for Prometheus + Grafana + Node Exporter + cAdvisor without asking whether their use case actually needs a time-series database with PromQL.

The three tools in this comparison occupy very different spots on the weight-vs-capability axis:

  • Beszel – a 2024-born, hub-and-agent web monitor written in Go. Agent footprint around 23 MB of RAM at idle. Designed for many small servers reporting to one dashboard.
  • Netdata – the heavyweight observability platform with sub-second resolution, AI anomaly detection, and 800+ auto-discovered collectors. Agent footprint typically 200–500 MB.
  • Glances – a Python-based, terminal-first system monitor that also exposes a web UI and a REST API. Footprint around 50–100 MB.

None of them is "best" in a vacuum. The right pick depends on three questions: how many servers, what fidelity of metrics, and how much RAM you can spare per agent.

Beszel deep-dive: the 23 MB agent

Beszel runs on a hub-and-agent model. The hub, a single Docker container, provides the dashboard and stores historical data. Agents run on each server you want to monitor and push metrics to the hub over an encrypted SSH connection. There's no inbound port on the monitored server, which is a quiet but important security property: I never have to argue with our network team about firewall exceptions when adding a new node.

I switched the CloudHostReview reporting box and three other lightly-loaded VPS instances to Beszel two months ago. Here's what I measured on a 1 GB Hostinger VPS running PHP-FPM + nginx + MySQL:

  • Agent RSS at idle: 23 MB
  • Agent CPU at 1-second poll: less than 0.5%
  • Hub Docker container (8 servers reporting): around 90 MB RAM
  • Disk usage for 30 days of history (8 servers): under 200 MB

What Beszel tracks well: CPU, memory, swap, disk usage and IO, network throughput, temperature (where available), Docker container metrics (CPU, memory, network per container), load average, and uptime. You get historical graphs at 1-minute granularity by default, which is enough fidelity to catch a slow memory leak or a runaway cron job, but not enough to debug a millisecond-level latency spike.

What Beszel does not track: per-process memory (no htop-style process list), application-level metrics (no PostgreSQL pg_stat scraping, no nginx vhost metrics), and no log aggregation. If your bug is "which PHP-FPM worker is leaking?" Beszel will not answer it. If your bug is "is server #4 trending toward 95% memory?" Beszel will answer it cleanly.

Setup difficulty

I had the hub running and three servers reporting in about 20 minutes. The hub is a short Docker Compose file; each agent is a single Go binary you drop into /opt/beszel-agent plus a systemd unit file that's literally 12 lines long (both are sketched below). Adding a new server takes me under 90 seconds now: paste the public key from the hub, run the install script, done. For comparison, getting Prometheus + Grafana + Node Exporter to the same level of usability took me a full afternoon when I tried it in 2023.
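
For the record, here's roughly what both pieces look like. These are minimal sketches rather than canonical installers: the image name, data path, default port 8090, and the KEY environment variable match the setup I'm running, but verify them against the Beszel docs for your version.

    # docker-compose.yml for the hub (sketch)
    services:
      beszel:
        image: henrygd/beszel
        restart: unless-stopped
        ports:
          - "8090:8090"
        volumes:
          - ./beszel_data:/beszel_data

And a matching agent unit:

    # /etc/systemd/system/beszel-agent.service (sketch; the key string is a
    # placeholder; paste the real public key from the hub's UI)
    [Unit]
    Description=Beszel agent
    After=network-online.target

    [Service]
    Environment="KEY=ssh-ed25519 AAAA...from-the-hub"
    ExecStart=/opt/beszel-agent/beszel-agent
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target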

One thing that surprised me in a good way: Beszel handles fleet expansion gracefully. The hub doesn't care if you have 3 agents or 30; the dashboard just shows them all in a sortable grid with status badges. I've heard from another small ops team that they're pointing 47 agents at a single 512 MB hub container without issue.

Where Beszel falls short

The alerting is functional but basic. You get thresholds on CPU, memory, disk, bandwidth, temperature, load average, and status, with notifications via webhook, email, Pushover, ntfy, or Gotify. There's no dependency tracking ("if the database is down, suppress web-server alerts"), no anomaly detection, no SLO/SLI primitives. If you want PagerDuty-grade routing, Beszel will not get you there. For my use case, a small ops team where everyone gets the same WhatsApp alert anyway, it's fine.

Netdata deep-dive: the everything-monitor

Netdata is the opposite philosophy: collect everything, store everything, visualize everything, and apply machine learning to the result. Out of the box, Netdata auto-discovers around 800 collectors covering applications, databases, web servers, message queues, container runtimes, and protocols. Per-second metric resolution. Built-in anomaly detection. A web dashboard that loads instantly because the metrics are stored locally on each agent in a custom time-series database.

I ran Netdata on the SmartExam AI Generator's primary inference VPS for three months. The good: when a student session started thrashing the disk during peak load, Netdata had already flagged the IO subsystem as anomalous before our users complained. The drill-down from "disk is unhappy" to "this specific cgroup is saturating it" took me under 30 seconds. That kind of debugging speed is the entire reason to pay the resource cost.

The bad: the resource cost is real. On that same instance, Netdata's main process consistently used 340–410 MB RSS, and the disk database grew about 1.2 GB per day at the default retention. On a 4 GB VPS that's 10% of memory and a meaningful chunk of disk. On a 1 GB VPS that's a non-starter.

What Netdata is genuinely great at

  • Per-second resolution. You can see a 2-second CPU spike that minute-resolution tools would smooth out completely.
  • Application auto-discovery. Install Netdata on a box running PostgreSQL, MySQL, Redis, nginx, and Docker, and within 30 seconds you have dashboards for all of them. No config files to write.
  • The dashboard UX. Hovering, zooming, and correlating across charts is the smoothest of the three by a wide margin. When you're debugging at 2 AM, this matters.
  • Anomaly detection. The ML-based "this is unusual" flag has caught at least two issues for me that I would have missed with static thresholds.

Where Netdata gets in your way

The cloud product is now the default sign-up path on netdata.com, which annoys some self-hosters. The fully self-hosted experience still works (the agent is open source, and you can run a Netdata "parent" node to aggregate children), but the polish has clearly moved toward Netdata Cloud. If you're philosophically committed to keeping monitoring data on your own boxes, this is a tradeoff.

The other gotcha: Netdata's per-second resolution means that on a busy server it generates a lot of writes. On an NVMe VPS this is invisible. On the cheap shared-NVMe instances some providers sell, I've seen Netdata's writes show up as IO contention in the very dashboard it's drawing.

Glances deep-dive: the terminal weapon

Glances is what I reach for when I'm SSH'd into a single server and I need to know right now what's going on. It's a Python TUI (terminal UI) that gives you htop-style per-process visibility plus everything else: CPU per core, memory, swap, disk IO, network per interface, sensors, Docker stats, and a half-dozen plugins for things like RAID arrays and IPMI. One screen, no clicks.

I've been running Glances since 2019 across more than 50 client projects and I still default to it for ad-hoc debugging. The keystroke c sorts the process list by CPU, m by memory, i by IO rate. When a customer pings me saying "the app is slow", I'm on their server with Glances running within 60 seconds and I usually have an answer in another 60.

Memory footprint: 50–100 MB depending on which plugins you enable. Higher than Beszel, lower than Netdata. The --export flag can push metrics to InfluxDB, Prometheus, Graphite, Elasticsearch, and a dozen other backends, which means you can use Glances as a collector even if you're not using its UI.
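
In practice that looks something like this (a sketch: the exporter name has to match one supported by your Glances build, and the backend's connection details live in glances.conf):

    # run headless and ship metrics to InfluxDB every 10 seconds
    glances --export influxdb2 --quiet --time 10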

Where Glances wins

  • Per-process visibility. This is the one Beszel doesn't have and the reason I keep both installed on critical boxes. When a process is misbehaving, you need to see it by name.
  • Zero infrastructure. No hub, no database, no Docker, no web server. Just pip install glances and you're running.
  • Web UI fallback. glances -w gives you a basic web dashboard on port 61208. Not pretty, but it works.
  • REST API. Every metric Glances knows is exposed as JSON at /api/4/all. I've used this to feed quick-and-dirty Slack bots that ping engineers with summary stats (a sketch follows this list).
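
The Slack-bot trick above is just a JSON fetch. A minimal sketch against a local glances -w instance, assuming jq is installed (field names can shift between Glances versions, so inspect the payload on your own box first):

    # pull two headline numbers out of the full stats payload
    curl -s http://localhost:61208/api/4/all \
      | jq '{cpu_pct: .cpu.total, mem_pct: .mem.percent}'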

Where Glances loses

It's not built for fleets. You can run glances --server on each box and a central Glances client can connect to them, but the UX is rough: you're switching between hosts one at a time, and there's no shared dashboard or alerting layer. If you have more than 3 servers, Glances alone will not scale.
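
For reference, the server/client mechanics look like this (the hostname is hypothetical, and both sides use Glances' default ports):

    # on each monitored box: expose stats over the Glances protocol
    glances -s

    # from your workstation: attach to one host at a time
    glances -c vps-4.example.internal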

Historical data is also weak. Glances will show you a recent rolling window in the TUI, but for anything beyond the last few minutes you need to export to an external time-series database. That defeats most of the "lightweight" pitch.

Resource footprint side-by-side

Numbers from my own VPS fleet (1–2 GB Hostinger VPS, idle to lightly loaded). I measured each tool fresh after a 30-minute warmup, then again at the 7-day mark to catch any memory creep.

Tool     Agent RAM (idle)   Agent RAM (7 days)   CPU at 1 s poll   Disk per server/day
Beszel   ~23 MB             ~25 MB               <0.5%             ~7 MB
Glances  ~55 MB             ~60 MB               ~1%               0 (in-memory)
Netdata  ~340 MB            ~410 MB              1–3%              ~1.2 GB

The disk numbers for Netdata may surprise you. That's the cost of per-second resolution times 800+ collectors. You can dial it down by trimming the retention or disabling collectors you don't need, but if you do that you've started pulling Netdata toward the middle of the chart and asking why you didn't just install Beszel.
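
For what it's worth, the dials for that live in netdata.conf. This is a sketch only; the exact key names have moved between Netdata releases, so treat it as a pointer to the right section rather than values to paste:

    # /etc/netdata/netdata.conf: shrink on-disk retention
    # (key names vary by Netdata version; check the docs for yours)
    [db]
        mode = dbengine
        storage tiers = 1
        dbengine tier 0 retention size = 512MiB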

Feature comparison

Feature                     Beszel              Netdata                          Glances
Multi-server dashboard      Yes (native hub)    Yes (parent node or Cloud)       Limited (server/client mode)
Per-process visibility      No                  Yes                              Yes (best of three)
Docker container metrics    Yes                 Yes                              Yes
Application auto-discovery  No                  Yes (800+ collectors)            Limited (plugins)
Historical data             Yes (1-min, weeks)  Yes (1-sec, days to months)      No (export needed)
Anomaly detection           No                  Yes (ML)                         No
Alerts (built-in)           Threshold-based     Threshold + anomaly              Threshold-only (in TUI)
Web UI quality              Clean, modern       Best-in-class                    Basic
Setup time (single host)    ~5 min              ~3 min                           ~1 min
Setup time (10 hosts)       ~20 min             ~30 min + parent config          Painful
Auth / multi-user           Yes (OAuth)         Yes (Cloud) / basic (self-host)  Basic HTTP auth
License                     MIT                 GPL-3                            LGPL-3

Picking the right tool for your situation

Here is the decision matrix I'd hand to a colleague asking "which one?":

You have 1 server

Install Glances for ad-hoc debugging. If you also want a persistent dashboard with historical graphs, add Beszel β€” the agent is small enough that running both is fine. Skip Netdata unless your server is genuinely large (8+ GB RAM, 4+ cores) and you'll use the application-level dashboards.

You have 2–10 servers, mostly small VPS instances

Beszel is the answer. The hub-and-agent model was designed for exactly this case, the resource cost across the fleet is negligible, and the web dashboard gives you a single pane of glass without paying for a SaaS. Add Glances on each box for terminal debugging.

You have 10+ servers, mixed sizes, and you debug applications regularly

Netdata, with a self-hosted parent node aggregating children, or Netdata Cloud if you're okay with that tradeoff. The application auto-discovery and anomaly detection start paying back the resource cost when you're hunting bugs across a fleet. If memory is tight on the small members of the fleet, run Netdata in "child" mode with metrics shipped to the parent and minimal local retention; that drops the per-host footprint dramatically.
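
The parent/child wiring is a couple of stanzas in stream.conf on each side. A sketch, assuming Netdata's default port, a hypothetical hostname, and an API key that's just a UUID you generate yourself:

    # child's /etc/netdata/stream.conf: ship metrics to the parent
    [stream]
        enabled = yes
        destination = netdata-parent.internal:19999
        api key = 11111111-2222-3333-4444-555555555555

    # parent's /etc/netdata/stream.conf: accept children using that key
    [11111111-2222-3333-4444-555555555555]
        enabled = yes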

You're a one-person homelab with a Pi and 2 NUCs

Beszel. Specifically because the agent runs comfortably on a Pi Zero 2 W, where Netdata would consume most of the available RAM on its own. I have a friend running Beszel on 4 Raspberry Pis and a Synology NAS; the dashboard sits at under 100 MB total RAM across the entire stack.

You need application-level metrics (PostgreSQL, Redis, nginx vhosts)

Netdata, full stop. Beszel doesn't do this and Glances has limited plugin coverage. The auto-discovery alone justifies the resource cost when you have a dozen different services to keep an eye on.

What I run on the CloudHostReview stack

Since this is a CloudHostReview piece, here's the real answer for our own infrastructure. Across the 7 aggregator sites and the reporting box that handles the daily import jobs, I run:

  • Beszel hub on the reporting VPS, with agents on all 7 site VPSes plus the reporting box itself. Total monitoring overhead across the fleet: roughly 200 MB of RAM and a few hundred MB of disk per month.
  • Glances installed on every box for ad-hoc "SSH in, run glances" debugging.
  • No Netdata: the application stack is mostly PHP + MySQL + nginx, all of which I understand well without per-second metrics. If we ever spin up something more complex (we're scoping a streaming aggregator for May 2026 that may need it), I'll add Netdata to that single box rather than the whole fleet.

This combo costs me less than 1% of the total RAM across the fleet, gives me a single dashboard to scan in the morning, and a fast terminal tool when something needs hands-on attention. I haven't had a repeat of last December's silent OOM event since.

FAQ

Can I run Beszel and Netdata side by side?

Yes. They use different ports and don't conflict. I'd only do this on boxes with at least 2 GB of RAM, and only if you have a clear reason to want both views. On a 1 GB VPS, pick one.

Does Beszel work without Docker?

The agent is a single Go binary that doesn't need Docker. The hub officially ships as a Docker container, but you can also run the hub binary directly under systemd if you don't want Docker on your monitoring host.

What about Prometheus + Grafana?

Different category. Prometheus + Grafana is a build-your-own observability stack: far more flexible, far more setup, and a resource cost closer to Netdata's. If you already run Prometheus for application metrics, exposing system metrics via Node Exporter and graphing them in Grafana is reasonable. If you're starting from zero and just want server-level visibility, the three tools in this comparison will save you days of setup time.

What about Zabbix?

Zabbix is enterprise-grade and capable, but the setup and resource overhead are in a different league. For VPS-scale monitoring on small fleets, it's overkill. I'd reach for it on dedicated server fleets in the dozens-of-hosts range or where compliance demands its kind of audit trail.

Is Beszel safe to run on a public-internet VPS?

The agent doesn't open any ports. The hub does (the dashboard), so you should put the hub behind a reverse proxy with TLS and either OAuth or basic auth, exactly what you'd do for any self-hosted dashboard. Beszel supports OAuth out of the box, which makes this less painful than rolling your own.
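
Since the typical stack here already runs nginx, a sketch of that front door (the hostname is hypothetical, the certificate paths assume certbot, and 8090 is the hub port from my setup):

    # /etc/nginx/conf.d/beszel.conf: TLS in front of the Beszel hub
    server {
        listen 443 ssl;
        server_name monitor.example.com;

        ssl_certificate     /etc/letsencrypt/live/monitor.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/monitor.example.com/privkey.pem;

        location / {
            proxy_pass http://127.0.0.1:8090;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }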

Which has the smallest CPU impact?

Beszel by a small margin, then Glances, then Netdata. In practice all three are well under 5% of a single core on idle hardware, so this rarely drives the decision.

Bottom line

If you take only one recommendation away from this article: for small VPS fleets in 2026, install Beszel and stop overthinking it. The footprint is small enough to be free, the setup is fast enough to do during a coffee break, and the dashboard is good enough that you'll actually look at it. Add Glances on each host for terminal-level debugging. Reach for Netdata only when you have a genuine application-level observability problem and the RAM to pay for the answer.

Monitoring is one of those things that pays back nothing on the day you set it up and saves you a weekend the first time something quietly goes wrong. Pick the lightest tool that meets your real needs, not the most feature-rich one you can find. Future-you, debugging a flaky cron at 2 AM, will be grateful.
