PostgreSQL Connection Pooling on VPS in 2026: PgBouncer vs Supavisor vs PgCat

By Fanny Engriana · 8 min read

When I first hit a connection storm on a Laravel app I run on a 4 vCPU Hostinger VPS (the database had 312 idle Postgres backends sitting on 18 GB of RAM while the app itself was barely doing anything), I learned the hard way that a Postgres server on a VPS is not the same animal as a Postgres server on a managed cloud DB. Every idle connection costs you 8–12 MB of resident memory, and if your framework opens one per worker without a pooler in front, you are renting RAM you will never use.

Across the seven aggregator sites I run on Hostinger and the 50+ client projects we have shipped at Warung Digital Teknologi, three connection poolers keep showing up in conversations: PgBouncer, PgCat, and Supavisor. They solve the same problem in very different ways, and picking the wrong one for a small VPS-based deployment can either waste money or wreck your tail latency.

This is the side-by-side I wish I had when I migrated our SmartExam AI Generator backend off direct Postgres connections. I will cover what each pooler is, where it shines, where it falls over, and which one I would put on a 4 GB / 8 GB VPS today.

Why a connection pooler matters more than people think

Postgres uses a process-per-connection model. Every connection is a forked OS process holding its own work_mem, temp_buffers, and prepared statement cache. On the SmartExam stack (Laravel queue workers, a Vue.js dashboard, and a Python OpenAI orchestration service) we had three independent clients that each wanted their own pool. Without a shared pooler, that traffic easily translates into 80–120 concurrent backends, and the box runs out of RAM before it runs out of CPU.
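
A quick way to check whether your own box has the same problem is the stock pg_stat_activity view (the backend_type column exists on Postgres 10 and newer); a fat idle bucket is RAM you are paying for and not using:

-- Count client backends by state; 'idle' rows are connections
-- holding memory while doing no work.
SELECT state, count(*)
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY state
ORDER BY count(*) DESC;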

The fix is one of two things: either cap your client-side pools to a number Postgres can comfortably hold (which throttles your app), or put a transaction-level pooler in the middle that multiplexes thousands of client connections onto a small fixed pool of real backends. On a single 4 GB VPS, the second option is the only one that scales without surprises.

The wider point: your VPS pricing is set by RAM, and Postgres connections eat RAM. A pooler turns the connection count into a software problem instead of a hardware bill.

PgBouncer: the boring, reliable default

PgBouncer has been around since 2007 and is the pooler I still reach for first on a small VPS. The binary is around 1.5 MB. Its memory footprint is roughly 2 MB of resident RAM per 1,000 idle client connections. It is single-threaded (one core does all the work), and on small machines that is a feature, not a bug, because it leaves the other cores for Postgres itself.
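
On Debian- or Ubuntu-family images, getting it running is typically a one-liner from the distro repositories (package and unit names below are the stock Debian ones):

sudo apt install pgbouncer
sudo systemctl enable --now pgbouncer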

What it gives you:

  • Three pool modes: session, transaction, and statement.
  • Battle-tested behaviour under load; most major Postgres-backed SaaS products run PgBouncer somewhere in their path.
  • Trivial config: a single pgbouncer.ini file plus a userlist.txt.
  • An admin console accessible over the same Postgres protocol, handy for SHOW POOLS and SHOW STATS (see the one-liner after this list).
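
The admin console is reached by connecting to the virtual pgbouncer database on the pooler's port; assuming your user appears under admin_users or stats_users in pgbouncer.ini, pool state is one command away:

# The virtual "pgbouncer" database exposes the admin console
psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer -c 'SHOW POOLS;'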

What it does not give you:

  • Native read replica routing: you have to put HAProxy or pgpool in front, or split your DSNs in the app.
  • Sharding.
  • Multi-core throughput: PgBouncer peaks at around 44,000 transactions per second in published benchmarks before degrading to 25,000–30,000 tps once you push past 75 concurrent client connections.
  • Named prepared statements in transaction mode (a long-standing limitation; PgBouncer 1.21+ added partial support via a single setting, shown below, but ORM behaviour varies).
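
If your ORM leans on prepared statements, the 1.21+ support is switched on with one setting in the [pgbouncer] section; the value here is a guess to size against your own workload, not a recommendation:

; requires PgBouncer 1.21 or newer
max_prepared_statements = 200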

For most VPS workloads (a Laravel app, a Django app, a small Node API) those limits are theoretical. I have not personally pushed PgBouncer past 8,000 tps on real client traffic, and it has never been the bottleneck.

PgCat: the multi-core Rust rewrite

PgCat is the newer entrant, written in Rust by the team at PostgresML. It speaks the PgBouncer admin protocol, which makes drop-in replacement deceptively easy, but the architecture under the hood is fundamentally different: it is multi-threaded and uses Tokio async I/O, so on a 16-core machine PgCat will use all 16 cores while PgBouncer pegs one.

The benchmark gap is real. In published Tembo and pkgpulse comparisons, PgCat reaches around 59,000 tps at peak (roughly 30% above PgBouncer), and beyond 750 concurrent clients it sustains more than 2× the queries-per-second of either alternative. On a 4 vCPU VPS the gap is smaller, but it is not zero.

What PgCat adds beyond PgBouncer:

  • Read/write query routing: it inspects the SQL and sends SELECTs to replicas, writes to the primary. No app changes needed (see the config sketch after this list).
  • Sharding across multiple Postgres backends, with a configurable sharding key.
  • Failover: if a replica goes down, PgCat will retry on a healthy one without an external orchestrator.
  • Mirroring: useful for shadow traffic when migrating between databases.
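
To give a feel for the routing config, here is a minimal pgcat.toml sketch with the query parser sending reads to a single replica. I am reproducing the field names from memory of pgcat's example config, so treat every key below as an assumption and diff it against the pgcat.toml.example shipped with the release you deploy:

[general]
host = "127.0.0.1"
port = 6432

[pools.appdb]
pool_mode = "transaction"
query_parser_enabled = true    # inspect incoming SQL to split reads from writes
primary_reads_enabled = false  # keep plain SELECTs on the replica

[pools.appdb.shards.0]
database = "appdb"
servers = [
  ["10.0.0.1", 5432, "primary"],
  ["10.0.0.2", 5432, "replica"],
]

[pools.appdb.users.0]
username = "app"
password = "change-me"
pool_size = 25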

The tradeoffs I would call out from running it on a small VPS:

  • Resident memory at idle is around 25–35 MB versus PgBouncer's ~5 MB. Not a deal-breaker on a 4 GB box, but noticeable on a 1 GB one.
  • The configuration is TOML and substantially more complex once you turn on routing and sharding.
  • Operational maturity is younger. PgBouncer has nearly two decades of edge-case bug fixes; PgCat is closer to two years of meaningful production use.

If you genuinely need read/write splitting and you do not want pgpool in your stack, PgCat is the cleanest option I have used.

Supavisor: the cloud-native pooler from Supabase

Supavisor is the odd one out. It is written in Elixir, runs on the BEAM virtual machine, and was built for Supabase's multi-tenant cloud, which means it is engineered for 1 million+ concurrent connections across thousands of tenants on a single cluster. That kind of scale is not what a VPS user is buying, but the design choices have practical knock-on benefits.

Where Supavisor wins:

  • Native support for named prepared statements in transaction mode, the headline limitation of PgBouncer that breaks ORMs like Prisma and SQLAlchemy in async contexts.
  • Per-tenant pool isolation: if you are running a multi-tenant SaaS on a single Postgres, Supavisor lets you cap connections per tenant.
  • Distributed by design: Erlang/OTP supervisors mean a misbehaving tenant cannot kill the whole pooler.
  • Steady-state throughput is predictable: peak is around 21,700 tps in benchmarks, lower than PgCat or PgBouncer, but it does not degrade as concurrency climbs.

Where Supavisor is less ideal for a small VPS:

  • The BEAM runtime baseline RAM is 80–120 MB at idle. On a 1 GB VPS that is meaningful.
  • Configuration is database-driven and assumes you are running the Supabase control plane or its open-source variant. Standalone use is supported but documentation skews towards the managed product.
  • Lower peak throughput than the alternatives for single-tenant workloads.

If you are running edge functions on Cloudflare Workers, Vercel, or fly.io that hammer Postgres with bursts of short-lived connections, Supavisor is the pooler that was actually designed for that traffic shape.

Side-by-side comparison

| Capability | PgBouncer | PgCat | Supavisor |
| --- | --- | --- | --- |
| Language / runtime | C | Rust (Tokio) | Elixir (BEAM) |
| Multi-core | No (single-threaded) | Yes | Yes |
| Idle RAM (baseline) | ~5 MB | ~25–35 MB | ~80–120 MB |
| Peak tps (published bench) | ~44,000 | ~59,000 | ~21,700 |
| Read/write routing | No | Yes (built-in) | Limited |
| Sharding | No | Yes | No |
| Named prepared stmts (transaction mode) | Partial (1.21+) | Yes | Yes |
| Config format | INI | TOML | DB-driven |
| Best-fit workload | Single VPS apps | Replica-aware apps | Edge / serverless |

What I actually run on a 4 GB Hostinger VPS

For a single-app, single-database VPS (which describes most of the boxes I touch) I run PgBouncer in transaction pooling mode with these tuning numbers, and they have not let me down:

[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
reserve_pool_size = 5
reserve_pool_timeout = 3
server_idle_timeout = 600
server_lifetime = 3600

Why these numbers:

  • default_pool_size = 25: roughly 6× the vCPU count. Postgres throughput on small boxes peaks around 4–8× cores, so 25 leaves headroom for occasional long queries.
  • max_client_conn = 1000: clients can open as many idle handles as they want; PgBouncer multiplexes them onto the 25 real backends.
  • server_lifetime = 3600: recycles backend processes hourly. This single line eliminated a slow-leaking work_mem growth problem we used to see on long-running BizChat instances.
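
After editing the file, apply the change and smoke-test through the pooler. RELOAD from the admin console (or a systemd reload) picks up most settings without dropping clients; listener address and port changes need a full restart:

sudo systemctl restart pgbouncer
psql "host=127.0.0.1 port=6432 dbname=appdb user=app" -c 'SELECT 1;'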

If the app is a Laravel project (which 80% of mine are) I point DB_HOST=127.0.0.1 and DB_PORT=6432, and that is the entire migration from direct Postgres. Eloquent's connection handling is forgiving in transaction mode as long as you avoid session-level features like SET LOCAL outside of transactions and LISTEN/NOTIFY. If you do need those, route that code to a separate session-mode pool; in PgBouncer that is two database stanzas in the same config, as sketched below.
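
The session-mode escape hatch looks like this; appdb_session is a name I am inventing for illustration, and it points at the same physical database with a per-database pool_mode override:

[databases]
appdb         = host=127.0.0.1 port=5432 dbname=appdb
; same database, but session pooling for LISTEN/NOTIFY, advisory locks, SET
appdb_session = host=127.0.0.1 port=5432 dbname=appdb pool_mode=session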

When I would switch away from PgBouncer

I would move to PgCat the moment a project picks up a read replica and the app cannot easily separate read and write DSNs. The work to introduce HAProxy plus a separate PgBouncer for the replica is more than the work to deploy PgCat with routing turned on. PgCat also wins if you are running a 16+ core machine and pushing serious tps, where the multi-core scaling pays for itself.

I would move to Supavisor only if my app is fundamentally serverless and connection-bursty: think Vercel functions hitting Postgres on every request, where each invocation expects a fresh connection. PgBouncer can handle this too, but Supavisor's design treats it as the primary case rather than an edge condition. For a long-running Laravel app on a VPS, Supavisor is overkill.

FAQ

Do I need a pooler on a 1 GB VPS running just one app?

If your app already uses an internal pool with a sensible cap (PHP-FPM with one connection per worker, or Node with pg-pool capped at 10), and your peak concurrency is under 30, you can skip a pooler. Once you have queue workers, scheduled jobs, and a separate API process all sharing the database, a pooler pays for itself.
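
For illustration, here is what a sensibly capped client-side pool looks like in Python with psycopg 3's pool package; the connection details are placeholders, and the same shape applies to Node's pg-pool or PHP-FPM worker counts:

# pip install "psycopg[binary]" psycopg_pool
from psycopg_pool import ConnectionPool

# max_size is the hard cap: this process can never hold more than 10
# Postgres backends, no matter how many threads ask for a connection.
with ConnectionPool(
    "host=127.0.0.1 port=5432 dbname=appdb user=app",
    min_size=2,   # keep a couple of connections warm
    max_size=10,
) as pool:
    with pool.connection() as conn:
        print(conn.execute("SELECT 1").fetchone())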

Will a pooler hurt latency?

On a localhost VPS deployment, the added round trip is sub-millisecond. The latency people see is usually from waiting for a backend in the pool when default_pool_size is too small; that is a tuning issue, not a pooler issue.
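
The admin console makes the two cases easy to tell apart: in SHOW POOLS output, cl_waiting is the number of clients queued for a backend and maxwait is how long the oldest of them has been waiting. If those climb under load, raise default_pool_size:

psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer -c 'SHOW POOLS;'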

Transaction mode versus session mode?

Transaction mode assigns a server connection per transaction, which is what gives you the multiplexing benefit. Session mode pins one server connection to one client for the whole session and is closer to a direct connection. Transaction mode breaks features that rely on session state (SET, prepared statements without protocol-level handling, advisory locks across statements). Use transaction mode by default, and route session-dependent code to a session-mode pool (the two-stanza pattern shown earlier).

Is there a hosted option?

Yes: Supabase, Neon, and PolyScale all run poolers as a managed product. If you do not want to operate one, those are reasonable options, but the cost on a small project is usually higher than a $5 VPS running PgBouncer yourself.

The recommendation

For 90% of VPS Postgres deployments in 2026 the answer is still PgBouncer. It is small, predictable, and the operational knowledge has been baked into hosting tutorials for nearly two decades. Pick PgCat the moment you grow a read replica or hit a thread-bound bottleneck. Pick Supavisor when your traffic shape is genuinely serverless and your client count is in the thousands.

The mistake I see most often is not picking the wrong pooler; it is not running one at all, and watching a $5 VPS get pushed into a $40 plan because Postgres backends keep eating memory. Whichever of the three you pick, getting a pooler in front of the database is the single highest-leverage change you can make on a small Postgres deployment.
