Linux 7.0 Might Cut PostgreSQL Throughput in Half on AWS: What VPS and Cloud Teams Should Do Before Ubuntu 26.04 Lands
I hate upgrade-season optimism. Every release note sounds like a smoothie ad until somebody's database starts breathing through a paper bag.
This weekend's example is ugly: Phoronix reported that an AWS engineer, Salvatore Dipietro, found PostgreSQL on a Graviton4 system under Linux 7.0 development builds delivering roughly 0.51x the throughput of earlier kernels. Half. Not five percent. Half.
If you searched for Linux 7.0 PostgreSQL performance AWS, you probably do not need hot takes. You need a plan before Ubuntu 26.04 LTS ships with this kernel line later in April.
Could Linux 7.0 really tank PostgreSQL performance on AWS?
Yes. For lock-heavy PostgreSQL workloads, the reported regression is severe enough to matter immediately. The current evidence points to Linux 7.0 scheduler and preemption changes increasing time spent in a user-space spinlock, which can slash throughput and worsen latency on Graviton4-based systems until either the kernel defaults change or PostgreSQL adapts.
The specific drama, according to the report, is tied to Linux 7.0 restricting available preemption modes. Dipietro proposed restoring PREEMPT_NONE as the default because of the regression severity. Peter Zijlstra pushed back and suggested PostgreSQL should use the RSEQ time slice extension instead. Translation: the finger-pointing phase has already begun, which means production teams should stop waiting for a magical clean resolution.
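If you want to see which preemption model a node is actually running before and after an upgrade, current mainline kernels expose it via `/sys/kernel/debug/sched/preempt`: the file lists every supported mode on one line and wraps the active one in parentheses (e.g. `none voluntary (full)`). A minimal sketch for checking it in a health script; the helper name is mine, and reading the file requires root plus a mounted debugfs:

```python
def active_preempt_mode(preempt_line: str) -> str:
    """Extract the active mode from a /sys/kernel/debug/sched/preempt line.

    The kernel lists all supported modes on one line and marks the
    currently active one with parentheses, e.g. "none voluntary (full)".
    """
    for token in preempt_line.split():
        if token.startswith("(") and token.endswith(")"):
            return token[1:-1]
    raise ValueError(f"no active mode marked in: {preempt_line!r}")
```

On a live box, `sudo cat /sys/kernel/debug/sched/preempt` gives you the raw line to feed in; logging the result next to your benchmark numbers makes the before/after comparison unambiguous.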
Why this matters beyond AWS
Because kernel-level regressions have a disgusting habit of escaping the nice neat boundaries people imagine for them.
Ubuntu 26.04 LTS is expected later this month. Plenty of VPS hosts and cloud images will adopt the newer kernel line quickly, or offer it as the default shiny option. Some teams will upgrade because security policy says so. Others because they like living dangerously. Either way, if PostgreSQL sits under customer-facing workloads, you do not want to discover a scheduler regression during Monday traffic.
Nora Kim, who runs infra for a small analytics startup, told me her standard rule is simple: never trust a fresh kernel on a database tier until pgbench tells you the truth. Sensible person. I should borrow more rules from sensible people.
What should you check before upgrading?
Run a PostgreSQL-specific benchmark on your own workload shape, compare kernel versions side by side, hold back automatic distro upgrades on database nodes, and watch for patches around PREEMPT defaults or PostgreSQL RSEQ support. If you operate Graviton4 or other modern ARM infrastructure, test there first, not last.
1. Benchmark the thing you actually run
Generic CPU tests are almost useless here. Use pgbench, your own query mix, or a replay environment. Check throughput, p95 latency, lock-heavy behavior, and CPU steal time. If your app is queue-heavy or transaction-dense, this is not optional.
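The comparison step itself is simple to automate: run the identical pgbench invocation on each kernel, capture the output, and compute the throughput ratio. A minimal sketch, assuming standard pgbench summary output (the regex matches the `tps = …` line pgbench prints; what counts as an acceptable ratio is your call, not mine):

```python
import re

# pgbench prints a summary line like "tps = 1234.567890 (...)".
TPS_RE = re.compile(r"^tps = ([0-9.]+)", re.MULTILINE)

def extract_tps(pgbench_output: str) -> float:
    """Pull the transactions-per-second figure out of pgbench's summary."""
    match = TPS_RE.search(pgbench_output)
    if match is None:
        raise ValueError("no 'tps = ...' line found in pgbench output")
    return float(match.group(1))

def throughput_ratio(new_kernel_out: str, baseline_out: str) -> float:
    """Ratio of new-kernel throughput to baseline; below 1.0 is a regression."""
    return extract_tps(new_kernel_out) / extract_tps(baseline_out)
```

Run something like `pgbench -c 64 -j 16 -T 300` (tuned to your own workload shape) on each kernel and feed both captures through `throughput_ratio`. A value near 0.5 is exactly the cliff the report describes.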
2. Freeze surprise upgrades
If your image pipeline auto-rolls new kernels into staging or production, now is the time to become less adventurous. Boring is underrated. Hold the database tier on a known-good kernel until you have your numbers.
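One way to make "hold the database tier" enforceable rather than aspirational is a guardrail in your provisioning or deploy script that refuses to continue on an unapproved kernel. A minimal sketch; the allowlist contents are illustrative, and `platform.release()` returns strings like `6.8.0-45-generic` on Ubuntu:

```python
import platform

# Known-good kernel releases for the DB tier -- an illustrative list,
# maintained by you, not pulled from anywhere authoritative.
APPROVED_DB_KERNELS = {"6.8.0-45-generic", "6.8.0-47-generic"}

def kernel_approved(release: str, approved: set[str]) -> bool:
    """True if the running kernel release is on the pinned allowlist."""
    return release in approved

def assert_db_kernel() -> None:
    """Abort a deploy early instead of finding the regression in production."""
    release = platform.release()
    if not kernel_approved(release, APPROVED_DB_KERNELS):
        raise SystemExit(
            f"kernel {release} not approved for DB nodes; refusing to deploy"
        )
```

Pair this with `apt-mark hold` on the kernel packages so unattended upgrades cannot quietly roll a database node forward while you are still collecting numbers.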
3. Track the mailing-list direction
The kernel side may still change. PostgreSQL may also adapt. But roadmaps do not pay back lost throughput. Watch what lands; do not assume.
4. Separate app-node and DB-node upgrade cadence
I still see teams upgrading everything together like they are moving apartments in one truck. Please stop. Web nodes can absorb more chaos. Databases are the porcelain cabinet of infrastructure.
Linux 7.0 vs older kernels: what is the real hosting decision?
| Choice | Upside | Risk right now |
|---|---|---|
| Upgrade immediately to Linux 7.0 | Latest kernel features, long-term alignment | Possible major PostgreSQL throughput loss |
| Stay on current stable kernel | Predictable database behavior | You delay newer scheduler changes and distro defaults |
| Split rollout by node role | Safer validation path | More ops discipline required |
| Test Linux 7.0 only on non-critical replicas | Real signal with limited blast radius | Replica results may still understate production pain |
If you are on a small VPS budget, I would absolutely choose caution over novelty here. The same instinct applied in our RunPod vs Cloud Run vs VPS comparison, and it showed up again in that Linux router guide: cheap infrastructure only stays cheap when surprises do not eat engineer hours.
The competitor coverage misses one practical gap
Most early write-ups stop at the regression headline. Useful, but incomplete. Hosting teams need an operational response: freeze kernel rollouts, benchmark on real workload shapes, isolate database nodes, and treat Ubuntu 26.04 enthusiasm as a separate issue from database safety.
That gap matters because the people searching this are not just curious. They are often mid-upgrade already, with a Slack channel growing teeth.
My recommendation
If you run PostgreSQL on AWS, especially on modern ARM hardware, do not greenlight Linux 7.0 or Ubuntu 26.04 on production database nodes until you have benchmark evidence from your own environment. App servers? Fine, test sooner. Database servers? Earn your confidence first.
I know this sounds conservative. It is. Conservatism is underrated when the alternative is explaining to customers why the dashboard got sticky after an upgrade somebody called "routine."
For now, the smartest move is not panic. It is a boring checklist executed early. Boring wins a lot of wars in infrastructure.