I Set Up a Remote Dev Environment With an AI Coding Agent on a $12 VPS — And Now I Cannot Go Back to Local Development

Three weeks ago, my MacBook's SSD started making a noise that sounded like a cricket having an existential crisis. I had a client deadline in four days, 22 uncommitted files, and the nearest Apple Store had a three-day repair wait. Instead of panicking — okay, after panicking for about 15 minutes — I spun up a VPS, installed an AI coding agent, and kept working from my wife's ancient ThinkPad as if nothing happened.

That accidental experiment changed how I think about development environments entirely. And now that AI coding agents like OpenCode have exploded in popularity (it hit 120,000 GitHub stars and 948 points on Hacker News this week), I want to walk you through exactly how to set up a remote dev environment that runs an AI agent 24/7 on a cheap VPS. Because once you try it, local development starts feeling like burning CDs in the age of Spotify.

Why a Remote AI Coding Environment Makes Sense in 2026

Here is the pitch. AI coding agents need compute, memory, and stable internet to work well. Your laptop provides some of that, but it also needs to run Slack, Chrome with 47 tabs, and Spotify simultaneously. A VPS does one thing: run your dev environment. It does it well, it does it always, and it does not care that you spilled coffee on it because it lives in a data center in Virginia.

My friend Carlos, who runs a three-person dev shop in Austin, switched to remote dev environments last November. I asked him last week if he would go back. He laughed so hard he started coughing. "Brother, my staging server, my dev environment, and my AI agent all run on the same $24/month box. I could code from a Chromebook at Starbucks and nobody would know the difference."

He is not wrong. And the economics have gotten ridiculous.

Choosing the Right VPS: What Actually Matters for AI Dev Environments

Not all VPS providers are created equal, and the requirements for an AI coding agent workflow are different from hosting a WordPress site or running a game server. Here is what you need to prioritize:

RAM: The Non-Negotiable

AI coding agents load your entire project context into memory. The LSP servers, the file watchers, the git index, the AI conversation history — it adds up fast. For a medium-sized project (say, a Next.js app with 200-400 files), you want minimum 4GB RAM. For monorepos or anything with heavy TypeScript compilation, 8GB is the sweet spot.

I tested on 2GB and the OOM killer terminated my agent session mid-refactor. I lost about 40 minutes of work and gained a new appreciation for adequate memory allocation. Learn from my suffering.
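If an agent session ever dies silently, the kernel log will tell you whether the OOM killer was responsible. A quick sanity check, plus an optional swap-file safety net for smaller instances (a stopgap, not a substitute for real RAM):

```shell
# How much headroom do I have right now?
free -h

# Did the kernel kill my agent? Look for OOM entries in the log
sudo dmesg -T | grep -iE 'out of memory|killed process'

# Optional: a 2GB swap file softens OOM pressure on smaller instances
sudo fallocate -l 2G /swapfile && sudo chmod 600 /swapfile
sudo mkswap /swapfile && sudo swapon /swapfile
```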

CPU: Cores Matter More Than Clock Speed

AI agents often run multiple processes in parallel — LSP servers, file watchers, test runners, and the agent process itself. Two cores is the bare minimum. Four cores gives you breathing room for parallel sessions. The clock speed matters less than you think because most of the heavy AI compute happens on the model provider's servers, not yours.

Disk Speed: NVMe or Go Home

File operations are the bottleneck you did not expect. The AI agent reads and writes files constantly — scanning directories, loading context, saving changes. A traditional HDD will make everything feel sluggish. NVMe SSD is standard on most modern VPS providers, but double-check. I have seen budget providers still shipping SATA SSDs that benchmark at half the speed.
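Do not take the provider's word for it: benchmark the disk yourself. A rough sketch with fio (4k random reads approximate the small-file churn an agent generates while scanning and editing a project); exact numbers vary by plan, but NVMe should come in dramatically higher than SATA:

```shell
sudo apt install -y fio

# 4k random reads against a 1GB test file, 30-second run
fio --name=randread --rw=randread --bs=4k --size=1G --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=30 --time_based
rm -f randread.0.0   # remove the test file fio leaves behind

# Quick-and-dirty sequential write check if fio is unavailable
dd if=/dev/zero of=/tmp/ddtest bs=1M count=512 oflag=direct
rm -f /tmp/ddtest
```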

Network: Latency Beats Bandwidth

You are SSHing into this machine and streaming terminal output in real-time. The agent is making API calls to model providers (Anthropic, OpenAI, etc.). Low latency matters more than raw bandwidth. Choose a data center close to your model provider's servers — most are in US-East or US-West. If you are in Europe, US-East typically gives sub-100ms latency, which feels instantaneous.
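You can measure this yourself before committing to a region. A sketch using curl's timing variables against the providers' public API hostnames (adjust for whichever models you actually use):

```shell
# TCP connect and TLS handshake times to each provider
for host in api.anthropic.com api.openai.com; do
  printf '%-20s ' "$host"
  curl -o /dev/null -s -w 'connect: %{time_connect}s  tls: %{time_appconnect}s\n' \
       "https://$host/"
done
```

Run it a few times and eyeball the average; anything under ~0.1s connect time will feel instantaneous in practice.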

The Best VPS Providers for AI Dev Environments (Tested March 2026)

I have run this setup on five different providers over the past three weeks. (If you want the pure VPS comparison without the AI agent angle, see our Hetzner vs DigitalOcean vs Vultr 90-day test.) Here is my honest ranking:

1. Hetzner Cloud — Best Overall Value

Starting at $4.35/month for 2 vCPUs, 4GB RAM, 40GB NVMe. Their CPX21 plan at $8.10/month (3 vCPUs, 4GB RAM, 80GB NVMe) is the sweet spot for solo developers. I ran a multi-session AI agent setup on this for two weeks and it handled everything I threw at it.

The Ashburn, Virginia data center gives excellent latency to both Anthropic and OpenAI's API endpoints. I measured 12ms average to Anthropic's API, which is basically free in terms of perceived latency.

Caveat: Hetzner's US offering is newer and has fewer data center options than their European fleet. If you need a specific US region, check availability first.

2. DigitalOcean — Best Developer Experience

The $12/month droplet (2 vCPUs, 2GB RAM, 60GB NVMe) is underpowered for heavy AI agent workloads, but the $24/month option (2 vCPUs, 4GB RAM, 80GB NVMe) is solid. What DigitalOcean wins on is the developer experience — the dashboard, the CLI tools, the snapshot system, and the one-click marketplace.

I set up a DigitalOcean droplet with their dev tools marketplace image and had a fully configured AI coding environment running in under 20 minutes. That includes installing OpenCode, configuring tmux, setting up my dotfiles, and connecting to Claude.

My one gripe: their Premium CPU droplets (AMD EPYC) are great, but the regular shared CPU droplets can get noisy-neighbor'd during peak hours. For $24/month, I expect consistent performance. Most of the time it is fine. Emphasis on "most."

3. Vultr — Best for GPU-Adjacent Workloads

If you are running local models (Llama, Mistral, CodeQwen) instead of cloud APIs, Vultr's GPU instances are worth looking at. Their Cloud GPU offering starts at $90/month for an A40, which can run a 13B parameter model at acceptable speeds for coding agent tasks.

For API-only workflows, their regular $12/month plan (2 vCPUs, 4GB RAM, 80GB NVMe) performs comparably to DigitalOcean. Network routing is slightly better for US-East in my testing — about 3ms lower latency to Anthropic.

4. Linode (Akamai) — Most Consistent Performance

Linode's dedicated CPU plans are excellent if you absolutely cannot tolerate performance variability. The Dedicated 4GB plan at $36/month gives you 2 dedicated cores, 4GB RAM, and 80GB NVMe. It is pricier than shared options, but I never once saw CPU steal above 0% in two weeks of testing. For production dev environments where consistency matters more than cost, this is the pick.
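CPU steal is easy to verify on your own instance. The "st" column in vmstat is the percentage of time the hypervisor handed your vCPU to another tenant:

```shell
# Sample once per second for five seconds; watch the "st" column on the right
vmstat 1 5

# Or read the raw steal counter (9th field of the aggregate cpu line)
awk '/^cpu /{print "steal jiffies:", $9}' /proc/stat
```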

5. AWS Lightsail — The "Already in the Ecosystem" Pick

If your company already lives in AWS, Lightsail's $12/month plan (2 vCPUs, 2GB RAM, 60GB SSD) keeps everything in one billing system. Performance is middle-of-the-road. The real value is integration — VPC peering to your production AWS resources, same IAM system, familiar tooling. Not the cheapest, not the fastest, but sometimes "already there" is the feature that matters most.

Setting Up Your Remote AI Dev Environment: Step by Step

I am going to walk through this using Hetzner (my recommended provider) and OpenCode (the open source AI coding agent that just blew up on Hacker News). Adjust for your preferred provider and agent as needed.

Step 1: Spin Up the VPS

Create a CPX21 instance (3 vCPUs, 4GB RAM, 80GB NVMe) in the Ashburn data center. Choose Ubuntu 24.04 LTS. Add your SSH key. Boot it up. Total time: about 90 seconds.

Step 2: Secure the Basics

SSH in and run your standard hardening — create a non-root user, disable password auth, configure the UFW firewall, install fail2ban. (For a deeper dive on server security, read our complete VPS lockdown guide.) I am not going to belabor this because if you are setting up a dev VPS, you should already know this dance. If you do not, DigitalOcean has an excellent guide titled "Initial Server Setup with Ubuntu" that covers everything.

(Yes, I just recommended a DigitalOcean guide while setting up a Hetzner server. The internet is a sharing economy. Deal with it.)
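For the impatient, here is a minimal sketch of that dance, run as root on a fresh instance (the username `dev` is just an example):

```shell
# Non-root user with sudo
adduser dev && usermod -aG sudo dev

# Firewall: allow SSH only, then turn it on
ufw allow OpenSSH && ufw enable

# Ban repeated failed login attempts
apt install -y fail2ban

# Key-only logins: disable password authentication
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart ssh
```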

Step 3: Install Your Dev Stack

Install your language runtimes, package managers, and dev tools. For a typical full-stack setup:

curl -fsSL https://fnm.vercel.app/install | bash  # Node via fnm
fnm install 22
curl -sSf https://rye.astral.sh/get | bash  # Python via Rye
sudo apt install tmux ripgrep fd-find git -y

Step 4: Install and Configure OpenCode

curl -fsSL https://opencode.ai/install | bash
opencode config set provider anthropic
opencode config set model claude-sonnet-4-20250514

Set your API key as an environment variable in your shell profile. Do not hardcode it anywhere. I once committed an API key to a public repo and within 11 minutes someone in Romania had racked up $47 on my account. Eleven minutes. I still have the Anthropic invoice as a reminder.
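One way to do it, with the file permissions locked down (the key value here is a placeholder, obviously):

```shell
# Append the key to your shell profile, never to a file inside a repo
echo 'export ANTHROPIC_API_KEY="sk-ant-your-key-here"' >> ~/.profile
chmod 600 ~/.profile   # readable by you alone
. ~/.profile

# Verify it is set without ever printing the value
[ -n "$ANTHROPIC_API_KEY" ] && echo "key is set"
```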

Want to see what AI coding agents can actually do? Check out our detailed OpenCode vs Cursor review on SoftwarePeeks.

Step 5: Set Up Persistent Sessions With tmux

This is the magic trick. You want your AI agent sessions to persist even when you disconnect from SSH. tmux does this perfectly:

tmux new-session -s dev
opencode  # Start your first agent session

Now you can detach (Ctrl+B, then D), close your laptop, go get lunch, and reconnect later. The agent keeps running. Your session survives internet drops, laptop crashes, and coffee spills. This is the single biggest advantage of remote dev environments — your work persists independently of your local machine.
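The reconnect side of the trick, for reference:

```shell
tmux ls                   # list running sessions
tmux attach -t dev        # reattach to the "dev" session
tmux new-session -As dev  # attach if "dev" exists, create it if not
```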

Step 6: Configure Remote Access From Your Editor

VS Code's Remote SSH extension connects to your VPS and gives you a full IDE experience with the file system, terminal, and extensions all running remotely. The editing happens in your local VS Code window, but the compute is on the VPS. It feels seamless over a good connection.

Alternatively, if you are a terminal purist, just use SSH + tmux + your favorite terminal editor. (And if you want a web-based dashboard on top, Cockpit is free and excellent.) I have been using Neovim on the remote machine, and honestly it is faster than any GUI editor I have tried because there is zero local-to-remote file sync latency. The files are already there.

Cost Comparison: Remote AI Dev vs Local Machine

| Setup | Monthly Cost | Pros | Cons |
|---|---|---|---|
| Local MacBook + Cursor Pro | $20 (Cursor) | Offline capable, familiar | Battery drain, thermal throttle, single machine |
| Hetzner VPS + OpenCode | $8.10 (VPS) + $0 (OpenCode) | Always on, any device, persistent sessions | Internet required, initial setup |
| DigitalOcean + OpenCode | $24 (VPS) + $0 (OpenCode) | Great dashboard, easy snapshots | Pricier, shared CPU variability |
| GitHub Codespaces | $0-36 (usage based) | Zero setup, GitHub integration | Usage-based billing surprises, limited customization |

The Hetzner + OpenCode combo at $8.10/month total (plus your model API costs, which you are paying regardless of where you run the agent) is genuinely hard to beat. That is less than a single month of Cursor Pro, and you get a persistent server that runs 24/7.

Gotchas I Learned the Hard Way

Backup your VPS. I mean it. My Hetzner instance had a disk issue on day 9 of my test. Hetzner's snapshot feature saved me — I was back up on a new instance in four minutes. But if I had not taken that snapshot the night before, I would have lost two days of uncommitted experiments. Snapshots cost $0.01/GB/month. Just do it.
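You can automate the nightly snapshot with Hetzner's hcloud CLI. A sketch (it assumes you have already run `hcloud context create`, and `my-dev-box` is whatever you named your server):

```shell
# Take a dated snapshot of the server
hcloud server create-image --type snapshot \
      --description "nightly-$(date +%F)" my-dev-box

# List existing snapshots so you can prune old ones (they bill per GB)
hcloud image list --type snapshot
```

Drop the first command into a cron entry (`0 3 * * *`) and nightly backups take care of themselves.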

Set up your API key rotation. Your API key lives on a server you access via SSH. If someone compromises your VPS, they get your Anthropic/OpenAI key. Use environment variables, not config files. Rotate keys monthly. Set usage alerts on your API provider dashboard. I have mine set to alert at $10 and hard-cap at $50.

Monitor your RAM. AI agent sessions can leak memory over long-running conversations. I run a simple cron job that restarts the agent process if memory exceeds 80%. It has triggered twice in three weeks — both times during massive refactoring sessions where the context window was full.
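My watchdog is nothing fancy. A sketch of the idea (the process name `opencode` and the tmux session name `dev` match this article's setup; adjust both for yours):

```shell
#!/usr/bin/env bash
# memwatch.sh: restart the agent when memory use crosses 80%
used_pct=$(free | awk '/^Mem:/ {printf "%d", $3/$2*100}')
if [ "$used_pct" -ge 80 ]; then
  pkill -f opencode || true                 # stop the leaky agent process
  tmux send-keys -t dev 'opencode' Enter    # relaunch it in the tmux session
fi
```

Run it every five minutes from cron: `*/5 * * * * /home/dev/memwatch.sh`.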

The Bottom Line

A year ago, "remote AI coding environment" was something only well-funded teams with DevOps engineers could set up. Today, you can do it in 30 minutes on an $8 VPS with a free open source agent.

The convergence of cheap cloud compute, open source AI coding agents, and persistent remote sessions has created something genuinely new: a development environment that is always on, accessible from any device, immune to hardware failures, and powered by AI that gets better every month.

My MacBook is back from the Apple Store now, by the way. New SSD, works fine. But I have not moved my dev environment back to it. The VPS is faster, more reliable, and it does not make cricket noises. Sometimes the best solutions come from the worst emergencies.

Set up a VPS this weekend. Install OpenCode. Connect it to Claude or GPT. Start a tmux session. Then close your laptop and go outside. Your code will still be there when you get back. And honestly? That kind of peace of mind is worth way more than $8.10 a month.
