I lost everything on a Tuesday.
Not dramatically — no fire, no flood, no Hollywood-worthy catastrophe. Just a database migration that went wrong at 2 AM, a rollback that didn't roll back, and a backup system I'd assumed was working but hadn't actually tested in seven months. By the time I realized what happened, three months of customer data, order history, and uploaded files had evaporated.
It took me two weeks to partially recover from payment processor logs and customer emails. Some data was gone forever. The total cost, including lost customers and emergency consulting fees: roughly $14,000.
The backup system that could have prevented all of this would have cost me about $3 per month.
This guide is the one I wish I'd had before that Tuesday. It covers setting up automatic, reliable, tested backups for your server — whether you're running a VPS, a dedicated server, or a cloud instance.
Step 1: Decide What Needs Backing Up
Before you configure anything, make a list. Seriously. Open a text file and write down every piece of data your application depends on.
For most web applications, this breaks down into four categories:
Database. This is usually the most critical and the most overlooked. Your MySQL, PostgreSQL, or MongoDB database contains your application's state — users, orders, settings, everything. Losing your files is painful. Losing your database is catastrophic.
User uploads. Profile pictures, documents, attachments — anything your users have uploaded. These are often stored in a directory like /var/www/uploads or in an object storage bucket. They're irreplaceable because they came from your users, not from your code.
Configuration files. Your nginx configs, environment files, SSL certificates, cron jobs, and any custom server configuration. You can always reinstall software, but recreating your exact configuration from memory at 3 AM during an outage is a special kind of hell.
Application code. If you're using git (and you should be), your code is already backed up in your repository. But make sure your deployment scripts, docker-compose files, and any local modifications are captured too.
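Once the list exists, it helps to put numbers next to it so you can estimate storage costs later. A minimal sketch; every path here is an example, so substitute your own:

```shell
#!/bin/bash
# Size up each candidate backup location (example paths; substitute your own).
# Knowing the totals up front makes the storage-cost math in Step 5 concrete.
for path in /var/www/uploads /etc/nginx /etc/letsencrypt /home/deploy/app; do
  if [ -e "$path" ]; then
    du -sh "$path"
  fi
done
```

Anything that shows up here but isn't on your written list is a gap worth closing now, not during an outage.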
Step 2: Set Up Database Backups
Database backups deserve their own process because they require special handling. You can't just copy database files while the server is running — you'll get corrupted data.
For MySQL/MariaDB, use mysqldump. Create a script at /usr/local/bin/backup-db.sh:
#!/bin/bash
# Credentials come from a [client] section in ~/.my.cnf; don't put passwords
# on the command line, where they're visible in the process list.
set -o pipefail  # without this, a failed mysqldump is hidden by gzip exiting 0
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/backups/database"
mkdir -p "$BACKUP_DIR"
mysqldump --all-databases --single-transaction \
  --routines --triggers --events \
  | gzip > "$BACKUP_DIR/all_databases_$TIMESTAMP.sql.gz"
# Keep only the last 30 days
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +30 -delete
The --single-transaction flag is critical for InnoDB tables — it ensures you get a consistent snapshot without locking the database. Your application keeps running normally during the backup.
For PostgreSQL, use pg_dumpall with similar logic. For MongoDB, use mongodump. The principle is the same: use the database's native export tool, compress the output, and rotate old backups.
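For illustration, the same dump-compress-rotate pattern translated to PostgreSQL might look like this. It assumes the invoking user can authenticate via peer auth or ~/.pgpass, and it guards on the tool being present so the sketch is safe to try on any box:

```shell
#!/bin/bash
# Sketch: dump, compress, and rotate for PostgreSQL, mirroring the MySQL
# script above. BACKUP_DIR and the 30-day retention are the same assumptions.
set -o pipefail
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="${BACKUP_DIR:-/backups/database}"

if command -v pg_dumpall >/dev/null 2>&1; then
  mkdir -p "$BACKUP_DIR"
  pg_dumpall | gzip > "$BACKUP_DIR/all_databases_$TIMESTAMP.sql.gz"
  find "$BACKUP_DIR" -name "*.sql.gz" -mtime +30 -delete
  status="dumped"
else
  # Guard so the sketch runs cleanly on a machine without PostgreSQL
  status="pg_dumpall not installed"
fi
echo "pg backup: $status"
```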
Step 3: Set Up File Backups With Rsync
For everything that isn't a database, rsync is your best friend. It's fast, it only transfers changed files, and it's been rock-solid for decades.
Create a file backup script at /usr/local/bin/backup-files.sh:
#!/bin/bash
set -euo pipefail  # stop before archiving if any sync fails
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/backups/files"
mkdir -p "$BACKUP_DIR"
# Sync uploads and configs
rsync -az --delete /var/www/uploads/ "$BACKUP_DIR/uploads/"
rsync -az /etc/nginx/ "$BACKUP_DIR/nginx/"
rsync -az /etc/letsencrypt/ "$BACKUP_DIR/letsencrypt/"
# Create dated archive of the sync
tar -czf "$BACKUP_DIR/files_$TIMESTAMP.tar.gz" \
  -C "$BACKUP_DIR" uploads nginx letsencrypt
# Keep only the last 14 daily archives
find "$BACKUP_DIR" -name "files_*.tar.gz" -mtime +14 -delete
The --delete flag on the uploads sync keeps the backup mirror matching your live directory: if a user deletes a file, it disappears from the mirror on the very next sync. The dated archives are what give you a grace period, since a deleted file survives inside them until the 14-day retention window expires.
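You can see the --delete behavior on a pair of scratch directories; this tiny demo assumes nothing beyond rsync itself:

```shell
#!/bin/bash
# Demonstration of rsync --delete semantics: a file removed from the source
# disappears from the mirror on the next sync.
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/keep.txt" "$src/removed.txt"

rsync -a "$src/" "$dst/"           # initial mirror: both files copied
rm "$src/removed.txt"              # user deletes a file from the live dir
rsync -a --delete "$src/" "$dst/"  # next sync prunes it from the mirror

ls "$dst"   # keep.txt only
```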
Step 4: Automate Everything With Cron
Manual backups are backups that stop happening the moment you get busy. Automate them from day one.
# Make the scripts executable first
chmod +x /usr/local/bin/backup-db.sh /usr/local/bin/backup-files.sh
# Then edit your crontab
crontab -e
# Database backup: every 6 hours
0 */6 * * * /usr/local/bin/backup-db.sh >> /var/log/backup-db.log 2>&1
# File backup: daily at 4 AM
0 4 * * * /usr/local/bin/backup-files.sh >> /var/log/backup-files.log 2>&1
Why every 6 hours for the database? Because the question isn't whether you'll lose data — it's how much data you can afford to lose. Backups every 6 hours means your maximum data loss is 6 hours' worth. For most small to medium applications, that's an acceptable trade-off between safety and storage costs.
If you're running a high-transaction application where losing even an hour of data is unacceptable, look into continuous replication instead of periodic dumps. That's a different (and more complex) architecture.
Step 5: Send Backups Off-Server
This is the step that most people skip, and it's the most important one.
A backup sitting on the same server as your data isn't really a backup. If the server dies — hardware failure, datacenter issue, accidental deletion of the wrong directory — your "backup" dies with it.
You need to send copies to a separate location. The cheapest options in 2026:
Backblaze B2: $0.006 per GB per month for storage, $0.01 per GB for downloads. For 50GB of backups, that's $0.30/month. Use the b2 CLI tool or rclone to sync your backup directory.
Cloudflare R2: $0.015 per GB per month, but zero egress fees. If you ever need to download your backups in a hurry (which is exactly when you need them), not paying egress is a big deal.
AWS S3 Glacier: Even cheaper for archival, but retrievals take hours and cost money. Good for long-term archives, not for "I need this data back right now."
Add a sync step to your backup script:
# Sync to Backblaze B2 (or Cloudflare R2)
rclone sync /backups/ remote:my-server-backups/ \
  --transfers 4 --checkers 8
# Verify the remote actually matches the local copy
rclone check /backups/ remote:my-server-backups/ --one-way
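If you'd rather not couple the off-site sync to the backup scripts themselves, it can run as its own cron entry shortly after the nightly file backup; the 4:30 AM time and log path here are just examples:

```
# Off-site sync: daily at 4:30 AM, after the file backup has finished
30 4 * * * /usr/bin/rclone sync /backups/ remote:my-server-backups/ >> /var/log/backup-sync.log 2>&1
```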
Step 6: Test Your Backups (The Step Everyone Skips)
An untested backup is Schrödinger's backup — it exists in a superposition of working and not working until you actually try to restore from it.
Set a monthly calendar reminder to do a restore test. The process should be:
- Spin up a temporary server or use a local VM
- Download the latest backup from your off-site storage
- Restore the database from the SQL dump
- Restore the files from the archive
- Verify the application runs correctly
- Document anything that was missing or broken
Yes, this takes time. Yes, it's tedious. And yes, it's the only way to know your backups actually work. I learned this the hard way when my "working" backup turned out to be a corrupt gzip file that mysqldump had been silently failing on for months because the disk was full.
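A cheap first check between full restore tests is verifying that the newest dump actually decompresses, since a truncated archive from a full disk fails gzip's integrity test immediately. A minimal sketch, demonstrated here on a scratch directory; point BACKUP_DIR at /backups/database on a real server:

```shell
#!/bin/bash
# Pre-restore sanity check: gzip -t reads the entire archive and fails on
# truncation or corruption, which is exactly how a full-disk dump shows up.
BACKUP_DIR=$(mktemp -d)   # stand-in for /backups/database in this demo
echo "SELECT 1;" | gzip > "$BACKUP_DIR/all_databases_demo.sql.gz"

latest=$(ls -t "$BACKUP_DIR"/*.sql.gz | head -n 1)
if gzip -t "$latest"; then
  verdict="intact"
else
  verdict="corrupt"
fi
echo "newest dump is $verdict"
```

This is not a substitute for the monthly restore test, but it catches the "corrupt gzip for months" failure mode within hours instead.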
Step 7: Set Up Alerts
The final piece is knowing when something goes wrong. Add monitoring to your backup scripts:
# At the end of your backup script. $? holds the exit status of the
# previous command, so check it immediately after the critical step:
if [ $? -eq 0 ]; then
  curl -fsS --retry 3 "https://hc-ping.com/YOUR-UUID" > /dev/null
else
  # Report an explicit failure so the alert fires right away
  curl -fsS --retry 3 "https://hc-ping.com/YOUR-UUID/fail" > /dev/null
fi
Healthchecks.io (free for up to 20 checks) will ping you if your backup script doesn't check in on schedule. Dead simple, incredibly effective. If your 4 AM backup didn't run, you'll know by 5 AM instead of finding out three months later when you actually need it.
The Full Picture
When you're done, your backup system should look like this:
- Database dumps every 6 hours, compressed, stored locally
- File backups daily, synced and archived
- Everything synced to off-site storage (B2, R2, or S3)
- 30-day retention for databases, 14-day for files
- Monthly restore tests on a separate system
- Automated alerts if any backup fails to run
Total cost for a typical small server: $1-5 per month in storage. Total cost of not having backups: ask me about my $14,000 Tuesday.
The best time to set up backups was before you needed them. The second best time is right now. Go do it. Your future self will thank you.