
How We Recovered Lost Data Using Postgres Backups on Railway

5 min read
postgres · railway · devops · databases

We lost data. Not a lot, but enough to matter. A bad migration wiped a handful of tables that weren't supposed to be touched, and our most recent application-level backup was stale. What we did have was Railway's built-in Postgres backup — a daily snapshot sitting right there in the dashboard.

The problem? Railway's restore feature replaces your current database. We didn't want to nuke production to get the old data back. We needed a way to spin up the backup alongside the live database, inspect it, and surgically pull out what we needed.

Here's the technique we figured out. It's not documented anywhere obvious, and it relies on understanding how Railway's volume mounts work under the hood.

The Core Idea

Railway's Postgres databases store their data on volumes — essentially mountable disk images. When you restore a backup, Railway doesn't overwrite your existing volume. Instead, it creates a new volume from the backup snapshot and stages a swap. This is the key insight: volumes are interchangeable, and you can control which database they attach to before deploying.

The strategy is simple:

  1. Duplicate your database (creates a new empty instance)
  2. Trigger a restore on the original (stages a volume swap)
  3. Redirect the backup volume to the duplicate instead
  4. Deploy — your production stays untouched, and the duplicate now has your backup data

Step-by-Step Walkthrough

1. Duplicate the Database

In your Railway project, right-click on the Postgres database you want to recover data from. Click Duplicate, then Deploy. This spins up a second Postgres instance with identical configuration — same version, same extensions — but with an empty data volume.

Give it a recognizable name like Postgres Backup Copy so you don't mix them up later.

2. Trigger the Backup Restore

Open the original database and navigate to the Backups tab. Find the backup you want to recover from — they're labeled by date — and click Restore.

This is where it gets interesting. Railway doesn't immediately apply the restore. Instead, it stages a set of volume changes:

  • The original database's current volume gets disconnected
  • A new volume (named with the backup's date stamp, e.g. 2024-12-04) gets created from the backup and queued for connection

Nothing happens yet. These changes are staged, waiting for you to deploy. This is your window to rearrange things.

3. Swap the Volumes

This is the critical step. In Railway's volume management interface, you'll see three volumes:

| Volume | Description |
|--------|-------------|
| Original volume (e.g. effect-disk) | Your live production data |
| Backup volume (e.g. 2024-12-04) | The restored backup snapshot |
| Duplicate's volume (e.g. jar-volume) | Empty volume from the duplicate |

Now rearrange them:

  • Drag the original volume (effect-disk) back to the original database — this ensures production keeps its current data
  • Move the backup volume (2024-12-04) to the duplicate database — this is where your recovered data will live
  • Remove the duplicate's empty volume (jar-volume) — it's not needed anymore

4. Review and Deploy

Take a moment to verify the staged configuration:

  • Original database → original volume (unchanged)
  • Duplicate database → backup volume (restored data)
  • Empty volume → removed

Once you're confident, hit Deploy. Railway applies all the volume changes atomically.

5. Verify the Recovery

Connect to the duplicate database and confirm your data is there:

```sql
-- Check that tables exist
\dt

-- Verify row counts on critical tables
SELECT COUNT(*) FROM users;
SELECT COUNT(*) FROM orders;

-- Spot-check specific records you know should exist
SELECT * FROM users WHERE email = 'specific@example.com';
```

From here, you can use pg_dump to export specific tables or run queries to extract exactly the rows you need, then insert them back into production.
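For targeted row recovery, psql's \copy is often quicker than a full pg_dump. A minimal sketch of the round trip — the table names and date filter here are placeholders for whatever you actually lost:

```sql
-- Connected to the DUPLICATE (backup) database:
-- export just the rows you need to a local CSV file.
\copy (SELECT * FROM users WHERE created_at >= '2024-12-01') TO 'recovered_users.csv' CSV HEADER

-- Then, connected to PRODUCTION, replay them.
-- This assumes the rows were deleted outright; if some survived,
-- stage into a temp table first and reconcile before inserting.
\copy users FROM 'recovered_users.csv' CSV HEADER
```

Because \copy runs client-side, the CSV lands on your machine, which makes it easy to eyeball the data between the export and the import.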

Why This Works

Railway's architecture treats volumes as first-class, portable objects. A Postgres instance doesn't care which volume it boots from — it just reads whatever's mounted at its data directory. By intercepting the restore flow before deployment, you're essentially telling Railway: "Yes, create that backup volume, but mount it over there instead."

This is a non-destructive operation. At no point does your production database lose its volume or go offline. The backup volume is a completely independent copy.

Things to Watch Out For

Swap volumes simultaneously. Don't deploy with a database disconnected from any volume — it will fail to start and you'll have a brief outage while you fix it. Make all your volume moves in a single staged deployment.

Name your volumes clearly. When you're staring at three volumes in the management interface, it's easy to mix them up. Railway's date-stamped naming helps, but take a screenshot of the "before" state if you're nervous.

Schema drift. If you've run migrations since the backup was taken, the restored database will have an older schema. Your application code may not be compatible. Connect with a raw SQL client first, not your application.
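One quick way to spot drift before extracting anything is to list columns from information_schema on both databases and diff the results:

```sql
-- Run against both the backup copy and production, then diff the output.
-- Any mismatch means migrations ran after this backup was taken.
SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;
```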

Backup freshness. Railway's automatic backups typically run daily. Check the timestamp — if the data loss happened right before a backup window, you might be recovering stale data anyway.
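A quick freshness check on the backup copy, assuming your tables carry a timestamp column like created_at (a common convention, not a given):

```sql
-- How recent is the newest data in this backup?
SELECT MAX(created_at) AS newest_order FROM orders;
```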

Key Takeaways

This technique is useful beyond disaster recovery. We've since used it to:

  • Debug production issues by querying a backup copy without touching live data
  • Test migrations by running them against a backup first
  • Generate reports from historical snapshots without loading production

The underlying principle is worth remembering: in Railway, volumes are portable. Once you understand that, the platform becomes a lot more flexible than its UI might suggest.

The best disaster recovery plan is one you've practiced before you need it. Test this with non-critical data first.

If you're running Postgres on Railway and don't have a recovery playbook, take 20 minutes to try this with a test database. Future-you will be grateful.