Case Study: Moving a SaaS Product from Heroku to AWS

Written by Andrew Snow · 31 March 2026

We recently helped a customer make the move from Heroku to AWS using Tapitalee — and we're excited to share how it went. This is a mid-sized SaaS company (in the $50–100M ARR range, with 51–200 employees) that had been running a Ruby on Rails application on Heroku. They were ready to move on, and we worked with them as part of our consulting and onboarding process to make it happen smoothly.

The whole migration took about two months from start to finish. We began with the staging environment as a test run, then rolled out Preview Apps (one per pull request), and wrapped up with a successful production cutover. The best part? They're actually saving money on hosting now, thanks to the cost advantages of running on AWS for both compute and database.

Here's a walk-through of how we pulled it off — and how we can do the same for you.

Step 1: Feasibility

Before anything else, we did a thorough appraisal of what the customer needed. Were there external services, Heroku add-ons, or specific requirements that might be blockers? This is a critical step — we want to make sure everything's going to work before we start moving pieces around.

In this case, the customer had two key requirements: they wanted to use Cloudflare as a WAF (Web Application Firewall), and they needed Gated Deployments — a workflow where production deploys require manual approval from specific team members before going live. The deployment pipeline pauses and waits for that approval.

Neither of these existed in Tapitalee at the time, so we built them during the lead-up to onboarding. The Cloudflare Tunnel Add-on lets you publish your Tapitalee app containers to the Internet behind Cloudflare, using Cloudflare's supplied tunnel sidecar container. Gated Deployments gives teams the control they need over what hits production and when.
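Under the hood, a Cloudflare Tunnel is driven by a small configuration file read by the `cloudflared` sidecar. A minimal sketch of what that looks like — the hostname, service port, and file paths here are hypothetical placeholders, and the Add-on generates the real configuration for you:

```yaml
# cloudflared tunnel configuration (sketch; values are placeholders)
tunnel: 6ff42ae2-765d-4adf-8112-31c55c1551ef
credentials-file: /etc/cloudflared/credentials.json
ingress:
  # Route the public hostname to the app container's HTTP port
  - hostname: app.example.com
    service: http://web:3000
  # Catch-all rule: anything else gets a 404
  - service: http_status:404
```

Because all traffic enters through the tunnel, the app containers never need a public IP — Cloudflare's WAF sits in front of every request.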

Step 2: Setting Up the Deployment Process

We integrated with the customer's existing workflow using GitHub Actions. The development and release workflow covered several important aspects:

  • CI/CD: Test runs were parallelized across GitHub Actions runners, speeding things up considerably.
  • Preview Apps: Every pull request gets its own Preview App automatically. When the PR is closed, the Preview App is deleted — or it auto-deletes after a chosen number of days. If a new commit appears on a closed PR, the Preview App gets recreated on the fly.
  • Staging Deploys: After merging a PR, a new Release gets created (e.g. v123) and staging is automatically deployed.
  • Gated Production: At the same time, a Production build is created but not deployed — it waits for approval from a manager before going live.
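The gated production step maps naturally onto GitHub Actions environments with required reviewers. Here's a minimal sketch, assuming a repository environment named `production` with reviewers configured in the repo settings — the workflow layout and deploy script are made-up illustrations, not the customer's actual pipeline:

```yaml
# Sketch: staging deploys automatically; production waits for approval.
name: Release
on:
  push:
    tags: ['v*']

jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh staging   # hypothetical deploy script

  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    # The job pauses here until a required reviewer approves the
    # "production" environment in the GitHub UI.
    environment: production
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh production
```

The `environment:` key is what creates the gate — GitHub holds the job in a waiting state and notifies the configured reviewers, which matches the approve-before-go-live workflow described above.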

If you'd like to see a canonical example of this approach with GitHub workflows, check out our little demo application: github.com/tapitalee/bigdemo.

Step 3: Running in Parallel with Heroku

Staging and Preview Apps were deployed on Tapitalee in parallel with the existing Heroku setup. After a testing period where everything looked good, Heroku was shut down. This approach gave the development team plenty of time to get comfortable with the new system without interrupting the daily development cycle.

This phase is also when we ironed out any kinks with the various Add-ons. For example, we discovered that Serverless ElastiCache (Redis) isn't suitable for use with Sidekiq because of Redis namespacing issues. No problem — a regular micro instance of Valkey (the open-source Redis fork, available through ElastiCache) works great and was a straightforward swap.
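Because Sidekiq reads its Redis connection from the `REDIS_URL` environment variable by default, the swap amounted to pointing that variable at the new Valkey endpoint. A hypothetical sketch (the hostname is a placeholder):

```shell
# Sidekiq (and most Rails Redis clients) default to REDIS_URL, so no
# code change is needed for the swap. Hypothetical Valkey endpoint;
# use "rediss://" where in-transit encryption is enabled.
export REDIS_URL="rediss://my-valkey.example.cache.amazonaws.com:6379/0"
```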

Step 4: Pre-Production Testing

Before the go-live date, we set up the production system as a second staging environment on a different URL. This was a simple procedure with Cloudflare and Tapitalee working together.

Redis and PostgreSQL data were copied over by spinning up the Tapitalee utility container, which comes preloaded with all the tools you need to sync data between environments. This container runs inside your Tapitalee app in AWS and has access to external data sources (like Heroku) over the Internet. In this case, the two environments were even in different datacentres — Heroku in us-east-1 and the new production environment in us-west-2.
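To give a flavour of what that sync involves, here's a sketch of the kind of commands a utility container might run. The function names and connection URLs are hypothetical — the real sync is driven by Tapitalee's tooling — but the underlying tools (`pg_dump`, `pg_restore`, `redis-cli MIGRATE`) are the standard ones:

```shell
# Sketch: sync data from a source environment (e.g. Heroku) into a
# target environment. Connection details are passed in as arguments.

sync_postgres() {
  # $1 = source database URL, $2 = target database URL.
  # Stream a custom-format dump straight into the target, dropping
  # existing objects first so repeated runs stay idempotent.
  pg_dump --no-owner --no-acl --format=custom "$1" \
    | pg_restore --no-owner --no-acl --clean --if-exists --dbname="$2"
}

sync_redis() {
  # $1 = source Redis URL, $2 = target host, $3 = target port.
  # Ask the source server to push each key to the target with MIGRATE.
  # Assumes no auth on the target for brevity; fine for small datasets
  # like Sidekiq queues, too slow for very large keyspaces.
  redis-cli -u "$1" --scan | while read -r key; do
    redis-cli -u "$1" MIGRATE "$2" "$3" "$key" 0 5000 REPLACE
  done
}
```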

This dry run gave us a clear picture of the timing required for the final data sync. Based on those results, the customer allocated a comfortable 3-hour maintenance window for the real cutover.

Step 5: Going Live

The move to production was a success! We immediately noticed a reduction in load on the web tier — the AWS containers run on more powerful CPUs, so the app was doing the same work with less effort.

Tapitalee's Datadog monitoring Add-on flagged a disk space issue shortly after go-live. It turned out to be a problem with verbose logging — something that had gone unnoticed on Heroku because of its generous temp space allowance. We fixed it temporarily by increasing the ephemeral container space, and then permanently with a quick code change and redeploy. Exactly the kind of thing you want to catch early.

We also deployed the SSH Server and Tailscale Add-ons to allow the customer's data warehouse to securely connect to the production database with a read-only user for periodic data syncing. All in all, a smooth transition — and the customer is now running happily on AWS with lower costs, better performance, and a modern deployment workflow.
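For the curious, a read-only database user for this kind of warehouse access boils down to a handful of Postgres grants. A hypothetical sketch — role name, database name, and schema are placeholders, and you'd run it via `psql` against the production database over the Tailscale connection:

```shell
# Sketch: create a read-only role for the data warehouse.
# $1 = admin connection URL for the production database.
create_readonly_user() {
  psql "$1" <<'SQL'
CREATE ROLE warehouse_ro WITH LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE app_production TO warehouse_ro;
GRANT USAGE ON SCHEMA public TO warehouse_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO warehouse_ro;
-- Make tables created in the future readable too:
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO warehouse_ro;
SQL
}
```

The `ALTER DEFAULT PRIVILEGES` line is the piece people usually forget — without it, every new table would need a fresh `GRANT`.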

Ready to move off Heroku?

We'd love to help you make the move. Get in touch to talk about your migration, or jump straight in and connect your AWS account.