From VPS to AWS: How to Migrate a Web App and What It Costs

You have a web app running on a VPS — React on the frontend, Node.js on the backend, PostgreSQL as the database, and Redis for caching. Everything runs fine on a €15–30/month server at Hetzner or DigitalOcean. But then traffic grows, clients want SLAs, and you need real scalability. It’s time to look seriously at AWS.

In this article we walk through the right AWS architecture for such an application, compare costs with real numbers, and explain how EC2 autoscaling works in practice.

The starting point: the classic VPS

The typical scenario looks like this: a single server (Hetzner CPX31 — 4 vCPU, 8GB RAM, 160GB SSD) at ~€15/month running everything — Node.js, PostgreSQL, Redis, and maybe Nginx as a reverse proxy. You might add a second server for redundancy and end up at €30–40/month total.

It works. But you have a few fundamental issues: no autoscaling (Black Friday traffic can take the server down), no automatic failover (if the server goes down, everything goes down), backups are manual or semi-automated, and scaling means “buy a bigger server and migrate manually.”

The proposed AWS architecture

On AWS, the same application is split into dedicated services:

Frontend (React) — S3 + CloudFront. Static assets (HTML, CSS, JS) are served from S3 through the CloudFront CDN. No servers to manage, low latency globally, negligible cost for moderate traffic.

Load Balancer — Application Load Balancer (ALB). It distributes HTTP/HTTPS traffic to your backend instances. Costs ~$0.0252/hour fixed plus a variable cost based on LCU (Load Balancer Capacity Units) depending on traffic.

Backend (Node.js) — EC2 Auto Scaling Group. Minimum 2 t3.medium instances (2 vCPU, 4GB RAM) in different Availability Zones. The Auto Scaling Group adds or removes instances automatically based on traffic.

Database (PostgreSQL) — RDS PostgreSQL. A db.t3.medium instance with Multi-AZ for automatic failover. Automated backups, AWS-managed patching, point-in-time restore.

Cache (Redis) — ElastiCache. A fully managed cache.t3.medium node. You no longer need to manage the Redis process, updates, or monitoring.

Cost comparison: VPS vs. AWS

Let’s put the numbers side by side. Prices are for the eu-central-1 (Frankfurt) region, On-Demand, with no discounts.

VPS scenario (Hetzner)

A typical production setup with two servers for redundancy costs roughly: two CPX31 servers at €15/month each, plus managed backup at ~€3/month, for about €33/month total.

AWS scenario (On-Demand)

On AWS, costs are spread across each service. Two EC2 t3.medium instances (backend) are about $67/month ($0.046/hour × 730h × 2). The ALB is around $25/month (fixed fee plus LCUs for moderate traffic). RDS PostgreSQL db.t3.medium Single-AZ is ~$53/month ($0.072/hour × 730h), and with Multi-AZ it roughly doubles to about $106/month. ElastiCache Redis cache.t3.medium is about $50/month. S3 and CloudFront for the static frontend cost under $5/month at moderate traffic. In total, AWS On-Demand lands at about $200–255/month, depending on whether the database runs Single-AZ or Multi-AZ.
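The figures above are easy to sanity-check with a small script. The EC2 and RDS hourly rates are the ones quoted in the text; the ALB LCU charge and the ElastiCache figure are rough assumptions, not quoted AWS prices:

```python
# Rough monthly cost model for the AWS On-Demand scenario (eu-central-1).
# EC2 and RDS hourly rates are from the article; the ALB LCU charge and
# the ElastiCache figure are assumptions for a moderate-traffic app.
HOURS = 730  # average hours in a month

ec2 = 0.046 * HOURS * 2       # 2x t3.medium backend instances
alb = 0.0252 * HOURS + 7.0    # fixed hourly fee + ~$7 assumed LCU charge
rds_single = 0.072 * HOURS    # db.t3.medium, Single-AZ
rds_multi = rds_single * 2    # Multi-AZ roughly doubles the price
redis = 50.0                  # cache.t3.medium, rough figure
static = 5.0                  # S3 + CloudFront at moderate traffic

total_single = ec2 + alb + rds_single + redis + static
total_multi = ec2 + alb + rds_multi + redis + static
print(f"Single-AZ: ${total_single:.0f}/month, Multi-AZ: ${total_multi:.0f}/month")
```

Running this gives roughly $200/month Single-AZ and $253/month Multi-AZ, matching the totals above.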

Optimized AWS scenario

With Reserved Instances (1 year, no upfront payment) you save about 30–40% — note that the discount applies to the instance-based services (EC2, RDS, ElastiCache), while the ALB and S3/CloudFront stay On-Demand. The total drops to roughly $130–190/month. With 1-year Savings Plans you can go even lower. And if you use Graviton instances (t4g instead of t3), you gain another 10–20% on price with similar or better performance.
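The same back-of-the-envelope arithmetic gives the optimized range. This is a rough model under the article's own figures, with the 30–40% discount applied only to the instance-based services:

```python
# Reserved Instance discounts apply to instance-based services
# (EC2, RDS, ElastiCache); the ALB and S3/CloudFront stay On-Demand.
instances_single = 67.0 + 53.0 + 50.0   # EC2 + RDS Single-AZ + ElastiCache
instances_multi = 67.0 + 106.0 + 50.0   # EC2 + RDS Multi-AZ + ElastiCache
fixed = 25.0 + 5.0                      # ALB + S3/CloudFront, undiscounted

best = fixed + instances_single * (1 - 0.40)   # Single-AZ, 40% discount
worst = fixed + instances_multi * (1 - 0.30)   # Multi-AZ, only 30% discount
print(f"Reserved Instance estimate: ${best:.0f}-${worst:.0f}/month")
```

That lands at roughly $132–186/month, i.e. the $130–190 range above.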

The cost verdict

The VPS is 5–8 times cheaper in raw price. But you’re not comparing like with like. AWS gives you: automatic failover, managed backups, elastic scalability, integrated monitoring (CloudWatch), compliance certifications (SOC 2, ISO 27001) and GDPR-ready infrastructure, and a 99.99% SLA on the ALB. When you factor in the real cost — including your time for administration, downtime risk, and potential losses — the gap shrinks significantly.

EC2 autoscaling: how it works

Autoscaling is the main reason you migrate to AWS. Instead of paying for maximum capacity 24/7, you pay only for what you use.

An Auto Scaling Group (ASG) defines three numbers: the minimum (e.g. 2), the maximum (e.g. 8), and the desired count of EC2 instances. The ASG keeps the desired count and replaces instances that fail health checks.
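The bookkeeping is simple: whatever a scaling policy asks for, the ASG never runs outside its configured bounds. A minimal sketch of that invariant (an illustration, not the AWS API):

```python
def set_desired_capacity(desired: int, min_size: int = 2, max_size: int = 8) -> int:
    """Clamp a requested desired capacity to the ASG's [min, max] bounds,
    the way an Auto Scaling Group never runs outside its configured limits."""
    return max(min_size, min(desired, max_size))

print(set_desired_capacity(1))   # below min  -> 2
print(set_desired_capacity(5))   # in bounds  -> 5
print(set_desired_capacity(12))  # above max  -> 8
```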

Scaling policies define when and how to scale. There are two main approaches.

Target Tracking Scaling — the simplest and recommended. You set a target (e.g. “I want average CPU at 60%”) and AWS adjusts the number of instances automatically to keep that value. It works like a thermostat: if traffic rises and CPU goes above 60%, instances are added; if it drops, they are removed.
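The thermostat behaviour can be approximated with a proportional rule — desired capacity scales with the ratio of the actual metric to the target, clamped to the ASG bounds. This is a sketch of the idea, not AWS's real control loop:

```python
import math

def target_tracking(current_instances: int, actual_cpu: float,
                    target_cpu: float = 60.0,
                    min_size: int = 2, max_size: int = 8) -> int:
    """Approximate target tracking: scale capacity proportionally so the
    average metric moves back toward the target, clamped to ASG bounds."""
    desired = math.ceil(current_instances * actual_cpu / target_cpu)
    return max(min_size, min(desired, max_size))

print(target_tracking(2, 90.0))  # CPU well above target -> scale out to 3
print(target_tracking(4, 30.0))  # CPU well below target -> scale in to 2
```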

Step Scaling — for finer control. You define steps: if CPU goes over 70%, add 1 instance; if it goes over 90%, add 3. Useful when you have predictable traffic patterns and want different reactions.
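The steps above map directly to a lookup function. A sketch with the thresholds from the text (scale-in steps, omitted here, would be defined separately in a real policy):

```python
def step_scaling_adjustment(cpu: float) -> int:
    """Step scaling sketch matching the thresholds in the text:
    CPU over 90% adds 3 instances, over 70% adds 1, otherwise no change."""
    if cpu > 90.0:
        return 3
    if cpu > 70.0:
        return 1
    return 0

print(step_scaling_adjustment(95.0))  # -> add 3 instances
print(step_scaling_adjustment(75.0))  # -> add 1 instance
print(step_scaling_adjustment(50.0))  # -> no change
```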

Metrics to scale on. CPU is the most common but not always the best. For a Node.js app you can scale on: ALB Request Count per Target (how many requests each instance gets — ideal for web apps), CPU Utilization (good for compute-intensive workloads), or custom metrics via CloudWatch (e.g. job queue length, response latency).

Cooldown period — after the ASG adds an instance, it waits for an interval (default 300 seconds) before making another scaling decision. This prevents oscillation: you don’t want to add 10 instances in 10 seconds for a brief spike.
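The cooldown check itself is a one-liner: a scaling decision is allowed only once the window since the last scaling action has elapsed. A minimal sketch with the 300-second default:

```python
def should_scale(now: float, last_scaling_at: float,
                 cooldown: float = 300.0) -> bool:
    """Allow a new scaling action only after the cooldown window since
    the last one has elapsed, preventing oscillation on brief spikes."""
    return now - last_scaling_at >= cooldown

print(should_scale(now=100.0, last_scaling_at=0.0))  # 100s elapsed -> False
print(should_scale(now=400.0, last_scaling_at=0.0))  # 400s elapsed -> True
```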

Practical example: your app serves on average 500 req/s during the day and 50 req/s at night, with spikes of 2000 req/s during campaign launches. You configure the ASG with min 2, max 8 instances, and Target Tracking on ALBRequestCountPerTarget at 1000 requests per instance. At night you run on 2 instances (~$2/day). During the day the system keeps 2–3 instances. On a spike it scales to 4–5, then scales back. You pay only for the hours actually used.

What you don’t scale: the database and Redis

One important detail — RDS PostgreSQL and ElastiCache don’t scale horizontally as easily as EC2. The database can use Read Replicas to distribute reads, but writes stay on the primary instance. Redis on ElastiCache supports cluster mode with sharding, but that adds complexity.

Practical recommendation: size the database and Redis for peak from the start (or slightly above), and scale only the backend layer (EC2). It’s simpler, more predictable, and covers 90% of scenarios.

Concrete migration steps

The migration process involves several steps. First, deploy the React frontend to S3 with CloudFront. Then set up RDS PostgreSQL and migrate data with pg_dump and pg_restore. Next, set up ElastiCache Redis. Then create the AMI (Amazon Machine Image) with your Node.js app and configure the Launch Template and Auto Scaling Group. Put the ALB in front of the ASG. Last but not least, set up Route 53 for DNS, with a planned cutover from VPS to AWS.

Realistic time for a full migration: 2–4 weeks, including testing and a parallel run period.

When it’s worth it and when it isn’t

Migrate to AWS when: you need elastic scalability, clients require SLAs and compliance, you want high availability without manual effort, or the application is outgrowing a VPS.

Stay on a VPS when: traffic is predictable and modest, budget is the top priority, the team is small and has no AWS experience, or the app is an MVP in the validation phase.

There’s no one-size-fits-all answer. But if your application generates revenue and every hour of downtime means lost money, AWS isn’t just a cost — it’s an investment in stability.


Published on teninvent.ro — TEN INVENT S.R.L. provides AWS infrastructure consulting and implementation. Contact us for a free assessment of your cloud architecture.