Serverless on AWS: What It Means, How to Code It, and When It’s Worth It

You’ve heard the term “serverless” hundreds of times. And every time someone feels the need to clarify: “it doesn’t mean there are no servers.” Correct. The servers exist, but they’re no longer your problem. AWS runs them, scales them, patches them, and you pay only when your code actually runs. Zero traffic? Zero cost.

In previous articles we covered migrating a classic app (React + Node.js + PostgreSQL + Redis) to EC2 with Auto Scaling and zero-downtime deployment. Now we explore the radical alternative: what if you removed servers from the equation entirely?

What is serverless, concretely?

Serverless means you write functions (pieces of code) that run in response to events — an HTTP request, a file uploaded to S3, a message in an SQS queue, a change in the database. You don’t provision servers, you don’t configure Auto Scaling Groups, you don’t think about how much RAM an instance has.

On AWS, the serverless ecosystem is built from a few key services:

AWS Lambda — the compute engine. You write a function (Node.js, Python, Java, Go, etc.), upload it, and Lambda runs it every time it receives an event. It scales automatically from zero to thousands of concurrent executions.

API Gateway — the front door. It receives HTTP/HTTPS requests and routes them to Lambda functions. It handles authentication, rate limiting, and API versioning.

DynamoDB — the serverless database. NoSQL, fully managed, with automatic scaling and millisecond latency. You pay per request and per GB stored.

S3 + CloudFront — for the static frontend (React/Vue), same as in the classic architecture.

SQS / SNS / EventBridge — for asynchronous communication between functions.

What does Lambda function code look like?

Let’s start with a concrete example. You have an endpoint that returns a list of products from DynamoDB:

// handler.js
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient, ScanCommand } = require('@aws-sdk/lib-dynamodb');

// Created outside the handler so warm invocations reuse the same client
const client = new DynamoDBClient({});
const docClient = DynamoDBDocumentClient.from(client);

exports.getProducts = async (event) => {
  try {
    // Scan reads the whole table (paginated at 1 MB) — fine for a demo,
    // but prefer Query with a key condition at scale
    const result = await docClient.send(
      new ScanCommand({ TableName: 'Products' })
    );
    
    return {
      statusCode: 200,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(result.Items)
    };
  } catch (error) {
    console.error(error); // surfaces the failure in CloudWatch Logs
    return {
      statusCode: 500,
      body: JSON.stringify({ error: 'Internal Server Error' })
    };
  }
};

The function receives an event (which contains details about the HTTP request — path, headers, body, query parameters) and returns an object with statusCode, headers, and body. That’s it. No Express setup, no server listening on a port, no process management.
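For reference, here is roughly what that event looks like for an HTTP API using payload format 2.0 — the field values below are illustrative, and several fields are omitted for brevity:

```javascript
// Abridged sketch of the event object API Gateway (HTTP API, payload
// format 2.0) passes to the handler. Values are illustrative.
const event = {
  version: '2.0',
  routeKey: 'GET /products',
  rawPath: '/products',
  rawQueryString: 'category=books',
  headers: { accept: 'application/json' },
  queryStringParameters: { category: 'books' },
  requestContext: {
    http: { method: 'GET', path: '/products', sourceIp: '203.0.113.10' }
  },
  body: null,           // a string (possibly base64) for POST/PUT requests
  isBase64Encoded: false
};

console.log(event.requestContext.http.method, event.rawPath);
```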

The fundamental difference from a classic Node.js app: there is no long-running process. The function “wakes up” on each request, runs the code, and stops. AWS keeps a pool of containers ready to minimize latency (but on the first call after a period of inactivity you’ll see a “cold start” of 100–500ms).

How do you deploy?

Serverless deployment is radically different from EC2. You don’t update instances, there’s no rolling or blue/green deployment. You upload the code, and the new version is live.

Option 1: AWS SAM (Serverless Application Model) — the most popular framework. You define the infrastructure in a template.yaml file:

# template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Globals:
  Function:
    Runtime: nodejs20.x
    Timeout: 10
    MemorySize: 256

Resources:
  GetProductsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handler.getProducts
      Events:
        Api:
          Type: HttpApi
          Properties:
            Path: /products
            Method: get
      Policies:
        - DynamoDBReadPolicy:
            TableName: Products

  ProductsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: Products
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH

Deployment takes two commands:

sam build
sam deploy --guided

SAM packages the code, creates a CloudFormation stack, provisions Lambda, API Gateway, and DynamoDB — all from a single config file. To update? Change the code or template and run sam deploy again.

Option 2: Serverless Framework — a popular alternative with a rich plugin ecosystem. It uses serverless.yml instead of template.yaml, but the principle is the same.

Option 3: AWS CDK (Cloud Development Kit) — for those who prefer defining infrastructure in code (TypeScript, Python) instead of YAML. More verbose, but more flexible and easier to test.

CI/CD integration is straightforward. In GitHub Actions, add a step that runs sam build && sam deploy --no-confirm-changeset. Same in GitLab CI/CD. You no longer need CodeDeploy — SAM handles everything via CloudFormation.
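As a sketch, a minimal GitHub Actions workflow might look like this — the action versions, secret names, and region are assumptions, not prescriptions:

```yaml
# .github/workflows/deploy.yml — illustrative sketch
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/setup-sam@v2
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1
      - run: sam build
      - run: sam deploy --no-confirm-changeset --no-fail-on-empty-changeset
```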

The real advantages of serverless

Zero infrastructure management. No servers to patch, no OS to update, no Nginx config. The team focuses 100% on code.

Scale to zero. With EC2, even with Auto Scaling, you pay for at least 2 instances 24/7 (~$67/month). With Lambda, if no one hits the API, the cost is literally $0.

Pay per execution. Lambda costs $0.20 per million requests plus $0.0000166667 per GB-second of compute. The free tier includes 1 million requests and 400,000 GB-seconds per month — permanently, not just for 12 months.

Concrete cost example: an app with 5 million requests per month, each function with 256MB memory and 200ms average duration, costs about $4–5/month on Lambda. The same app on EC2 with ALB costs at least $90–100/month.
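The arithmetic behind that estimate can be checked in a few lines (prices as quoted above; the permanent free tier is ignored here, so this is the worst case):

```javascript
// Rough Lambda cost estimate for the example above, using the prices
// quoted in the article: $0.20 per million requests and $0.0000166667
// per GB-second. The free tier (1M requests, 400,000 GB-s) is ignored.
const requests = 5_000_000;
const memoryGB = 256 / 1024;   // 0.25 GB
const avgSeconds = 0.2;        // 200 ms average duration

const requestCost = (requests / 1_000_000) * 0.20;   // $1.00
const gbSeconds = requests * memoryGB * avgSeconds;  // 250,000 GB-s
const computeCost = gbSeconds * 0.0000166667;        // ~$4.17

const total = requestCost + computeCost;
console.log(total.toFixed(2)); // ≈ 5.17
```

With the free tier applied (1 million requests and 400,000 GB-seconds deducted), the compute is fully covered and the bill drops to roughly $0.80.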

Instant deploy. You upload the code, Lambda serves it immediately. No rolling update, no downtime. You can use Lambda aliases and versions for canary deployment if you want.
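A canary with aliases boils down to weighted routing between two published versions — here is an illustrative sketch with the AWS CLI (function name, alias name, version numbers, and the 10% weight are all assumptions):

```shell
# Publish the current code as an immutable version (returns e.g. version 3)
aws lambda publish-version --function-name get-products

# Keep the "live" alias pointed mostly at version 2, but send 10%
# of invocations to the freshly published version 3:
aws lambda update-alias \
  --function-name get-products \
  --name live \
  --function-version 2 \
  --routing-config '{"AdditionalVersionWeights":{"3":0.1}}'
```

If the new version looks healthy, shift the alias fully to it; if not, drop the routing config and traffic snaps back to the old version.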

Built-in resilience. Lambda runs automatically across multiple Availability Zones. If one zone fails, traffic goes to others. You don’t configure anything.

The downsides you need to know

Cold starts. When a function hasn’t been invoked recently, the first call can take an extra 100–500ms (or more for Java functions). Solution: Provisioned Concurrency (pre-warmed functions), but it costs extra.

15-minute limit. A Lambda function cannot run longer than 15 minutes. Long-running work must be split into smaller steps (Step Functions) or moved to ECS/EC2.

API Gateway timeout: 29 seconds. If the Lambda function doesn’t respond within 29 seconds through API Gateway, the request times out. For long operations, use the async pattern: the function starts processing, returns a job ID, and the client polls.

Vendor lock-in. Lambda code is tightly coupled to the AWS ecosystem. Migrating to another cloud provider requires significant refactoring. You can partly mitigate with frameworks like Serverless Framework that abstract the provider.

Harder debugging. You can’t SSH into a server to see what’s going on. You rely on CloudWatch Logs, X-Ray for tracing, and local testing with SAM CLI (sam local invoke).
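Local testing with the SAM CLI looks like this in practice (the function's logical ID and the event file path are illustrative; Docker must be running):

```shell
# Generate a sample API Gateway event, then run one function locally
# in a Docker container:
sam local generate-event apigateway aws-proxy > event.json
sam local invoke GetProductsFunction --event event.json

# Or emulate the whole API Gateway + Lambda stack on localhost:
sam local start-api
```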

Architectural complexity. An app with 50 Lambda functions, 10 SQS queues, and 5 DynamoDB tables can be harder to understand and debug than a single Node.js server.

Serverless vs. EC2: When to choose what?

Choose serverless when: traffic is variable or unpredictable (from 0 to massive spikes), you want fast time-to-market, the team is small and doesn’t want to manage infrastructure, the app naturally breaks down into independent functions, or the budget for low traffic must be minimized.

Choose EC2/containers when: you have steady, predictable traffic (24/7 at high capacity), the app has long-running processes (websockets, video processing), you need full control over the execution environment, or the existing app is a monolith that doesn’t split easily.

The hybrid approach is often the most pragmatic: frontend on CloudFront + S3, main API on Lambda + API Gateway, but heavy background work on ECS Fargate or EC2. PostgreSQL stays on RDS, Redis on ElastiCache — not everything has to be serverless.

From theory to practice

If you want to try serverless without committing to a full migration, start with a single endpoint. Create a Lambda function with SAM, connect it to API Gateway, and see how it behaves. The AWS free tier easily covers experimentation.

The most natural first step: move async processing to Lambda. Sending emails, generating PDFs, image processing, cron jobs — all are perfect candidates. The main backend can stay on EC2, and Lambda handles tasks that don’t need to respond synchronously.

Serverless isn’t “all or nothing.” It’s another tool in your toolbox — extremely powerful when used in the right context.


Published on teninvent.ro — TEN INVENT S.R.L. provides serverless AWS consulting and implementation. Contact us to evaluate whether serverless is right for your application.