How to calculate the billing savings of moving an EC2 app to Lambda


One of the biggest draws of migrating to a serverless architecture is the potential for a huge — sometimes order of magnitude — cost saving in your monthly AWS bill over a more traditionally architected application running on EC2 + ELB.

And while this potential is real, it may not hold for your app. So before you can come up with an estimate, you need to do your due diligence on how your app’s current usage levels would map onto the new pricing model.

But the pricing models for Lambda and API Gateway are very different from EC2’s, and getting a handle on which CloudWatch metrics you need to look at, and where to find them, can be confusing.

This article will give you a step-by-step guide on how to extract the right metrics from your existing CloudWatch repository and plug them into a formula to give you an estimate which you can confidently stand by.

Understanding the new pricing model

Before we analyse the existing usage levels of your app, we first need to identify what metrics we’re looking for.

The main units of charge for an EC2-based app are the number and size of your EC2 instances and Elastic Load Balancers, and whether they’re always on or spun up and down as required. In the new serverless world, these metrics no longer apply; they’re replaced by metrics which are tied much more closely to your app’s actual usage levels.

Looking at Lambda first, its units of charge are as follows:

  • Number of requests (function executions) per month — $0.20 per 1M requests, first 1M per month are free.
  • Request duration (seconds, billed in 100ms increments)
  • Request memory allocation (Gigabytes)

The last 2 metrics are multiplied to give a compute figure which is measured in GB-seconds per month. As of writing, the first 400,000 GB-seconds per month are free with a charge of $0.00001667 for every GB-second used thereafter. An important thing to note is that the selected memory allocation for a Lambda function also affects the CPU and network resources that AWS will allocate to it.
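
To make the GB-seconds unit concrete, here’s a rough back-of-envelope calculation you can run locally; the 10M requests, 512MB and 200ms figures are purely illustrative:

# Illustrative only: 10M invocations/month at 512MB, averaging 200ms each
awk 'BEGIN {
    requests   = 10000000        # invocations per month (example figure)
    memory_gb  = 512 / 1024      # 512MB expressed in GB
    duration_s = 0.2             # average billed duration in seconds
    gb_seconds = requests * memory_gb * duration_s
    billable   = gb_seconds - 400000             # subtract the monthly free tier
    if (billable < 0) billable = 0
    printf "GB-seconds: %d, compute cost: $%.2f\n", gb_seconds, billable * 0.00001667
}'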

If your Lambda functions are to be exposed over the web (either as a front-end or API), then API Gateway will be needed to trigger the function calls. It charges as follows:

  • Number of requests per month — $3.50 per million API calls received
  • Download bandwidth consumed (Gigabytes) — this is the size of the data which is sent back to the client. $0.09/GB for first 10TB

So based on the above units of charge for both services, here are the figures we need to extract from your app’s current usage:

  • Number of requests per month
  • Average request duration
  • Maximum request memory allocation
  • Download bandwidth

Gauging current usage levels in CloudWatch

Now that we know what metrics to look for, we’ll look at how to derive estimates for them using CloudWatch. We will use the get-metric-statistics command of the CloudWatch CLI to fetch the relevant statistics. You will need to install the AWS CLI on your machine to run the commands below.
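
As a quick sanity check before running the queries, you can ask CloudWatch which ELB metrics (and which load balancer names) it actually holds for your account; the namespace below assumes you’re using Classic ELBs:

# List the ELB metrics CloudWatch holds, along with their dimensions
aws cloudwatch list-metrics --namespace AWS/ELB

# Or narrow it down to a single metric
aws cloudwatch list-metrics --namespace AWS/ELB --metric-name RequestCount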

Number of requests

Run the following command in your terminal to get the total number of requests processed by all your ELBs over the last 30-day period. Note that you will need to change the --start-time and --end-time values below to the start and end dates of the period you wish to evaluate.

aws cloudwatch get-metric-statistics \
    --namespace AWS/ELB \
    --metric-name RequestCount --statistics "Sum" \
    --period 2592000 \
    --start-time 2018-06-01T00:00:00 \
    --end-time 2018-06-30T23:59:59

This will return JSON containing a single data point, which should look something like this:

{
    "Datapoints": [
        {
            "Timestamp": "2018-06-01T00:00:00Z",
            "Sum": 4213041.0,
            "Unit": "Count"
        }
    ],
    "Label": "RequestCount"
}

The value of the Sum field (4213041 in this example) is what we’re interested in.
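
If you’d rather skip reading the JSON by eye, the CLI’s --query option (JMESPath) can print just the Sum value; this is purely a convenience and returns the same number:

# Same query as above, but print only the Sum value
aws cloudwatch get-metric-statistics \
    --namespace AWS/ELB \
    --metric-name RequestCount --statistics "Sum" \
    --period 2592000 \
    --start-time 2018-06-01T00:00:00 \
    --end-time 2018-06-30T23:59:59 \
    --query 'Datapoints[0].Sum' --output text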

Request duration

To get the average request duration, we will use the Latency metric via the following command (again update the date range):

aws cloudwatch get-metric-statistics \
    --namespace AWS/ELB \
    --metric-name Latency --statistics "Average" \
    --period 2592000 \
    --start-time 2018-06-01T00:00:00 \
    --end-time 2018-06-30T23:59:59

The response should look like the following:

{
    "Datapoints": [
        {
            "Timestamp": "2018-06-01T00:00:00Z",
            "Average": 0.10936070027143541,
            "Unit": "Seconds"
        }
    ],
    "Label": "Latency"
}

The value of the Average field (0.109… in this example) is what we’re interested in.
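
The commands above assume a Classic ELB. If your app sits behind an Application Load Balancer instead, the metrics live in the AWS/ApplicationELB namespace and the latency metric is called TargetResponseTime (RequestCount works the same way). The dimension value below is a placeholder for your ALB’s name and ID, i.e. the part of its ARN after loadbalancer/:

# ALB equivalent of the Latency query (placeholder dimension value)
aws cloudwatch get-metric-statistics \
    --namespace AWS/ApplicationELB \
    --metric-name TargetResponseTime --statistics "Average" \
    --dimensions Name=LoadBalancer,Value=app/YOUR_ALB_NAME/YOUR_ALB_ID \
    --period 2592000 \
    --start-time 2018-06-01T00:00:00 \
    --end-time 2018-06-30T23:59:59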

Memory allocation

This metric is the hardest to gauge because, as previously mentioned, memory isn’t the only consideration here: CPU also needs to be taken into account. From the Lambda docs:

AWS Lambda allocates CPU power proportional to the memory by using the same ratio as a general purpose Amazon EC2 instance type, such as an M3 type. For example, if you allocate 256 MB memory, your Lambda function will receive twice the CPU share than if you allocated only 128 MB (the minimum). You can update the configuration and request additional memory in 64 MB increments from 128MB to 3008 MB.

Unfortunately, there are no built-in CloudWatch metrics which will help us here based on your existing EC2 app, so you will need to examine your existing EC2 instance sizes and make a best guess as to how much to allocate. For costing purposes, it’s better to go with a larger, more conservative estimate. Allocating extra memory also helps functions to complete faster, thus reducing costs on the duration side.
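
One way to pull up the instance types your app currently runs on, so you can look up their memory and vCPU specs, is the describe-instances command; this is just a starting point, and the filter simply restricts the output to running instances:

# List the instance types currently running in this account/region
aws ec2 describe-instances \
    --filters "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].[InstanceId,InstanceType]' \
    --output table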

For the purpose of our calculation, we will assume a memory allocation of 1024MB.

Download bandwidth

This metric is the amount of data that your app will send to clients over the course of 1 month. Whilst not an exact substitute, the EstimatedProcessedBytes ELB metric is the best pointer to this as it measures the total amount of data in both requests received and responses sent that were processed by the load balancer. So this estimate should come in on the high side.

To fetch this metric from CloudWatch, run the following in your terminal, replacing YOUR_ELB_NAME with the name of your load balancer:

aws cloudwatch get-metric-statistics \
    --namespace AWS/ELB \
    --metric-name EstimatedProcessedBytes --statistics "Sum" \
    --dimensions Name=LoadBalancerName,Value=YOUR_ELB_NAME \
    --period 2592000 \
    --start-time 2018-06-01T00:00:00 \
    --end-time 2018-06-30T23:59:59

The response should look like this:

{
    "Datapoints": [
        {
            "Timestamp": "2018-06-01T00:00:00Z",
            "Sum": 104507690620.0,
            "Unit": "Bytes"
        }
    ],
    "Label": "EstimatedProcessedBytes"
}

The value of the Sum field (104507690620 in this example) is what we’re interested in.

If you have multiple ELBs you will need to run this command for each one (changing the LoadBalancerName value) and add up the Sum values to get a total.
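
If you have more than a couple of load balancers, a small shell loop can do the adding up for you; the ELB names below are placeholders for your own:

# Sum EstimatedProcessedBytes across several ELBs (replace the names with yours)
total=0
for elb in my-elb-1 my-elb-2; do
    bytes=$(aws cloudwatch get-metric-statistics \
        --namespace AWS/ELB \
        --metric-name EstimatedProcessedBytes --statistics "Sum" \
        --dimensions Name=LoadBalancerName,Value="$elb" \
        --period 2592000 \
        --start-time 2018-06-01T00:00:00 \
        --end-time 2018-06-30T23:59:59 \
        --query 'Datapoints[0].Sum' --output text)
    # Shell arithmetic can't handle the decimal point, so use awk to add
    total=$(awk -v a="$total" -v b="$bytes" 'BEGIN { print a + b }')
done
echo "Total bytes across all ELBs: $total"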

Coming up with the final figure

Let’s look at all the metric values we’ve captured:

Metric                             Value
Number of Requests                 4,213,041
Average Request Duration (secs)    0.2
Memory allocation (GB)             1.024
Download bandwidth (GB)            104.51

I’ve converted the data values to gigabyte units and rounded the Average Request Duration up from 0.11 to 0.2, since Lambda duration is billed in 100ms increments.
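
For reference, here’s that conversion step as a runnable snippet using the raw figures captured above; it turns bytes into decimal gigabytes and rounds the average latency up to the next 100ms billing increment:

# Convert the raw CloudWatch figures into the units used in the formulas below
awk 'BEGIN {
    bytes   = 104507690620          # Sum of EstimatedProcessedBytes
    latency = 0.10936070027143541   # Average ELB Latency in seconds
    gb = bytes / 1000000000         # decimal gigabytes
    billed = int(latency * 10) / 10 # round up to the next 0.1s
    if (billed < latency) billed += 0.1
    printf "Download bandwidth: %.2f GB, billed duration: %.1f secs\n", gb, billed
}'

We can now plug these values into the formulas below to come up with estimated totals for each service: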

Lambda estimated cost

lambda_request_cost = (number_of_requests - free_requests) / 1000000 * 0.20
                    = (4213041 - 1000000) / 1000000 * 0.20
                    = $0.64

lambda_compute_cost = MAX((number_of_requests * memory_allocation
                * average_request_duration) - free_compute, 0) * 0.00001667
                    = MAX((4213041 * 1.024 * 0.2) - 400000, 0) * 0.00001667
                    = $7.72

lambda_total_cost   = lambda_request_cost + lambda_compute_cost
                    = $8.36

API Gateway estimated cost

api_request_cost    = number_of_requests / 1000000 * 3.50
                    = 4213041 / 1000000 * 3.50
                    = $14.75

api_bandwidth_cost  = download_bandwidth * 0.09
                    = 104.51 * 0.09
                    = $9.41

api_total_cost      = api_request_cost + api_bandwidth_cost
                    = $24.15

This gives an estimated total monthly cost of $32.51.
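
If you want to rerun the sums with your own numbers, here’s the whole calculation as a small script; the four variables at the top are the figures gathered above and should be the only things you need to change:

# Plug your own figures into the four variables at the top
awk 'BEGIN {
    requests  = 4213041    # requests per month
    duration  = 0.2        # billed duration per request, in seconds
    memory_gb = 1.024      # Lambda memory allocation, in GB
    bandwidth = 104.51     # download bandwidth, in GB

    lambda_request_cost = (requests - 1000000) / 1000000 * 0.20
    if (lambda_request_cost < 0) lambda_request_cost = 0

    gb_seconds = requests * memory_gb * duration
    billable   = gb_seconds - 400000
    if (billable < 0) billable = 0
    lambda_compute_cost = billable * 0.00001667

    api_request_cost   = requests / 1000000 * 3.50
    api_bandwidth_cost = bandwidth * 0.09

    printf "Lambda:      $%.2f\n", lambda_request_cost + lambda_compute_cost
    printf "API Gateway: $%.2f\n", api_request_cost + api_bandwidth_cost
    printf "Total:       $%.2f\n", lambda_request_cost + lambda_compute_cost + api_request_cost + api_bandwidth_cost
}'

Running it with the figures above should print the same $8.36, $24.15 and $32.51 totals as the formulas.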

What’s next?

Now you have a figure for your estimated monthly bill when running your app in Lambda and API Gateway. Unfortunately, the AWS bill isn’t the only factor in the total cost of ownership of migrating an app to a serverless architecture.

The next step is to analyse your codebase to see what effort is involved in re-architecting it to work inside Lambda functions. You will also need to consider the cost of upskilling required for your devs and ops engineers to work in this new serverless world.

Both of these tasks will require a non-trivial amount of time to research. But now that you’re armed with a concrete figure for savings on your bill, you can bring this to your boss to justify asking for time to investigate in more depth.
