How to migrate your Node.js Lambda functions to AWS SDK v3

In my experience with writing Lambda functions, the AWS SDK is often the largest dependency and certainly the most commonly imported one in the apps that I build. The larger a package is, the longer it will take for the Lambda service to load it into memory and thus the slower “cold starts” will be.

Because of this, many people don’t include the AWS SDK in their deployment artifact, as there is a version of the SDK automatically available in the Lambda runtime environment. However, I have been bitten in the past by a new SDK function that worked on my machine but not when deployed to Lambda, so I prefer the certainty of running my production code against a fixed version and have always bundled the SDK, despite it adding to the package size.

So the news that the new modular v3 version of the SDK is now generally available is very welcome. It means we can now get the best of both worlds — a small package size (and thus faster cold starts) and a fixed dependency version.

In this post, I’ll give you a step-by-step migration plan to switch an existing Lambda project to use v3 of the AWS SDK.

The 2 size reduction factors of SDK v3

There are two main reasons why the new SDK enables you to deploy smaller bundle sizes:

  1. You only require (or import) the module for the specific service client you need, rather than importing the entire aws-sdk module.
  2. The API design for v3 is still broadly equivalent to v2, but it also allows developers to invoke specific operations using a new command-style syntax via a generic send function. This means that you only need to import the code for the specific operation you’re going to call, which makes it easy for code bundlers to use a technique called tree shaking to include only the code needed for a single client operation. And when you’re building single-purpose Lambda functions, more often than not you only ever need to call one operation. Example from the SDK docs:
const { DynamoDBClient, ListTablesCommand } = require("@aws-sdk/client-dynamodb");

(async () => {
  const client = new DynamoDBClient({ region: "us-west-2" });
  const command = new ListTablesCommand({});
  try {
    const results = await client.send(command);
    console.log(results.TableNames.join("\n"));
  } catch (err) {
    console.error(err);
  }
})();

Two-phased migration approach

To test out the new SDK, I decided to upgrade the source of the sample app from my Serverless Testing Workshop. It uses the Serverless Framework, and previously used v2 of the SDK and webpack for bundling (although I will be switching in esbuild in place of webpack very shortly; stay tuned for a blog post!).

I split my migration into 2 phases based on the 2 factors listed in the previous section:

  • Phase 1: Uninstall v2, install v3 and aim to leave individual client API calls as-is as much as possible
  • Phase 2: Gradually replace individual API calls with the new command-style syntax

In this post, I’ll cover the phase 1 steps, with the phase 2 steps following in a later article. If you want to view the full diff of my changes for phase 1 of the migration, see this PR.

💡 Tip: Before starting, you might want to take a baseline of your existing deployment package sizes. If you’re using the Serverless Framework, you can run the sls package command and then list the zip files in the output folder with their file sizes using ls -l ./.serverless | grep .zip.

Note that all the following steps were required for v3.0.0 of the SDK. Some of the issues I found were limitations that may be addressed in future versions.

Step 1: Uninstall the v2 SDK and install new client packages

We’re going to uninstall the existing v2 SDK, install the required v3 SDK modules and then make the minimum amount of code changes necessary to make your code work and be functionally equivalent. We’re not taking advantage of the per-operation command-style API at this stage. Most of the existing API call styles will work with a few exceptions that I’ve tried to capture below.

Start by deleting the aws-sdk:

npm uninstall aws-sdk --save

💡Tip: If you’re using eslint with the import/no-extraneous-dependencies rule enabled, use the errors reported by eslint to identify everywhere you need to fix. You can use a similar approach with tsc for TypeScript projects.

Fix your imports/requires

Then find all references to where your code requires or imports the aws-sdk module. In each file, replace this with the more granular v3 client import. This official AWS blog post gives the following example for the S3 client:

// BEFORE (v2)
const AWS = require("aws-sdk");

const s3Client = new AWS.S3({});
await s3Client.createBucket(params);
// AFTER (v3)
const { S3 } = require("@aws-sdk/client-s3");

const s3Client = new S3({});
await s3Client.createBucket(params);

💡Tip: If your existing code previously imported a subfolder of the aws-sdk v2 package, you can use a global find-replace: find 'aws-sdk/clients/ (including the opening quote) and replace it with '@aws-sdk/client-

After this, you’ll also need to ensure that you use braces to import the named export rather than the default module export.

Bye-bye .promise()

v2 was designed around the then-common callback style, so anyone who wanted to use the much more developer-friendly promise or async/await style had to append the verbose .promise() to every call. I wonder how many hours were spent finding bugs where devs forgot to add this?! Thankfully, these are gone in v3.

To fix this, simply use a global find-replace again to find all .promise() calls and replace with an empty string.
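The find-replace amounts to this transformation, shown here as a one-line codemod sketch applied to a sample v2-style source line:

```javascript
// Sketch: what the global find-replace does to a single v2-style call
const v2Line = "const data = await ddb.get(params).promise();";
// Strip every literal `.promise()` occurrence, leaving the bare awaited call
const v3Line = v2Line.replace(/\.promise\(\)/g, "");
console.log(v3Line); // "const data = await ddb.get(params);"
```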

No DynamoDB DocumentClient

This is the biggest bummer that I found in the v3 client modules and required the most time for me to fix. As of this writing, v3.0.0 of the DynamoDB client module does not have a DocumentClient class. This means that rather than being able to read and write plain JavaScript objects from your DDB table using get and put functions, you now need to use the more primitive getItem and putItem functions, which rely on you doing your own marshalling to/from DynamoDB’s native JSON format.

To work around this, I did the following:

  1. Install the @aws-sdk/util-dynamodb utility module, and import the marshall and unmarshall functions in modules where you read or write from DynamoDB (you can use require in place of import if you’re using plain JavaScript CommonJS):

import { marshall, unmarshall } from '@aws-sdk/util-dynamodb';
import { DynamoDB } from '@aws-sdk/client-dynamodb';

const ddb = new DynamoDB({});

  2. Rewrite put calls from:

await ddb.put({
  TableName: ddbConfig.clubsTable,
  Item: myBizObject,
});

to:

await ddb.putItem({
  TableName: ddbConfig.clubsTable,
  Item: marshall(myBizObject),
});

  3. Rewrite get calls from:

const response = await ddb.get({
  TableName: ddbConfig.clubsTable,
  Key: {
    id: clubId,
  },
});
return response?.Item;

to:

const response = await ddb.getItem({
  TableName: ddbConfig.clubsTable,
  Key: marshall({
    id: clubId,
  }),
});
return response.Item ? unmarshall(response.Item) : undefined;

You will need to make similar modifications to other DDB operations such as query, transactWrite, delete, update, etc.
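To see what the marshalling actually does, here is a hand-rolled sketch covering flat objects only (the real @aws-sdk/util-dynamodb helpers also handle nested maps, lists, sets, and binary types):

```javascript
// Simplified sketch of marshalling a flat object to DynamoDB's native JSON format
function marshall(obj) {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    if (typeof value === "string") out[key] = { S: value };
    else if (typeof value === "number") out[key] = { N: String(value) };
    else if (typeof value === "boolean") out[key] = { BOOL: value };
  }
  return out;
}

// ...and converting the native format back to a plain JavaScript object
function unmarshall(item) {
  const out = {};
  for (const [key, value] of Object.entries(item)) {
    if ("S" in value) out[key] = value.S;
    else if ("N" in value) out[key] = Number(value.N);
    else if ("BOOL" in value) out[key] = value.BOOL;
  }
  return out;
}

const item = marshall({ id: "club-1", memberCount: 42 });
console.log(item); // { id: { S: 'club-1' }, memberCount: { N: '42' } }
console.log(unmarshall(item)); // { id: 'club-1', memberCount: 42 }
```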

Annoyingly for TypeScript users, if any of the fields on your business object use an enum type, you’ll get an error and you’ll have to resort to marshall(myBizObject as any).

All this marshalling is a PITA that the vast majority of devs shouldn’t have to worry about, so hopefully we get DocumentClient or an equivalent added to the v3 SDK soon.

Lambda client

Moving away now from DynamoDB and onto the Lambda client. I’m not sure if this is just a TypeScript mistyping issue or a breaking runtime change, but the Payload field in the request and response of the invoke method is now typed as a Uint8Array, so it needs to be encoded/decoded like so:

import { Lambda } from '@aws-sdk/client-lambda';

const lambdaClient = new Lambda({});
const result = await lambdaClient.invoke({
  FunctionName: this.config.lambdaFunctionName,
  InvocationType: 'RequestResponse',
  Payload: new TextEncoder().encode(JSON.stringify(myInputPayloadObject)),
});
const responseObject = JSON.parse(new TextDecoder('utf-8').decode(result.Payload) || '{}');
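Since no AWS call is needed to verify the encoding itself, here’s a standalone roundtrip of the same encode/decode steps using only Node’s built-in TextEncoder and TextDecoder:

```javascript
// Roundtrip of the Uint8Array encoding/decoding that the v3 invoke call expects
const inputPayload = { clubId: "club-1" }; // stand-in for a real invocation payload
const encoded = new TextEncoder().encode(JSON.stringify(inputPayload));
console.log(encoded instanceof Uint8Array); // true

// The `|| '{}'` guard mirrors the invoke example: an empty Payload decodes to {}
const decoded = JSON.parse(new TextDecoder("utf-8").decode(encoded) || "{}");
console.log(decoded.clubId); // "club-1"
```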


Cognito client

The CognitoIdentityServiceProvider client has been renamed to CognitoIdentityProvider.

Unit test mocks

If you use the Jest test framework to mock out AWS SDK clients in your unit tests, you will need to change the way you intercept the constructor and return stubbed responses for client operations. The following example shows how your mocking code would change in v3 for the Simple Email Service (SES) client:

// BEFORE (v2) -> Implement the `promise` field
import 'aws-sdk/clients/ses';

const sendEmail = jest.fn().mockImplementation(() => {
  return {
    promise: () => Promise.resolve({ MessageId: uuid() }),
  };
});

jest.mock('aws-sdk/clients/ses', () => jest.fn(() => {
  return { sendEmail };
}));

// AFTER (v3) -> Implement the function itself
import '@aws-sdk/client-ses';

const sendEmail = jest.fn().mockImplementation(() => {
  return Promise.resolve({ MessageId: uuid() });
});

jest.mock('@aws-sdk/client-ses', () => {
  return {
    SES: jest.fn().mockImplementation(() => {
      return { sendEmail };
    }),
  };
});

TypeScript type definition changes

To reference the type used by a v2 client method call in a request or response, you typically needed to reference the parent client as the namespace. This is no longer needed. For example, the type EventBridge.PutEventsRequest now becomes PutEventsRequest with the following import statement:

import type { PutEventsRequest } from '@aws-sdk/client-eventbridge';

In addition to this, several previously non-nullable fields in SDK types are now optional, and so your code should do null checks.
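For example, a field you previously used directly may now come back undefined, so guard it before use (hypothetical response object shown in place of a real SDK call):

```javascript
// Guarding a response field that v3 types as optional
const response = {}; // stand-in for a v3 response where TableNames came back undefined
const tableNames = response.TableNames ?? []; // default to an empty list
console.log(tableNames.length); // 0
```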

Also ensure that you have "skipLibCheck": true in the compilerOptions of your tsconfig.json file, so that tsc doesn’t report errors originating inside the SDK packages’ own type definition files.

Compile, Lint and Test!

TypeScript isn’t for everyone but it really helps with a migration such as this to find out where the breaking changes are. Run tsc and ensure everything is now working.

Also having a comprehensive test suite (unit, integration and E2E tests) allows you to deploy a large cross-cutting migration such as this with much more confidence. Make sure to redeploy all your Lambdas and run all your tests to verify nothing has broken before merging any changes.

BTW, if you want to get better at testing your serverless apps, you might be interested in the next run of my Serverless Testing Workshop 😉.

What’s next?

In this post, I’ve covered how to migrate to the AWS SDK v3 while staying close to the v2 API interfaces. You could probably stop here and call it a day. Maybe you’re happy enough with the bundle savings you’ve got or you don’t like the greater verbosity of the new command-style API. That’s cool.

But if you want to go further, in my upcoming posts I’ll show you how to use the new command-style API that v3 brings and how you can use esbuild to bundle your Lambdas in super-fast time. You can add your email address to my mailing list to be notified when these posts go live.


