What’s the big problem with having a large number of Lambda functions in your application?


Before I get into today’s topic, 2 quick items:

  • Please vote in my Twitter poll on how you would describe a specific type of serverless architecture. If you could retweet it too, that would be much appreciated.
  • If you’re interested in learning more about DynamoDB data modelling, Alex DeBrie is going to release 2 preview chapters of his upcoming DynamoDB book to his waitlist next week. Get on that list here.

“Trying to make an entire application, like a Basecamp or Shopify or GitHub or just about anything, out of serverless takes all the bad ideas of microservices and says “BUT WHAT IF WE JUST DID MORE OF THEM”. Profoundly misguided.”

This is a recent tweet from David Heinemeier Hansson (of Ruby on Rails and bootstrapping/Basecamp fame).

What I suspect DHH is getting at here is that building an entire application like one of these with FaaS would mean decomposing it into hundreds of functions (along with other supporting cloud services). And yes, that’s probably true.

But why does he think this is a bad idea for the long-term maintainability of a software system?

Is it the codebase complexity?

Is there a cognitive burden on the developer to understand how all the functions in a serverless application fit together?

To me, no more so than in any large server-based monolithic application. Both types of application mostly share the same concerns: specifying API endpoint paths, adding route handlers, business logic, data access, utility libraries, background jobs, etc.

There are a few implementation differences, e.g. in Rails you wire up new API routes to their handlers with a few lines of Ruby, whereas in serverless apps it’s typically done with a few lines of YAML. But generally speaking, a well-designed folder structure, coupled with a good understanding within the development team of separation of concerns, helps to mitigate codebase complexity issues as an application grows. And keeping everything in a monorepo helps in this respect too.
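For illustration, here’s a minimal sketch of that route wiring in a Serverless Framework serverless.yml file (the function name and handler path are made up):

```yaml
# Hypothetical route wiring in a serverless.yml file: one API Gateway
# route mapped to one Lambda handler.
functions:
  getOrder:
    handler: src/handlers/orders.get   # made-up handler module and export
    events:
      - http:
          path: /orders/{id}
          method: get
```

The Rails equivalent is a line in config/routes.rb plus a controller action. The shape of the change is much the same; it’s just expressed as configuration rather than code.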

Where opinionated frameworks like Rails did a great job was in putting a standard structure in place across the entire community so that each of these concerns has a familiar place. The Architect framework comes closest to this in the serverless world, but I definitely think it’s something that we could do better.

Is it the deployment overhead?

Are serverless apps harder to deploy?

Hundreds of Lambda Functions !== Hundreds of Deployment Artifacts.

Well, technically, it could actually mean this given that individual Lambda functions can be deployed independently. But in real-world CI/CD workflows, groups of Lambda functions and their associated configuration are often deployed together as a single “service” (or “microservice” if you prefer). You can even group the entirety of your Lambda functions into 1 or 2 services if you don’t want to follow the “micro” part. This is a totally valid approach for some types of application and is often the way I start new projects.

Tools such as the Serverless Framework allow you to run a single CLI command (sls deploy) that will automatically bundle the code and configuration for all your functions in a service and deploy them for you. Its use of CloudFormation behind the scenes allows this deployment to be treated as a transaction that will roll all functions back to their previous versions if deployment of a single resource fails.
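As a rough sketch (the service, function and topic names here are hypothetical), a single service might group an API-facing function with a background worker, all deployed by that one command:

```yaml
# Hypothetical service grouping an API-facing function with a background
# worker. A single `sls deploy` packages and deploys both of them together
# as one CloudFormation stack, which rolls back as a unit if anything fails.
service: orders-service

provider:
  name: aws
  runtime: nodejs12.x

functions:
  createOrder:
    handler: src/handlers/create.handler
    events:
      - http:
          path: /orders
          method: post
  fulfilOrder:
    handler: src/handlers/fulfil.handler
    events:
      - sns: order-created   # topic created by the framework for this sketch
```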

To finish on this point, in my experience having hundreds of Lambda functions is not a significant overhead at deploy time, as existing tools have already solved this problem. In fact, I would argue that having Infrastructure-as-Code baked into serverless frameworks from the start makes deployment of serverless apps easier than that of their server-based counterparts.

Is it the debugging and monitoring overhead?

This concern has a bit more weight.

The asynchronous workflows brought by the event-driven model of serverless applications mean that debugging a particular transaction can involve looking in multiple places to find the associated logs. There’s the Lambda function that handles the initial API request from the user and then there are the downstream functions that could be triggered from SNS topics, SQS queues or a DynamoDB stream.

There are correlation strategies to mitigate this concern, but it’s definitely a bit more work than debugging a monolithic app, which has centralised logs by default.
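To make that concrete, here’s a hedged sketch in TypeScript (with made-up function, topic and field names) of one such strategy: the API-facing function picks up the caller’s correlation ID (or falls back to the API Gateway request ID), attaches it to the SNS message it publishes, and the downstream function reads it back out, so both functions include it in every log line.

```typescript
// Hypothetical sketch of propagating a correlation ID between two Lambda
// functions so their log entries can be joined up during debugging.
import { SNS } from 'aws-sdk';
import { APIGatewayProxyHandler, SNSHandler } from 'aws-lambda';

const sns = new SNS();

// API-facing function: reuse the caller's correlation ID if one was sent,
// otherwise fall back to the API Gateway request ID, then attach it to the
// outgoing SNS message as a message attribute.
export const createOrder: APIGatewayProxyHandler = async (event) => {
  const correlationId =
    event.headers['x-correlation-id'] || event.requestContext.requestId;

  console.log(JSON.stringify({ correlationId, message: 'Order received' }));

  await sns
    .publish({
      TopicArn: process.env.ORDER_CREATED_TOPIC_ARN!, // hypothetical topic
      Message: JSON.stringify({ orderId: 'ord_123' }),
      MessageAttributes: {
        correlationId: { DataType: 'String', StringValue: correlationId },
      },
    })
    .promise();

  return { statusCode: 202, body: JSON.stringify({ correlationId }) };
};

// Downstream function: pull the correlation ID back off the SNS record and
// include it in every log line, so these logs can be tied to the API request.
export const fulfilOrder: SNSHandler = async (event) => {
  for (const record of event.Records) {
    const correlationId =
      record.Sns.MessageAttributes.correlationId?.Value || 'unknown';
    console.log(JSON.stringify({ correlationId, message: 'Fulfilling order' }));
  }
};
```

Tracing services such as AWS X-Ray, as well as several third-party observability tools, can take a lot of this plumbing off your hands.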

However, traditional server-level monitoring concerns, such as tracking memory, CPU usage and disk space, go away with serverless. This goes some way to evening things up on this score.

So how do you measure the complexity and maintainability of a large FaaS application?

To conclude, I don’t think it’s fair to use the raw number of Lambda functions in a system as a proxy for maintainability.

So what’s a better alternative? I don’t yet have a sufficiently specific answer to this, although I suspect service boundaries play a major role. I do know an over-engineered mess when I see it, though!

But whether you’re building a system with serverless or containers, using microservices, a monolith or something in between, these fundamentals will never change:

  • Make sure your app does what the users need it to do
  • Keep your concerns sufficiently separated within your codebase
  • KISS (Keep it simple, stupid) — Make it as easy as possible for all engineers on your team to be able to understand what’s going on in a particular part of the system.

— Paul

