
How to continually integrate WIP features for your serverless app

Wikipedia defines Continuous Integration as “the practice of merging all developers’ working copies to a shared mainline several times a day.”

Often “practising CI” is incorrectly assumed to mean that a team is using a CI server to perform integration checks when code is merged to main/master. However, that CI server may only be testing feature branches that were worked on in isolation for several days or weeks before merging, which doesn’t fit the definition above.

A common objection from developers to merging to main multiple times per day is that the feature they’re working on is a big one and will take several days to implement. They don’t want to push something half-done that will either break something or look bad to users.

However, there are some small batch checkpoints that you can use to merge work-in-progress (WIP) features in a safe way. I’ll look at them here from the perspective of a developer building a serverless API on AWS.

The key thing to remember with all the techniques below is that the main branch is always in a deployable state. Assume it will (or could) be deployed to production at any time.

Spec test cases without implementing the test body

The TDD workflow of Red, Green, Refactor involves writing a failing test (Red), implementing the application code required to make it pass (Green) and then refactoring the application code to make it better.

I sometimes add a preliminary “Spec” step before Red, which involves simply writing the name of each test case without an implementation body. The Jest framework provides the it.todo construct for doing this, where test cases marked as “todo” are reported separately from passes and failures. By doing this, I can list out upfront all the behaviours (happy path and edge cases) that I plan to handle. This list may change as I start building out the feature and think of more edge cases to test. That’s fine; it’s just a starting point.

When building an API, these test cases are typically integration/E2E style tests that will invoke an API endpoint and verify that a certain response is returned.
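As a sketch, a spec-only test file for a hypothetical POST /orders endpoint might look like this (the endpoint and case names are just examples):

```ts
// create-order.test.ts — spec-only test cases using Jest's it.todo.
// There are no test bodies yet; Jest reports these as "todo" rather than passed or failed.
describe('POST /orders', () => {
  it.todo('returns 201 and the created order for a valid request');
  it.todo('returns 400 when required fields are missing from the request body');
  it.todo('returns 401 when no auth token is provided');
  it.todo('returns 409 when an order with the same idempotency key already exists');
});
```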

Once I have my list of spec-only test cases, I can merge them into main, optionally getting them reviewed by a team member first (e.g. so that they can highlight other edge cases I need to consider).

Document early

Similar to TDD, there is a concept of Documentation-driven development. Now this may not be relevant to the feature you’re building, but if your feature requires documentation, say a high-level Mermaid sequence diagram to show how a complex async event flow will work, it can be useful to create this early and get it reviewed and merged before starting implementation.
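For example, a rough Mermaid sequence diagram for a hypothetical async order flow could be sketched, reviewed and merged before any code exists:

```mermaid
sequenceDiagram
    participant Client
    participant API as API Gateway (Lambda)
    participant DDB as DynamoDB
    participant Stream as Stream handler Lambda
    Client->>API: POST /orders
    API->>DDB: PutItem (order)
    API-->>Client: 201 Created
    DDB-->>Stream: DynamoDB stream event (INSERT)
    Stream->>Stream: Send confirmation email, publish events, etc.
```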

Provision new infrastructure first

If your feature requires new cloud resources, say a new DynamoDB table or an S3 bucket, you can go ahead and define the CloudFormation/IaC for these before writing any test or application code. There is no code yet that talks to these resources, and an unused table or bucket costs little or nothing, so there’s minimal risk here. By merging, you’ll verify that your IaC is valid across all the environments your pipeline deploys to.
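As a minimal sketch, assuming a CloudFormation-based setup (e.g. a Serverless Framework or SAM template), a new on-demand DynamoDB table could look like this (the table and attribute names are hypothetical):

```yaml
Resources:
  OrdersTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST # on-demand, so an unused table incurs no cost
      AttributeDefinitions:
        - AttributeName: pk
          AttributeType: S
      KeySchema:
        - AttributeName: pk
          KeyType: HASH
```

Once merged, every deploy in your pipeline proves the template is valid in each environment, long before any handler code touches the table.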

Implement downstream components without hooking them up

If your use case involves triggering an async event flow, say a Step Functions standard workflow or a stream or queue handler, you could implement these without hooking them up to your API. For example, you may be building an API Gateway handler which writes to DynamoDB, which in turn triggers a stream handler that does a load of other stuff. You could start by forgetting about the API Gateway Lambda handler for now and instead implementing the stream handler. Then write integration tests that verify the handler function behaves as expected. You can safely merge this knowing that it’s not yet hooked up to your API.
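A rough sketch of what that could look like for a DynamoDB stream handler (all names are hypothetical):

```ts
// stream-handler.ts: processes new orders from the DynamoDB stream.
// Nothing writes to the table yet, so deploying this is safe.
import { DynamoDBStreamEvent } from 'aws-lambda';

export const handler = async (event: DynamoDBStreamEvent): Promise<void> => {
  for (const record of event.Records) {
    if (record.eventName !== 'INSERT') {
      continue;
    }
    const newOrder = record.dynamodb?.NewImage;
    // ...the "load of other stuff" goes here (send emails, publish events, etc.)
    console.log('Processing new order', JSON.stringify(newOrder));
  }
};
```

Integration tests can then invoke handler directly with a sample stream event (or write an item to the new table and assert on the side effects), all before the API route that will eventually feed it exists.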

Feature flags/toggles

Once you’ve partially implemented your feature, you may wish to ensure that it isn’t activated in production or seen by real users until it’s complete or has had some form of manual sign-off. A technique for doing this is a feature flag: a configuration setting that keeps the feature’s code inactive unless an environment-level or request-level setting is provided.

If you’re building the frontend, this can be important for hiding new screens or UI components from users. If you’re building the backend API, you could use it to turn off specific routes unless an environment variable is set or a specific feature-flag HTTP header is provided by the client.
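A minimal sketch of that backend variant (the flag name, header and fallback behaviour are illustrative, not a prescribed pattern):

```ts
// create-order.ts: a WIP route handler guarded by a simple feature flag.
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

const featureEnabled = (event: APIGatewayProxyEvent): boolean =>
  process.env.FEATURE_NEW_ORDERS === 'true' ||
  // Assumes a lowercased header name; adjust for how your API Gateway passes headers.
  event.headers['x-feature-new-orders'] === 'true';

export const handler = async (
  event: APIGatewayProxyEvent,
): Promise<APIGatewayProxyResult> => {
  if (!featureEnabled(event)) {
    // Behave as if the route doesn't exist yet.
    return { statusCode: 404, body: JSON.stringify({ message: 'Not found' }) };
  }
  // ...the real (work-in-progress) implementation goes here
  return { statusCode: 201, body: JSON.stringify({ message: 'Order created' }) };
};
```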

The disadvantage of feature flags is that they add extra conditional code, which can be a little messy and which you need to remember to remove once the feature is complete. I generally find that feature flags are great for hiding an in-progress feature from production users on the frontend, but when building a new backend API route for a feature, it’s generally fine to merge the WIP route without using a backend feature flag. YMMV.

