Why I switched from AWS CodePipeline to GitHub Actions

AWS · DevOps

For my first few years building serverless applications on AWS, I used AWS CodePipeline coupled with AWS CodeBuild for my CI/CD pipelines. These services were hosted inside AWS where all my infrastructure lived and were functionally good enough for what I wanted to do, so they seemed a good fit. However, last year I decided to switch over to GitHub Actions. I had started using GHA just for Continuous Integration checks on pull requests (linting, unit tests, etc), but I’ve since progressed to using it for deployments into my AWS environments as well.

There were several motivations behind this move, but my primary one was that I believe GitHub Actions makes it much easier for developers to maintain their delivery pipelines and respond to the feedback they provide in their daily work.

For context, my client engagements typically involve working for a short period with small dev teams where there is no dedicated DevOps engineer or platform team available to manage infrastructure and do deployments. The developers writing the code also ship it to production. So I’m looking for a tool that the dev team can adopt and run with themselves.

In the sections below, I’ll compare both options across two main areas:

  1. Setting up and maintaining the pipelines
  2. Day-to-day working in the pipelines

Note: if your organisation has non-feature-developing DevOps engineers or separate teams that take care of deployments, or if your releases require co-ordination across multiple teams, then you may care more about other factors that I don’t touch upon in this article.

Terminology comparison

Before we dive in to compare the features and DX of both options, there are some terminology differences to be aware of. In the table below, I’ve listed out terms that are roughly equivalent across the two services.

| CodePipeline/CodeBuild | GitHub Actions | Notes |
|---|---|---|
| Pipeline (CP) | Workflow | Workflows are more general purpose than CP pipelines. |
| Pipeline Execution (CP) | Workflow Run | A single instance of a running pipeline. |
| Stage (CP) | Job | A GHA Workflow contains one or more Jobs. These can be chained in a list to behave like sequenced stages. |
| Command (CB) | Step | A single task that is executed in series within a run (e.g. a bash command). |

Setting up the pipelines

Let’s start by comparing what’s involved in setting up pipelines with each service.

Configuring CodePipeline/CodeBuild

It’s a generally recommended practice to set up separate AWS accounts for each environment/stage that you wish to deploy your application to. The upshot of this for AWS CodePipeline is that if your pipeline needs to deploy to multiple environments (which is highly likely), you need to decide in which account to host the pipeline itself. I always deployed this to a separate tools account and set up the necessary cross-account IAM trust policies to allow it to deploy to “target” accounts (staging, prod, etc). Setting all this up was a real pain and took an awful lot of complex Infrastructure-as-Code just to configure the pipelines, CodeBuild project resources and associated permissions (note: if you use CDK Pipelines this may now involve a little less code).
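To give a flavour of the cross-account wiring involved, here is a minimal sketch, in CloudFormation YAML, of the role a target account (e.g. staging) needs so that the pipeline role in the tools account can deploy into it. The account IDs, role names and the permission list are all placeholders, not from any real project:

```yaml
# Hypothetical resource in a *target* account's IaC template:
# a deployment role that the tools-account pipeline role is trusted to assume.
CrossAccountDeployerRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: PipelineDeployerRole          # placeholder name
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            AWS: arn:aws:iam::111111111111:role/ToolsAccountPipelineRole  # tools account
          Action: sts:AssumeRole
    Policies:
      - PolicyName: DeployPermissions
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:                       # scope these down for real use
                - cloudformation:*
                - s3:*
              Resource: '*'
```

And this is only one target account — multiply it by every environment, plus the pipeline, CodeBuild project and artifact bucket resources on the tools-account side.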

In terms of configuring the steps for a specific stage of the pipeline, you need to create a CodeBuild buildspec.yml file where you can list out instructions. This typically involves running bash commands which call out to npm scripts defined in package.json which implement the build, deployment and test tasks. This part is the easiest to set up.
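As an illustration, a buildspec.yml for such a stage might look like this (the npm script names are assumptions, not from any particular project):

```yaml
# Illustrative buildspec.yml for a deploy stage
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 16
    commands:
      - npm ci
  build:
    commands:
      - npm run build
      - npm run deploy:staging   # assumed script wrapping the framework's deploy command
  post_build:
    commands:
      - npm run test:e2e         # run end-to-end tests against the deployed stack
```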

In order to notify developers when a build breaks, you can configure notifications to go to an SNS topic. But you then need to manage email and other subscriptions to it yourself. For Slack integration, for example, you need to write your own Lambda function which hooks into this SNS topic and then sends a request to Slack’s API. Having to deploy actual code (and not just configuration) for every pipeline you wish to create adds a lot of overhead.

Configuring GitHub Actions

With GitHub Actions, the pipeline definition is simply a single YAML workflow file inside your repository’s .github/workflows folder. You don’t need to worry about provisioning the pipeline/workflow—the mere presence of the workflow file will cause GitHub to do that for you automatically. This means you don’t need a dedicated tools account in your AWS org. What you do need in AWS is to create an OIDC Provider in each target account. This is used to issue temporary IAM credentials to a GitHub Actions workflow run in order to authenticate API calls into that AWS account. You also need to create a GitHubActionsDeployer IAM role in the target accounts with the relevant permissions to deploy resources, run tests, etc, in your AWS account.
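A sketch of what that per-target-account setup might look like in CloudFormation. The repo name is a placeholder, and the thumbprint shown is a commonly published value for GitHub's certificate that you should verify independently before using:

```yaml
# Hypothetical CloudFormation for a target account: the GitHub OIDC provider
# plus a deployment role that workflow runs from a given repo can assume.
GitHubOidcProvider:
  Type: AWS::IAM::OIDCProvider
  Properties:
    Url: https://token.actions.githubusercontent.com
    ClientIdList:
      - sts.amazonaws.com
    ThumbprintList:
      - 6938fd4d98bab03faadb97b34396831e3780aea1   # verify current value before use

GitHubActionsDeployer:
  Type: AWS::IAM::Role
  Properties:
    RoleName: GitHubActionsDeployer
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Federated: !Ref GitHubOidcProvider    # resolves to the provider's ARN
          Action: sts:AssumeRoleWithWebIdentity
          Condition:
            StringLike:
              token.actions.githubusercontent.com:sub: repo:my-org/my-repo:*  # restrict to your repo
    # Policies granting deploy permissions omitted for brevity
```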

GitHub Actions also provides access to Marketplace actions which you can refer to in your workflow, reducing the amount of YAML you need to write for common tasks. One such action is configure-aws-credentials, which allows your workflow to assume your GitHubActionsDeployer IAM role. I also use the action-slack action to send notifications to Slack for each workflow job. The remaining tasks (building, deploying, testing, etc) are configured pretty similarly to how they’re done inside CodeBuild’s buildspec.yml file.
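Putting those pieces together, a deploy job might look roughly like this. The account ID, region, secret name and npm scripts are placeholders, and action-slack here refers to the 8398a7/action-slack marketplace action:

```yaml
# Sketch of a workflow job assuming the OIDC deployer role and notifying Slack
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    permissions:
      id-token: write    # required for the OIDC credential exchange
      contents: read
    steps:
      - uses: actions/checkout@v3
      - uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::222222222222:role/GitHubActionsDeployer
          aws-region: eu-west-1
      - run: npm ci
      - run: npm run deploy:staging
      - uses: 8398a7/action-slack@v3
        if: always()      # notify on success and failure alike
        with:
          status: ${{ job.status }}
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
```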

Day-to-day tasks working with your pipelines

Reacting to failed builds

If a build fails, the first step is to go to the logs to see where it failed and what the error is. This is probably the single most frequent interaction developers will have with their CI/CD pipelines.

With CodePipeline, email or Slack integrations (if set up correctly) can include a deep URL to the failed pipeline execution inside the AWS console. However, this will likely be in an AWS account you’re not currently logged in to, so there’s the friction involved in logging into the tools account AWS console (which annoyingly will log you out of your dev account in the console). You then need to navigate to the CodeBuild project execution linked to from the pipeline before you get to see the logs. And the logs themselves are just a scrolling wall of text with minimal formatting or grouping.

With GitHub Actions, failed workflow runs will by default send an email notification to the developer whose push triggered the workflow. It can also send a notification to Slack (using the above-linked marketplace action). These notifications also contain a deep link to the workflow run. The UI is very good: you can get to the failing log line with a single click. And because you’re in GitHub (where your source code lives), you can very quickly navigate to the diff showing the changes that caused the build to break.

Conditional monorepo builds

I generally favour a monorepo for the projects I work on. Often this monorepo contains “service” folders for components that should be deployed separately and have their own independent pipelines, say a frontend web app and a backend api service. I only want to trigger a service’s pipeline if changes are made to its folder (or possibly also to a common folder if you have one).

This is something which is not possible to do out-of-the-box with CodePipeline at time of writing, although it is possible to roll your own Git change detection bash script which runs as the first step of the pipeline.

GitHub Actions provides this feature with a simple on.paths entry. To trigger a workflow only when a push contains changes to files within the api service folder, you just put this at the top of your file:

on:
  push:
    branches:
      - main
    paths:
      - 'services/api/**'

Other features of note

There are some features that I used with CodePipeline/CodeBuild that are also present in GitHub Actions, but where it wasn’t immediately obvious to me how to implement them. I’ll briefly mention a few of these.

Restricting concurrent deployments

You probably don’t want two simultaneous pipeline executions attempting to deploy to the same AWS environment at the same time. CodePipeline handled this automatically, but it requires a simple bit of configuration to enable this in GHA (see here).
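One possible configuration, assuming a workflow that deploys to staging — runs in the same concurrency group wait for the active one rather than running in parallel (note that GitHub keeps at most one run pending per group):

```yaml
# Top of the workflow file: serialise deployments per target environment
concurrency:
  group: deploy-staging       # one group name per environment
  cancel-in-progress: false   # queue behind the in-flight deploy instead of cancelling it
```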

Preventing malicious changes to workflow files

Given that CodePipeline pipelines needed to be explicitly configured and deployed independently of the application codebase, this wasn’t really a concern before. With GitHub Actions workflows living in the same repo as the code, developers could edit these workflows with potentially bad outcomes—say a malicious command is added to the workflow which deploys to the production environment. This risk can be mitigated through use of GitHub’s branch protection rules, say to ensure that all Pull Requests need to be reviewed by someone else. This could be combined with the CODEOWNERS file feature to automatically tag certain people to review PRs containing changes to workflow files.
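For example, a CODEOWNERS entry along these lines (the team handle is hypothetical) would automatically request review from a designated group whenever a PR touches workflow files:

```
# .github/CODEOWNERS
/.github/workflows/  @my-org/release-admins
```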

Composite actions

If you have common tasks that you perform in separate workflows (perhaps with different arguments), rather than copying and pasting a load of boilerplate, GitHub Actions allows you to define your own composite actions. A composite action is effectively a subroutine that you create in the .github/actions folder and then reference from your workflow files. I’ve created my own composite actions for tasks such as deploying a Serverless Framework service.
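This isn’t my actual action, but a minimal sketch of what a composite action for deploying a Serverless Framework service could look like:

```yaml
# .github/actions/deploy-sls-service/action.yml (illustrative)
name: Deploy Serverless Framework service
inputs:
  service-folder:
    required: true
  stage:
    required: true
runs:
  using: composite
  steps:
    - run: npm ci
      shell: bash                                   # composite run steps must declare a shell
      working-directory: ${{ inputs.service-folder }}
    - run: npx serverless deploy --stage ${{ inputs.stage }}
      shell: bash
      working-directory: ${{ inputs.service-folder }}
```

A workflow then invokes it with `uses: ./.github/actions/deploy-sls-service`, passing `service-folder` and `stage` under `with:`.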

Storing artifacts across stages

This isn’t something I use when deploying serverless apps, but folks have asked me about using it on GHA and it looks like it does support storing workflow data as artifacts.
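A sketch of how that works, using the upload-artifact and download-artifact actions to pass a build output between jobs (folder and script names are assumptions):

```yaml
# Passing build output from a build job to a deploy job via workflow artifacts
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm ci && npm run build
      - uses: actions/upload-artifact@v3
        with:
          name: build-output
          path: dist/
  deploy:
    needs: build                # ensures the artifact exists before downloading
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: build-output
          path: dist/
```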

README status badges

Workflow status badges are a nice way to see at a glance the current build and deployment status of all services in your repo. They’re easy to set up through a simple markdown link syntax.
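For example, in your README (the org, repo and workflow file names here are hypothetical):

```markdown
![Deploy API](https://github.com/my-org/my-repo/actions/workflows/deploy-api.yml/badge.svg)
```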

Objections to GitHub Actions

So far, I’ve told you all about the benefits of GitHub Actions, so let’s now discuss a few of the low points.

Reliability

This is the single biggest criticism I (and several others) have about using GitHub Actions. It seems to happen once or twice a month that workflows are very slow to kick off, sometimes being delayed for over an hour. This is sometimes, but not always, coupled with downtime to GitHub’s standard Git operations (if you use GitHub, you’ll understand the frustration in not being able to push your changes). You can check out their status page incident history here.

While this is not great, I’m optimistic that GitHub will greatly improve the reliability as Microsoft aims to assert itself as the dominant provider of dev tools, where CI/CD is a central component.

Cost of Enterprise features

Some of GitHub’s features that you might want to use in your CI/CD pipelines require the very pricey Enterprise plan if your repository is not public. At time of writing in June 2022, this costs $231 USD/user/year compared to $44/user/year for the next lowest Team plan. The main Enterprise-gated feature which you may wish to use is “Environments”. An Environment allows you to describe a deployment target (staging, prod, etc) and then configure protection rules related to that environment, allowing you to do things such as controlling who can access that environment’s secrets. Environments also enable you to specify that certain workflow jobs require a human review and approval before executing. It is possible to achieve human-gated approvals with branch protection rules without needing the Enterprise plan and Environments, but this setup is more convoluted. It’s a shame that these types of common security concerns command such a premium fee. (That said, maybe GitHub are sneakily trying to encourage removing the practice of human deployment approvals in favour of ungated Continuous Deployment, which I would whole-heartedly approve of 😉 ).
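For reference, pointing a job at an Environment is a one-line affair — the job below is a sketch with a placeholder deploy script:

```yaml
# A job targeting a protected Environment; its protection rules (approvals,
# secret access) then apply before this job is allowed to run.
jobs:
  deploy-prod:
    runs-on: ubuntu-latest
    environment: production   # picks up that environment's secrets and protection rules
    steps:
      - uses: actions/checkout@v3
      - run: npm run deploy:prod
```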

Summary

All in all, I think these few objections are greatly outweighed by the benefits of using GitHub Actions for deploying into AWS (especially if you’re deploying serverless applications). Fundamentally, CI/CD is about enabling fast, high quality feedback to developers to enable them to address issues as soon as possible. On this key factor, the GitHub Actions developer experience is head and shoulders above CodePipeline and CodeBuild IMO. And I only expect it to continue improving whereas I have seen very little innovation with CodePipeline over the last few years.

Another important point to consider here is that almost all developers you hire will already be familiar with GitHub, some might be familiar with GitHub Actions, but very few will be familiar with AWS CodePipeline and CodeBuild. Yes, you might find one expert who can set everything up for you, but you’re going to be pretty dependent on them when things break in your pipelines as there are many moving parts. I’ve experienced this myself with clients I set up pipelines for. Moving to GitHub Actions, which is simpler and has wider community usage (thus more googleable), makes the barrier to entry much lower.
