Speed and quality in software delivery are not a trade-off
Being a good software engineer/architect means making decisions which involve trade-offs. You accept the drawbacks of using (or avoiding) a particular technology or practice because you believe the benefits outweigh them in pursuit of your overall goals.
However, one "trade-off" is still commonly made today that isn't really a trade-off at all. The authors of the Accelerate book bring it to light in their research:
> (our research results) demonstrate that there is no trade off between improving performance and achieving higher levels of stability and quality… but much dogma in our industry still rests on the false assumption that moving faster means trading off against other performance goals, rather than enabling and reinforcing them.
The authors identified a statistically significant positive correlation between software teams' two speed-focused (throughput) metrics, Lead Time and Deployment Frequency, and their two quality-focused (stability) metrics, Change Failure Rate and Time to Restore Service. So high performers moved fast and didn't break things.
You may be thinking "That's great, Paul, but we're a small team with limited resources, we can't invest like they do". To which I would counter that the Accelerate book states that their research results "are independent of team size, organization size, or industry".
In my experience, in the significant majority of cases, this perceived "trade-off" is resolved in favour of speed over quality:
- “We’ll not bother with a CD pipeline as it takes too long to configure, we’ll just deploy directly to the environments once we’re ready”
- “We don’t have time to write automated tests right now, we just need to ship features to get feedback”
- "Infrastructure-as-Code slows us down as we're not familiar with it, we'll just use the console for now and revisit this at a later stage"
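Part of what makes these excuses tempting is overestimating the upfront cost. As a rough illustration (the function and tests here are hypothetical, not from any real project), a team's first automated test can be a handful of lines that replaces one manual pre-release check:

```python
# Hypothetical example: a tiny piece of business logic and its first
# automated tests. A test suite can start this small and grow with the code.

def order_total(prices, discount=0.0):
    """Sum item prices and apply a fractional discount, rounded to cents."""
    subtotal = sum(prices)
    return round(subtotal * (1 - discount), 2)

def test_discount_is_applied():
    # 10.00 + 5.00 = 15.00, minus 10% -> 13.50
    assert order_total([10.0, 5.0], discount=0.1) == 13.5

def test_empty_cart_is_zero():
    assert order_total([]) == 0.0

if __name__ == "__main__":
    test_discount_is_applied()
    test_empty_cart_is_zero()
    print("all tests passed")
```

Checks like these run in milliseconds on every change, whereas the equivalent manual check gets repeated (or skipped) before every release.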
While these trade-offs may seem genuine in the very short term, they become false after a few weeks of development, and they stay false. If you don't write automated tests, you need to do a load of manual testing ahead of each release, which slows you down from getting new features in front of users for feedback. If you don't build a CD pipeline, you get bugs between environments and a longer gap between code authoring and bug discovery, which again will slow you down. And this slowness compounds over time as your codebase, architecture and team grow. Of course, you could go too far, to where marginal quality improvements produce diminishing returns, but in my experience this is atypical.
It's totally valid for a team, at the outset of a project, not yet to understand how to implement quality practices such as automated testing, IaC, or a CD pipeline for its chosen architecture and tech stack (🙋 If this is you, I can help you here). But know that deciding to proceed with development anyway, without spending the time to learn and put these proven practices in place, will only cause you to go slower in the end.
Indie Cloud Consultant helping small teams learn and build with serverless.
Learn more about how I can help you here.
I publish short emails like this on building software with serverless on a daily-ish basis. They’re casual, easy to digest, and sometimes thought-provoking. If daily is too much, you can also join my less frequent newsletter to get updates on new longer-form articles.