Are software applications getting harder to maintain?
For a while now, I’ve been noodling on a thought about how the maintainability of software systems may be changing over time. I don’t mean how the maintainability of a single system changes as it gets older; rather, I’m asking this question: Is the median maintainability of a software system that has been in production for 3 years lower or higher today than it was for a system of the same age say 2/5/10/20 years ago?
The forces at play
My reason for considering this is because of a few strong (and growing) forces at play in the tech world right now.
Firstly, there is a huge demand for software engineers which is outstripping supply. This has two knock-on effects:
- Higher engineer turnover — Engineers are staying in the same organisation for shorter periods of time given the greater opportunities/salaries in the marketplace. The person who wrote the code that caused the just-discovered bug or the architect who decided to use a certain cloud service is less likely to still be in the company.
- Higher ratio of junior engineers in a team — Given this high demand, more junior engineers will be hired to fill the roles. Will their relative lack of experience result in generally harder-to-maintain software systems if there are fewer senior engineers there to guide them?
A second strong force is the move to remote working, potentially resulting in less synchronous communication between team members. Does this mean quality practices could fall through the cracks? Or does it go the other way, with practices like automation and documentation now more likely to get done?
The final force is the expansion in available QA and dev tools. The effectiveness of these tools will always lag behind the application or infrastructure technologies to which they are coupled, but these tools should generally have a positive effect on system maintainability.
Why this question is worth considering
While this question would be really hard to answer in the general case (especially retrospectively), if you’re considering the maintainability of systems you work on, metrics such as Delivery Lead Time (the time to deliver a new feature to customers from conception) and Mean Time To Resolve (the time to diagnose and fix a bug/incident) are good proxies for a system’s overall maintainability.
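As a rough illustration of how you might track these proxies yourself, here’s a minimal sketch in Python. The data, field names, and timestamps are all hypothetical — in practice you’d pull these from your ticketing or incident-management system:

```python
from datetime import datetime
from statistics import median

def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two ISO-8601 timestamps."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

# Hypothetical records: when a feature was conceived vs. shipped,
# and when an incident was detected vs. resolved.
features = [
    {"conceived": "2023-01-02T09:00:00", "shipped": "2023-01-05T17:00:00"},
    {"conceived": "2023-01-10T09:00:00", "shipped": "2023-01-20T17:00:00"},
]
incidents = [
    {"detected": "2023-02-01T08:00:00", "resolved": "2023-02-01T12:00:00"},
    {"detected": "2023-02-14T22:00:00", "resolved": "2023-02-15T06:00:00"},
]

lead_times = [hours_between(f["conceived"], f["shipped"]) for f in features]
resolve_times = [hours_between(i["detected"], i["resolved"]) for i in incidents]

print(f"Median Delivery Lead Time: {median(lead_times):.1f} hours")
print(f"Mean Time To Resolve: {sum(resolve_times) / len(resolve_times):.1f} hours")
```

Tracked over months, a rising median lead time or MTTR for a system of a given age would be one concrete signal that it’s becoming harder to maintain.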
Suppose it could somehow be proven that software is indeed getting harder to maintain. How would this change things for you? Would you give greater priority to learning and applying proven quality practices such as test automation, documentation, loose coupling and Continuous Delivery? Or does your priority continue to be learning and using the latest technologies and architectural patterns in the systems you’re building?
What are your thoughts?
So what do you think from your own experience—is software getting harder (or easier) to maintain? Are there other forces I haven’t listed that you think could sway this either way? Hit reply and let me know what you think or add a comment to this Twitter thread.
Indie Cloud Consultant helping small teams learn and build with serverless.
Learn more about how I can help you here.
Join daily email list
I publish short emails like this one, on building software with serverless, on a daily-ish basis. They’re casual, easy to digest, and sometimes thought-provoking. If daily is too much, you can also join my less frequent newsletter to get updates on new longer-form articles.