WTF is Legacy Code Anyways?
Legacy code: that hot topic that all of us like to complain about. The big abominable waste that drives tech teams to tears and brings organizations to their knees. Despite all the complaints and toil revolving around legacy code, there isn’t much written on the topic. It’s such an un-sexy, triggering subject that we don’t even want to talk about it.
You would be hard-pressed to find an org that doesn’t have its big bad legacy code monster that is the cause of all pain and suffering. Every project takes longer and longer to accomplish. There are more and more defects every release. Everyone who relies on the software is finding it harder to do their jobs. Some people have had to resort to not using it at all to get their jobs done. No one knows how to fix it. This old dinosaur just makes life worse by the day.
Let me ask you something though: What exactly is Legacy Code anyways? Most people will just tell you that it’s “Old code”. That can’t be right though; that means all code we write today is doomed to be legacy tomorrow. If that were the case, we would have no hope in the world, as all software would become busted-up hulks that perform nothing of value. Others might tell you it’s “Messy code”. Well, tons of shit code gets written every day; are we saying it’s legacy even before the first commit? I believe it’s more nuanced than either of these definitions.
The Properties of Legacy Code
Legacy Code has a few defining properties:
- It no longer supports the new needs of the org; the spec has changed
- It hasn’t truly supported the spec for a long time, but has been patched to appear as though it does
- Several generations of engineers have contributed to it, and most (all?) are no longer around to support it
- The technology used is no longer on the bleeding edge
In addition to these properties, inexperienced engineers with too much power can exacerbate the issue.
Let’s chat about a few of these properties first.
Changing Spec
There are three things in life that are guaranteed: Life, Death, and Taxes. In turn, there are three things in Software Engineering that are guaranteed: it won’t be right the first time, it won’t be used forever, and the spec will change. Changing specs are a common complaint against engineers, but that’s like stabbing the ocean; it’s just a fact of life. It could change tomorrow, it could change 50 years from now. Either way, it will change.
Why? Well, isn’t it obvious? The world keeps moving forward in time, and as such the game board is constantly shifting. Needs change, and when they do, the corresponding spec changes.
Imagine you work for a business selling seashells. You write software that categorizes seashells, counts them, identifies them, etc. All things seashells, all the time. Now imagine you are trying to grow, and decide to start selling river pebbles. Does the software support categorizing, counting, and identifying them? Hell, does it even support more than one thing? Maybe, maybe not. Why would it? You’ve been a seashell business for the last several years, and have been pushed to write seashell software as fast as possible so you can corner the seashell market.
Guess what? Your seashell software is now officially LeGAcY CoDe.
What if I told you there are degrees of legacy code though? Sure, this might be legacy code now, but it’s not beyond repair. With some hard work and good planning, this thing can be reworked to support the new spec. Of course, if you defer that and force the same speed as before, you will end up dealing with the second property.
Prolonged Patching
Ok, so now you need to fix this code to support river pebbles. There is the real fix, and there is the fast and dirty patch to support the new needs. The real fix is to modify the application to support the categorization, counting, and identification of multiple families of products. In other words, you need to add a new abstraction layer. This takes some time, of course, but it’s clear that it needs to be done. The fast and dirty fix is to patch the application with a bunch of if statements and assorted conditional logic that changes behavior depending on some hard-coded constants (insert your favorite anti-pattern here).
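To make the contrast concrete, here’s a minimal sketch of the two approaches. Every name here is invented for illustration; the original seashell code is imagined:

```python
# Fast and dirty: hard-coded constants and branching bolted
# into the existing seashell function.
def categorize(item, product_type="SEASHELL"):
    if product_type == "RIVER_PEBBLE":          # patched in later
        return "pebble/" + item["mineral"]
    return "shell/" + item["species"]

# The real fix: an abstraction layer, so each product family
# owns its own behavior and new families plug in cleanly.
class Product:
    def categorize(self):
        raise NotImplementedError

class Seashell(Product):
    def __init__(self, species):
        self.species = species
    def categorize(self):
        return "shell/" + self.species

class RiverPebble(Product):
    def __init__(self, mineral):
        self.mineral = mineral
    def categorize(self):
        return "pebble/" + self.mineral

# Counting (and identifying) now operate on Product,
# not on seashell-specific data.
def count(products):
    return len(list(products))
```

The point of the second version isn’t elegance for its own sake: when the next product family shows up, you add a class instead of threading another flag through every function.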
This is a growing business, so the temptation to go with fast and dirty is strong. As an engineer, you know it’s wrong. As a business owner who is not so sure the business will survive without river pebbles, it’s next year’s problem. Fix it now! This is an understandable position to take, and it’s important that we as engineers understand the kind of pressure that creates; that fear and panic that drives a short-sighted, near-term solution. Typically, we capitulate to the panic, because while we know it’s quick and dirty, we can fix it once we have space and are beyond the initial problem!
But wait, there’s more! Did you hear one of our competitors is allowing people to rent seashells? We need to add support for this immediately! Damn, we have one client that we are contractually obligated to not allow to rent, though, so let’s make sure we support this edge case. A month goes by in the middle of this quick and dirty patch, and we find out that we can’t legally allow the rental of river pebbles. Better patch the patch to make sure we comply. Damn, these functions are getting a bit big. Let’s make a ticket for the backlog to refactor them.
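If you’re wondering why those functions are “getting a bit big”, here’s a hedged sketch of what the rental check might look like after just those few rounds of patch-on-patch (the client ID and constants are all invented):

```python
# Hypothetical state of the rental check after several patches.
NO_RENTAL_CLIENT = "ACME_CORP"  # contractually barred from renting

def can_rent(product_type, client_id):
    # Patch 1: a competitor started renting seashells, so now we do too.
    if product_type == "SEASHELL":
        # Patch 2: one client is contractually barred from renting.
        if client_id == NO_RENTAL_CLIENT:
            return False
        return True
    # Patch 3: legal says river pebbles can't be rented at all.
    if product_type == "RIVER_PEBBLE":
        return False
    # Nobody remembers what is supposed to fall through to here.
    return False
```

Three business events, three layers of conditionals, and every future product family multiplies the branches. This is the backlog ticket you’ll never get to.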
Spoiler alert, you’ll never get to that backlog ticket. Eventually your talented engineers will get exhausted by this, and start to leave. This leads to…
Brain Drain
The original seashell app guys are all gone. A few of the guys who wrote the river pebble patch are still here, but most of them have moved on as well. There is one guy who sort of remembers Jerry from the original team telling him a bit about the seashell categorization, but he never worked with it before himself. There are some unit tests, but when the river pebble panic of 2018 came, the business pressure to compete meant that unit tests were neglected. Half the ones that exist don’t work anymore, and most of the code is not covered any longer.
Just yesterday, Max released a patch that brought production down for an hour. This is commonplace now. No one really knows what is tech debt and what is by design. The code has become tangled with patches, and no one is left who holds the full picture in their mind to understand which patches need to be undone. Sure, we might know what the business really needs now, but we don’t understand how to make the code do that without a total re-write. No one has the appetite for this. There are better ways to solve these problems now, but whenever we ask the business for space to use them, all they think is “why would I pay to do all of this over again!?”
Older Technology
I don’t have a smooth entry into this one; it’s a side problem rather than another stage in your code base’s grief cycle. The technology chosen 10 years ago is wildly different from what we have available now to solve the same problems. Sure, some of the old is new again, but the indisputable fact is that we now have better tools at our disposal to create the same thing we did 10 years ago. Too bad we can’t really leverage them without a re-write (this is not actually true, but that’s a different post altogether).
Old technology isn’t necessarily a _bad_ thing. If you managed to keep your talented engineers, and allowed them to curate and maintain the code base in the face of the changing spec, the older technology might not matter at all. If you have some website running on Perl scripts that is serving all its needs, doesn’t have security holes, and has happy engineers supporting it, then you don’t really have legacy code, you just have old code. Old code is great. Old code means it was so well done that you didn’t need to change it. Old code means that it was robust enough to survive all the spec changes _and_ macro technology changes.
The Impact of Inexperience
My general opinion is that legacy code is created by a changing spec in combination with unchecked tech debt (which, if you didn’t figure it out, is caused by not responding with devoted intent when the spec changed). But something that can make this far worse is when the original design wasn’t even appropriate for the original spec! This can happen to the best of us for a variety of reasons, but what I want to focus on here is how inexperience plays a major factor. Asking your friend’s high-school-aged son to make you a simple seashell database and a program to input the data on the cheap isn’t the worst idea in the world if you don’t plan to ever grow, but if you think that kid (however smart and talented they are for their age) is going to have the experience to know what mess they are creating for the future, you have lost your mind.
Several times in my career, I have seen the seeds of legacy code get planted VERY early on (in fact, I am sure I planted a few myself as a young man) by those who lacked the experience necessary to start a project from scratch. I have also seen it exacerbated by granting inexperienced engineers privileges they should not have had yet (or maybe at all), like the ability to merge code with no oversight.
Before anyone accuses me of being a jerk, I just want to say that I strongly believe that anyone who wants to put in the work to learn the craft should be given the chance. They need guidance though, and they need the appropriate level of responsibility. It takes ten years for a sushi chef to go from apprentice to chef. You need to learn to make the rice before you can make the sushi boat. Our junior engineers (of all ages) need the encouragement and guard rails to grow into seniors that build delicious sushi boats of software, but first, let them master the rice.