Working With Legacy Software
Legacy software is any software that is critical to an organization but no longer has active development or people confident they can change it safely. Some describe it as any valuable code that a developer is afraid to change. Most organizations have some form of legacy software, and it can become a source of risk and cost if not maintained properly.
No matter the reason your legacy software exists, at some point your organization will need to address it. There are several ways to decrease the risk legacy software poses, as well as to keep current software from ever reaching that state.
Reducing risk with legacy software
Organizations do not intend for software to reach legacy status; rather, software evolves into that state over time. Key indicators like fragile deployments, timeline overruns and difficulty onboarding new team members can be the first signs software is headed toward legacy status. These problems are typically caused by language and infrastructure deprecation, missed opportunities for refactoring, personnel changes and knowledge silos within the team and organization. Many of these factors are outside the organization's control; however, it is important to be aware of them and make tactical investments to reduce long-term risk.
There are several aspects of software systems that can be leveraged to reduce the fear of change and limit the negative consequences of making mistakes. Organizations should not wait until software qualifies as legacy software to address these gaps. However, if your software has reached legacy status, these techniques can help set you on the path of working your way out of it.
Automated deployments
Sometimes the riskiest part of working with legacy software is the deployment process. Most software is highly configurable and dependent on the underlying infrastructure it is deployed to. Deployment procedures should be automated to avoid human error and to provide clear documentation of all the constraints.
In the event new software is added or existing software is migrated, automated processes ensure there is a good platform for applying changes in a systematic way. In the event automation is not feasible, well documented procedures should be in place for any manual steps required to deploy and configure the software. Leaning on various aspects of DevOps can go a long way to improving your overall automation strategy.
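One way to approach this is to express the deployment as an ordered list of fail-fast steps, so the procedure is both executable and self-documenting. The sketch below is illustrative; the step names and commands are placeholders, not taken from any real system.

```python
# Hypothetical sketch of scripting a deployment as ordered, fail-fast steps.
# The step names and commands are illustrative placeholders.

import subprocess

DEPLOY_STEPS = [
    ("check prerequisites", ["echo", "verifying tools"]),
    ("build artifact",      ["echo", "building"]),
    ("apply configuration", ["echo", "configuring"]),
    ("deploy artifact",     ["echo", "deploying"]),
    ("smoke test",          ["echo", "verifying deployment"]),
]

def deploy(steps):
    """Run each step in order; stop at the first failure so a broken
    deployment never silently continues past an error."""
    completed = []
    for name, cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(f"step '{name}' failed: {result.stderr}")
        completed.append(name)
    return completed
```

Because the step list doubles as documentation, a manual run-book and the automation can never drift apart.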
Automated testing
Automated tests come in many forms. Some test the interactions within the software (unit tests), some test the interactions between components (integration tests) and some test the entire system (end-to-end tests). High levels of automated testing provide assurance that existing behavior is not broken by new development, and increase confidence through regular feedback loops to the developers.
In most cases, the tests themselves provide the most reliable documentation of what the software does and can be very helpful when assessing new features. If you have the option, leveraging test-driven development can help ensure you end up with the right level of testing.
Quality of design
Many times, the existing architecture serves as a template when extending software. If there are many templates to choose from, it can be difficult to employ the right pattern for the task at hand. Therefore, it is helpful if consistent patterns are used throughout the code base.
Leveraging well known design patterns will allow developers to quickly understand what something is doing. Separation of concerns also helps limit the impact of changes. Poorly designed software can be scary to modify and sometimes leads to further bad design.
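As one example of a well-known pattern that also enforces separation of concerns, here is a sketch of the Strategy pattern. The shipping-cost domain is hypothetical; the point is that each policy lives in its own small function and the caller depends only on the shared signature.

```python
# A sketch of the well-known Strategy pattern: each policy is a small,
# independently testable function. The shipping-cost domain is hypothetical.

from typing import Callable

def flat_rate(weight_kg: float) -> float:
    return 5.00

def by_weight(weight_kg: float) -> float:
    return round(weight_kg * 1.25, 2)

# The caller depends only on the strategy's signature, so adding a new
# shipping method never requires touching existing code.
def shipping_cost(weight_kg: float, strategy: Callable[[float], float]) -> float:
    return strategy(weight_kg)
```

A developer encountering this for the first time can recognize the pattern immediately, and a change to one pricing rule cannot ripple into the others.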
Support for modern tools
It is important to have a clear path to getting started when maintaining and extending software. Outdated tooling and operating systems can cause provisioning problems to get developers up and running. Modern tooling allows for increased familiarity and therefore increased productivity for the developers working on the software.
Quality of logging
Software can have a variety of behaviors that span multiple components and systems. In order to do the proper analysis and verification, it is helpful if there is a robust logging system in place. A few aspects of logging that are useful include log aggregation, existing debug/trace logging and catchall error logging. Aggregation allows for querying and correlating logs across systems.
Existing debug/trace logging provides a base level of logging that can be expanded in a targeted way where needed. Catchall error logging provides a last resort mechanism to see where the issues are. These aspects are important because they provide developers and analysts a glimpse at the inner working of running software that will help them modify the code and triage potential mistakes.
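The three layers can be sketched with the standard library's `logging` module. The component and invoice names below are illustrative only; the structured format line is what makes the output friendly to a log-aggregation system.

```python
# A sketch of layered logging: aggregation-friendly structured output,
# targeted debug/trace logging, and a catchall error handler.
# The component names are illustrative only.

import logging
import sys

logging.basicConfig(
    stream=sys.stdout,
    level=logging.DEBUG,  # base level; typically raised to INFO in production
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)

log = logging.getLogger("billing.invoices")

def process_invoice(invoice_id: str) -> bool:
    log.debug("loading invoice %s", invoice_id)  # targeted trace logging
    try:
        log.info("invoice %s processed", invoice_id)
        return True
    except Exception:
        # Catchall: nothing escapes without leaving a trail for triage.
        log.exception("unhandled failure processing invoice %s", invoice_id)
        return False
```

Because the logger name encodes the component, an aggregator can correlate entries across systems by querying on it.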
Parallel environments
At least one alternative environment that mirrors production is necessary to safely validate changes made to software. Best practice usually includes three or more parallel environments: dev/test, staging and production. Dev/test is the least production-like because it is meant for work in progress.
Some parts of the system can be faked out and given higher levels of access to try things out. Staging generally matches production and is meant for pre-production checks of things like the deployment, data migrations and final verification of new development. These environments create a safe place to sanity-check changes before going live with them. The cost to correct mistakes increases as software gets closer to production.
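One common way to keep the environments parallel yet distinct is to declare their differences in one place. The sketch below is hypothetical (the service names and settings are invented), but it shows a dev environment faking out an external dependency while staging and production stay production-like.

```python
# A sketch of per-environment configuration. The settings and
# service names are hypothetical.

ENVIRONMENTS = {
    "dev": {
        "payments_service": "fake",     # faked out for work in progress
        "debug": True,
    },
    "staging": {
        "payments_service": "sandbox",  # production-like, safe to exercise
        "debug": False,
    },
    "production": {
        "payments_service": "live",
        "debug": False,
    },
}

def config_for(env: str) -> dict:
    """Fail loudly on unknown environments rather than guessing."""
    if env not in ENVIRONMENTS:
        raise KeyError(f"unknown environment: {env}")
    return ENVIRONMENTS[env]
```

Keeping every environment's settings side by side makes drift between staging and production easy to spot in review.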
Addressing legacy software
When companies need to extend or improve legacy software, there are several ways to go about it. Each option has its own set of trade-offs and should be approached carefully. The best way to address legacy software is to ensure it never exists in the first place; however, many organizations do not have that option. Below are a few ways organizations can address legacy software.
Re-write and replace
Some organizations will choose to replace entire legacy systems rather than deal with the risk of changing them. This can be a large endeavor, as it requires the new software to reach feature parity with the existing system before the old software can be turned off.
This approach may also require parallel maintenance both in the legacy software and the newly developed software. This approach benefits greatly from automated deployments, parallel environments and automated testing.
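During the parallel-maintenance period, one useful technique is a parity harness that runs the same inputs through both implementations and reports disagreements. Everything below is a hypothetical stand-in; the shape of the check, not the domain, is the point.

```python
# A sketch of a parity harness: run the same input through the legacy
# and replacement implementations and compare. Both functions here are
# hypothetical stand-ins for real systems.

def legacy_total(items):
    total = 0.0
    for price, qty in items:
        total += price * qty
    return round(total, 2)

def new_total(items):
    return round(sum(price * qty for price, qty in items), 2)

def check_parity(cases):
    """Return the cases where the two systems disagree."""
    return [case for case in cases if legacy_total(case) != new_total(case)]
```

Feeding production-shaped inputs through a harness like this gives concrete evidence of feature parity before the old system is turned off.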
Improve in place
Some organizations will choose to address legacy software in place. This approach can be a tedious exercise for developers because each change requires understanding the existing code and working within its constraints.
It is often a good idea to backfill the gaps described above as you go, since subsequent changes can leverage that work. This approach benefits from quality of design, support for modern tools and automated testing.
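A practical way to backfill the testing gap is with characterization tests: before changing legacy code, pin down what it does today, surprising parts included. The legacy function below is hypothetical; the discipline is to assert current behavior, not desired behavior.

```python
# A sketch of a characterization test: before changing legacy code,
# capture its current behavior so later refactoring has a safety net.
# `legacy_format_name` is a hypothetical legacy function.

def legacy_format_name(first: str, last: str) -> str:
    # Existing behavior we must preserve: uppercases and trims,
    # but silently falls back when the last name is empty.
    if not last:
        return first.strip().upper()
    return f"{last.strip().upper()}, {first.strip().upper()}"

# These tests assert what the code DOES today, not what we wish it did.
def test_characterize_normal_input():
    assert legacy_format_name(" Ada ", "Lovelace") == "LOVELACE, ADA"

def test_characterize_missing_last_name():
    assert legacy_format_name("Ada", "") == "ADA"
```

Once the quirks are pinned down, each in-place improvement can be made with immediate feedback that nothing observable changed.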
Abstract interface and extend
Some organizations will choose to keep the legacy software in place and create an abstraction layer that allows new software to be built on top of it. Long term, this approach can require a large amount of time and expense; however, it is the safest way to avoid regressions in functionality.
As the new technology is introduced, development teams can and should look for opportunities to replace parts of the legacy system to reduce long-term costs. This approach benefits from automated deployments, parallel environments and quality logging.
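The abstraction layer can be as simple as a gateway class that translates the legacy interface into the shape new code wants. The legacy function and its cryptic dictionary keys below are hypothetical; the point is that new code depends only on the gateway, so the legacy call behind it can later be replaced without touching callers.

```python
# A sketch of an abstraction layer over a legacy module. The legacy
# function and its quirky dict shape are hypothetical.

def legacy_lookup_customer(raw_id):
    # Stand-in for the legacy API with its cryptic keys and status codes.
    return {"CUST_NM": "Acme Corp", "CUST_STS": "A"}

class CustomerGateway:
    """New code talks only to this interface; the legacy call is an
    implementation detail that can later be swapped for a new backend."""

    def get_customer(self, customer_id: str) -> dict:
        raw = legacy_lookup_customer(customer_id)
        return {
            "name": raw["CUST_NM"],
            "active": raw["CUST_STS"] == "A",
        }
```

When a piece of the legacy system is eventually rewritten, only the gateway's internals change, which is what makes incremental replacement safe.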