In March, SolarWinds issued its annual report for FY 2020. Although the report did not reveal many new details about the monumental hack that has embroiled the federal contractor, it did confirm details that had previously only been assumed about how the security incident occurred and how devastating such an attack can be.

The SolarWinds annual report shows that "malicious code, or Sunburst, was injected into builds of (the SolarWinds) Orion Software Platform." The report goes on to state that the threat actor who breached the company's systems obtained access to source code across its product suite and exfiltrated that source code from its systems.

Based on earlier reports, and confirmed in the SolarWinds annual report, the security incident began with an unsecured development and deployment environment. Ultimately, this weakness affected as many as 18,000 organizations and led to long-term, pervasive intrusions into at least nine federal agencies. SolarWinds leaders recognize the need for improvement in their development and deployment pipeline, committing to deploy "significant resources" to improve the company's software development and build environments.

SolarWinds estimates that the cybersecurity incident has cost the company $3.5M to date. That figure will certainly grow, perhaps dramatically, as SolarWinds is currently subject to numerous lawsuits and investigations, including multiple class action lawsuits alleging, among other things, violations of federal securities laws.

Insider threats and the development pipeline

The SolarWinds incident highlights the importance of a secure code development and deployment pipeline. This is equally true, if not more so, in the public sector, where in-house developed applications often store and process personally identifiable information (PII) or even more sensitive data. And, as past incidents have shown, the development and deployment of these applications must also be secured against insider threats.

A real-life example of the potential consequences of insider threat in a public sector development pipeline: a disgruntled programmer altered code in two criminal history databases, and the change could have brought down the entire system and permanently destroyed the criminal history records. Luckily, the malicious code was caught before it executed, but the organization still had to spend $85,000 fixing the systems. A similar attack could be carried out by a threat actor masquerading as a legitimate developer using stolen credentials.

With so much at stake and so much to consider, shoring up vulnerabilities in development and deployment environments can seem like an overwhelming, even scary, task. I will attempt to help organizations make sense of securing these environments by offering seven best practices, each with a brief description.

  1. Implement a DevSecOps pipeline (with automated security checks and feedback loops): Interactive application security testing (IAST) is worth considering here. The agent-based approach IAST employs allows assessments to take place from within the application, which means a much broader range of issues can be identified, with far fewer false positives, than with static or dynamic application security testing tools (SAST and DAST).
  2. Adopt robust access management: Lock down access to every tool in your development and deployment pipeline so that users are authenticated and granted only the permissions they need. Use just-in-time credentials through privileged access management wherever possible, especially for administrators and users with elevated privileges. And not to bury the lede: perhaps most importantly, implement multi-factor authentication (MFA). Require developers to authenticate via MFA when checking in code, and require it for any user, at any step of the process, with access to the source code (the first sketch after this list shows one way to verify such a policy).
  3. Establish guardrails: Ensure that no code can be inserted without going through the DevSecOps pipeline and all of its security checks. Create alerts that trigger if anyone attempts to bypass the guardrails, and use a red team to test that your guardrails and alerts are comprehensive. Additionally, perform periodic log reviews for any attempts to bypass your established guardrails (the second sketch after this list illustrates one such guardrail).
  4. Gain visibility: Make sure your systems are configured to log all of the event types you want to monitor. It may be helpful to bring in seasoned experts to verify you are capturing everything you should; you don't want to discover later that you cannot answer basic questions, such as how a change was made. Because the integrity, completeness and availability of these logs are crucial for forensic and auditing purposes, make sure logs are immutable. In Amazon Web Services, for instance, this may mean ensuring CloudTrail log file integrity validation is enabled and sending all logs to a dedicated, centralized Amazon Simple Storage Service (S3) bucket, which allows you to enforce strict security controls, access restrictions and segregation of duties (the third sketch after this list automates part of this check). Finally, periodically review logs to verify they are comprehensive and provide visibility through all stages of your DevSecOps environment.
  5. Enforce separation of duties: The point of separation of duties is to reduce the chance of insider threat and to mitigate the threat of stolen credentials. With separation of duties in place, a threat actor would at the very least have to identify which two sets of credentials are required to perform an action and then steal both, a significantly more difficult proposition than stealing one. Changes to sensitive systems or databases should not be carried out without two individuals approving them (the fourth sketch after this list shows a simple approval check). A related best practice is pair programming, which has been used successfully to thwart the insider threat of malicious code being written into a system. In an organization using pair programming, a malicious developer would need to convince their pair to collude, and if pairs are rotated, each subsequent partner would also need to be convinced. As a bonus, organizations that implement pair programming typically produce better code, meaning less rework and therefore faster time to value and lower costs.
  6. Harden the pipeline: Harden everything. Many continuous integration/continuous deployment (CI/CD) tools are not hardened by default, favoring ease of use for developers, so harden them. Configuration management and orchestration tools are good examples of tools that often do not come hardened out of the box. The good news is that vendors typically offer advice on hardening their tools, and there are active communities sharing best practices. For instance, Chef (a prominent configuration management tool) maintains both a security page on hardening its products and a blog where its community can stay informed with their peers. The message: harden everything, including your source, build and binary repositories.
  7. Deploy signature-based authentication: Sign binaries, executables, scripts and any other build artifacts to verify their authenticity and integrity (the final sketch after this list demonstrates signing and verifying an artifact). This capability can also be integrated with the DevSecOps guardrails discussed earlier. Digital signatures have other benefits as well, including preventing namespace conflicts, and they can be dual-purposed to carry versioning information or other useful metadata.
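
To illustrate the access management practice (item 2), here is a minimal sketch that checks whether a GitHub organization enforces MFA for all of its members. It assumes the third-party `requests` package, a hypothetical organization name, and a `GITHUB_TOKEN` environment variable holding a token with owner-level access (GitHub only exposes the `two_factor_requirement_enabled` field to organization owners); it is a policy audit, not a complete access management solution.

```python
"""Sketch: verify a GitHub organization requires MFA for all members."""
import os

import requests

GITHUB_API = "https://api.github.com"
ORG = "example-org"  # hypothetical organization name


def org_requires_mfa(org: str, token: str) -> bool:
    resp = requests.get(
        f"{GITHUB_API}/orgs/{org}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    # True only when the org enforces 2FA for all members;
    # the field is only visible to organization owners.
    return bool(resp.json().get("two_factor_requirement_enabled"))


if __name__ == "__main__":
    token = os.environ["GITHUB_TOKEN"]  # assumed to be set by the operator
    if not org_requires_mfa(ORG, token):
        raise SystemExit(f"Policy violation: {ORG} does not enforce MFA")
    print(f"{ORG} enforces MFA for all members")
```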
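For the guardrails practice (item 3), one common enforcement point is a server-side pre-receive hook that rejects direct pushes to the main branch from anyone except the pipeline's service account, so all code must flow through the DevSecOps pipeline. This sketch assumes the hosting layer exposes the pusher's username in an environment variable (GitLab, for instance, sets GL_USERNAME in server hooks); the branch and account names are hypothetical.

```python
#!/usr/bin/env python3
"""Sketch: pre-receive hook blocking direct pushes to a protected branch."""
import os
import sys

PROTECTED_REF = "refs/heads/main"
CI_ACCOUNT = "ci-bot"  # hypothetical service account used by the pipeline


def main() -> int:
    pusher = os.environ.get("GL_USERNAME", "unknown")
    # Git feeds the hook one "<old-sha> <new-sha> <ref>" line per updated ref.
    for line in sys.stdin:
        _old_sha, _new_sha, ref = line.split()
        if ref == PROTECTED_REF and pusher != CI_ACCOUNT:
            # Reject and leave a trail for the periodic log review.
            print(
                f"DENIED: {pusher} attempted a direct push to {ref}; "
                "changes must go through the DevSecOps pipeline",
                file=sys.stderr,
            )
            return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```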
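For the visibility practice (item 4), the AWS checks described above can be automated. This sketch uses boto3 (with credentials already configured) to flag any CloudTrail trail that lacks log file integrity validation or delivers logs somewhere other than the centralized bucket; the bucket name is hypothetical.

```python
"""Sketch: audit CloudTrail trails for integrity validation and delivery."""
import boto3

CENTRAL_LOG_BUCKET = "org-central-audit-logs"  # hypothetical bucket name


def audit_trails() -> list[str]:
    findings = []
    client = boto3.client("cloudtrail")
    for trail in client.describe_trails()["trailList"]:
        name = trail.get("Name", "<unnamed>")
        if not trail.get("LogFileValidationEnabled"):
            findings.append(f"{name}: log file integrity validation is OFF")
        if trail.get("S3BucketName") != CENTRAL_LOG_BUCKET:
            findings.append(
                f"{name}: delivers to {trail.get('S3BucketName')}, "
                f"not the centralized bucket {CENTRAL_LOG_BUCKET}"
            )
    return findings


if __name__ == "__main__":
    for finding in audit_trails():
        print("FINDING:", finding)
```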
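For the separation of duties practice (item 5), here is a minimal sketch of a two-person control: confirming that a pull request has at least one approving review from someone other than its author before it may be merged. It uses the GitHub REST API via `requests`; the repository name is hypothetical, a `GITHUB_TOKEN` environment variable is assumed, and pagination of reviews is omitted for brevity.

```python
"""Sketch: require a second-party approval before a change proceeds."""
import os

import requests

API = "https://api.github.com"
REPO = "example-org/example-repo"  # hypothetical repository


def has_second_party_approval(repo: str, number: int, token: str) -> bool:
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    }
    # Identify the author, then look for an approval from anyone else.
    pr = requests.get(f"{API}/repos/{repo}/pulls/{number}",
                      headers=headers, timeout=10)
    pr.raise_for_status()
    author = pr.json()["user"]["login"]
    reviews = requests.get(f"{API}/repos/{repo}/pulls/{number}/reviews",
                           headers=headers, timeout=10)
    reviews.raise_for_status()
    return any(
        r["state"] == "APPROVED" and r["user"]["login"] != author
        for r in reviews.json()
    )


if __name__ == "__main__":
    token = os.environ["GITHUB_TOKEN"]
    if not has_second_party_approval(REPO, 42, token):  # hypothetical PR #42
        raise SystemExit("Blocked: no approval from a second individual")
```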
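Finally, for the signing practice (item 7), this sketch signs a build artifact and verifies the signature with Ed25519 using the third-party `cryptography` package. Key management is out of scope here: in a real pipeline the private key would live in an HSM or a secrets manager, not be generated in-memory as it is purely for illustration below.

```python
"""Sketch: sign and verify a build artifact with Ed25519."""
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

# Illustration only: real pipelines load the key from an HSM or KMS.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

artifact = b"contents of a build artifact"
signature = private_key.sign(artifact)

# The deploy stage verifies the artifact before promoting it.
try:
    public_key.verify(signature, artifact)
    print("signature valid: artifact may be promoted")
except InvalidSignature:
    raise SystemExit("signature invalid: reject the artifact")
```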

To truly mitigate risk, real action must be taken

This is by no means a comprehensive list of ways to secure your code development and deployment environment, but hopefully it at least opens a discussion on these topics. And although the list is not exhaustive, if you are not following the best practices above, you may be putting your organization at an elevated, and unacceptable, level of risk.

In any event, this seemed like a good time to get back on my security soapbox. And given that the SolarWinds incident began with a less-than-secure development and deployment environment, I figured I would use this unfortunate event to help drive awareness of secure code development and deployment practices.

As Winston Churchill, former Prime Minister of the United Kingdom, once said: "Never let a good crisis go to waste."