Security professionals tend to spend a lot of their time firefighting. People are pressured to bring a product to market quickly, and if an issue arises, they are under additional pressure to fix it fast. As a consequence, few people have the time or resources to think about the big picture of lasting approaches to software security. This is an issue seen across the entire supply chain.

However, in the modern market, standalone vulnerabilities are no longer the most challenging threats in the software supply chain. We are seeing hostile actors stacking vulnerabilities to reach their goals, and the emergence of sophisticated attacks demands an equally sophisticated response. Instead of swatting individual problems, you have to account for broader threats across your supply chain.

Management of the current threat landscape puts a premium on software supply chain sustainability. In this context, sustainability is all about software lifecycle management: making security a continual series of processes to assess software provenance, quality, and risk, ideally supported by automation and tooling to allow for scale. This can start with the fundamentals of knowing the origin of your software, how you’re including it in your products and solutions, how you will maintain and support it, and how you’ll replace it when it reaches end of life or another need arises.

In this blog, I’ll discuss specific markers of sustainable software projects and products, and I’ll highlight several steps organizations can take to strengthen software supply chain security. But first, I’ll start with a deeper dive into today’s evolving threat landscape and why it requires organizations to re-think legacy approaches to software security.

Note: This piece is based on a webinar I recently hosted: The Path to a Sustainable Software Supply Chain. If you’re interested in this topic and would like more information, we’d recommend you view the on-demand version, which is linked below.

Evolution of Software Supply Chain Threats

Our threats look much different today than in years past. If we look back 25 or so years, the typical threat was a virus. It could be installed through a compromised piece of software — someone gives you a cracked piece of software on a floppy disk, your computer gets a virus, and that virus causes some degree of chaos. Over time, these threats moved to the internet. People opened a poisoned email attachment, which caused damage, and so on. The scale of potential reach increased, but the construct remained fundamentally the same.

Those types of threats still exist and cannot be discounted, but new threats have arisen that are far more sophisticated, reflecting the fact that software itself is more sophisticated. This is also partly attributable to the rise of nation-state cyber warfare teams, which has led to a certain bleeding of technologies and skills into the private sector — and to the massive rise in profitability for large, well-funded private actors leveraging ransomware and similar attacks at scale.

Along the way, our attack surfaces have grown from a single point of failure — the traditional, standalone CVE, for example — to situations where multiple vulnerabilities are stacked to accomplish several objectives. One could be used to enter a system, another to embed in the system, and another to download further payloads onto the system or send information off it. Unless you catch the whole stack, you may only be closing the loop on the first issue, and your systems will remain compromised.

Another example of increased sophistication is the software supply chain attack. The SolarWinds hack was one such instance: a software service provider's build system was compromised, neatly circumventing many of the failsafes offered by traditional security review. The company's own code looked fine on completion and ingest, but by the time it passed through the build system and onward to customers, it carried a malicious payload.

A similar, less sophisticated attack recently hit WordPress, where a certain provider’s plug-ins were backdoored. The provider’s website was compromised, stripping away the “safe harbor” of provenance and breaking security for all parties relying on that provenance as their primary gatekeeping mechanism.

Indicators of Sustainable Software

When we take a step back and reflect on the evolution of security threats, it is clear that simply responding to issues in an ad-hoc manner will always leave us a step behind. There are a few things we need to do to take a couple of steps forward, and the first is understanding the provenance of the software we rely on. We have to start with how we or other parties curate software from the initial source code to the point of delivery. This is a subject adjacent to (but different from) issues related to code development.

The good news is that a lot of the contextual work has been done in the industry to support organizations of all sizes. Here are some things to ask about or to do in making sustainable security a reality for your use case.

  1. Check for the Open Source Security Foundation (OpenSSF) Best Practices Badge: In contrast to some of the other indicators on this list (which are more targeted to software companies), the OpenSSF Best Practices Badge covers open source projects. If a project has this badge, you know it has taken steps to cover key topics related to sustainability. The project owners will know how they make their code and how they maintain it. Projects like the Linux Kernel and many others have this badge, and it’s a great starting point for identifying good governance.
  2. Check for the ISO/IEC 5230: OpenChain Specification: Companies that achieve conformance with this standard demonstrate that they are using the key processes of a quality open source license compliance program that can ingest, develop, and export software effectively. You can think of it as identifying a series of inflection points where things can go wrong, and making sure you have a process to keep things on track.
  3. Check for a Software Bill of Materials (SBOM): When software moves between people, teams, and organizations, there is no longer any excuse for ambiguity about what is being transported and in what context it can be used. These days, it is eminently reasonable to expect suppliers to provide you with an SBOM, preferably in a human- and machine-readable format like SPDX ISO/IEC 5962 or CycloneDX, and it is reasonable for your customers to expect the same. SBOMs enable companies across the supply chain to understand and evaluate the composition and provenance of a software product (a minimal parsing sketch follows this list).
  4. Ask about engagement in the open source community: Companies that are engaged with open source projects and communities are often better positioned to quickly apply sophisticated solutions to difficult problems impacting open source code. This is partly because they have greater market insight due to the communication provided, and partly because the companies in the ecosystem — and the individuals — can quickly coalesce around and deploy shared solutions.
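
To make the SBOM point above concrete, here is a minimal sketch, assuming the supplier delivers a CycloneDX 1.x SBOM as JSON. The file name and field handling are illustrative rather than a reference implementation; the same idea applies to SPDX documents, just with different field names.

```python
import json

# Load a CycloneDX JSON SBOM (the file name is a placeholder).
with open("sbom.json") as f:
    sbom = json.load(f)

# Walk the component list and print each name, version, and declared license.
for component in sbom.get("components", []):
    licenses = [
        entry.get("license", {}).get("id", "unknown")
        for entry in component.get("licenses", [])
    ]
    print(
        component.get("name", "unknown"),
        component.get("version", "unknown"),
        ", ".join(licenses) or "no license declared",
    )
```

Even a simple listing like this gives procurement and security teams a shared, machine-checkable view of what is actually inside a delivery.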

Questions to Ask Software Vendors

When you turn your attention to your supply chain and your communication with vendors, there are some questions that complement the above indicators. These can help frame how vendors understand provenance and whether this matches your own expectations. Let’s dig in.

  1. Where do they get their code? It’s important that your software supplier is able to coherently explain where they get their code. This will help you make decisions. If a vendor says they are using the Linux Kernel — but one modified by a third party somewhere else in the supply chain — you’ll know to ask for the precise nature of the modifications, and, ideally, the source code for review. This question also helps surface red flags. For example, if a supplier suggests that they built complex software solutions entirely in-house, you have an item to place additional focus on. Companies such as FOSSA that scan large volumes of software products frequently note that open source is almost always found during the exploration of modern codebases.
  2. How are they maintaining their code? Bit rot is something that is frequently discussed in our industry. This is the idea that technology inevitably wobbles under its own complexity. There are only so many hours in a day, and even the largest software teams struggle with refactoring or refining code against the immediate pressures of deployment. Code is more elegant at the beginning of the lifecycle than at the end as teams patch and improve it for performance, features, and security. If a company or a project does not have at least some process for maintaining the code — or if the process is ad-hoc — that is a potential concern for sustainability and security.
  3. How do they build their code? Is it a specialized build system or a standardized one? It is fine for an organization to use a specialized build system as long as it has the capacity and process to audit that system and make sure it has not been compromised. Using a standardized build system shared with the rest of the market does not remove the need for a degree of audit, but the required audit surface is smaller because the organization is not maintaining that build tooling alone. That helps both the supplier and the customer check more quickly that everything looks solid from a security perspective.
  4. How do they review code origin, maintenance, and builds on an ongoing basis? Security is a process, and it’s important that vendors continually assess the provenance and security of their software. This means having processes to know where code comes from, how it is maintained, and how it is built, both in their supply chain and internally. There are various approaches to accomplishing this, and different markets have different needs, but one constant is that eyes need to stay on this topic to reduce security concerns.
  5. Are they using automation to identify and remediate vulnerabilities in their code? Companies can do great things manually, and a lot are operating with manual processes to manage compliance, security, and export control. But companies that automate with tools like software composition analysis are generally better equipped to handle a greater volume of code with greater fidelity. Using automation is not the sole determinant of market maturity, but companies that have implemented it successfully tend to free up resources for use elsewhere (a rough sketch of this kind of automated lookup follows this list).
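
As a rough illustration of the kind of automated lookup mentioned in the last question, the sketch below queries the public OSV.dev vulnerability database for advisories against a single package version. This is a simplification of what a software composition analysis tool does, not a substitute for one; the package name, version, and ecosystem are placeholders.

```python
import json
import urllib.request

# Query the public OSV.dev database for known vulnerabilities affecting one
# package version. The name, version, and ecosystem below are placeholders.
def query_osv(name: str, version: str, ecosystem: str = "PyPI") -> list:
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("vulns", [])

for vuln in query_osv("jinja2", "2.4.1"):
    print(vuln.get("id"), "-", vuln.get("summary", "no summary available"))
```

Real SCA tooling wraps this kind of lookup with dependency discovery, prioritization, and remediation workflows, which is where the value of automation really lies.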

Ultimately, if you’re using an open source project with the OpenSSF Best Practices Badge or you’re buying from a vendor that uses standards like OpenChain ISO/IEC 5230, you’ll know that the upstream party has thought about governance. If the vendor uses automation to manage security issues and can provide a software bill of materials in a format like SPDX ISO/IEC 5962 or CycloneDX, you can feel increased confidence that they have the capacity to quickly identify and respond to issues. And, if the supplier is engaged with upstream projects and other companies, they will likely be better suited to identify and address problematic components before they become serious issues.

The Software Supply Chain Sustainability Maturity Model

We have now discussed several specific markers of sustainable software projects and products. But, if we step back for a moment, what’s the best way for organizations to assess their progress toward the broader goal of maintaining a sustainable software supply chain?

I think it can be useful to view this as a framework with four distinct phases: identifying issues, responding to issues, pre-empting issues, and helping others.

  1. Identifying issues: The first step on the journey to supply chain sustainability is to pay attention to the issues that impact your software, such as CVEs related to the code you leverage. For example, when a CVE opens against the Linux Kernel and you are using this code, you know you have a security issue to close. This type of identification is the baseline for being able to say you can address security concerns impacting your code, your products, and your solutions.
  2. Responding to issues: The next phase in this maturity model is the ability to respond to issues quickly. It is important to note that this does require a repeatable process for issue resolution. If you’re responding in an ad-hoc, issue-by-issue manner, your response is not going to be particularly quick and it will certainly not be reliably repeatable. Implementing a repeatable process for issue resolution can be challenging, but I think a few specific actions can help.
  • Establish your base process: Meet with your core security team and design a shared process for addressing security issues. Think through the specific steps that your company will take from issue identification to remediation: who, what, when, where, and how.
  • Automate the heavy lifting: Once you have that core concept of response, the next step is to automate where your current workflow allows, especially tasks like catching CVEs that relate to your code. It means your programmers can keep their eyes on coding while the CVE database is monitored in the background (a sketch of one such background check follows this list).
  • Iterate over time: Keep refining your approach to reflect how you actually work and what your actual market demands are.
  3. Pre-empting issues: Identifying and responding to issues is the key to a solid security foundation. Pre-emption can sound like a lot to add, and it should be clear that a lot of companies have not reached this stage or have only done so to a limited degree. However, it is possible to gradually add processes that can flag issues before they become problematic. One example is that if an open source project is used in a critical product, service, or solution but has only a couple of maintainers, it is prudent to consider resourcing or otherwise addressing the dependency on that project to avoid obvious points of failure. An important point here is that no one is alone in this effort in the field of open source. Many organizations, like OpenChain and OpenSSF, are working on ways to support the whole open source ecosystem and make pre-emption much more accessible and visible to all parties regardless of their individual organizational resources.

  4. Helping others: The final phase of this maturity model goes one step further, where companies adopt an ongoing, continuous approach of working together and supporting the open source ecosystem. It is not uncommon in open source to see competitors working on code together, and it’s increasingly common to see companies solving security issues together. This should not be framed as altruism. Sharing development on common but non-differentiating infrastructure code is prudent: each organization invests fewer resources relative to the codebase it gains. It is a strategic measure that we see growing every year, and it provides a solid end game for maximizing the potential for sustainability in code. We are back to the point that security is an ongoing process where continual improvement is of the utmost importance, and it is beneficial to have as many organizations improving code and sharing notes as possible.
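
To show how the “automate the heavy lifting” point above can become a repeatable process rather than an ad-hoc task, here is a minimal sketch that periodically re-checks the components listed in an SBOM against OSV.dev and reports only new findings. It builds on the same assumptions as the earlier lookup sketch: the file name, the single hard-coded ecosystem, and the daily polling loop are placeholders, and a real deployment would feed results into ticketing or alerting rather than printing them.

```python
import json
import time
import urllib.request

# A repeatable background check: re-scan the components in an SBOM against
# OSV.dev and report only findings that have not been seen before.
# File name, ecosystem, and polling interval are illustrative placeholders.

def known_vulns(name: str, version: str, ecosystem: str) -> list:
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("vulns", [])

def scan_once(sbom_path: str, seen: set) -> None:
    with open(sbom_path) as f:
        components = json.load(f).get("components", [])
    for component in components:
        name, version = component.get("name"), component.get("version")
        if not name or not version:
            continue
        for vuln in known_vulns(name, version, "PyPI"):
            if vuln["id"] not in seen:
                seen.add(vuln["id"])
                print(f"NEW: {vuln['id']} affects {name} {version}")

if __name__ == "__main__":
    already_reported: set = set()
    while True:
        scan_once("sbom.json", already_reported)
        time.sleep(24 * 60 * 60)  # re-check once a day
```

The specific wiring matters less than the principle: the check runs the same way every time, without waiting for a person to remember to do it.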

Ultimately, whether you’re selling a device or a service, your customers expect security. Security is not a selling point but rather a given. Like the wheels on a car, it has to be there. Collaborating on security around shared functional code will improve rather than erode your competitive advantage, freeing up mindshare and resources to work on the differentiating aspects of the products and solutions you bring to market.

Let’s sum it up. To address security sustainability you need to think about processes, evolution, automation, and engagement. Start by identifying issues. Develop the ability to respond quickly with repeatable processes. Create a strategy to pre-empt issues. And, finally, collaborate with your peers and competitors so that you can close off such issues more effectively. It is a simple, clear maturity model that can help your supply chain and your customers starting today.

About the Author

Shane is the general manager of OpenChain, a Linux Foundation project dedicated to building trust in the software supply chain. He has an extensive background in copyright, patents, and open source, and currently serves as an advisor to World Mobile and on the management board of OpenForum Europe.