I recently had the opportunity to participate in a webinar with FOSSA to talk about SBOM regulations and various stages of SBOM program maturity. The webinar was focused on the financial services industry, but we covered several regulations impacting other sectors (along with general guidance applicable to all types of organizations) as well.
My previous blog post from this webinar included several actionable SBOM program recommendations. This piece will focus on some of the bigger-picture history and context related to regulations such as PCI DSS, DORA, the CRA, and more.
What follows is my attempt to organize the key regulatory themes we covered — why SBOMs became part of policy, what specific regulations are asking for, and how those requirements differ in scope and ambition — into a single piece focused on helping practitioners make sense of the landscape.
But before diving in, a quick note on my background for those who may not know me. I’ve spent much of my career working on software supply chain security, both inside and outside government. I previously served as a senior advisor and strategist at CISA, led foundational SBOM and vulnerability disclosure work at NTIA, and today advise organizations across industries on software supply chain risk, regulatory expectations, and SBOM programs.
What Led to Modern SBOM Regulations
The idea that we should know what’s in our software has been around for a long time, and for good reason. If you’re trying to manage risk, you start with: what do I have? You can’t defend what you don’t know about.
Early on, different communities pushed this in parallel. Some of the early OWASP folks cared deeply about it. The Linux Foundation leaned in early, especially because open source forces you to track what you’re actually using — licenses, provenance, and the inevitable “wait, how did that end up in this project?”
And then we hit the first real regulatory moment, in 2014. There was a proposal in the U.S. Congress, backed by cybersecurity advocates like Josh Corman, that would’ve required an SBOM as a matter of U.S. law. However, due in part to significant industry pushback, that proposed legislation wasn’t adopted.
So we tried a different approach. Around 2017–2018, instead of fighting it out in the usual lobbying channels, the U.S. government (through the National Telecommunications and Information Administration, part of the Department of Commerce) took a more pragmatic route: bring experts into the room and focus on making it easier and cheaper to do the right thing. The goal wasn’t to shame people into transparency. It was to build a community that could say: We all want this outcome, now how do we get there?
Then came a crisis that changed the temperature of the room: the 2020 SolarWinds software supply chain attack. That event raised awareness of something we’d been saying for years: it’s not enough to say “I trust this vendor.” You need to understand their supply chains too.
Now, I always want to be honest about this: SBOMs would not have prevented SolarWinds. But there’s an old saying in U.S. government circles that’s particularly relevant here: Never let a good crisis go to waste. SolarWinds became the forcing function to “level up” software expectations, especially through purchasing power. The White House approach wasn’t “here’s a regulation,” it was “if you want to sell to the U.S. government, you’ve got to meet a baseline.” And I’ll give you a hint: the U.S. government buys a lot of things.
That led to a key 2021 move: if you’re going to require an SBOM, you must be clear about what an SBOM is. So we defined a floor — minimum elements for SBOM data fields, automation, and processes. And yes: in 2021, that floor was intentionally basic. For example, it only asked for top-level dependencies, because at the time, it would have been a struggle to demand “tell me all of your dependencies” from everyone. Today, that’s far more feasible, and you’re seeing regulators evolve accordingly.
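To make the "floor" concrete, here’s a rough sketch of what those minimum elements look like as data. The field names below mirror the NTIA minimum-elements list rather than any specific SBOM schema; real SBOMs would serialize these in a standard format like SPDX or CycloneDX, and the component values are made up for illustration.

```python
# Illustrative only: an SBOM entry carrying the NTIA "minimum elements"
# baseline data fields. Field names mirror the NTIA list, not a specific
# schema; the component data is invented.
minimal_sbom_entry = {
    "supplier_name": "Example Corp",          # who supplies the component
    "component_name": "libexample",           # what the component is called
    "version": "2.4.1",                       # version string
    "unique_identifier": "pkg:generic/libexample@2.4.1",  # e.g. a purl
    "dependency_relationship": "depends-on",  # relationship to the parent
    "sbom_author": "Example Corp Build System",  # who generated the SBOM data
    "timestamp": "2021-07-12T00:00:00Z",      # when the data was assembled
}

REQUIRED_FIELDS = {
    "supplier_name", "component_name", "version", "unique_identifier",
    "dependency_relationship", "sbom_author", "timestamp",
}

def meets_minimum_elements(entry: dict) -> bool:
    """Check that an SBOM entry has a non-empty value for every baseline field."""
    return all(entry.get(field) for field in REQUIRED_FIELDS)

print(meets_minimum_elements(minimal_sbom_entry))  # True
```

The point of a floor this basic is that it can be automated from day one: a build pipeline can populate and validate these fields without any manual curation.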
The bigger shift is that SBOM expectations are now appearing across industries and geographies. This includes medical devices (in the US and around the world), European requirements (including DORA and CRA), guidance in Japan, and strong activity in places like Korea. The common feature is straightforward: supply chain risk is real, and someone — an auditor, regulator, customer — will ask you to show you’ve done the work.
One more point that matters more than people think: no one is saying SBOMs must be public. A lot of folks panic: “I don’t want to show everyone what’s in my software.” Fine. You don’t have to. But you should look in the mirror and ask yourself why you don’t want to show people. That’s the spirit behind where this is going.
PCI DSS
PCI DSS is the Payment Card Industry Data Security Standard. It’s a private standard, not a government regulation, but it has teeth because it shows up in contracts and liability. If credit card data is running through your systems, the basic expectation is: protect the systems.
PCI DSS Requirement 6 is about developing and maintaining secure systems and software.
As it relates to SBOMs, Section 6.3.2 in PCI DSS 4.0 (and 4.0.1) effectively demands three things:
- An inventory of components in your software: What did you write? What did someone else write for you? What are the third-party components?
- That inventory has to be maintained: This can’t be a one-time compliance artifact.
- The point is vulnerability and patch management: PCI is trying to move organizations beyond “we ran a scan at the top level” toward “please look into your dependencies and keep them updated as new versions come out.”
The operational reality that makes PCI different for a lot of teams is that you get audited. An assessor can come in and say: Show me your documentation, show me the artifacts, walk me through the process. They want to see that the inventory is used, or at least usable, and that it fits into a living process, not a PDF sitting somewhere.
This is where I often draw a clean distinction: all good SBOMs are inventories, but not all inventories are SBOMs. An SBOM has to support automation. Structured, machine-readable data (think JSON) can feed vulnerability management and patch workflows, scale across lots of products, and support diffing and analysis that spreadsheets simply don’t handle well.
A spreadsheet can list “components,” sure, and this can theoretically be sufficient for PCI compliance. But when you’re trying to answer, “Why do I have two versions of the same crypto library?” or “Which downstream dependency dragged this in?” — that’s where an SBOM mindset pays off. And once you have it, you get bonus use cases people routinely underestimate: technical-debt metrics, end-of-life exposure, refactor forecasting, and more.
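Once the inventory is machine-readable, the duplicate-library question above becomes a few lines of code rather than a spreadsheet archaeology project. A minimal sketch, assuming a CycloneDX-style JSON document with a top-level `components` array of objects carrying `name` and `version` (the component data is invented):

```python
import json
from collections import defaultdict

def find_duplicate_components(sbom_json: str) -> dict:
    """Group component versions by name and return every name that
    appears with more than one version -- e.g. two copies of the same
    crypto library dragged in by different dependencies."""
    sbom = json.loads(sbom_json)
    versions = defaultdict(set)
    for component in sbom.get("components", []):
        versions[component["name"]].add(component["version"])
    return {name: sorted(vs) for name, vs in versions.items() if len(vs) > 1}

# Toy SBOM with a duplicated crypto library:
sbom = json.dumps({
    "components": [
        {"name": "openssl", "version": "1.1.1"},
        {"name": "openssl", "version": "3.0.2"},
        {"name": "zlib", "version": "1.2.13"},
    ]
})
print(find_duplicate_components(sbom))  # {'openssl': ['1.1.1', '3.0.2']}
```

The same grouping logic generalizes to the bonus use cases: swap the version check for an end-of-life lookup or a license filter and the inventory starts answering technical-debt questions too.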
DORA
DORA — the Digital Operational Resilience Act — is European, and it’s focused on the financial sector. The theme is exactly what it says on the label: operational resilience. The goal is to ensure trusted institutions can deal with security risks as they emerge.
Like PCI, DORA doesn’t always wave an SBOM flag in plain language. But it does require you to track usage of third-party libraries, explicitly calling out open source libraries, for services that support critical or important functions.
That “critical function” scoping matters. For software that performs a critical or important function, the expectation is that you need to know what’s in it — whether you built it or it came from somewhere else. If it’s not critical, or it’s more like commercial off-the-shelf software, the expectation becomes “to the best of your ability,” because DORA is balancing ambition with practicality, and because some of the supplier risk is handled elsewhere in the act.
One thing I’ll flag: DORA was among the first enacted approaches that very explicitly called out open source, basically acknowledging the reality that modern software is not built out of whole cloth. Even the stuff you “write” is stitched together from libraries, frameworks, and components you didn’t create.
And a forward-looking note: if you operate in Europe, you can’t think about DORA in isolation, because CRA will cover digital products broadly. So DORA is a major driver in finance, but it’s not the whole story.
CRA
In contrast to industry-specific regulations, the Cyber Resilience Act is quite broad. The basic scope is: If you sell something in Europe that has digital elements, and it’s not explicitly exempted, you’re covered. That’s not just “European companies.” That’s global companies, suppliers into Europe, organizations selling to European manufacturers, and yes, SaaS providers serving the European market.
I’ll also give the European Commission credit: The CRA has been a testament to the willingness to listen to technical expertise. Early drafts were ambitious in ways that were not always workable or grounded in modern engineering. One early idea would have effectively covered every piece of open source software on the planet that might end up in Europe. That got rolled back: If an organization takes ownership of it (in practical terms), certain obligations attach — but random individuals throwing a library into the world aren’t being handed impossible liability.
The CRA’s goals include a cross-cutting focus on vulnerability handling. The mindset is simple: Don’t introduce products into the European marketplace with obvious known vulnerabilities. Framed that way, it’s hard to argue with the intent. We learned long ago that you don’t get good outcomes by depending on end users to make critical security decisions.
Within that vulnerability-management scope, CRA expectations include basics like having a vulnerability disclosure policy; when someone finds a serious bug, there needs to be a clear path to report it.
And then we get to the part that SBOM practitioners are watching closely: CRA language includes requirements for both an SBOM and a list of components, although it’s worth noting that the language is still evolving. Drafts floating around suggest the SBOM will have familiar “required fields,” and it explicitly calls for at least top-level dependencies. But it also discusses a “list of components” that, in some draft language, has no wiggle room for “we don’t know about these components.”
That’s a good example of why I keep encouraging organizations to orient around the SBOM mindset, not just compliance checkboxes. SBOMs emphasize machine processability, commonly maintained data, and an operational workflow. An SBOM snapshot is useful, but SBOM processes give you continuous risk management — the kind of posture CRA is really trying to get organizations to demonstrate.
SEBI
SEBI (the Securities and Exchange Board of India) plays a role essentially analogous to that of the U.S. SEC. The motivation is similar to DORA: ensure that the institutions under SEBI’s purview have a decent level of security and resilience.
SEBI’s SBOM-related requirements are more ambitious and a bit more complicated. They break expectations into multiple buckets and go beyond a minimal “what’s in the box” view. For example, SEBI calls for:
- Licensing information
- Cryptographic hashes (which can be challenging if you don’t have the original artifact)
- Additional systematic/software-approach details, like capturing aspects of encryption usage (an area where industry is still learning what “good implementation” looks like)
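The hash requirement is the most mechanical of these, and it illustrates the catch in the parenthetical above: given the original artifact, producing a hash is trivial; without the exact bytes that shipped, there’s nothing to hash. A minimal sketch using Python’s standard library (SHA-256 here is my choice of a common algorithm, not something I’m attributing to SEBI’s text):

```python
import hashlib

def sha256_of_artifact(path: str) -> str:
    """Compute the SHA-256 digest of a component artifact on disk --
    the kind of value a hash-bearing SBOM field would record."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large artifacts don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

This is why hash fields are easiest to populate at build time, when the artifact is guaranteed to be on hand, rather than reconstructed later for an audit.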
And if SEBI feels intense, here’s some perspective: [India’s National CERT has proposed an SBOM vision](https://fossa.com/blog/sboms-india-analyzing-cert-in-guidelines/) with 21 fields, which is frankly maximalist. More data is often helpful — but yes, it’s possible to have too much data, especially if you don’t have the tools or processes to keep it trustworthy and current.
In practice, this is why a lot of organizations treat SEBI as the “if we can satisfy this, we can satisfy a lot else” benchmark. But the key isn’t just producing a document. It’s building a program that produces reliable data and keeps it current, because once you’re operating at SEBI-level stringency, you’re not really doing compliance anymore. You’re building a supply chain transparency capability that will get reused everywhere.
The Bottom Line on SBOM Regulations
If there’s one thread I hope comes through across all of these, it’s that the regulations differ in scope and wording, but they rhyme philosophically. They’re pushing the industry toward visibility that’s real, actionable, and maintained — not a one-time artifact, not a PDF you dust off during an audit, and not a box you check without changing how you actually manage software risk.
For more information on SBOM compliance requirements and SBOM program best practices, I’d encourage you to check out the full on-demand recording of my webinar with FOSSA. Note that the first part of the webinar is geared toward the financial services industry, while the second part includes more general SBOM guidance.
Additionally, teams in the process of exploring SBOM tools can reach out to FOSSA’s team for a demo of how their SBOM management solutions can automate the end-to-end SBOM lifecycle.
