Recently, Anthropic announced Project Glasswing, a joint initiative with Microsoft, AWS, Apple, Google, and a coalition of others that aims to help defend against rising threats from AI-enabled vulnerability exploitation.
Project Glasswing is a direct response to the growing evidence that AI models have crossed a threshold where they can autonomously find and exploit software vulnerabilities at unprecedented speed and scale. The particular model that seems to have triggered this reaction, Claude Mythos Preview, has apparently already identified thousands of previously unknown vulnerabilities across every major operating system and browser.
While different corners of the security world have different takes on the current threat posed by advanced AI models, one indisputable fact is that these models are accelerants across multiple dimensions.
AI enables open source maintainers to build and release versions more quickly than before. AI enables attackers to find and exploit vulnerabilities more quickly than before. And, when funneled properly, AI can also be an important part of the security solution.
A Problem That Predates the AI Moment
Although models like Claude Mythos are poised to be significant accelerants, it is important to keep in mind that the trend toward velocity in software development and security is not a new one.
For one, like all parts of the software development ecosystem, open source has been shifting toward more frequent release cycles for quite some time. While specific data is hard to come by, one study reported a 1,466 percent increase in release frequency between 2014 and 2023 (a period that obviously pre-dates the creation of Mythos-like models).
Relatedly, the volume of known vulnerabilities has been growing relentlessly. Before 2016, there had never been a year in which over 8,000 CVEs were published. In 2023 (again, before the publication of modern AI models), that number climbed past 28,000.
What AI Changes
While the velocity of software development and attacks significantly increased before the release of advanced AI models, there was generally a relatively lengthy gap between the time when a vulnerability was disclosed and when it was actively weaponized at scale. That gap gave defenders time to patch.
New attack methods, including the use of AI, are narrowing the defensive window.
Rapid7's 2026 Global Threat Landscape Report found that the median time between vulnerability publication and its inclusion in CISA's Known Exploited Vulnerabilities catalog dropped from 8.5 days to 5 days in a single year.
This chart from Zero Day Clock (a wonderful resource that covers several dimensions around vulnerabilities and exploitability) takes a broader view than Rapid7 but arrives at a similar, albeit even starker, conclusion. While the time from disclosure to exploitation had been gradually decreasing for years, the jump in attack velocity from 2025 to 2026 alone was remarkable.
In addition to general trends, we've also seen concrete, documented instances of AI-enabled attacks. In November of 2025, Anthropic reported that a Chinese state-sponsored group used Claude to attempt to infiltrate multiple targets across the globe, and “succeeded in a small number of cases.” Earlier this month, a new report came out detailing specifics of a prolonged, AI-powered breach of government agencies in Mexico. We expect similar occurrences to continue in the months and years ahead.
Of course, AI isn't fundamentally changing what attackers want; rather, it's automating previously manual techniques (reconnaissance, decision-making, and social engineering) to significantly increase the speed of execution.
Why Outdated Dependencies Are Prime Targets
If you're an attacker with AI-enabled vulnerability discovery at your disposal, you probably won’t go looking for zero-days first. Rather, you'll search for known vulnerabilities in commonly used packages that your targets haven't patched yet. The ratio of effort to reward is dramatically better.
Consider, for example, a codebase that's 10 versions behind on a core dependency: it's almost certainly carrying known CVEs. Those CVEs have public records. They have documented attack patterns. And now, they can be systematically discovered and chained together in novel ways to exploit apps at a pace no human team could manually defend against.
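To make the "systematically discovered" point concrete, here's a minimal sketch of how mechanical this matching is. The package names, versions, and CVE identifiers below are invented for illustration; a real tool would query an advisory database such as OSV or the NVD rather than a hardcoded table.

```python
# Sketch: flag pinned dependencies whose versions fall inside known-vulnerable
# ranges. The advisory data here is hypothetical, not drawn from real CVEs.

def parse(version):
    """Split a version string like '4.1.2' into a comparable tuple (4, 1, 2)."""
    return tuple(int(part) for part in version.split("."))

# Hypothetical advisory index: package -> (CVE id, first fixed version)
ADVISORIES = {
    "examplelib": ("CVE-XXXX-0001", "4.2.0"),
    "otherpkg": ("CVE-XXXX-0002", "2.0.5"),
}

def vulnerable(pinned):
    """Return (package, cve) pairs for pins below the first fixed version."""
    findings = []
    for pkg, version in pinned.items():
        if pkg in ADVISORIES:
            cve, fixed = ADVISORIES[pkg]
            if parse(version) < parse(fixed):
                findings.append((pkg, cve))
    return findings

pins = {"examplelib": "4.1.2", "otherpkg": "2.1.0", "cleanpkg": "1.0.0"}
print(vulnerable(pins))  # examplelib is behind its fix; the others are clean
```

The point isn't the ten lines of comparison logic; it's that this entire loop is trivially automatable, which is exactly why unpatched known CVEs are such an attractive target.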
At the same time, our view is that the consumers of open source (the teams actually building and shipping products) have struggled to find ways to keep their dependencies current.
The reason for this is understandable: dependency updates are disruptive. A major version bump can require hours of engineering time to research, test, and safely land. Automated tools like Dependabot and Renovate help surface the work but don't do the work. They generate a pile of PRs, and the updates those PRs propose often include breaking changes that still demand engineering time.
Of course, you can't manually research every upstream changelog, assess every breaking change, and write every compatibility fix across hundreds of components per codebase. No engineering team has the bandwidth. The tools that exist today mostly surface the work without completing it, which is how backlogs grow instead of shrinking.
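One common triage heuristic (assuming the packages follow semantic versioning, which not all do) is to sort pending updates by the size of the version bump, so scarce engineering attention goes to the major bumps that are most likely to break things. The package names and versions below are hypothetical.

```python
# Sketch: triage pending dependency updates by semver delta. Assumes
# three-part MAJOR.MINOR.PATCH versions; real version schemes vary.

def bump_type(current, latest):
    """Classify an update as 'major', 'minor', 'patch', or 'none'."""
    cur = [int(x) for x in current.split(".")]
    new = [int(x) for x in latest.split(".")]
    if new[0] != cur[0]:
        return "major"
    if new[1] != cur[1]:
        return "minor"
    if new[2] != cur[2]:
        return "patch"
    return "none"

# Hypothetical backlog: package -> (pinned version, latest available)
pending = {
    "examplelib": ("4.1.2", "5.0.0"),  # major: likely breaking, needs research
    "otherpkg": ("2.1.0", "2.1.3"),    # patch: usually safe to land quickly
}

for pkg, (cur, new) in pending.items():
    print(pkg, bump_type(cur, new))
```

A heuristic like this only tells you where the risk probably is; it's the research-and-adapt work behind each major bump that actually consumes the engineering hours.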
How AI Can Be Part of the Solution
Keeping dependencies up to date isn't a new challenge. But the combination of AI-enabled threats that target outdated dependencies — and, critically, AI-enabled solutions that accelerate the update process — is making it more of a priority for our customers.
That reality is reflected in our work over the past year building fossabot. fossabot is an AI-powered, automated dependency update tool that approaches upgrades the way a skilled senior engineer would: researching new versions, identifying breaking changes that actually affect your code (not just the package in the abstract), adapting the codebase when needed, and delivering completed work directly as a pull request.
It works alongside the tools your teams already use and handles the class of updates that require real engineering judgment to complete.
Of course, keeping dependencies updated is only part of the solution for defending against AI-enabled vulnerability exploitation. Although we don’t yet know Project Glasswing’s precise outcome, we’re encouraged by the coalition’s commitment to helping open source maintainers build more secure software (and reduce the number of potential vulnerabilities in published releases). There’s also a role for traditional SCA platforms (like FOSSA’s solution), along with other security tools.
But it certainly feels crystal clear to us that outdated dependencies will only become more of an attractive attack vector as AI models continue to become more advanced. And, whether you’re using fossabot or another approach for dependency updates, we highly encourage your team to seriously consider ways to accelerate the process.
If you are interested in trying fossabot, we're offering credits for your first 100 upgrades/remediations for the next four weeks. Please get in touch with our team (either by requesting a demo on our website or by emailing me at aaron@fossa.com) if you'd like to get started or learn more.
