
Slopsquatting: AI Hallucinations and the New Software Supply Chain Risk

April 21, 2025 · 8 min read

Generative AI coding assistants like ChatGPT and GitHub Copilot are reshaping how developers write software, but they also have the potential to introduce new software supply chain security risks. One emerging threat is what’s known as “slopsquatting,” which refers to AI’s tendency to hallucinate software package names.

Security researcher Seth Larson coined the term "slopsquatting" to describe this attack, with "slop" referring to the sloppy, incorrect output of AI.

Slopsquatting refers to the scenario where an AI hallucination presents a developer with a code snippet importing a fictitious library as if it were real. Threat actors can take advantage of this by registering those made-up package names as malicious packages.

In this article, we’ll analyze the slopsquatting phenomenon, see how AI hallucinations create supply chain vulnerabilities, examine real-world examples, and discuss mitigation strategies.

What is Slopsquatting?

Slopsquatting is a new variant of software supply chain attack where malicious actors capitalize on AI-generated phantom dependencies. Large language models (LLMs) sometimes confidently recommend code that uses non-existent libraries or packages. For example, an AI might suggest import fastjson in Python, even though no library by that exact name exists.
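To make this concrete, here is a hypothetical sketch of what an AI-suggested snippet with a phantom dependency might look like. The package name is purely illustrative, and whether it is claimed on PyPI can change at any time.

    # Hypothetical AI-suggested code: the import looks plausible, but the package name
    # may not correspond to any real, trusted library on PyPI.
    import fastjson  # hallucinated dependency; "pip install fastjson" either fails
                     # or fetches whatever package is registered under that name

    def load_config(path):
        """Parse a JSON config file using the (phantom) fastjson library."""
        with open(path) as f:
            return fastjson.loads(f.read())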

Case in point: According to a recent study, roughly 20% of generated code samples from various AI coding models included at least one recommended package that didn’t actually exist. Crucially, these aren’t all random one-offs; a majority of the fake names recurred frequently. Over 58% of hallucinated package names reappeared in multiple runs, and 43% showed up consistently across ten different attempts with the same prompt. In other words, many hallucinated names are systematic and predictable rather than sheer noise.

It’s this repeatability that makes slopsquatting practical for attackers. Unlike traditional typosquatting (which relies on common misspellings of real package names), slopsquatting targets the “slop”: plausible-sounding names that the AI invents outright. Many hallucinated packages sound legitimate or “on-brand” for their ecosystem, so a developer might not immediately recognize them as fake. For instance, an AI might invent a Python package called "dataframe-utils" (when the real, popular library is pandas), which looks superficially credible.

How Slopsquatting Works

The slopsquatting attack cycle involves both the AI’s behavior and software package ecosystems (like npm or PyPI). Here’s how it typically unfolds in practice:

  1. AI Hallucinates a Dependency: A developer asks an AI coding assistant for help (for example, “How do I parse YAML in Python?”). The AI generates code and includes an import or dependency on package X; unknown to the developer, “X” doesn’t exist in the official package index. The AI essentially made it up based on learned patterns.

  2. Initial Installation Failure (Signal to Attacker): If the developer immediately tries to install or run the code, the package manager will complain “package not found.” At this point, the developer might realize something’s off and search the package name. This is the window of opportunity: attackers monitoring for such names (or even running their own AI queries to collect hallucinated names) notice that “Package X” is unclaimed on the repository (a minimal sketch of such a check appears after this list). In some cases, the AI’s answer or the user’s online discussion might leak the fact that people are looking for “X.”

  3. Attacker Registers the Package: A malicious actor quickly creates a new project on the package repository using the exact name X and uploads a malicious version of it. This can be done in minutes on open registries. They may design the package to look legitimate – adding a convincing README, perhaps even a fake GitHub repo link – but include a payload (for example, code that steals environment variables or opens a backdoor on install).

  4. The Trap is Set in the Supply Chain: Now that “Package X” exists on (say) PyPI or npm, anyone who searches or tries to install it will actually get the attacker’s code. The next developer who uses an AI assistant and gets the same suggestion won’t see an error — instead, their build will successfully download the dependency. From the developer’s perspective, the code “magically” works (the AI recommendation appears validated), but under the hood they’ve pulled in malware. This completes the compromise.
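The “package not found” signal in step 2 is easy to check programmatically, which is part of what makes hallucinated names discoverable to attackers and defenders alike. Below is a minimal sketch that queries PyPI’s public JSON API to see whether a name is registered; it is illustrative only, and the result for any given name can change the moment someone claims it.

    # Minimal sketch: check whether a package name is currently registered on PyPI.
    # Uses PyPI's public JSON API (https://pypi.org/pypi/<name>/json); a 404 means
    # the name is unclaimed. Results can change at any time.
    import urllib.error
    import urllib.request

    def exists_on_pypi(name: str) -> bool:
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    if __name__ == "__main__":
        for candidate in ["requests", "dataframe-utils"]:  # names mentioned in this article
            status = "registered" if exists_on_pypi(candidate) else "unclaimed"
            print(f"{candidate}: {status}")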

Because many hallucinated names recur (thanks to the AI’s consistent patterns), attackers don’t have to guess blindly. Researchers note that “a majority of hallucinations are not just random noise, but repeatable artifacts of how the models respond to certain prompts,” which “increases their value to attackers”: one can observe a handful of AI outputs and identify viable targets to squat. All an attacker needs is for the AI model to repeat that phantom name to other users, and their malicious package can start propagating.

It’s worth emphasizing that slopsquatting is closely related to known supply chain attacks like typosquatting and dependency confusion, but with a GenAI twist. Typosquatting traditionally involves registering mistyped variants of popular packages (for example, publishing requestss to catch someone typo-ing requests). Dependency confusion involves using the same name as an internal package so that build systems grab the attacker’s version from a public repo. Slopsquatting, on the other hand, targets names that no human ever used, only an AI did. In effect, the AI’s imagination creates a new supply chain vulnerability surface. As one report from TechRadar succinctly puts it: “GenAI can hallucinate open source package names… and cybercriminals can use the names to register malware.”

Mitigation Strategies and Best Practices

As of this writing, we’re not aware of a broadly successful slopsquatting exploit. So the risk is somewhat theoretical for now — which isn’t a surprise considering it’s such a novel attack — but may not be for much longer.

Tackling AI-induced supply chain risks like slopsquatting requires a mix of technical controls and cultural practices. The good news is that many best practices for software supply chain security still apply — they just need renewed emphasis in the context of AI-assisted coding. Here are key strategies being recommended and adopted in the industry:

  • Verify and Monitor Dependencies Diligently: Developers must assume any package name from an AI could be wrong or malicious until proven otherwise. Never install a dependency blindly because “the AI said so.” Always check that the package actually exists on the official registry and inspect its provenance (a sketch of this kind of check appears after this list). Confirm the exact spelling and repository URL. If it’s unfamiliar, search for the package online: is it on GitHub? Does it have a community? In enterprise settings, use automated dependency auditing tools to flag anomalies. For example, vulnerability management tools like FOSSA inventory open source components and can warn of known vulnerabilities or suspicious packages.

  • Tune and Validate AI Code Suggestions: If you’re using an AI coding assistant, configure it and use it in a way that minimizes hallucinations. Many AI tools have settings like temperature (which controls randomness), and lowering the temperature can reduce hallucinated outputs significantly; research showed that more deterministic settings led to far fewer made-up packages (see the sketch after this list). Some commercial code assistants might also have guardrails; for instance, newer LLMs have been found to sometimes recognize their own hallucinated package suggestions and could potentially warn about them. Even then, human oversight is crucial: review AI-generated code before running it in your project. For example, it’s wise to test AI-generated code in a safe, isolated environment first, and don’t let the AI directly install packages on your system without your confirmation.

  • Leverage SBOMs: An SBOM (software bill of materials) is essentially a list of all components (libraries, modules, etc.) in your software, along with their versions and origins. An up-to-date SBOM gives teams the visibility to quickly spot unexpected or unauthorized dependencies.
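For the dependency-verification point above, a few provenance signals (project description, linked homepage or repository, number of releases) can be pulled from the registry before you trust an AI-suggested name. Here’s a minimal sketch against PyPI’s public JSON API; the fields shown are a starting point, not a complete vetting process.

    # Minimal sketch: gather basic provenance signals for a package from PyPI's JSON API
    # before installing an AI-suggested dependency. A sparse or missing record is a red flag.
    import json
    import urllib.request

    def provenance_signals(name: str) -> dict:
        url = f"https://pypi.org/pypi/{name}/json"
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
        info = data["info"]
        return {
            "summary": info.get("summary"),
            "project_urls": info.get("project_urls"),   # GitHub repo, docs, etc.
            "release_count": len(data.get("releases", {})),
        }

    print(provenance_signals("requests"))  # a long-established package, for comparison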
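And for the point about tuning AI settings, here is a minimal sketch assuming the OpenAI Python SDK; the model name is illustrative, and other assistants expose similar randomness controls. Lower temperature makes output more deterministic, which the research discussed above associates with fewer hallucinated packages; the suggestion should still be reviewed before anything is installed.

    # Minimal sketch, assuming the OpenAI Python SDK (and OPENAI_API_KEY in the environment).
    # A lower temperature makes completions more deterministic, which research has linked
    # to fewer hallucinated package names. Always review the output before installing anything.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",      # illustrative model name
        temperature=0.2,     # more deterministic than the default
        messages=[{"role": "user", "content": "How do I parse YAML in Python?"}],
    )
    print(response.choices[0].message.content)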

Slopsquatting: A Final Word

The rise of slopsquatting illustrates that even our AI assistants can introduce security pitfalls — in this case, by imagining code that our package ecosystems aren’t prepared for. The software supply chain was already under siege from those who would exploit any weakness; AI hallucinations present a new kind of crack that we need to seal before it widens.

The good news is that the community is responding. Awareness of slopsquatting — along with other novel software supply chain threats — is rapidly growing, and with it comes action: better AI model training (and perhaps future AI that knows to check its facts), improved tooling for dependency vetting, and stronger cultural norms about verifying code.

And, ultimately, the combination of technology (like vulnerability scanners, SBOMs, and secure AI tools) and culture (training, policies, and vigilant practices) can go a long way toward minimizing threats like slopsquatting moving forward.
