GitHub Copilot and other generative AI tools have become increasingly popular in software development, and for good reason. Developers report significant productivity gains, plus greater professional fulfillment, when using AI coding tools.

But as tends to be the case with early-stage technologies, there’s still a lot we don’t know about generative AI, including potential legal, security, privacy, and maintainability risks.

For organizations with very high risk tolerances, these concerns may not be a barrier to adoption. On the other end of the spectrum, risk-averse businesses may determine that the uncertainty surrounding generative AI is too great to justify its use.

There’s also a third category: organizations that want to adopt AI coding tools, but only if they can put appropriate risk-mitigation measures in place. In this blog, we’ll explain several of the biggest areas of uncertainty with tools like Copilot, along with tools and best practices to manage them.

Note: Many of the processes and best practices we cover in this blog will be specific to GitHub Copilot, but the higher-level concepts are applicable regardless of your specific AI coding tool.

1. Legal and Intellectual Property

Open source license compliance and copyright law are perhaps the biggest areas of uncertainty around AI coding tools. This is the case for several reasons.

A common concern we’ve heard from in-house legal teams relates to the fact that ML models are trained on open source libraries, including copyleft-licensed ones. This has raised questions about whether generative AI output is itself subject to copyleft, and, if so, whether users would need to comply with the original copyleft licenses.

Additionally, guidance issued by the U.S. Copyright Office in March 2023 indicated that some works created with generative AI aren’t copyrightable. This, of course, could limit an organization’s ability to protect the IP it creates.

Strategies to Reduce Legal and IP Risks

In response to concerns over potential IP risk stemming from GitHub Copilot, Microsoft recently announced that it would offer certain legal protections to Copilot customers. Specifically, Microsoft will assume legal responsibility for copyright infringement claims brought against paying GitHub Copilot customers over their use of Copilot. The offer applies only to customers who use Copilot’s built-in “guardrails and content filters,” which we’ll discuss more later in this section.

Although this commitment will certainly be welcomed by paying Copilot users, it doesn’t erase all risks that can come with Copilot (nor does it cover other AI coding tools, of course). Here are a few other steps organizations can take to further manage IP-related risk.

Scan generative AI code output

As you would with any open source code, it’s a best practice to conduct license scanning on generative AI output. Tools like FOSSA help detect copyleft-licensed files and surface the accompanying compliance obligations. Additionally, FOSSA’s new snippet scanning feature can detect and match copyleft-licensed snippets potentially included in AI output.
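If you want to make this automatic, one option is to wire the scan into CI so it runs before AI-assisted changes are merged. Below is a minimal sketch in Python, assuming the fossa CLI is installed and authenticated via the FOSSA_API_KEY environment variable, and that `fossa test` exits non-zero when unresolved issues are found (exact commands and behavior may vary by CLI version and configuration).

```python
"""Minimal CI gate sketch: run a FOSSA license scan before merging AI-assisted changes.

Assumptions (ours, not from this post): the `fossa` CLI is installed, FOSSA_API_KEY
is set in the environment, and `fossa test` exits non-zero when unresolved license
issues are found. Adjust to match your CLI version and policy setup.
"""

import subprocess
import sys


def run(cmd: list[str]) -> int:
    """Run a command, streaming its output, and return the exit code."""
    print(f"$ {' '.join(cmd)}")
    return subprocess.call(cmd)


def main() -> int:
    # Upload a dependency/license analysis of the current project.
    if run(["fossa", "analyze"]) != 0:
        print("fossa analyze failed; see output above.")
        return 1

    # Fail the build if the scan surfaced unresolved license or policy issues.
    if run(["fossa", "test"]) != 0:
        print("License issues detected; review before merging AI-generated code.")
        return 1

    print("License scan passed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Many teams run the equivalent of this as a required status check on pull requests that include AI-generated code.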

Enable GitHub Copilot’s optional duplication detection filter

Configuring GitHub Copilot to avoid suggesting exact matches of public code from its training data can help reduce license compliance risk. It’s also worth noting that to the extent your company gets an IP indemnity from GitHub, GitHub will only honor it if you have the duplication detection filter turned on. (See Clause 4 in this document for details.)

You can turn the duplication detection filter on by clicking your profile picture in the upper-right corner of any page and then selecting “Settings.” Next, select “GitHub Copilot” in the left sidebar. Finally, under “Suggestions Matching Public Code,” select “Block,” and then save your updated settings.

Tag AI-produced code

Consider implementing a code-tagging system to differentiate between AI- and human-created code. This will come in handy in the event you need to rip and replace the AI-created portions.
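There’s no standard way to do this today, but a lightweight convention can work. The sketch below is a hypothetical example (the “ai-gen” markers are our own invention for illustration, not a Copilot or FOSSA feature): engineers wrap AI-assisted regions in begin/end comments, and a small script inventories them so the AI-created portions can be located later.

```python
"""Sketch of a hypothetical tagging convention for AI-assisted code.

The marker format below ("ai-gen: begin/end") is an illustrative convention,
not an existing standard; use whatever your team agrees on.
"""

import pathlib
import re

# Example markers an engineer would place around AI-assisted regions:
#   # ai-gen: begin tool=copilot reviewer=<name>
#   ...generated code...
#   # ai-gen: end
BEGIN = re.compile(r"#\s*ai-gen:\s*begin\b(?P<meta>.*)")
END = re.compile(r"#\s*ai-gen:\s*end\b")


def inventory(root: str = ".") -> list[tuple[str, int, int, str]]:
    """Return (file, start_line, end_line, metadata) for each tagged region."""
    regions = []
    for path in pathlib.Path(root).rglob("*.py"):
        start, meta = None, ""
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            match = BEGIN.search(line)
            if match:
                start, meta = lineno, match.group("meta").strip()
            elif END.search(line) and start is not None:
                regions.append((str(path), start, lineno, meta))
                start = None
    return regions


if __name__ == "__main__":
    for file, start, end, meta in inventory():
        print(f"{file}:{start}-{end} {meta}")
```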

2. Security

AI coding tools like GitHub Copilot train on a wide range of open source libraries, some of which are impacted by known vulnerabilities. This can, in theory, mean the tools will produce output that’s also affected by security issues. For example, one academic study (published on arXiv) found that roughly 40 percent of programs completed with Copilot were vulnerable.

Strategies to Reduce Security Risks

Use scanning tools

Scanning AI coding tool output with an SCA tool (like FOSSA) is a best practice. If and when FOSSA matches an AI snippet to a dependency, we’ll surface known vulnerabilities associated with that dependency along with suggested fixes.

It’s also a good practice to add any confirmed matches to your software bill of materials (SBOM) and to request SBOMs from your software suppliers when and where possible.
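As a rough illustration of what that can look like in practice, here’s a minimal Python sketch that reads a CycloneDX-format JSON SBOM and flags components with copyleft licenses. This is our own example, not a FOSSA or CycloneDX tool; the copyleft list is illustrative only, and real SBOMs may carry license expressions or names that need more careful handling.

```python
"""Sketch: flag copyleft-licensed components in a CycloneDX JSON SBOM.

Assumptions (ours, not from this post): the SBOM follows the CycloneDX JSON
format, and the copyleft identifier list below is illustrative, not exhaustive.
"""

import json
import sys

# Illustrative, non-exhaustive set of copyleft SPDX identifiers.
COPYLEFT = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only", "LGPL-3.0-only"}


def component_licenses(component: dict) -> set[str]:
    """Collect SPDX ids, names, and expressions attached to a CycloneDX component."""
    found = set()
    for entry in component.get("licenses", []):
        lic = entry.get("license", {})
        found.update(filter(None, [lic.get("id"), lic.get("name"), entry.get("expression")]))
    return found


def main(path: str) -> None:
    with open(path) as f:
        sbom = json.load(f)
    for comp in sbom.get("components", []):
        hits = {value for value in component_licenses(comp)
                if any(ident in value for ident in COPYLEFT)}
        if hits:
            print(f"{comp.get('name')}@{comp.get('version')}: {', '.join(sorted(hits))}")


if __name__ == "__main__":
    main(sys.argv[1])
```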

3. Code Privacy

In some situations, GitHub collects user data — suggestions and prompts — to retrain the Copilot model. This can result in GitHub accessing, storing, and using data that you may not want it to.

Copilot defines prompts as “the bundle of contextual information the GitHub Copilot extension sends when a user is working on a file and pauses typing, or when the user opens the Copilot pane.”

Copilot defines suggestions as “one or more lines of proposed text returned to the GitHub Copilot extension after a Prompt is received and processed by the AI-model.”

GitHub doesn't train Copilot on private repos (aside from the engagement context described in this section), but it does train Copilot on public repos. So, although you can't ask GitHub to unlearn what the model has already learned from existing public repos, you can consider taking new repos private if this is a concern.

Strategies to Reduce Code Privacy Risks

GitHub Copilot for Business has a standard policy not to capture suggestions and prompts, but users of the individual plan will need to opt out. You can do this by going to “Settings” and deselecting “Allow GitHub to use my code snippets for product improvements.”

Additionally, if you’ve used Copilot before opting out of prompt and suggestion retention, you can reach out to GitHub support to request that Copilot delete prompts and suggestions associated with your account.

Both the individual and business versions of Copilot do retain “user engagement data,” defined as “information about your interactions with the IDE or editor… actions like accepting or dismissing suggestions, as well as general usage data and error information,” and there’s no way to opt out of this. However, GitHub notes that this user engagement data is “stored in a way that does not directly identify you.”

4. Maintainability

If you don’t understand your code, you likely won’t be able to maintain it. This can be a concern when using Copilot, especially when the AI output represents a large portion of your program.

It’s important to pay particularly close attention to auto-generated comments; Copilot can’t necessarily be trusted to explain the code it emits. For example, Copilot will happily generate comments that aren’t valid at all (e.g., code examples that don’t actually work). We’ve also seen cases where a user writes a comment header and Copilot then generates a comment and code that have nothing to do with it.
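As a purely hypothetical illustration (not actual Copilot output), here’s the kind of mismatch to watch for: a generated docstring that reads plausibly but describes behavior the code doesn’t have.

```python
# Hypothetical example (not actual Copilot output): the generated docstring
# claims case-insensitive, whitespace-tolerant matching, but the code does a
# plain exact comparison. A reviewer should fix either the comment or the code.


def is_duplicate(existing: list[str], candidate: str) -> bool:
    """Return True if `candidate` already appears in `existing`,
    ignoring case and surrounding whitespace."""  # <- not what the code does
    return candidate in existing


# A corrected version whose behavior matches its documentation:
def is_duplicate_normalized(existing: list[str], candidate: str) -> bool:
    """Return True if `candidate` matches an entry in `existing`,
    ignoring case and surrounding whitespace."""
    normalized = {item.strip().lower() for item in existing}
    return candidate.strip().lower() in normalized
```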

Strategies to Reduce Maintainability Risks

First and foremost, you should carefully review comments generated by Copilot (or other AI coding tools) to make sure you understand and agree with them. If you don’t (or if you’re uncertain), consider erring on the side of deleting them and writing your own.

Ultimately, from a maintainability perspective, the most important thing is that any engineer is able to later understand why a piece of code is included. If the engineer checking in the code doesn’t understand it now, no one can be expected to understand it later.

5. Code Quality and Blind Trust

For all of the ways they help developers build more efficiently and effectively, GitHub Copilot and similar AI coding tools aren’t foolproof. There’s still a risk the tools will output biased, offensive, or low-quality code.

Strategies to Reduce Code Quality Risks

It’s best to view AI coding tools as a more powerful autocomplete rather than a fire-and-forget solution, and that certainly applies to ensuring code quality. For that reason, our view is that code should be reviewed by an engineer before it goes into the codebase; blindly accepting suggestions is a no-go. Consider making engineers responsible for the code they check in, regardless of how it was generated. If engineers check in biased or offensive code, they should be prepared to accept responsibility for it.

Additionally, although GitHub does have filters in place to prevent offensive suggestions from being output, the organization does ask that users report any incidents to copilot-safety@github.com.

Managing GitHub Copilot Risks: The Bottom Line

Different organizations will have different risk tolerances, so there’s no one-size-fits-all set of strategies or tools to address the concerns described in this blog. But we hope you find some of the strategies we discussed useful.

Another step we’ve seen some organizations take is exploring the creation of a cross-functional generative AI office. This group (which should include representation from at least legal, DevOps, and engineering teams) can create and implement policies governing the use of generative AI tools in software development and beyond. It might also be responsible for evaluating risk-reduction strategies and tools (like the ones discussed in this blog) and staying on top of new developments in the generative AI landscape.

For more information about using AI coding tools like Copilot in a safe and responsible manner, we recommend viewing our on-demand webinar “Managing GitHub Copilot Security and Legal Risks with FOSSA.” During the webinar, we discussed the suggestions from this blog in more detail and demoed new FOSSA features designed to help manage AI coding tool-related security and legal risks.