
Empower developers with AI policy and governance

Ryan Salva // VP of Product // GitHub

A modern developer organization must have a clear and detailed policy that outlines the proper use of AI tools. Developers are already using AI, both for personal and professional work, and these policies are your opportunity to protect your code's integrity and security, boost productivity, and encourage efficient best-practice use of AI tooling.

In this guide, you will learn:

  • How to use internal developer policies to encourage safe and efficient use of AI 

  • What to include in an internal developer policy for AI


When integrating new tools, including AI technologies, it's crucial to ensure the security of your codebase. Your internal policies should not only specify which tools are approved for use, but also outline any limitations on their usage. Philips has achieved success with GitHub Copilot by granting their developers broad access while stressing the accompanying responsibility:

We emphasize the importance of developers taking full responsibility for implementing suggestions. The proposed code must meet the same strict quality requirements as self-written code. Understanding the code is a fundamental prerequisite to ensuring the highest possible quality and safety of our software products.

Marko Beelmann // Senior Architect - Software Excellence // Philips

Accelerating productivity through governance and policy

Effective governance and policies in your organization should focus on directing your team’s energy, as well as setting boundaries. A well-designed internal governance model provides clear direction for developers and aligns their efforts with the organization's objectives. It's not just a set of rules for AI usage; it's a guide for achieving your business goals.

Creating a policy framework

In terms of practical application, this means implementing a governance strategy that has a comprehensive policy framework covering various critical aspects, such as:

  • Purpose: Establish the core objective of the policy, communicating the importance of maintaining data security and code integrity when using AI tools. This sets the stage for responsible and secure usage of AI in coding practices.

  • Scope: Clarify the applicability of the policy. Ensure that all relevant personnel, particularly developers using generative AI tools, are aware of their responsibilities and the boundaries of the policy.

  • Responsible use: Emphasize the importance of using AI tools as an aid rather than a substitute for human expertise. Remind developers that they are accountable for the code generated with AI assistance, and it must adhere to existing coding standards and practices.

  • Intellectual property: Ensure compliance with intellectual property laws and internal policies. This includes using features in AI tools that prevent copying or replicating public code.

  • Output validation: Mandate thorough scrutiny of AI-generated code. Developers must ensure that such code adheres to the company’s standards, doesn't introduce security risks, and aligns with project requirements, just like any manually written code.

  • Performance monitoring: Require ongoing assessment of the AI tool's effectiveness. Developers need to track the quality of the code produced and the tool’s impact on productivity, and address any arising issues or limitations.

  • Documentation: Stress the need for detailed record-keeping regarding the use of AI tools. This documentation should cover who used the tool, for what purpose, and in what manner, to aid in tracking effectiveness and ensuring compliance with the policy.

  • Training: Highlight the necessity of adequate training for developers using AI tools. This includes understanding the tool's capabilities, limitations, and the principles outlined in the policy, ensuring competent and responsible usage.

  • Policy review: Underline the importance of keeping the policy current and effective. Regular reviews and updates are necessary to adapt to evolving technologies and coding practices, maintaining the policy's relevance. Any changes should also be communicated clearly to your employees.
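One way to make a framework like this actionable is to capture its key decisions in a machine-readable manifest that internal tooling can check during onboarding or audits. The sketch below is purely illustrative; every key name is an assumption, not a real GitHub Copilot or GitHub configuration schema:

```yaml
# Hypothetical policy manifest; all keys are illustrative, not a real schema.
ai_tooling_policy:
  version: "1.0"
  approved_tools:
    - name: github-copilot
      scope: [ide-completions, chat]     # "Scope": where the tool may be used
      block_public_code_matches: true    # "Intellectual property": filter public-code matches
  responsible_use:
    human_review_required: true          # "Output validation": review AI code like any other
  monitoring:
    track_suggestion_acceptance: true    # "Performance monitoring": measure impact over time
  review_cadence_months: 6               # "Policy review": revisit on a fixed schedule
```

A manifest like this keeps the written policy and the enforced settings from drifting apart, since both can be reviewed in the same place.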

For more information, see our Sample Policy for Internal Developer Use of AI Tools.

Ensuring best-practice use of AI tooling

Training should be tailored to your business needs. Developers will likely have questions about how their code and data are handled, and these topics should be addressed. But training can go beyond the basics of dos and don'ts to actively inspire and guide developers toward realizing specific business objectives.

For example, developers could be directed to begin using GitHub Copilot to shorten their organization’s test-driven development (TDD) feedback loop. This training could not only improve the developers' ability to write effective tests, but also familiarize them with best practices in TDD. This approach has the dual effect of improving the core metric of concern while deepening your team’s understanding of best practices, which helps make those improvements durable.
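As a concrete illustration of the loop such training might target, here is a minimal TDD-style sketch in Python; the function and its behavior are hypothetical examples, not part of any particular codebase:

```python
# Minimal TDD illustration (hypothetical example). In the "red" step, the
# assertions at the bottom are written first as an executable specification;
# the implementation is filled in afterward, which is the step an AI
# assistant can help accelerate.

def normalize_username(raw: str) -> str:
    """Lowercase a username and strip surrounding whitespace."""
    return raw.strip().lower()

# These checks describe the expected behavior before any code exists.
assert normalize_username("  Alice ") == "alice"
assert normalize_username("BOB") == "bob"
print("all tests passed")
```

Clear test names and docstrings like these also give an AI assistant better context, so its suggested implementations are more likely to match the intended behavior on the first try.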

Governance and policy set the groundwork for fostering adoption of AI tooling. In our next guide, we will explore how to build on this foundation to optimize the onboarding process for GitHub Copilot.

Up next: Tips for a successful rollout of GitHub Copilot

Get started with GitHub Copilot