A CTO's guide to AI-assisted development

Considerations for building AI into engineering processes

Generative AI has numerous uses, but one of the most promising for organizations is AI-assisted development. With recently announced innovations like Devin AI, billed as the world's first AI software engineer, integrating AI is an enticing proposition for technology leaders looking to gain efficiencies across their engineering processes and deliverables.

Organizations can approach integrating AI into code development, specifically, in several ways. One option is to use tools like GitHub Copilot to automatically create boilerplate code and implement an AI-based form of pair programming. Another option is to leverage autonomous AI developers like Devin, which have the potential to work independently alongside human counterparts.

AI-aided development offers several benefits for companies and will likely continue to shape the future of development. However, it also introduces various security risks that must be managed in order to take full advantage of the technology.

The advantages of AI-enhanced development

An estimated 81% of developers already use AI-based coding assistants such as GitHub Copilot, Tabnine, and Amazon CodeWhisperer to answer questions and write difficult code. As AI becomes more advanced and ubiquitous, development teams will likely become even more reliant on it.

Embracing the use of AI for development has numerous potential benefits for dev teams, such as:

  1. Enhanced efficiency: AI can perform various development tasks automatically, from answering basic questions to writing code and designing UIs. These capabilities can speed time to market and lower development costs. One research study found that the use of GitHub Copilot enabled developers to complete coding tasks 55% faster. This improved efficiency may allow an organization to either reduce headcount or dramatically expand the volume of code that it produces.

  2. Decreased developer burnout: GenAI’s ability to automate the creation of boilerplate code eliminates the need for developers to write simple, repetitive code. Improving the developer experience in this way reduces the risk of burnout and turnover among developers who grow bored with these tasks. In 2023, 73% of developers reported experiencing burnout at some point in their careers, and 60-75% of developers say that AI’s ability to automate these tasks improves job satisfaction.

  3. Better code quality: In addition to writing code, AI can also be used to automatically detect bugs, vulnerabilities, and other issues with code during the development process. Finding and fixing issues before they reach production reduces the risk that an organization will waste time and resources issuing patches or remediating a data breach.

  4. Specialized development: AI can write specialized code that a development team lacks the knowledge and experience to write in-house. Eliminating the need for specialized expertise reduces the need to hire external specialists or offer large incentives to attract subject matter experts.

  5. Developer education and upskilling: GenAI has the potential to provide “just in time” training to developers, enhancing code security and getting novice developers up to speed more quickly. On average, developers accept 30% of code suggestions, with novice developers accepting even more. This reduces the cost of vulnerability management and enables companies to maximize the capabilities of their development teams.

The risks of AI development

GenAI is a powerful tool, but it’s not a perfect solution. When weighing if and how to build AI into development pipelines, CTOs should also consider the risks and potential impacts on the organization.

A recent Samsung breach is a prime example of the risks of failing to properly manage AI usage. The company had previously banned the use of generative AI but lifted that ban to enhance employee productivity. Within three weeks, employees had allegedly leaked sensitive semiconductor R&D data, source code, and internal meeting recordings to ChatGPT while attempting to fix code errors and auto-generate meeting minutes. This data is accessible to OpenAI and may have been used to train its models, potentially exposing it to other users as well.

Some of the most significant risks of AI-aided development include the following:

  1. Sensitive data leaks: GenAI tools are designed to learn continuously and may train themselves on user inputs. Sensitive data and code samples entered into a GenAI tool may become part of the model and be displayed to other users, as the Samsung incident illustrates. Research has also found that these tools can include other sensitive data, such as private API endpoints, in their suggestions.

  2. Lower-quality code: GenAI is imperfect and can create incorrect answers and low-quality code. Using auto-generated code may negatively impact the user experience or introduce vulnerabilities into an application.

  3. Reduced code understanding: Developers are frequently “on call” because they write the code, understand it, and can address any issues. AI-generated code is more of a black box, which can hinder a development team’s ability to fix it if something goes wrong.

  4. AI hallucinations: In development, AI hallucinations can lead to GenAI recommending the use of non-existent packages and libraries. If attackers create these packages, they can inject vulnerabilities or malicious code into the applications of anyone who uses those recommended packages.

  5. Licensing issues: AI recommendations or auto-generated code may use third-party code with incompatible licensing terms. For example, if AI uses code with a copyleft license, an organization may need to open-source the code that uses it or face IP lawsuits.

  6. Increased cloud spend: AI-generated code may make requests for paid services or inefficiently use storage, computing, and other cloud resources. As a result, cloud spending may increase due to unoptimized code.

  7. Solution lock-in: AI-enabled coding assistants are in their infancy, and new solutions are emerging regularly. Baking a particular solution into a workflow may reduce efficiency or force an expensive switch when new technology becomes available.
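The hallucinated-package risk above can be reduced with a simple gate: verify that every dependency an AI assistant suggests actually exists in the official registry before installing it. The sketch below is a minimal Python illustration, not a complete defense; it assumes PyPI's public JSON endpoint (`https://pypi.org/pypi/<name>/json`), and the package names in the usage note are hypothetical examples.

```python
import urllib.request
from urllib.error import HTTPError

PYPI_URL = "https://pypi.org/pypi/{name}/json"

def package_exists(name: str) -> bool:
    """Return True if `name` is a published package on PyPI."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10):
            return True
    except HTTPError as err:
        if err.code == 404:  # unknown package: possibly hallucinated
            return False
        raise  # other errors: fail loudly rather than guess

def vet_suggestions(names: list[str], exists=package_exists) -> list[str]:
    """Return the subset of AI-suggested dependencies that do not exist.

    `exists` is injectable so the check can be tested offline or pointed
    at an internal registry mirror instead of the public index.
    """
    return [n for n in names if not exists(n)]
```

In practice this check would run in CI alongside software composition analysis, so that a suggested-but-nonexistent package fails the build before anyone can typo-squat it.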

Managing the risks of AI development

AI has numerous potential benefits and is likely the future of development. Ideally, GenAI enables organizations to create more sophisticated, efficient, and secure code with fewer resources and a tighter timeline than would be possible with a fully human development team. Companies that refuse to allow AI-aided development risk being outcompeted and losing top talent to those that do.

However, CTOs looking to use AI to enhance their development processes also need to consider and manage the risks of the technology. Here are 8 best practices that can help to reduce or eliminate the biggest risks of AI-aided development:

  1. Validate AI recommendations: AI hallucinations are a common problem, and these mistakes can be extremely damaging in software development. Double-check code and information generated by an AI system for accuracy before using it.

  2. Manage AI data risks: Asking AI a programming question or having it generate boilerplate code is relatively low-risk. Providing it with sensitive IP to check for errors or ask a question risks data exposure. Train developers on acceptable use of AI and implement data loss prevention (DLP) safeguards to manage the risk of leaks.

  3. Implement DevSecOps: AI can rapidly accelerate code development and reduce developers’ understanding of the code that they are writing. Implementing DevSecOps best practices, such as creating and testing for security-focused requirements, reduces the risk of security slipping through the cracks.

  4. Automate security testing: AI-generated code may introduce vulnerabilities or non-functional code into an organization’s codebase. Building security testing — including static application security testing (SAST) and dynamic application security testing (DAST) — into automated CI/CD pipelines limits the potential impacts of insecure, AI-generated code.

  5. Perform functional and non-functional tests: In addition to introducing vulnerabilities, AI-generated code may be lower-quality or even nonfunctional. Unit testing against both functional and non-functional requirements helps ensure that code works correctly and performs well. Enforcing compliance with style guides and programming best practices can also help to identify and avoid problematic code.

  6. Monitor software supply chains: AI may recommend or use vulnerable, malicious, or non-existent packages or libraries with incompatible licensing terms. Performing software composition analysis (SCA) and maintaining a software bill of materials (SBOM) provides visibility into application dependencies and allows them to be validated before launch.

  7. Implement runtime protection: Despite developers’ best efforts, vulnerable code will likely reach production. Applications should be protected with defenses that can block attempted exploitation at runtime, such as web application firewalls (WAFs), web application and API protection (WAAP), and runtime application self-protection (RASP).

  8. Consider in-house AI: Open-source LLMs enable companies to host their own AI systems. Keeping AI in-house reduces the threat of data leaks, poisoned training data, and other risks of using GenAI.
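The data-risk guidance in practice 2 can be partly automated: scan any text a developer is about to paste into an external AI tool and block it if it appears to contain credentials. The sketch below is a minimal, assumption-laden illustration in Python; the patterns shown are a tiny hypothetical sample, and real DLP tooling ships far more comprehensive rule sets.

```python
import re

# Hypothetical example patterns only; real DLP rules are far broader.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic API key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of the secret patterns detected in `text`."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

def safe_to_share(text: str) -> bool:
    """Gate: allow text to reach an external AI tool only if no secrets match."""
    return not find_secrets(text)
```

A gate like this could sit in an editor plugin or a proxy in front of the AI service; pattern matching alone will miss some leaks (e.g., proprietary algorithms), which is why it complements, rather than replaces, developer training.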

Cloudflare for AI offers CTOs the tools that they need to build, host, and defend AI-aided development pipelines and the applications that they produce. Learn more about harnessing the power of AI while implementing security by design.

This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.

Dive deeper into this topic.

Learn more about how to simplify and secure AI initiatives in The connectivity cloud: A way to take back IT and security control ebook.

Key takeaways

After reading this article, you will be able to understand:

  • How AI-assisted development benefits engineering processes

  • The risks of using AI for software development

  • 8 best practices to reduce or eliminate the risks of AI-aided development
