theNet by CLOUDFLARE

Ensure security and governance for AI agents

Implement a comprehensive plan before wide-scale agent deployment

Interest in “agentic AI” is growing rapidly. Technology leaders recognize the tremendous potential for enhancing efficiencies across their organizations by using AI agents to execute tasks and make decisions autonomously, with minimal or no input from humans.

You are probably hearing and reading about agentic AI everywhere you turn. It appears that agentic AI will be the next step in a fast-moving progression of democratized AI adoption, which accelerated dramatically with the rise of generative AI (GenAI) services. Internet traffic patterns show how quickly these new technologies are being adopted by businesses and individuals alike. At Cloudflare, we saw a 250% increase in traffic flowing through our network to AI services from March 2023 to March 2024.

Implementation of AI agents has been modest so far, but it’s likely to explode in the next few years. According to a recent report, 96% of IT leaders plan to expand their use of AI agents in the next 12 months. Another report shows that 80% of developers surveyed believe that AI agents will become as essential to software development as traditional software tools. Many of the apps that developers build will themselves include agentic AI: Gartner reports that by 2028, one third of enterprise software applications will include agentic AI. Those AI-driven apps will make 15% of day-to-day work decisions autonomously.

While many organizations are poised to start implementing AI agents soon, they have not established adequate security or governance. Now, it’s true that security and governance often lag large business transformations. But given the magnitude of risks involved with AI agents, we all need to define our approach for securing and governing AI agents before undertaking wide-scale deployments.

The right security and governance framework can help guide the capabilities and processes that teams need to implement. After all, safeguarding an organization in the AI era is not the responsibility of the CISO alone.


Leaving AI agents exposed to accelerating attacks

While AI agents can deliver important benefits to organizations across industries, they are also new targets for attackers. In the rush to implement AI agents, many enterprises don’t ramp up security to sufficiently protect the models, data, and other tools used for building those agents.

Attackers are already targeting the large language models (LLMs) that serve as the foundation for AI agents. They are manipulating LLM prompts to steal information, induce faulty decisions, and launch social engineering schemes. They are also poisoning the data used to train LLMs, which leads to inaccurate results and erroneous actions by AI agents.

We went through a similar experience when we started leveraging open-source code at scale: rapid adoption without proper security vetting led to supply-chain vulnerabilities. With AI agents, we are repeating that pattern, but the risks are more complex because attacks can be subtler and harder to detect than traditional code exploits.

In addition, attackers are targeting vulnerabilities within the supply chain for AI-driven apps. Since many organizations adopt third-party models and tools to build their AI agents, they need to be aware that those agents are only as secure as the weakest link in the chain.


Establishing accountability for agent decisions and actions

As we defend ourselves against external threats, we also must prepare for problems produced by the agents themselves. The proliferation of AI agents is creating new operational challenges beyond security — similar to how endpoint agents for antivirus and endpoint detection and response (EDR) solutions strained system resources a few years ago. Organizations now must manage dozens of AI agents running simultaneously, each consuming significant compute resources.

AI agents are also imperfect. If humans provide vague or misaligned objectives, or fail to include specific guidelines on how to operate, agents will make mistakes.

Who is responsible for ensuring that AI agents perform as they should? Who is accountable when something goes wrong? Consider a few scenarios:

  • Procurement: Let’s say an AI procurement agent responds to a supply shortage by ordering new components that carry a 300% markup. Who’s accountable: the manager who set the agent’s priority to “ensure production continuity,” or the purchasing team that didn’t build in price-sensitivity guardrails and necessary business controls?

  • Finance: What if an AI agent in finance approves all invoices below a certain spending threshold, including invoices that have been flagged as suspicious? The result could be payments to fraudulent vendors.

  • Technology: An AI agent with admin access could automatically implement software patches across critical infrastructure. But what if the agent starts accessing employee email metadata, network traffic patterns, or financial system logs to “optimize” its patching schedule? The agent might delay critical security patches on finance servers during month-end processing, inadvertently exposing the organization to known vulnerabilities while using data it was never authorized to access.

For these and other scenarios, we need governance or oversight mechanisms that go beyond human-in-the-loop controls. AI agents learn and adapt over time, expanding their scope beyond original parameters. An agent designed to optimize patch deployment might start incorporating network traffic patterns, performance metrics, and user behavior data to refine decisions. While this improves outcomes, it also means the agent’s risk profile continuously grows. Organizations need governance strategies that regularly audit what data agents access, how their decision-making evolves, and whether expanded capabilities still align with business objectives — because human reviewers may not understand the full context of additional data sources the agent has incorporated.


Three-pillar framework for large-scale AI agent adoption

Once the technology to build and operationalize AI agents matures, there’s no doubt that adoption will be rapid. Organizations should start working now to implement a framework for both enabling the use of AI agents and mitigating risks. That framework should be built on the three key pillars of technical security, operational security, and governance and compliance.

1. Technical security
CISOs and CIOs need to ensure the right technical security capabilities are in place to protect every element in AI agent systems.

  • Input and access control: Validate inputs, prevent prompt injection, and control who and what can interact with AI agents. Implementing least-privilege access policies will be essential (see the sketch following this list).

  • Data-access transparency: Maintain clear visibility into what data AI agents are accessing, analyzing, and incorporating into their decision-making processes.

  • Model and data protection: Safeguard the AI models and implement data encryption for information processed or stored within the agent environment.

  • Interface security: Build security into APIs, agent interfaces, and authentication for agent-to-agent interactions. As organizations implement AI apps and agents at the edge to improve user experiences, they also need to safeguard those apps and agents at the edge.

  • Supply-chain integrity: Because organizations often use multiple third-party elements in building and deploying AI agents, they must be sure to verify those third-party components and dependencies through continuous monitoring (as opposed to point-in-time validations and certifications).
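
To make the first bullet concrete, here is a minimal TypeScript sketch of how a team might screen untrusted input for obvious prompt-injection phrasing and enforce a least-privilege tool allowlist before an agent acts. The names (allowedTools, screenInput, guard) and patterns are assumptions for illustration, not a production design; real deployments would layer purpose-built detection and policy engines on top.

  // Hypothetical guardrail: check untrusted input and enforce a
  // least-privilege tool allowlist before an agent is allowed to act.
  type ToolCall = { tool: string; args: Record<string, unknown> };

  // Each agent is granted only the tools it needs (least privilege).
  const allowedTools: Record<string, Set<string>> = {
    "patch-agent": new Set(["list_pending_patches", "schedule_patch"]),
    "support-agent": new Set(["search_kb", "create_ticket"]),
  };

  // Naive screen for common prompt-injection phrasing in untrusted input.
  const suspiciousPatterns = [
    /ignore (all|previous) instructions/i,
    /reveal .*system prompt/i,
  ];

  function screenInput(text: string): boolean {
    return !suspiciousPatterns.some((pattern) => pattern.test(text));
  }

  function authorize(agentId: string, call: ToolCall): boolean {
    const granted = allowedTools[agentId];
    return granted !== undefined && granted.has(call.tool);
  }

  // Reject anything that fails either check before the agent acts.
  function guard(agentId: string, userInput: string, call: ToolCall): void {
    if (!screenInput(userInput)) {
      throw new Error("Input rejected: possible prompt injection");
    }
    if (!authorize(agentId, call)) {
      throw new Error(`Tool ${call.tool} is not granted to ${agentId}`);
    }
  }

A simple pattern filter catches only the crudest injections; the point of the sketch is the shape of the control, not the detection logic itself.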

2. Operational security
IT and security leaders should collaborate with COOs to protect the operation of agents and processes in real time.

  • Monitoring and detection: Maintain real-time visibility into agent activities, identifying anomalies or potentially malicious patterns, including patterns in the actions and decisions of AI agents (a sketch follows this list).

  • Incident response: Establish procedures for addressing security incidents, and create processes for updating or modifying agents in close collaboration with business leaders so that security improves in the wake of incidents.

  • Resource protection: Place limits on resource usage to prevent denial-of-service (DoS) attacks and help ensure sufficient performance.

  • Exception handling: Define processes for managing edge cases and unexpected scenarios that fall outside normal operations.
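
As a rough illustration of the monitoring and resource-protection bullets above, the following hypothetical TypeScript sketch caps per-agent call volume and flags activity spikes for human review. The thresholds and names (recordCall, MAX_CALLS_PER_WINDOW) are invented for illustration; real limits would depend on each agent's workload.

  // Hypothetical per-agent rate limit plus a simple anomaly alert.
  const WINDOW_MS = 60_000;         // 1-minute sliding window
  const MAX_CALLS_PER_WINDOW = 100; // cap on tool/model calls per agent

  const callLog = new Map<string, number[]>(); // agentId -> call timestamps

  function recordCall(agentId: string, now: number = Date.now()): void {
    const calls = (callLog.get(agentId) ?? []).filter((t) => now - t < WINDOW_MS);
    calls.push(now);
    callLog.set(agentId, calls);

    // Resource protection: stop a runaway or abused agent before it
    // degrades shared infrastructure (a basic DoS guard).
    if (calls.length > MAX_CALLS_PER_WINDOW) {
      throw new Error(`Agent ${agentId} exceeded ${MAX_CALLS_PER_WINDOW} calls per minute`);
    }

    // Monitoring and detection: flag a spike relative to the cap so a
    // human can review the agent's recent actions and decisions.
    if (calls.length > MAX_CALLS_PER_WINDOW * 0.8) {
      console.warn(`Anomaly: ${agentId} made ${calls.length} calls in the last minute`);
    }
  }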

3. Governance and compliance
IT, security, and operations leaders need to govern the use of AI agents while also working with compliance officers to ensure adherence to regulations.

  • Decision framework: Set clear boundaries for autonomous actions, with defined escalation and intervention points (illustrated in the sketch after this list).

  • Accountability structure: Allocate responsibility for agent actions and classify risks based on the potential impact of actions. There might be multiple accountable parties, including third-party developers, the teams deploying agents, the teams using agents, and external users.

  • Audit and compliance: Implement tools for recording and reviewing agent actions and for monitoring compliance, helping the organization adhere to relevant policies and regulations.

  • Continuous improvement: Conduct regular assessment of security controls and put feedback mechanisms in place to strengthen security posture.
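
One way to express the decision framework and its escalation points in code is a declarative boundary per agent, as in this hypothetical TypeScript sketch. The fields and thresholds (maxAutonomousAmountUsd, maxRiskScore) are assumptions for illustration; each organization would set its own boundaries and route decisions into its existing approval workflows.

  // Hypothetical decision boundary with defined escalation points.
  type Decision = { action: string; amountUsd?: number; riskScore: number };

  interface Boundary {
    maxAutonomousAmountUsd: number; // above this, require human approval
    maxRiskScore: number;           // above this, block and escalate
  }

  const procurementBoundary: Boundary = {
    maxAutonomousAmountUsd: 5_000,
    maxRiskScore: 0.7,
  };

  function route(decision: Decision, boundary: Boundary): "auto" | "human_review" | "escalate" {
    if (decision.riskScore > boundary.maxRiskScore) return "escalate";
    if ((decision.amountUsd ?? 0) > boundary.maxAutonomousAmountUsd) return "human_review";
    return "auto"; // within boundaries: the agent may act, and the decision is logged for audit
  }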


Moving forward with AI agents

AI agents have significant potential for enhancing the speed and efficiency of a variety of tasks, from providing customer support to identifying fraud. As these systems evolve, they will increasingly access and analyze data beyond their original scope, creating both opportunities and risks that require careful management.

The three-pillar framework of technical security, operational security, and governance and compliance I outlined provides a foundation for addressing this challenge. However, the future may lie in companies building AI systems that can self-govern — automatically adjusting security controls and maintaining compliance as they evolve, rather than relying on traditional oversight approaches that often lag behind execution.

Cloudflare offers a wide range of capabilities to help your organization build, scale, and protect AI agents. For example, Cloudflare Workers AI enables developers to build and deploy AI applications at the edge, close to users. Developers can also build remote Model Context Protocol (MCP) servers to enable AI agents to access tools and resources from external services.
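
For illustration, a minimal Worker that invokes Workers AI might look like the following sketch. The model identifier and request shape are assumptions based on public documentation and may differ from the current Workers AI catalog; treat it as a starting point rather than a canonical example.

  // Sketch of a Cloudflare Worker calling Workers AI at the edge.
  export interface Env {
    AI: Ai; // Workers AI binding, configured in wrangler.toml (assumed)
  }

  export default {
    async fetch(request: Request, env: Env): Promise<Response> {
      const { question } = (await request.json()) as { question: string };

      // Run an LLM close to the user, without managing model infrastructure.
      const result = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
        messages: [{ role: "user", content: question }],
      });

      return Response.json(result);
    },
  };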

Meanwhile, Cloudflare’s extensive security services, available in every edge location, enable organizations to protect AI agents and AI-based applications against attacks everywhere. In particular, the Cloudflare for AI suite provides comprehensive visibility, security, and control for AI applications. Cloudflare also offers tools for auditing and controlling how AI models access content across websites. Together, all of these capabilities enable organizations to rapidly build and deploy AI agents while minimizing the risks they introduce.

This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.


Dive deeper into this topic.

Learn how to prepare for the secure deployment of AI capabilities across the enterprise in Ensuring safe AI practices, a guide for CISOs.

Get the guide!

Author

Grant Bourzikas — @grantbourzikas
Chief Security Officer, Cloudflare



Key takeaways

After reading this article, you will be able to understand:

  • Top cybersecurity threats targeting AI agents

  • Why establishing AI agent governance and accountability is critical

  • 3 key pillars of a security and governance framework for AI agents


