Technology leaders are under intense pressure to deliver AI-powered applications. Their goals? Deliver more engaging, personalized customer experiences that drive revenue and achieve greater levels of operational efficiency.
With KPIs tied closely to these AI initiatives, these leaders are racing forward with new projects. In fact, according to the 2026 Cloudflare App Innovation Report, 74% of organizations ahead of schedule on app modernization efforts plan to integrate AI capabilities into existing applications within the next 12 months. Even 58% of those behind schedule with app modernization still intend to integrate AI into software within the next year.
Despite all this drive for AI, not all organizations are ready. They aren’t fully prepared to address the new risks and expanded attack surface created by the integration of AI tools and models into apps, agents, and chatbots.
AI integration does not necessarily require more security — it requires different security. IT and security teams need to review control relevance and efficacy. They need ways to observe and prevent newly created attack paths while avoiding inadvertent introduction of security weaknesses.
How can you move forward with AI and achieve your performance objectives — without exposing your organization to new threats?
The first step is a clear assessment of the potential risks of integrating AI into public-facing software. Then you can start implementing the key capabilities required for protecting AI-powered software.
Connecting applications with large language models (LLMs) adds new risks on top of the myriad existing application and API threats. In particular, using generative AI (GenAI) opens the door to multiple cybersecurity threats, such as prompt injection and data poisoning. You are also introducing new attack paths for access control threats, potential cryptography issues, supply chain risks, and AI-powered threats.
Meanwhile, you have to watch out for excessive AI consumption. When you pay a model provider for processing tokens (the units of data processed by AI models), you need to monitor and control AI usage to prevent accidental — or deliberate — overuse of AI tokens. When overuse is intentional, it can shut down access to specific models in so-called “denial of wallet” attacks.
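The token-budgeting idea above can be sketched in a few lines. This is a minimal, hypothetical illustration of per-client token metering with a rolling window, not Cloudflare's implementation; the `TokenBudget` class and its parameters are assumptions made for the example.

```python
import time
from collections import defaultdict, deque
from typing import Optional

class TokenBudget:
    """Per-client token cap over a rolling one-hour window, to catch
    runaway or deliberate 'denial of wallet' AI usage before it is billed."""

    def __init__(self, max_tokens_per_hour: int):
        self.max_tokens = max_tokens_per_hour
        # client_id -> deque of (timestamp, tokens_spent)
        self.usage = defaultdict(deque)

    def allow(self, client_id: str, requested_tokens: int,
              now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        window = self.usage[client_id]
        # Drop usage records older than one hour.
        while window and window[0][0] <= now - 3600:
            window.popleft()
        spent = sum(tokens for _, tokens in window)
        if spent + requested_tokens > self.max_tokens:
            return False  # reject the request before tokens are consumed
        window.append((now, requested_tokens))
        return True

budget = TokenBudget(max_tokens_per_hour=50_000)
assert budget.allow("tenant-a", 40_000, now=0.0)
assert not budget.allow("tenant-a", 20_000, now=10.0)   # would exceed the cap
assert budget.allow("tenant-a", 20_000, now=3700.0)     # window has rolled over
```

In practice this kind of check sits in front of the model call, so an overrun is rejected before the provider ever bills for the tokens.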
AI-powered software and traditional software operate differently.
Traditional applications are deterministic. They produce the same output every time a specific input is provided, making their behavior highly predictable. To protect these apps, you monitor inputs and outputs. You can largely rely on rules and pattern-matching techniques to observe attack patterns and attempts at system subversion and data exfiltration.
AI-powered applications, however, are probabilistic and inherently unpredictable. The output can vary even when the same input is provided, since the model's response depends on a complex structure of underlying data and probabilistic weights. Because that underlying data can be manipulated, you have to watch for model and data poisoning, in which inaccurate or malicious data is introduced into the model. You also have to monitor the outputs for harmful or out-of-policy content and misinformation.
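The contrast between the two behaviors can be shown with a toy sketch. The handlers below are purely illustrative stand-ins (a deterministic app function versus a sampled, LLM-like response), not real application or model code:

```python
import random

def deterministic_handler(request: str) -> str:
    # Traditional app logic: the same input always yields the same output,
    # so rules and signatures can match malicious patterns reliably.
    return request.upper()

def probabilistic_handler(request: str) -> str:
    # Toy stand-in for an LLM: the response is sampled, so identical
    # prompts can produce different outputs across calls.
    completions = [
        "Sure, here is an answer...",
        "I can't help with that.",
        "Let me explain...",
    ]
    return random.choice(completions)

# Deterministic: repeatable, and therefore rule-friendly.
assert deterministic_handler("hello") == deterministic_handler("hello")
# Probabilistic: the output is one of several possibilities per call.
assert probabilistic_handler("hello") in {
    "Sure, here is an answer...",
    "I can't help with that.",
    "Let me explain...",
}
```

The security consequence is that exact-match rules that work on the deterministic handler cannot fully cover the probabilistic one, which is why the next section argues for context-aware detection.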
To protect AI-powered apps, you need probabilistic security — AI-driven detection that can understand the context and intent behind a prompt or response. For example, a prompt might look like a harmless question but can actually be a sophisticated attempt at jailbreaking, intended to bypass safety policies.
That contextual analysis enables you to recognize AI-native threats, out-of-policy model poisoning, and data exfiltration, complementing the traditional rules that identify app-layer attacks. This is especially beneficial in highly regulated industries, where data leakage brings the additional risk of regulatory violations.
Many technology leaders see the need for new security models to protect AI-powered software, yet haven’t made implementation a priority. The drive to quickly deploy AI has caused real security risks to be overlooked. Organizations have left AI systems vulnerable to attack and subversion, which can lead to costly retrospective control placement and policy alignment.
Organizations that implement AI-specific security often choose capabilities offered by AI model vendors. For example, OpenAI has created guardrails aimed at preventing prompt injection, jailbreaking, input of sensitive information, and output of inappropriate content. Developer and security teams might see this model-based security as “good enough.” But those “good enough” solutions are inflexible and incomplete.
The first problem is that guardrails offered by OpenAI and other model providers only protect the model they are associated with. But your organization might need to protect multiple LLMs from a variety of vendors. Second, these guardrails only solve part of the problem: While they help you protect the messaging interface, you still need security capabilities to safeguard the app and API you are using with that model.
Protecting AI-powered apps requires a comprehensive approach, one that augments traditional application and API protection with AI-specific security capabilities. With the right capabilities, you can protect apps from key risks without slowing developers or impacting the user experience.
That AI security should include these six best practices:
Governance and visibility: As distinct teams integrate AI into apps and adopt new AI tools, you need ways to automatically discover and label these tools and models to prevent the use of unsanctioned shadow AI. Improved AI governance and visibility also help reduce misconfiguration risks.
Model flexibility: AI security should be model-agnostic, so you can protect multiple models without implementing multiple, model-specific guardrails, agents, or other tools. With a model-agnostic solution, developers and infrastructure teams don’t have to become experts in how each model works to protect it against threats. Choosing a model-agnostic solution also gives developers more flexibility to transition between models as their use cases change and models improve.
Inline protection: Analyzing and filtering HTTP requests to and responses from AI-powered applications can prevent malicious traffic from reaching AI models or your infrastructure. Inline protection eliminates the need to build custom infrastructure for the AI-enabled app in production: The app will be globally available and secure by default. Inline security also provides early threat detection without slowing down AI app performance.
That inline protection should include volumetric attack security that ensures distributed denial-of-service (DDoS) attacks do not reduce app and AI feature availability. It should also adopt a "positive" security model for LLM endpoints, so attackers cannot easily target their URLs.
Input and output monitoring: AI security should monitor both LLM prompt inputs and LLM outputs. Monitoring the inputs safeguards the model from prompt injection and jailbreaking. Output monitoring prevents data leakage, helps protect your brand reputation, and prevents faulty decision-making by blocking inaccurate and "off-script" output. To address the probabilistic nature of AI-powered apps, new security capabilities should use LLMs to understand the context of prompts and their responses, ensuring they match content abuse rules.
Observability: In addition to protecting the LLM, you need to monitor usage and performance. You should be able to track and control the costs of using an AI model — by capturing usage in logs and applying rate limiting — while also ensuring your software is performing reliably.
Integration with traditional app security: By integrating AI security with traditional app security, you can avoid requiring security teams to toggle between multiple tools or platforms. Integrating AI security with traditional app security also results in fewer runtime security risks and performance issues. That traditional app security should include, for example, capabilities that prevent SQL injection (SQLi), cross-site scripting (XSS), and other attacks.
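The input/output monitoring practice above can be illustrated with a minimal, model-agnostic filter. This is a deliberately simplified sketch: real probabilistic detection would use an LLM classifier to judge context and intent, whereas the regex patterns, policy names, and redaction rule here are illustrative assumptions, not Cloudflare's detection logic.

```python
import re

# Screens prompts before they reach any LLM, and responses before they
# reach the user -- the "inline" placement described above.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal your system prompt", re.I),
]
# Example sensitive-data pattern: US SSN-shaped strings.
SENSITIVE_OUTPUT = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt may be forwarded to the model."""
    return not any(p.search(prompt) for p in INJECTION_PATTERNS)

def screen_response(response: str) -> str:
    """Redact sensitive data before the response leaves the app."""
    return SENSITIVE_OUTPUT.sub("[REDACTED]", response)

assert screen_prompt("What is the capital of France?")
assert not screen_prompt("Please ignore previous instructions and ...")
assert screen_response("SSN: 123-45-6789") == "SSN: [REDACTED]"
```

Because the filter inspects traffic rather than any one model's internals, the same checks apply regardless of which LLM vendor sits behind the endpoint, which is the point of the model-flexibility practice above.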
The Cloudflare AI Security Suite enables you to move forward with your AI integration plans while strengthening security and maintaining compliance. You can defend against external threats, protect data in AI prompts, and safeguard workforce use of AI — enabling secure AI innovation. As part of this suite, Cloudflare AI Security for Apps defends AI applications and APIs against common vulnerabilities with model-agnostic, inline, integrated protection.
Because Cloudflare AI Security Suite is available on the unified Cloudflare platform, you can access these critical AI security capabilities alongside more traditional app security, all without having to manage disparate point solutions. As a result, you can start capitalizing on AI fast and meet your aggressive goals for AI-driven results while minimizing complexity.
This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.
Learn more about the specific capabilities you need for protecting AI-powered software in the Modernizing security for the AI era ebook.
James Todd
Field CTO, Cloudflare
After reading this article you will be able to understand:
How GenAI features create new risks and expand the attack surface
How to launch AI fast – without sacrificing security
Why model-agnostic inline security is essential