theNet by CLOUDFLARE

The AI talent trap

Why business context beats coding in app modernization

C-level executives face a constant paradox: how do you maintain the furious pace of innovation required to stay ahead while also ensuring the stability of high-traffic infrastructure used by tens of millions of people worldwide? This is the challenge I’ve tackled for over a decade. That tension only grows with AI-powered apps, something I’ve navigated firsthand across multiple products and platforms at Caliente.mx. This has given me an unvarnished view of what works, and what causes catastrophic failure, at scale.

When I saw in the 2026 Cloudflare App Innovation Report that 87% of organizations say their internal staffing is sufficient to support AI development, it immediately stood out. Confidence like that can be misleading. In my experience, successful AI adoption depends on how teams work, what institutional knowledge they bring, and whether the infrastructure supports fast, safe experimentation. It’s a theme I explored with Trey Guinn, Cloudflare Field CTO, on their Beyond the App Stack show about how AI capability depends on more than just headcount.

Over the past few years, I’ve seen the same pattern with companies trying to adopt AI: the technology is exciting, the talent looks solid on paper, but the results don’t land. That’s because the real barrier to successful AI adoption is strategic rather than technical. Too many organizations treat AI as a binary status: either you’re “doing AI” or you’re behind. That pressure tends to trigger one of two responses: hiring AI specialists who don’t know the business, or pushing out tools internally without a clear purpose or plan for adoption. Neither approach works for long.

Most companies approach AI in one of two ways: as a foundational product built into offerings, or as a toolset to optimize internal workflows like HR or finance. You might assume the product-first approach is more complex, but I see more risk when AI is introduced internally. That’s where tools get mandated without training, expectations are misaligned, and teams start to push back. That’s when adoption stalls.


The core dichotomy: knowledge vs. new skill

If AI is going to stick, leadership needs to rethink how it’s introduced, and more importantly, who’s actually equipped to lead that change.

When you’ve built a team from scratch, like I did at Caliente, you see the value of institutional knowledge firsthand. It’s not written down, it lives in the tradeoffs people make, the systems they’ve shaped, and the instincts they’ve built through experience.

That context is essential when integrating AI. External hires may bring expertise, but without product history or architectural fluency, progress slows.

Upskilling the team you already have is often the faster path. These are the people who understand how the app evolved, what users depend on, and where the stack can flex. When your architecture supports modular, low-risk experimentation, that context becomes a competitive advantage and upskilling becomes the obvious choice.


Managing resistance to AI

Whenever new tools are introduced, especially something as hyped as AI, people naturally worry: “Is this replacing me?” That fear is amplified when AI is framed as a standalone initiative or handed off to external hires who aren’t part of the day-to-day. The best way to counter that fear is to keep the conversation grounded in real problems. At Caliente, we start with the pain point, something that’s slowing us down or adding friction, and only then ask: can AI help us solve this?

If the answer is yes, it becomes about augmentation. We don’t say, “automated QA.” We say “AI-assisted testing.” We don’t frame new initiatives as “building AI apps.” We talk about extending the apps we already have; making them faster, smarter, more useful, without breaking what’s working. We enhance the code review process, giving engineers a second set of eyes so they can focus on higher-value work.



The innovation imperative: the cost of inflexibility

If you want the freedom to build with the team you already have, your infrastructure has to support that choice. At some point in every AI conversation, people forget the basics: AI doesn’t run on ambition; it runs on infrastructure.

If you’re still working with rigid, on-prem systems, even the best ideas will get bogged down in delays, compatibility issues, or procurement loops. I’ve seen this firsthand: trying to retrofit legacy stacks to run GPU-intensive workloads is not only painful, it’s expensive. You end up chasing hardware fixes, driver updates, and OS patches before you’ve even started experimenting.

What’s often overlooked is that application modernization is a prerequisite for AI. The more flexible your infrastructure, the cheaper it is to try things. The faster your environment spins up, the quicker you learn what works and what doesn’t. Serverless architectures, in particular, let us test multiple configurations without upfront spend; just run the experiment, get the data, and move forward.

And this agility pays off later, too. Hardware and software in this space evolve fast. If you’re maintaining everything on-prem, you’re in a constant upgrade cycle. With serverless or cloud-native platforms, we can keep pace with that evolution without burning time or budget just to stay current.

What’s worked for us is creating a structured sandbox for AI trials. That means clearly separating experimental work from production releases, setting short timelines for testing hypotheses, and defining upfront what “good enough” looks like before scaling.

A few principles I stick to:

  • Isolate experiments from core product releases

  • Time-box testing cycles so they don’t drift

  • Pair AI leads with product owners to ensure every idea maps to real app impact
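The time-boxing principle can be made mechanical. As a rough sketch (the experiment name and two-week limit below are hypothetical examples, not our actual policy), a guardrail can be as simple as recording a start date and a hard deadline for each trial, then flagging anything that drifts past it:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Experiment:
    name: str
    started: date
    time_box: timedelta  # hard limit agreed upfront, before the trial begins

    def is_overdue(self, today: date) -> bool:
        # An experiment that outlives its time box should be scaled,
        # reworked, or shut down -- not left to drift indefinitely.
        return today > self.started + self.time_box

# Hypothetical trial: two weeks to prove or disprove the hypothesis.
exp = Experiment("ai-assisted-testing", date(2025, 1, 6), timedelta(weeks=2))
print(exp.is_overdue(date(2025, 1, 27)))  # past the two-week box -> True
```

The point isn’t the code; it’s that the deadline is explicit and checkable, so a review meeting can’t quietly extend a failed experiment.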

AI isn’t isolated; it has to live within the systems you already maintain. When done right, AI experimentation accelerates app innovation instead of slowing it down. You get smarter features, better automation, and more responsive products, without risking stability.


Three AI infrastructure must-haves

Before AI can move beyond the pilot phase of app development, I’ve found there are three non-negotiables:

  1. Clear data governance. Clearly defined access rules are critical. What data can AI touch? Under what conditions? Without this, you’re opening the door to compliance issues.

  2. Scalable compute. Pilots can run on low-power machines. Production workloads can’t. Infrastructure that scales is key, especially for GPU workloads.

  3. Strong monitoring. Once AI is live, full visibility and oversight are needed. That includes tracking drift, detecting anomalies, and catching unintended consequences before they escalate.
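To make the data governance point concrete, here is a minimal sketch of the kind of explicit allow-list that answers “what data can AI touch, and under what conditions?” The classifications and rules are hypothetical examples, not a real policy, and in practice this would live in a governed configuration store rather than in code:

```python
# Hypothetical data classifications and the AI uses each one permits.
POLICY = {
    "public":   {"training", "inference", "evaluation"},
    "internal": {"inference", "evaluation"},
    "pii":      set(),  # never exposed to AI workloads
}

def ai_may_use(classification: str, purpose: str) -> bool:
    """Default-deny: anything not explicitly allowed is forbidden."""
    return purpose in POLICY.get(classification, set())

print(ai_may_use("internal", "inference"))  # True
print(ai_may_use("pii", "training"))        # False
```

The design choice worth copying is the default-deny stance: an unknown data class or purpose is rejected until someone deliberately adds a rule for it.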

Get these right, and experimentation becomes sustainable. AI is a capability, not a product. In my own role, I’ve seen how much of that journey comes down to structured experimentation, clear education, and a culture that values transparency over hype.

But infrastructure alone doesn’t drive progress. Your team needs to be ready too.

Three signs of real AI readiness

When I think about whether an engineering team is ready to scale their AI efforts within app innovation, I look for three things:

  1. Open conversations: If the team is comfortable sharing both wins and failures, and talking honestly about what’s working and what’s not, that’s a good sign.

  2. Foundational understanding: They don’t need to be experts, but they should understand the basics at a minimum: how LLMs work, what parameters and tokens are, and why AI hallucinations happen. It shows curiosity and an appetite for deeper understanding.

  3. Awareness of boundaries: They know what kind of data is safe to share, what content can be reused, and where the legal or ethical lines are.

These aren’t just nice-to-haves; they’re indicators that your team is prepared to build and sustain real AI-driven app innovation.



Stop solving for the hammer

One of the traps I’ve seen across teams is the “solution-first” mentality. Leadership gets excited about an AI tool and tells the team to reverse-engineer a use case to justify it. But starting with the tool often means solving imaginary problems, or ones that don’t matter to the business. Real progress starts when you're solving a real, high-impact problem and letting that guide whether AI belongs in the solution at all. You can then move with focus; testing, learning, and course-correcting with clarity because you're aiming at something specific.

From there, it’s about iterating fast, especially when you’re bringing AI into live applications. Define a narrow use case, test it outside of production, measure the impact, gather feedback, and adjust. If it works, you scale. If it doesn’t, you’ve still learned something valuable, without overcommitting resources or building tech no one needs.


Behavioral guardrails, not technical blocks

Locking down public-facing AI tools might reduce short-term risk, but it stifles learning. A better approach is reinforcing behavioral standards teams already follow. For example, developers already redact secrets in code; the same applies to AI reviews. But policies alone don’t create progress. Real momentum comes from systems, technical and cultural, that enable safe experimentation without slowing delivery.
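One way to reinforce that existing habit with a system rather than a policy document is a pre-submission step that strips obvious credentials before a snippet ever reaches an AI review tool. The patterns below are illustrative only, not an exhaustive secret scanner; real scanning should use a dedicated tool:

```python
import re

# Illustrative patterns only -- a real secret scanner covers far more cases.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*\S+"),
]

def redact(text: str) -> str:
    """Replace likely credentials with a placeholder before AI review."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

snippet = 'API_KEY = "sk-live-abc123"\nretries = 3'
print(redact(snippet))  # the key line becomes [REDACTED]; config survives
```

Because the check runs automatically, it turns a behavioral norm into a guardrail without blocking the tool itself.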

That starts with app innovation. The right architecture gives teams room to test, solve real problems, and move fast. The 2026 Cloudflare App Innovation Report nailed what I’ve seen in practice: when your systems are ready, the smartest AI investment is the team you already have.



Accelerate modernization with Cloudflare

As app modernization becomes a continuous strategy — not a one-time initiative — tech leaders need a platform that helps teams move faster without compromising control or security. Cloudflare’s connectivity cloud supports every stage of your application journey — from rehosting legacy systems to replatforming for agility or building entirely new apps and AI services. With integrated app delivery, security, observability, and development tools on a unified, programmable platform, Cloudflare helps your teams reduce complexity, ship faster, and unlock innovation — without inflating costs.

This article is part of a series on the latest trends and topics impacting today’s
technology decision-makers.



Dive deeper into this topic.

Learn more about how modern infrastructure unlocks AI success in the 2026 Cloudflare App Innovation Report.


Author

Lior Gross
CTO, Caliente.mx



Key takeaways

After reading this article, you will understand:

  • Choosing existing teams vs. new hires for AI apps

  • Using AI experimentation to boost app innovation

  • The role of business knowledge in AI app success


