Challenges of Network Infrastructure

Considerations for your migration to the cloud

Cloud migration rarely happens in a single clean sweep. A limited number of organizations are able to adopt a single cloud provider, but far more find themselves with some combination of public cloud, private cloud, and on-premise infrastructure.

There are a variety of good reasons to maintain heterogeneous infrastructure. The same cannot be said for the networking functions that support that infrastructure, such as firewalls, DDoS mitigation, and load balancing.

Somewhat paradoxically, a hybrid approach to protecting and accelerating complex cloud environments actually creates performance issues, security gaps, and support challenges. To avoid these issues, IT and security teams must find ways for these networking functions to work together as seamlessly as possible.

Security and performance infrastructure — models and common challenges

Securing and accelerating hybrid cloud and multi-cloud infrastructure has typically required one or both of the following:

- On-premise hardware appliances
- Cloud-based services

Unfortunately, both of these approaches bring considerable challenges:

Challenges with on-premise hardware appliances

It’s common knowledge that data center hardware is costly and time-consuming to own. It also often faces capacity limitations. For example, 76% of all L3/4 DDoS attacks in Q2 2020 delivered up to 1 million packets per second (pps), according to a Cloudflare study. Meanwhile, a 1 Gbps Ethernet interface can typically deliver anywhere from roughly 80,000 to 1.5 million pps, depending on packet size.
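That range follows directly from Ethernet frame sizes. Here is a minimal sketch of the arithmetic (the 84-byte and 1,538-byte on-wire sizes are standard Ethernet figures, not numbers from the study itself):

```python
# Rough packet-rate capacity of a 1 Gbps Ethernet interface.
# On-wire frame sizes include the 8-byte preamble and 12-byte
# inter-frame gap on top of the frame itself.
LINK_BPS = 1_000_000_000           # 1 Gbps

MIN_WIRE_BYTES = 64 + 8 + 12       # minimum frame (64 B) + overhead = 84 B
MAX_WIRE_BYTES = 1518 + 8 + 12     # maximum standard frame + overhead = 1538 B

def max_pps(wire_bytes: int, link_bps: int = LINK_BPS) -> float:
    """Packets per second the link can carry at a given on-wire size."""
    return link_bps / (wire_bytes * 8)

print(f"64 B frames:   {max_pps(MIN_WIRE_BYTES):,.0f} pps")   # ~1,488,095 pps
print(f"1518 B frames: {max_pps(MAX_WIRE_BYTES):,.0f} pps")   # ~81,274 pps
```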

Assuming the interface also serves legitimate traffic, even these ‘small’ packet rate DDoS attacks can easily take down an Internet property. The alternative is maintaining enough capacity for the worst-case scenario, but this is an expensive proposition.

Hardware also creates security gaps. Patches and updates are one example: they must be applied manually, and may not be installed promptly due to logistical delays or simple forgetfulness. Worse, once a patch is released, the corresponding vulnerability becomes a higher-profile target for opportunistic attackers.

What’s more, hardware appliances cannot cover a hybrid deployment end to end: it is obviously impossible to install your own hardware in a third-party cloud provider’s data center. This means different parts of the infrastructure are protected in different ways, giving security and IT teams less visibility into and control over incoming attacks and legitimate traffic.

Challenges with certain cloud-based solutions

Cloud-based services have a lower total cost of ownership than hardware, but they can slow down application and network performance if deployed incorrectly. For example, many of them rely on a limited number of specialized data centers — e.g. scrubbing centers for DDoS mitigation. If you or your end users are not near one of those data centers, your traffic will have to travel a long distance to reach it, even if the final destination is nearby.

This backhauling process can add considerable latency. The problem compounds when an organization uses different providers for different networking functions, forcing traffic to make many network hops before reaching its destination.
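A back-of-the-envelope sketch shows the cost. Suppose the nearest scrubbing center sits 2,000 km off the direct path between a user and an origin (an illustrative figure, not one from the article):

```python
# Back-of-the-envelope estimate of the latency added by backhauling
# traffic through a distant scrubbing center. All figures illustrative.
FIBER_KM_PER_MS = 200  # light travels roughly 200 km per ms in fiber

def added_rtt_ms(detour_km: float) -> float:
    """Extra round-trip time from a one-way detour of detour_km.

    The detour is traversed out and back on each leg of the round trip,
    so the extra path length is roughly 4x the detour distance.
    """
    return (4 * detour_km) / FIBER_KM_PER_MS

# Nearest scrubbing center 2,000 km off the direct user-to-origin path:
print(f"~{added_rtt_ms(2000):.0f} ms added per round trip")  # ~40 ms
```

Tens of extra milliseconds per round trip is noticeable in interactive applications, and every additional provider in the chain adds its own detour.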

Using different providers for different functions also creates support challenges. When something goes wrong, it can be hard to tell which provider is the cause of congestion or outages. In addition, the time (and thus the costs) required to manage all of these providers can still be high.

How can organizations get around these challenges?

Distributed cloud networks close security gaps and reduce latency

The previously mentioned strategies for securing and accelerating cloud infrastructure share several weaknesses:

- Inconsistent protection and control across different parts of the infrastructure
- Added latency, whether from backhauled traffic or from capacity limits
- Fragmented support and time-consuming management across multiple providers

The solutions to all of these problems are integration and global reach—making these networking functions work together as seamlessly as possible, all across the globe.

In practice, this typically means using a distributed cloud network. A distributed cloud network is a network with:

- Many points of presence distributed around the globe
- The ability to perform multiple security and performance functions at every location
- Intelligent routing and built-in redundancy

Distributed cloud networks often rely on Anycast, a network addressing and routing method in which incoming requests can be routed to a variety of different locations or “nodes.” Anycast allows such networks to route incoming traffic to the nearest node with the capacity to process the request efficiently—a critical component in reducing latency for end users.
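A simplified model of that selection logic, in Python (real Anycast selection happens in BGP routing rather than in application code; the node names and numbers here are hypothetical):

```python
# Simplified model of Anycast-style routing: every node advertises the
# same address, and a request lands on the nearest node that still has
# capacity. Real Anycast selection happens in BGP routing, not in
# application code; node names and figures here are hypothetical.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    distance_km: float  # network distance from the requesting user
    load: int           # requests currently being handled
    capacity: int       # maximum concurrent requests

def route_request(nodes: list[Node]) -> Node:
    """Pick the nearest node that has spare capacity."""
    available = [n for n in nodes if n.load < n.capacity]
    return min(available, key=lambda n: n.distance_km)

nodes = [
    Node("frankfurt", distance_km=300, load=1000, capacity=1000),  # full
    Node("paris", distance_km=450, load=200, capacity=1000),
    Node("ashburn", distance_km=6500, load=10, capacity=1000),
]
print(route_request(nodes).name)  # "paris": nearest node with capacity
```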

With such interconnection, intelligent routing, and redundancy, obstacles like inconsistent controls, added latency, and fragmented support are far less likely to stand in the way of cloud migration.

Cloudflare’s distributed cloud network has data centers in 200 cities across 100+ countries, each of which is able to enforce firewall rules, mitigate DDoS attacks, load balance traffic, and cache content for fast delivery to end users, among other capabilities.

This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.

Dive deeper on this topic

Learn more about security strategies for migrating networking functions from hardware to the cloud in The Death of Network Hardware Appliances white paper.

Get the white paper