Cloud migration rarely happens in a single clean sweep. A few organizations are able to standardize on a single cloud provider, but far more end up with some combination of public cloud, private cloud, and on-premise infrastructure.
There are a variety of good reasons to maintain heterogeneous infrastructure. The same cannot be said for the networking functions that support it, which include:
Security, e.g. firewall, DDoS mitigation, and user access management
Performance & reliability, e.g. load balancing, traffic acceleration, and WAN optimization
Somewhat paradoxically, a hybrid approach to protecting and accelerating complex cloud environments actually creates performance issues, security gaps, and support challenges. To avoid these issues, IT and security teams must find ways for these networking functions to work together as seamlessly as possible.
Securing and accelerating hybrid cloud and multi-cloud infrastructure has typically required one or more of the following:
On-premise hardware appliances
Multiple point solutions delivered in the cloud
Unfortunately, both of these approaches bring considerable challenges.
It’s common knowledge that data center hardware is costly and time-consuming to own. It also often faces capacity limitations. For example, 98% of all L3/4 DDoS attacks in Q4 2021 delivered up to 1 million packets per second (pps), according to a Cloudflare report. Meanwhile, a 1 Gbps Ethernet interface can typically deliver anywhere between 80k and 1.5M pps, depending on packet size.
Assuming the interface also serves legitimate traffic, even these ‘small’ packet rate DDoS attacks can easily take down an Internet property. The alternative is maintaining enough capacity for the worst-case scenario, but this is an expensive proposition.
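To see where those capacity figures come from, here is a rough back-of-the-envelope calculation (an illustration, not part of the Cloudflare report): the packet rate a 1 Gbps link can sustain depends on frame size, since each Ethernet frame also carries roughly 20 bytes of on-wire overhead (preamble plus inter-frame gap).

```python
# Rough estimate of the packet rate a 1 Gbps Ethernet link can sustain.
# On the wire, each frame also carries an 8-byte preamble and a
# 12-byte inter-frame gap, i.e. about 20 bytes of overhead in total.
LINK_BPS = 1_000_000_000  # 1 Gbps
OVERHEAD_BYTES = 20       # preamble + inter-frame gap

def max_pps(frame_bytes: int) -> float:
    """Packets per second for a given Ethernet frame size."""
    bits_per_frame = (frame_bytes + OVERHEAD_BYTES) * 8
    return LINK_BPS / bits_per_frame

print(f"64-byte frames:   {max_pps(64):,.0f} pps")   # ~1.49M pps
print(f"1518-byte frames: {max_pps(1518):,.0f} pps") # ~81k pps
```

Minimum-size (64-byte) frames give roughly 1.5M pps, and full-size (1518-byte) frames give roughly 80k pps, matching the quoted range.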
Hardware also creates security gaps. Patches and updates are one example: they must be installed manually, and installation is often delayed by logistics or simply forgotten. And once a patch is released, the corresponding vulnerability becomes a higher-profile target for opportunistic attackers.
What’s more, when deployed in a hybrid cloud approach, hardware appliances leave coverage gaps: it is obviously impossible to install your own hardware in a third-party cloud provider. Different parts of the infrastructure therefore end up protected in different ways, giving security and IT teams less visibility into, and control over, incoming attacks and legitimate traffic.
Cloud-based services have a lower total cost of ownership than hardware, but they can slow down application and network performance if deployed incorrectly. For example, many of them rely on a limited number of specialized data centers — e.g. scrubbing centers for DDoS mitigation. If you or your end users are not near one of those data centers, your traffic will have to travel a long distance to reach it, even if the final destination is nearby.
This backhauling process can add considerable latency. And the problem compounds when an organization uses different providers for different networking functions, and traffic must make many network hops before reaching its destination.
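The latency cost of backhauling is easy to estimate with a hypothetical example (the distances below are illustrative, not measurements): light in optical fiber travels at roughly 200,000 km/s, so every kilometer of detour adds round-trip time even when the user and the destination are close to each other.

```python
# Extra round-trip latency added by backhauling traffic through a
# distant data center (hypothetical distances, for illustration only).
# Signals in optical fiber propagate at roughly 200,000 km/s.
FIBER_KM_PER_S = 200_000

def backhaul_rtt_ms(detour_km: float) -> float:
    """Added round-trip time, in ms, for a one-way detour of detour_km."""
    return 2 * detour_km / FIBER_KM_PER_S * 1000

# A user and origin in the same city, but a scrubbing center 6,000 km away:
print(f"{backhaul_rtt_ms(6000):.0f} ms added")  # 60 ms added
```

A 6,000 km detour adds about 60 ms of round-trip propagation delay alone, before any processing or queuing, and the figure multiplies when traffic hops through several providers in sequence.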
Using different providers for different functions also creates support challenges. When something goes wrong, it can be hard to tell which provider is the cause of congestion or outages. In addition, the time (and thus the costs) required to manage all of these providers can still be high.
How can organizations get around these challenges?
The previously mentioned strategies for securing and accelerating cloud infrastructure share several weaknesses:
Performance issues: Hardware is limited in capacity, while cloud-based services can cause latency—especially when traffic passes through multiple cloud networks for multiple services.
Inconsistent controls and monitoring: Using separate services for different networking functions makes it hard to apply consistent rules, and to monitor global traffic.
Support: Using separate services makes it hard to diagnose where problems lie.
The solutions to all of these problems are integration and global reach—making these networking functions work together as seamlessly as possible, all across the globe.
In practice, this typically means using a distributed cloud network. A distributed cloud network is a network with:
Many points of presence with global distribution. This means end users are always close to the network, eliminating the latency that comes from traffic traveling to and from a distant scrubbing center, VPN server, or other service.
The ability to perform multiple security and performance functions at every point of presence. This means traffic can be scrubbed, routed, and accelerated in a single data center, rather than having to jump from location to location for each function—which also creates redundancy. If one data center becomes overburdened or otherwise goes down, others can take over instantly.
The ability to work with cloud and on-premise infrastructure. This capability, along with the previous ones, allows IT and security teams to set consistent controls and monitor global traffic from a single place.
Distributed cloud networks often rely on Anycast, a network addressing and routing method in which incoming requests can be routed to a variety of different locations or “nodes.” Anycast allows such networks to route incoming traffic to the nearest node with the capacity to process the request efficiently—a critical component in reducing latency for end users.
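The selection logic can be sketched as a toy model (in reality the choice emerges from BGP route propagation, not application code; the locations and numbers below are invented for illustration): among healthy points of presence with spare capacity, a request effectively lands at the nearest one.

```python
# Toy model of Anycast-style node selection: route each request to the
# nearest point of presence (PoP) that still has capacity to serve it.
# Real Anycast selection happens via BGP routing, not application code.
from dataclasses import dataclass

@dataclass
class PoP:
    name: str
    distance_km: float  # network distance from the client
    load: float         # current utilization, 0.0 to 1.0

def serving_pop(pops: list[PoP], max_load: float = 0.9) -> PoP:
    """Return the nearest PoP whose load leaves room to process the request."""
    candidates = [p for p in pops if p.load < max_load]
    return min(candidates, key=lambda p: p.distance_km)

pops = [
    PoP("Frankfurt", 50, 0.95),   # nearest, but overloaded
    PoP("Amsterdam", 370, 0.40),
    PoP("London", 640, 0.20),
]
print(serving_pop(pops).name)  # Amsterdam
```

Note that the overloaded nearest location is skipped, which is also what provides the redundancy described above: when one data center is saturated or down, traffic shifts to the next-nearest automatically.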
With such interconnection, intelligent routing, and redundancy, issues like inconsistent controls, latency, and fragmented support are far less likely to stand in the way of cloud migration.
Cloudflare’s distributed cloud network has data centers in 200 cities across 100+ countries, each of which is able to enforce firewall rules, mitigate DDoS attacks, load balance traffic, and cache content for fast delivery to end users, among other capabilities.
This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.