How can serverless improve performance?
One of the advantages of serverless computing is the ability to run application code from anywhere. By definition, in a serverless architecture there are no origin servers; therefore, it is possible for code to run on edge locations close to end users. Two serverless platforms that take advantage of this ability, and the resulting reduction in latency, are AWS Lambda@Edge and Cloudflare Workers. By comparing the performance of AWS Lambda to that of Cloudflare Workers and Lambda@Edge, it is possible to observe the effects of deploying serverless applications at the edge. Test results (below) indicate that Cloudflare Workers usually have a faster response.
What is AWS Lambda?
AWS Lambda is a serverless infrastructure service provided by Amazon Web Services. Lambda hosts event-driven application functions written in a variety of languages, and it starts up and runs them when they are needed.
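As a rough sketch, an event-driven Lambda function in a Node.js runtime is just a handler that the platform starts up and invokes when an event arrives. The handler name and event shape below are illustrative assumptions, not code from this article:

```javascript
// Minimal sketch of a Node.js Lambda-style handler. AWS spins up an
// instance of this code and invokes the handler only when an event
// (such as an HTTP request) arrives.
async function handler(event) {
  // Respond with the current time of day.
  return {
    statusCode: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ now: new Date().toISOString() }),
  };
}

module.exports = { handler }; // Node.js Lambda runtimes invoke the exported handler
```

Because the platform manages the processes that run this code, the developer only writes the handler; scaling and scheduling happen automatically.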
Where is AWS Lambda deployed?
AWS offers a number of regions for deployment around the world. Typically, a Lambda-hosted application runs in only one of these regions.
What is AWS Lambda@Edge?
AWS Lambda@Edge is an extension of AWS Lambda that runs functions in AWS locations closer to end users, rather than in a single designated region. For example, suppose a user in Chicago requests some information using an application with a serverless architecture. If the serverless application's infrastructure is hosted using AWS Lambda within the US-East-1 region (in Virginia), the request will have to travel all the way to an AWS data center in Virginia, and the response will travel all the way from there back to Chicago. But if the application is hosted using AWS Lambda@Edge, then the request and response will only have to travel to and from the closest AWS region, US-East-2, which is in Ohio. This decrease in distance reduces latency compared to AWS Lambda.
AWS Lambda@Edge vs. Cloudflare Workers
What is latency? How does latency affect user behavior?
In networking, 'latency' is the length of the delay before requested information loads. As latency increases, the number of users who leave the webpage increases as well.
Even small decreases in load time greatly increase user engagement. For example, a study by Walmart showed that each improvement of one second in page load time increased conversions by 2%. Conversely, as latency increases, users are more likely to stop using a website or an application. Latency becomes lower as the distance information has to travel is reduced.
What are points of presence (PoP)?
A point of presence (PoP) is a place where communications networks interconnect, and in the context of the Internet it's a place where the hardware (routers, switches, servers, and so on) that allows people to connect to the Internet lives. When speaking about an edge network, a point of presence is an edge server location. More PoPs on the edge result in faster responses for a greater number of users, because the likelihood that a PoP is geographically near a user increases with more PoPs.
How quickly do serverless functions respond on average?
Cloudflare performed tests comparing AWS Lambda, Lambda@Edge, and Cloudflare Workers in order to demonstrate serverless responsiveness and test the effectiveness of deploying serverless functions across multiple PoPs. (The test functions were simple scripts that responded with the current time of day when they were run.)
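For illustration, a test function of the kind described above could look something like the following Cloudflare Worker sketch. Cloudflare's exact test scripts are not reproduced here, so this is an assumption about their general shape:

```javascript
// Sketch of a Worker-style handler that responds with the current time
// of day. In a deployed Worker script, this object would be the
// default export (export default worker).
const worker = {
  async fetch(request) {
    return new Response(new Date().toISOString(), {
      headers: { "content-type": "text/plain" },
    });
  },
};
```

Because the function does almost no work of its own, measured response times mostly reflect network distance and platform overhead, which is what the tests are designed to compare.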
The chart below displays function response times from AWS Lambda (blue), AWS Lambda@Edge (green), and Cloudflare Workers (red). For this test, the AWS Lambda functions were hosted in the US-East-1 region.
In a serverless architecture, where code runs (geographically speaking) has an impact on latency. If application code runs closer to the user, the application's performance improves because information does not have to travel as far, and the application responds more quickly. Though response times varied for all three services, Cloudflare Workers responses were usually the quickest. Lambda@Edge was next-fastest, exemplifying the benefits of running serverless functions in multiple locations.
Although AWS regions are spread out across the globe, Cloudflare has more total PoPs. Cloudflare also performed tests that were limited to North America, with delays caused by DNS resolution filtered out. The results, displayed below, are another example of how more PoPs reduce latency and improve performance. Note that Cloudflare Workers responses take the least amount of time.
Serverless cold starts: How quickly do new processes respond in a serverless architecture?
In serverless computing, a 'cold start' refers to when a function that has not been run recently has to respond to an event. Such functions need to be 'spun up' before they can run, which can take anywhere from a few milliseconds to several seconds, depending on the platform. This spin-up time adds latency. The chart below compares cold start response times across the three services (as tested by Cloudflare):
Cloudflare Workers respond very quickly, typically in under 200 milliseconds, when cold starting. In contrast, both Lambda and Lambda@Edge functions can take over a second to respond from a cold start.
The differences are largely due to the fact that Cloudflare Workers run on Chrome V8 rather than Node.js. Node.js is built on top of Chrome V8, takes longer to spin up, and has more memory overhead. Usually V8 instances take less than 5 milliseconds to spin up.
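Cold-start gaps like these can be observed with a simple timing helper: the first request to an idle function tends to include spin-up time, while subsequent requests hit an already-warm instance. Below is a rough sketch of such a helper; the URL in the usage comments is hypothetical, and a Node.js 18+ global fetch is assumed:

```javascript
// Time how long an async operation takes, in milliseconds.
async function timeCall(fn) {
  const start = Date.now();
  await fn();
  return Date.now() - start;
}

// Example usage against a deployed function (hypothetical URL):
//   const coldMs = await timeCall(() => fetch("https://example.com/fn"));
//   const warmMs = await timeCall(() => fetch("https://example.com/fn"));
//   // coldMs often includes spin-up time; warmMs usually does not.
```

Averaging many warm requests and comparing them to the first request after a long idle period gives a rough picture of a platform's cold-start penalty.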
Learn more about Cloudflare Workers.