Optimizing application development with serverless

The opportunity in optimizing application development

Innovations like virtual machines (VMs), containers, and the public cloud have improved application development in many ways, but they still place many configuration, maintenance, and optimization decisions on developers rather than handling them within the technology itself.

The more that these responsibilities are placed on developers, the less time they have to build products and internal applications. Unfortunately, many widely adopted technologies task developers with performance optimization, application scaling, security patches, load balancing, and more. These responsibilities introduce the risk of suboptimal choices or mistakes that could deplete budgets or cause vulnerabilities and downtime.

This inefficiency has serious consequences. Alarmingly, the time developers spend on non-coding tasks costs organizations over $85 billion annually.

Thus, removing complexity from application development can improve the developer experience while also saving organizations a significant amount of money.

The industry shift to serverless

Serverless technology was designed to address these issues, specifically improving application development by reducing the burden placed on developers. However, not all serverless platforms are created equal. Early iterations of serverless platforms inherited many of the configuration, scalability, and performance issues associated with the technology they were built on — containers, regions, and the public cloud.

Thus, “serverless” as we know it today is often a leaky abstraction on top of an old model.

Advanced serverless platforms have evolved past these issues with several important architectural improvements. These improvements remove time-consuming decisions from the development process, so teams can spend more time building great products and applications.

Building upon previous development methods

Before serverless, there were VMs and containers. VMs are software-based computers that exist within another computer’s operating system, and containers are standard units of software that hold all the elements an application needs to run.

Both of these technologies allow developers to focus more on their applications and less on managing hardware. However, VMs and containers still saddle developers with management and configuration duties that slow down the overall development process.

To varying degrees, VMs and containers require developers and their partnering IT and security teams to:

  • Manage security patches and Identity and Access Management (IAM) permissions

  • Configure load balancing and networking

  • Ensure availability and bake in redundancy

Container orchestration systems like Kubernetes alleviate many of the configuration requirements associated with containers, including managing scale and redundancy. However, DevOps teams, which focus on solving internal development problems rather than customer-facing issues, need Kubernetes expertise to manage it effectively. Without Kubernetes and a properly trained team, container limitations still apply.

VMs and containers are only a part of a larger picture. Both of these technologies can be used in the public cloud, which introduces limitations of its own.

The public cloud helps simplify various aspects of development, but still leaves layers of configuration to the customer organization like selecting regions, managing security, designing networking solutions, and ensuring availability. The public cloud also requires the manual combining of multiple services like databases, message queues, and storage. Manually configuring and connecting these services is time-intensive, increasing overall time to deployment.

First-gen serverless development brings its own inefficiencies

Serverless development was designed to overcome the challenges associated with VMs, containers, and the public cloud. But early serverless methods were only partially successful.

The primary challenges with first-gen serverless development include:

  • Latency and scalability. Many first-gen serverless platforms operate in the public cloud, which relies on centralized data centers to reduce overhead costs. This model requires customers to select the deployment regions where their resources will be physically located. Centralization introduces latency because code runs far away from many end users. Additionally, scaling and deploying across multiple regions is possible, but complex to configure.

  • Cold starts and CPU throttling. Serverless platforms built on containers struggle with cold starts and central processing unit (CPU) throttling. Cold starts are the loading delays that occur when a serverless function is executed for the first time, because containers can take several seconds to warm up. CPU throttling, by contrast, happens when a platform hits its designated limit of serverless instances and delays requests.

  • Poor developer experience. Developers often have to manage time-consuming tasks such as setting up orchestration templates, sizing the application, and determining memory tiers. These tasks introduce the possibility for expensive mistakes and reduce the amount of time developers spend coding, costing organizations over time.

  • Limited observability. Many serverless development platforms are difficult to monitor because they do not offer adequate observability. Observability is the extent to which an organization can understand what is happening in a distributed system through performance metrics, event logs, and more. Without adequate observability, an organization is not able to efficiently diagnose and fix issues within a serverless application.

  • Stateless nature limits application functionality. The first generation of serverless platforms is effectively stateless. This statelessness facilitates scalability, but makes it difficult to build applications that require strong consistency or live coordination between multiple clients, such as interactive chat, video games, or collaborative editing tools.

  • Cost. Many cloud serverless platforms are subject to additional and often hidden costs, such as API Gateway fees or charges to keep containers warm. As a result, building applications with these first-generation platforms can be expensive, especially at scale.

The serverless movement’s purpose has always been to make the application development process easier, but serverless platforms running on the centralized public cloud do not fully live up to that promise.

Rethinking serverless: how serverless has evolved

The next generation of serverless development platforms has evolved past many of the shortcomings of earlier offerings. By not relying on legacy infrastructure like containers and the centralized public cloud, these solutions offer several improvements and put time back into developers’ hands.

These improvements include:

  • Running at the network edge. The most advanced serverless platforms run on ‘the edge’ — meaning a distributed network of many data centers. In edge networks, computing takes place at the point of presence closest to the end user, which reduces latency and is fundamentally different from the centralized regions of the public cloud. The larger the network, the better it addresses performance and scalability: deploying code to a network of hundreds of data centers offers better performance than deploying to a network of 20. The most advanced edge platforms also support long CPU runtimes for complex workloads.

  • Using isolates rather than containers. This approach removes container-based architecture issues — cold starts and CPU throttling — which are expensive to mitigate. Unlike containers, isolates do not need to be kept warm, so cold starts are not an issue. Isolates also consume less memory, reducing overhead and CPU throttling issues.

  • Fewer upfront decisions. Some newer edge serverless platforms automatically optimize applications for performance and security. Solutions with global edge networks also do not require developers to choose regions to host their workload in, because they deploy code to all data centers on their network. Removing these tedious tasks improves the developer experience.

  • Detailed analytics and logging. Whereas earlier serverless platforms did not provide much analytics, debugging, or logging functionality, advanced edge serverless platforms offer increased observability. Detailed analytics and logging make it easier for development teams to gather the information they need to troubleshoot issues. Additionally, some platforms integrate with more sophisticated monitoring tools, which more complex applications may require.

  • Integrated coordination and storage. This feature makes stateful architecture possible with serverless. Stateful architecture requires consistent data storage, unlike stateless applications, in which data is transitory. It is not possible to create interactive, real-time applications without stateful architecture.

  • Cost-effective. Lightweight isolate-based edge serverless platforms cost less compared to their container-based predecessors. Isolate architecture brings all the scalability and flexibility benefits associated with the cloud, but without the hidden fees and spikes in cost.
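The “integrated coordination and storage” idea above can be sketched in a few lines. The TypeScript sketch below mirrors the general pattern stateful serverless platforms use: a single coordinating object owns its state, so every client sees one consistent value instead of separate per-instance copies. The `Storage` class here is a hypothetical in-memory stand-in for a platform’s persistent storage API, not a real platform interface.

```typescript
// Hypothetical in-memory stand-in for platform-provided persistent storage.
class Storage {
  private data = new Map<string, number>();
  async get(key: string): Promise<number | undefined> {
    return this.data.get(key);
  }
  async put(key: string, value: number): Promise<void> {
    this.data.set(key, value);
  }
}

// A counter whose state lives with one coordinating instance. Because all
// clients reach the same instance, concurrent increments update a single
// consistent value -- the property stateless platforms cannot provide.
class Counter {
  constructor(private storage: Storage) {}

  async increment(): Promise<number> {
    const value = (await this.storage.get("value")) ?? 0;
    const next = value + 1;
    await this.storage.put("value", next);
    return next;
  }
}
```

In a real platform, the runtime (rather than application code) guarantees that only one instance of the object exists, which is what makes the single-owner consistency model work.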

With these improvements, next-generation serverless platforms optimize the overall application development process: eliminating tedious tasks, enabling developer focus, and offering cost savings to the organization.

Optimizing application development with Workers

The right serverless platform removes scalability limitations while unburdening developers and improving the overall efficiency of the application development process. Cloudflare Workers is an edge-based serverless platform that uses smart infrastructure to relieve developers of many upfront decisions. Thanks to Cloudflare’s infrastructure, applications built on Workers are always optimized for security, performance, and reliability.

Scalability is never an issue: Workers runs on the global Cloudflare network, which spans over 320 cities in more than 120 countries, and code is automatically deployed across the entire network with no additional cost or configuration required. Development teams can build advanced applications on the edge that require long CPU runtimes by using Workers Unbound. Because the Workers platform runs on isolates rather than containers, there are no cold starts or CPU throttling.

Workers also offers built-in observability and integrates with more advanced monitoring tools like New Relic and Sentry, in addition to the debugging and logging tools available through the Workers Command Line Interface (CLI). Durable Objects provides the Workers platform with low-latency coordination and consistent storage, making stateful serverless applications a reality. At the same time, Workers saves customers money by removing hidden fees and offering industry-leading pricing.
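As a sketch of how little configuration this model asks of developers, the hypothetical handler below is roughly all the code a simple edge endpoint needs: there are no regions, load balancers, or scaling settings to define, because the platform deploys the same handler to every data center. The route and response body are illustrative, not taken from any real application.

```typescript
// A minimal Worker-style fetch handler. The platform invokes fetch() at
// whichever point of presence is closest to the client; the same code runs
// everywhere with no per-region configuration.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      // Respond with a small JSON payload.
      return new Response(JSON.stringify({ message: "Hello from the edge" }), {
        headers: { "content-type": "application/json" },
      });
    }
    // Any other path falls through to a 404.
    return new Response("Not found", { status: 404 });
  },
};

// In a real Worker, this object would be the module's default export.
```

The handler uses the standard Request/Response web APIs, which is part of what keeps the developer experience simple: there is no framework-specific server setup to learn or maintain.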

Workers enables development teams to focus on building products rather than maintenance and configuration, improving the developer experience and benefiting the company financially over time.

This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.

Key takeaways

After reading this article, you will be able to understand:

  • What’s at stake in an inefficient development architecture

  • How development methods have evolved, bringing us to serverless

  • How early iterations of serverless failed to simplify application development

  • The key differences in next-gen serverless

Related resources

Dive deeper into this topic.

Learn more about serverless platforms like Workers in The Forrester New Wave: Function-as-a-Service Platforms report.
