Serverless has taken the tech industry by storm, with major cloud providers investing heavily in offerings like IBM Cloud Functions (built on Apache OpenWhisk), AWS Lambda, Azure Functions and Google Cloud Functions.

With the promotional push behind this type of architecture, also known as function-as-a-service (FaaS), and its strong appeal to software developers, serverless is expected to keep growing in the coming years.

Making the switch to serverless can be daunting: it's a new approach to developing applications, and there's a lot to learn. On top of that, several misconceptions about serverless have already formed within the industry, deterring companies from making the switch.

While it's not for everyone, and there are still cases where a traditional architecture is a better fit, most mid-sized companies will find that serverless is a good option for their project needs and budget.

What is serverless?

At its core, the concept of serverless is quite simple: the cloud provider manages the infrastructure, freeing up the valuable time software developers previously spent provisioning, scaling and maintaining servers so they can focus instead on coding and improving the application.

Using a supported language – Node.js and Python are typically the most popular serverless runtimes – the developer writes and uploads the code, and the service provider handles the rest. This allows for a much faster time to market than the traditional approach.
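
To make that workflow concrete, here is a minimal sketch of a Python function for AWS Lambda; the handler name and payload fields are illustrative, not taken from any particular project. This file is the entirety of what the developer uploads – provisioning, scaling and patching the machine that runs it are the provider's problem.

    import json

    def handler(event, context):
        # `event` carries the trigger's payload; `context` carries runtime metadata.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }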

But while this approach has been coined "serverless," let's be clear – the servers and virtual machines still exist; they're just managed entirely by the service provider.

Is serverless right for my application? 

There are three key areas to consider when evaluating whether serverless is the right fit for your application: pricing, scalability and performance. Let's look at each. 

Pricing 

The most common misconception about serverless is that it's expensive. In fact, its affordable pricing is typically a reason companies make the switch from a traditional architecture. 

Serverless is based on a pay-per-execution model, with the intent to minimize costs by charging only while your code is running. Once the code has finished executing, the function terminates and you stop paying. And the cost to execute a single request in a serverless architecture is a fraction of a penny.

For example, AWS Lambda offers the first million requests free and the next million for just 20 cents. Tiered pricing is also offered for caching: with Amazon API Gateway, a cache is charged by memory size per hour, and a 0.5 GB cache costs two cents per hour. Depending on the number of users, most mid-sized applications could operate on serverless for just a few dollars per month.
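
As a rough illustration of how that request pricing adds up, here is a back-of-the-envelope sketch using only the per-request figures quoted above. It ignores Lambda's separate compute charge (billed per GB-second of execution time), and the five-million-request volume is a hypothetical.

    # Request-only cost estimate using the figures above; ignores the
    # separate compute charge billed per GB-second of execution.
    FREE_REQUESTS = 1_000_000
    PRICE_PER_MILLION = 0.20  # USD, after the free tier

    def monthly_request_cost(requests: int) -> float:
        billable = max(requests - FREE_REQUESTS, 0)
        return billable / 1_000_000 * PRICE_PER_MILLION

    # A hypothetical mid-sized app handling 5 million requests a month:
    print(monthly_request_cost(5_000_000))  # 0.8 -> 80 cents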

Alternatively, in a traditional architecture, costs can add up quickly. Servers must always be ready for requests, which drives significant back-end costs, and companies are often forced to over-purchase pre-configured compute capacity to cover potential spikes in traffic. Add the expense of provisioning servers and employing a dedicated server team, and the bill grows further. Serverless eliminates all of this spend.

Lastly, when comparing serverless to a multicloud architecture, it's important to be aware of another common misconception: that multicloud is automatically cheaper. Companies often assume that spreading workloads across on-prem data centers and the public cloud saves money. In practice, it frequently ends up costing about the same – just spread across more servers. This isn't to say serverless is always the best option, but from a pricing perspective it is generally a good one, since companies can rest assured they are only paying while code is running.

Scalability

Another serverless misconception is that it's infinitely scalable. 

Serverless does scale automatically, which is especially useful during spikes in demand. Gone are the days of coding for scale or adding more servers. But there is a catch.

In a serverless architecture, functions are not exposed directly; they are invoked through triggers such as API Gateway, SNS, CloudWatch Events and several others, each of which has throttle limits in place. API Gateway is by far the most popular trigger, accounting for 57.6 percent of AWS Lambda invocations. Its default throttle limit is 10,000 requests per second, with a burst capacity of 5,000 requests. For most mid-sized applications, this is a considerable ceiling that will most likely never be reached.

However, for large applications with millions of users, intense networking needs, high volumes of video streaming or other latency-sensitive activity, this throttle limit becomes an issue simply because of the sheer volume of function invocations. While these types of applications are not an ideal fit for serverless, some of that volume can be absorbed through caching. If the same data is being pulled repeatedly, a cache can answer those identical requests before they reach the function and eat into the throttle limit – but it comes at a price that may not make sense for the business.
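
As a sketch of what turning on that cache can look like – assuming API Gateway is the trigger, and using boto3 with placeholder API and stage identifiers – the response cache can be enabled on a deployed stage so repeated identical requests are served from memory instead of invoking the function:

    import boto3

    apigw = boto3.client("apigateway")

    # Enable the stage-level response cache (0.5 GB is the smallest,
    # two-cents-per-hour tier), then turn caching on for every method
    # with a five-minute TTL.
    apigw.update_stage(
        restApiId="abc123",   # placeholder REST API ID
        stageName="prod",     # placeholder stage name
        patchOperations=[
            {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
            {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
            {"op": "replace", "path": "/*/*/caching/enabled", "value": "true"},
            {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "300"},
        ],
    )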

Overall, most applications will find the scalability serverless provides both suitable and convenient.

Performance 

The third key element to evaluate when determining whether serverless is right for your application is performance. 

In general, applications made up of short-lived tasks, such as sending emails, perform best in a serverless architecture. Long-running tasks that need more time to execute, like uploading video files, are not ideal for it. AWS Lambda, for instance, allows up to five minutes to execute a task – anything longer requires another function to be invoked to finish the work.
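
One common way to stay under that cap – sketched below with hypothetical event fields and a stand-in process() function – is to watch the remaining execution time and hand any leftover work to a fresh asynchronous invocation of the same function:

    import json
    import boto3

    lambda_client = boto3.client("lambda")

    def handler(event, context):
        items = event.get("items", [])
        # Work until we get close to the timeout, keeping a 30-second margin.
        while items and context.get_remaining_time_in_millis() > 30_000:
            process(items.pop(0))
        if items:
            # Hand the leftover items to a fresh invocation of this function.
            lambda_client.invoke(
                FunctionName=context.function_name,
                InvocationType="Event",  # asynchronous, fire-and-forget
                Payload=json.dumps({"items": items}),
            )

    def process(item):
        # Stand-in for the real unit of work (e.g., transcoding one chunk).
        print("processed", item)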

It's also important to understand that serverless applications run in stateless compute containers, which do not persist state between requests. When an event is triggered, the platform spins up a container, executes the task and keeps the container available for subsequent requests for a few minutes before terminating it. Writing code that takes advantage of these still-active containers – referred to as "warm containers" – is key to decreasing latency and improving performance in a serverless architecture. Code that doesn't can suffer "cold starts": the added latency incurred when a request arrives, no warm container is available and a new one must be initialized from scratch.
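
In Python, taking advantage of warm containers usually means doing expensive setup at module scope, which runs once per container at cold start, rather than inside the handler, which runs on every request. A minimal sketch, with a placeholder bucket name:

    import boto3

    # Module scope runs once, at cold start; the client and config below
    # are then reused by every invocation that lands on this container.
    s3 = boto3.client("s3")
    BUCKET = "my-app-assets"  # placeholder bucket name

    def handler(event, context):
        # On a warm invocation the S3 client already exists, so this
        # request pays only for the actual fetch.
        obj = s3.get_object(Bucket=BUCKET, Key=event["key"])
        return {"statusCode": 200, "body": obj["Body"].read().decode()}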

Cold starts are one of the biggest concerns companies have when it comes to choosing serverless, but cloud providers are continuing to make improvements that reduce how often they occur. For example, AWS Lambda announced it will leverage AWS Hyperplane to improve the way customers' functions connect to their own Virtual Private Clouds (VPCs), which will help reduce the latency of creating and attaching a network interface at cold start.

Summary

While serverless is not yet a widespread architecture, it is making significant strides toward the mainstream. As it stands today, serverless proves to be an overall good choice for new and mid-sized applications, with its low operational costs, convenient automatic scaling and the freedom it gives teams to focus on fine-tuning the user experience.

However, for companies with larger, established applications and legacy infrastructure, migrating to serverless can be a challenge – and more costly than it is for a new application just getting started. A massive user base and latency-sensitive functions can put at risk the level of performance current users are accustomed to in a traditional or multicloud environment. But with cloud providers continuing to make improvements, and with the growth anticipated in the coming years, serverless is certainly something to watch.

To learn more about serverless architecture, check out my video walk-through of a Continuous Integration and Continuous Deployment pipeline for AWS Lambda.