Are you really building a serverless system?

Serverless computing (or simply “serverless”) is an up-and-coming architecture choice in software development organizations. There are two flavors of serverless computing: back end as a service (BaaS), which provides app developers with APIs for common services such as push notifications, and function as a service (FaaS), which allows developers to deploy code (“functions”) in the cloud that executes in response to events, without having to worry about the execution environment, be it servers or containers.

When developers use FaaS, the cloud provider is responsible for automatically deploying the function and scaling it as the workload changes. In this article, I focus on the FaaS flavor of serverless and use the terms “FaaS” and “serverless” interchangeably.

Serverless proponents claim that it makes the development and deployment of software easier, faster, and cheaper than ever. Serverless architectures enable development teams to focus on business value, rather than operational overhead. But sometimes, in the rush to adopt a promising new architecture, organizations lose track of the goals of serverless architectures—a common problem with architectural paradigm shifts. As a result, the systems they build don’t quite deliver on the benefits or are more challenging to maintain than they should be. Just because your code implements a FaaS interface doesn’t mean you are doing serverless right.

Before I dive into some of the ways serverless implementations go wrong, let’s take a step back and ask what you are trying to achieve by using FaaS. What is your real goal?

Organizations typically adopt serverless architectures with two goals in mind:

  • Simple operations model. Serverless architectures let you focus on the one thing that matters: the business process your application handles. The FaaS frameworks offered by cloud providers all promise to scale up and down with usage. If your application has to handle more events, they run as many copies of your function as required to meet the demand. You don’t need to worry about scaling, and you don’t need to plan capacity for the next quarter. Monitoring is much simpler as well: you still need to monitor the business outputs of your application, but you don’t need to monitor the underlying infrastructure yourself.
  • Pay-per-use model. Cloud providers charge for FaaS based on actual usage: for example, the number of requests handled and the memory-seconds consumed. The calculation can be a bit complex (a back-of-the-envelope sketch follows this list), but the underlying logic is difficult to argue with. Compare this to more traditional computing, both on-premises and in the cloud, where you pay for (and capacity plan around) fixed capacity whether you fully use it or not.
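To make the pay-per-use logic concrete, here is a minimal cost model in Python. The per-request and per-GB-second prices are illustrative assumptions, not any provider’s actual rates, and real bills add wrinkles such as free tiers and rounding of billed duration.

```python
# Back-of-the-envelope FaaS cost model. The prices below are assumed
# placeholders for illustration; check your provider's pricing page.
PRICE_PER_MILLION_REQUESTS = 0.20  # dollars (assumption)
PRICE_PER_GB_SECOND = 0.0000167    # dollars (assumption)

def monthly_faas_cost(requests: int, avg_duration_s: float, memory_gb: float) -> float:
    """Estimate a month's FaaS bill from actual usage."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = requests * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# 5 million requests a month, 200 ms each, with 512 MB of memory:
print(f"${monthly_faas_cost(5_000_000, 0.2, 0.5):.2f}")  # -> $9.35
```

The key property: if requests drop to zero, the bill drops to zero, which is exactly what fixed-capacity provisioning cannot do.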

The overarching goal is to reduce operational complexity and let development teams focus on delivering business value at lower cost.

If you agree that these are the main goals of serverless architectures, you need to watch out for antipatterns that undermine them: patterns that prevent you from scaling simply or from paying only for what you use. Let’s look at three such antipatterns I’ve seen in the wild, along with solutions that can help counter them:

Antipattern No. 1: Misuse of scheduled events

AWS Lambda provides the ability to schedule functions, so they run every 15 minutes, for example. But just because you can schedule functions doesn’t mean it is a good idea. Some tasks do need to happen every day or every hour—database backup, for instance—in which case the use of scheduled events is justified. But I’ve seen scheduled events used to check whether there is any work to be done.

Imagine an order-processing function that is scheduled to run every ten minutes to check whether there are any orders and, if there are, to process them. In this scenario, you pay for the function invocation whether or not there are any orders to process, so you will not achieve the pay-per-use goal. In addition, each run triggers only a single function, so when there are more orders than usual you get no extra resources to handle them, and you miss out on the promise of easy scalability. Not to mention that you introduce up to a ten-minute lag in the processing of new orders. A sketch of this polling pattern follows.
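Here is a minimal sketch of the polling antipattern in Python, shaped like an AWS Lambda handler. The data-access helpers are hypothetical stand-ins for a real order store; the point is the shape of the handler, which runs on a schedule whether or not there is any work to do.

```python
def fetch_unprocessed_orders() -> list[dict]:
    """Hypothetical query against an order store; often returns nothing."""
    return []

def process_order(order: dict) -> None:
    """Hypothetical business logic."""
    print(f"processing order {order}")

def handler(event, context):
    # Invoked every ten minutes and billed every time, even when the
    # backlog is empty. A single invocation works the whole backlog,
    # so a spike in orders gets no extra resources.
    for order in fetch_unprocessed_orders():
        process_order(order)
```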

Solution

A much better implementation approach is to make sure each arriving order triggers a function that will process it. You can do this via an API Gateway, by using the built-in integrations with other cloud services such as S3, or via a Kafka connector. This way, if there are no orders, you don’t pay. If there are more orders than usual, you trigger more functions and have more resources to handle them all. Best of all, you are now handling orders as they arrive, in real time, rather than in arbitrary batches.
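The same business logic, restructured so that each order invokes the function directly. This sketch assumes an API Gateway-style proxy event carrying one order as a JSON body; the hypothetical process_order helper mirrors the polling version above.

```python
import json

def process_order(order: dict) -> None:
    """Hypothetical business logic, same as in the polling sketch."""
    print(f"processing order {order}")

def handler(event, context):
    # One invocation per order: the platform runs as many copies as
    # the order rate demands, and no orders means no invocations and
    # no charges.
    order = json.loads(event["body"])
    process_order(order)
    return {"statusCode": 202, "body": json.dumps({"accepted": order.get("id")})}
```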

This is basically the definition of stream processing: handling events as they arrive, rather than on arbitrary timelines. It is also event-driven: you use events to drive the functions, with events serving as the primary mechanism both for sending notifications and for sharing data. By designing an architecture that takes full advantage of the serverless framework, you are also taking your first steps toward an event-driven stream processing architecture.

Antipattern No. 2: Dependency on a database

Serverless functions are almost always stateless (Azure Durable Functions is a very interesting exception). You can’t reliably maintain any sort of state in the application itself between invocations, which means that any nontrivial function will rely on an external database to maintain state.
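A small sketch of why in-function state is unreliable. Module-level variables survive only for as long as the provider happens to reuse one particular container instance; a cold start, a scale-out, or a scale-in silently resets them.

```python
invocation_count = 0  # lives and dies with a single container instance

def handler(event, context):
    global invocation_count
    # Each concurrent copy of the function has its own counter, and a
    # cold start resets it to zero, so this number means very little.
    invocation_count += 1
    return {"seen_by_this_container": invocation_count}
```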

In most cases, databases don’t have a “pay per use” model, which means that you’ll be paying to store the state whether you use it or not. And what about scalability and operations overhead? Operations professionals generally agree that stateful services and data stores are where 90 percent of the operational effort goes; operations, capacity planning, and scalability for stateless applications are much less difficult. If you are going to run the database yourself, your serverless architecture as a whole will not achieve operational simplicity. So you have to find a managed service. Not only that, you need a managed service that is elastic enough to scale with the serverless functions and does not require capacity planning. This isn’t trivial: even the most scalable managed data stores out there have complex pricing models that require you to plan your capacity well in advance. But one of the reasons you chose a serverless architecture in the first place was to avoid capacity planning.

Solution

Instead of worrying about database scalability, some experts recommend that functions connect to services instead of data stores. This is a good choice, assuming there is a managed service that can handle the data you need, with APIs that work for you and elastic scaling. For example, if your function can read and write data to Salesforce.com rather than to a database, that is a good idea. Using the pub/sub features of Apache Kafka, which scales as well as FaaS does, is also a good solution, provided that someone else is running it for you.
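As an illustration, here is a sketch of a function that hands its state off to a managed Kafka service instead of a self-run database, using the confluent-kafka Python client. The broker address and topic name are placeholder assumptions; in practice they would point at a cluster someone else operates for you.

```python
from confluent_kafka import Producer

# Placeholder broker address; assume a managed Kafka cluster.
producer = Producer({"bootstrap.servers": "managed-kafka.example.com:9092"})

def handler(event, context):
    # Publish the incoming payload and keep no state in the function.
    # Durability, ordering, and scaling are the managed service's
    # problem, not yours.
    producer.produce("orders", value=event["body"], key=event.get("order_id"))
    producer.flush()  # block until the broker acknowledges the message
    return {"statusCode": 202}
```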

But keep in mind that if your application is unique enough that you need to write your own service to act as a connection pool and data layer in front of your database, serverless is probably the wrong architecture. When you add this service, you recreate the same operational complexity and fixed costs that you thought you would avoid by going serverless.

Antipattern No. 3: On-premises serverless

There are many on-premises serverless frameworks out there: OpenFaaS, Kubeless, OpenWhisk, Knative, and probably more. Using one is justified if you need to run the same applications in your own data center as in the cloud. But you need to be aware that on-premises serverless has none of the benefits (operational simplicity, pay-per-use cost) that serverless promises.

No matter which framework you use, if you are running serverless in your own data center, you are paying for the infrastructure whether you use it or not. Your own operations team is running that infrastructure, monitoring it, maintaining it, scaling it, and planning its capacity. There are no cost savings, and operationally it is probably more complex than the alternatives, because you are running your application on top of a serverless framework, which runs on top of your container orchestration framework, which runs on containers on top of your operating system. You could simplify a lot by stripping away a layer or two.

Solution

I recommend running on-premises serverless only if the benefits of using the same serverless architecture for both on-premises and in public cloud outweigh the pains of running an additional framework that isn’t necessary for the on-premises deployment.

There are strong benefits to building serverless architectures. Done right, they let your team focus on business value rather than on operations, and in the right use cases they can represent significant cost savings. Serverless seems particularly successful for simple, stateless applications: simple ETL transformations, serving static web content, handling simple business events. When the requirements become more complex, it is important to keep the big picture in mind and check whether the architecture as a whole still delivers the promised benefits. Just because you implemented a FaaS interface doesn’t mean you built a serverless architecture.
