
In the thriving world of enterprise cloud computing, moving new development to serverless functions in AWS Lambda has become a highly productive workflow. With it, development teams are finding new agility, and organizations are saving on compute costs. However, as the number of deployed Lambda functions grows, it becomes important to establish practices that keep their collective running costs in check. Here are six considerations to help you minimize wasteful spending:

1. Knowing the Factors at Play

The first thing to understand is how Amazon charges customers for executing Lambda functions. The first factor is the memory and CPU allocation setting. This is a single selection configured on a per-function basis: selecting a higher amount of memory brings a proportionally faster CPU allocation and a higher cost per unit of execution time. The second factor is the execution time of the Lambda function itself: the cost of an execution increases as its runtime increases.

An important note to make here is that pricing increases for every 100ms of execution time, with the billed duration rounded up to the nearest 100ms. This means that a decrease in a function's run time from 299ms to 201ms would not create any savings, since both round up to a billed 300ms, and a function that already runs consistently under 100ms cannot be optimized any further from a billing standpoint.
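To make the rounding math concrete, here is a minimal Python sketch of the billing arithmetic. The rates are illustrative placeholders, not quoted prices; confirm current numbers against the AWS Lambda pricing page.

    import math

    # Illustrative rates only -- confirm against the current AWS pricing page.
    PRICE_PER_GB_SECOND = 0.0000166667
    PRICE_PER_REQUEST = 0.0000002

    def billed_cost(duration_ms, memory_mb):
        """Cost of one invocation, rounding duration up to the next 100ms tier."""
        billed_ms = math.ceil(duration_ms / 100) * 100
        gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
        return gb_seconds * PRICE_PER_GB_SECOND + PRICE_PER_REQUEST

    # 299ms and 201ms land in the same 300ms billing tier, so cost is identical;
    # getting under the 200ms boundary is what actually drops the price.
    print(billed_cost(299, 512) == billed_cost(201, 512))  # True
    print(billed_cost(200, 512) < billed_cost(299, 512))   # True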

2. Finding the Sweet Spot

Tuning the memory allocation for a Lambda function requires some intuition about the nature of the code. Most of the time, it is enough to take a memory-centric approach to tuning: leave the allocation at the cheapest level and increase the setting only if executions fail with an out-of-memory error. However, it's best to approach a function performing more CPU-intensive processing a little differently. In these cases, increasing the memory parameter to get a more expensive CPU allocation can provide large enough improvements in execution time that it saves money in the long run. While this is an effective manual process for tuning these parameters, if you have many functions, you'll want to invest in a more automated approach. A great resource for getting started on that using AWS Step Functions can be found here.
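As a rough sketch of what the manual sweep looks like, the following Python script steps a function through several memory settings and reads the billed duration from each invocation's tail log. The function name and payload are hypothetical placeholders, and the parsing assumes the standard REPORT line that Lambda writes at the end of each execution log.

    import base64
    import json
    import re

    import boto3

    lambda_client = boto3.client("lambda")
    FUNCTION_NAME = "my-function"  # placeholder: substitute your function's name

    def billed_ms(function_name, payload):
        """Invoke once and parse 'Billed Duration' out of the tail of the log."""
        response = lambda_client.invoke(
            FunctionName=function_name,
            Payload=json.dumps(payload).encode(),
            LogType="Tail",  # return the last 4KB of the execution log
        )
        log_tail = base64.b64decode(response["LogResult"]).decode()
        return int(re.search(r"Billed Duration: (\d+)", log_tail).group(1))

    for memory_mb in (128, 256, 512, 1024):
        lambda_client.update_function_configuration(
            FunctionName=FUNCTION_NAME, MemorySize=memory_mb
        )
        # Wait for the configuration change to finish before invoking.
        lambda_client.get_waiter("function_updated").wait(FunctionName=FUNCTION_NAME)
        ms = billed_ms(FUNCTION_NAME, {"test": True})
        print(f"{memory_mb}MB -> billed {ms}ms")

Multiplying each billed duration by its memory tier's price gives the cost comparison; the cheapest memory setting is not always the cheapest configuration.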

3. Understanding Cold Start Overhead

Since AWS Lambda is a serverless compute service, there is overhead on fresh executions while the service deploys and starts your code. This is called a cold start scenario, and that overhead goes into the execution time for which you are billed. To reduce this overhead, unnecessary dependencies, such as those used only for testing and local development, should be scoped in the build so that they are not packaged into the deployed function. Minification should also be applied where possible to reduce your package size.
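One rough way to see what your dependencies are costing you is to time the module-level work that runs during a cold start. This Python sketch assumes a handler module whose imports dominate startup; a smaller package generally both downloads faster and imports less.

    import time

    _start = time.time()

    # Module-level imports run during the cold start's init phase.
    # Every dependency you can drop from the package shrinks this window.
    import boto3  # example of a heavier dependency

    _init_seconds = time.time() - _start

    def handler(event, context):
        # On a warm start the module is already loaded, so this costs nothing.
        return {"init_seconds": _init_seconds}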

4. Optimizing Warm Starts

When a previous invocation of a function has completed but its execution environment hasn't been reclaimed yet, the container can be reused for a subsequent execution. This is called a warm start scenario, and it's AWS Lambda's way of bypassing the overhead involved in a cold start. It allows the function to start almost instantly, and it reuses, without re-instantiating them, the variables and services defined outside the function's main event handler code. By keeping as many object definitions as you can scoped outside the event handler, you optimize the function's execution time in warm start scenarios.
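For example, in a Python function, an SDK client created at module scope is constructed once per container and then reused on every warm invocation. The table name below is a hypothetical placeholder.

    import boto3

    # Module scope: runs once per container, on the cold start only.
    # Warm starts reuse this client and its underlying connections for free.
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("orders")  # hypothetical table name

    def handler(event, context):
        # Handler scope: runs on every invocation, so keep it lean.
        result = table.get_item(Key={"id": event["id"]})
        return result.get("Item")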

5. Considering Provisioned Concurrency

Another way to increase the number of executions that land in warm start scenarios, and so circumvent the overhead of cold starts, is to "pre-warm" containers using Amazon's provisioned concurrency option. This is useful in cases where large bursts of executions can be predicted ahead of demand.
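As a minimal sketch, provisioned concurrency can be configured through the API against a published version or alias; the function name, alias, and count below are placeholders.

    import boto3

    lambda_client = boto3.client("lambda")

    # Provisioned concurrency attaches to a version or alias, never $LATEST.
    lambda_client.put_provisioned_concurrency_config(
        FunctionName="my-function",          # placeholder
        Qualifier="live",                    # placeholder alias or version
        ProvisionedConcurrentExecutions=50,  # containers kept warm for the burst
    )

Keep in mind that provisioned concurrency carries its own charge for as long as it is configured, so it only pays off when the pre-warmed containers actually absorb traffic.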

6. Matching Runtime Choice to Use Case

The primary runtime options for an AWS Lambda function include NodeJS, Python, Java, Ruby, Go, and .NET. Besides personal language comfort and preference, there are considerations to make when choosing a runtime that will affect execution time and, by extension, cost. The main consideration is whether a compiled language such as Java or an interpreted language such as Python will execute faster for the task at hand. Compiled languages generally execute faster than interpreted ones, but at the expense of additional cold start overhead. This makes runtimes using interpreted languages much better suited to the shortest and simplest of functions. Runtimes using compiled languages should be preferred for applications with heavier computational complexity, or in cases that can be expected to see many warm start scenarios, such as high-frequency APIs or applications utilizing provisioned concurrency.

For your data streaming and modern data integration needs, reach out to Zirous today. We're partnered with and certified in AWS and several other vendors to provide you with the capabilities to drive your business forward.
