Overview:

AWS Lambda functions are great for keeping your server costs low and availability high. However, one major drawback is the warm-up delay (a cold start) that occurs whenever your function needs to increase its number of concurrent handlers. This delay can be as long as 10 seconds. If you are hosting a user-facing endpoint or website, that delay is often unacceptable.

We will keep a set number of your Lambda functions constantly pre-warmed by using multiple servers to send scheduled, precisely timed concurrent requests to your Lambda function. That way, you can handle your specified number of concurrent users without forcing those users to wait through the warm-up delay. We only ping your endpoint once every few minutes - frequently enough to keep the Lambda functions warm, but not so frequently that it ties up your functions or drives up your Lambda usage costs.
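For illustration only, here is a minimal Python sketch of that idea (not our actual implementation). The endpoint URL, query parameters, and concurrency value are placeholders; in practice our servers handle the scheduling and timing for you.

    # Illustrative sketch only: fire N requests at the same moment so that N
    # separate Lambda instances are invoked and stay warm. The endpoint URL,
    # parameter names, and concurrency value below are placeholders.
    from concurrent.futures import ThreadPoolExecutor

    import requests

    ENDPOINT = "https://example.execute-api.us-east-1.amazonaws.com/prod/my-endpoint"
    CONCURRENCY = 5  # number of Lambda instances to keep warm

    def ping(i):
        # Each request that is in flight at the same time occupies its own
        # Lambda instance until it responds.
        return requests.get(ENDPOINT, params={"warmer": "true", "id": i}).status_code

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        statuses = list(pool.map(ping, range(CONCURRENCY)))

    print(statuses)  # e.g. [200, 200, 200, 200, 200]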

You will know it is working by looking at your AWS CloudWatch logs. Each separately invoked Lambda instance writes to its own log stream. If instead the same Lambda instance were being invoked multiple times, you would see all of those requests in a single log stream.
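If you prefer to script that check, the sketch below uses boto3 to list the log streams for your function's log group. It assumes you have boto3 installed and AWS credentials configured; the function name is a placeholder.

    # Lambda log groups are named /aws/lambda/<function-name>;
    # "my-function" below is a placeholder for your own function name.
    import boto3

    logs = boto3.client("logs")
    response = logs.describe_log_streams(
        logGroupName="/aws/lambda/my-function",
        orderBy="LastEventTime",
        descending=True,
    )
    # Expect one stream per warmed Lambda instance.
    for stream in response["logStreams"]:
        print(stream["logStreamName"])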



Usage Instructions:

All you have to do is tell us what endpoint you want us to target, the request type (GET or POST), and the number of concurrent Lambda functions you want to keep warm. We recommend you set that number to the minimum number of handlers you need to serve your peak traffic.
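If you are unsure what that minimum is, a rough rule of thumb (not an official formula of ours) is peak requests per second multiplied by average request duration. The traffic numbers in this sketch are made up for illustration.

    # Rough estimate only; the numbers below are placeholders.
    peak_rps = 100        # requests per second at your busiest moment
    avg_duration_s = 0.2  # average time your handler takes, in seconds

    # Approximately how many requests are in flight at once at peak load.
    handlers_needed = peak_rps * avg_duration_s
    print(handlers_needed)  # 20.0 -> keep about 20 Lambda functions warm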



Example GET Request

Variables can be passed to your endpoint using URL encoding.
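For instance, using Python's requests library, a URL-encoded GET request might look like the following. The endpoint URL and parameter names are placeholders for your own.

    import requests

    response = requests.get(
        "https://example.execute-api.us-east-1.amazonaws.com/prod/my-endpoint",
        params={"user_id": "123", "action": "lookup"},  # sent as ?user_id=123&action=lookup
    )
    print(response.status_code, response.text)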





Example POST Request

Variables can be passed to your endpoint using a JSON body.
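For instance, again using Python's requests library, a POST request with a JSON body might look like the following. The endpoint URL and field names are placeholders for your own.

    import requests

    response = requests.post(
        "https://example.execute-api.us-east-1.amazonaws.com/prod/my-endpoint",
        json={"user_id": "123", "action": "lookup"},  # sent as an application/json body
    )
    print(response.status_code, response.text)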





Feedback

Your feedback and feature requests are very important to us. All feedback is taken seriously and greatly appreciated! Please send your comments to support@lambdawarmer.com, and someone from our team will get back to you shortly.