Harnessing AWS Lambda: Essential Tips for Custom Runtime Use
Creating an AWS Lambda function may be a familiar task for Cloud or DevOps professionals, often developed using supported runtimes like Python or Node.js. Typically, these functions perform basic operations or integrate with AWS services such as Cognito or API Gateway for identity validation. Through this process, you likely grasped fundamental AWS Lambda functionalities, including selecting a runtime, uploading code, managing environment variables, and assigning roles with execution permissions.
However, what if your code is written in a runtime that AWS does not natively support, like Shell or Go? You might be tempted to convert that code to Python or seek help from AI tools for translation. While this may suffice for simple functions, strict security policies in larger organizations often prohibit sharing code with third-party AI services.
This article will guide you through creating a Docker image that provides the necessary custom runtime for your function. A basic understanding of Docker and AWS Lambda concepts will be all you need.
AWS Lambda: Understanding the Basics
Before we delve into configuring a custom runtime, let's demystify the serverless execution process of AWS Lambda.
AWS Lambda operates through three main components:
- Lambda Service: This manages provisioning, scaling, monitoring, and logging, overseeing the function's lifecycle, including code deployment and invocation.
- Execution Runtime: This is the environment where your code executes. AWS Lambda supports various runtimes like Node.js, Python, and Java, while also allowing custom runtimes for other programming languages.
- Runtime API: This interface facilitates communication between the Lambda service and the execution runtime.
Our primary focus will be on the runtime and function code, which we need to modify to ensure our Lambda function operates correctly. Generally, AWS Lambda runtimes have an event loop that polls the Runtime API for incoming events, executes the function handler upon receiving an event, and returns the response back to the client via the Runtime API.
AWS provides an OS-only runtime, offering language support and additional configurations like environment variables and certificates necessary for executing your script. All that’s required is to add a small script, bundle it with your code, build a Docker image based on the AWS OS-only runtime, upload it to ECR, and use it in your AWS Lambda. It’s straightforward!
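As a sketch, the bundle-and-publish steps above might look like the following, assuming an ECR repository named custom-lambda already exists; the account ID and region are placeholders you would replace with your own:

```shell
# Authenticate Docker to your ECR registry (account ID and region are placeholders)
aws ecr get-login-password --region eu-west-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com

# Build the image from the Dockerfile shown later in this article
docker build -t custom-lambda:latest .

# Tag and push the image to ECR so Lambda can pull it
docker tag custom-lambda:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/custom-lambda:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/custom-lambda:latest
```

Once the image is in ECR, you can point a container-image Lambda function at it.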
Creating a Custom Runtime
In this section, I will share the essential scripts and code needed to construct a Docker image for your AWS Lambda custom runtime. Prepare for an exciting coding adventure!
Here’s the bootstrap script:
#!/bin/sh
# Exit on error, treat unset variables as errors, and fail a pipeline
# if any command within it fails
set -euo pipefail

# Initialization - load the function handler script.
# HANDLER has the format <script name>.<function name>, e.g. "function.handler"
source "$LAMBDA_TASK_ROOT/$(echo "$HANDLER" | cut -d. -f1).sh"

# Processing loop
while true; do
  HEADERS="$(mktemp)"
  # Get the next event. The HTTP request blocks until one is received
  EVENT_DATA=$(curl -sS -LD "$HEADERS" \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  # Extract the request ID by scraping the response headers
  REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)
  # Execute the handler function from the sourced script
  RESPONSE=$("$(echo "$HANDLER" | cut -d. -f2)" "$EVENT_DATA")
  # Send the response back to the Runtime API
  curl -sS "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" \
    -d "$RESPONSE"
done
The set -euo pipefail line configures the shell for strict error handling: the script terminates immediately if any command fails, treats unset variables as errors, and makes a pipeline fail if any command within it fails.
This script initiates by loading the function handler. It then enters a loop to query the Runtime API for new events. Upon receiving an event, it extracts the request ID, executes the main shell script with the event data, and sends the response back to the Runtime API.
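The loop above only reports successes. The Runtime API also exposes an error endpoint for failed invocations; a minimal sketch of reporting a handler failure (reusing the same REQUEST_ID and AWS_LAMBDA_RUNTIME_API variables as the bootstrap loop, with a hypothetical error payload) could look like this:

```shell
# Sketch: report a failed invocation to the Runtime API error endpoint.
# REQUEST_ID and AWS_LAMBDA_RUNTIME_API are set as in the bootstrap loop above.
ERROR_PAYLOAD='{"errorMessage":"handler failed","errorType":"HandlerError"}'
curl -sS \
  "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/error" \
  -d "$ERROR_PAYLOAD" \
  -H "Lambda-Runtime-Function-Error-Type: Unhandled"
```

In a production bootstrap you would wrap the handler invocation and call this endpoint when it returns a non-zero exit code, so failures show up as errors in the Lambda console instead of hanging the invocation.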
Here’s the Dockerfile:
FROM public.ecr.aws/lambda/provided:al2023

# Handler follows the format <script name>.<function name>
ENV HANDLER="function.handler"

# Copy the bootstrap and function code into the task directory (/var/task)
COPY ./bootstrap ${LAMBDA_TASK_ROOT}/bootstrap
COPY ./function.sh ${LAMBDA_TASK_ROOT}/function.sh

# Make both scripts executable
RUN chmod +x ${LAMBDA_TASK_ROOT}/bootstrap ${LAMBDA_TASK_ROOT}/function.sh

ENTRYPOINT ["/var/task/bootstrap"]
This Dockerfile creates an image based on the AWS public Lambda base image public.ecr.aws/lambda/provided:al2023, which supplies the necessary language support and settings for executing your script.
We define the environment variable for the shell script handler, which should follow the format <script name>.<function name>. The bootstrap script and our shell script are copied to $LAMBDA_TASK_ROOT (the defined Lambda runtime environment variable pointing to the function code directory /var/task). We make our scripts executable and set the entry point to our bootstrap script, which will loop over the Runtime API waiting for new events.
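To make the HANDLER convention concrete, here is how the bootstrap splits the value function.handler into the script to source and the function to call:

```shell
HANDLER="function.handler"

# Everything before the first dot is the script name...
SCRIPT_NAME="$(echo "$HANDLER" | cut -d. -f1)"
# ...and everything after it is the function to invoke
FUNCTION_NAME="$(echo "$HANDLER" | cut -d. -f2)"

echo "source \$LAMBDA_TASK_ROOT/${SCRIPT_NAME}.sh, then call ${FUNCTION_NAME}"
```

This mirrors the familiar Python handler convention (module.function), so anyone configuring the function sees a format they already know.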
Here’s the function.sh script:
function handler() {
  EVENT_DATA=$1
  # Log the event to stderr so it appears in CloudWatch Logs
  echo "$EVENT_DATA" 1>&2
  RESPONSE="Echoing request: '$EVENT_DATA'"
  # The bootstrap captures stdout as the function response
  echo "$RESPONSE"
}
This sample shell script receives the event data as its first argument, logs it to stderr (which ends up in CloudWatch Logs), and echoes the response to stdout, where the bootstrap captures it.
Directory Structure:
runtime-tutorial
├── bootstrap
├── Dockerfile
└── function.sh
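Before deploying, you can exercise the image locally. The AWS base images ship the runtime interface emulator, which stands in for the Runtime API on your machine; the emulator path below is an assumption based on the AWS base images, so adjust it if your image differs:

```shell
# Build the image locally
docker build -t custom-lambda:latest .

# Run the bootstrap behind the runtime interface emulator so it has a
# local Runtime API to poll (emulator path assumed from the AWS base images)
docker run --rm -p 9000:8080 \
  --entrypoint /usr/local/bin/aws-lambda-rie \
  custom-lambda:latest /var/task/bootstrap

# In another terminal, invoke the function with a test event
curl -d '{"message":"hello"}' \
  "http://localhost:9000/2015-03-31/functions/function/invocations"
```

If everything is wired correctly, the curl call returns the echoed request from function.sh.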
Testing Your Custom Runtime
Congratulations on reaching this point! You now understand one of the more intricate aspects of AWS Lambda functions. Remember, practical application is key to solidifying your knowledge!
To facilitate your learning, I have created a Terraform module containing the previous scripts and the necessary AWS resources to establish a custom runtime Lambda function for you to test.
Here’s the Terraform script you need to deploy and explore your Lambda function with its custom runtime:
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}
locals {
  current_account_number = data.aws_caller_identity.current.account_id
  region                 = data.aws_region.current.name
}

module "custom-lambda" {
  source = "github.com/Ilyassxx99/lambda-custom-runtime.git"

  custom_bash_function_name        = "custom-bash-function"
  custom_bash_ecr_repository_name  = "custom-lambda"
  current_account_number           = local.current_account_number
  docker_image_custom_bash_uri     = "custom-lambda:latest"
  lambda_image_custom_bash_version = "3.12.2"
  lambda_image_arch                = "amd64"
  region                           = local.region

  tags = {
    Terraform-module = "custom-bash-runtime"
  }
}
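Once the module is applied, you can invoke the function from the CLI; a sketch, with the function name matching the module input above:

```shell
# Deploy the module
terraform init
terraform apply

# Invoke the deployed function with a sample payload and print the result
aws lambda invoke \
  --function-name custom-bash-function \
  --cli-binary-format raw-in-base64-out \
  --payload '{"message":"hello from the CLI"}' \
  response.json
cat response.json
```

You should see the echoed request in response.json and the logged event in the function's CloudWatch log group.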
Conclusion
In this inaugural post of my blog series, Advanced AWS Lambda, we've explored how to tailor your Lambda runtime to accommodate any programming language utilized for your scripts. This flexibility is particularly beneficial when working with AOT-compiled languages like Go or Rust, where execution efficiency is critical.
I hope you found this article informative. Should you have any feedback or questions regarding this topic or other technical subjects you’d like me to cover in future posts, feel free to connect with me on my LinkedIn: https://www.linkedin.com/in/ifezouaniilyass/
Thank you for your attention, and I look forward to seeing you in the next post. Cheers!
Notes:
1. AWS supports the OS-only provided.al2023 runtime, which lets you upload a .zip archive directly instead of using a Docker image. I opted for a Docker image here to address cases where additional dependencies may be required.
2. For AOT-compiled languages like Go or Rust, AWS offers libraries that implement the runtime with functions for wrapping your main process. This article focuses on deploying a Shell script; thus, you need a script of your own to handle the various Runtime API calls and process execution.
References:
1. https://docs.aws.amazon.com/lambda/latest/dg/runtimes-provided.html
2. https://docs.aws.amazon.com/lambda/latest/dg/runtimes-api.html
3. https://aws.amazon.com/fr/blogs/compute/introducing-the-amazon-linux-2023-runtime-for-aws-lambda/
4. https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-runtime