Best practices: how to deploy Lambdas using Terraform

In 2014, Amazon Web Services (AWS) introduced a new service called AWS Lambda. This service allows developers to create stand-alone functions that are triggered by external events, such as:

  • an HTTP request to an API Gateway
  • an IoT sensor reporting a reading
  • an upload to an S3 bucket
  • a CloudWatch timer event
  • etc.

At launch, AWS customers could only write lambdas in a small set of languages, but, as of 2018, they can use any runtime.

Stitching together all the pieces necessary for creating and maintaining a functioning set of lambdas is very challenging with the AWS CLI alone. This post will show you how Terraform can make your life significantly easier.

Doing it the hard way

For a simple lambda, you’d need to run the following commands.
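Roughly, and assuming the AWS CLI is already configured, the sequence looks something like the sketch below (the function name, file names, runtime, and account ID are placeholders, not values from a real deployment):

    # Create an execution role that the function can assume
    aws iam create-role \
      --role-name lambda-execution-role \
      --assume-role-policy-document file://trust-policy.json

    # Attach the basic logging policy to the role
    aws iam attach-role-policy \
      --role-name lambda-execution-role \
      --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

    # Package the code and create the function
    zip function.zip index.js
    aws lambda create-function \
      --function-name my-lambda \
      --runtime nodejs18.x \
      --handler index.handler \
      --role arn:aws:iam::123456789012:role/lambda-execution-role \
      --zip-file fileb://function.zip

    # Re-upload the code after every change
    aws lambda update-function-code \
      --function-name my-lambda \
      --zip-file fileb://function.zip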

Each step of the process needs information copied from previous steps. In addition, there will be one ‘create-function’ call per lambda, followed by the necessary connection commands. It very quickly gets unwieldy.

In addition, each time you update a function, you need to make sure that it, and any related files, are uploaded to AWS Lambda. Staying on top of the changes can get tricky.

Some Lambda details

Before we get into how Terraform can help, we need to consider some Lambda details that come into play when you go beyond basic Hello World lambdas. We’ll talk about Layers, VPC access, and execution roles. For each of these details, there are a host of additional commands to run to get things right.

Lambda layers

An added complexity to all of this is how to deal with shared dependencies among your various lambdas. You could upload the same dependencies over and over for each lambda, but this can get unwieldy, slow down deployments and raise storage costs.

Instead, you can create layers. A layer is a set of files made available to lambdas in a “well-known” place: the Lambda service extracts the contents of a layer archive into the “/opt” directory of a lambda’s runtime environment.

The Lambda service designates several specific layer locations as library dependency locations. For instance, for NodeJS, this is “/opt/nodejs”. Whenever a NodeJS lambda is run, the NODE_PATH environment variable includes “/opt/nodejs/node_modules”. 

As a result, creating a layer whose archive puts your lambdas’ dependencies under a top-level “nodejs” directory gives you a single layer of library dependencies that each lambda can share. You can find the specific directory path for your runtime in the AWS documentation.

VPC access

Lambdas normally run within a VPC owned by the Lambda service. Crucially, this means not your VPC. If your lambdas don’t require access to resources in your VPC such as EC2 or RDS instances, then you’re good to go!

However, suppose your lambda does need to access resources on your VPC. In that case, you’ll need to configure necessary access using NAT routers in your VPC or AWS PrivateLinks. We’ll cover VPC configuration in a future post.

Execution roles

When a lambda runs, it needs an “identity” that determines its authority to perform various tasks. To provide one, you assign an IAM Role as the lambda’s execution role. You can then attach policies to this role, controlling what the lambda can do. AWS provides a collection of managed policies that you can attach; you can also write your own.

Using Terraform to manage lambdas and lambda security

We will show how to create and maintain a set of lambdas that rely on services outside the Lambda service’s VPC. We’ll leverage layers for dependencies and create the appropriate roles and policies for our lambdas.

Our sample application

First, let’s consider a sample application. We’ll use one written in NodeJS. Consider the following directory structure:
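The exact layout is an assumption on our part; the rest of the examples in this post will follow this sketch:

    project/
      app/              source for the two lambdas; package.json declares dep1 and dep2
        lambda1.js
        lambda2.js
        package.json
      artifacts/        deployment artifacts (populated below)
      terraform/        terraform configuration
        main.tf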

In this example, we have two lambdas; let’s say both rely on dep1 and dep2. We will place our terraform files in a terraform directory. The artifacts directory will contain our deployment artifacts.

Getting started with Terraform

Since we’re managing AWS resources, we need to set up the AWS Provider for Terraform. In addition, we’re going to need the “archive” provider since we’ll be uploading zip files to the Lambda service.

The specific details of authenticating this provider are beyond the scope of this post, but you can learn more in the Terraform Registry. In our example, we will rely on an AWS profile called “lambdas”, which defines the access key, access secret, and region.

Here’s the first part of our terraform file:
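A minimal sketch of the provider setup might look like this (the version constraints are assumptions; pin whatever versions you actually use):

    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 5.0"
        }
        archive = {
          source  = "hashicorp/archive"
          version = "~> 2.0"
        }
      }
    }

    # Credentials and region come from the "lambdas" profile
    provider "aws" {
      profile = "lambdas"
    }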

To make this more portable, we’ll add one more line:
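Namely, the AWS provider’s region lookup:

    data "aws_region" "current" {}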

This line gives us a Terraform data resource that we can use to find the current AWS region that is being used, rather than hard-coding it in our Terraform files.

Populating the artifacts directory

We’re going to create two directories under “artifacts”: “lib/nodejs” and “lambdas”.

The “lib/nodejs” directory is where the required NodeJS dependencies go. The “lambdas” directory is where the lambda code goes.

Step one is to get a copy of your dependencies into “lib/nodejs”. The following commands, run from the “app” directory, will suffice.
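For example, something like this (it assumes the directory layout sketched earlier and the placeholder dependency names dep1 and dep2; adjust the relative paths to match your project):

    # Create the artifact directories
    mkdir -p ../artifacts/lib/nodejs ../artifacts/lambdas

    # Install the production dependencies into the layer directory
    npm install --production --prefix ../artifacts/lib/nodejs dep1 dep2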

If you need to build, compile or otherwise transform your lambda code before deployment, make sure that the final files all go into the “artifacts/lambdas” directory.

The rest of this post assumes that you have done this. You’ll need to do this each time you modify your lambda code or dependencies.

Getting the code over to AWS

The next step is to get the code into AWS so that the Lambda service can deploy your layer and your lambdas.

A combination of the Terraform “archive” provider and an AWS S3 bucket is all we need here.
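A sketch of what those resources might look like, assuming the artifact paths from earlier and a placeholder bucket name:

    # Zip up the layer contents and the lambda code
    data "archive_file" "layer" {
      type        = "zip"
      source_dir  = "../artifacts/lib"
      output_path = "../artifacts/layer.zip"
    }

    data "archive_file" "lambdas" {
      type        = "zip"
      source_dir  = "../artifacts/lambdas"
      output_path = "../artifacts/lambdas.zip"
    }

    # A bucket to hold the deployment artifacts
    resource "aws_s3_bucket" "artifacts" {
      bucket = "my-lambda-artifacts-bucket"   # bucket names are global; pick your own
    }

    resource "aws_s3_object" "layer" {
      bucket      = aws_s3_bucket.artifacts.id
      key         = "layer.zip"
      source      = data.archive_file.layer.output_path
      source_hash = data.archive_file.layer.output_base64sha256
    }

    resource "aws_s3_object" "lambdas" {
      bucket      = aws_s3_bucket.artifacts.id
      key         = "lambdas.zip"
      source      = data.archive_file.lambdas.output_path
      source_hash = data.archive_file.lambdas.output_base64sha256
    }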

With those resources, we’ve created an S3 bucket and then created two S3 objects. One contains the code for our lambda layer. The other contains the lambdas themselves.

What’s great about this approach is the source_hash property in the aws_s3_object resources. If any files that make up the archive_file change, this hash changes and tells Terraform that it needs to upload the new archive file.

Creating a layer

The AWS provider treats a layer version as the resource of interest (rather than simply a layer). With your archive file in place in your S3 bucket, you can create a layer with this one resource.
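For example (the layer name and runtime are illustrative choices):

    resource "aws_lambda_layer_version" "dependencies" {
      layer_name          = "nodejs-dependencies"
      s3_bucket           = aws_s3_object.layer.bucket
      s3_key              = aws_s3_object.layer.key
      compatible_runtimes = ["nodejs18.x"]

      # Publish a new layer version whenever the archived files change
      source_code_hash = data.archive_file.layer.output_base64sha256
    }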

As with the S3 bucket objects, we have another hash property to ensure that the layer is updated whenever the code changes.

Creating Lambda permissions

Before we create our lambdas, we need to create the appropriate permissions to let it do what it needs to do. It can be useful to think of a lambda function as an entity, much like a human user, that wants to interact with AWS services and, as such, needs the necessary permissions to do so. Permissions (in the form of policies) are attached to a User or a Role. Since the Lambda is not a User, we create a Role that the lambda can then assume in order to execute. This Role is known as a Service Role.

First, we create the role.
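A sketch of that role:

    resource "aws_iam_role" "lambda_execution_role" {
      name = "lambda-execution-role"

      # Trust policy: allow the Lambda service to assume this role
      assume_role_policy = jsonencode({
        Version = "2012-10-17"
        Statement = [{
          Effect = "Allow"
          Action = "sts:AssumeRole"
          Principal = {
            Service = "lambda.amazonaws.com"
          }
        }]
      })
    }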

This says that we’re creating a Role called “lambda-execution-role”. Resources belonging to the “lambda.amazonaws.com” service, aka the Lambda functions, can assume this Role as needed.

Next, we need to attach policies to this Role to provide our lambda functions with the permissions they need. For most simple lambdas, the following is enough.
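Attaching the AWS-managed AWSLambdaBasicExecutionRole policy, for instance:

    resource "aws_iam_role_policy_attachment" "basic_execution" {
      role       = aws_iam_role.lambda_execution_role.name
      policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
    }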

This policy contains the permissions necessary for the Lambda to log to CloudWatch.

If you want to run your Lambda in your own VPC, then you’ll need to add this policy instead.
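That is, the AWS-managed AWSLambdaVPCAccessExecutionRole policy:

    resource "aws_iam_role_policy_attachment" "vpc_execution" {
      role       = aws_iam_role.lambda_execution_role.name
      policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
    }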

This contains the permissions necessary for the Lambda to log, as before, but it also includes the permissions necessary to create network interfaces in your VPC so that the Lambda function can connect.

If your lambda is interacting with other services, then you’ll need to create and attach relevant policies. For instance, let’s say your lambda was going to send emails via SES. You would create and attach a policy like this.
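Something along these lines, with the verified SES identity ARN, region, and account ID as placeholders:

    resource "aws_iam_policy" "send_email" {
      name = "lambda-send-email"

      # Allow sending email only from one verified identity
      policy = jsonencode({
        Version = "2012-10-17"
        Statement = [{
          Effect   = "Allow"
          Action   = ["ses:SendEmail", "ses:SendRawEmail"]
          Resource = "arn:aws:ses:us-east-1:123456789012:identity/example.com"
        }]
      })
    }

    resource "aws_iam_role_policy_attachment" "send_email" {
      role       = aws_iam_role.lambda_execution_role.name
      policy_arn = aws_iam_policy.send_email.arn
    }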

For each lambda’s service role, you should apply the principle of least privilege. This principle says that you should:

  • Allow the minimum number of actions
  • On the minimum number of resources

This means choosing only the actions that the lambda needs on the resources it needs to interact with, and resisting the urge to put asterisks everywhere. It can be tedious to get it exactly right, but it’s the most secure approach.

Creating the function

At last, we’re ready to create our lambda function: the code is in place in an S3 bucket, a layer containing dependencies has been created, and we have a service role ready to go. We bring these all together in a single resource.
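A sketch for the first lambda (the handler and runtime are assumptions based on the NodeJS example):

    resource "aws_lambda_function" "lambda1" {
      function_name = "lambda1"
      s3_bucket     = aws_s3_object.lambdas.bucket
      s3_key        = aws_s3_object.lambdas.key
      handler       = "lambda1.handler"
      runtime       = "nodejs18.x"
      role          = aws_iam_role.lambda_execution_role.arn
      layers        = [aws_lambda_layer_version.dependencies.arn]

      # Redeploy the function whenever the lambda code changes
      source_code_hash = data.archive_file.lambdas.output_base64sha256
    }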

Once again we see our old friend, the source hash, and once again it’s there to ensure that the lambda is updated whenever a file changes.

Once this has all been deployed, your function will be ready to invoke.

Securing secrets with your lambdas

Some lambdas will need access to secrets to do their job. These could be passwords for databases or client IDs for OAuth clients or something else.

The worst practice here would be to hard-code the secrets in the lambda itself. This would expose the secrets both in your code repository and the lambda.

Using environment variables for injecting secrets into the lambda takes them out of your code repo, but still exposes the secrets in the lambda console.

The most secure method is to use the AWS Secrets Manager. It does require a little more work in the lambda itself, but it keeps secrets hidden. The AWS Parameter Store is also an option, but we’ll focus on the Secrets Manager here since it’s specifically designed for handling secure secrets.

To keep the secrets out of all of your code, we also need to keep them out of your Terraform files. To do this, we’re going to use Terraform variables.
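For example, for a database password (the variable and secret names are placeholders; supply the value via a *.tfvars file or the TF_VAR_db_password environment variable, never from source control):

    variable "db_password" {
      type      = string
      sensitive = true
    }

    resource "aws_secretsmanager_secret" "db_password" {
      name = "lambda/db-password"
    }

    resource "aws_secretsmanager_secret_version" "db_password" {
      secret_id     = aws_secretsmanager_secret.db_password.id
      secret_string = var.db_password
    }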

This creates the secret in the Secrets Manager. Next you need to add a policy to the lambda service role so that it can read this value. We’re abiding by the principle of least privilege here.
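For instance, granting read access to only this one secret:

    resource "aws_iam_policy" "read_db_password" {
      name = "lambda-read-db-password"

      policy = jsonencode({
        Version = "2012-10-17"
        Statement = [{
          Effect   = "Allow"
          Action   = ["secretsmanager:GetSecretValue"]
          Resource = aws_secretsmanager_secret.db_password.arn
        }]
      })
    }

    resource "aws_iam_role_policy_attachment" "read_db_password" {
      role       = aws_iam_role.lambda_execution_role.name
      policy_arn = aws_iam_policy.read_db_password.arn
    }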

Finally, we want to pass the ARN of the secret to the function, so that it can access it. The simplest approach here is to use environment variables.
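Inside the aws_lambda_function resource from earlier, that looks something like this (the environment variable name is an arbitrary choice):

    resource "aws_lambda_function" "lambda1" {
      # ... configuration as shown earlier ...

      environment {
        variables = {
          DB_PASSWORD_SECRET_ARN = aws_secretsmanager_secret.db_password.arn
        }
      }
    }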

Accessing this secret requires the use of the AWS SDK in your language of choice. If you access the secret via the AWS console, you’ll see some sample code that you can use.

The overall security of the Lambda Service

AWS follows a Shared Responsibility Model (SRM) when it comes to security. They are responsible for the security of the cloud, ensuring that the infrastructure and services that they provide are well secured. Customers are responsible for security in the cloud, which means managing access controls and keeping secrets safe.

When compared to EC2 instances, lambdas have some significant inherent advantages when it comes to security. The Lambda VPC environment is maintained by AWS and their security experts. There are no servers for you to patch or secure against unauthorized access. In this way, you’re delegating some of your security burden to AWS’s side of the SRM.

You are still responsible for keeping your secrets safe and ensuring that your lambdas only have the appropriate permissions. Even with Terraform to make the management simpler, it can be helpful to have an ally in the complex world of security. Try one of oak9’s security tests to see if they can make things even simpler for you.
