Serverless Backends With AWS Cloud: Email Lambda and DynamoDB

Rob Sherling - Apr 24 '17 - Dev Community

This is part of a series of articles that comprise a tutorial of an end-to-end production AWS serverless system. If you're joining partway, please read this series from the intro article, available in its original formatting on my blog J-bytes.

This is a very, very long piece.

Email, Lambda, Mailgun, Dynamo, S3

There is going to be a lot coming at you all at once in this post. This is a large project with a lot of moving parts, and while I actually built this incrementally, we'll just be building pretty much everything final-stage here (with a few exceptions). Take your time and read everything. We'll be touching S3, Lambda, IAM, KMS, and DynamoDB services in this post.

Email DynamoDB Tables

We're going to just go right into things and create a DynamoDB table to store all our registered info.

Navigate to DynamoDB in the AWS control panel. Click create table. You should see this screen:

dynamo table create screen

The primary key is simply an attribute that every item in the table is guaranteed to have. There are other things to know about DynamoDB for different projects, but that's basically it for this one.

Let's name our table production_emails and make the primary key email.

We're going to leave our Use Default Settings checked every time. This will limit our table to 5 reads and 5 writes per second, and keep things nice and cheap. For production, your needs will obviously vary depending on your anticipated workload.

We do not require a sort key, so simply click create.

Create another table with table name staging_emails and a primary key of email. Click create again.

A NOTE ON STAGES: We would normally make separate versions of the tables for testing, staging, production, and dev but we'll only make two in this tutorial for brevity.

Give it a few minutes to make the tables.
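To make the primary key concrete, here's a minimal sketch of what writing an item to one of these tables looks like with the AWS SDK for Node (the email_iv attribute is a preview of what our lambda will eventually store; everything except the email key is illustrative):

// Sketch: putting one item into the staging table (AWS SDK v2 for Node).
// Every item must have the primary key attribute (email); any other
// attributes, like email_iv here, are optional and per-item.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-west-1' });

docClient.put({
  TableName: 'staging_emails',
  Item: {
    email: 'encrypted-email-goes-here',
    email_iv: 'hex-encoded-iv'
  }
}, function(err) {
  if (err) console.error(err);
});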

IAM

Next, let's make the role that our lambda will use.

Open IAM

  • Click Roles
  • Click Create new role
  • Name this role what you want (master_lambda is good)
  • Role-Type is AWS Lambda
  • Attach policy AmazonDynamoDBFullAccess
  • Attach policy AmazonS3ReadOnlyAccess
  • Attach policy CloudWatchFullAccess
    • Note: Even with this policy, it takes a long time sometimes to actually be able to see CloudWatch logs. See the end of this page.
  • Save
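For reference, picking AWS Lambda as the role type just attaches a trust policy like the one below, which is what lets the Lambda service assume the role:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "lambda.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}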

Lambda

Open up Lambda in your AWS control panel, and hit get started now.

I hit some major snags with Lambda when I first did this project. A word of advice: do not choose the default Dynamo templates. They are a trap. As of the time I did this project, they used an old way of connecting to Dynamo that isn't supported anymore, and I lost nearly a full day troubleshooting what I thought were problems with my code. The old templates try to force you to specify the type of data you are saving, and it all falls apart for some reason.

Select Blank Function.
You will now see Configure triggers on this page to link your API. Do not do that. It is also a trap. It will create a new blank API resource for you, and we don't need that at the moment. Leave it blank and just hit next.

We're going to go whole hog, here. In reality, I made the basic email function first, then encryption, then I finished out with the ability to send mails. We, however, are going to benefit from my knowledge and just do it right the first time so we can write this one-and-done.

The following steps are slightly different depending on which mail provider you chose to send email with. I originally did the project with Sendgrid, but Mailgun is much easier to demo, and they're almost identical from an implementation standpoint. A little googling will serve you well here.

We'll need to upload a few files: index.js for sending the email, email.html for contents we want to send, and loader.js to load all our environment variables. We also need to upload the library that we'll use for Mailgun to Lambda. This was actually surprisingly easy, as Mailgun has documentation that is great for the job we need (docs).

In order to upload the files, minimize your browser window and do the following on your local system.

The Lambda Index File

For the index file itself, create a folder called email_lambda in version control. Inside that, create a file named index.js and paste the following code:

index.js

On line 8, change us-west-1 to your actual DynamoDB region.

Read the comments if what is happening in the file is unclear.

The reason all code paths eventually throw an error is that throwing is the only way to redirect to a website from Lambda through API Gateway. This will be explained in more detail when we make the API.
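If you just want the shape of that trick now, it boils down to something like this sketch (the exact error string is whatever you plan to regex-match in API Gateway later; ours shows up in testing as Email.MovedPermanently: Redirecting):

// Sketch: the redirect-by-error pattern. API Gateway can only map Lambda
// *errors* to non-200 responses, so even the success path ends by "failing"
// with a marker message that API Gateway will later turn into a redirect.
exports.handler = function(event, context, callback) {
  // ... save the email, send the mail ...
  callback(new Error('Email.MovedPermanently: Redirecting'));
};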

Loading our configs

This requires S3, which we will set up afterward. I will also explain later why we are using an S3 loader for environment variables instead of the recently added environment variables feature at the bottom of each lambda in the AWS console. For now, trust.

In reality, you would create this file outside of your individual lambda folders and hard-link it into each lambda folder so that they all share the same version of the file. If you don't know how to do that, google it or just put a copy directly inside each folder. Anyway, create a file next to index.js called loader.js and put the following code inside:

loader.js

Edit the file as follows: replace CONFIG BUCKET NAME on line 61 with your actual S3 bucket name, which we'll make later. I strongly suggest something like <project-name>-dev-config. Please check this for the original guide; I made some modifications to make it more usable for our project.
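If you just want the gist of the loader, it amounts to this sketch (the bucket name and key path here are illustrative; where the linked file differs, it wins):

// Sketch: load the stage's JSON config from S3 and copy it into process.env.
// S3 transparently decrypts SSE-KMS objects for callers the key trusts,
// so the lambda reads the object like any other.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

function loadConfig(stage, callback) {
  s3.getObject({
    Bucket: '<project-name>-dev-config',
    Key: stage + '/env-config.json'
  }, function(err, data) {
    if (err) return callback(err);
    const config = JSON.parse(data.Body.toString());
    Object.keys(config).forEach(function(key) {
      process.env[key] = config[key];
    });
    callback(null);
  });
}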

The Email Template

Simply put a file called email.html in the folder as well, with whatever you want. Suggested contents (RECIPIENT will be replaced with the correct email by our code in the index.js file):

email.html
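If you're stuck for contents, something as simple as this works (RECIPIENT is just a literal token; any marker your index.js searches and replaces will do):

<html>
  <body>
    <p>Hi RECIPIENT,</p>
    <p>Thanks for registering! We'll be in touch.</p>
  </body>
</html>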

Installing the Mailgun Library

Note: The library that Mailgun uses is huge, and because it will take you over a designated size limit, you will not be able to use the inline code editor in AWS Lambda after you perform the next few steps. This can be a bit frustrating when you only need to make a small edit, because you have to re-zip the files each time, but you get used to it quickly (and you can make an alias to do it for you to save time).

Time to install the relevant mailer node package.

Run the following commands from inside the email_lambda folder. Copy-pasting always messes up node installs for me, so double-check that the double hyphen in --prefix copied correctly.

npm install --prefix=./ nodemailer
npm install --prefix=./ nodemailer-mailgun-transport
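Once those are installed, the sending half of index.js boils down to something like this sketch (the process.env names match the config file we'll build below; the to/subject values are obviously placeholders):

// Sketch: sending through Mailgun with nodemailer + nodemailer-mailgun-transport.
const nodemailer = require('nodemailer');
const mg = require('nodemailer-mailgun-transport');

const transporter = nodemailer.createTransport(mg({
  auth: {
    api_key: process.env.mail_api_key,
    domain: process.env.mailgun_domain_name
  }
}));

transporter.sendMail({
  from: process.env.from_email,
  to: 'user@example.com',
  subject: 'Thanks for signing up!',
  html: '<p>Hi user@example.com</p>' // really email.html with RECIPIENT swapped in
}, function(err, info) {
  if (err) console.error(err);
});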

tree should spit out:

.
|-- email.html
|-- etc
|-- index.js
|-- loader.js
`-- node_modules (with a ton of files in here, omitted for brevity)

Go to the Mailgun account that you made during installs and get an API key for your project (and verify a domain if you can, highly recommended but cumbersome). Set these aside for the environment variables portion.

Zip the files, not the folder. On Mac, if you just zip the folder itself, the archive will contain an extra folder level, so your files will end up nested too deep and won't work. The zip should open directly to node_modules/, index.js, loader.js, email.html, and etc/. If you use the unix zip tool, make sure to specify the -r option.
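For example, from inside email_lambda (the archive name is up to you):

zip -r ../email_service.zip index.js loader.js email.html etc node_modules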

On the Lambda console screen that we had open, change Name to email_service (or whatever you'd like) and Code entry type to Upload a zip file. Then click upload and select the zip we just made. Leave the environment variables empty; we'll talk about those in a minute.

Handler tells Lambda which file and which function to execute first when called. The format is filename.function, so the default index.handler points to the handler function exported from index.js. Leave it as is.

For Role, choose existing role, and choose the lambda role we made earlier (master_lambda if you followed the guide).

Leave memory at 128 MB, and set the timeout to something healthy, like 15 seconds. Remember, we get charged for the time we actually use, not the maximum that we set. DynamoDB also very occasionally has some weird delays when reading or writing, so you want this to be pretty long just in case.

Leave the rest as default, and click Next, then Create Function.

Environment Variables

If you don't know what AES is or what an IV is, read roughly a page on each. Basically, AES is a symmetric encryption standard, and an IV (initialization vector) is a unique-per-item piece of data that ensures identical plaintexts don't encrypt to identical ciphertexts, making each stored item harder to crack.
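For a concrete picture, encrypting an email with a fresh IV looks roughly like this in Node (a sketch assuming AES-256-CBC; the cipher mode and encodings in the actual index.js may differ):

// Sketch: AES-256-CBC with a random, per-item IV via Node's crypto module.
// The IV is not secret - it's stored next to the ciphertext (email_iv) so we
// can decrypt later, but identical emails still produce different ciphertexts.
const crypto = require('crypto');

function encryptEmail(email, aesKey) { // aesKey must be 32 bytes for AES-256
  const iv = crypto.randomBytes(16);
  const cipher = crypto.createCipheriv('aes-256-cbc', aesKey, iv);
  const encrypted = cipher.update(email, 'utf8', 'hex') + cipher.final('hex');
  return { email: encrypted, email_iv: iv.toString('hex') };
}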

On the AWS screen for your lambda under the Code tab (should be the screen you see after uploading the zip), you are going to see a space for Environment Variables. We are not going to use these.

Normally, Lambda will read these in with process.env.KEY_NAME. We can't really use these because while they are fantastic for a single lambda, they don't really work well for shared information like AES keys or email addresses across multiple lambdas, or for variables that differ by stage (production keys need to be different from all other stages). If you forget to change a single key on a single version of a lambda, it can break in really terrible and subtle ways.

So what we are instead going to do is load all of our environment variables from a JSON file that we will make for each stage and encrypt with KMS so that only our admins and our lambda can read it. The encryption happens when we store it in an S3 dev bucket.

KMS

First, let's make a key. Head on over to KMS in the console (IAM -> Encryption Keys).

  • Click get started/create key.
  • For the alias, you can use anything (S3-encrypt-key) and click next step.
  • Under Key Administrators, choose who you want to be able to rotate/edit/delete the key.
  • Under Key Usage Permissions, pick the master_lambda role that we made earlier, as well as any console users/roles that you want to be able to access the file.
  • Click Next Step, then Finish

S3

We need to make a JSON config object, then we need to upload it to our staging and production folders. We will encrypt it with the KMS key we just made, because encrypting your sensitive data at rest is just good practice.

First, go to S3 and make a bucket (this should have the same name that you set in your loader.js file, ideally <project-name>-dev-config).

s3 bucket creation

Inside that bucket, make two folders, staging and production.

S3 bucket folders

At this point, we're ready to upload our environment configs. Some of these in the example file I'll link we actually don't need yet, but it's no problem to leave in dummy data and update when you need it. Please keep backups of this data outside of S3 in case an angry admin deletes it and you lose your AES keys.

Remember: never commit environment variable config files to the repository. That would completely defeat the point of environment variables. This is only in the repo as an example.

Download the following file and configure it according to the instructions below.

env-config.json

Explanations:

  • site_callback

This is where you will put the page that you want your user to be redirected to once they've registered their email or twitter. For example: http://robsherling.com/jbytes.

  • email/twitter/session_table_name

The table name that you want to store data in / read data from. Examples are staging_emails and production_twitter.

  • aes_password

This is the password that you will use for your encryption key.
AES is very, very picky about keys: they must be a specific byte length (16, 24, or 32 bytes, for AES-128/192/256 respectively). At least for testing, you can get keys from http://randomkeygen.com

Just go to CodeIgniter Encryption Keys, grab one you like, and save it somewhere in addition to your config file, because if you lose it you lose access to all the information it was guarding.

  • from_email

The address the email should look like it was sent from.

  • mail_api_key/mailgun_domain_name

Use the API key and domain name you have in your Mailgun account settings.

Now that you know what the variables do, fill in the site_callback, email_table_name, aes_password, mail_api_key, mailgun_domain_name, and from_email fields.
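Filled in with dummy values, the staging version would look something like this (field names per the explanations above; your real file may carry extra fields for the later lambdas):

{
  "site_callback": "http://example.com/thanks",
  "email_table_name": "staging_emails",
  "aes_password": "REPLACE-WITH-A-REAL-32-BYTE-KEY",
  "from_email": "staging@your-domain.com",
  "mail_api_key": "key-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "mailgun_domain_name": "mg.your-domain.com"
}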

You will have to upload this file twice: once to the production folder and once to the staging folder. After you select the env-config file for upload, click the Set Details button in the lower-right corner. Check Use Server Side Encryption, then Use an AWS Key Management Service Master Key, then choose our created key from the dropdown. Upload (don't worry about permissions; they work fine as is). If you do not see your key, check that the key region and bucket region are the same.

Please change your AES keys and DynamoDB tables for staging and production (staging_emails vs production_emails); I also would change the env-config.json in the staging folder to use a different from_email address so you know staging is being called correctly.

Lambda Alias

Finally, let's get our lambda aliases to correspond to our stages, edit our test action and get this bad-boy fired up!

First, let's set the test action in the lambda console. Click our email_service lambda that we made, click the Actions drop-down at the top of the page near the blue Test button, and then click Configure test event. Delete any code in the event and copy and paste the following, changing name and domain to the appropriate values for your receiving test email:

{ "body-json": "email=<user>%40<domain.com>"}

Please note that every time we manually test this, the above email is going to get an email, so be comfortable with that.

  • Click Save, not Save and test

  • Click Actions, then publish new version. For the description, put tested and ready! or whatever else you want there. Click Publish.

  • Click Actions again, then click Create alias. For the alias name, type production. For the version, select 1 (the version we just made) and hit create. Now, repeat this again, but for the name type staging and for the version select $LATEST. This is how you can point aliases to different versions of your code.

Now, on the left hand side, click qualifiers, then staging under alias. Then hit the blue Test button.

1) It SHOULD say execution failed, with error message Email.MovedPermanently: Redirecting. This is because later we will catch that in AWS API Gateway and use it to redirect.

2) If you check the DynamoDB staging_emails table that we created and click on the Items tab, there should be an item with email and email_iv attributes, and both should be a bunch of gibberish. This is good.

3) Your email can take a few seconds to arrive. If it hasn't arrived, check your CloudWatch logs for errors and your Sendgrid/Mailgun logs.

4) If you have errors, after you try to fix them and re-upload any files, make sure to select staging again from the alias list before running test. We haven't made a $LATEST dev-config folder and json, so if you don't correctly set the right alias it won't load any environment configs.

CloudWatch Logs

If your CloudWatch logs aren't showing up when you try to find them, good luck. The Amazon Support team's general take on this seems to be: If the roles don't work, mess with them until they do. Seriously, I tried to put the CloudWatchFullAccess policy on my master_lambda role and it did nothing. I fiddled with roles for hours and it did nothing. So, I just put the CloudWatchFullAccess back on and walked away. I came back ~ five days later, and without changing anything it just magically started working.

This marks the completion of our first lambda. Next, we'll hook it up to an API so we can test it with Postman.
