There are many different compute services within Amazon Web Services. There is the serverless route with AWS Lambda, where your code runs only when it is invoked. Elastic Compute Cloud (EC2) allows you to run any workload inside virtual machines that you pay for by the hour.
But today, many folks are creating containerized workloads using Docker. So what options do you have when it comes to running your containers in AWS? In this post, we are going to create a sample Docker image. We are then going to create the AWS infrastructure to host that image and run it via AWS Elastic Container Service (ECS). From there, we will explore how you can deploy new versions of your image right from your terminal.
Let's get started by creating a sample Docker container that we can deploy.
Sample Docker Image
For our purpose, let's create a sample Node Express app that contains a single endpoint. We are going to get things started by first setting up our project. Everything in the yarn init is left as the default except for the entry point, where we enter server.js.
$ mkdir sample-express-app
$ cd sample-express-app
$ yarn init
question name (sample-express-app):
question version (1.0.0):
question description:
question entry point (index.js): server.js
question repository url:
question author: kylegalbraith
question license (MIT):
question private:
Once the project is set up via yarn, let's go ahead and install express into the project.
$ yarn add express
info No lockfile found.
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Saved lockfile.
Done in 0.79s.
Awesome! Now we can go ahead and configure our endpoint by adding the following to the server.js file.
const express = require('express');
const PORT = 8080;
const HOST = '0.0.0.0';
const api = express();
api.get('/', (req, res) => {
res.send('Sample Endpoint\n');
});
api.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
Now that we have our sample express API, let's go ahead and set up our Dockerfile. Here is what our Dockerfile is going to end up looking like.
FROM node:10
WORKDIR /src/api
COPY package.json ./
COPY yarn.lock ./
RUN yarn install
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
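One caveat worth calling out: the COPY . . step copies the whole build context into the image, including any locally installed node_modules. A small .dockerignore (not shown in the original setup, but a common companion to a Dockerfile like this) keeps those out, so the dependencies installed by yarn install inside the image don't get overwritten:

```
node_modules
npm-debug.log
.git
```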
To sanity-check ourselves, let's build this image and launch a container with it.
$ docker build -t sample-express-app .
$ docker run -p 8080:8080 -d sample-express-app
5f3eaa088b35d895411c8d60f684aeba5d68d85f3bc07172c672542fe6b95537
$ curl localhost:8080
Sample Endpoint
Great! We see that when we run the container on port 8080, we can call our endpoint via curl and get back the response Sample Endpoint.
Now that we have a Docker image to build and deploy, let's get set up with a container registry on AWS that we can push our images to.
Publishing Docker images to Elastic Container Repository (ECR)
To continue our theme of learning AWS by using it, we are going to configure an ECR repository in AWS. We will use this repository to host our Docker image. Before we can do that though, you are going to need to have the AWS CLI installed and configured. We are also going to be using the AWS CDK to represent our infrastructure as code, so get that set up as well.
Got the AWS CLI and CDK installed and configured? Awesome, let's initialize our CDK project to start representing our infrastructure in TypeScript. Create a new directory called infrastructure at the root of your repo. Then initialize a new project via CDK in a terminal.
$ cdk init --language typescript
Applying project template app for typescript
Executing npm install...
# Useful commands
* `npm run build` compile typescript to js
* `npm run watch` watch for changes and compile
* `npm run test` perform the jest unit tests
* `cdk deploy` deploy this stack to your default AWS account/region
* `cdk diff` compare deployed stack with current state
* `cdk synth` emits the synthesized CloudFormation template
We now have a CDK project stubbed out inside of our infrastructure folder. The key file we are going to be adding our infrastructure to is lib/infrastructure-stack.ts. Right now we don't have much in here, so let's change that.
To get things started let's add our ECR repository resource. First, we need to add the ECR module to our CDK project.
$ npm install @aws-cdk/aws-ecr
Now we can provision our ECR repository by updating our infrastructure-stack.ts to look like this.
import cdk = require('@aws-cdk/core');
import ecr = require('@aws-cdk/aws-ecr');
export class InfrastructureStack extends cdk.Stack {
constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
// ECR repository
const repository = new ecr.Repository(this, 'sample-express-app', {
repositoryName: 'sample-express-app'
});
}
}
To deploy our CDK infrastructure, we need to run a deploy command from our command line.
$ cdk deploy
InfrastructureStack: deploying...
InfrastructureStack: creating CloudFormation changeset...
0/3 | 15:51:50 | CREATE_IN_PROGRESS | AWS::CloudFormation::Stack | InfrastructureStack User Initiated
0/3 | 15:51:53 | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata
0/3 | 15:51:53 | CREATE_IN_PROGRESS | AWS::ECR::Repository | sample-express-app (sampleexpressapp99ADE4E3)
0/3 | 15:51:54 | CREATE_IN_PROGRESS | AWS::ECR::Repository | sample-express-app (sampleexpressapp99ADE4E3) Resource creation Initiated
1/3 | 15:51:54 | CREATE_COMPLETE | AWS::ECR::Repository | sample-express-app (sampleexpressapp99ADE4E3)
1/3 | 15:51:55 | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata Resource creation Initiated
2/3 | 15:51:55 | CREATE_COMPLETE | AWS::CDK::Metadata | CDKMetadata
3/3 | 15:51:57 | CREATE_COMPLETE | AWS::CloudFormation::Stack | InfrastructureStack
We now have an ECR repository in our AWS account to host our new Docker image. With our repository created, we need to log in to it before we can push up our new image. To do that, we run the command below wrapped in backticks so that the docker login command returned by get-login gets invoked.
$ `aws ecr get-login --no-include-email`
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded
$ aws ecr describe-repositories
{
"repositories": [
{
"registryId": "<aws-id>",
"repositoryName": "sample-express-app",
"repositoryArn": "arn:aws:ecr:us-west-2:<aws-id>:repository/infra-sampl-1ewaboppskux6",
"createdAt": 1571007114.0,
"repositoryUri": "<aws-id>.dkr.ecr.us-west-2.amazonaws.com/sample-express-app"
}
]
}
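As the warning above notes, passing the password on the command line is insecure. If you are on AWS CLI v2, get-login has been removed entirely; the equivalent login pipes the password straight into docker login (the region and <aws-id> below are placeholders for your own values):

```
$ aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin <aws-id>.dkr.ecr.us-west-2.amazonaws.com
```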
Now that we are all logged in, we can tag and push our image up to our new ECR repository. Grab the repositoryUri of your repository from the describe-repositories output above. We are going to use that in the tag and push commands below.
$ docker tag sample-express-app <aws-id>.dkr.ecr.us-west-2.amazonaws.com/sample-express-app
$ docker push <aws-id>.dkr.ecr.us-west-2.amazonaws.com/sample-express-app
0574222c01c4: Pushed
28c9fa7f105e: Pushed
8942b63b65a1: Pushed
b91230b492da: Pushed
6ad739b471d2: Pushed
954f92adc866: Pushed
adca1e83b51a: Pushed
73982c948de0: Pushed
84d0c4b192e8: Pushed
a637c551a0da: Pushed
2c8d31157b81: Pushed
7b76d801397d: Pushed
f32868cde90b: Pushed
0db06dff9d9a: Pushed
Great, our Docker image is in our ECR repository. Now we can move on to deploying it and running it via Elastic Container Service (ECS).
Setting up our ECS infrastructure
Before we can run a Docker container in our AWS account using our new image we need to create the infrastructure it will run on. For this blog post, we are going to focus on running our container on the Elastic Container Service (ECS) provided by AWS.
ECS is a container orchestration service provided by AWS. It removes the need to manage the infrastructure, scheduling and scaling for containerized workloads.
ECS consists of three core terms that are important to keep in mind when thinking of the service.
- Cluster: This is the logical group of underlying EC2 instances that our containers run on. With the EC2 launch type we use here, those instances live in our account, but ECS handles placing containers onto them on our behalf.
- Service: A long-running process like a web server or database runs as a service within our cluster. We can define how many containers should be running for this service.
- Task Definition: This is the definition of our container that can run on the cluster individually or via a service. When a task definition is running in our cluster we often refer to it as a task, so a running container === a task in ECS.
The order of these terms is relevant as well. The cluster is the lowest level: the EC2 instances our containers run on. A service runs on top of the cluster, and a task is an instance of our container running on one of those instances.
With that terminology in our memory banks, let's jump into actually provisioning our infrastructure. We are going to update our infrastructure-stack.ts to have the resources we need for our ECS cluster. First, we need to add a few other modules.
$ npm install @aws-cdk/aws-ecs
$ npm install @aws-cdk/aws-ec2
$ npm install @aws-cdk/aws-ecs-patterns
Now we can add the resources we need for our cluster that will run our new Docker image. Our infrastructure-stack.ts should now look like this.
import cdk = require('@aws-cdk/core');
import ecr = require('@aws-cdk/aws-ecr');
import ecs = require('@aws-cdk/aws-ecs');
import ec2 = require('@aws-cdk/aws-ec2');
import ecsPatterns = require('@aws-cdk/aws-ecs-patterns');
export class InfrastructureStack extends cdk.Stack {
constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
// ECR repository
const repository = new ecr.Repository(this, 'sample-express-app', {
repositoryName: 'sample-express-app'
});
// ECS cluster/resources
const cluster = new ecs.Cluster(this, 'app-cluster', {
clusterName: 'app-cluster'
});
cluster.addCapacity('app-scaling-group', {
instanceType: new ec2.InstanceType("t2.micro"),
desiredCapacity: 1
});
const loadBalancedService = new ecsPatterns.ApplicationLoadBalancedEc2Service(this, 'app-service', {
cluster,
memoryLimitMiB: 512,
cpu: 5,
desiredCount: 1,
serviceName: 'sample-express-app',
taskImageOptions: {
image: ecs.ContainerImage.fromEcrRepository(repository),
containerPort: 8080
},
publicLoadBalancer: true
});
}
}
We first define our ECS cluster, app-cluster. Next, we add capacity to the cluster via app-scaling-group, an auto-scaling group of t2.micro instances that our containers can run on. We then use the load-balanced service pattern provided by the ecsPatterns module.
Notice that we point our ECS service at the image hosted in our ECR repository. We also set the containerPort to 8080 as that is the port our express app is running on inside of the container.
This pattern creates a public-facing load balancer that we will be able to call from curl or our web browser. This load balancer will forward calls to our container on port 8080 running inside of our ECS service.
We deploy these changes via another deploy command. This time we are going to specify --require-approval never so that we don't get prompted about the IAM changes.
$ cdk deploy --require-approval never
InfrastructureStack: deploying...
InfrastructureStack: creating CloudFormation changeset...
0/46 | 16:41:46 | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | InfrastructureStack User Initiated
0/46 | 16:42:07 | CREATE_IN_PROGRESS | AWS::EC2::EIP | app-cluster/Vpc/PublicSubnet2/EIP (appclusterVpcPublicSubnet2EIPD0A381A3)
0/46 | 16:42:07 | CREATE_IN_PROGRESS | AWS::ECS::Cluster | app-cluster (appclusterD09F8E40)
0/46 | 16:42:07 | CREATE_IN_PROGRESS | AWS::EC2::InternetGateway | app-cluster/Vpc/IGW (appclusterVpcIGW17A11835)
0/46 | 16:42:07 | CREATE_IN_PROGRESS | AWS::EC2::EIP | app-cluster/Vpc/PublicSubnet1/EIP (appclusterVpcPublicSubnet1EIP791F54CD)
...
....
.....
44/46 | 16:46:04 | CREATE_COMPLETE | AWS::Lambda::Permission | app-cluster/DefaultAutoScalingGroup/DrainECSHook/Function/AllowInvoke:InfrastructureStackappclusterDefaultAutoScalingGroupLifecycleHookDrainHookTopic2C88B6D3 (appclusterDefaultAutoScalingGroupDrainECSHookFunctionAllowInvokeInfrastructureStackappclusterDefaultAutoScalingGroupLifecycleHookDrainHookTopic2C88B6D3036C2EFB)
45/46 | 16:46:33 | CREATE_COMPLETE | AWS::ECS::Service | app-service/Service (appserviceServiceA5AB3AA1)
45/46 | 16:46:37 | UPDATE_COMPLETE_CLEA | AWS::CloudFormation::Stack | InfrastructureStack
46/46 | 16:46:38 | UPDATE_COMPLETE | AWS::CloudFormation::Stack | InfrastructureStack
Outputs:
InfrastructureStack.appserviceLoadBalancerDNS0A615BF5 = Infra-appse-187228PB273DW-1700265048.us-west-2.elb.amazonaws.com
InfrastructureStack.appserviceServiceURL90EC0456 = http://Infra-appse-187228PB273DW-1700265048.us-west-2.elb.amazonaws.com
By using the AWS CDK module for Elastic Container Service, we created all the resources needed for our new cluster. As you can see from the output, a new VPC with associated subnets has been created on our behalf. This is a nice benefit of a tool like CDK: it creates sensible defaults for our new cluster without us needing to specify them.
We should now see that we have created a new ECS cluster with our service running our current task definition. CDK has helped us out by outputting the URL for our service load balancer. Let's grab that URL and check that our container is running and accepting traffic by hitting it via curl.
$ curl Infra-appse-187228PB273DW-1700265048.us-west-2.elb.amazonaws.com
Sample Endpoint
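As a side note, the three ECS terms from earlier map directly onto what the AWS CLI shows. Assuming your default profile and region are configured, you can list each layer yourself (outputs omitted here):

```
$ aws ecs list-clusters
$ aws ecs list-services --cluster app-cluster
$ aws ecs list-tasks --cluster app-cluster
```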
Where to go from here
Now that our initial container is running in our ECS cluster we can take a step back and explore where we can go from here.
Our ECS task definition is currently set up to point at the latest tag of the Docker image we publish. This means that we can update our image and the changes can be deployed to our cluster. Let's update the response we return from our API.
api.get('/', (req, res) => {
res.send('New Response\n');
});
Now let's build and push a new version of our Docker image.
$ docker build -t sample-express-app .
$ `aws ecr get-login --no-include-email`
$ docker tag sample-express-app <aws-id>.dkr.ecr.us-west-2.amazonaws.com/sample-express-app:latest
$ docker push <aws-id>.dkr.ecr.us-west-2.amazonaws.com/sample-express-app:latest
9a7704a19307: Pushed
03a86aeeb52b: Layer already exists
c14651828ff6: Layer already exists
4ecb552d7aff: Layer already exists
6ad739b471d2: Layer already exists
954f92adc866: Layer already exists
adca1e83b51a: Layer already exists
73982c948de0: Layer already exists
84d0c4b192e8: Layer already exists
a637c551a0da: Layer already exists
2c8d31157b81: Layer already exists
7b76d801397d: Layer already exists
f32868cde90b: Layer already exists
0db06dff9d9a: Layer already exists
latest: digest: sha256:ed82982bfa5fe6333c4b67afaae0f36e3208a588736c9586ff81dbdd7e1bc0f5 size: 3256
We now have a new version of our image pushed up to our repository. But it has not been deployed to the service running in our cluster. To pick up the change in our cluster, we need to restart the service so that it pulls the latest tag of our image.
Luckily, we can do that with one AWS CLI call from our command line.
$ aws ecs update-service --force-new-deployment --cluster app-cluster --service sample-express-app
Once our new image gets rolled out to the service (which can take a few minutes), we can curl our endpoint again and see our new response.
$ curl Infra-appse-187228PB273DW-1700265048.us-west-2.elb.amazonaws.com
New Response
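As an aside, shipping everything under the latest tag makes it hard to tell which code is running, or to roll back to a known-good version. A common alternative, sketched here assuming your project is a git checkout, is to tag each image with the commit SHA and point the task definition at that tag instead:

```
$ TAG=$(git rev-parse --short HEAD)
$ docker build -t sample-express-app:$TAG .
$ docker tag sample-express-app:$TAG <aws-id>.dkr.ecr.us-west-2.amazonaws.com/sample-express-app:$TAG
$ docker push <aws-id>.dkr.ecr.us-west-2.amazonaws.com/sample-express-app:$TAG
```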
Conclusion
We have now run through the full exercise of building a Docker image, provisioning an ECS cluster, and running our image within our cluster. We even demoed how we can update our image and deploy new versions to our running service.
There are a lot of directions you could go next. The next level for DevOps would be to set up a CI/CD pipeline that allows you to continuously deploy your new images to your ECS cluster. I have a blog post that focuses on building your Docker images using AWS CodePipeline and CodeBuild to get you started.
From there it might be worth reducing the deployment times when launching a new image in our ECS cluster. The reason this is not instantaneous is that the load balancer associated with our service has connection draining enabled. This allows us to incrementally roll out new versions to our service without taking down our existing service right away. This is ideal for an unnoticeable deployment, but it does mean our deployment times take a bit longer.
Want to check out my other projects?
I am a huge fan of the DEV community. If you have any questions or want to chat about different ideas relating to AWS or containers, reach out on Twitter or drop a comment below.
Outside of blogging, I created a Learn AWS By Using It course. In the course, we focus on learning Amazon Web Services by actually using it to host, secure, and deliver static websites. It's a simple problem, with many solutions, but it's perfect for ramping up your understanding of AWS. I recently added two new bonus chapters to the course that focus on Infrastructure as Code and Continuous Deployment.
I also curate my own weekly newsletter. The Learn By Doing newsletter is packed full of awesome cloud, coding, and DevOps articles each week. Sign up to get it in your inbox.