How to Provision an AWS EKS Kubernetes Cluster with Terraform

Jacob Martin - Oct 20 '21 - Dev Community

In this guide, you will learn how to provision an AWS EKS Kubernetes cluster with Terraform. Let’s start with the basics.

What is AWS EKS?

AWS EKS provides managed Kubernetes clusters as a service. If you’re on AWS and want to avoid getting into the details of setting up a Kubernetes cluster from scratch, EKS is the way to go!

Before we get started

You’ll need to have Terraform installed locally:

brew install terraform

as well as the AWS CLI:

brew install awscli

as well as kubectl:

brew install kubernetes-cli

If you’re on a different operating system, refer to the official installation instructions for Terraform, the AWS CLI, and kubectl.
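You can verify that all three tools are available on your PATH before continuing (the exact versions don’t matter for this guide):

terraform version
aws --version
kubectl version --client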

Step 1 - Configuring the AWS CLI

You’ll need to configure your AWS CLI with access credentials to your AWS account. You can do this by running:

aws configure

and providing your Access Key ID and Secret Access Key. You will also need to provide the region. For the purposes of this guide we will use us-east-2. Terraform will later use these credentials to provision your AWS resources.
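As a quick sanity check, you can confirm that the CLI is authenticated against the account you expect:

aws sts get-caller-identity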

Step 2 - Getting the code

You can now clone a repository which contains everything you need to set up EKS:

git clone https://github.com/hashicorp/learn-terraform-provision-eks-cluster/
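Then change into the newly created directory, since all of the Terraform commands below need to be run from there:

cd learn-terraform-provision-eks-cluster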

Inside you’ll see a few files, the main one being eks-cluster.tf:

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = local.cluster_name
  cluster_version = "1.20"
  subnets         = module.vpc.private_subnets

  tags = {
    Environment = "training"
    GithubRepo  = "terraform-aws-eks"
    GithubOrg   = "terraform-aws-modules"
  }

  vpc_id = module.vpc.vpc_id

  workers_group_defaults = {
    root_volume_type = "gp2"
  }

  worker_groups = [
    {
      name                          = "worker-group-1"
      instance_type                 = "t2.small"
      additional_userdata           = "echo foo bar"
      asg_desired_capacity          = 2
      additional_security_group_ids = [aws_security_group.worker_group_mgmt_one.id]
    },
    {
      name                          = "worker-group-2"
      instance_type                 = "t2.medium"
      additional_userdata           = "echo foo bar"
      additional_security_group_ids = [aws_security_group.worker_group_mgmt_two.id]
      asg_desired_capacity          = 1
    },
  ]
}

It uses the EKS Terraform module to set up an EKS cluster with two worker groups (the actual nodes running your workloads): one with two small machines (t2.small) and one with a single medium machine (t2.medium).
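The local.cluster_name and module.vpc references point to definitions in the repository’s other files. As a rough sketch of what the cluster name definition looks like (the exact prefix in the repository may differ), it is built from a random suffix so that repeated runs don’t collide:

# Excerpt (approximate) from the repository's VPC/locals configuration
locals {
  cluster_name = "education-eks-${random_string.suffix.result}"
}

resource "random_string" "suffix" {
  length  = 8
  special = false
}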

Step 3 - Running Terraform

You can now create all of those resources using Terraform. First, run:

terraform init -upgrade

to initialize the Terraform workspace and download any modules and providers which are used.

In order to do a dry run of the changes to be made, run:

terraform plan -out terraform.plan

This will show you that 51 resources will be added, along with the relevant details for each. You can then run terraform apply with the resulting plan, in order to actually provision the resources:

terraform apply terraform.plan

This might take a few minutes to finish. You might get a “timed out” error. In that case, just repeat both the terraform plan and terraform apply steps.

In the end you will get a list of outputs with their respective values printed out. Make note of your cluster_name.
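If you need the outputs again later, you can reprint them at any time from the same directory:

terraform output cluster_name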

Step 4 - Connecting with kubectl

In order to use kubectl, which is the main tool to interact with a Kubernetes cluster, you have to give it credentials to your EKS Kubernetes cluster. You can do that by running:

aws eks --region us-east-2 update-kubeconfig --name <output.cluster_name>

Just make sure to replace <output.cluster_name> with the relevant value from your Terraform apply outputs.
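To verify that kubectl now points at the new cluster, check the current context and make sure the API server responds:

kubectl config current-context
kubectl cluster-info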

Step 5 - Interacting with your cluster

You can now see the nodes of your cluster by running:

> kubectl get nodes -o custom-columns=Name:.metadata.name,nCPU:.status.capacity.cpu,Memory:.status.capacity.memory
Name                                       nCPU   Memory
ip-10-0-1-23.us-east-2.compute.internal    2      4026680Ki
ip-10-0-2-8.us-east-2.compute.internal     1      2031268Ki
ip-10-0-3-128.us-east-2.compute.internal   1      2031268Ki

The command is this long because it displays custom columns; thanks to those, we can see that there are indeed two smaller nodes and one bigger node, matching the two worker groups defined earlier.

Let’s deploy an Nginx instance to see if the cluster is working correctly by running:

kubectl run --port 80 --image nginx nginx

You can see the status of it by running:

> kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          2m46s

And finally set up a tunnel from your computer to this pod by running:

kubectl port-forward nginx 3000:80

If you open http://localhost:3000 in your browser, you should see the web server greet you:

[Screenshot: the default “Welcome to nginx!” page]
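If you prefer to check from the terminal, a quick curl from a second terminal (while the port-forward is still running) should return an HTTP 200 response:

curl -I http://localhost:3000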

Step 6 - Cleaning up

In order to destroy the resources we’ve created in this session, you can run:

terraform destroy

This may again take up to a few minutes.
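Note that the entry for the destroyed cluster will still linger in your kubeconfig. If you want to tidy that up as well, list your contexts and delete the one pointing at the old cluster (for EKS, the context name is typically the cluster’s ARN):

kubectl config get-contexts
kubectl config delete-context <context-name>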

Conclusion

I hope this guide helped you on your Kubernetes journey on AWS! If you want more help managing your Terraform state file, building more complex workflows based on Terraform, and managing AWS credentials per run, instead of using a static pair on your local machine, check out Spacelift. We’d love to have you!

You will find more Terraform tutorials on our website.
