In my previous post I showed how to spin up an EKS cluster with pure shell and the AWS CLI. (All the links to other posts in this series will be here.)
That used to be the easiest way of getting a cluster without leaving your terminal. But pretty early in EKS history (2017), some smart folks from a company named Weaveworks (RIP) realized that doing this with aws CLI subcommands was too cumbersome, and that EKS is complex enough to deserve a command-line client of its own. That's how eksctl was born.
A few months ago Weaveworks (who brought us a plethora of great OSS tools like Flux, Flagger and Weave) was shut down. But AWS announced full support for eksctl back in 2019 - so eksctl is now the de-facto standard EKS CLI tool.
The great thing about eksctl is that it allows one to create and manage clusters not only using one-off commands with arguments but also with YAML configuration files - in a true and familiar IaC way.
We'll check out both options but first let's install eksctl and generate an SSH key so we can connect to the nodes in the clusters we create if needed. Please note - I'm not endorsing SSH connections to your EKS nodes. Do avoid this if possible - so as not to cause inadvertent configuration drift. But sometimes we still need this for troubleshooting, especially in training environments. So let's have the SSH key handy.
Install eksctl
If you're on Linux - here are the official instructions:
# for ARM systems, set ARCH to: `arm64`, `armv6` or `armv7`
ARCH=amd64
PLATFORM=$(uname -s)_$ARCH
curl -sLO "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$PLATFORM.tar.gz"
# (Optional) Verify checksum
curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_checksums.txt" | grep $PLATFORM | sha256sum --check
tar -xzf eksctl_$PLATFORM.tar.gz -C /tmp && rm eksctl_$PLATFORM.tar.gz
sudo mv /tmp/eksctl /usr/local/bin
Please note this doesn't install eksctl prerequisites such as kubectl and aws-iam-authenticator.
And if, like me, you're on a Mac - definitely use brew, as it takes care of all the dependencies (even though the official eksctl docs don't recommend it):
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl
And now - let's generate that ssh key:
ssh-keygen -f ./id_rsa -N ''
This will create an id_rsa and id_rsa.pub in your current directory. Make sure to run the following eksctl commands from the same directory and it will pick up this key by default.
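To double-check what was generated, you can print the key's fingerprint (the guard on the first line makes the snippet safe to re-run):

```shell
# Generate the key pair only if it doesn't exist yet,
# then print its bit length, fingerprint and type
[ -f ./id_rsa ] || ssh-keygen -q -f ./id_rsa -N ''
ssh-keygen -l -f ./id_rsa.pub
```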
Sidenote - the VPC
If you've read the previous post in this series (where we created an EKS cluster using the AWS CLI), you'll have noticed that creating the VPC was a separate step. The added value of eksctl is that it takes care of most dependencies and add-ons for us without the need to run additional commands. The same is true for VPC creation: a new VPC with a default subnet configuration is created for us each time we spin up a new cluster, unless we specifically define that we want to re-use an existing VPC.
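Re-using an existing VPC is done by pinning its subnets in the config file. A minimal sketch - the subnet IDs and availability zones below are placeholders, substitute your own:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: way3
  region: eu-central-1

# Skip VPC creation and place the cluster into pre-existing subnets
vpc:
  subnets:
    private:
      eu-central-1a: { id: subnet-0aaaaaaaaaaaaaaaa }
      eu-central-1b: { id: subnet-0bbbbbbbbbbbbbbbb }
```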
1. Create an EKS cluster - eksctl with arguments
The most straightforward way of creating an EKS cluster with eksctl is providing all the arguments on the command line and letting the tool take care of the defaults. This approach, while limited and not repeatable enough, can definitely give us a cluster.
The command I provide here defines quite a number of settings I personally find important even for small toy clusters I spin up for fun and games. But eksctl can do its job even with less stuff defined. Look in the official "Getting Started" docs if you want just the bare bones.
So here's what I decided to use:
# First - define the environment.
export CLUSTER_NAME=way3
export AWS_REGION=eu-central-1
export K8S_VERSION=1.30
export NODE_TYPE=t2.medium
export MIN_NODES=1
export MAX_NODES=3
I'm starting out with small nodes and already preparing the cluster for auto-scaling with min and max node definitions. It's important to note that eksctl allows us to enable the IAM policy for ASG access and define the auto-scaling range, but it doesn't take care of installing cluster-autoscaler. We'd need to do that separately. If we wanted to... On the other hand - these days it makes total sense to start out with Karpenter, for which eksctl does provide support, but not on the command line. Which means we'll see how to configure Karpenter in the next section.
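If you do go the classic cluster-autoscaler route, a minimal install sketch with Helm could look like this. The chart values shown are from the official autoscaler chart; treat this as a starting point and check the chart's docs for the IAM/service-account wiring your setup needs:

```shell
# Add the official cluster-autoscaler chart repo
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm repo update

# Install cluster-autoscaler; auto-discovery finds the ASGs
# tagged for this cluster by eksctl's --asg-access setup
helm install cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system \
  --set autoDiscovery.clusterName=$CLUSTER_NAME \
  --set awsRegion=$AWS_REGION
```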
And now - time to spin up the cluster:
eksctl create cluster --name $CLUSTER_NAME \
--region $AWS_REGION \
--with-oidc --version $K8S_VERSION \
--nodegroup-name ng-$CLUSTER_NAME-1 \
--node-type $NODE_TYPE \
--nodes $MIN_NODES --nodes-min $MIN_NODES --nodes-max $MAX_NODES \
--spot \
--ssh-access \
--asg-access \
--external-dns-access \
--full-ecr-access \
--alb-ingress-access
This command gives us a full-featured cluster with IAM policies for ECR access (--full-ecr-access), the external-dns controller (--external-dns-access), the ALB ingress controller (--alb-ingress-access), OIDC support and more. It also runs its nodes on spot instances for cost optimization - which is totally fine for a toy cluster but may not be appropriate if the application you're planning to deploy isn't disruption-tolerant.
From the command output we learn that in the background our command is converted into a couple of CloudFormation stacks:
2024-06-27 12:51:47 [ℹ] will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2024-06-27 12:51:47 [ℹ] will create a CloudFormation stack for cluster itself and 1 managed nodegroup stack(s)
After about 15 minutes (depending on the weather and the region you've decided to use) CloudFormation returns and we can access our cluster:
kubectl get node
NAME STATUS ROLES AGE VERSION
ip-192-168-56-76.eu-central-1.compute.internal Ready <none> 35m v1.29.3-eks-ae9a62a
Note that the new cluster context is added to your kubeconfig automatically.
If you want to update the kubeconfig at a later time you can use:
eksctl utils write-kubeconfig -c $CLUSTER_NAME -r $AWS_REGION
But, as we already said - the CLI approach is limited. To do real IaC we want to put the cluster definitions in a YAML config file. This gives us a lot more capabilities, and allows us to commit the config file to source control for further collaboration, change tracking and automation.
But first - let's remove the cluster we just created:
eksctl delete cluster --region=$AWS_REGION --name=$CLUSTER_NAME
2. Create an EKS cluster - eksctl with a config file
The config file I provide here gives us everything we defined at the command line and more. As mentioned - it also allows us to install Karpenter in the same eksctl execution, thus giving us an industry-standard auto-scaling EKS cluster with just-in-time node provisioning. You can grab this file on GitHub too.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: way3
  region: eu-central-1
  version: "1.30"
  tags:
    karpenter.sh/discovery: way3

iam:
  withOIDC: true

managedNodeGroups:
  - name: ng-way3-1
    labels: { role: worker }
    instanceType: t2.medium
    desiredCapacity: 2
    minSize: 1
    maxSize: 3
    tags:
      nodegrouprole: way3
    volumeSize: 20
    iam:
      withAddonPolicies:
        externalDNS: true
        certManager: true
        awsLoadBalancerController: true
        albIngress: true
        ebs: true
        efs: true
        imageBuilder: true
        cloudWatch: true
    ssh:
      allow: true # will use ~/.ssh/id_rsa.pub as the default ssh key

karpenter:
  version: '0.37.0'
  createServiceAccount: true
  withSpotInterruptionQueue: true
An attentive eye will notice I've also defined some additional things, such as CloudWatch, EBS and EFS addon policies for the nodes. Consider removing these lines if you don't need them.
You'll also notice that not only does it install Karpenter - it also takes care of setting up the SpotInterruptionQueue, which allows Karpenter to replace spot instances before they die.
And there are many additional options available.
So yes - this is a very scalable approach, which takes care of more or less everything one might need in an EKS cluster.
Execute this plan with:
eksctl create cluster -f cluster.yaml
This again creates a CloudFormation execution that, granted we have all the necessary permissions, should complete successfully.
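By the way, if you'd rather see what eksctl is about to do before it does it, the create command supports a dry-run mode that prints the fully expanded ClusterConfig (with all the defaults filled in) without creating any resources:

```shell
# Preview the normalized config instead of creating the cluster
eksctl create cluster -f cluster.yaml --dry-run
```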
Let's check that Karpenter got installed:
kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
karpenter karpenter-79db484bbf-flzzq 1/1 Running 0 32s
karpenter karpenter-79db484bbf-nfhsp 1/1 Running 0 32s
kube-system aws-node-8h4ln 2/2 Running 0 17m
kube-system aws-node-vq8wj 2/2 Running 0 18m
kube-system coredns-6f6d89bcc9-qx497 1/1 Running 0 24m
kube-system coredns-6f6d89bcc9-wwjtp 1/1 Running 0 24m
kube-system kube-proxy-8mnd2 1/1 Running 0 18m
kube-system kube-proxy-c5zkp 1/1 Running 0 17m
Yup, here it is!
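Keep in mind that the Karpenter controller alone won't provision anything - it still needs at least one NodePool and an EC2NodeClass to know what nodes it's allowed to create. Here's a minimal sketch for the v1beta1 API that ships with Karpenter 0.37; the resource names, the spot-only requirement and the node role name are illustrative assumptions, while the discovery tags match the ones in our cluster config:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
      nodeClassRef:
        name: default
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2
  role: "KarpenterNodeRole-way3"   # assumption - substitute the node role created for your cluster
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: way3
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: way3
```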
The upside of using the config file is of course the ability to manage stuff in a somewhat idempotent way. So, for example, if we want to change our node group config - we can update the following lines:
  - name: ng-way3-1
    labels: { role: worker }
    instanceType: t2.medium
    desiredCapacity: 1
    minSize: 1
    maxSize: 5
and then run eksctl update nodegroup -f cluster.yaml - this will update our nodegroup's autoscaling range.
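For a quick size-only change there's also a dedicated subcommand that skips the config file entirely - the flag values here just mirror the example above:

```shell
# Adjust the managed nodegroup's size and autoscaling range directly
eksctl scale nodegroup --cluster=way3 --name=ng-way3-1 \
  --nodes=2 --nodes-min=1 --nodes-max=5
```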
And of course eksctl provides us with a plethora of additional commands that come in very handy for the ongoing management of EKS clusters:
eksctl -h
The official CLI for Amazon EKS
Usage: eksctl [command] [flags]
Commands:
eksctl anywhere EKS anywhere
eksctl associate Associate resources with a cluster
eksctl completion Generates shell completion scripts for bash, zsh or fish
eksctl create Create resource(s)
eksctl delete Delete resource(s)
eksctl deregister Deregister a non-EKS cluster
eksctl disassociate Disassociate resources from a cluster
eksctl drain Drain resource(s)
eksctl enable Enable features in a cluster
eksctl get Get resource(s)
eksctl help Help about any command
eksctl info Output the version of eksctl, kubectl and OS info
eksctl register Register a non-EKS cluster
eksctl scale Scale resources(s)
eksctl set Set values
eksctl unset Unset values
eksctl update Update resource(s)
eksctl upgrade Upgrade resource(s)
eksctl utils Various utils
eksctl version Output the version of eksctl
All in all - eksctl is the go-to tool for EKS management if you haven't already standardized your cloud platform on another IaC solution such as Terraform, Pulumi, CDK or others, which we'll look into in the following posts.
Thanks for reading and may your clusters be lean!
P.S. Now that you've got a cluster - why not start managing its cost and performance for free with PerfectScale - the leading Kubernetes cost optimization solution?
Join now to build clusters you can be proud of: https://perfectscale.io.