🚀 Spin Up an Amazon EKS Cluster in Minutes! ⏱️

Sarvar Nadaf - Oct 24 - Dev Community

👋 Hey there! I’m Sarvar, a Cloud Architect passionate about cutting-edge technologies. With years of experience in Cloud Operations (Azure and AWS), Data Operations, Data Analytics, and DevOps, I've had the privilege of working with clients around the globe, delivering top-notch results. I’m always exploring the latest tech trends and love sharing what I learn along the way. Let’s dive into the world of cloud and tech together! 🚀

Important Note: This tutorial is not intended for production-grade deployment. Instead, it’s designed for beginners who want to start their EKS journey and become familiar with the basics of EKS and eksctl.

In this article, we're going to spin up an Amazon EKS cluster in minutes using simple commands from the command line. We’ll start by configuring our AWS credentials for secure access. To do this, we’ll create a new IAM user with sufficient permissions to manage EKS and EC2 resources and use that user's credentials to configure the AWS CLI. With the setup ready, we'll use VS Code and Git Bash (or any Linux CLI) to deploy our EKS cluster in two ways:

  1. Imperative Approach: Running eksctl commands directly in the CLI to configure and launch the cluster step-by-step, ideal for quick experimentation.
  2. Declarative Approach: Using a YAML configuration file to specify all the cluster details, making it easy to manage, replicate, and modify settings.

If you need to install eksctl, awscli, and kubectl, please refer to this guide. The guide provides quick and easy steps to install the necessary CLI tools for this tutorial, taking only a few minutes to complete.
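Once the CLI tools are in place, here's a minimal sketch of configuring the AWS CLI with the new IAM user's access key and verifying who you're authenticated as (the values entered at the prompts are placeholders for your own credentials):

   # Configure the default profile with the IAM user's access key
   aws configure
   # AWS Access Key ID [None]: <your-access-key-id>
   # AWS Secret Access Key [None]: <your-secret-access-key>
   # Default region name [None]: us-east-1
   # Default output format [None]: json

   # Confirm the CLI is authenticated as the intended IAM user
   aws sts get-caller-identity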


All the code and commands for this tutorial are available in my GitHub repository. Feel free to fork or download the code from this repo to get started!


Imperative Approach:

Using eksctl commands directly in the command line (e.g., eksctl create cluster --name my-cluster) represents an imperative approach to EKS deployment. In this style, you issue specific commands to make changes immediately, step by step. This method is ideal for quick, one-time actions or immediate adjustments to your EKS environment but may lack the consistency and version control offered by declarative methods.

Below is the setup expressed as a series of eksctl commands. While working from the command line, certain features may require additional configuration files (such as JSON for IAM roles or Fargate profiles), but you can still achieve much of the setup directly.

Step-by-Step Command-Line Setup

1. Create the Cluster:
The following command creates the EKS cluster control plane without any node group.

   eksctl create cluster \
   --name=dev-eks-cluster \
   --region=us-east-1 \
   --zones=us-east-1a,us-east-1b \
   --without-nodegroup 
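eksctl normally updates your kubeconfig automatically when the cluster is created, but if kubectl isn't pointing at the new cluster (for example, on another machine), this sketch shows how to wire it up with the AWS CLI and verify connectivity:

   # Point kubectl at the new cluster
   aws eks update-kubeconfig --region us-east-1 --name dev-eks-cluster

   # Verify the control plane is reachable
   kubectl cluster-info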

2. List available EKS Cluster:
To list existing clusters, use the following command.

   eksctl get cluster  

3. Enable IAM OIDC Provider for EKS Clusters:
Why is it required? The IAM OIDC provider for an EKS cluster allows Kubernetes pods to assume IAM roles via OIDC authentication, enabling secure access to AWS services without static credentials.

   eksctl utils associate-iam-oidc-provider \
    --region <region-code> \
    --cluster <cluster-name> \
    --approve

#Updated
eksctl utils associate-iam-oidc-provider \
    --region us-east-1 \
    --cluster dev-eks-cluster \
    --approve
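To confirm the association worked, you can compare the cluster's OIDC issuer URL with the IAM OIDC providers registered in your account; a quick sketch:

   # Print the cluster's OIDC issuer URL
   aws eks describe-cluster --name dev-eks-cluster --region us-east-1 \
       --query "cluster.identity.oidc.issuer" --output text

   # List IAM OIDC providers; one entry should match the issuer URL above
   aws iam list-open-id-connect-providers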

4. Create On-Demand Instance Node Group:
Not recommended if cost is a concern: this creates and associates an On-Demand managed node group with the EKS cluster. Replace test in --ssh-public-key with the name of an existing EC2 key pair in your account.

   eksctl create nodegroup \
     --cluster dev-eks-cluster \
     --name on-demand-ng \
     --instance-types t2.micro,t2.small \
     --nodes 2 \
     --nodes-min 1 \
     --nodes-max 5 \
     --node-volume-size 20 \
     --node-volume-type gp3 \
     --ssh-access \
     --ssh-public-key test \
     --managed \
     --alb-ingress-access \
     --asg-access \
     --full-ecr-access

OR

4. Create a Spot Instance Node Group:
Highly recommended as a cost-saving option: this provisions Spot instances for the managed node group instead of On-Demand capacity. As before, --ssh-public-key must reference an existing EC2 key pair.

   eksctl create nodegroup \
     --cluster dev-eks-cluster \
     --name spot-ng \
     --instance-types t2.micro,t2.small \
     --nodes 2 \
     --nodes-min 1 \
     --nodes-max 5 \
     --node-volume-size 20 \
     --node-volume-type gp3 \
     --ssh-access \
     --ssh-public-key test \
     --managed \
     --spot \
     --alb-ingress-access \
     --asg-access \
     --full-ecr-access
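Once the nodes have joined the cluster, you can check which capacity type each node runs on. EKS managed node groups typically label nodes with eks.amazonaws.com/capacityType (SPOT or ON_DEMAND), so a quick sketch to see the difference:

   # Show nodes with an extra column for their capacity type
   kubectl get nodes -L eks.amazonaws.com/capacityType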

5. List Node Groups Associated with the EKS Cluster:

   eksctl get nodegroup --cluster=<clusterName>

#Updated
   eksctl get nodegroup --cluster=dev-eks-cluster

6. Delete EKS Cluster:

   eksctl delete cluster <clusterName>

#Updated
   eksctl delete cluster dev-eks-cluster



Declarative Approach:

By defining EKS configurations in a YAML file (e.g., eksctl create cluster -f eks-cluster-config.yaml), you adopt a declarative approach. Here, you specify the desired end state of your cluster, detailing resources like node groups and networking. eksctl then ensures your cluster matches this configuration, making it easy to manage, version, and reproduce environments, especially for complex or production deployments. You can further customize this YAML configuration to add more features and tailor it to your specific needs.

Let's break down the configuration into sections for easier understanding, and then I’ll provide the full YAML file at the end.

1. Cluster Metadata

This section defines the basic settings for the EKS cluster itself, including the name, region, and Kubernetes version.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: dev-eks-cluster
  region: us-east-1
  version: "1.24"  # Change it if needed

2. VPC Configuration

Here we configure the VPC and subnets in which the cluster will operate. In this example, we're referencing an existing VPC and public subnets in three availability zones.

vpc:
  id: "vpc-xxxxxxx"  # Put Your VPC Details 
  subnets:
    public:
      us-east-1a:
        id: "subnet-xxxxxxx"
      us-east-1b:
        id: "subnet-xxxxxxx"
      us-east-1c:
        id: "subnet-xxxxxxx"

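If you're not sure which VPC and subnet IDs to plug in, the AWS CLI can list them; a minimal sketch, assuming your credentials allow ec2:Describe* calls (vpc-xxxxxxx is a placeholder):

   # List VPCs with their IDs and Name tags
   aws ec2 describe-vpcs \
       --query "Vpcs[].{ID:VpcId,Name:Tags[?Key=='Name']|[0].Value}" --output table

   # List subnets in a given VPC along with their availability zones
   aws ec2 describe-subnets --filters "Name=vpc-id,Values=vpc-xxxxxxx" \
       --query "Subnets[].{ID:SubnetId,AZ:AvailabilityZone}" --output table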

3. On-Demand Node Group

This node group uses on-demand instances, which provide consistent, always-available compute resources. It includes common settings for desired capacity, volume size, SSH access, and IAM permissions.

managedNodeGroups:
  - name: on-demand-ng
    instanceType: t3.medium  # Change it as needed
    desiredCapacity: 2
    minSize: 1
    maxSize: 5
    volumeSize: 20
    volumeType: gp3
    ssh:
      allow: true
      publicKeyName: your-keypair  # Put your Key pair name
    labels:
      lifecycle: on-demand
      workload: general
    tags:
      Name: "on-demand-ng"
    iam:
      withAddonPolicies:
        albIngress: true       
        autoScaler: true       
        imageBuilder: true     

4. Spot Instance Node Group

This node group uses Spot instances, which are typically more cost-effective. Spot instances are suitable for fault-tolerant and stateless applications due to potential interruptions.

  - name: spot-ng
    instanceTypes: ["t3.small", "t3.micro"]  # Change it as needed
    desiredCapacity: 2
    minSize: 1
    maxSize: 5
    volumeSize: 20
    volumeType: gp3
    ssh:
      allow: true
      publicKeyName: your-keypair  # Your key pair name
    labels:
      lifecycle: spot
      workload: batch
    tags:
      Name: "spot-ng"
    iam:
      withAddonPolicies:
        albIngress: true
        autoScaler: true
        imageBuilder: true
    spot: true  # Spot instances

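Because the node groups carry different lifecycle labels, you can steer workloads onto Spot capacity with a simple nodeSelector. A minimal sketch of a Deployment that targets the spot nodes defined above (the names and image are just examples, not part of the cluster config):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker        # example name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      nodeSelector:
        lifecycle: spot     # matches the label on the spot-ng node group
      containers:
        - name: worker
          image: public.ecr.aws/docker/library/busybox:latest  # example image
          command: ["sh", "-c", "sleep 3600"]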

Full YAML Configuration

Here’s the complete configuration, combining all sections:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: dev-eks-cluster
  region: us-east-1
  version: "1.24"  

vpc:
  id: "vpc-xxxxxxx"  
  subnets:
    public:
      us-east-1a:
        id: "subnet-xxxxxxx"
      us-east-1b:
        id: "subnet-xxxxxxx"
      us-east-1c:
        id: "subnet-xxxxxxx"

managedNodeGroups:
  - name: on-demand-ng
    instanceType: t3.medium  
    desiredCapacity: 2
    minSize: 1
    maxSize: 5
    volumeSize: 20
    volumeType: gp3
    ssh:
      allow: true
      publicKeyName: your-keypair  
    labels:
      lifecycle: on-demand
      workload: general
    tags:
      Name: "on-demand-ng"
    iam:
      withAddonPolicies:
        albIngress: true       
        autoScaler: true       
        imageBuilder: true     

  - name: spot-ng
    instanceTypes: ["t3.small", "t3.micro"]  
    desiredCapacity: 2
    minSize: 1
    maxSize: 5
    volumeSize: 20
    volumeType: gp3
    ssh:
      allow: true
      publicKeyName: your-keypair  
    labels:
      lifecycle: spot
      workload: batch
    tags:
      Name: "spot-ng"
    iam:
      withAddonPolicies:
        albIngress: true       
        autoScaler: true       
        imageBuilder: true     
    spot: true  


Using the YAML Configuration

To create the cluster with both on-demand and Spot instance node groups, save this configuration as eks-cluster-config.yaml and run:

eksctl create cluster -f eks-cluster-config.yaml
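If you'd like to see what eksctl will build before it provisions anything, recent eksctl versions support a dry run that prints the fully resolved configuration without creating resources (flag availability depends on your eksctl version):

   eksctl create cluster -f eks-cluster-config.yaml --dry-run

Then confirm the cluster exists once creation finishes: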

   eksctl get cluster  

List Node Groups Associated with the EKS Cluster:

   eksctl get nodegroup --cluster=<clusterName>

#Updated
   eksctl get nodegroup --cluster=dev-eks-cluster

This configuration will set up an EKS cluster in us-east-1 with an on-demand node group (on-demand-ng) and a Spot instance node group (spot-ng). Adjust instance types, labels, and other properties as needed to suit your environment.

To delete an Amazon EKS cluster created with eksctl, you can use the following command:

eksctl delete cluster --name dev-eks-cluster --region us-east-1

Additional Notes

  • This command will delete the EKS cluster, associated node groups, and the underlying infrastructure (such as VPC and subnets) if they were created by eksctl during cluster creation.
  • If the cluster was set up with an existing VPC and subnets, eksctl will only delete the cluster resources created by eksctl and will leave the existing VPC and subnets intact.
  • You can also enable the IAM OIDC provider declaratively: recent eksctl versions support withOIDC: true under a top-level iam section of the config file (see the sketch below). Alternatively, run eksctl utils associate-iam-oidc-provider as a separate command after your cluster is created.
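A minimal sketch of what that looks like in the config file, assuming a reasonably recent eksctl version:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: dev-eks-cluster
  region: us-east-1

iam:
  withOIDC: true  # associates an IAM OIDC provider at cluster creation time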

Conclusion: Amazon EKS makes it easy to deploy and manage Kubernetes clusters on AWS, and tools like eksctl speed up the process even further. With just a few commands, we created a functional EKS cluster both imperatively and declaratively, each approach with its own advantages. For anyone new to EKS, this simple setup is a great place to start. Keep in mind, though, that this guide covers only the fundamentals and is intended mainly for learning; a reliable, production-grade cluster would require additional security settings, configuration, and operational practices. With these skills in hand, you can start exploring Kubernetes on AWS for testing, development, or simply to learn more about cloud-based container orchestration!

— — — — — — — —
Here is the End!

Thank you for reading! ✨ I hope this article helped simplify the process and gave you valuable insights. As I continue to explore the ever-evolving world of technology, I’m excited to share more guides, tips, and updates with you. 🚀 Stay tuned for more content that breaks down complex concepts and makes them easier to grasp. Let’s keep learning and growing together! 💡
