Deploying AWS EKS with Terraform and Blueprints Addons

Timur Galeev - Nov 7 - Dev Community

After a pause from covering AWS and infrastructure management, I’m back with insights for those looking to navigate the world of AWS containers and Kubernetes with ease. For anyone new to deploying Kubernetes in AWS, leveraging Terraform for setting up an EKS (Elastic Kubernetes Service) cluster can be a game-changer. By combining Terraform’s infrastructure-as-code capabilities with AWS’s EKS Blueprints Addons, users can create a scalable, production-ready Kubernetes environment without the usual complexity.

In this article, I'll guide you through using Terraform to deploy EKS with essential add-ons, which streamline the configuration and management of your Kubernetes clusters. With these modular add-ons, you can quickly incorporate features like CoreDNS, the AWS Load Balancer Controller, and other powerful tools to customize and enhance your setup. Whether you’re new to container orchestration or just seeking an efficient AWS solution, this guide will help you build a resilient EKS environment in a few straightforward steps.

So let’s start.

Setting Up the VPC for EKS

The VPC configuration is foundational for your EKS cluster, establishing a secure, isolated environment with both public and private subnets. Private subnets are typically used to host your Kubernetes nodes, keeping them inaccessible from the internet. Here’s the configuration provided in the vpc.tf file, which sets up both public and private subnets along with NAT and Internet Gateway options for flexible networking.

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name                 = local.name
  cidr                 = var.vpc_cidr
  azs                  = local.azs
  secondary_cidr_blocks = var.secondary_cidr_blocks
  private_subnets      = concat(local.private_subnets, local.secondary_ip_range_private_subnets)
  public_subnets       = local.public_subnets
  enable_nat_gateway   = true
  single_nat_gateway   = true
  public_subnet_tags   = {"kubernetes.io/role/elb" = 1}
  private_subnet_tags  = {
    "kubernetes.io/role/internal-elb" = 1
    "karpenter.sh/discovery" = local.name
  }
  tags = local.tags
}

This setup:

  • Creates private and public subnets across multiple availability zones.

  • Configures a secondary CIDR block for the EKS data plane, which is crucial for large-scale deployments (the locals that carve these ranges are sketched after this list).

  • Enables a NAT gateway for private subnets, ensuring secure internet access for internal resources.

  • Tags subnets for Kubernetes service and discovery, essential for integration with other AWS services like load balancers and Karpenter.
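
The vpc module above references several local values (local.azs, local.private_subnets, local.public_subnets, and local.secondary_ip_range_private_subnets) that are defined in the repository's locals. As a rough sketch of how such values can be derived, with illustrative CIDR math rather than the repository's exact code:

data "aws_availability_zones" "available" {}

locals {
  # local.name and local.tags are defined in the repository as well
  azs = slice(data.aws_availability_zones.available.names, 0, 2)

  # One public and one private subnet per AZ from the primary VPC CIDR
  public_subnets  = [for k, v in local.azs : cidrsubnet(var.vpc_cidr, 8, k)]
  private_subnets = [for k, v in local.azs : cidrsubnet(var.vpc_cidr, 8, k + 10)]

  # Larger per-AZ subnets for the EKS data plane, carved from the first
  # secondary CIDR block (e.g. 100.64.0.0/16)
  secondary_ip_range_private_subnets = [
    for k, v in local.azs : cidrsubnet(element(var.secondary_cidr_blocks, 0), 2, k)
  ]
}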

Deploying EKS with Managed Node Groups

Now that the VPC is configured, let’s move on to deploying the EKS cluster with the eks.tf file configuration. This setup includes defining managed node groups within the EKS cluster, specifying node configurations, security rules, and IAM roles.

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.15"

  cluster_name                   = local.name
  cluster_version                = var.eks_cluster_version
  cluster_endpoint_public_access = true
  vpc_id                         = module.vpc.vpc_id
  # Keep only the private subnets whose CIDR starts with "100.", i.e. those carved
  # from the secondary CIDR block, so nodes and pods use the larger range
  subnet_ids                     = compact([for subnet_id, cidr_block in zipmap(module.vpc.private_subnets, module.vpc.private_subnets_cidr_blocks) : substr(cidr_block, 0, 4) == "100." ? subnet_id : null])

  aws_auth_roles = [
    {
      rolearn  = module.eks_blueprints_addons.karpenter.node_iam_role_arn
      username = "system:node:{{EC2PrivateDNSName}}"
      groups   = ["system:bootstrappers", "system:nodes"]
    }
  ]

  eks_managed_node_groups = {
    core_node_group = {
      name             = "core-node-group"
      ami_type         = "AL2_x86_64"
      min_size         = 2
      max_size         = 8
      desired_size     = 2
      instance_types   = ["m5.xlarge"]
      capacity_type    = "SPOT"
      labels           = { WorkerType = "SPOT", NodeGroupType = "core" }
      tags             = merge(local.tags, { Name = "core-node-grp" })
    }
  }
}

Key components:

  • VPC and Subnets: The vpc_id and subnet_ids values reference only the private subnets carved from the secondary CIDR block (the for expression keeps subnets whose CIDR starts with "100."), providing a secure foundation for EKS nodes.

  • Managed Node Groups: This setup defines a core node group with spot instances (capacity_type = "SPOT") to optimize cost, with configurable instance types, sizes, and labels for workload placement.

  • Security Rules and IAM Roles: Configures additional security rules to manage access between nodes and the control plane, along with IAM roles to control permissions for Karpenter and node management (a sketch of such rules follows this list).
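
The security rules themselves are omitted from the snippet above for brevity. With the terraform-aws-modules/eks module they are expressed through node_security_group_additional_rules; a hypothetical rule set, not the repository's exact configuration, could look like this:

node_security_group_additional_rules = {
  # Allow unrestricted node-to-node traffic (needed by many CNIs and webhooks)
  ingress_self_all = {
    description = "Node to node all ports/protocols"
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    type        = "ingress"
    self        = true
  }

  # Allow the control plane to reach admission webhooks running on nodes
  ingress_cluster_to_node_all_traffic = {
    description                   = "Cluster API to node groups"
    protocol                      = "-1"
    from_port                     = 0
    to_port                       = 0
    type                          = "ingress"
    source_cluster_security_group = true
  }
}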

Configuring EKS Add-ons

Add-ons enhance your EKS cluster by integrating additional AWS services and open-source tools. With the EKS Blueprints, you can easily set up these add-ons, which range from storage solutions to observability and monitoring tools.
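
All of the add-on snippets that follow are arguments to a single eks_blueprints_addons module block (the same one referenced from aws_auth_roles in eks.tf above). Its skeleton looks roughly like this; the version pin is indicative, so check the repository for the exact one:

module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.0"

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  # The eks_addons map, enable_* flags, and per-addon configuration
  # blocks shown below all live inside this module block.

  tags = local.tags
}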

Setting Up the EBS CSI Driver for Persistent Storage

The Amazon EBS CSI Driver is essential for persistent storage on EKS. This module configures the necessary IAM roles for the driver, enabling it to provision and manage EBS volumes.

module "ebs_csi_driver_irsa" {
  source                = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  version               = "~> 5.20"
  role_name_prefix      = format("%s-%s-", local.name, "ebs-csi-driver")
  attach_ebs_csi_policy = true
  oidc_providers = {
    main = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["kube-system:ebs-csi-controller-sa"]
    }
  }
  tags = local.tags
}

This configuration creates an IAM role for the EBS CSI Driver using IAM Roles for Service Accounts (IRSA), which allows the driver to interact with EBS securely.
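
The role ARN produced by this module is then passed to the managed aws-ebs-csi-driver add-on, presumably through the same eks_addons map shown for CloudWatch below:

eks_addons = {
  aws-ebs-csi-driver = {
    service_account_role_arn = module.ebs_csi_driver_irsa.iam_role_arn
  }
}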

Enabling Amazon CloudWatch Observability

The amazon-cloudwatch-observability add-on integrates CloudWatch for monitoring and logging, providing insights into your cluster’s performance.

eks_addons = {
  amazon-cloudwatch-observability = {
    preserve                 = true
    service_account_role_arn = aws_iam_role.cloudwatch_observability_role.arn
  }
}

This snippet specifies the IAM role required for CloudWatch, enabling detailed observability for your workloads.
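
The snippet references an aws_iam_role.cloudwatch_observability_role that must be defined elsewhere. One plausible shape, assuming IRSA trust for the cloudwatch-agent service account in the amazon-cloudwatch namespace (where this add-on runs its agent), is:

data "aws_iam_policy_document" "cloudwatch_observability_assume" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [module.eks.oidc_provider_arn]
    }

    # Trust only the CloudWatch agent's service account
    condition {
      test     = "StringEquals"
      variable = "${module.eks.oidc_provider}:sub"
      values   = ["system:serviceaccount:amazon-cloudwatch:cloudwatch-agent"]
    }
  }
}

resource "aws_iam_role" "cloudwatch_observability_role" {
  name_prefix        = "${local.name}-cw-obs-"
  assume_role_policy = data.aws_iam_policy_document.cloudwatch_observability_assume.json
}

resource "aws_iam_role_policy_attachment" "cloudwatch_agent" {
  role       = aws_iam_role.cloudwatch_observability_role.name
  policy_arn = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"
}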

Integrating AWS Load Balancer Controller

The AWS Load Balancer Controller allows you to provision and manage Application Load Balancers (ALBs) for Kubernetes services. Here’s how it’s configured:

enable_aws_load_balancer_controller = true
aws_load_balancer_controller = {
  set = [{
    name  = "enableServiceMutatorWebhook"
    value = "false"
  }]
}

Setting enableServiceMutatorWebhook to "false" stops the controller's mutating webhook from automatically claiming every new Service of type LoadBalancer, so individual services can be opted in explicitly, which suits custom configurations. Subnet discovery for the provisioned load balancers relies on the kubernetes.io/role/elb and kubernetes.io/role/internal-elb tags applied in vpc.tf.

Adding Karpenter for Autoscaling

Karpenter is an open-source autoscaler designed for Kubernetes, enabling efficient and dynamic scaling of EC2 instances based on workload requirements. This configuration sets up Karpenter with support for spot instances, reducing costs for non-critical workloads.

enable_karpenter                  = true
karpenter_enable_spot_termination = true
karpenter_node = {
  iam_role_additional_policies = {
    AmazonSSMManagedInstanceCore = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  }
}
karpenter = {
  chart_version       = "0.37.0"
  repository_username = data.aws_ecrpublic_authorization_token.token.user_name
  repository_password = data.aws_ecrpublic_authorization_token.token.password
}

This configuration includes additional IAM policies for Karpenter nodes, making it easier to integrate with AWS services like EC2 for flexible scaling.
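
One detail worth calling out: the chart credentials above come from a data.aws_ecrpublic_authorization_token.token data source that must be declared alongside the module. Because ECR Public only issues tokens in us-east-1, this typically requires an aliased provider (the alias name here is illustrative):

# ECR Public authorization tokens are only issued in us-east-1
provider "aws" {
  alias  = "ecr_public"
  region = "us-east-1"
}

data "aws_ecrpublic_authorization_token" "token" {
  provider = aws.ecr_public
}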

These add-ons, configured through the AWS EKS Blueprints and Terraform, streamline Kubernetes management on AWS while providing enhanced storage, observability, and autoscaling.

To explore the complete configuration, you can find the full code in the GitHub repository https://github.com/timurgaleev/aws-eks-terraform-addons. The repository includes install.sh to deploy the EKS cluster and configure the add-ons seamlessly, along with cleanup.sh to tear down the environment when it’s no longer needed.

Conclusion

This Terraform setup provides a solid framework for deploying EKS with essential add-ons for storage, observability, and autoscaling to support scalable applications. In particular, the configuration is designed to support deploying applications like OpenAI Chat, showcasing Kubernetes' flexibility for real-time, interactive workloads. With it in place, you're ready to deploy and manage robust, production-grade EKS clusters in AWS.
