Creating an Amazon EKS cluster with managed node group using Terraform

Chabane R. - Mar 20 '21 - Dev Community

In the previous part we created our network stack. In this part we will configure the Amazon EKS Cluster.

The following resources will be created:

  • Amazon EKS Cluster
  • Two Amazon EKS Node Groups with m5.large instances (a Nitro-based Amazon EC2 instance family is required for security groups for pods)
    • An IAM role assigned to the node groups, with the following required policies:
      • AmazonEKSWorkerNodePolicy
      • AmazonEKS_CNI_Policy
      • AmazonEC2ContainerRegistryReadOnly
      • An inline policy for cluster autoscaling
    • Node Security Groups


Amazon EKS cluster

Our EKS cluster has both private and public endpoints. The public API server endpoint can only be reached from a specific IP address range (var.internal_ip_range), and all control plane log types are enabled.

Create the Terraform file infra/plan/eks-cluster.tf:



resource "aws_eks_cluster" "eks" {
  name     = "${var.eks_cluster_name}-${var.env}"
  role_arn = aws_iam_role.eks.arn

  vpc_config {
    security_group_ids      = [aws_security_group.eks_cluster.id]
    endpoint_private_access = true
    endpoint_public_access  = true
    public_access_cidrs     = [var.internal_ip_range]
    subnet_ids              = [aws_subnet.private["private-eks-1"].id, aws_subnet.private["private-eks-2"].id, aws_subnet.public["public-eks-1"].id, aws_subnet.public["public-eks-2"].id]
  }

  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]

  depends_on = [
    aws_iam_role_policy_attachment.eks-AmazonEKSClusterPolicy,
    aws_iam_role_policy_attachment.eks-AmazonEKSVPCResourceController,
    aws_iam_role_policy_attachment.eks-AmazonEKSServicePolicy
  ]

  tags = {
    Environment = var.env
  }
}



Create an infra/plan/output.tf file to export the information needed to connect to the cluster:



output "eks-endpoint" {
    value = aws_eks_cluster.eks.endpoint
}

output "kubeconfig-certificate-authority-data" {
    value = aws_eks_cluster.eks.certificate_authority[0].data
}


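It can also be handy to export the cluster name for use when generating a kubeconfig later. This extra output is optional and not part of the original stack:


output "eks-cluster-name" {
    value = aws_eks_cluster.eks.name
}
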

EKS IAM Roles



resource "aws_iam_role" "eks" {
  name = "${var.eks_cluster_name}-${var.env}"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "eks-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks.name
}

resource "aws_iam_role_policy_attachment" "eks-AmazonEKSServicePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = aws_iam_role.eks.name
}



Cluster Security Groups

Here we add an additional security group for the control plane, plus an ingress rule on the EKS-managed cluster security group so that worker nodes (including unmanaged nodes) can communicate with the control plane.



resource "aws_security_group" "eks_cluster" {
  name        = "${var.eks_cluster_name}-${var.env}/ControlPlaneSecurityGroup"
  description = "Communication between the control plane and worker nodegroups"
  vpc_id      = aws_vpc.main.id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "${var.eks_cluster_name}-${var.env}/ControlPlaneSecurityGroup"
    Environment = var.env
  }
}

resource "aws_security_group_rule" "cluster_inbound" {
  description              = "Allow unmanaged nodes to communicate with control plane (all ports)"
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = aws_eks_cluster.eks.vpc_config[0].cluster_security_group_id
  source_security_group_id = aws_security_group.eks_nodes.id
  to_port                  = 0
  type                     = "ingress"
}



Node Groups

Our cluster has two node groups: one for internal workloads and one for Internet-facing workloads.

Internal workloads will run on a private node group deployed in the private subnets, while Internet-facing workloads will run on a public node group deployed in the public subnets.

Create the Terraform file infra/plan/eks-node-group.tf:



resource "aws_eks_node_group" "private" {
  cluster_name    = aws_eks_cluster.eks.name
  node_group_name = "private-node-group-${var.env}"
  node_role_arn   = aws_iam_role.node-group.arn
  subnet_ids      = [aws_subnet.private["private-eks-1"].id, aws_subnet.private["private-eks-2"].id]

  labels          = {
    "type" = "private"
  }

  instance_types = ["m5.large"]

  scaling_config {
    desired_size = 1
    max_size     = 3
    min_size     = 1
  }

  depends_on = [
    aws_iam_role_policy_attachment.node-group-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.node-group-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.node-group-AmazonEC2ContainerRegistryReadOnly
  ]

  tags = {
    Environment = var.env
  }
}

resource "aws_eks_node_group" "public" {
  cluster_name    = aws_eks_cluster.eks.name
  node_group_name = "public-node-group-${var.env}"
  node_role_arn   = aws_iam_role.node-group.arn
  subnet_ids      = [aws_subnet.public["public-eks-1"].id, aws_subnet.public["public-eks-2"].id]

  labels          = {
    "type" = "public"
  }

  instance_types = ["m5.large"]

  scaling_config {
    desired_size = 1
    max_size     = 3
    min_size     = 1
  }

  depends_on = [
    aws_iam_role_policy_attachment.node-group-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.node-group-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.node-group-AmazonEC2ContainerRegistryReadOnly,
  ]

  tags = {
    Environment = var.env
  }
}



EKS Node Groups IAM Roles




resource "aws_iam_role" "node-group" {
  name = "eks-node-group-role-${var.env}"

  assume_role_policy = jsonencode({
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
    }]
    Version = "2012-10-17"
  })
}

resource "aws_iam_role_policy_attachment" "node-group-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.node-group.name
}

resource "aws_iam_role_policy_attachment" "node-group-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.node-group.name
}

resource "aws_iam_role_policy_attachment" "node-group-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.node-group.name
}

resource "aws_iam_role_policy" "node-group-ClusterAutoscalerPolicy" {
  name = "eks-cluster-auto-scaler"
  role = aws_iam_role.node-group.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
            "autoscaling:DescribeAutoScalingGroups",
            "autoscaling:DescribeAutoScalingInstances",
            "autoscaling:DescribeLaunchConfigurations",
            "autoscaling:DescribeTags",
            "autoscaling:SetDesiredCapacity",
            "autoscaling:TerminateInstanceInAutoScalingGroup",
            # Managed node groups are backed by launch templates, which newer
            # Cluster Autoscaler versions read via this permission
            "ec2:DescribeLaunchTemplateVersions"
        ]
        Effect   = "Allow"
        Resource = "*"
      },
    ]
  })
}



Node Security Groups



resource "aws_security_group" "eks_nodes" {
  name        = "${var.eks_cluster_name}-${var.env}/ClusterSharedNodeSecurityGroup"
  description = "Communication between all nodes in the cluster"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    self        = true
  }

  ingress {
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
    security_groups = [aws_eks_cluster.eks.vpc_config[0].cluster_security_group_id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "${var.eks_cluster_name}-${var.env}/ClusterSharedNodeSecurityGroup"
    Environment = var.env
  }
}



Complete the file infra/plan/variable.tf:



variable "eks_cluster_name" {
  type = string
  default = "eks-cluster"
}


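The env and internal_ip_range variables referenced above are assumed to be declared already in the previous part of this series. If you are following along standalone, a minimal declaration could look like this (the dev default is only an example):


variable "env" {
  type    = string
  default = "dev"
}

variable "internal_ip_range" {
  type        = string
  description = "CIDR block allowed to reach the public API server endpoint"
}
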

Let's deploy our cluster



cd infra/envs/dev

terraform apply ../../plan/ 



Let's check in the AWS console that all the resources have been created and are working correctly.

EKS cluster


EKS Node Groups

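You can also verify from the command line. Assuming the AWS CLI and kubectl are installed, and that env is dev (so the cluster is named eks-cluster-dev), the following should list the worker nodes in the Ready state:


aws eks update-kubeconfig --name eks-cluster-dev --region <your-region>
kubectl get nodes
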

Conclusion

Our EKS cluster is now active. In the next part, we will focus on setting up Amazon RDS.
