Complete Guide to Automate the Deployment of the Sock Shop Application on Kubernetes with IaC, CI/CD, and Monitoring

ChigozieCO - Oct 22 - Dev Community

This project is about deploying a microservices-based application using automated tools to ensure quick, reliable, and secure deployment on Kubernetes. By focusing on Infrastructure as Code, you'll create a reproducible and maintainable deployment process that leverages modern DevOps practices and tools.

For a detailed breakdown of what this project is trying to achieve, check out the requirements here.

Prerequisites

  • An AWS Account
  • AWS CLI installed and configured
  • Terraform installed
  • Kubectl installed
  • Helm
  • A custom domain

Set Up AWS Hosted Zone and Custom Domain

The first thing to do to begin this project is to create a hosted zone and configure our custom domain.

I have already purchased a custom domain, projectchigozie.me, and so I created an AWS hosted zone to host this domain. I didn't use Terraform to create this hosted zone because this step still required manual configuration to add the nameservers to the domain.

Steps to create an AWS hosted zone

  • Navigate to the AWS Management Console
  • Click on Services in the top left corner, click on the Networking & Content Delivery category and choose the Route 53 subcategory.
  • Click on Create hosted zone
  • When it opens up, enter your custom domain name.
  • Leave the rest as default; you can add a tag if you want to.
  • Click Create hosted zone

Hosted-zone-console

Once the hosted zone is created, we then retrieve the nameservers from the created hosted zone and use them to replace those already configured on our custom domain.

The specific steps to take will vary depending on your domain name registrar, but it's pretty easy across the board.

hosted-zone
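Once the nameserver change has propagated at your registrar, you can confirm that your domain is now delegated to the Route 53 nameservers. This is just an optional sanity check (swap in your own domain, and note the hosted zone ID below is a placeholder):

# Nameservers the public DNS currently returns for the domain
dig NS projectchigozie.me +short

# Nameservers assigned to the hosted zone (compare the two lists)
aws route53 get-hosted-zone --id Z0123456789EXAMPLE --query 'DelegationSet.NameServers'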


Provision AWS EKS Cluster with Terraform

For automation and to speed up the process, we will write a terraform script to deploy an EKS cluster and configure the necessary VPC, subnets, security groups and IAM roles.

We won't be reinventing the wheel here, as there are plenty of Terraform modules out there that do exactly what we are trying to do. I will be using the official terraform-aws-modules VPC and EKS modules.

My terraform script for the EKS cluster provisioning can be found in the terraform directory

Create a .gitignore File

Before we begin, it is usually best to create a .gitignore file where we specify the files that should not be pushed to version control.

For the sake of keeping this post short, find my git ignore file here, copy its contents and add to your own .gitignore file.


Setup Remote Backend

We will make use of a remote backend for this project, using an S3 bucket to store our state files and a DynamoDB table for state locking.

Create an S3 bucket and name it anything you like; remember that your bucket name must be globally unique, so find a unique name to use. You can enable versioning on your bucket to ensure older state files are not deleted in case you need to go back to a previous configuration.

In that bucket create two folders; I named mine eks and k8s, for the two different Terraform configurations.

Next create a DynamoDB table with any name of your choosing, ensure the partition key is named LockID (written exactly like that) and leave all other settings as default. We will reference the bucket and table in our Terraform backend configuration.
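If you'd rather create these backend resources from the command line than click through the console, a sketch like the one below should do it; the bucket and table names are the ones used later in this post, so swap in your own:

# Create the state bucket (us-east-1 needs no LocationConstraint)
aws s3api create-bucket --bucket sockshop-statefiles --region us-east-1

# Enable versioning so older state files are retained
aws s3api put-bucket-versioning --bucket sockshop-statefiles \
  --versioning-configuration Status=Enabled

# Create the two "folders" (key prefixes) for the two configurations
aws s3api put-object --bucket sockshop-statefiles --key eks/
aws s3api put-object --bucket sockshop-statefiles --key k8s/

# Create the lock table with the required LockID partition key
aws dynamodb create-table --table-name sock-shop-lockfile \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST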


Add Providers

terraform/main.tf

First create a new directory called terraform; this is where all our Terraform scripts will live.

Create a main.tf file in the terraform directory and add the code below to the file.

# Copyright (c) HashiCorp, Inc.

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }

  backend "s3" {
    bucket         = "sockshop-statefiles" # Replace with your bucket name
    key            = "eks/terraform.tfstate" # Replace with your first folder name
    region         = "us-east-1"
    dynamodb_table = "sock-shop-lockfile" # Replace with your DynamoDb table name
  }

}

provider "aws" {
  region = var.region
  shared_credentials_files = ["~/.aws/credentials"]
}

Retrieve Data from AWS

terraform/data.tf

We will retrieve the availability zones data from AWS so we can use them when creating our resources. Create a data.tf file in the terraform directory and add the below lines of code to the file:

# Filter out local zones, which are not currently supported 
# with managed node groups
data "aws_availability_zones" "available" {
  filter {
    name   = "opt-in-status"
    values = ["opt-in-not-required"]
  }
}

Create VPC and Other Networking Resources

terraform/vpc.tf

Also in the terraform directory create a new file vpc.tf

You can find the official terraform-aws-modules VPC module here; the code below is mostly the same, with a little modification.

# Create vpc using the terraform aws vpc module
module "vpc" {
  source                  = "terraform-aws-modules/vpc/aws"

  name                    = var.vpcname
  cidr                    = "10.0.0.0/16"

  azs                     = slice(data.aws_availability_zones.available.names, 0, 3)
  private_subnets         = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets          = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway      = true
  single_nat_gateway      = true
  enable_dns_hostnames    = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1 # Required tag so Kubernetes can place internet-facing load balancers in the public subnets
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1 # Required tag so Kubernetes can place internal load balancers in the private subnets
  }
}

Define VPC Variables

terraform/variables.tf

Create a variables.tf file in the terraform directory; we will define our variables in this file. If you noticed, we have already referenced some variables in the code above, and your code should already be throwing errors. To rectify them, create the file and add the below code to it:

variable "region" {
  description = "The region where the VPC will be located"
  type        = string
  default     = "us-east-1"
}

variable "vpcname" {
  description = "Vpc name"
  type        = string
}

Create EKS Cluster

We will also be using the official terraform EKS module to create our EKS cluster. You can find the EKS Module here.

Create a new file in the terraform directory and name it eks.tf; this is where we will store our Terraform script to create the EKS cluster.

The reason we are breaking our code down into several files is readability and maintainability; the code is easier to read and maintain when all scripts that belong to the same group live in the same place.

terraform/eks.tf

Add the below code to the eks.tf file

module "eks" {
  source                                   = "terraform-aws-modules/eks/aws"
  version                                  = "~> 20.0"

  cluster_name                             = var.cluster_name
  cluster_version                          = "1.30"

  cluster_endpoint_public_access           = true

  vpc_id                                   = module.vpc.vpc_id
  subnet_ids                               = module.vpc.private_subnets

  # EKS Managed Node Group(s)
  eks_managed_node_group_defaults = {
    # Starting on 1.30, AL2023 is the default AMI type for EKS managed node groups
    ami_type                               = "AL2023_x86_64_STANDARD"
  }

  eks_managed_node_groups = {
    one = {
      name                                 = "node-group-1"
      instance_types                       = ["t2.medium"]

      min_size                             = 1
      max_size                             = 3
      desired_size                         = 2
    }

    two = {
      name                                 = "node-group-2"
      instance_types                       = ["t2.medium"]

      min_size                             = 1
      max_size                             = 2
      desired_size                         = 1
    }
  }

  # Cluster access entry
  # To add the current caller identity as an administrator
  enable_cluster_creator_admin_permissions = true

}

Looking at the above code, you will notice that we referenced a variable, so we need to declare it in our variables.tf file. We will do that next.

terraform/variables.tf

Add this to your variables.tf file

variable "cluster_name" {
  description = "Name of EKS cluster"
  type        = string
}

Declare Outputs

We will need some details from our configuration after the provisioning of our resources is complete to enable us to deploy our app, so we will tell Terraform to output those details when it is done.

We do this by defining our outputs in an outputs.tf file. Create another file in the terraform directory and name it outputs.tf.

terraform/outputs.tf

Add the below code to your outputs.tf file

output "cluster_endpoint" {
  description = "Endpoint for EKS control plane"
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "Security group ids attached to the cluster control plane"
  value       = module.eks.cluster_security_group_id
}

output "region" {
  description = "AWS region"
  value       = var.region
}

output "cluster_name" {
  description = "Kubernetes Cluster Name"
  value       = module.eks.cluster_name
}

output "cluster_oidc_issuer_url" {
  description = "The URL on the EKS cluster for the OpenID Connect identity provider"
  value       = module.eks.cluster_oidc_issuer_url
}

output "aws_account_id" {
  description = "Account Id of your AWS account"
  sensitive = true
  value = data.aws_caller_identity.current.account_id
}

output "cluster_certificate_authority_data" {
  description = "Base64 encoded certificate data required to communicate with the cluster"
  sensitive = true
  value = module.eks.cluster_certificate_authority_data
}

Add Values for your Variables

Next we need to set values for all the variables we defined in our variables.tf file. We will do this in a terraform.tfvars file; Terraform automatically loads terraform.tfvars (and any *.auto.tfvars files) and reads the values of your variables from there.

This file usually holds secrets (although none of our variables are secrets as such) and should therefore never be committed to version control, in order to avoid exposing your secrets.

If you copied the .gitignore file in the link at the beginning of this post then your .tfvars file will be ignored by version control.

Create a new file terraform.tfvars in the terraform directory and add the code below, substituting it with your details.

terraform/terraform.tfvars

vpcname = "<your vpc name>"
cluster_name = "your eks cluster name"

Test Configuration

Now we can test our configuration to see what will be created and ensure we have no errors in our script.

On your terminal, navigate to the terraform directory (where you saved all your terraform scripts) and run the following commands on the terminal:

terraform init

terraform plan

cli-output

After running those commands you will see that, just like the image above, terraform will create 63 resources for us when we run the terraform apply command.

We won't apply the configuration just yet; we still have to create some extra roles and policies.


Create Policy and Role for Route53 to Assume in the ClusterIssuer Process

While writing the configuration to spin up my VPC and other networking resources as well as my EKS cluster, I also added configuration for the IAM role and policy that Route 53 and cert-manager will use.

I created an IAM role with a trust policy that specifies the Open ID Connect (OIDC) provider and conditions for when the role can be assumed based on the service account and namespace.

The ClusterIssuer will need these credentials during the certificate issuing process, and IAM roles associated with Kubernetes service accounts are a safe way to manage that access without exposing secrets. This is why it is necessary to create this policy and role for Route53, and I did it using Terraform.

You can find the script to create the role here


Create Roles and Policy for Route53

Down the line we will create a certificate for our domain with LetsEncrypt. This process will need us to create a ClusterIssuer for Let’s Encrypt that uses DNS-01 validation and since we are using AWS Route 53 as our DNS provider we will need to pass Route53 credentials for domain verification.

The ClusterIssuer will need these credentials for the certificate issuing process and as a safe way to handle our secrets we will use IAM roles associated with Kubernetes service accounts to manage access to AWS services securely. This is why it is necessary to create the next set of policy and role for Route53.

We will add the configuration to our terraform script.

terraform/route53-role-policy.tf

First we need to create the policy document for the policy that grants the necessary permissions for managing Route 53 DNS records.

Create a new file route53-role-policy.tf in the terraform directory and add the below code block to it.

# Retrieves the current AWS account ID to dynamically reference it in the policy document
data "aws_caller_identity" "current" {}

# Policy document for the Route53CertManagerPolicy
data "aws_iam_policy_document" "route53_policy" {
  version = "2012-10-17"
    statement {
      effect = "Allow"
      actions = ["route53:GetChange"]
      resources = ["arn:aws:route53:::change/*"]
    }
    statement {
      effect = "Allow"
      actions = ["route53:ListHostedZones"]
      resources = ["*"]
    }
    statement {
      effect = "Allow"
      actions = [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets"
      ]
      resources = ["arn:aws:route53:::hostedzone/*"]
    }
    statement {
      effect = "Allow"
      actions = [ 
        "route53:ListHostedZonesByName",
        "sts:AssumeRoleWithWebIdentity"
      ]
      resources = ["*"]
    }
}

terraform/route53-role-policy.tf

Now we create the policy itself. Add the next block of code to the terraform/route53-role-policy.tf file:

# Create an IAM policy for Route53 that grants the necessary permissions for managing Route 53 DNS records based on the above policy document
resource "aws_iam_policy" "route53_policy" {
  name = "Route53CertManagerPolicy"
  policy = data.aws_iam_policy_document.route53_policy.json
}

We will create an IAM Role for Service Accounts (IRSA) for the Kubernetes service account we will create soon.

IRSA enables assigning IAM roles to Kubernetes service accounts. This mechanism allows pods to use AWS resources securely, without needing to manage long-lived AWS credentials.

Before we create the role let's define our trust relationship policy document with a data block like we did with the policy document before.

terraform/route53-role-policy.tf

A trust relationship policy establishes trust between your Kubernetes cluster and AWS, allowing Kubernetes service accounts to assume IAM roles.

Add the below code to your terraform/route53-role-policy.tf file:

# Strip the "https://" prefix from the OIDC issuer URL
locals {
  cluster_oidc_issuer = replace(module.eks.cluster_oidc_issuer_url, "https://", "")
}

# Trust relationship policy document for the Route53CertManagerRole we will create
data "aws_iam_policy_document" "oidc_assume_role" {
  version = "2012-10-17"
  statement {
    effect = "Allow"
    principals {
      type        = "Federated"
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:oidc-provider/${local.cluster_oidc_issuer}"]
    }
    actions = [
      "sts:AssumeRoleWithWebIdentity",
    ]

    condition {
      test     = "StringEquals"
      variable = "${local.cluster_oidc_issuer}:sub"
      values   = [
        "system:serviceaccount:${var.namespace}:${var.service_account_name}"
      ]
    }

    condition {
      test     = "StringEquals"
      variable = "${local.cluster_oidc_issuer}:aud"
      values   = ["sts.amazonaws.com"]
    }
  }
}

Now we need to declare the variables we just referenced in our variables.tf file.

terraform/variables.tf

variable "namespace" {
  description = "The Kubernetes namespace for the service account"
  type        = string
}

variable "service_account_name" {
  description = "The name of the Kubernetes service account"
  type        = string
}

Now we can create the IRSA role. Do this by adding the following code to the terraform/route53-role-policy.tf file:

terraform/route53-role-policy.tf

# Create IAM Role for service account
resource "aws_iam_role" "Route53CertManagerRole" {
  name               = "Route53CertManagerRole"
  assume_role_policy = data.aws_iam_policy_document.oidc_assume_role.json
}

Lastly, we need to attach the policy we created above to this newly created role

terraform/route53-role-policy.tf

Add this to your terraform/route53-role-policy.tf file:

# Attach the Route53CertManagerPolicy to the Route53CertManagerRole
resource "aws_iam_role_policy_attachment" "Route53CertManager" {
  role = aws_iam_role.Route53CertManagerRole.name
  policy_arn = aws_iam_policy.route53cmpolicy.arn
}
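Once you eventually run terraform apply (a few sections below), you can sanity-check the role and its attachment with the AWS CLI. This is optional and not part of the Terraform configuration itself:

# Inspect the role and its trust policy
aws iam get-role --role-name Route53CertManagerRole --query 'Role.AssumeRolePolicyDocument'

# Confirm the Route53CertManagerPolicy is attached
aws iam list-attached-role-policies --role-name Route53CertManagerRole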

Update Your terraform.tfvars File

We added some new variables, so we need to add the values for these variables to our .tfvars file.

Add the following to your file and substitute the values with your correct values

terraform/terraform.tfvars

namespace = "<the name of the namespace you will create for your service account>"
service_account_name = "<your service account name>"

⚠️ Note

Take note of the name you specify here as it must match the name you use for your service account when you create it.

I will advise you to use cert-manager for both values (namespace and service_account_name); that's what I will be using.


Create EKS Resources

I provisioned my EKS cluster at this point to ensure there were no errors in the configuration so far.

Now we can go ahead and create our VPC and EKS cluster with terraform.

Run the below command:

terraform apply --auto-approve

The screenshots below show a successful deployment of the EKS cluster.

Terraform CLI Showing the Successful Deployment of the Resources



AWS Console Showing the EKS Cluster


eks-cluster

cluster-nodes

AWS Console Showing the VPC Deployed Along with the EKS Cluster


vpc-console

vpc-resourcemap


Configure HTTPS Using Let’s Encrypt

Before we deploy our application, let's go ahead and configure HTTPS using Let's Encrypt. We will be using Terraform for this as well, with the Kubernetes provider and the kubectl provider. We will write a new Terraform configuration for this; keeping it separate from the EKS cluster configuration breaks the process down into more manageable stages.

You can find the terraform scripts for this deployment in the K8s-terraform directory here

Now let's write our terraform script together.


Add Providers

Create a new directory k8s-terraform and create a new file main.tf in it. We will add the AWS, kubernetes, kubectl and helm providers, as we will need them for the next set of configurations, and we will also add our remote backend. After that we will add the configuration for each of those providers.

Open your main.tf file and add the following to it:

k8s-terraform/main.tf

# Add required providers
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }

    kubernetes = {
      source = "hashicorp/kubernetes"
    }

    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "~> 1.14"
    }

    helm = {
      source  = "hashicorp/helm"
    }
  }

  backend "s3" {
    bucket         = "sockshop-statefiles" # Replace with your bucket name
    key            = "k8s/terraform.tfstate" # Replace with your second folder name
    region         = "us-east-1"
    dynamodb_table = "sock-shop-lockfile" # Replace with your DynamoDb table name
  }
}

Now we can go ahead to add and configure our providers.


Configure Provider

Update your main.tf file by adding the below to it:

k8s-terraform/main.tf

# Configure the AWS Provider
provider "aws" {
  region = var.region
  shared_credentials_files = ["~/.aws/credentials"]
}

# Configure the kubernetes Provider, the exec will use the aws cli to retrieve the token 
provider "kubernetes" {
  host = var.cluster_endpoint
  cluster_ca_certificate = base64decode(var.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }
}

# Configure the kubectl Provider, the exec will use the aws cli to retrieve the token 
provider "kubectl" {
  host = var.cluster_endpoint
  cluster_ca_certificate = base64decode(var.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }
}

# Configure the helm Provider using the kubernetes configuration
provider "helm" {
  kubernetes {
    host = var.cluster_endpoint
    cluster_ca_certificate = base64decode(var.cluster_certificate_authority_data)
    exec {
      api_version = "client.authentication.k8s.io/v1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
    }
  }
}

We need to retrieve some values from AWS so create another file data.tf in the k8s-terraform directory and add the below code to it:

k8s-terraform/data.tf

# Retrieves the current AWS account ID to dynamically reference it in the policy document
data "aws_caller_identity" "current" {}

# Retrieve details of the current AWS region to dynamically reference it in the configuration
data "aws_region" "current" {}

# Retrieve the eks cluster endpoint from AWS
data "aws_eks_cluster" "cluster" {
  name = var.cluster_name
}

Declare Variables

We have referenced some variables in our configuration and so we need to declare them for terraform. To declare the variables we used in the main.tf file, create a new file variables.tf and add the code:

k8s-terraform/variables.tf

variable "region" {
  description = "The region where the VPC will be located"
  type        = string
  default     = "us-east-1"
}

variable "cluster_endpoint" {
  description = "Endpoint for EKS control plane"
  type        = string
}

variable "cluster_certificate_authority_data" {
  description = "Base64 encoded certificate data required to communicate with the cluster"
  type        = string
}

Let's begin writing the actual configuration


Create Kubernetes Service Account for the Cert Manager to use

Earlier, while writing our EKS cluster configuration, we added configuration to create an IAM role for a service account (IRSA). The first thing we will do here is create the namespace for cert-manager, then create a service account and annotate it with that IAM role.

Before we create the service account we will create the namespace for cert-manager, as our service account will exist in the cert-manager namespace.

To create a namespace for cert-manager we use the kubernetes_namespace resource. Create a new file cert-manager.tf in the k8s-terraform directory and add this code in:

k8s-terraform/cert-manager.tf

Add this code to the file

# Create a cert-manager namespace that our service account will use
resource "kubernetes_namespace" "cert-manager" {
  metadata {
    name = "cert-manager"
  }
}

Now to the actual creation of the service account, add this code to your configuration

k8s-terraform/cert-manager.tf

# Create the service account for cert-manager
resource "kubernetes_service_account" "cert_manager" {
  metadata {
    name      = "cert-manager"
    namespace = "cert-manager"
    annotations = {
      "eks.amazonaws.com/role-arn" = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/Route53CertManagerRole"
    }
  }
}

Configure Ingress Controller

Before creating the cert-manager resources we will configure our ingress controller; it's crucial to ensure that your Ingress controller is deployed and running before you create the Ingress resources it will manage.

If you would like to jump ahead, you can find the complete configuration of the ingress controller here. It will be deployed with Helm, using the helm_release resource in Terraform.

Let's get to writing the configuration together:

First we will create an ingress-nginx namespace where the ingress controller will reside. We are creating this explicitly because Helm has a habit of not deleting the namespaces it creates.

Create a new file ingress.tf in the k8s-terraform directory and add the code below; it creates the namespace and deploys the ingress controller using Helm.

k8s-terraform/ingress.tf

# Create the ingress-nginx namespace
resource "kubernetes_namespace" "ingress-nginx" {
  metadata {
    name = "ingress-nginx"
  }
}

# Use helm to create an nginx ingress controller
resource "helm_release" "ingress-nginx" {
  name       = "nginx-ingress"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  namespace  = "ingress-nginx"
  create_namespace = false
  cleanup_on_fail = true
  force_update = true
  timeout = 6000

  set {
    name  = "controller.service.name"
    value = "ingress-nginx-controller"
  }

  set {
    name  = "controller.service.type"
    value = "LoadBalancer"
  }

  set {
    name  = "controller.service.annotations.service\\.beta\\.kubernetes\\.io/aws-load-balancer-connection-idle-timeout"
    value = "3600"
  }

  set {
    name  = "controller.publishService.enabled"
    value = "true"
  }

  set {
    name  = "controller.config.cleanup"
    value = "true"
  }

  set {
    name  = "controller.extraArgs.default-ssl-certificate"
    value = "sock-shop/${var.domain}-tls"
  }

  depends_on = [ kubernetes_namespace.ingress-nginx, kubernetes_namespace.sock-shop  ]
}
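Once this configuration is applied later on, it's worth confirming that the controller came up and was assigned a LoadBalancer before anything else relies on it; an optional check could look like this:

# Controller pods should be Running
kubectl get pods -n ingress-nginx

# The controller Service should be of type LoadBalancer with an external hostname
kubectl get svc -n ingress-nginx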

Declare Variables

Declare the domain name variable in the variables.tf file

k8s-terraform/variables.tf

variable "domain" {
  description = "The domain name to access your application from and use in the creation of your SSL certificate"
  type = string
}

Configure Cert-Manager

After configuring the ingress controller, the next thing to do is to configure cert-manager; I also did this using Helm. Find the configuration here.

Now we will deploy cert-manager using a helm chart with the helm_release resource in terraform.

Add this code to the k8s-terraform/cert-manager.tf file

k8s-terraform/cert-manager.tf

# Use Helm to deploy the cert-manager
resource "helm_release" "cert_manager" {
  name       = "cert-manager"
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  namespace  = "cert-manager"
  create_namespace = false
  cleanup_on_fail = true
  force_update = true

  set {
    name  = "installCRDs"
    value = "true"
  }

  set {
    name  = "serviceAccount.create"
    value = "false"
  }

  set {
    name  = "serviceAccount.name"
    value = kubernetes_service_account.cert_manager.metadata[0].name
  }

  set {
    name  = "securityContext.fsGroup"
    value = "1001"
  }

  set {
    name  = "controller.config.cleanup"
    value = "true"
  }

  set {
    name = "helm.sh/resource-policy"
    value = "delete"
  }

  depends_on = [ kubernetes_service_account.cert_manager, kubernetes_namespace.cert-manager, helm_release.ingress-nginx ]
}

RBAC (Role-Based Access Control)

In order to allow cert-manager to issue a token using your ServiceAccount, you must deploy some RBAC resources to the cluster. Find the complete code here.

Create a new file role-roleBinding.tf in the k8s-terraform directory and add the following to the file:

k8s-terraform/role-roleBinding.tf

# Deploy some RBAC to the cluster
resource "kubectl_manifest" "cert-manager-tokenrequest" {
  yaml_body = <<YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cert-manager-tokenrequest
  namespace: cert-manager
rules:
  - apiGroups: [""]
    resources: ["serviceaccounts/token"]
    resourceNames: ["cert-manager"]
    verbs: ["create", "update", "delete", "get", "list", "watch"]
YAML
}

# Deploy a RoleBinding to the cluster
resource "kubectl_manifest" "cmtokenrequest" {
  yaml_body = <<YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cert-manager-cert-manager-tokenrequest
  namespace: cert-manager
subjects:
  - kind: ServiceAccount
    name: cert-manager
    namespace: cert-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cert-manager-tokenrequest
YAML
}

Configure ClusterIssuer

Next we will configure the ClusterIssuer manifest with the kubectl_manifest resource from the kubectl provider so that Terraform can use it in the certificate issuing process. We will use the DNS-01 solver instead of HTTP-01.

I had wanted to use the kubernetes_manifest resource from the kubernetes provider, even though I knew it would require two stages of the terraform apply command: the cluster has to be accessible at plan time and thus cannot be created in the same apply operation. Another limitation of the kubernetes_manifest resource is that it doesn't support having multiple resources in one manifest file; to circumvent this you could either break your manifests into individual files (but where's the fun in that) or use a for_each to loop through the single file.

However, from research I discovered that the kubectl provider's kubectl_manifest resource handles manifest files better and allows for a single-stage run of the terraform apply command.

Find the ClusterIssuer configuration file here.

To configure our ClusterIssuer create a new file cluster-issuer.tf in the k8s-terraform directory and add the below code to it.

k8s-terraform/cluster-issuer.tf

# Create the Cluster Issuer for the production environment
resource "kubectl_manifest" "cert_manager_cluster_issuer" {
  yaml_body = <<YAML
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ${var.email}
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - dns01:
        route53:
          region: ${data.aws_region.current.name}
          hostedZoneID: ${data.aws_route53_zone.selected.zone_id}
          auth:
            kubernetes:
              serviceAccountRef:
                name: cert-manager
                namespace: cert-manager
YAML
  depends_on = [helm_release.cert_manager]
}
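After the apply later on, you can check that the issuer registered with Let's Encrypt before any certificates are requested; an optional check along these lines:

# The issuer should eventually report READY as True
kubectl get clusterissuer letsencrypt-prod

# Inspect status conditions and events if it is not ready
kubectl describe clusterissuer letsencrypt-prod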

Declare Variables

Update your variables.tf file by adding the following to it:

k8s-terraform/variables.tf

variable "email" {
  description = "The email address to use in the creation of your SSL certificate"
  type = string
}

Create Certificate

To create the certificate we will use the kubectl_manifest resource to define our manifest file for the certificate creation. You can find my certificate manifest file here

Create a new file certificate.tf in the k8s-terraform directory and add the below code to it.

First we will create a sock-shop namespace, as this is the namespace where we want our certificate's secret to be saved. The certificate has to live in the same namespace as the application so that the SSL certificate can apply correctly to our site.

The creation of the certificate will depend on the sock-shop namespace and so the depends_on argument is added to the configuration.

k8s-terraform/certificate.tf

# Create the sock-shop namespace where the application and its TLS secret will live
resource "kubernetes_namespace" "sock-shop" {
  metadata {
    name = "sock-shop"
  }
}

# Resource to create the certificate
resource "kubectl_manifest" "cert_manager_certificate" {
  yaml_body = <<YAML
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ${var.domain}-cert
  namespace: sock-shop  
spec:
  secretName: ${var.domain}-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
    group: cert-manager.io
  commonName: ${var.domain}
  dnsNames:
    - ${var.domain}
    - "*.${var.domain}"
YAML
depends_on = [ kubernetes_namespace.sock-shop, kubectl_manifest.cert_manager_cluster_issuer ]
}

We referenced the domain variable again in the certificate manifest, but we already declared it in our variables.tf file when configuring the ingress controller, so there is nothing new to declare here.

Configure Ingress

Now that we have configured cert-manager, the ClusterIssuer and the Certificate, we need the Ingress resource that will allow us access to our application; we will also be doing this with our Terraform configuration.

We already created the ingress-nginx namespace and deployed the NGINX ingress controller (the nginx-ingress Helm chart via the helm_release resource from the helm provider) earlier, so all that is left here is the Ingress resource itself.

Find my ingress configuration here

We will keep adding to the ingress.tf file we created earlier in the k8s-terraform directory.


Now we can create the actual Ingress; it will be located in the sock-shop namespace because that is where our application resources will live, and for traffic to flow correctly the Ingress needs to be in the same namespace as the application. Add the below code to the k8s-terraform/ingress.tf file.

k8s-terraform/ingress.tf

# Create an Ingress resource using the kubectl_manifest resource
resource "kubectl_manifest" "ingress" {
  depends_on = [ kubectl_manifest.cert_manager_cluster_issuer, kubectl_manifest.cert_manager_certificate ]
  yaml_body = <<YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: sock-shop
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    certmanager.k8s.io/acme-challenge-type: dns01
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - ${var.domain}
    - "www.${var.domain}"
    secretName: "${var.domain}-tls"
  rules:
  - host: ${var.domain}
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: front-end
            port:
              number: 80
  - host: "www.${var.domain}"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: front-end
            port:
              number: 80
YAML
}

Connect Domain to LoadBalancer

The ingress controller will create a LoadBalancer that gives us an external address to use in accessing our resources, and we will point our domain to it by creating an A record.

I used this LoadBalancer to create two A records, one for my main domain name and the other a wildcard (*) record for my subdomains; this will enable us to access the sock shop application from our domain.

Remember the hosted zone we created at the beginning of this project? We need information about it to be available to terraform, so we will retrieve the data from the hosted zone using a data block.

We will also retrieve the name of the LoadBalancer created by our ingress controller. When I was using the full hostname created by the ingress controller, I was getting an error that the LoadBalancer's name was too long, so I retrieved the hostname and split it to extract the shorter name. It is all shown in the code block below.

Add the following to your k8s-terraform/data.tf file

k8s-terraform/data.tf

# Retrieve the Route53 hosted zone for the domain
data "aws_route53_zone" "selected" {
  name         = var.domain
}

# Retrieve the ingress load balancer hostname
data "kubernetes_service" "ingress-nginx-controller" {
  metadata {
    name      = "nginx-ingress-ingress-nginx-controller"
    namespace = "ingress-nginx"
  }

  # Ensure this data source fetches after the service is created
  depends_on = [
    helm_release.ingress-nginx
  ]
}

# Extract the load balancer name from the hostname
locals {
  ingress_list = data.kubernetes_service.ingress-nginx-controller.status[0].load_balancer[0].ingress
  lb_hostname  = element(local.ingress_list, 0).hostname
  lb_name     = join("", slice(split("-", local.lb_hostname), 0, 1))
}

# Data source to fetch the AWS ELB details using the extracted load balancer name
data "aws_elb" "ingress_nginx_lb" {
  name = local.lb_name
}

As you can see in the above block of code, the LoadBalancer hostname is split in the locals block to extract the load balancer's name, which is then used to look up the ELB with a data source.
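To make the splitting concrete, here is roughly what that expression does, shown with a made-up ELB hostname purely for illustration:

# Hypothetical hostname of the kind the ingress controller's LoadBalancer gets
LB_HOSTNAME="a1b2c3d4e5f6g7h8-1234567890.us-east-1.elb.amazonaws.com"

# Keep only the part before the first hyphen, which is the ELB's actual name
echo "${LB_HOSTNAME%%-*}"   # prints a1b2c3d4e5f6g7h8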

Now we can go ahead and create the A records. Add the following to your k8s-terraform/ingress.tf file.

k8s-terraform/ingress.tf

# Route 53 record creation so that our ingress controller can point to our domain name
resource "aws_route53_record" "ingress_load_balancer" {
  zone_id = data.aws_route53_zone.selected.zone_id  # Replace with your Route 53 Hosted Zone ID
  name    = var.domain # Replace with the DNS name you want
  type    = "A"

  # Use the LoadBalancer's external IP or DNS name
  alias {
    name                   = data.aws_elb.ingress_nginx_lb.dns_name
    zone_id                = data.aws_elb.ingress_nginx_lb.zone_id  # zone ID for the alias
    evaluate_target_health = true
  }
}

# Route 53 record creation so that our ingress controller can point to subdomains of our domain name
resource "aws_route53_record" "ingress_subdomain_load_balancer" {
  zone_id = data.aws_route53_zone.selected.zone_id
  name    = "*.${var.domain}"
  type    = "A"
  alias {
    name                   = data.aws_elb.ingress_nginx_lb.dns_name
    zone_id                = data.aws_elb.ingress_nginx_lb.zone_id
    evaluate_target_health = true
  }
}

Define Outputs

Create an output.tf file in the k8s-terraform directory and add the following:

k8s-terraform/output.tf

output "ingress_load_balancer_dns" {
  description = "The dns name of the ingress controller's LoadBalancer"
  value = data.kubernetes_service.ingress-nginx-controller.status[0].load_balancer[0].ingress[0].hostname
}

output "ingress_load_balancer_zone_id" {
  value = data.aws_elb.ingress_nginx_lb.zone_id
}

Monitoring, Logging and Alerting

To set up Prometheus, Grafana, Alertmanager and Kibana for monitoring, logging and alerting, retrieve the respective manifest files from my GitHub repo. We will then create two additional Ingresses, one in the monitoring namespace and one in the kube-logging namespace, so that we can access these dashboards from our subdomains. We will also copy the SSL secret covering the entire domain into the monitoring and kube-logging namespaces.

Download the alerting manifest files from my github here

Download the logging manifest files from my github here

Download the monitoring manifest files from my github here

Add these three directories to your project directory.

Now we will create two new ingresses.


Create a Prometheus and Grafana Ingress

To create an ingress that will point to the prometheus, grafana and alertmanager subdomains, open the k8s-terraform/ingress.tf file and add the below code:

k8s-terraform/ingress.tf

# Create an Ingress resource using the kubectl_manifest resource for our monitoring resources
resource "kubectl_manifest" "ingress_monitoring" {
  depends_on = [ kubectl_manifest.cert_manager_cluster_issuer, null_resource.update_secret ]
  yaml_body = <<YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    certmanager.k8s.io/acme-challenge-type: dns01
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - "prometheus.${var.domain}"
    - "grafana.${var.domain}"
    - "alertmanager.${var.domain}"
    secretName: "${var.domain}-tls"
  rules:
  - host: "prometheus.${var.domain}"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus
            port:
              number: 9090
  - host: "grafana.${var.domain}"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 80
  - host: "alertmanager.${var.domain}"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: alertmanager
            port:
              number: 9093
YAML
}

Add Secret to the Monitoring and Kube-logging Namespace

If you remember, the SSL certificate we created, along with its secret, lives in the sock-shop namespace, but we need the secret to also exist in the monitoring namespace if we want to use that SSL certificate for our monitoring subdomains.

We could create separate certificates in these namespaces, but I do not want to go that route. What we will do instead is copy the secret from the sock-shop namespace to the monitoring and kube-logging namespaces.

We fetch the existing secret using a null_resource with the local-exec provisioner. It runs aws eks update-kubeconfig to configure kubectl, then fetches the Secret from the sock-shop namespace, rewrites its namespace and applies it into the monitoring and kube-logging namespaces.

The kube-logging namespace is where our Elasticsearch, Fluentd and Kibana resources will reside, so we will create it first before copying any secret there.

To create the kube-logging namespace, add the below code to k8s-terraform/ingress.tf file:

k8s-terraform/ingress.tf

# Create kube-logging namespace
resource "kubernetes_namespace" "kube-logging" {
  metadata {
    name = "kube-logging"
  }
}

Add the following code to your k8s-terraform/data.tf file:

k8s-terraform/data.tf

# Retrieve the certificate secret and copy it to the monitoring namespace
# Note: the cluster name (sock-shop-eks) and secret name (projectchigozie.me-tls) are mine, replace them with your own
resource "null_resource" "update_secret" {
  provisioner "local-exec" {
    command = <<EOT
    aws eks update-kubeconfig --region us-east-1 --name sock-shop-eks
    kubectl get secret projectchigozie.me-tls -n sock-shop -o yaml | sed 's/namespace: sock-shop/namespace: monitoring/' | kubectl apply -f -
    EOT
  }
  # Run only after the certificate (and its secret) has been created
  depends_on = [kubectl_manifest.cert_manager_certificate]
}

To create the secret in the kube-logging namespace, we repeat the same code, changing only the target namespace in the sed command.

Add this code to your k8s-terraform/certificate.tf file

k8s-terraform/certificate.tf

# Retrieve the certificate secret and copy it to the kube-logging namespace
# Note: the cluster name (sock-shop-eks) and secret name (projectchigozie.me-tls) are mine, replace them with your own
resource "null_resource" "update_secret_kibana" {
  provisioner "local-exec" {
    command = <<EOT
    aws eks update-kubeconfig --region us-east-1 --name sock-shop-eks
    kubectl get secret projectchigozie.me-tls -n sock-shop -o yaml | sed 's/namespace: sock-shop/namespace: kube-logging/' | kubectl apply -f -
    EOT
  }
  # Run only after the certificate (and its secret) has been created
  depends_on = [kubectl_manifest.cert_manager_certificate]
}
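After the apply, you can confirm that the copies landed where we expect them. The secret name depends on your domain, so adjust it accordingly:

# The TLS secret should now exist in all three namespaces
kubectl get secret <your-domain>-tls -n sock-shop
kubectl get secret <your-domain>-tls -n monitoring
kubectl get secret <your-domain>-tls -n kube-logging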

The records for the subdomains are covered by the wildcard (*) record we added so we do not need to add any more A records.


Create Ingress for Kibana

To create an ingress that will point to the kibana subdomain, open the k8s-terraform/ingress.tf file and add the below code:

k8s-terraform/ingress.tf

# Create an Ingress resource using the kubectl_manifest resource for our kibana resources
resource "kubectl_manifest" "ingress_kibana" {
  depends_on = [ kubectl_manifest.cert_manager_cluster_issuer, null_resource.update_secret_kibana ]
  yaml_body = <<YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: kube-logging
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    certmanager.k8s.io/acme-challenge-type: dns01
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - "kibana.${var.domain}"
    secretName: "${var.domain}-tls"
  rules:
  - host: "kibana.${var.domain}"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana
            port:
              number: 5601
YAML
}

⚠️ NOTE

To use Alertmanager and have it send Slack alerts, you need to configure your Slack webhook. I will make a separate post on how to configure that so you can get the messages in your Slack workspace.

For now you can skip it by not applying the alerting configuration.


Set Environment Variables (Variable Value)

If you noticed, so far we haven't defined values for any of our variables. We have declared them in our variables.tf file and referenced them in a lot of our configurations, but we haven't added a value for any of them.

Typically we do this using a .tfvars file or by setting them as environment variables. We will be using the latter but with a twist. Most of the variables we used here are the same variables we used in our EKS configuration and so we won't be repeating ourselves.

When we wrote our EKS configuration we defined some outputs, and those outputs match our variables here exactly, so we will write a script to take the outputs of the EKS configuration and set them as environment variables for this configuration to use.

Our setup is in such a way that we will build the EKS cluster first, then using the outputs from that deployment we will set the environment variables for our next terraform build to create our ingress resources and SSL certificate.

We will also export our Terraform output values as environment variables to use with kubectl and other configurations. This helps make the whole process more automated, reducing manual configuration.
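The trick this relies on is Terraform's convention of reading any environment variable named TF_VAR_<variable name> as the value of that variable. A tiny illustration, with made-up values:

# Terraform will use these as the values of var.cluster_name and var.region
export TF_VAR_cluster_name="sock-shop-eks"
export TF_VAR_region="us-east-1"

# Any plan or apply run in this same shell picks the values up automatically
terraform plan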

I wrote a few scripts to do this; find them here. The first script creates the Terraform variables that we will use for our next deployment; find it here.

This script extracts each key and value from your Terraform output and generates a new script file, TFenv.sh, containing the export commands for the Terraform inputs of the next deployment. It also takes into account outputs that are marked as sensitive and handles them appropriately, so it sets both regular and sensitive Terraform outputs as environment variables.

To write the scripts, create a new directory scripts in your root directory. In the scripts directory, create a file 1-setTFvars.sh; we will save our script to export our environment variables here.

scripts/1-setTFvars.sh

Add the following block of code to your file:

#!/usr/bin/env bash

# This script sets the Terraform variables from the outputs of the Terraform configuration. Run this script immediately after the first terraform apply that creates the EKS cluster.

# Set the path where the script is being run from
RUN_FROM=$(pwd)

# Define the directory containing your Terraform configuration files by walking backwards to the project root path so that this script works from whatever directory it is run.
ABS_PATH="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/$(basename "${BASH_SOURCE[0]}")"
DIR_PATH=$(dirname "$ABS_PATH")
PROJECT_PATH=$(dirname "$DIR_PATH")
TF_PATH="$PROJECT_PATH/terraform"

# Change to the Terraform directory
cd "$TF_PATH" || { echo "Failed to change directory to $TF_PATH"; return 1; }

# Check if jq is installed
if ! command -v jq &> /dev/null; then
  echo "jq is not installed. Please install jq and try again."
  return 1
fi

# Get the Terraform outputs and check if they are non-empty
TF_OUTPUT=$(terraform output -json)

if [ -z "$TF_OUTPUT" ] || [ "$TF_OUTPUT" == "{}" ]; then
  echo "No terraform outputs found or outputs are empty. Please run 'terraform apply' first."
  return 1
fi

# Generate TFenv.sh file with export commands
{
  echo "$TF_OUTPUT" | jq -r 'to_entries[] | "export TF_VAR_" + .key + "=" + (.value.value | tostring)'
  echo 'echo "Environment variables have been exported and are available for Terraform."'
} > "$RUN_FROM/TFenv.sh"

echo "Terraform variables export script has been created as TFenv.sh."

# Make it executable
chmod u+x "$RUN_FROM/TFenv.sh"

# Source the TFenv.sh script to export the variables into the current shell
echo "Sourcing TFenv.sh..."
source "$RUN_FROM/TFenv.sh"
cd $RUN_FROM

The script generates a new script file, TFenv.sh, containing the export commands, and then sources it so the variables are available for our next Terraform build.

⚠️ NOTE

This script takes about 15 seconds to run, let it run its course.


Make Script Executable

You don't necessarily have to make the script executable, as we will run it with the source command, but if you want to, save the script as 1-setTFvars.sh and make it executable using the command below:

chmod u+x 1-setTFvars.sh

Run script

Now we run the script to set our environment variables. Take note of the directory from which you are running the script and adjust the command to reflect the correct path to the script.

Use the command below:

source scripts/1-setTFvars.sh

TFenv

The new script generated from this script is not committed to version control as it contains some sensitive values.


Create Resources

We've broken our infrastructure down into a two-stage build where we create our EKS cluster first before we configure the certificate and ingress, so before you apply this configuration the first Terraform configuration should already have been deployed.

Now you are ready to create your resources. Ensure you deploy your EKS cluster using the first Terraform script, then run the first shell script to set your environment variables (if you have been following this post sequentially you will have already done this in the previous section, so you can skip ahead).

source scripts/1-setTFvars.sh

Then apply your Terraform configuration; you should be in the k8s-terraform directory before running the below commands.

terraform init
terraform plan
terraform apply

Now that we have our infrastructure up and running we need to connect kubectl to our cluster so that we can deploy our application and manage our resources.


Connect Kubectl to EKS Cluster

Once my EKS Cluster is fully provisioned on AWS and I have deployed my ingress and certificate resources in the cluster, the next thing to do is to connect kubectl to the cluster so that I can use kubectl right from my local machine to define, create, update and delete my Kubernetes resources as necessary.

The command to do this is shown below:

aws eks update-kubeconfig --region <region-code> --name <cluster name>

However since this is an imperative command I decided to create a script out of it for easier automation and reproduction of the process. Find the script here

Before we create the script to connect kubectl though, we will need to set the environment variables the script will use; this will be the same as the first script with a little modification. Create a new file 2-setEnvVars.sh in the scripts directory and add the below code to the file:

scripts/2-setEnvVars.sh

#!/usr/bin/env bash

# This script sets environment variables from Terraform outputs for use in connecting our kubectl to our cluster. Run this after applying the k8s-terraform configuration.

# Set the path where the script is being run from
RUN_FROM=$(pwd)

# Define the directory containing your Terraform configuration files by walking backwards to the project root path so that this script works from whatever directory it is run.
ABS_PATH="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/$(basename "${BASH_SOURCE[0]}")"
DIR_PATH=$(dirname "$ABS_PATH")
PROJECT_PATH=$(dirname "$DIR_PATH")
TF_PATH="$PROJECT_PATH/terraform"

# Change to the Terraform directory
cd "$TF_PATH" || { echo "Failed to change directory to $TF_PATH"; return 1; }

# Check if jq is installed
if ! command -v jq &> /dev/null; then
  echo "jq is not installed. Please install jq and try again."
  return 1
fi

# Get the Terraform outputs and check if they are non-empty
TF_OUTPUT=$(terraform output -json)

if [ -z "$TF_OUTPUT" ] || [ "$TF_OUTPUT" == "{}" ]; then
  echo "No terraform outputs found or outputs are empty. Please run 'terraform apply' first."
  return 1
fi

# Generate env.sh file with export commands
echo "$TF_OUTPUT" | jq -r 'to_entries[] | "export " + .key + "=" + (.value.value | tostring)' > "$RUN_FROM/env.sh"
echo "Environment variables export script has been created as env.sh."
echo "Run the 3-connect-kubectl.sh script now to connect Kubectl to your EKS cluster"
cd $RUN_FROM

You need to run this script before you run the script to connect your kubectl to your cluster. Make your script executable and run it:

chmod u+x 2-setEnvVars.sh
source 2-setEnvVars.sh

Script to connect Kubectl to Cluster

To create the script to connect your kubectl to your cluster create a new file 3-connect-kubectl.sh in the scripts directory and add the following code block to the file:

scripts/3-connect-kubectl.sh

#!/usr/bin/env bash

# Source environment variables from a file
if [ -f ./env.sh ]; then
  source ./env.sh
  echo "Environment variables loaded from env.sh file and successfully set."
else
  echo "Environment variables file not found."
  exit 1
fi

# Check that AWS CLI is installed, if it isn't stop the script
if ! command -v aws &> /dev/null; then
  echo "AWS CLI is not installed. Please install AWS CLI and try again."
  exit 1
fi

# Check that the kubectl is configured, if it isn't stop the script
if ! command -v kubectl &> /dev/null; then
  echo "kubectl is not installed. Please install kubectl and try again."
  exit 1
fi

# Connect kubectl to your EKS cluster
echo "Connecting kubectl to your EKS cluster......"
aws eks update-kubeconfig --region $region --name $cluster_name

Make the script Executable and Run it

Save the script, make it executable with the first command, then run it with the second command:

chmod u+x 3-connect-kubectl.sh
./3-connect-kubectl.sh

Take note of the directory from which you are running the script and adjust the command to reflect the correct path to the script.

After running the script you should see kubectl connecting to your EKS cluster, and you will get a success response as seen in the screenshot below; at that point kubectl is connected to your cluster.

kubectl-connected
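You can also confirm the connection yourself with a couple of standard kubectl commands; if the kubeconfig update worked, both should respond with details from the new cluster:

# The current context should now point at your EKS cluster
kubectl config current-context

# The worker nodes from the managed node groups should show up as Ready
kubectl get nodes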


Deploy Application

Previously I had wanted to deploy the application using Terraform, but that seemed like overkill seeing as we are using a CI/CD pipeline to automate the whole flow, so I eventually resolved to use kubectl to deploy the application to the cluster.

Retrieve the complete-demo.yaml application file from the project repo which is a combination of all the manifests for all the microservices required for our application to be up and running.

Download the complete-demo.yaml file from my repo and add it to your project directory. I created a new directory called app and saved the application file there.

Take note of where you saved it because you will deploy it from that location.

To deploy the app, along with the monitoring, logging and alerting stacks, run the commands below:

kubectl apply -f app/complete-demo.yaml
kubectl apply -f monitoring/
kubectl apply -f logging/
kubectl apply -f alerting/

With the above commands you have now deployed your application as well as your monitoring, logging and alerting.

Verify Deployment

Head on over to your domain to see that the sock-shop application is up; this is what it should look like:

application-frontend

secure-cert

From the images above you can see that my frontend is secure and the certificate is issued by Let's Encrypt. If you followed along correctly, yours should be too.
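If you want to verify the certificate from the command line as well, an openssl one-liner like this works (substitute your own domain):

# Print the issuer and validity window of the certificate the site serves
echo | openssl s_client -connect projectchigozie.me:443 -servername projectchigozie.me 2>/dev/null | openssl x509 -noout -issuer -dates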

Check your prometheus, grafana and kibana subdomains as well for your monitoring and logging dashboards.

Prometheus Dashboard


Prometheus

Grafana Dashboard


Grafana

Commands to use in exploring your cluster

We can also verify our setup from our EKS cluster using some kubectl commands:

🌟 To view the Resources in Cert-manager namespace:

kubectl get all -n cert-manager

Resources in Cert-manager namespace


cert-manager


🌟 To view your sock-shop ingress:

kubectl get ingress -n sock-shop

Sock-shop ingress


sock-shop-ingress


🌟 To see all our application pods:

kubectl get pods -n sock-shop

Sock-shop pods


sock-shop-pod


🌟 To see all our application services:

kubectl get svc -n sock-shop

Sock-shop-services


sock-shop-svc


This has been an extremely long post and so we will end here, be on the look out for another post where we setup a CI/CD pipeline with Jenkins for this project.

If you found this post helpful in any way, please let me know in the comments and share it with your friends. If you have a better way of implementing any of the steps I took here, please do let me know, I love to learn new (~better~) ways of doing things.

I enjoy breaking down tech topics into simpler concepts and teaching the world (or at least the DEV Community) how to harness the power of the cloud.

If you'd love to connect on all things cloud engineering or DevOps, then I'd love to as well; drop me a connection request on LinkedIn or follow me on GitHub.

I'm open to opportunities as well so don't be a stranger, drop a comment, drop a connection, send that DM and let's get cooking. Byeeee until later

Follow for more DevOps content broken down into simple understandable structure.
