Terraform Configuration for AWS EKS Cluster
Github Link
Kubernetes DevSecOps CICD Project Using Github Actions and ArgoCD
Github Link
Part one: Setting up the environment
SSH key exchange between my local computer and my GitHub account
ssh-keygen
export GIT_SSH_COMMAND="ssh -i ~/.ssh/key"
This command tells Git to use the specified SSH key for authentication during operations like git clone, git pull, and git push.
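Instead of exporting GIT_SSH_COMMAND in every shell session, the key choice can be made permanent with an SSH config entry. A minimal sketch, assuming the key lives at ~/.ssh/key as in the command above:

```shell
# Persist the key choice: Git invokes ssh, and ssh reads ~/.ssh/config,
# so this entry makes every github.com operation use ~/.ssh/key.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host github.com
    IdentityFile ~/.ssh/key
    IdentitiesOnly yes
EOF
chmod 600 ~/.ssh/config
```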
Steps to Create an IAM User and Generate Access Key
In the IAM dashboard, click on Users on the left-hand menu.
Click the Add users button.
Attach existing policies directly. Search for and select AdministratorAccess to give the user full access.
On the success screen, you will see an Access key ID and Secret access key for the user. Make sure to download the CSV or copy these to a safe place as they will not be retrievable again.
Set up an S3 bucket on AWS to store Terraform state files.
This is a common practice in Infrastructure-as-Code (IaC) deployments to ensure the state is stored remotely, securely, and is accessible for team collaboration.
The bucket will be populated once we run terraform init and the state file shown is stored.
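A minimal sketch of the Terraform backend block that points state at such a bucket; the bucket name, key, and region here are placeholders, not the exact values used in this project:

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # placeholder: your bucket name
    key    = "eks/terraform.tfstate"     # path of the state object in the bucket
    region = "us-east-1"
  }
}
```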
Store the AWS credentials
(Access Key, Secret Key, and BUCKET NAME) securely in GitHub Secrets for use in a CI/CD pipeline. These credentials will be used by GitHub Actions to authenticate with AWS services, such as deploying infrastructure or uploading state files to an S3 bucket.
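Inside a workflow, those values are consumed through the secrets context. A hedged sketch of the relevant job steps, assuming the secrets were named AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and BUCKET_NAME:

```yaml
# Excerpt of a GitHub Actions job authenticating to AWS before Terraform runs.
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - run: terraform init -backend-config="bucket=${{ secrets.BUCKET_NAME }}"
```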
The Terraform files define AWS infrastructure resources like S3, DynamoDB, and IAM roles, automating deployment through code.
*Deploying AWS resources using Terraform Configuration* Github Link
On push, the CI/CD pipeline in GitHub triggers automatic builds and deployments based on the latest code changes.
The pipeline ran successfully, completing all tasks such as building and deploying without failures.
Part two: Configuring the environment
The jump host server, often used for secure access to private network resources, has been successfully deployed. Let's SSH into it.
Confirm that tools such as Docker, Terraform, AWS CLI, kubectl, Trivy, and eksctl are installed.
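A quick loop to confirm each tool is on the PATH; this is just a convenience sketch, not part of the original setup:

```shell
# Report which of the required CLIs are installed on the jump host.
for tool in docker terraform aws kubectl trivy eksctl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok"
  else
    echo "$tool: MISSING"
  fi
done
```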
Create an EKS cluster
eksctl create cluster --name quizapp-eks-cluster --region us-east-1 --node-type t2.large --nodes-min 2 --nodes-max 4
Part three: Setting up MongoDB, SonarQube, Snyk and Docker Hub
Create a MongoDB cluster and a user.
Click "Build a Cluster" and choose a cloud provider, region, and configuration (shared/free tier).
Click Create Cluster and wait for deployment.
Go to Database Access under the Security tab.
Click "Add New Database User".
Set a username and password, and assign the role (e.g., Atlas Admin or read/write).
Choose where the user can connect from (allow access from your IP or anywhere).
After the application is fully deployed using argocd, the database is actively handling requests and successfully connected to the application.
Set up SonarQube variables for a CI/CD pipeline
SONAR_ORGANIZATION: Specify your organization name in SonarQube.
SONAR_PROJECT_KEY: Define a unique key for your project within the organization.
SONAR_TOKEN: Generate a secure token from your SonarQube account to authenticate API requests.
SONAR_URL: Set the base URL (https://sonarcloud.io).
These variables help integrate SonarQube for code quality analysis within your CI/CD pipeline, ensuring secure authentication and project identification.
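For SonarCloud, the same values typically also live in a sonar-project.properties file at the repository root; a sketch with placeholder values (use your own organization and project key):

```properties
# sonar-project.properties (placeholder values)
sonar.organization=my-org
sonar.projectKey=my-org_quiz-app
sonar.host.url=https://sonarcloud.io
sonar.sources=.
```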
SonarQube is an open-source platform used to inspect the quality of code by analyzing codebases for bugs, vulnerabilities and technical debt.
It helps development teams ensure that their code adheres to best practices, maintains high standards, and meets specific criteria before it's deployed. SonarQube provides an easy-to-read dashboard that visualizes key metrics and offers detailed feedback on areas that need improvement.
Key Features of SonarQube:
Code Analysis: SonarQube performs static code analysis on multiple programming languages, identifying bugs, security vulnerabilities, and maintainability issues.
Quality Gates: A set of conditions that your project must meet before it is considered to pass. If the quality gate is not met (e.g., due to failing code coverage, high bug count, or poor maintainability), it will result in a "Quality Gate Failed" message on the dashboard.
Security Hotspots: SonarQube highlights security issues that require developer review, ensuring that code is secure before it goes into production.
Technical Debt Measurement: It calculates the time and effort required to fix the issues in your codebase (measured as technical debt).
Integration with CI/CD Pipelines: SonarQube integrates with popular build tools and CI/CD pipelines (e.g., Jenkins, GitHub Actions) to automate code quality checks.
GitHub Personal Access Token (PAT)
GitHub Personal Access Token is crucial for enabling secure and controlled interactions between the CI/CD pipeline and the GitHub repository, ensuring both functionality and security.
Authenticating with Snyk
SNYK_TOKEN is essential for securely authenticating with Snyk, automating vulnerability scanning, and ensuring controlled access within your CI/CD pipeline.
Snyk is a popular security tool that helps developers find and fix vulnerabilities in their code, open-source dependencies, container images, and infrastructure as code.
This indicates that the container image built on Node 18 has 168 known vulnerabilities. These could include issues in Node.js itself, or in any open-source libraries included in the image.
Upgrading from Node 18 to Node 20.18 only reduces the number of vulnerabilities slightly.
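The base-image bump itself is a one-line Dockerfile change; a sketch, assuming the image was built from the official Node image:

```dockerfile
# Before: FROM node:18
# A newer base picks up patched system libraries and a patched Node runtime,
# which is why the reported vulnerability count drops (though only slightly here).
FROM node:20.18
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["npm", "start"]
```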
The zlib/zlib1g Integer Overflow or Wraparound finding refers to a critical vulnerability in the zlib library, a compression library used across many applications.
This vulnerability allows an attacker to exploit integer arithmetic by causing an overflow or wrap-around, potentially leading to arbitrary code execution, crashes, or other unpredictable behavior. It could allow malicious actors to exploit systems that use this vulnerable library, making it critical to patch.
Since this vulnerability is categorized as critical, it means it could have a significant impact on the security of your application, and patching it should be prioritized.
Setting up Docker Hub token
Docker Hub serves as a centralized repository to store and manage your Docker images. This makes it easy to version control your images and share them across different environments.
These keys and tokens are essential for securely authenticating and authorizing access to various services and tools in a CI/CD pipeline. They enable automation, enhance security, and facilitate collaboration across different environments and teams.
Part Four: Deploying React Application
On push, the CI/CD pipeline in GitHub triggers automatic builds and deployments based on the latest code changes.
CI/CD pipeline Github Link
The pipeline ran successfully, completing all tasks such as building and deploying without failures.
Connect to the EKS cluster created earlier using the command below; it updates the local kubeconfig so that kubectl operations target that cluster.
aws eks update-kubeconfig --region us-east-1 --name quizapp-eks-cluster
Validate that the nodes are ready.
kubectl get nodes
Configure the AWS Load Balancer Controller on our EKS cluster, since our application will be exposed through an Ingress. Download the IAM policy required by the controller.
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json
Create the IAM policy
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json
Create OIDC Provider
This allows the cluster to integrate with AWS IAM, assigning IAM roles to Kubernetes service accounts for better security and management.
eksctl utils associate-iam-oidc-provider --region=us-east-1 --cluster=quizapp-eks-cluster --approve
Create Service Account
eksctl create iamserviceaccount --cluster=quizapp-eks-cluster --namespace=kube-system --name=aws-load-balancer-controller --role-name AmazonEKSLoadBalancerControllerRole --attach-policy-arn=arn:aws:iam::<ACCOUNT-ID>:policy/AWSLoadBalancerControllerIAMPolicy --approve --region=us-east-1
Deploy the AWS Load Balancer Controller using Helm
sudo snap install helm --classic
helm repo add eks https://aws.github.io/eks-charts
helm repo update eks
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=quizapp-eks-cluster --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller
Check whether the aws-load-balancer-controller pods are running.
kubectl get deployment -n kube-system aws-load-balancer-controller
Configure ArgoCD
Create the namespace for the EKS Cluster.
kubectl create namespace quiz
kubectl get namespaces
Create a separate namespace for ArgoCD and apply the ArgoCD manifests to install it.
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml
Confirm the ArgoCD pods are running
kubectl get pods -n argocd
Expose the ArgoCD server as a LoadBalancer service
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
Get the password for our ArgoCD server to perform the deployment.
sudo apt install jq -y
export ARGOCD_SERVER=`kubectl get svc argocd-server -n argocd -o json | jq --raw-output '.status.loadBalancer.ingress[0].hostname'`
export ARGO_PWD=`kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d`
echo $ARGO_PWD
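The decoding step above in miniature: Kubernetes stores Secret data base64-encoded, and base64 -d recovers the plaintext. A self-contained demo using a sample value rather than a real cluster secret:

```shell
# Encode a sample value the way Kubernetes stores it, then decode it back.
ENCODED=$(printf 'sample-password' | base64)
echo "encoded: $ENCODED"
printf 'decoded: '
printf '%s' "$ENCODED" | base64 -d
echo
```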
Set up monitoring for our EKS cluster using Prometheus and Grafana
Add all the required Helm repositories: Prometheus, Grafana, and ingress-nginx.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
Install Prometheus (kube-prometheus-stack)
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring --create-namespace
Install Grafana
helm install grafana grafana/grafana -n monitoring --create-namespace
Get Grafana admin user password
kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
Confirm the services and validate from AWS LB console.
kubectl get svc -n monitoring
Access your Prometheus dashboard: paste Prometheus-LB-DNS:9090 into your browser, click Status, and select Targets. You will see a long list of targets.
In Grafana, click Data Sources, select Prometheus, and in the Connection field paste your Prometheus-LB-DNS:9090.
Create a dashboard to visualize our Kubernetes cluster metrics.
Import a Kubernetes dashboard using ID 6417.
Deploy Quiz Application using ArgoCD
Configure the app_code github repository in ArgoCD
Create our application which will deploy the frontend, backend, database and ingress.
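The same application can also be declared as an ArgoCD Application manifest instead of through the UI. A sketch with a placeholder repo URL and path (the real values are the ones configured above):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: quiz-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-user>/<app_code-repo>.git  # placeholder
    targetRevision: HEAD
    path: kubernetes-manifests  # placeholder: path to your manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: quiz
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```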
Deployment is synced and healthy
Once your Ingress application is deployed, it will create an Application Load Balancer. You can check the load balancer, named with the k8s-ingress prefix, in the AWS console.
Troubleshooting
In my case the backend targets were not healthy, and this is what I did to solve the issue: change the target group's health check path to /api/questions.
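With the AWS Load Balancer Controller, the health check path can also be set declaratively on the Ingress, so the fix survives redeploys. A sketch of the relevant annotations (the metadata names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: quiz-ingress  # placeholder name
  namespace: quiz
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # Point the ALB target-group health check at an endpoint that returns 200:
    alb.ingress.kubernetes.io/healthcheck-path: /api/questions
```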
Copy the ALB DNS name and go to CloudFront to set up a distribution that routes traffic through the load balancer.
This ensures that CloudFront pulls content from my load balancer and delivers it efficiently to users.
To make the application accessible via a custom domain, I used Amazon Route 53, which is Amazon's DNS service.
I created a CNAME record in my Route 53 hosted zone and pointed it to the CloudFront distribution's DNS. This setup allows users to access the application through a friendly, custom domain name while benefiting from CloudFront's caching and distribution capabilities.
I used the MongoDB connection string to interact with the database manually through my console. This approach allowed me to directly upload the data to MongoDB Atlas without relying on the automated pipeline, ensuring that my application had access to the necessary data for proper functionality.
The data has been successfully inserted.
After deploying the application, it’s essential to confirm that all pods are running correctly within the designated namespace. They run successfully!
kubectl get pods -n quiz
kubectl logs <name of the pod> -n quiz
Final Part: App Demo
Throughout this journey, we started by automating the deployment process using GitHub Actions and Terraform, setting up an EKS cluster as the backbone of our infrastructure. From there, we integrated essential security tools like Snyk and SonarQube, ensuring that our code remained secure and of high quality. We also connected external services such as MongoDB and Docker Hub to streamline our data and container management.
Next, we configured a load balancer to manage traffic efficiently, linked CloudFront for content delivery, and used Amazon Route 53 to route our DNS. Finally, we set up Grafana and Prometheus for monitoring and observability, giving us a comprehensive view of the system’s health and performance.
This end-to-end process has given us a scalable, secure, and well-monitored application infrastructure—ready for production use. Thank you for following along!
More Grafana dashboard IDs to try:
| Dashboard | ID |
| --- | --- |
| k8s-addons-prometheus.json | 19105 |
| k8s-system-api-server.json | 15761 |
| k8s-system-coredns.json | 15762 |
| k8s-views-global.json | 15757 |
| k8s-views-namespaces.json | 15758 |
| k8s-views-nodes.json | 15759 |
| k8s-views-pods.json | 15760 |
Reference
Master Three-Tier Application | A Complete DevSecOps Guide on AWS with Kubernetes, GitOps & ArgoCD