CI/CD Pipeline: DevSecOps for Application Deployment on Kubernetes: GitHub Actions and ArgoCD

Manasseh - Oct 14 - Dev Community

Terraform Configuration for AWS EKS Cluster
Github Link

Kubernetes DevSecOps CICD Project Using Github Actions and ArgoCD
Github Link

Part one: Setting up the environment

Set up an SSH key exchange between my local computer and my GitHub account.

# generate a new SSH key pair (add the public key to your GitHub account under Settings > SSH and GPG keys)
ssh-keygen
# tell Git to use this private key for SSH operations in the current shell
export GIT_SSH_COMMAND="ssh -i ~/.ssh/key"

This command tells Git to use the specified SSH key for authentication during operations like git clone, git pull, and git push.
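
As a quick sanity check (assuming the matching public key has already been added to your GitHub account), you can test the connection with the same key:

# test SSH authentication against GitHub using the specified key
ssh -i ~/.ssh/key -T git@github.com
# expected output: "Hi <username>! You've successfully authenticated..."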
setting up 1

setting up 2

Steps to Create an IAM User and Generate Access Key

In the IAM dashboard, click on Users on the left-hand menu.
Click the Add users button.
Generate Access Key

Attach existing policies directly. Search for and select AdministratorAccess to give the user full access.
AdministratorAccess

On the success screen, you will see an Access key ID and Secret access key for the user. Make sure to download the CSV or copy these to a safe place as they will not be retrievable again.

Access key ID and Secret access key

Set up an S3 bucket on AWS to store Terraform state files.

This is a common practice in Infrastructure-as-Code (IaC) deployments to ensure the state is stored remotely, securely, and is accessible for team collaboration.
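A minimal sketch of creating such a bucket from the CLI (the bucket name quizapp-terraform-state is just an illustrative placeholder; any globally unique name works):

# create the state bucket in us-east-1
aws s3api create-bucket --bucket quizapp-terraform-state --region us-east-1
# enable versioning so earlier state files can be recovered if needed
aws s3api put-bucket-versioning --bucket quizapp-terraform-state --versioning-configuration Status=Enabled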

The bucket will eventually be populated once we configure the remote backend with terraform init and apply the configuration; it will then hold the state file shown below.
eventually populate

Store the AWS credentials

Store the Access Key, Secret Key, and bucket name securely in GitHub Secrets for use in a CI/CD pipeline. These credentials will be used by GitHub Actions to authenticate with AWS services, such as deploying infrastructure or uploading state files to an S3 bucket.
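If you prefer the command line over the GitHub UI, the gh CLI can store them as repository secrets (the secret names below are assumptions; use whatever names the workflow expects):

# run inside the repository directory
gh secret set AWS_ACCESS_KEY_ID --body "<access-key-id>"
gh secret set AWS_SECRET_ACCESS_KEY --body "<secret-access-key>"
gh secret set BUCKET_NAME --body "<s3-bucket-name>"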

Github Secrets

The Terraform files define AWS infrastructure resources like S3, DynamoDB, and IAM roles, automating deployment through code.
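Before wiring this into the pipeline, the same configuration can be exercised locally with the standard Terraform workflow (the backend-config values are placeholders for this project's remote state settings):

# initialise the S3 backend and providers
terraform init -backend-config="bucket=<s3-bucket-name>" -backend-config="region=us-east-1"
# preview and then create the S3, DynamoDB and IAM resources
terraform plan
terraform apply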

Deploying AWS resources using Terraform Configuration: Github Link

iac code

On push, the CI/CD pipeline in GitHub triggers automatic builds and deployments based on the latest code changes.

GitHub triggers

The pipeline ran successfully, completing all tasks such as building and deploying without failures.

successfully

Part two: Configuring the environment

The jump host server, often used for secure access to private network resources, has been successfully deployed. Let's SSH into it.
Confirm that tools such as Docker, Terraform, AWS CLI, kubectl, Trivy, and eksctl are installed.
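One quick way to confirm them all in a single pass after SSHing in:

# each command prints a version; a missing tool will fail its line
docker --version
terraform -version
aws --version
kubectl version --client
trivy --version
eksctl version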

jump host server

Create an EKS cluster:
eksctl create cluster --name quizapp-eks-cluster --region us-east-1 --node-type t2.large --nodes-min 2 --nodes-max 4

eks cluster1

eks cluster2

Part three: Setting up MongoDB, SonarQube, Snyk and Docker Hub

Create a MongoDB cluster and a user.

Click "Build a Cluster" and choose a cloud provider, region, and configuration (shared/ free plan).
Click Create Cluster and wait for deployment.

MongoDB cluster

Go to Database Access under the Security tab.
Click "Add New Database User".
Set a username and password, and assign the role (e.g., Atlas Admin or read/write).
Choose where the user can connect from (allow access from your IP or anywhere).

Database Access

After the application is fully deployed using ArgoCD, the database is actively handling requests and is successfully connected to the application.

Database Access2

Set up SonarQube variables for a CI/CD pipeline

SONAR_ORGANIZATION: Specify your organization name in SonarQube.
SONAR_PROJECT_KEY: Define a unique key for your project within the organization.
SONAR_TOKEN: Generate a secure token from your SonarQube account to authenticate API requests.
SONAR_URL: Set the base URL (https://sonarcloud.io).

These variables help integrate SonarQube for code quality analysis within your CI/CD pipeline, ensuring secure authentication and project identification.
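As a rough sketch of how these four variables are consumed, a scanner invocation could look like this (illustrative only; the pipeline may use a SonarCloud GitHub Action instead of calling sonar-scanner directly):

sonar-scanner \
  -Dsonar.organization="$SONAR_ORGANIZATION" \
  -Dsonar.projectKey="$SONAR_PROJECT_KEY" \
  -Dsonar.host.url="$SONAR_URL" \
  -Dsonar.token="$SONAR_TOKEN"   # older scanner versions use -Dsonar.login instead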

SonarQube1

SonarQube2

SonarQube is an open-source platform used to inspect the quality of code by analyzing codebases for bugs, vulnerabilities and technical debt.
It helps development teams ensure that their code adheres to best practices, maintains high standards, and meets specific criteria before it's deployed. SonarQube provides an easy-to-read dashboard that visualizes key metrics and offers detailed feedback on areas that need improvement.

SonarQube1

Key Features of SonarQube:

  • Code Analysis: SonarQube performs static code analysis on multiple programming languages, identifying bugs, security vulnerabilities, and maintainability issues.

  • Quality Gates: A set of conditions that your project must meet before it is considered to pass. If the quality gate is not met (e.g., due to failing code coverage, high bug count, or poor maintainability), it will result in a "Quality Gate Failed" message on the dashboard.

  • Security Hotspots: SonarQube highlights security issues that require developer review, ensuring that code is secure before it goes into production.

  • Technical Debt Measurement: It calculates the time and effort required to fix the issues in your codebase (measured as technical debt).

  • Integration with CI/CD Pipelines: SonarQube integrates with popular build tools and CI/CD pipelines (e.g., Jenkins, GitHub Actions) to automate code quality checks.

SonarQube2

SonarQube3

GitHub Personal Access Token (PAT)

A GitHub Personal Access Token is crucial for enabling secure and controlled interactions between the CI/CD pipeline and the GitHub repository, ensuring both functionality and security.

Personal Access Token

Authenticating with Snyk

SNYK_TOKEN is essential for securely authenticating with Snyk, automating vulnerability scanning, and ensuring controlled access within your CI/CD pipeline.

SNYK_TOKEN

Snyk is a popular security tool that helps developers find and fix vulnerabilities in their code, open-source dependencies, container images, and infrastructure as code.
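A minimal local equivalent of what the pipeline does with the token (the image name is an illustrative placeholder):

# authenticate the CLI with the same token used in CI
snyk auth "$SNYK_TOKEN"
# scan the project's open-source dependencies
snyk test
# scan a built container image, e.g. the Node 18 based backend image
snyk container test <dockerhub-user>/quizapp-backend:latest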

This indicates that the container image built on Node 18 has 168 known vulnerabilities. These could include issues in Node.js itself, or in any open-source libraries included in the image.
Upgrading from Node 18 to Node 20.18 only reduces the number of vulnerabilities slightly.

Snyk1

Snyk2

The zlib/zlib1g "Integer Overflow or Wraparound" finding refers to a critical vulnerability in the zlib library, a popular compression library used across many applications.

Snyk3

This vulnerability allows an attacker to exploit integer arithmetic by causing an overflow or wrap-around, potentially leading to arbitrary code execution, crashes, or other unpredictable behavior. It could allow malicious actors to exploit systems that use this vulnerable library, making it critical to patch.
Since this vulnerability is categorized as critical, it means it could have a significant impact on the security of your application, and patching it should be prioritized.

Setting up Docker Hub token

Docker Hub serves as a centralized repository to store and manage your Docker images. This makes it easy to version control your images and share them across different environments.
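In the pipeline the token is used roughly like this (the variable and image names are placeholders):

# log in non-interactively with the Docker Hub access token
echo "$DOCKERHUB_TOKEN" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
# build and push a versioned image
docker build -t "$DOCKERHUB_USERNAME/quizapp-frontend:v1" .
docker push "$DOCKERHUB_USERNAME/quizapp-frontend:v1"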

Docker Hub1

Docker Hub2

These keys and tokens are essential for securely authenticating and authorizing access to various services and tools in a CI/CD pipeline. They enable automation, enhance security, and facilitate collaboration across different environments and teams.

keys and tokens

Part Four: Deploying React Application

On push, the CI/CD pipeline in GitHub triggers automatic builds and deployments based on the latest code changes.

CI/CD pipeline Github Link

CI/CD pipeline1

The pipeline ran successfully, completing all tasks such as building and deploying without failures.

CI/CD pipeline2

Connect to the EKS cluster using the command below.

This command updates the local kubeconfig so that kubectl can reach the EKS cluster we created, allowing Kubernetes operations on that cluster.

aws eks update-kubeconfig --region us-east-1 --name quizapp-eks-cluster

Validate that the nodes are ready:
kubectl get nodes

Configure the AWS Load Balancer Controller on our EKS cluster, because our application will be exposed through an Ingress. First, download the IAM policy required by the controller.

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json

Create the IAM policy

aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json

Create OIDC Provider
This allows the cluster to integrate with AWS IAM for assigning IAM roles to Kubernetes service accounts, enhancing security and management.

eksctl utils associate-iam-oidc-provider --region=us-east-1 --cluster=quizapp-eks-cluster --approve

Create Service Account

eksctl create iamserviceaccount --cluster=quizapp-eks-cluster --namespace=kube-system --name=aws-load-balancer-controller --role-name AmazonEKSLoadBalancerControllerRole --attach-policy-arn=arn:aws:iam::<ACCOUNT-ID>:policy/AWSLoadBalancerControllerIAMPolicy --approve --region=us-east-1

eks cluster

Deploy the AWS Load Balancer Controller using Helm

sudo snap install helm --classic
helm repo add eks https://aws.github.io/eks-charts
helm repo update eks
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=quizapp-eks-cluster --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller

Load Balancer Controller

Check whether the aws-load-balancer-controller deployment is running:
kubectl get deployment -n kube-system aws-load-balancer-controller

load-balancer

Configure ArgoCD

Create the application namespace in the EKS cluster.

kubectl create namespace quiz
kubectl get namespaces

Create a separate namespace for ArgoCD and apply its installation manifests.

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml

argocd configuration

Confirm the ArgoCD pods are running:
kubectl get pods -n argocd

argoCD pods

Expose the ArgoCD server as a LoadBalancer service.

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

ArgoCD server as LoadBalancer

Get the password for our ArgoCD server to perform the deployment.
sudo apt install jq -y

export ARGOCD_SERVER=`kubectl get svc argocd-server -n argocd -o json | jq --raw-output '.status.loadBalancer.ingress[0].hostname'`
export ARGO_PWD=`kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d`
echo $ARGO_PWD
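
With those two values you can also log in from the argocd CLI instead of the web UI (assuming the argocd CLI is installed on the jump host):

# log in to the exposed ArgoCD server with the initial admin password
argocd login "$ARGOCD_SERVER" --username admin --password "$ARGO_PWD" --insecure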

argoCD serve

Set up monitoring for our EKS cluster using Prometheus and Grafana

Add all the required Helm repos: the Prometheus, Grafana, and ingress-nginx repos.

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

Install Prometheus (via the kube-prometheus-stack chart):

helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring --create-namespace

Install Grafana:

helm install grafana grafana/grafana -n monitoring --create-namespace

Grafana

Get Grafana admin user password

kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

grafana

Confirm the services and validate them from the AWS Load Balancer console:
kubectl get svc -n monitoring

services

Access your Prometheus dashboard by pasting Prometheus-LB-DNS:9090 into your browser. Click on Status and select Targets. You will see a lot of targets.

Prometheus Dashboard

In Grafana, click on Data Sources, select Prometheus, and in the Connection field paste your Prometheus-LB-DNS:9090.

Data Source

Create a dashboard to visualize our Kubernetes cluster metrics.
Import a Kubernetes dashboard using ID 6417 and select Prometheus as the data source.

dashboard

dashboard2

Deploy Quiz Application using ArgoCD

Configure the app_code GitHub repository in ArgoCD.

ArgoCD1

Create our application which will deploy the frontend, backend, database and ingress.
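The same application can also be registered from the CLI instead of the UI; a sketch, assuming the manifests live under a kubernetes-manifests path in the app_code repository (the app name, repo URL and path are placeholders):

argocd app create quiz-app \
  --repo https://github.com/<your-user>/<app-code-repo>.git \
  --path kubernetes-manifests \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace quiz \
  --sync-policy automated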

application

Deployment is synced and healthy
synced and healthy

ArgoCD2

Once your Ingress application is deployed, it will create an Application Load Balancer. You can check out the load balancer, whose name starts with k8s-ingress.

Troubleshooting
In my case the backend targets were not healthy, and this is what I did to solve the issue: change the health check path to /api/questions.
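With the AWS Load Balancer Controller, one way to do this is to annotate the Ingress instead of editing the target group by hand; a sketch, assuming the backend Ingress is named quiz-ingress:

# point the ALB health check at an endpoint the backend actually serves
kubectl annotate ingress quiz-ingress -n quiz \
  alb.ingress.kubernetes.io/healthcheck-path=/api/questions --overwrite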

loadbalancer2

loadbalancer3

loadbalancer1

Copy the ALB DNS name, go to CloudFront, and set up a distribution that routes traffic to the load balancer.

This ensures that CloudFront pulls content from my load balancer and delivers it efficiently to users.

Cloud Front1

Cloud Front2

To make the application accessible via a custom domain, I used Amazon Route 53, which is Amazon's DNS service.
I created a CNAME record in my Route 53 hosted zone and pointed it to the CloudFront distribution's DNS. This setup allows users to access the application through a friendly, custom domain name while benefiting from CloudFront's caching and distribution capabilities.
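The equivalent Route 53 change from the CLI looks roughly like this (the hosted zone ID, record name, and CloudFront domain are placeholders):

aws route53 change-resource-record-sets --hosted-zone-id <HOSTED-ZONE-ID> --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "quiz.example.com",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "<distribution-id>.cloudfront.net"}]
    }
  }]
}'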

ROUTE53

I used the MongoDB connection string to interact with the database manually through my console. This approach allowed me to directly upload the data to MongoDB Atlas without relying on the automated pipeline, ensuring that my application had access to the necessary data for proper functionality.
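A sketch of that manual upload (the connection string, database, collection, and file names are placeholders):

# bulk-load the quiz questions straight into MongoDB Atlas
mongoimport --uri "mongodb+srv://<user>:<password>@<cluster-host>/<database>" \
  --collection questions --file questions.json --jsonArray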

mongo connection

The data has been successfully inserted.
mongo connection2

mongo connection3

After deploying the application, it’s essential to confirm that all pods are running correctly within the designated namespace. They run successfully!

kubectl get pods -n quiz
kubectl logs <name of the pod> -n quiz

log1

log2

log3

Final Part: App Demo

demo1

demo2

demo3

demo4

Throughout this journey, we started by automating the deployment process using GitHub Actions and Terraform, setting up an EKS cluster as the backbone of our infrastructure. From there, we integrated essential security tools like Snyk and SonarQube, ensuring that our code remained secure and of high quality. We also connected external services such as MongoDB and Docker Hub to streamline our data and container management.

Next, we configured a load balancer to manage traffic efficiently, linked CloudFront for content delivery, and used Amazon Route 53 to route our DNS. Finally, we set up Grafana and Prometheus for monitoring and observability, giving us a comprehensive view of the system’s health and performance.

This end-to-end process has given us a scalable, secure, and well-monitored application infrastructure—ready for production use. Thank you for following along!

More Grafana dashboard IDs to try:

Dashboard - ID
k8s-addons-prometheus.json - 19105
k8s-system-api-server.json - 15761
k8s-system-coredns.json - 15762
k8s-views-global.json - 15757
k8s-views-namespaces.json - 15758
k8s-views-nodes.json - 15759
k8s-views-pods.json - 15760

Grafana Dashboard1

Grafana Dashboard2

Grafana Dashboard3

Grafana Dashboard4

Grafana Dashboard5

Grafana Dashboard6

Grafana Dashboard7

Grafana Dashboard8

Reference

Master Three-Tier Application | A Complete DevSecOps Guide on AWS with Kubernetes, GitOps & ArgoCD
