This article covers the steps to spin up a Kubernetes cluster using GitHub Actions, and discusses the use cases and benefits of doing so.
Prerequisites ✅
Basic familiarity with Docker, Kubernetes (kubectl), and GitHub Actions workflows should be enough to follow along.
🤔 Understanding the Problem
Companies are increasingly using Kubernetes to manage production workloads. The number of companies building Kubernetes-specific tools is putting JS frameworks to shame.
Big companies are choosing to build custom applications and controllers rather than simply using Helm charts.
But every piece of software needs QA testing! And let me put this plainly: owning a K8s cluster on any provider ain't cheap.
What if I told you there is a way to test your software changes on a Kubernetes cluster for free?
I ran into this exact problem on my personal project Kube-ez. The goal was to build a better CI for the project: Here. Since the project is made to interact with a Kubernetes cluster, I needed a cluster to verify every change.
The CI I built saves me more than $300💰/year!
🤷🏻‍♂️ Why solve this problem?
For Kubernetes-specific projects, it is too expensive to host a cluster just to test each commit made to the repo. With this approach:
- You won't have to provision and maintain a cluster.
- You don't even need to configure monitoring and observability tools.
- You save money and resources, as the cluster spins up when needed and is torn down after use.
- Each commit runs on an independent cluster, which makes it easy to isolate errors when multiple users trigger the CI.
- You can make the cluster resemble your company's sandbox/staging/prod environment, so each developer can "pseudo" run their code changes on that env at no cost. (A little far-fetched, but possible!)
🏗️ Let's get building!
These are the steps I used in my CI:
1. Build your project ⚙️
Make sure the code compiles and doesn't run into errors in the initial setup. Bad code will eventually break the pipeline, so it is better to compile and verify it early on.
Many GitHub Actions workflows help you build your project in CI: Here.
Code Snippet:
name: Building go binary
on:
  push:
  pull_request:
jobs:
  build-binary:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Go
        uses: actions/setup-go@v3
        with:
          go-version: 1.18
      - name: Build
        run: go build -v ./... && echo Go Binary built
      ## Need to add the Test command here, when we have unit TCs
For a Go project, this compiles the code and produces a binary.
2. Develop a Docker Image 🐳
Here starts the interesting part!
Now we will make sure each code change gets its own image to run on the cluster. This is tricky, as you would not want to push new tags to your Docker repository every time the workflow runs.
I suggest using GitHub Container Registry (ghcr) as a middleman. You can store an image for each commit without cluttering your Docker repository.
- Check out the code for each commit using: actions/checkout.
- Log into ghcr using: docker/login-action
- Build an image of the commit and push it to ghcr with the appropriate tags using: docker/build-push-action
Tip 💡: You could use github.sha as the tag of the image, as shown in the snippet below.
Code Snippet:
name: Kube-ez CI
on:
  push:
  pull_request:
  workflow_dispatch:
env:
  # Use docker.io for Docker Hub if empty
  REGISTRY: ghcr.io
  DOCKER_IMAGE_NAME: kube-ez
  DOCKERFILE_PATH: ./Dockerfile
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      ## checks out the commit made to the project
      - name: Checkout code
        uses: actions/checkout@v2
      ## logins to ghcr
      - name: Login to Github Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      ## builds and pushes the image to ghcr
      - name: Build and push Docker image
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ${{ env.DOCKERFILE_PATH }}
          push: true
          tags: |
            ghcr.io/${{ github.repository_owner }}/${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }}
✅ Your code changes are now ready to run on a Cluster
3. Running on the Cluster ☸️
We will be using: helm/kind-action.
This action provides you with an ephemeral K8s cluster for your CI. The cluster is created during your GitHub Actions workflow run and torn down when it finishes.
An alternative GitHub Action: Kind
- Start the cluster.
- Run the commands needed to set up the cluster (adapt these to your project).
- Run your image on the cluster, as done Here.
- You can declare the commands that test your changes in the YAML.
Code Snippet:
name: Kube-ez CI
on:
  push:
  pull_request:
  workflow_dispatch:
env:
  # Use docker.io for Docker Hub if empty
  REGISTRY: ghcr.io
  DOCKER_IMAGE_NAME: kube-ez
  DOCKERFILE_PATH: ./Dockerfile
jobs:
  Test-on-cluster:
    runs-on: ubuntu-latest
    steps:
      ## starts the cluster
      - name: Testing on a k8s Kind Cluster
        uses: helm/kind-action@v1.4.0
      ## makes sure cluster is up and running
      - run: |
          kubectl cluster-info
          kubectl get nodes
      - name: Preparing cluster for kube-ez
        ## Commands that setup the cluster as per my project needs
        run: |
          kubectl apply -f https://raw.githubusercontent.com/kitarp29/kube-ez/main/yamls/sa.yaml
          kubectl apply -f https://raw.githubusercontent.com/kitarp29/kube-ez/main/yamls/crb.yaml
          kubectl run kube-ez --image=ghcr.io/${{ github.repository_owner }}/${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }} --port=8000
          sleep 20
          kubectl get po
          kubectl port-forward kube-ez 8000:8000 &>/dev/null &
          sleep 5
          kubectl port-forward kube-ez 8000:8000 &>/dev/null &
      - run: |
          curl -i http://localhost:8000/
This file should be taken as a reference and configured as per your usage.
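If you keep the build-and-push and Test-on-cluster jobs in the same "Kube-ez CI" workflow file (which the matching workflow names above suggest), the needs key makes the cluster job wait until the github.sha-tagged image actually exists in ghcr. A minimal sketch, with the step bodies trimmed down to placeholders:
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      ## ... login to ghcr and push the github.sha-tagged image, as in step 2 ...
  Test-on-cluster:
    ## wait for the image to be pushed before spinning up the kind cluster
    needs: build-and-push
    runs-on: ubuntu-latest
    steps:
      - name: Testing on a k8s Kind Cluster
        uses: helm/kind-action@v1.4.0
      ## ... run and curl the image, as in step 3 ...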
You did it! 🎉
⚠️ Caution:
List of problems I see with this approach:
- You can't get an interactive CLI connection to this cluster. Only the kubectl commands you write in the workflow YAML execute.
- If the cluster hits infamous errors like CrashLoopBackOff, you can't debug them in real time (one mitigation is sketched after this list).
- It restricts a lot of the actions you can run on a cluster.
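One way to soften the debugging pain: add a step that only runs when something fails and dumps whatever state the ephemeral cluster still has into the workflow logs. A minimal sketch, assuming the pod is named kube-ez as in my workflow (adjust the resource names to your project):
- name: Dump cluster state on failure
  ## runs only when an earlier step in this job failed
  if: failure()
  run: |
    kubectl get pods -A -o wide
    kubectl get events --sort-by=.metadata.creationTimestamp
    kubectl describe pod kube-ez || true
    kubectl logs kube-ez --tail=100 || true
Drop it in as the last entry under steps in the Test-on-cluster job; the output shows up in the Actions run logs even though the cluster itself is gone by the time you read them.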
Hope this helped you!
If you liked this content, you can follow me here or on Twitter at kitarp29 for more!
Thanks for reading my article :)