A long time ago, in a job far, far away, I was tasked with switching our old-school LAMP stacks over to Kubernetes. My boss at the time, always starry-eyed for new technologies, announced the change should only take a few days—a bold statement considering we didn’t even have a grasp on how containers worked yet.
After reading the official docs and Googling around, I began to feel overwhelmed. There were too many new concepts to learn: pods, containers, replicas, and more. To me, it seemed Kubernetes was reserved for a clique of sophisticated developers.
This post is what I would have liked to read back then: a short, simple, no-nonsense guide on how the heck to go about deploying an application in Kubernetes.
I’ve put all the files we’ll need below. Feel free to fork and clone the repository.
Maybe I’m stating the obvious, but the first step is getting a Kubernetes cluster. Most cloud providers offer this service in one form or another, so shop around and see what fits your needs. The lowest-end machine type and smallest cluster size are enough to run our example app. I like starting with a three-node cluster, but you can get away with just one node.
Once the cluster is ready, download the kubeconfig file from your provider. Some let you download it directly from their web console, while others require a helper program; check their documentation. We’ll need this file to connect to the cluster.
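With kubectl installed, we can point it at the downloaded file and confirm the connection works (the file name below is just an example; use whatever your provider gave you):

$ export KUBECONFIG=$HOME/Downloads/do-k8s-kubeconfig.yaml
$ kubectl get nodes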
Step 2: The Docker Image
We can run anything in Kubernetes—as long as it has been packaged with Docker.
So, what does Docker do? Docker creates an isolated space, called a container, where an application can run without interference. We can use Docker to package our applications into portable images that run anywhere without having to install libraries or dependencies.
To build a Docker image, we need the docker CLI and a Dockerfile. Since the demo is a small Ruby app, a minimal sketch might look like this (the repository has the exact file):
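# Adjust the base image, start command, and port to whatever your application uses.
FROM ruby:2.5
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
EXPOSE 4567
CMD ["bundle", "exec", "ruby", "app.rb"]

Build the image and push it to a registry such as Docker Hub so the cluster can pull it later (replace YOUR_DOCKERHUB_USER with your own account):

$ docker build -t YOUR_DOCKERHUB_USER/semaphore-demo-ruby-kubernetes .
$ docker push YOUR_DOCKERHUB_USER/semaphore-demo-ruby-kubernetes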
Automatic deployment is Kubernetes’ strong suit. All we need is to tell the cluster our final desired state and it will take care of the rest.
In Kubernetes we don’t manage containers directly. Instead, we work with pods. A pod is like a group of merry friends that always go to the same places together. Containers in a pod are guaranteed to run on the same node and share the same IP. They always start and stop in unison and, since they run on the same machine, they can share its resources.
To tell Kubernetes what we want, we write a manifest file. A minimal viable manifest looks something like this (names and image are illustrative):
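# deployment.yml -- resource names and image are illustrative; adapt them to your project.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: semaphore-demo-ruby-kubernetes
  labels:
    app: semaphore-demo-ruby-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: semaphore-demo-ruby-kubernetes
  template:
    metadata:
      labels:
        app: semaphore-demo-ruby-kubernetes
    spec:
      containers:
        - name: semaphore-demo-ruby-kubernetes
          image: YOUR_DOCKERHUB_USER/semaphore-demo-ruby-kubernetes:latest
          ports:
            - containerPort: 4567   # use whatever port your app listens on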
Labels: resources can have a name and several labels, which are convenient for organizing things.
Spec: defines the desired final state and the template used to create the pods.
Replicas: defines how many copies of the pod to create. We usually set this to the number of nodes in the cluster.
To complete the setup, we need a service. A service presents a fixed IP address to the world. We can use a LoadBalancer service to forward traffic to the pods; a minimal sketch looks like this:
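# service.yml -- the selector must match the labels we gave the pods above.
apiVersion: v1
kind: Service
metadata:
  name: semaphore-demo-ruby-kubernetes-lb
spec:
  type: LoadBalancer
  selector:
    app: semaphore-demo-ruby-kubernetes
  ports:
    - port: 80
      targetPort: 4567   # again, match your app's port

Apply both manifests with kubectl:

$ kubectl apply -f deployment.yml -f service.yml

After a few seconds, we can check that everything is up: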
$ kubectl get deployment
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
semaphore-demo-ruby-kubernetes   1/1     1            1           31s

$ kubectl get service
NAME                                TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes                          ClusterIP      10.120.0.1     <none>        443/TCP        5d20h
semaphore-demo-ruby-kubernetes-lb   LoadBalancer   10.120.8.161   <pending>     80:31603/TCP   36s
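The EXTERNAL-IP column shows <pending> until the cloud provider finishes provisioning the load balancer. Re-run kubectl get service after a minute or two; once an address appears, the application is reachable on port 80 at that IP.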
The Secret Ingredient: CI/CD
Pssst... come near... I have to tell you a secret...
You don’t have to do all this by hand. You can use Continuous Integration and Delivery to test and deploy on your behalf.
The demo project we cloned at the beginning comes with everything you need to get started with the Semaphore CI/CD platform.
To get started with a free account, go to semaphoreci.com and sign up using your GitHub account.
Tell Semaphore How to Connect to Kubernetes
Semaphore needs to know how to connect to your cluster. We can store sensitive data in Semaphore using secrets.
Semaphore provides a secure mechanism to store sensitive information such as passwords, tokens, or keys.
To connect to your cluster, create a secret on the Semaphore website:
On the left navigation bar, under Configuration, click on Secrets.
Click on Create New Secret.
Name the secret “do-k8s”.
Upload the Kubeconfig file to /home/semaphore/.kube/dok8s.yml.
Define any other environment variables needed to connect to your cloud.
Click on the Save Changes button.
Create a second secret to store the Docker Hub username and password:
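The steps are the same as before, only the contents change (the variable names below are just a suggestion; use whatever your pipelines expect):

Name the secret “dockerhub”.
Add your Docker Hub credentials as environment variables, for example DOCKER_USERNAME and DOCKER_PASSWORD.
Click on the Save Changes button.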
Semaphore Pipelines
Semaphore uses YAML syntax to define what the pipelines do at each step.
The pipeline files are located in the .semaphore directory:
semaphore.yml: tests the application.
docker-build.yml: builds the Docker image and pushes it to Docker Hub.
deploy-k8s.yml: deploys the application.
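These pipelines are chained together with promotions: semaphore.yml promotes to docker-build.yml, which in turn promotes to deploy-k8s.yml. As a rough sketch (the demo repository has the exact files), the promotion section in semaphore.yml looks something like this:

promotions:
  - name: Dockerize
    pipeline_file: docker-build.yml
    auto_promote_on:
      - result: passed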
Let’s examine deploy-k8s.yml. The basics of pipelines were discussed in a previous post, so I’ll jump straight to the deployment job. The heart of a pipeline is its blocks and jobs: we put our commands in jobs, and our jobs in blocks.
The deploy block first imports the secrets we just created using the secrets property:
blocks:
  - name: Deploy to Kubernetes
    task:
      secrets:
        - name: do-k8s
        - name: dockerhub
Then, we define the environment using the env_vars property. You may need to add more variables.
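A minimal sketch, assuming the kubeconfig was uploaded to the path used in the “do-k8s” secret (add whatever else your provider requires):

      env_vars:
        - name: KUBECONFIG
          value: /home/semaphore/.kube/dok8s.yml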