How to Deploy and Scale Strapi on a Kubernetes Cluster 1/2

Strapi · Feb 3 '23 · Dev Community

The container world has revolutionized the software industry in a very short time. Although the underlying technologies are not new, their combination and automation is. Among this revolution's key players is Kubernetes, initially released by Google in 2014.

The Cloud Native Computing Foundation (CNCF) is a Linux Foundation project founded in 2015 to help push and align the container industry. Its founding members include Google, CoreOS, Red Hat, Intel, and Cisco, and one of its first projects was Kubernetes.
Since 2016, the CNCF has conducted annual surveys to determine the state of the container world.
The 2021 Annual Survey states that 96% of organizations are either using or evaluating Kubernetes, and 90% of them rely on cloud-managed alternatives. So it's safe to say that Kubernetes qualifies as a stable, mainstream technology.

Kubernetes is, in short, an operating system for container orchestration. It allows teams to deploy software more easily, faster, and more efficiently. The learning curve can seem steep, but the ROI is high, and there are many cloud-managed alternatives that can help you a lot. To use Kubernetes, it is essential to understand containers and the surrounding concepts such as images, runtimes, and orchestration.

The goal of this series of articles, divided into two parts, is to give a comprehensive guide on how to integrate Strapi with Kubernetes.
It covers the journey from building your image to deploying a highly available and robust application. This first part focuses on the building blocks as well as an intermediate deployment, while the second part, the following article, covers a highly-available deployment and more advanced topics.

Strapi is the leading open-source headless CMS based on Node.js. Strapi projects can vary a lot between themselves, and Kubernetes provides a lot of flexibility, so it's worth investing some time in the best practices for integrating the two.

Pre-requisites

Tooling

All the presented work was done on a MacBook Pro with macOS Ventura 13.2, so there will be some macOS-specific commands, which should translate easily to Linux.

The following tools and versions are used; any patch or minor version update should work as well:

Project Setup

The Strapi project was created following the Quick Start Guide - Part A by running the following command:

yarn create strapi-app strapi-k8s
# using the following configuration
? Choose your installation type Custom (manual settings)
? Choose your preferred language JavaScript
? Choose your default database client mysql
? Database name: strapi-k8s
? Host: 127.0.0.1
? Port: 3306
? Username: strapi
? Password: ****** # please always use strong passwords
? Enable SSL connection: No

Source Code

You can check out the source code of this article on GitHub.
The code is split into two folders, one for each part.

On The Shoulders of Giants

Given that we should never (with some exceptions of course) re-invent the wheel, we'll be relying on the existing Strapi docs and many good articles from the internet.

Strapi + Docker

The first part of the documentation to keep in mind is Running Strapi in a Docker container, with some slight modifications.

First, for the Development Dockerfile (or any Dockerfile), there should always be a .dockerignore in the same location as the Dockerfile with content similar to this:

.idea
node_modules
npm-debug.log
yarn-error.log

This prevents your local node_modules folder from being copied into the container, which the Dockerfile itself already generates. Copying it over could cause problems if your local node_modules was built with a different Node version, architecture, or similar.

Second, to test your Docker image locally, you should use docker-compose, following the (Optional) Docker Compose section.
Don't forget to properly configure your .env file with DB values matching the ones you configured in your app; you can use the .env.example as a reference.
Just keep in mind that to run docker-compose, you should use the following commands:

docker-compose build --no-cache # to force the build of the image without cache
docker-compose --env-file .env up
docker-compose stop # to stop
docker-compose down # to completely remove it (it won't delete any created volumes)

We need to pass the --env-file flag because we are using environment variables in the compose file.
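As a reference, a minimal .env for this setup might look like the following. These names match the compose file variables and a default Strapi v4 project; the values are purely illustrative, so generate your own strong credentials and keys:

# ~/strapi-k8s/.env (illustrative values only)
HOST=0.0.0.0
PORT=1337
DATABASE_CLIENT=mysql
# the DB host is the service name from the compose file
DATABASE_HOST=strapiDB
DATABASE_PORT=3306
DATABASE_NAME=strapi-k8s
DATABASE_USERNAME=strapi
DATABASE_PASSWORD=strapi-password
APP_KEYS=key1,key2
API_TOKEN_SALT=someRandomSalt
ADMIN_JWT_SECRET=someRandomSecret
JWT_SECRET=anotherRandomSecret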
You could also do some cleanup of the docker-compose file and use it like this:

version: '3'
services:
  strapi:
    build: .
    image: mystrapiapp:latest
    restart: unless-stopped
    env_file:
      - .env
    volumes:
      - ./config:/opt/app/config
      - ./src:/opt/app/src
      - ./package.json:/opt/package.json
      - ./yarn.lock:/opt/yarn.lock
      - ./public/uploads:/opt/app/public/uploads
      - ./.env:/opt/app/.env
    ports:
      - '1337:1337'
    networks:
      - strapi
    depends_on:
      - strapiDB

  strapiDB:
    platform: linux/amd64 #for platform error on Apple M1 chips
    restart: unless-stopped
    image: mysql:5.7
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_USER: ${DATABASE_USERNAME}
      MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD}
      MYSQL_PASSWORD: ${DATABASE_PASSWORD}
      MYSQL_DATABASE: ${DATABASE_NAME}
    volumes:
      - strapi-data:/var/lib/mysql
    ports:
      - '3306:3306'
    networks:
      - strapi

volumes:
  strapi-data:

networks:
  strapi:

The following was done to it:

  • We removed the services.strapi.environment section since it's redundant due to services.strapi.env_file.
  • The services.*.container_name key was removed. It could stay, but it's cleaner this way; the docker-compose command can abstract any potential use you'd have for container_name.
  • The services.strapiDB.env_file key was removed because it's not needed: none of the environment variables in that file are used, only the ones passed in services.strapiDB.environment. You can find more information in the official Docker MySQL image docs.
  • The networks.strapi key should not contain name nor driver for our use case; it's better to use the defaults.

Finally, don't use the latest tag for Building the production container.

Using latest is highly discouraged in the K8s world since it doesn't tell you anything. While developing on your local machine it's super useful and flexible, but once you move your code to a shared environment, it should be clear which version you are using.

On top of that, K8s caches images by default, so you can never guarantee that you are actually pulling the latest image. Spoiler alert: there are some workarounds to use the latest tag in K8s, but the industry agrees that you should not use it in a shared environment like K8s, even less in production.

Dockerize

Another great alternative for generating the Dockerfile, docker-compose.yaml, and all the Docker-related files is dockerize.
This tool automatically detects your project and helps you add Docker support via a nice CLI UI.
From your project root folder, run the following:

npx @strapi-community/dockerize
# complete the steps
✔ Do you want to create a docker-compose file? 🐳 … Yes
✔ What environments do you want to configure? › Both
✔ Whats the name of the project? … strapi-k8s
✔ What database do you want to use? › MySQL
✔ Database Host … localhost
✔ Database Name … strapi-k8s
✔ Database Username … strapi
✔ Database Password … ***********
✔ Database Port … 3306

This tool will generate multiple files, so afterward you can run:

docker-compose --env-file .env up

Nonetheless, it's important that you review all the Docker-related files and adapt them to your needs.

Strapi + K8s

The second article to keep in mind is Deploying and Scaling the Official Strapi Demo App "Foodadvisor" with Kubernetes & Docker.
That article provides a good foundation on K8s concepts, but we will go deeper into the K8s rabbit hole and beyond the topics discussed there.
We won't be using minikube either.
And since the app itself is not the focus of this article, everything here should work with any Strapi app, assuming you make the needed adjustments if it uses customizations.

It’s highly encouraged that you push your images to a docker registry, as the two previous articles recommended. For all production deployments, this is a requirement, but you don’t have to do it for this article.

Kubernetes setup

For this article, we will use k3d by Rancher.
In summary, this project allows us to deploy a lightweight, production-ready Kubernetes cluster (based on k3s) inside Docker containers.
If you are already comfortable with K8s, you can use your preferred K8s local setup (e.g., K8s from Docker Desktop, minikube, a cloud provisioned cluster, etc.).
Remember to take care of the private registry, volumes, and port forwarding.

You can check the article Playing with Kubernetes using k3d and Rancher to get everything installed, with some modifications.
First, install the binaries (via brew is the easiest on macOS).
Then, in the spirit of declarative configuration, we'll create the cluster from a configuration file instead of flags.

Make sure to have a folder ready to use as your storage. For this article we'll use /tmp, so run the command:

mkdir -p /tmp/k3d

Then create a folder to work in, where you will create a Strapi project, or create a Strapi project and use it as your working directory. Let's create a folder (or project) called strapi-k8s:

mkdir -p ~/strapi-k8s
# or create your Strapi project: yarn create strapi-app strapi-k8s
cd strapi-k8s

Create a file mycluster.yaml with the following content:

apiVersion: k3d.io/v1alpha4 # this will change in the future as we make everything more stable
kind: Simple # internally, we also have a Cluster config, which is not yet available externally
metadata:
  name: mycluster # name that you want to give to your cluster (will still be prefixed with `k3d-`)
servers: 1 # same as `--servers 1`
agents: 2 # same as `--agents 2`
ports:
  - port: 8900:30080 # same as `--port '8900:30080@agent:0'`
    nodeFilters:
      - agent:0
  - port: 8901:30081 # just in case
    nodeFilters:
      - agent:0
  - port: 8902:30082
    nodeFilters:
      - agent:0
  - port: 1337:31337 # for Strapi
    nodeFilters:
      - agent:0
volumes: # repeatable flags are represented as YAML lists
  - volume: /tmp/k3d:/var/lib/rancher/k3s/storage # same as `--volume '/tmp/k3d:/var/lib/rancher/k3s/storage@server:0;agent:*'`
    nodeFilters:
      - server:0
      - agent:*
registries: # define how registries should be created or used
  create: # creates a default registry to be used with the cluster; same as `--registry-create registry.localhost`
    name: app-registry
    host: "0.0.0.0"
    hostPort: "5050"
  config: | # define contents of the `registries.yaml` file (or reference a file); same as `--registry-config /path/to/config.yaml`
    mirrors:
      "localhost:5050":
        endpoint:
          - http://app-registry:5050
options:
  k3d: # k3d runtime settings
    wait: true # wait for cluster to be usable before returning; same as `--wait` (default: true)
    timeout: "60s" # wait timeout before aborting; same as `--timeout 60s`
    disableLoadbalancer: false # same as `--no-lb`
  kubeconfig:
    updateDefaultKubeconfig: true # add new cluster to your default Kubeconfig; same as `--kubeconfig-update-default` (default: true)
    switchCurrentContext: true # also set current-context to the new cluster's context; same as `--kubeconfig-switch-context` (default: true)

This file can be overwhelming, but it will make sense as we progress in this blog post.
We are defining three nodes (one main server and two agents), ports for easy port-forwarding, volumes for storage, and a registry for our docker images inside the cluster.

Then run the command:

k3d cluster create -c mycluster.yaml

Now you should have a fully functional K8s cluster running on your local machine (you may need to give it 5-10 minutes for everything to converge).

To test it, run the command:

kubectl get nodes
# the output should be similar to this:
NAME                     STATUS   ROLES                  AGE   VERSION
k3d-mycluster-server-0   Ready    control-plane,master   24s   v1.24.4+k3s1
k3d-mycluster-agent-0    Ready    <none>                 20s   v1.24.4+k3s1
k3d-mycluster-agent-1    Ready    <none>                 19s   v1.24.4+k3s1

If something fails, or you have other clusters configured, you can always make sure that you are using the proper K8s context by running the command:

kubectx k3d-mycluster
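kubectx is a handy third-party helper; if you don't have it installed, plain kubectl achieves the same thing:

kubectl config use-context k3d-mycluster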

To stop/start the cluster once it's created, you can use the following commands:

k3d cluster stop mycluster
k3d cluster start mycluster

Finally, you should install Rancher by following steps "3. Deploying Rancher" and "4. Creating the nodeport" from the article mentioned above.
It will give you a web interface to see the status of your cluster.
The web app is intuitive, and you can check your deployments, pods, persistent volumes, etc., by varying the namespace.

We have everything set up and are ready to have some K8s fun.

Production

It's worth mentioning that k3d is great for local experimentation and many other applications, but for production (or more enterprise setups) you should use a different solution: for example, a cloud-managed offering (EKS, GKE, AKS, etc.) or a production-grade distribution managed by your platform team.

Basic Deployment

Let us start our journey by deploying the simplest version of our application: the starter app, with the "same" setup we had with docker-compose, but on K8s.
To achieve this, let's put together everything we have discussed so far. We need:

  • Docker images
  • Environment variables
  • DB Deployment
  • App Deployment

To organize ourselves, let us create a folder inside our strapi-k8s folder, called k8s.

mkdir k8s

Development flow

One of the most popular words in the IT/software world might be "environment".
It describes a set of objects (hardware, cloud, network, software, configuration, etc.) that align to fulfill a purpose.
The word is also contextual: depending on the set of objects and the purpose, it can mean different things.
For example, at your infrastructure level you could have 2 environments named "lower-envs" (dev for short) and "higher-envs" (prod for short).
Inside each, you could be hosting a K8s cluster, with the environments dev, int, and test in the "lower-envs", and staging and prod in the "higher-envs".
And in each of those environments, you could have a Node.js app running with its node environment set to development, staging, or production based on different criteria.
Those are 3 different contexts of "environment", and they all refer to different objects and purposes.
The important takeaway is that we need to be aware of the context whenever we talk about "the environment".

Strapi defines 3 major "environments": development, staging and production.
Development should be the developers' local environment on their host machine.
Staging and production should be the environments where "final" content is created.
This means that, whenever you create or update content-types, you should do it in a "Strapi development environment".
Afterward, it should be pushed to source control for further team validation and control.
Finally, it can be considered Strapi production and be shipped to the content creators for its final journey to the end user.

All of this being said, for this article, we will assume that the proper workflow is in place.
Extending the same concept, it's not recommended to deploy a "Strapi development" environment to K8s.
The developers, architects, and stakeholders should carefully create or update content types before moving this into K8s.
Once this workflow is defined, you can align it with semantic versioning for your Docker images.

Creating your Docker images is also tightly coupled with this development flow.
You can use any CI/CD tools to achieve this, e.g., Jenkins, Github Actions, GitLab CI/CD, etc.
Therefore, any Strapi version update, code update, or content-type update, should be versioned properly through the Docker image tag.
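As a hedged sketch of what that versioning step could look like in any CI tool (the tag-derivation convention is just one possible choice, and the registry address and Dockerfile.prod are the ones used later in this article; adapt them to your setup):

# derive the image version from the latest git tag (one possible convention)
VERSION=$(git describe --tags --abbrev=0) # e.g., 0.0.1
docker build -t localhost:5050/mystrapiapp-prod:$VERSION -f Dockerfile.prod .
docker push localhost:5050/mystrapiapp-prod:$VERSION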

It's also worth adding that this process should be tailored to each company's requirements.
For example, a company whose content-types don't change that often might be fine using the "Strapi production environment" in all their K8s environments.
But another company whose content-types change more regularly, or that has a different kind of quality pipeline, might want "Strapi staging" in the dev and int K8s environments, and "Strapi production" in the rest.
Bottom line: both options are good as long as they work for everybody involved in the process.

Docker images

For this article's purposes, we'll assume Strapi runs in its production environment (NODE_ENV=production) regardless of the K8s environment it's deployed to.

As mentioned earlier, we must build our images with the proper versioning.
We will be using the Dockerfile.prod from the Docker deployment docs.
So let's build the Strapi production image:

docker build -t mystrapiapp-prod:0.0.1 -f Dockerfile.prod .

But this image will only exist on our local machine, not in our K8s cluster.
Sadly, K8s running via k3d (or in the cloud) doesn't know about our local Docker images, which is why we added the registries section to mycluster.yaml when creating the k3d cluster.
So we need to tag the Docker image and push it to that registry as follows:

docker tag mystrapiapp-prod:0.0.1 localhost:5050/mystrapiapp-prod:0.0.1
docker push localhost:5050/mystrapiapp-prod:0.0.1
# or build it with the tag
docker build -t mystrapiapp-prod:0.0.1 -t localhost:5050/mystrapiapp-prod:0.0.1 -f Dockerfile.prod .
docker push localhost:5050/mystrapiapp-prod:0.0.1

Please note that we are tagging it with localhost:5050 which, according to our configuration, mirrors app-registry:5050 inside the cluster.
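If you want to sanity-check that the push worked, the registry created by k3d speaks the standard Docker Registry v2 API, so you can list its repositories from your host:

curl -s http://localhost:5050/v2/_catalog
# should output something like: {"repositories":["mystrapiapp-prod"]}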

Alternatively, you can create a DockerHub account (or your preferred docker registry), then tag and push your images to that registry.
If you do this, you can re-create your K3d cluster without the registries section.
And whenever this article references mystrapiapp-prod docker image, make sure to use your registry's URL.
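For reference, the Docker Hub route would look something like this, where your-dockerhub-user is a placeholder for your own account:

docker login
docker tag mystrapiapp-prod:0.0.1 your-dockerhub-user/mystrapiapp-prod:0.0.1
docker push your-dockerhub-user/mystrapiapp-prod:0.0.1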

Environment variables

We need to configure our app, and we have used environment variables so far.
Kubernetes provides a mechanism to configure env vars via ConfigMaps.
We need one for the database (DB) and another for the app.
But if you think about it, among those env vars we have the DB password, which is sensitive data and should therefore live somewhere protected.
So for those env vars in particular, we will use Secrets.
Remember that each value added to a Secret must be base64 encoded.

For example, if you want to store the string "123456", you can run the command:

echo -n 123456 | base64
# MTIzNDU2
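And to double-check a stored value, you can decode it the same way:

echo 'MTIzNDU2' | base64 --decode
# prints 123456 (older macOS versions may need base64 -D)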

Let's create the file conf.yaml inside our k8s folder, with the following content:

# ~/strapi-k8s/k8s/conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: strapi-database-conf
data:
  MYSQL_USER: strapi
  MYSQL_DATABASE: strapi-k8s
---
apiVersion: v1
kind: Secret
metadata:
  name: strapi-database-secret
type: Opaque
data:
  # please NEVER use these passwords, always use strong passwords
  MYSQL_ROOT_PASSWORD: c3RyYXBpLXN1cGVyLXNlY3VyZS1yb290LXBhc3N3b3Jk # echo -n strapi-super-secure-root-password | base64
  MYSQL_PASSWORD: c3RyYXBpLXBhc3N3b3Jk # echo -n strapi-password | base64
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: strapi-app-conf
data:
  HOST: 0.0.0.0
  PORT: "1337"
  NODE_ENV: production

  # we'll explain the db host later
  DATABASE_HOST: strapi-db
  DATABASE_PORT: "3306"
  DATABASE_USERNAME: strapi
  DATABASE_NAME: strapi-k8s
---
apiVersion: v1
kind: Secret
metadata:
  name: strapi-app-secret
type: Opaque
data:
  # use the proper values in here
  APP_KEYS: <APP keys in base64>
  API_TOKEN_SALT: <API token salt in base64>
  ADMIN_JWT_SECRET: <admin JWT secret in base64>
  JWT_SECRET: <JWT secret in base64>
  # please NEVER use these passwords, always use strong passwords
  DATABASE_PASSWORD: c3RyYXBpLXBhc3N3b3Jk # echo -n strapi-password | base64

Breaking down the objects:

  • strapi-database-conf and strapi-app-conf contain non-sensitive environment variables that configure their respective services.
  • strapi-database-secret and strapi-app-secret also contain environment variables for their respective services, but these values are sensitive, so they are stored in the K8s object meant for them. K8s won't encrypt these values (base64 is only an encoding), but you can restrict access to them later using RBAC; a key-generation sketch follows this list.
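For the placeholders in strapi-app-secret (APP_KEYS and friends), you need to generate your own random values. A minimal sketch using openssl (any cryptographically secure generator works; note that APP_KEYS expects a comma-separated list of such keys):

# generate one random key
openssl rand -base64 16
# base64-encode a generated value again before pasting it into the Secret manifest
openssl rand -base64 16 | base64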

DB Deployment

So now, we need a Deployment for our database.
Technically speaking, you could also deploy a database using a StatefulSet. Still, I'm leaving that debate for another day (or you can also check this blog in case you are curious).
So let's write the deployment file and ensure it uses the proper ConfigMap and Secret.
If you remember, in our docker-compose file the DB service used a Docker volume to persist its data, so we need something similar here.
We can achieve basic storage with Persistent Volumes and Persistent Volume Claims, but storage is a fairly complicated topic that we'll expand on later in this article.
Let's create the file db.yaml inside our k8s folder, with the following content:

# ~/strapi-k8s/k8s/db.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: strapi-database-pv
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  hostPath:
    path: "/var/lib/rancher/k3s/storage/strapi-database-pv" # the path we configured in the conf file to create the cluster + sub path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: strapi-database-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strapi-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: strapi-db
  template:
    metadata:
      labels:
        app: strapi-db
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          securityContext:
            runAsUser: 1000
            allowPrivilegeEscalation: false
          ports:
            - containerPort: 3306
              name: mysql
          envFrom:
            - configMapRef:
                name: strapi-database-conf # the name of our ConfigMap for our db.
            - secretRef:
                name: strapi-database-secret # the name of our Secret for our db.
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: strapi-database-pvc # the name of our PersistentVolumeClaim
---
apiVersion: v1
kind: Service
metadata:
  name: strapi-db # this is the name we use for DATABASE_HOST
spec:
  selector:
    app: strapi-db
  ports:
    - name: mysql
      protocol: TCP
      port: 3306
      targetPort: mysql # same name defined in the Deployment path spec.template.spec.containers[0].ports[0].name

Let's break it down:

  • We have a PersistentVolume with 5Gi of storage from the local-path storage class, which creates a volume using a local path on only one of the nodes; therefore, it can only be mounted read-write by a single node at a time.
  • Then we have a PersistentVolumeClaim which, as the name suggests, claims a volume with the same access mode, size, and type. With this claim, we can finally mount the volume in our Pod.
  • After that, we have our MySQL Deployment, which refers to the mysql:5.7 image, exposes port 3306, uses the proper conf, and mounts our PersistentVolumeClaim at the path /var/lib/mysql.
  • Finally, a Service exposes the actual port to the rest of the cluster via DNS. Due to K8s DNS conventions, we can reach the MySQL Deployment via this Service as strapi-db from the same namespace, and as strapi-db.<namespace> or strapi-db.<namespace>.svc.cluster.local from another namespace; a quick way to verify this is shown after this list.
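Once everything is deployed (see "Deploying everything" below), you can verify the Service DNS from inside the cluster with a throwaway pod, for example:

kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup strapi-db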

App Deployment

So now, we need a Deployment for our application. Finally! 😊
As with the DB deployment, we must pass the proper ConfigMap and Secret with the required environment variables.

Let's create the file app.yaml inside our k8s folder, with the following content:

# ~/strapi-k8s/k8s/app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strapi-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: strapi-app
  template:
    metadata:
      labels:
        app: strapi-app
    spec:
      containers:
        - name: strapi
          image: app-registry:5050/mystrapiapp-prod:0.0.1 # this is a custom image, therefore we are using the "custom" registry
          ports:
            - containerPort: 1337
              name: http
          envFrom:
            - configMapRef:
                name: strapi-app-conf # the name of our ConfigMap for our app.
            - secretRef:
                name: strapi-app-secret # the name of our Secret for our app.
---
apiVersion: v1
kind: Service
metadata:
  name: strapi-app
spec:
  type: NodePort
  selector:
    app: strapi-app
  ports:
    - name: http
      protocol: TCP
      port: 1337
      nodePort: 31337 # we are using this port, to match the cluster port forwarding section from mycluster.yaml
      targetPort: http # same name defined in the Deployment path spec.template.spec.containers[0].ports[0].name

Once again, let's break it down:

  • We have our Strapi Deployment, which refers to the mystrapiapp-prod:0.0.1 image from the registry app-registry:5050, exposes port 1337, and it uses the proper conf.
  • Second and last, a Service exposes the actual port to the rest of the cluster via DNS. The same DNS conventions mentioned before apply here as well.

Deploying everything

Now let's apply all of our files.
To apply all the files in our k8s folder, we can run:

kubectl apply -f k8s/

Now let's wait for everything to start; we can watch the progress by running:

watch kubectl get pods

Eventually, the status of both pods should be Running and the READY column should show 1/1.
Once everything is stable, you can open your browser and navigate to:

http://localhost:1337/

If you want to see the logs of those pods, you can do:

kubectl logs --selector app=strapi-app --tail=50 --follow # for the app
kubectl logs --selector app=strapi-db --tail=50 --follow # for the db
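If a pod never reaches the Running status, the usual first stops are its events and details, for example:

kubectl describe pod -l app=strapi-app # shows events, image pulls, mount errors
kubectl get events --sort-by=.metadata.creationTimestamp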

Recap

Let's sum up what we did and achieved:

  • We created a 3-node K8s cluster via k3d with some predefined port-forwarding. It mounts a disk on each of the nodes, mapped to our temp folder, and it also deploys an "external" Docker registry from which to pull our images.
  • We built our docker image, set up some proper versioning, and pushed it to our "external" registry.
  • We deployed mysql with persistent storage.
  • Finally, we deployed our app, and we can access it from a browser. We are finally ready to start creating content, uploading assets, and publishing our website.

Despite all this awesome work, there are some things that aren't entirely convincing and could be improved, like:

  • There are a lot of files with a lot of repeated code (like the labels).
  • How do we deploy a second environment? Do we have to deploy all those files all over again? What if we want to change something?
  • What about the assets?
  • What about high availability?

Improved Deployment

To take our journey one step further, we need the help of a very important tool in the K8s world: Helm.
From their website: "Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application."
Therefore, we need a "helm chart" to improve our deployment.

To organize ourselves, inside our strapi-k8s folder, let us create a folder called helm.

mkdir helm

Helm chart

Helm provides us with the proper tooling to manage K8s applications; everything revolves around "charts".
A chart is a collection of YAML template files plus the tooling to reuse code and package our K8s YAML files.
Most K8s tools have a Helm chart that you can install in your cluster, like the Rancher tool we suggested you deploy in the previous section.
You can also search for other tools through a Helm repository and install or reuse them; you can find more information in Helm's Quickstart Guide.

We can start by creating our own chart by running the following command:

cd helm
helm create strapi-chart

This creates the following folder structure:

strapi-chart
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

As a brief explanation:

  • Chart.yaml, contains global chart information like name, description, version, app version, and dependencies.
  • templates/, contains all the templates, which are rendered using go templates.
  • values.yaml, contains the global values used by the templates, which can be overwritten from the outside. You can find more in-depth information about these files and all the good stuff about charts in the Helm docs.

If you want to quickly see how all of these files connect to K8s, or you just want to debug, you can run the command:

# from the "helm" folder
# helm template <NAME (any name)> <CHART_PATH>
helm template strapi strapi-chart

This will output the yaml files generated based on the chart's templates.
And as you can see, there aren't that many objects after all; by default it creates a ServiceAccount, a Service, a Deployment, and a test-connection Pod.
If you wander around the files, you will notice that this chart can also generate an Ingress and a HorizontalPodAutoscaler; we will discuss them later, but for now we'll ignore them.
Since we can override the default values, we could reuse this chart and generate the same files for the DB and the app.

Helm chart customization

If we look at our non-reusable files in the k8s folder, you can notice that the only real difference between the two is the persistent storage configuration (PersistentVolume and PersistentVolumeClaim).
So let's add that, with the same "logic" as the Ingress or the HorizontalPodAutoscaler: it will only be rendered if we enable it.

Let's create a file under the templates folder with the name claim.yaml, with the following content:

# ~/strapi-k8s/helm/strapi-chart/templates/claim.yaml

{{- if .Values.storage.claim.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ include "strapi-chart.fullname" . }}-pvc
  labels:
      {{- include "strapi-chart.labels" . | nindent 4 }}
spec:
  accessModes: {{ .Values.storage.accessModes }}
  storageClassName: {{ .Values.storage.storageClassName }}
  resources:
    requests:
      storage: {{ .Values.storage.capacity }}
{{- end }}

We will not add the PersistentVolume, since the local-path storage provisioner will take care of that.
We created one manually before, but in reality we can omit that step, assuming we are using the default values.

Then, let's add at the end of the values.yaml file:

# ~/strapi-k8s/helm/strapi-chart/values.yaml
# ...
storage:
  claim:
    enabled: false
  capacity: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  mountPath: "/tmp"

Finally, add to the end of the templates/deployment.yaml, the volume section, watch out for the indentation:

# ~/strapi-k8s/helm/strapi-chart/templates/deployment.yaml
# ...
      {{- if .Values.storage.claim.enabled }}
      volumes:
        - name: {{ include "strapi-chart.fullname" . }}-storage
          persistentVolumeClaim:
            claimName: {{ include "strapi-chart.fullname" . }}-pvc
      {{- end }}

And, in the same templates/deployment.yaml file, between the spec.template.spec.containers[0].resources and spec.template.spec.nodeSelector, add the volumeMounts section (the following code has the "surrounding" code for references):

# ~/strapi-k8s/helm/strapi-chart/templates/deployment.yaml
# ... 
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- if .Values.storage.claim.enabled }}
          volumeMounts:
            - name: {{ include "strapi-chart.fullname" . }}-storage
              mountPath: {{ .Values.storage.mountPath }}
          {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
# ...

Ok, let's run our template command again:

helm template mysql strapi-chart

Now let's run it again, but with our storage enabled:

helm template mysql strapi-chart --set storage.claim.enabled=true

This is amazing: if you compare the k8s/db.yaml file and this output, they are almost identical. But we still have some work to do.
Let us add the environment variables configuration.

In the templates folder, create a file with the name configmap.yaml, with the following content:

# ~/strapi-k8s/helm/strapi-chart/templates/configmap.yaml

{{- if .Values.configMap.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "strapi-chart.fullname" . }}
data:
{{- toYaml .Values.configMap.data | nindent 2 }}
{{- end }}

Now, in the templates folder, create another file with the name secret.yaml, with the following content:

# ~/strapi-k8s/helm/strapi-chart/templates/secret.yaml

{{- if .Values.secret.enabled }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "strapi-chart.fullname" . }}
type: Opaque
data:
{{- toYaml .Values.secret.data | nindent 2 }}
{{- end }}

Then, let's add at the end of the values.yaml file:

# ~/strapi-k8s/helm/strapi-chart/values.yaml
# ...
configMap:
  enabled: false
  data: {}

secret:
  enabled: false
  data: {}

Finally, in the templates/deployment.yaml file, between the spec.template.spec.containers[0].volumeMounts (previously added) and spec.template.spec.nodeSelector, add the envFrom section (the following code has the "surrounding" code for references):

# ~/strapi-k8s/helm/strapi-chart/templates/deployment.yaml
# ...
          {{- if .Values.storage.claim.enabled }}
          volumeMounts:
            - name: {{ include "strapi-chart.fullname" . }}-storage
              mountPath: {{ .Values.storage.mountPath }}
          {{- end }}
          {{- if or .Values.configMap.enabled .Values.secret.enabled }}
          envFrom:
            {{- if .Values.configMap.enabled }}
            - configMapRef:
                name: {{ include "strapi-chart.fullname" . }}
            {{- end }}
            {{- if .Values.secret.enabled }}
            - secretRef:
                name: {{ include "strapi-chart.fullname" . }}
            {{- end }}
          {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
# ...

We are almost there, in the values.yaml file, add portName, containerPort, and nodePort to the service key, so it looks like this:

# ~/strapi-k8s/helm/strapi-chart/values.yaml
# ...
service:
  type: ClusterIP
  port: 80
  portName: http
  containerPort: 80
  nodePort: # for a Service of type NodePort, and yes, we'll leave it empty
# ...

And, in the deployment.yaml file, update the values spec.template.spec.containers[0].ports[0].name and spec.template.spec.containers[0].ports[0].containerPort, like the following:

# ~/strapi-k8s/helm/strapi-chart/templates/deployment.yaml
# ...
          ports:
            - name: {{ .Values.service.portName }}
              containerPort: {{ .Values.service.containerPort }}
              protocol: TCP
# ...

And, in the service.yaml, update the ports spec, like the following:

# ~/strapi-k8s/helm/strapi-chart/templates/service.yaml
# ...
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.containerPort }}
      protocol: TCP
      name: {{ .Values.service.portName }}
      {{- if and (eq .Values.service.type "NodePort") .Values.service.nodePort }}
      nodePort: {{ .Values.service.nodePort }}
      {{- end }}
# ...

The last change: at the end of the values.yaml file, add the livenessProbe and readinessProbe keys, so it looks like this:

# ~/strapi-k8s/helm/strapi-chart/values.yaml
# ...
livenessProbe: {}
#  httpGet:
#    path: /
#    port: http

readinessProbe: {}
#  httpGet:
#    path: /
#    port: http

And, in the deployment.yaml file, update the values spec.template.spec.containers[0].livenessProbe and spec.template.spec.containers[0].readinessProbe, and replace them with the following code:

# ~/strapi-k8s/helm/strapi-chart/templates/deployment.yaml
# ...
          {{- if .Values.livenessProbe }}
          livenessProbe:
          {{- toYaml .Values.livenessProbe | nindent 12 }}
          {{- end }}
          {{- if .Values.readinessProbe }}
          readinessProbe:
          {{- toYaml .Values.readinessProbe | nindent 12 }}
          {{- end }}
# ...

Don't worry, we'll get back to those probes later.

Ok, we are done customizing our chart for now, it's time to put it to use.

Parenthesis Regarding the Secrets

For the purposes of this article, we won't get involved in securing our secrets.
You might have seen, or assumed by now, that writing Secrets into a repo (even base64 encoded, which is an encoding, not encryption) is a huge security risk.
So you need a safe way to handle them.
The options vary, and you can choose the one that fits you best, or a combination of several.
Some potential options are encrypting secrets before committing them (e.g., SOPS or Sealed Secrets), or keeping them out of the repo entirely in an external secret manager (e.g., HashiCorp Vault or your cloud provider's offering).
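As a simple stopgap that at least keeps secret values out of the repo, you can also create Secrets imperatively and let kubectl do the base64 encoding; a minimal sketch, reusing the Secret name from our earlier conf.yaml:

kubectl create secret generic strapi-app-secret \
  --from-literal=DATABASE_PASSWORD='a-strong-password' \
  --from-literal=JWT_SECRET='a-strong-secret'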

DB Values

Given the customizations we did to our Helm chart, we are ready to use it.
Let's create a file, under the helm folder, with the name db.yaml, which will override the default values for our DB, with the following content:

# ~/strapi-k8s/helm/db.yaml
image:
  repository: mysql
  tag: 5.7

securityContext:
  runAsUser: 1000
  allowPrivilegeEscalation: false

service:
  port: 3306
  portName: mysql
  containerPort: 3306

storage:
  claim:
    enabled: true
  mountPath: "/var/lib/mysql"

configMap:
  enabled: true
  data:
    MYSQL_USER: strapi
    MYSQL_DATABASE: strapi-k8s

secret:
  enabled: true
  data:
    # please never use these passwords, always use strong passwords, AND remember the section "(Parenthesis regarding the Secrets)"
    MYSQL_ROOT_PASSWORD: c3RyYXBpLXN1cGVyLXNlY3VyZS1yb290LXBhc3N3b3Jk # echo -n strapi-super-secure-root-password | base64
    MYSQL_PASSWORD: c3RyYXBpLXBhc3N3b3Jk # echo -n strapi-password | base64

Let's do a quick debug to see if we are on the right track, run the following command:

# ~/strapi-k8s/helm
helm template mysql strapi-chart -f db.yaml

Everything should look amazing: if you compare the files (from our k8s folder and the helm output), they should be almost the same, with only slight differences in the order of some keys, some extra labels, and other minor details.
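If you want more than an eyeball comparison, you can render the chart to a file and diff it against the original manifest (expect the key-order and label noise mentioned above):

# from the helm folder
helm template mysql strapi-chart -f db.yaml > /tmp/helm-db.yaml
diff ../k8s/db.yaml /tmp/helm-db.yaml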

App values

Ok, now let's go for our main Strapi application.
Let's create a file under the helm folder with the name app.yaml, with the following content:

image:
  repository: app-registry:5050/mystrapiapp-prod
  tag: 0.0.1

service:
  type: NodePort
  port: 1337
  containerPort: 1337
  nodePort: 31337

configMap:
  enabled: true
  data:
    HOST: 0.0.0.0
    PORT: "1337"
    NODE_ENV: production

    DATABASE_HOST: mysql-strapi-chart # notice that this name changed
    DATABASE_PORT: "3306"
    DATABASE_USERNAME: strapi
    DATABASE_NAME: strapi-k8s

secret:
  enabled: true
  data:
    # use the proper values in here in base64
    APP_KEYS: <APP keys in base64>
    API_TOKEN_SALT: <API token salt in base64>
    ADMIN_JWT_SECRET: <admin JWT secret in base64>
    JWT_SECRET: <JWT secret in base64>

    DATABASE_PASSWORD: c3RyYXBpLXBhc3N3b3Jk # echo -n strapi-password | base64

Let's do a quick debug to see if we are on the right track, run the following command:

helm template strapi strapi-chart -f app.yaml

Deploying Everything

Make sure you have deleted all the resources from the previous chapter from your K8s cluster.
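Assuming you created them with kubectl apply -f k8s/ as shown earlier, the following should clean them up:

kubectl delete -f k8s/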

If you want to run both the k8s and helm versions at the same time, you need to change the following in either of them:

  • The nodePort from the app Service, so they don't both try to run at 31337.
  • Add another port mapping for the cluster.
  • And lastly (optional), deploy them to different namespaces.

To install our charts, we can run the following commands:

helm install mysql strapi-chart -f db.yaml --atomic
helm install strapi strapi-chart -f app.yaml --atomic

Now let's wait for everything to start; we can watch the progress by running:

watch kubectl get pods

Eventually, the status of both pods should be Running and the READY column should show 1/1.
Once everything is stable, you can open your browser and navigate to:

http://localhost:1337/admin
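Before wrapping up, note that this is where the chart starts paying off: rolling out a new version is now a one-liner. For example, assuming you've built and pushed a 0.0.2 image tag, you could upgrade the release in place:

helm upgrade strapi strapi-chart -f app.yaml --set image.tag=0.0.2 --atomic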

Conclusions

Let's sum up what we did and what we achieved:

  • We created a Helm chart that allows us to reuse our K8s configuration.
  • We created the proper override configurations for MySQL and Strapi. In the long run, this allows us to install our chart with different configurations and deploy it to multiple environments and/or multiple clusters.
  • Finally, we deployed our app, and we can access it from a browser.

But are we done yet? Not quite; there are still some things that aren't entirely convincing and could be improved:

  • The secrets. We already discussed this, so please never commit secrets to a repo, and find a solution that fits your needs.
  • What happened to those livenessProbe and readinessProbe that we sort of commented out?
  • We still don't know about the assets, what's up with that?
  • And, again, what about high availability?

In part 2 of this tutorial series, we'll try to answer these questions to achieve a more advanced and robust deployment.
