CKA Full Course 2024: Day 12/40 Daemonsets, Job and Cronjob in Kubernetes

Lloyd Rivers - Oct 31 - Dev Community

Since this post is all about deploying an Nginx frontend and gathering metrics from our cluster, let’s start with a brief overview of what we’re building and the tools involved.

Note: Originally, I wasn’t planning to include another introduction, but given the complexity of today’s setup, a recap will ensure everyone has a solid understanding before diving in.

Project Overview

In this tutorial, we’re deploying a simple Nginx server as our frontend. It serves the default Nginx welcome page, which you can open in your browser once it’s running. We’ll also configure a health-checking system using a CronJob, which periodically checks whether our Nginx server is up and logs a status code as confirmation.

Additionally, we’ll set up a DaemonSet to run a Node Exporter on every node in our cluster. The Node Exporter gathers node-level metrics, giving us insight into the performance and resource usage of the nodes our app runs on.

Key Concepts

To make sure we're all on the same page, here’s a breakdown of the main components we’re working with:

  • Nginx: Nginx is a web server that can serve static content (like HTML) or be configured as a reverse proxy or load balancer. Here, we’re using it to serve the default Nginx homepage, which acts as our app’s frontend.

  • CronJob: In Kubernetes, a CronJob lets you run a Job on a schedule, much like cron on a Linux system. Here, we’re using a CronJob to regularly check the health of our Nginx server: if the server is reachable, the check logs a 200 status code confirming it’s up.

  • DaemonSet: A DaemonSet ensures that a specific pod runs on every node in your Kubernetes cluster. In this setup, we’re using it to run a Node Exporter on each node, collecting metrics like CPU and memory usage, which is crucial for monitoring app health and resource consumption.

With this structure in mind, we’ll dive into the YAML files needed to set up each component.


Prerequisites

Before we start, ensure you have a configuration file to create the Kubernetes cluster. Refer to the Kind Quick Start guide for detailed instructions on setting up your Kind cluster.

Cluster Configuration (config.yml)

Create a file named config.yml with the following content to define your Kind cluster:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: cka-cluster  
nodes:
- role: control-plane
  extraPortMappings:
  # Expose port 30001 on the host so the NodePort service we create later
  # is reachable at http://localhost:30001
  - containerPort: 30001
    hostPort: 30001
    listenAddress: "0.0.0.0"
    protocol: tcp
- role: worker  
- role: worker 

Run the following command to create the cluster (note that the --name flag overrides the name field in config.yml, so the cluster is created as kind-cka-cluster):

kind create cluster --name kind-cka-cluster --config config.yml

Use the following command to set the context to the new cluster:

kubectl config use-context kind-kind-cka-cluster
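
Before moving on, it’s worth a quick sanity check that the cluster came up and the context is set. The node names depend on the cluster name, so treat the comments below as illustrative:

kubectl config current-context   # should print kind-kind-cka-cluster
kubectl get nodes                # expect one control-plane node and two workers, all Ready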

Tasks



Create a DaemonSet

  • The task here is to create a DaemonSet in Kubernetes. A DaemonSet ensures that every node in the cluster runs a copy of the specified pod. For this example, I am setting up the Prometheus Node Exporter, which exposes node-level metrics for monitoring. I based this on the video walkthrough, but I encourage you to visit the Kubernetes documentation to read more about DaemonSets and their configuration options for a deeper understanding.

Solution

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: prometheus-node-exporter
  namespace: kube-system
  labels:
    app: prometheus-node-exporter
spec:
  selector:
    matchLabels:
      app: prometheus-node-exporter
  template:
    metadata:
      name: prometheus-node-exporter
      labels:
        app: prometheus-node-exporter
    spec:
      containers:
      - image: prom/node-exporter:v0.16.0
        imagePullPolicy: IfNotPresent
        name: prometheus-node-exporter
        ports:
        - name: prom-node-exp
          #^ must be an IANA_SVC_NAME (at most 15 characters, lowercase letters, digits, and '-', with at least one letter)
          containerPort: 9100
          hostPort: 9100
      tolerations:
      # The control-plane taint key changed from "master" to "control-plane"
      # in Kubernetes 1.24+, so tolerate both to keep a pod on every node.
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Exists"
        effect: "NoSchedule"
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      # Use the host's network, process, and IPC namespaces so the exporter
      # reports node-level metrics rather than container-level ones.
      hostNetwork: true
      hostPID: true
      hostIPC: true
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/app-metrics: 'true'
    prometheus.io/app-metrics-path: '/metrics'
  name: prometheus-node-exporter
  namespace: kube-system
  labels:
    app: prometheus-node-exporter
spec:
  clusterIP: None
  ports:
    - name: prometheus-node-exporter
      port: 9100
      protocol: TCP
  selector:
    app: prometheus-node-exporter
  type: ClusterIP
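
Assuming the manifest above is saved as daemonset.yml (the filename is just my choice), you can apply it and confirm that one exporter pod lands on each node:

kubectl apply -f daemonset.yml
kubectl get daemonset prometheus-node-exporter -n kube-system   # DESIRED and READY should match your node count
kubectl get pods -n kube-system -l app=prometheus-node-exporter -o wide

To peek at the metrics themselves, port-forward to one of the exporter pods (substitute a real pod name from the previous command) and curl it from your machine:

kubectl port-forward -n kube-system pod/<exporter-pod-name> 9100:9100
curl -s http://localhost:9100/metrics | head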

Create a CronJob

  • This task involves creating a CronJob that runs every 5 minutes. I chose not to follow the video tutorial here. The CronJob checks whether the app is up by requesting the homepage; on success, it logs a 200 status code along with the returned HTML content.

Solution

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nginx-app-health-check
spec:
  schedule: "*/5 * * * *"  # Runs every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: check-web-server
              image: appropriate/curl
              command:
                - /bin/sh
                - -c
                - |
                  status_code=$(curl -s -o /dev/null -w '%{http_code}' http://nginx-app-svc.default.svc.cluster.local)
                  homepage_content=$(curl -s http://nginx-app-svc.default.svc.cluster.local)
                  echo "Status Code: $status_code"
                  echo "Homepage Content: $homepage_content"
          restartPolicy: OnFailure

This configuration ensures the CronJob checks the health of the Nginx app every 5 minutes. You can also trigger a run manually to confirm it works, as shown below.
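
Rather than waiting up to five minutes for the schedule to fire, you can trigger a one-off run from the CronJob and read its logs. This assumes the manifest is saved as cronjob.yml and the job name manual-check is arbitrary; note that the check only returns a 200 once the nginx-app-svc Service from the next section is deployed:

kubectl apply -f cronjob.yml
kubectl get cronjob nginx-app-health-check
kubectl create job manual-check --from=cronjob/nginx-app-health-check
kubectl wait --for=condition=complete job/manual-check --timeout=60s
kubectl logs job/manual-check   # should print "Status Code: 200" followed by the Nginx welcome HTML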


Putting It All Together

In this section, we combine our Kubernetes resources to deploy the Nginx application effectively. Below are the configurations for both the Deployment and Service.

Nginx Deployment

This Deployment ensures that we have 3 replicas of our Nginx application running:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: nginx-app
spec:
  replicas: 3  
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.23.4-alpine
        ports:
        - containerPort: 80
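
Assuming the Deployment above is saved as deployment.yml, apply it and confirm all three replicas come up:

kubectl apply -f deployment.yml
kubectl get deployment myapp              # expect READY 3/3
kubectl get pods -l app=nginx-app -o wide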

Nginx Service

The following Service exposes the Nginx application, allowing external traffic to access it via a specified node port:

apiVersion: v1
kind: Service
metadata:
  name: nginx-app-svc
spec:
  selector:
    app: nginx-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30001
  type: NodePort
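
Because the nodePort here (30001) matches the extraPortMappings in config.yml, the homepage should be reachable directly from your host once the Service is applied (assuming the manifest is saved as service.yml):

kubectl apply -f service.yml
kubectl get svc nginx-app-svc             # PORT(S) should show 80:30001/TCP
curl -s http://localhost:30001 | head     # should return the default Nginx welcome page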

Common Gotchas

  1. Label Consistency: Ensure that the labels in the Deployment and Service match. In this case, both resources use app: nginx-app so the Service routes traffic to the right pods.

  2. NodePort Configuration: When using NodePort, make sure the specified port (e.g., 30001) falls within the default NodePort range (30000-32767) and does not conflict with other services. With Kind, it must also match the containerPort in the extraPortMappings of your cluster config so the service is reachable from the host.

  3. Replica Count: The replica count in the Deployment affects availability. In this example, we specified 3 replicas to ensure high availability of the Nginx app.

  4. Container Port: Confirm that the container port in the Deployment matches the targetPort specified in the Service to ensure proper routing of traffic. A quick way to verify this is shown below.
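
A quick way to catch both the label-mismatch and port-mismatch gotchas above is to inspect the Service's endpoints: if the selector or targetPort is wrong, the endpoint list will be empty or point at the wrong port.

kubectl get endpoints nginx-app-svc       # should list three pod IPs on port 80
kubectl describe svc nginx-app-svc        # shows Selector, TargetPort, and NodePort in one place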


Final thoughts...

Today, I focused on building components in Kubernetes, specifically DaemonSets, Jobs, and CronJobs. Here are my key takeaways:

  1. Understanding DaemonSets: DaemonSets ensure that a specific pod runs on all or selected nodes in a Kubernetes cluster. This is particularly useful for monitoring and logging applications that need to be deployed across every node.

  2. Utilizing Jobs and CronJobs: Jobs are great for executing tasks that run to completion, while CronJobs allow for scheduling tasks at specified intervals. This functionality is essential for automating routine tasks, such as health checks or backups.

  3. Leveraging Documentation: I realized the importance of the Kubernetes documentation as a crucial resource when building applications. It's empowering to have access to comprehensive guides and examples, which enhance my ability to troubleshoot and implement features effectively.

  4. Deep Learning through Hands-On Practice: Engaging in building applications from scratch aligns perfectly with my desire for deep learning. I find that struggling through challenges helps me develop a stronger grasp of concepts, solidifying my knowledge and skills.

  5. Experimentation and Exploration: Taking the initiative to explore beyond tutorials fosters a sense of ownership over the learning process. It’s fulfilling to construct solutions independently, reinforcing my understanding of Kubernetes.

