CKA Full Course 2024: Day 13/40 - Static Pods, Manual Scheduling, Labels, and Selectors in Kubernetes

Lloyd Rivers - Nov 3 - Dev Community

Task: Schedule a Pod Manually Without the Scheduler

In this task, we'll explore how to bypass the Kubernetes scheduler by assigning a pod directly to a specific node in the cluster. This is useful in scenarios where you need a pod to run on a particular node without going through the usual scheduling process.

Prerequisites

We assume you have a Kubernetes cluster running, created with a KIND (Kubernetes in Docker) configuration similar to the one described in previous posts. Here, we’ve created a cluster named kind-cka-cluster:

kind create cluster --name kind-cka-cluster --config config.yml

Since we’ve already covered cluster creation with KIND in earlier posts, we won’t go into those details again.

Step 1: Verify the Cluster Nodes

To see the nodes available in this new cluster, run:

kubectl get nodes

You should see output similar to this:

NAME                           STATUS   ROLES           AGE   VERSION
kind-cka-cluster-control-plane Ready    control-plane   7m   v1.31.0

For this task, we’ll be scheduling our pod on kind-cka-cluster-control-plane.

Step 2: Define the Pod Manifest (pod.yml)

Now, let’s create a pod manifest in YAML format. Using the nodeName field in our pod configuration, we can specify the exact node for the pod, bypassing the Kubernetes scheduler entirely.

pod.yml:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: kind-cka-cluster-control-plane

In this manifest:

  • We set nodeName to kind-cka-cluster-control-plane, which means the scheduler skips this pod entirely, and the kubelet on that node handles placement instead.

This approach is a direct method for node selection, overriding other methods like nodeSelector or affinity rules.
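For comparison, here is a sketch of the more conventional nodeSelector approach, which still goes through the scheduler. It relies on the well-known kubernetes.io/hostname label; the assumption here is that the node's hostname label matches its node name, as it does in KIND clusters:

```yaml
# Hypothetical alternative to pod.yml: scheduler-driven node selection.
# The scheduler (not the kubelet alone) matches the nodeSelector against
# each node's labels before binding the pod.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-selector
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    kubernetes.io/hostname: kind-cka-cluster-control-plane
```

Unlike nodeName, this pod still gets a Scheduled event and respects taints, which is why nodeName is the more forceful of the two.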

According to Kubernetes documentation:

"nodeName is a more direct form of node selection than affinity or nodeSelector. nodeName is a field in the Pod spec. If the nodeName field is not empty, the scheduler ignores the Pod and the kubelet on the named node tries to place the Pod on that node. Using nodeName overrules using nodeSelector or affinity and anti-affinity rules."

For more details, refer to the Kubernetes documentation on node assignment.

Step 3: Apply the Pod Manifest

With our manifest ready, apply it to the cluster:

kubectl apply -f pod.yml

This command creates the nginx pod and assigns it directly to the kind-cka-cluster-control-plane node.

Step 4: Verify Pod Placement

Finally, check that the pod is running on the specified node:

kubectl get pods -o wide

The output should confirm that the nginx pod is indeed running on kind-cka-cluster-control-plane:

NAME    READY   STATUS    RESTARTS   AGE   IP           NODE                             NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          28s   10.244.0.5   kind-cka-cluster-control-plane   <none>           <none>

This verifies that by setting the nodeName field, we successfully bypassed the Kubernetes scheduler and directly scheduled our pod on the control plane node.
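As an optional extra check, you can confirm the scheduler never touched this pod: a scheduler-assigned pod has a "Scheduled ... default-scheduler" event, which a manually placed pod lacks. A command sketch, assuming the nginx pod created above:

```shell
# Print the node the pod is bound to (should be the control plane node)
kubectl get pod nginx -o jsonpath='{.spec.nodeName}{"\n"}'

# List events for this pod; note the absence of a "Scheduled" event
# from default-scheduler, since the kubelet placed it directly
kubectl get events --field-selector involvedObject.name=nginx
```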


Task: Log in to the control plane node, go to the directory containing the default static pod manifests, and try to restart the control plane components.

To access the control plane node of our newly created cluster, use the following command:

docker exec -it kind-cka-cluster-control-plane bash

Navigate to the directory containing the static pod manifests:

cd /etc/kubernetes/manifests

Verify the current manifests:

ls

To restart the kube-controller-manager, temporarily move its manifest file out of the directory. The kubelet watches this directory and stops the pod as soon as the manifest disappears:

mv kube-controller-manager.yaml /tmp

After confirming the pod has stopped, return the manifest file to its original location, and the kubelet will recreate the pod:

mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/

With these steps, we successfully demonstrated how to access the control plane and manipulate the static pod manifests to manage the lifecycle of control plane components.
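The kubelet's reaction to the mv can be watched from inside the node itself. A sketch using crictl, which is available on KIND node images; the grep pattern is an assumption about the container name:

```shell
# Run inside the control plane node (docker exec ... bash).
# Move the manifest away; the kubelet will stop the container.
mv /etc/kubernetes/manifests/kube-controller-manager.yaml /tmp/

# Poll until the kube-controller-manager container disappears
until ! crictl ps | grep -q kube-controller-manager; do
  sleep 2
done
echo "kube-controller-manager stopped"

# Restore the manifest; the kubelet recreates the pod
mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/
```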


Confirming the Restart of kube-controller-manager

After temporarily moving the kube-controller-manager.yaml manifest file to /tmp, we can verify that the kube-controller-manager has restarted. As mentioned in previous posts, I am using k9s, which clearly shows the restart. For readers without k9s, try the following command.

Inspect Events:
To gather more information, use:

   kubectl describe pod kube-controller-manager-kind-cka-cluster-control-plane -n kube-system

Look for events at the end of the output. A successful restart will show events similar to:

   Events:
     Type    Reason   Age                    From     Message
     ----    ------   ----                   ----     -------
     Normal  Killing  4m12s (x2 over 8m32s)  kubelet  Stopping container kube-controller-manager
     Normal  Pulled   3m6s (x2 over 7m36s)   kubelet  Container image "registry.k8s.io/kube-controller-manager:v1.31.0" already present on machine
     Normal  Created  3m6s (x2 over 7m36s)   kubelet  Created container kube-controller-manager
     Normal  Started  3m6s (x2 over 7m36s)   kubelet  Started container kube-controller-manager

The presence of "Killing," "Created," and "Started" events indicates that the kube-controller-manager was stopped and then restarted successfully.
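If you would rather watch the restart happen live than inspect events afterwards, start a watch in a second terminal before moving the manifest (a sketch; -w streams changes as they occur):

```shell
# In a second terminal, outside the node: stream kube-system pod changes.
# When the manifest is moved out, the kube-controller-manager mirror pod
# is removed; when the manifest is moved back, the pod reappears.
kubectl get pods -n kube-system -w
```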


Cleanup

Once you have completed your tasks and confirmed the behavior of your pods, it is important to clean up any resources that are no longer needed. This helps maintain a tidy environment and frees up resources in your cluster.

List Pods:
First, you can check the current pods running in your cluster:

   kubectl get pods

You might see output like this:

   NAME    READY   STATUS    RESTARTS   AGE
   nginx   1/1     Running   0          35m

Describe Pod:
To get more information about a specific pod, use the describe command:

   kubectl describe pod nginx

This will give you details about the pod, such as its name, namespace, node, and other configurations:

   Name:             nginx
   Namespace:        default
   Priority:         0
   Service Account:  default
   Node:             kind-cka-cluster-control-plane/172.19.0.3

Delete the Pod:
If you find that the pod is no longer needed, you can safely delete it with the following command:

   kubectl delete pod nginx

Verify Deletion:
After executing the delete command, you can verify that the pod has been removed by listing the pods again:

   kubectl get pods

Ensure that the nginx pod no longer appears in the list.
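As a side note, when a deletion is issued elsewhere (for example from another script), kubectl wait can block until the pod is actually gone, instead of polling kubectl get pods by hand. A sketch using the nginx pod from above:

```shell
# Block until the nginx pod is fully removed (or time out after 60s)
kubectl wait --for=delete pod/nginx --timeout=60s
```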

By performing these cleanup steps, you help ensure that your Kubernetes cluster remains organized and efficient.


Creating Multiple Pods with Specific Labels

In this section, we will create three pods based on the nginx image, each with a unique name and a label indicating a different environment: env=test, env=dev, and env=prod.

Step 1: Create the Script

First, we'll create a script that contains the commands to generate the pods. I am using a script for two reasons:

  1. I want to learn bash.
  2. If I need to create these three pods again, I only have to run the file instead of typing it all out again.

Use the following command to create the script file:

vi create-pods.sh

Next, paste the following code into the file:

#!/bin/bash

# Create pod1 with label env=test
kubectl run pod1 --image=nginx --labels=env=test

# Create pod2 with label env=dev
kubectl run pod2 --image=nginx --labels=env=dev

# Create pod3 with label env=prod
kubectl run pod3 --image=nginx --labels=env=prod

# Wait for a few seconds to allow the pods to start
sleep 5

# Verify the created pods and their labels
echo "Verifying created pods and their labels:"
kubectl get pods --show-labels
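Since one goal here is learning bash, it is worth noting that the three kubectl run calls could equally be written as a loop. This is an alternative sketch, not required for the task:

```shell
#!/bin/bash

# Loop version of create-pods.sh: creates pod1..pod3 with
# env labels test, dev, and prod respectively.
envs=(test dev prod)
for i in "${!envs[@]}"; do
  # Array indices start at 0, so pod numbers are i + 1
  kubectl run "pod$((i + 1))" --image=nginx --labels="env=${envs[$i]}"
done

# Verify the created pods and their labels
kubectl get pods --show-labels
```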

Step 2: Make the Script Executable

After saving the file, make the script executable with the following command:

chmod +x create-pods.sh

Step 3: Execute the Script

Run the script to create the pods:

./create-pods.sh

You should see output indicating the creation of the pods:

pod/pod1 created
pod/pod2 created
pod/pod3 created

Step 4: Verify the Created Pods

The script will then display the status of the created pods:

Verifying created pods and their labels:
NAME   READY   STATUS              RESTARTS   AGE   LABELS
pod1   0/1     ContainerCreating   0          5s    env=test
pod2   0/1     ContainerCreating   0          5s    env=dev
pod3   0/1     ContainerCreating   0          5s    env=prod

At this point, you can filter the pods based on their labels. For example, to find the pod with the env=dev label, use the following command:

kubectl get po -l env=dev

You should see output confirming the pod is running:

NAME   READY   STATUS    RESTARTS   AGE
pod2   1/1     Running   0          4m9s
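Equality-based selectors like env=dev are only part of the story. kubectl also supports set-based selectors, which are worth knowing for the exam (standard kubectl label-selector syntax):

```shell
# Pods whose env label is either dev or prod
kubectl get po -l 'env in (dev,prod)'

# Pods whose env label is anything except prod
kubectl get po -l 'env!=prod'

# Combine a selector with --show-labels to see the matched labels
kubectl get po -l env=dev --show-labels
```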
