Introduction
Kubernetes, also known as K8s, is a powerful, open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It provides a highly resilient, scalable, and efficient environment for running cloud-native applications. To understand how Kubernetes operates, it is essential to explore its architecture, which consists of the Control Plane, Worker Nodes, and several interconnected components.
What Is a Cluster?
A cluster is a group of interconnected computers (or nodes) that work together to perform tasks. Each node in the cluster contributes resources like CPU, memory, and storage, allowing the cluster to handle more significant workloads and provide higher availability and fault tolerance.
What Is A Kubernetes Cluster?
A Kubernetes cluster is a group of nodes (computers) that work together to run containerized applications. It provides a flexible and efficient environment for deploying, managing, and scaling applications.
Core Components of Kubernetes Cluster
1. Control Plane (Master Node)
The Control Plane is responsible for managing the Kubernetes cluster. It makes global decisions about the cluster (e.g., scheduling applications) and ensures the system's desired state is maintained.
2. Worker Nodes (Node Components)
Worker nodes are responsible for running application workloads.
3. Pods: The smallest and simplest unit in Kubernetes. A pod represents a single instance of a running process in your cluster and can contain one or more containers.
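To make the pod concept concrete, here is a minimal pod manifest as a sketch. The pod name and label are illustrative, not from this walkthrough; only the NGINX image appears later in the article.

```yaml
# Hypothetical minimal Pod manifest: one pod running a single NGINX container.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod   # illustrative name
spec:
  containers:
    - name: web       # illustrative container name
      image: nginx    # the single container inside the pod
```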
What is MicroK8s?
MicroK8s is an option for deploying a single-node Kubernetes cluster as a single package to target workstations and Internet of Things (IoT) devices. Canonical, the creator of Ubuntu Linux, originally developed and currently maintains MicroK8s.
How To Create Kubernetes Cluster
Several options are available when you're running Kubernetes locally. You can install Kubernetes on physical machines or VMs, or use a cloud-based solution such as Azure Kubernetes Service (AKS).
We are going to explore a Kubernetes installation with a single-node cluster, and learn how to configure and install a MicroK8s environment that's easy to set up and tear down. Then, we will deploy a Kubernetes service and scale it out to multiple instances to host a website.
Install MicroK8s on Linux
The Linux installation of MicroK8s involves a few steps. Open a terminal window and execute the commands as in the following instructions:
1. Install the MicroK8s snap app:
sudo snap install microk8s --classic
You should see a confirmation message upon successful installation.
2. To check the status of the installation, run the
microk8s.status --wait-ready
command.
3. To install the add-ons, run the following command:
sudo microk8s.enable dns dashboard registry
If an add-on is already enabled, the command reports that it is.
You're now ready to access your cluster with kubectl.
Explore the Kubernetes cluster
What Is Kubectl?
Kubectl is a command-line tool that lets you interact with a Kubernetes cluster. You can use it to deploy applications, check on your resources, and manage your cluster.
MicroK8s provides a version of kubectl that you can use to interact with your new Kubernetes cluster. This copy of kubectl allows you to have a parallel installation of another system-wide kubectl instance without affecting its functionality.
Execute the snap alias command to create an alias for microk8s.kubectl as kubectl:
sudo snap alias microk8s.kubectl kubectl
This step streamlines usage.
Display cluster node information
Check the nodes that are running in your cluster.
- MicroK8s is a single-node cluster installation, so only one node is listed. It is important to note that this node functions as both the control plane and a worker node in the cluster. Confirm this configuration by executing the
kubectl get nodes
command. To obtain information about all the resources in your cluster, execute the
kubectl get
command:
This indicates that there is only one node (a virtual machine (VM)) in the cluster, named emmanuel, that is in the Ready state, which means the control plane may schedule workloads on this node.
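As a sketch of what to look for, the snippet below extracts the STATUS column from sample kubectl get nodes output. The output shown is illustrative, not captured from a real cluster; the node name matches the one in this walkthrough, while the version and age are made up.

```shell
# Illustrative 'kubectl get nodes' output for a single-node MicroK8s cluster
# (version and age are hypothetical); awk pulls the STATUS column from the
# data row to confirm the node is Ready.
sample='NAME       STATUS   ROLES    AGE   VERSION
emmanuel   Ready    <none>   10m   v1.28.0'

status=$(printf '%s\n' "$sample" | awk 'NR==2 {print $2}')
echo "$status"   # prints: Ready
```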
Let's assume that you need to find the node's IP address. To fetch extra information from the API server, run the kubectl get nodes command with the -o wide flag.
Kubernetes uses a concept called namespaces to logically divide a cluster into multiple virtual clusters.
To fetch all services in all namespaces, pass the --all-namespaces flag:
Now you can see the services running on the cluster.
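A namespace is itself a Kubernetes object, so one can also be created declaratively. Here is a minimal manifest as a sketch; the namespace name is illustrative and not part of this walkthrough.

```yaml
# Hypothetical Namespace manifest: carves out a virtual cluster
# named "staging" inside the physical cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: staging   # illustrative name
```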
Install A Web Server On a Cluster
NGINX (pronounced "Engine-X") is a web server that helps websites and applications run smoothly. It is fast, lightweight, and efficient, making it popular for hosting websites, managing web traffic, and handling multiple users at once.
1. We are going to use NGINX for our web server. To create an NGINX deployment, run the kubectl create deployment
command. Specify the name of the deployment and the container image to create a single instance of the pod.
2. To fetch information about your deployment, run the kubectl get deployments command.
3. The deployment created a pod. To fetch information about your cluster's pods, run the kubectl get pods
command.
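For reference, the imperative kubectl create deployment step above corresponds roughly to applying a declarative manifest like the following sketch. The deployment name and label are assumptions; the article only specifies NGINX as the image.

```yaml
# Hypothetical Deployment manifest, roughly equivalent to
# 'kubectl create deployment nginx-deployment --image=nginx'.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment     # illustrative name
spec:
  replicas: 1                # a single pod instance, as in the tutorial
  selector:
    matchLabels:
      app: nginx-deployment  # illustrative label
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
        - name: nginx
          image: nginx
```

A declarative manifest like this can be versioned and applied with kubectl apply, which is often preferred over imperative commands for repeatable deployments.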
Test the website installation
Test the NGINX installation by connecting to the web server through the pod's IP address.
A Pod is the smallest unit in Kubernetes that runs your application. Think of a Pod as a box that holds your app.
- To find the pod's address, pass the
-o wide
flag:
Notice that the command returns both the IP address of the pod and the name of the node on which the workload is scheduled.
- To access the website, run
wget
on the IP address listed earlier.
Scale a web server deployment on a cluster
If there is a sudden increase in users accessing your website, causing it to slow down under the load, you can deploy additional instances of the site in your cluster and distribute the load across the instances.
- To scale the number of replicas in your deployment, use the
kubectl scale
command. Specify the desired number of replicas and the name of the deployment. To scale the total number of NGINX pods to three, run the kubectl scale
command:
The scale command allows you to scale the instance count up or down.
- To check the number of running pods, execute the
kubectl get pods
command with the -o wide flag.
Notice that you now see three running pods, each with a unique IP address.
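To illustrate, the snippet below counts the pod rows in sample kubectl get pods -o wide output after scaling. The pod names and IP addresses are made up, and the columns are abbreviated; only the node name matches this walkthrough.

```shell
# Illustrative 'kubectl get pods -o wide' output after scaling to 3 replicas
# (pod names and IPs are hypothetical; columns abbreviated).
sample='NAME               READY   STATUS    IP          NODE
nginx-7c5d-abcde   1/1     Running   10.1.0.5    emmanuel
nginx-7c5d-fghij   1/1     Running   10.1.0.6    emmanuel
nginx-7c5d-klmno   1/1     Running   10.1.0.7    emmanuel'

# Count the rows that report a Running pod (the header line does not match).
running=$(printf '%s\n' "$sample" | grep -c 'Running')
echo "$running"   # prints: 3
```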
Conclusion
Kubernetes, often abbreviated as K8s, revolutionizes the deployment and management of containerized applications. Its robust architecture, comprising the Control Plane and Worker Nodes, provides high availability, scalability, and efficient resource management. By understanding the core components and concepts of Kubernetes, such as container orchestration and management, developers and IT professionals can leverage its full potential to build resilient, scalable systems.
Deploying a Kubernetes cluster on physical machines, on virtual machines, or using a cloud-based solution like Azure Kubernetes Service (AKS) offers flexibility and ease of management. The step-by-step guide on installing MicroK8s and deploying applications such as NGINX provides a practical approach to getting started with Kubernetes. Scaling applications and managing workloads become seamless tasks, allowing for better handling of increased user traffic and ensuring consistent performance.
As the demand for scalable and resilient systems grows, Kubernetes stands out as a powerful tool to meet these challenges, empowering organizations to achieve their infrastructure goals.