Learn How to Set Kubernetes Resource Requests and Limits

Pavan Belagatti - Mar 10 '23 - - Dev Community

Kubernetes has emerged as the go-to container orchestration platform for modern applications. While it offers an array of features for managing containerized applications, it is essential to define proper resource allocation for those containers. CPU requests and limits are one such resource allocation mechanism that Kubernetes provides. In this article, I will guide you through setting CPU requests and limits in Kubernetes YAML.

Kubernetes CPU Requests and Limits

In Kubernetes, CPU requests and limits are used to manage and allocate resources to containerized applications running on a cluster. In Kubernetes YAML files, you set CPU requests and limits for a container using the resources field in the container specification. This lets you specify the CPU resources your container needs and ensures it is scheduled and run correctly. Setting CPU requests and limits properly is crucial for effective resource management and efficient use of the Kubernetes cluster.

  • CPU requests - The minimum amount of CPU resources a container needs. The scheduler uses this value to find a node with enough spare capacity, and the container is guaranteed at least this much CPU when the node is under contention.

  • CPU limits - The maximum amount of CPU resources a container can use. Kubernetes enforces this limit by throttling the container when it tries to use more CPU than allowed.

Here are some reasons why you should set CPU requests and limits in Kubernetes YAML:

  • Resource allocation: Setting CPU requests and limits allows Kubernetes to allocate the appropriate amount of resources to your containerized application. By specifying how much CPU your application needs, Kubernetes can schedule your containers on nodes with sufficient resources to handle the workload.

  • Performance: Setting CPU requests and limits ensures that your application has enough CPU resources to run efficiently. If your application doesn't have enough CPU resources, it can become slow or unresponsive, which can impact the user experience.

  • Preventing resource contention: If multiple containers are running on a node and competing for CPU resources, setting CPU limits can prevent any one container from monopolizing resources and causing performance problems for other containers on the same node.

  • Scaling: CPU requests determine how many replicas of your application can fit on a single node, and the Horizontal Pod Autoscaler uses CPU utilization (measured relative to the request) to automatically scale your application up or down based on demand.

Overall, setting CPU requests and limits in Kubernetes YAML is a best practice for ensuring the efficient and reliable operation of your containerized applications.
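The scaling behavior described above can be driven by the Horizontal Pod Autoscaler, which compares observed CPU utilization against the container's CPU request. As a sketch (assuming the go-app-deployment from this tutorial and a cluster with metrics-server available), you could enable autoscaling like this:

```shell
# Scale go-app-deployment between 1 and 5 replicas, targeting 50%
# utilization of each pod's CPU *request*
kubectl autoscale deployment go-app-deployment --cpu-percent=50 --min=1 --max=5

# Inspect the resulting HorizontalPodAutoscaler
kubectl get hpa go-app-deployment
```

Note that without a CPU request set, the autoscaler has no baseline to compute utilization against, which is one more reason to always set requests.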

Prerequisites

  • Kubernetes cluster access from any cloud provider. Alternatively, you can simply use Minikube; it is free and lets you create a single-node cluster in a minute.
  • Install kubectl - Kubernetes command-line tool
  • Sample application's Kubernetes deployment YAML file. You can use the deployment file specified in this repository.

Tutorial

Let's make use of the deployment.yaml file specified in this sample application. It looks as below; save it as go-app-deployment.yaml:



apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-app
  template:
    metadata:
      labels:
        app: go-app
    spec:
      containers:
      - name: go-app
        image: pavansa/golang-hello-world:latest
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"



Looking at the deployment file above, you can see that CPU requests and limits are not set. Let's add them.

To set CPU requests and limits for the go-app-deployment Deployment, you can add the following lines to your container specification:



resources:
  requests:
    cpu: "100m"
  limits:
    cpu: "200m"



Now, after adding the cpu requests and limits, the complete go-app-deployment.yaml looks as below



apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-app
  template:
    metadata:
      labels:
        app: go-app
    spec:
      containers:
      - name: go-app
        image: pavansa/golang-hello-world:latest
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
        resources:
          requests:
            cpu: "100m"
          limits:
            cpu: "200m"



In the above YAML file, we added the resources field to the container specification and set the CPU request to 100 millicores (100m) and the CPU limit to 200 millicores (200m). A millicore is one-thousandth of a CPU core, so 100m equals 0.1 CPU.
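Before deploying, it's worth validating that the edited YAML is well-formed; a client-side dry run (which makes no changes to the cluster, though kubectl still needs a reachable cluster for schema validation) can catch indentation mistakes in the resources block:

```shell
# Validate the manifest without creating anything on the cluster
kubectl apply -f go-app-deployment.yaml --dry-run=client
```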

Make sure you have a cluster ready. Assuming you have installed Minikube, use the command below to start it.



minikube start



You should see output confirming that Minikube started successfully.

Now, let's deploy the YAML using the below kubectl command



kubectl apply -f go-app-deployment.yaml



You should see output confirming that the deployment was created.

Let's run the below command to see if the pods are running successfully.



kubectl get pods



You should see the pod status as Running.
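If you prefer not to scan the full pod description, you can pull out just the resources stanza with a JSONPath query, selecting pods by the app=go-app label from our Deployment:

```shell
# Print only the resources section of the first matching pod
kubectl get pods -l app=go-app \
  -o jsonpath='{.items[0].spec.containers[0].resources}'
```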

Let's go a little deeper and describe the pod using the below command (replace the pod name with the one shown in your kubectl get pods output, since the suffix is generated).



kubectl describe pod go-app-deployment-699dcd8cd5-tcsfp



The output shows a completely healthy pod, with the Limits and Requests sections reflecting the values we set.

Best practices for setting CPU requests and limits

Setting CPU requests and limits requires careful consideration to ensure efficient resource utilization and predictable performance. Here are some best practices you can follow:

  • Set the CPU requests based on the application requirements - Set the CPU request to the amount of CPU the application actually needs under typical load. This ensures the scheduler reserves enough CPU for the application on its node.

  • Set the CPU limits based on the application performance - Set the CPU limit to the maximum amount of CPU the application can use without compromising performance. This ensures the application cannot exceed the limit and degrade other workloads; keep in mind that Kubernetes enforces the limit by throttling, so a limit set too low will slow the application itself.

  • Use fractional CPU values - Express CPU in millicores, such as 100m (equivalent to 0.1 CPU), rather than whole cores, such as 1 or 2, unless your application genuinely needs a full core. Fractional values make it easier to pack workloads onto nodes and to scale without over-reserving CPU.

  • Monitor the CPU usage of the containers - You should monitor the CPU usage of the containers and adjust the CPU requests and limits accordingly. It ensures that the application gets the required CPU resources and does not exceed the maximum limit.
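For the monitoring point above, kubectl top gives a quick view of actual CPU consumption. It requires the metrics-server add-on, which on Minikube can be enabled as follows:

```shell
# Enable the metrics-server add-on (Minikube only)
minikube addons enable metrics-server

# Show current CPU and memory usage per pod; compare these numbers
# against the requests/limits set in the Deployment
kubectl top pods
```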

Is It Recommended to Set CPU Requests and Limits?

Yes, it is recommended to set CPU requests and limits in Kubernetes YAML files. They help the Kubernetes scheduler allocate resources efficiently among the containers running in the cluster: the scheduler uses CPU requests to decide which nodes are suitable for running a container, and CPU limits prevent a container from consuming too much CPU and degrading the performance of other containers on the same node. The correct place to set them is the resources field in the container specification, which lets you specify the desired CPU and memory resources for your container.
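Since the resources field covers memory as well as CPU, a fuller container spec might look like the sketch below (the memory values here are illustrative, not taken from the tutorial's application):

```yaml
resources:
  requests:
    cpu: "100m"
    memory: "64Mi"   # illustrative value
  limits:
    cpu: "200m"
    memory: "128Mi"  # illustrative value
```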

Check out my other Kubernetes-related articles.
