Scaling & Optimizing Kubernetes with Karpenter - An AWS Community Day Talk

Romar Cablao - Oct 1

Overview

This blog post summarizes my presentation delivered at AWS Community Day PH 2024, held at the AWS Office in Taguig City, Philippines. The presentation explored automated scaling in Kubernetes and showcased Karpenter, an open-source tool for autoscaling cluster resources.

Kubernetes Scaling

While Kubernetes excels at scheduling and scaling workloads (the kube-scheduler places pods, and autoscalers such as the Horizontal Pod Autoscaler adjust replica counts), it has no built-in way to automatically manage the cluster's underlying compute resources (CPU, memory, and storage). This is where tools like Karpenter come in.

Karpenter continuously monitors unschedulable pods and their resource requirements. Based on this information, it selects the most suitable instance type from your cloud provider and provisions new nodes to accommodate the workload demands. This "just-in-time" provisioning ensures your applications always have the resources they need to run smoothly, without the risk of over-provisioning and incurring unnecessary costs.

Karpenter Diagram
Diagram Reference: https://karpenter.sh
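To make this concrete, here is a minimal sketch of a Deployment whose resource requests Karpenter inspects when sizing new nodes. The names and values here are illustrative, not taken from the demo repository:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app              # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: app
          image: nginx        # any container image works here
          resources:
            requests:         # Karpenter picks instance types that fit these requests
              cpu: "1"
              memory: 2Gi

If these pods cannot fit on existing nodes, Karpenter launches a node large enough to accommodate them.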

Also worth noting: Karpenter recently graduated from beta - v1.0 was released in August 2024.

Karpenter in Action

If you want to see Karpenter in action, you can use the OpenTofu template in the repository below to provision an Amazon EKS cluster with Karpenter pre-configured:

Scaling With Karpenter

This repository was made for a demo at AWS Community Day Philippines 2024. You may also want to watch Karpenter in action here.

Installation

Depending on your OS, select the installation method here: https://opentofu.org/docs/intro/install/
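For example, on macOS the install is a single Homebrew command (see the link above for other platforms):

brew install opentofu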

Provision the infrastructure

  1. Make the necessary adjustments to the variables.
  2. Run tofu init to initialize the modules and other necessary resources.
  3. Run tofu plan to check what will be created/deleted.
  4. Run tofu apply to apply the changes. Type yes when asked to proceed.
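In sequence, the commands from the steps above look like this, run from the repository root:

tofu init
tofu plan
tofu apply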

Fetch kubeconfig to access the cluster

aws eks update-kubeconfig --region $REGION --name $CLUSTER_NAME
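Once the kubeconfig is updated, you can confirm the cluster is reachable with a standard kubectl check:

kubectl get nodes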



For the NodePool configuration, you can use the one defined within the repository. The configuration would look like this:

Karpenter NodePool - 1

Karpenter NodePool - 2
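For reference, here is a minimal sketch of a v1 NodePool, loosely following the examples in the Karpenter documentation; the exact values in the repository may differ:

apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        # Constrain the instance types Karpenter is allowed to launch
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand", "spot"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default         # must match an EC2NodeClass in the cluster
  limits:
    cpu: 100                  # cap on total provisioned CPU
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m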

A video recording is also available to see Karpenter in action. A few things to note: the video shows two applications - (1) a terminal running eks-node-viewer on top, and (2) Lens showing the deployment we are about to scale along with the Karpenter logs.

Karpenter Demo Guide

The video focuses on three key actions to illustrate how Karpenter responds to changing resource demands (you can reproduce them with kubectl scale, as sketched after the list):

  1. Scaling from zero (0) to two (2) replicas: This demonstrates how Karpenter provisions new nodes when additional resources are required.
  2. Scaling from two (2) to six (6) replicas: This showcases Karpenter's ability to scale up further as demand increases.
  3. Scaling from six (6) back to zero (0): This demonstrates how Karpenter can also scale down and terminate nodes when resources are no longer needed, optimizing resource utilization.
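The scaling actions above map to standard kubectl commands; the deployment name demo-app is illustrative:

kubectl scale deployment demo-app --replicas=2
kubectl scale deployment demo-app --replicas=6
kubectl scale deployment demo-app --replicas=0

While scaling, you can follow Karpenter's decisions in its logs (the namespace depends on how it was installed):

kubectl -n karpenter logs deploy/karpenter -f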

By watching this video demonstration, you can gain a practical understanding of how Karpenter dynamically provisions and manages cluster resources based on workload demands.


Ready to explore the potential of Karpenter for your Kubernetes clusters? Check out the links below to get started 🚀

Documentation

Workshops

Blogs
