Using LLMs For Kubernetes: Enter k8sgpt

Michael Levan - Aug 5 - Dev Community

GenAI and LLMs are a big “buzz” right now, but there’s some real “good” behind them - automation.

In an environment like Kubernetes, GenAI can be used for Automation 2.0.

It can take the tedious, repetitive tasks off your plate so you don’t have to do them by hand, just like when automation first started to become popular. GenAI is just taking it a step further.

In this blog post, you’ll learn how to use Automation 2.0 to scan and help troubleshoot a Kubernetes cluster.

Environment

k8sgpt works on any Kubernetes cluster.

If you don’t already have a Kubernetes cluster up and running, here’s some infrastructure-as-code that can help get one running (plus a quick local option after the links below).

EKS

https://github.com/AdminTurnedDevOps/Kubernetes-Quickstart-Environments/tree/main/aws/eks

AKS

https://github.com/AdminTurnedDevOps/Kubernetes-Quickstart-Environments/tree/main/azure/aks

On-Prem

https://github.com/AdminTurnedDevOps/Kubernetes-Quickstart-Environments/tree/main/Bare-Metal/kubeadm

GKE

https://github.com/AdminTurnedDevOps/Kubernetes-Quickstart-Environments/tree/main/Google/GKE
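
If you’d rather not provision cloud infrastructure just to follow along, a throwaway local cluster works too. Below is a minimal sketch assuming Docker and kind are installed; the cluster name k8sgpt-demo is just an example.

kind create cluster --name k8sgpt-demo

# kind switches your kubectl context automatically; this just confirms it works
kubectl cluster-info --context kind-k8sgpt-demo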

Application To Test With

If you don’t already have an application to use, you can deploy the demo app below so you have something to scan with k8sgpt. This demo app is from Google.

First, clone the repo and cd into the proper directory.



git clone --depth 1 --branch v0 https://github.com/GoogleCloudPlatform/microservices-demo.git

cd microservices-demo/



Create a Namespace for the application stack to be deployed to.



kubectl create ns microdemo



Apply the Kubernetes manifest, which contains all of the decoupled apps.



kubectl apply -f ./release/kubernetes-manifests.yaml -n microdemo



You should see output similar to the below.



deployment.apps/currencyservice created
service/currencyservice created
serviceaccount/currencyservice created
deployment.apps/loadgenerator created
serviceaccount/loadgenerator created
deployment.apps/productcatalogservice created
service/productcatalogservice created
serviceaccount/productcatalogservice created
deployment.apps/checkoutservice created
service/checkoutservice created
serviceaccount/checkoutservice created
deployment.apps/shippingservice created
service/shippingservice created
serviceaccount/shippingservice created
deployment.apps/cartservice created
service/cartservice created
serviceaccount/cartservice created
deployment.apps/redis-cart created
service/redis-cart created
deployment.apps/emailservice created
service/emailservice created
serviceaccount/emailservice created
deployment.apps/paymentservice created
service/paymentservice created
serviceaccount/paymentservice created
deployment.apps/frontend created
service/frontend created
service/frontend-external created
serviceaccount/frontend created
deployment.apps/recommendationservice created
service/recommendationservice created
serviceaccount/recommendationservice created
deployment.apps/adservice created
service/adservice created
serviceaccount/adservice created



To test that the application stack came up, run the following:



kubectl get pods -n microdemo



You should see an output similar to the one below.



NAME                                     READY   STATUS    RESTARTS   AGE
adservice-77c7c455b7-nbdq8               1/1     Running   0          2m6s
cartservice-6dc9c7b4f8-fqxrn             1/1     Running   0          2m8s
checkoutservice-5f9954bf9f-cqwws         1/1     Running   0          2m9s
currencyservice-84cc8dbfcc-cwvtk         1/1     Running   0          2m10s
emailservice-5cc954c8cc-d595s            1/1     Running   0          2m7s
frontend-7d56967868-bjxj5                1/1     Running   0          2m6s
loadgenerator-66c47bdc74-2jh2c           1/1     Running   0          2m10s
paymentservice-55646bb857-clv8d          1/1     Running   0          2m7s
productcatalogservice-6d58b86c5f-mmn77   1/1     Running   0          2m10s
recommendationservice-5846958db7-lp969   1/1     Running   0          2m6s
redis-cart-bf5c68f69-lq9sf               1/1     Running   0          2m8s
shippingservice-67989cd745-82qp4         1/1     Running   0          2m9s


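If you’d rather block until everything reports Ready instead of re-running kubectl get pods, kubectl wait can do that. This is just a convenience sketch; the five-minute timeout is an arbitrary choice.

kubectl wait --for=condition=Ready pods --all -n microdemo --timeout=300s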

Once all workloads are created, there’s a Service called frontend-external. By default, it’s a LoadBalancer-type Service, so the cluster tries to provision a load balancer in front of the frontend. However, if you’re on a cluster that cannot create load balancers, you can use the following command to access the UI instead.



kubectl port-forward -n microdemo svc/frontend-external 8080:80


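If you’re not sure which situation you’re in, checking the Service first shows its type and whether an external address was assigned (the EXTERNAL-IP column stays <pending> on clusters without a load balancer implementation):

kubectl get svc frontend-external -n microdemo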

Installing k8sgpt

k8sgpt can be installed in various environments. You can find all of the installation methods in the k8sgpt documentation.

To run k8sgpt on a Mac, run the following Homebrew commands.



brew tap k8sgpt-ai/k8sgpt
brew install k8sgpt


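Assuming Homebrew put the binary on your PATH, a quick sanity check confirms the install before moving on:

k8sgpt version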

Next, use the generate command. It opens the OpenAI API key page in your web browser.



k8sgpt generate



You’ll have to sign up for an OpenAI account (signing up is free) and generate an API token, as k8sgpt uses OpenAI on the backend.


Once complete, run the auth add command to configure OpenAI as the backend AI provider k8sgpt will use.



k8sgpt auth add


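If you’d rather script this than paste the key interactively, auth add can also take the backend, model, and key as flags (check k8sgpt auth add --help on your version; the model name below is only an example, and the key is assumed to already be exported as OPENAI_API_KEY). k8sgpt auth list then shows which backends are configured.

k8sgpt auth add --backend openai --model gpt-3.5-turbo --password $OPENAI_API_KEY

k8sgpt auth list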

Testing Workloads With k8sgpt

Now that k8sgpt is set up and you’re authenticated, you can begin running some troubleshooting commands.

The first and most straightforward command is analyze. It scans the cluster and gives you a brief description of any issues it finds.



k8sgpt analyze



Below is an example output that shows an issue with Kubeflow.



AI Provider: AI not used; --explain not set

0: StatefulSet kubeflow/metacontroller()
- Error: StatefulSet uses the service kubeflow/ which does not exist.


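If your own cluster is healthy, analyze will simply come back empty. To watch it catch something on the demo stack, you can break a workload on purpose; the Deployment name and image tag below are made up purely for illustration, and you can delete the Deployment again once you’ve seen the result.

kubectl create deployment broken-demo --image=nginx:this-tag-does-not-exist -n microdemo

k8sgpt analyze --namespace=microdemo

kubectl delete deployment broken-demo -n microdemo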

The --with-doc flag gives you a bit more explanation by pulling in the relevant Kubernetes documentation.



k8sgpt analyze --with-doc





0: StatefulSet kubeflow/metacontroller()
- Error: StatefulSet uses the service kubeflow/ which does not exist.
  Kubernetes Doc: serviceName is the name of the service that governs this StatefulSet. This service must exist before the StatefulSet, and is responsible for the network identity of the set. Pods get DNS/hostnames that follow the pattern: pod-specific-string.serviceName.default.svc.cluster.local where "pod-specific-string" is managed by the StatefulSet controller.



You can also use the --explain flag to get a more verbose, AI-generated explanation.

The last option covered here is the --filter flag, which lets you filter results by Kubernetes resource/object type.

Below is an example filtering Pods and Services.



k8sgpt analyze --explain --filter=Pod --namespace=microdemo



k8sgpt analyze --explain --filter=Pod --namespace=microdemo
AI Provider: openai

No problems detected



k8sgpt analyze --explain --filter=Service --namespace=microdemo

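The set of analyzers you can pass to --filter depends on your k8sgpt version; you can list them, along with which ones are active by default, with:

k8sgpt filters list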




Closing Thoughts

In this quick tutorial, you walked through how to get an LLM-based Kubernetes troubleshooter up and running. Although GenAI and LLMs are heavily hyped right now, they can still do a fair amount of good when used in ways that actually help you. k8sgpt is one of those ways, especially if you don't want to run the same mundane commands over and over again to find issues. Yes, you still have to fix the issues, but k8sgpt helps you figure out the root cause.
