Screw The AI Hype: What Can It ACTUALLY Do For Cloud Native?

Michael Levan - Jul 9 - - Dev Community

The hype train is in full effect when it comes to AI, and for a good reason. The problem is that no one is talking about the good reason. Currently, the AI hype is all about generative text, terrible automation, and attempting to perform actions that don’t make sense.

In this blog post, engineers will learn what GenAI can actually do for them from a technical perspective.

The Problem With LLMs

Do you like glue on your pizza? Yeah, me neither.

However, LLMs think we do! Yes, this was an actual suggestion from a popular LLM-based chatbot recently. Put glue on your pizza to help the cheese not fall off.

Let’s talk about what LLMs are. Large Language Models are models that, in effect, “train themselves”: they go out on the internet and ingest data that already exists. Standard models are trained on data sets that humans curate, and those models are then fed to AI workloads. LLMs skip that curation step and pull whatever is already out there… and we all know the quality of data on the internet.

Therein lies the problem. We still need GOOD data to feed LLMs. Otherwise, they will scour the trenches of Reddit for information. This leads us to the question “should gatekeepers be put in place for where LLMs can pull data from?” The answer is yes, but the next question is “Whose responsibility is that?”.

LLMs aren’t meant to help us with the creative aspect of anything. They’re meant to remove tasks that we shouldn’t have to do manually. It’s literally just a better version of automation.

Let’s talk about a few of those tasks.

Low-Hanging Fruit

The first thing that comes to mind is highly repetitive tasks.

This brings us back to when automation first went through its popularity phase. In the world of Systems Administration, automation with PowerShell, Bash, or Python became more and more necessary. Not because it was taking anyone’s job, but because it freed engineers up to do different tasks.
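As a toy illustration of the kind of repetitive check scripting absorbs (the function name, paths, and threshold here are made up for illustration, not from any real tool):

```shell
#!/usr/bin/env bash
# Hypothetical helper: print any *.log file in a directory larger than
# a byte threshold, a check that would otherwise mean opening each file
# by hand in a GUI.
large_logs() {
  local dir="$1" threshold="$2" f size
  for f in "$dir"/*.log; do
    [ -e "$f" ] || continue               # skip if the glob matched nothing
    size=$(wc -c < "$f" | tr -d ' ')      # file size in bytes
    if [ "$size" -gt "$threshold" ]; then
      echo "LARGE: $f ($size bytes)"
    fi
  done
}
```

Called as `large_logs /var/log 1000000`, it does in one command what would take dozens of clicks, which is the whole point of automation 1.0.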

Running commands on a terminal was the far superior solution when comparing it to clicking “Next” a bunch of times on a GUI.

Now, we want to go faster. The terminal is still better for repeatability when compared to a GUI, but chatbots are superior. For example, running `kubectl logs` for the millionth time isn’t necessary. Instead, engineers can use a tool like k8sgpt to pull all of the log errors and warnings for them. In this instance, k8sgpt isn’t taking jobs. It’s taking the trivial work off engineers’ plates, just like automation did.
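To make the repetitive part concrete, the manual version of log triage boils down to filtering the same patterns over and over. A minimal sketch (the `filter_warnings` name is hypothetical; in practice you’d pipe `kubectl logs` output into it):

```shell
#!/usr/bin/env bash
# Toy stand-in for the repetitive part of log triage: keep only
# error/warning lines from whatever is piped in.
filter_warnings() {
  grep -Ei 'error|warn' || true   # don't fail the script when nothing matches
}
```

Tools like k8sgpt wrap this kind of loop for every pod in the cluster so you never have to type it again.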

Troubleshooting

From a troubleshooting perspective, GenAI makes a lot of sense. Thinking about it in the “automation 2.0” mindset, you can take what you’ve learned from troubleshooting on a terminal and make your life much easier.

Let’s take the k8sgpt example from the previous section.

In today’s world, you have two choices:

  1. Run `kubectl events` and `kubectl logs` for the millionth time.
  2. Have a tool that can handle it for you.

Option two sounds better.

With some GenAI magic, k8sgpt can scan your cluster to tell you if any problems are happening. You as the engineer still have to fix those problems, so the creative aspect is still in your hands. You just don’t have to do the mundane task of running the same command hundreds of times anymore.

Do the following if you want to see an example in your Kubernetes cluster.

  1. Install k8sgpt


brew tap k8sgpt-ai/k8sgpt
brew install k8sgpt


  2. Generate an OpenAI token.


k8sgpt generate





  3. Add the API key so k8sgpt can authenticate to OpenAI.


k8sgpt auth add



  4. Run the analyze command to see if any problems are occurring within your environment.



k8sgpt analyze --explain





Templates

Last but certainly not least, and the most important for engineers, is code templates.

Think about it - how many times have you written the same function/method/code block outline? In reality, what primarily changes is the logic within the function/method. In terms of Kubernetes Manifests, it’s the same. The values change within a Manifest, but not the template itself.
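A typical manifest template of the kind described might look like the sketch below. This is a generic example, not the output of any particular chatbot; the ALL-CAPS values are the placeholders you’d swap out per use case:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: APP_NAME            # change to your app's name
  labels:
    app: APP_NAME
spec:
  replicas: 2               # change to your desired replica count
  selector:
    matchLabels:
      app: APP_NAME
  template:
    metadata:
      labels:
        app: APP_NAME
    spec:
      containers:
        - name: APP_NAME
          image: IMAGE:TAG          # change to your container image
          ports:
            - containerPort: 8080   # change to your app's port
```

The structure stays the same every time; only the values change, which is exactly why this is a good job for GenAI.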

Let’s take an example from Bing Copilot.

I asked it to create a Manifest template for me, and it did a really good job.


What I liked the most is that not only did it create the template, but it told me what I needed to change within the code to make it unique to my particular use case.


Closing Thoughts

Generative AI, chatbots, automation 2.0, or whatever else you’d like to call it is a good thing. It’s a natural next step in our tech journey, not a bad one. Everyone just has to remember that it’s not for:

  1. Thinking for you.
  2. Being creative.
  3. Getting you off the hook from doing actual work.

It’s an implementation to remove the low-hanging fruit and repetitive tasks that don’t make sense for you to do.
