KCSA Part 3: Kubernetes Cluster Component Security

Michael Levan · Apr 8 '23 · Dev Community

In part three of the KCSA blog series, you'll learn about the second domain objective for the Kubernetes and Cloud Security Associate (KCSA) certification - Kubernetes Cluster Component Security.

Throughout this blog post, I’m going to explain each section of the domain/objective and link to the relevant Kubernetes docs. As I’m sure you can imagine, these are vast topics, and covering them all in depth would make for far too long a blog post.

You can find parts one and two of this series here:

Please note that the order of this series doesn’t match the order of the exam objectives. For example, this blog post covers domain objective two, but it’s the third part of the series. The reason is that the domain objectives changed while I was writing the series, which makes sense because the certification is currently in beta.

Special Announcement: I will be officially creating the KCSA course for LinkedIn Learning! It’ll most likely be released in Q3 or Q4 of 2023, so although it’s a ways out, it makes sense from a scheduling perspective, as the certification should be out of beta around that time.

Control Plane Security

In this section, you’re going to learn about specific security concerns around Control Plane components and how you can mitigate risks. Although the Control Plane components can be split up (for example, Etcd can run on its own server), they’re typically grouped together on the Control Plane.

As you read through this section, you’ll also see aspects of Client Security in terms of overall authentication and authorization to each resource.

API Server

The API server is how you communicate with Kubernetes. Every transaction, whether you’re creating a new Pod, reading information about a Service, listing Kubernetes Resources, or automating deployments, goes through the Kubernetes API server. You or your automation tooling interacts with the API server, and if the request is authorized (that is, you or the tooling has the proper access), the action is taken.

To secure the API server, do the following:

  1. Ensure that whoever has access to the API server has the proper authorization/permissions. For example, there’s almost no reason why every engineer/developer needs to have access to create Pods. They may, however, need access to list/read Pods.
  2. Ensure that the API server is kept up to date so that security patches and version-specific fixes are applied.
  3. Ensure that the API server, when possible, is not exposed to the public internet.

More information here:
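As a sketch of point 1 above, you can use RBAC to give engineers read-only access to Pods rather than the ability to create them. The namespace and group names below are illustrative, not from the exam or the docs:

```yaml
# Role granting read-only access to Pods in the "dev" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods"]
    # Note: get/list/watch only -- no "create", matching point 1 above.
    verbs: ["get", "list", "watch"]
---
# Bind the Role to a group of developers.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: dev
subjects:
  - kind: Group
    name: developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because this is a namespaced Role (not a ClusterRole), the access is also scoped to a single namespace, which further limits the blast radius.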

Controller and Scheduler

The Controller in a Kubernetes cluster is how Kubernetes ensures that the current state matches the desired state. If the Kubernetes Manifest, for example, specifies that there should be three replicas and only two exist, the ReplicaSet Controller will perform actions to ensure that the third replica gets deployed (if possible).

The Scheduler ensures that Kubernetes resources (Pods, for example) get scheduled on specific Worker Nodes. It comes down to which Worker Nodes are available and accepting Pods, based on the resources (CPU, memory) they have free.

In Kubernetes, you’ll see two types of Controllers:

  • Controllers for the cluster components (Deployment Controllers, ReplicaSet Controllers, etc.).
  • Cloud Controllers.

The overall security around Controllers and Schedulers really comes down to ensuring:

  1. Only those who actually need access have it (least privilege).
  2. Controller and Scheduler versions are kept up to date.

More information here:
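As one concrete example of least privilege for Controllers: on clusters where you run the Control Plane yourself (for example, kubeadm-style static Pods), the kube-controller-manager’s `--use-service-account-credentials` flag gives each individual controller its own ServiceAccount identity instead of one shared, highly privileged one. A minimal sketch, with the image version and file location being illustrative:

```yaml
# Excerpt of a static Pod manifest for the controller manager
# (typically /etc/kubernetes/manifests/kube-controller-manager.yaml on kubeadm clusters).
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
    - name: kube-controller-manager
      image: registry.k8s.io/kube-controller-manager:v1.27.0
      command:
        - kube-controller-manager
        # Give each controller its own ServiceAccount credentials instead of
        # one shared, highly privileged identity (least privilege).
        - --use-service-account-credentials=true
        # Only serve the controller manager's endpoints locally.
        - --bind-address=127.0.0.1
```

With per-controller credentials, RBAC rules (and audit logs) can distinguish, say, the ReplicaSet Controller from the Node Controller.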

Etcd

Etcd is the datastore of Kubernetes. Think of it as the database that contains the cluster’s state. If Etcd is compromised, your entire cluster and every Kubernetes Resource running on that cluster is compromised.

To mitigate risks in Etcd, do the following:

  1. Ensure Etcd is not exposed to the public internet.
  2. Ensure that only the individuals who need access to Etcd have it. There’s no reason every engineer needs admin/root access to Etcd.
  3. mTLS should be used to communicate with Etcd where possible.

More information here:
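The mTLS point above maps directly to etcd’s TLS flags. Below is a sketch of the relevant part of an etcd static Pod manifest; the certificate paths shown are typical kubeadm defaults and may differ in your cluster:

```yaml
# Excerpt of a static Pod manifest for etcd showing the TLS-related flags.
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  containers:
    - name: etcd
      image: registry.k8s.io/etcd:3.5.7-0
      command:
        - etcd
        # Serve client traffic over TLS only.
        - --cert-file=/etc/kubernetes/pki/etcd/server.crt
        - --key-file=/etc/kubernetes/pki/etcd/server.key
        # Require clients (e.g., the API server) to present a valid
        # certificate signed by this CA (mutual TLS).
        - --client-cert-auth=true
        - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
        # Require peer etcd members to authenticate with certificates as well.
        - --peer-client-cert-auth=true
        - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
        # Listen locally -- never on a public interface.
        - --listen-client-urls=https://127.0.0.1:2379
```

Together with point 1 (no public exposure), this means only a client holding a certificate signed by the cluster’s etcd CA can read or write cluster state.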

Worker Node Components

When you’re thinking about Worker Node security, four components typically come up:

  • Kubelet (also on the Control Plane)
  • Container Runtime (containerd, CRI-O, etc.)
  • kube-proxy
  • Container Network Interface (CNI)

To secure all of these components, it comes down to the following:

  1. Ensuring that each is up to date. For example, if a new version of the CNI comes out, you want to see if there are any security bug fixes.
  2. RBAC.
  3. mTLS to interact with each resource and cluster component.
  4. Proper network policies and overall policy enforcement for the networking components (kube-proxy and CNI). You also want to ensure proper policy enforcement for everything in a Kubernetes cluster, but that’s outside of the scope of this discussion.
  5. Isolating nodes.
  6. Audit logging.
  7. Proper scanning of your cluster and Kubernetes resources, along with CIS Benchmarks.
  8. Monitoring network traffic.

Although each of the components is different, you can think about securing them in more or less the same way.
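Point 4 above (network policies) can be sketched with a default-deny NetworkPolicy: nothing in the namespace may send or receive traffic unless another policy explicitly allows it. The namespace name is illustrative, and enforcement requires a CNI plugin that supports NetworkPolicy (for example, Calico or Cilium):

```yaml
# Default-deny policy: selects every Pod in the "dev" namespace
# (empty podSelector) and allows no ingress or egress by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: dev
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

From this baseline, you then add narrower policies that allow only the traffic each workload actually needs.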

Storage

When you’re thinking about securing storage, you want to ensure that whatever has the ability to be encrypted is encrypted. For example, if you spin up your own Kubernetes cluster, you can encrypt Etcd (what Kubernetes uses as a data store).
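As a sketch of that idea on a cluster you manage yourself: the API server can be pointed at an EncryptionConfiguration file (via its `--encryption-provider-config` flag) so that Secrets are encrypted before they’re written to Etcd. The key below is a placeholder; you generate your own 32-byte, base64-encoded key:

```yaml
# Encrypt Secrets at rest in Etcd.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # aescbc encrypts all new writes with the key below.
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      # identity allows reading values written before encryption was enabled.
      - identity: {}
```

After enabling this, existing Secrets can be rewritten (for example, by re-applying them) so they get encrypted as well.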

Outside of Etcd, you can encrypt the volumes that you store your application data on. For example, if you’re storing Pod data on an EBS volume in AWS, you can enable encryption on the volume; block-level encryption like this is generally transparent to the application stack (the container) within the Pod, though you should confirm your application still behaves as expected.
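For the EBS example, a sketch of a StorageClass (using the AWS EBS CSI driver) that makes every dynamically provisioned volume encrypted at rest; the StorageClass name is illustrative:

```yaml
# Any PersistentVolumeClaim that references this StorageClass gets an
# encrypted EBS volume (using the account's default KMS key when no
# specific key is configured).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-encrypted
provisioner: ebs.csi.aws.com
parameters:
  encrypted: "true"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

Making encryption the default at the StorageClass level means individual teams don’t have to remember to opt in per volume.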
