4 Methods Of Kubernetes Isolation

Michael Levan - Oct 15 '23 - Dev Community

Isolating components in a platform as large and versatile as Kubernetes is no small task. In fact, the more you think about it, the harder isolating Kubernetes gets.

Think about it - you not only have the platform itself, but also the cloud it's running in, any other platforms it's connected to (like on-prem environments), and the containerized applications running inside of it.

Isolating Kubernetes as a whole can be a large undertaking, so in this blog post, you'll learn about four key methods for isolating Kubernetes resources and workloads, like Pods.

Namespaces

The first level of isolation, or rather, the entry point to isolation, is Namespaces. Namespaces let you isolate workloads logically, but traffic can still flow between them: PodA in NamespaceA can talk to PodB in NamespaceB by default because the Kubernetes network is flat by design. However, once workloads are logically grouped into Namespaces, you can apply stronger security-related isolation techniques, like Network Policies, against a given Namespace.

To create a Namespace, you can use kubectl.

kubectl create namespace tester

Once the Namespace is created, you can create a Pod or any other namespaced Kubernetes resource inside of it.

apiVersion: v1
kind: Pod
metadata:
  name: static-web
  namespace: tester
spec:
  containers:
    - name: web
      image: nginx:latest
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
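To confirm that the Pod landed in the tester Namespace, list the Pods there:

kubectl get pods -n tester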

RBAC

Role-Based Access Control, or RBAC, handles the authorization and permissions piece of Kubernetes, primarily around users, groups/teams, and Service Accounts. The biggest thing to keep in mind is that although Kubernetes has an RBAC solution for authorization out of the box, there's no authentication method out of the box. Because of this, engineers will typically go towards an OpenID Connect (OIDC) solution like Azure Active Directory or AWS IAM.
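As an illustration, on self-managed clusters an OIDC provider is typically wired into the API server with flags like the ones below. The issuer URL and claim names here are placeholders for a hypothetical provider; managed offerings like AKS and EKS handle this wiring for you.

kube-apiserver \
  --oidc-issuer-url=https://login.example.com/my-tenant/v2.0 \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-groups-claim=groups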

There are two kinds of RBAC implementations:

  • Cluster-scoped
  • Namespace-scoped

Roles and RoleBindings are Namespace-scoped.

ClusterRoles and ClusterRoleBindings are cluster-scoped.

They are the exact same in terms of configuration other than the scoping.

To create a new Role, you can use the Role resource/object within the rbac.authorization.k8s.io named API group.

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: podcreator
  namespace: default
rules:
# Pods live in the core ("") API group.
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "update", "list", "create"]
# Deployments live in the "apps" API group, not the core group.
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "update", "list", "create"]

You can then bind (attach) the role via the RoleBinding.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: write-pod-default
  namespace: default
subjects:
- kind: ServiceAccount
  name: podcreator
  apiGroup: ""
  namespace: default
roleRef:
  kind: Role
  name: podcreator # must match the name of the Role created above
  apiGroup: rbac.authorization.k8s.io
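With the binding in place, you can verify what the Service Account is allowed to do by impersonating it with kubectl auth can-i:

kubectl auth can-i create pods --as=system:serviceaccount:default:podcreator -n default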

As you can see below, the configuration for a ClusterRole and ClusterRoleBinding is the same other than the resource/object name.

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-pod-global
subjects:
- kind: ServiceAccount
  name: mikeuser
  apiGroup: ""
  namespace: default
roleRef:
  kind: ClusterRole
  name: reader
  apiGroup: rbac.authorization.k8s.io
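The same check works for the cluster-scoped binding - for example, confirming that the mikeuser Service Account can list Pods across all Namespaces:

kubectl auth can-i list pods --all-namespaces --as=system:serviceaccount:default:mikeuser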

Network Policies

As mentioned in the Namespace section, the Kubernetes network is flat. Pods can talk to other Pods in various Namespaces without any boundary out of the box. To get a boundary for incoming (ingress) and outgoing (egress) traffic, you need to configure a Network Policy. Network Policies let you create what is effectively a firewall rule. Note that they're only enforced if your cluster's network (CNI) plugin supports them - Calico and Cilium do, for example.

To test out a Network Policy, run a few busybox Pods like in the code below.

kubectl run busybox1 --image=busybox --labels app=busybox1 -- sleep 3600
kubectl run busybox2 --image=busybox --labels app=busybox2 -- sleep 3600

Next, get the IP Addresses of each Pod.

kubectl get pods -o wide

With the IP address of busybox1 in hand, ping it from busybox2. You'll see that the ping goes through successfully.

kubectl exec -ti busybox2 -- ping -c3 ip_of_busybox_one

Create a Network Policy that blocks all ingress/incoming traffic to busybox1.

kubectl apply -f - <<EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-deny-all
spec:
  podSelector:
    matchLabels:
      app: busybox1
  ingress: []
EOF

Once you try to ping busybox1 again, you’ll see that the ping now fails.

kubectl exec -ti busybox2 -- ping -c3 ip_of_busybox_one
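To selectively re-open traffic on top of the deny-all policy, you'd layer an allow rule. The sketch below (the policy name is just an example) permits ingress to busybox1 only from Pods labeled app=busybox2:

kubectl apply -f - <<EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-busybox2-to-busybox1
spec:
  podSelector:
    matchLabels:
      app: busybox1
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: busybox2
EOF

After applying it, the ping from busybox2 succeeds again, while traffic from any other Pod stays blocked.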

Policy Enforcement

Last but certainly not least is policy enforcement. Policy Enforcement gives you the ability to define rules for how your environment is allowed to behave. The rules can govern anything from which applications can access a particular component of the environment to what someone can do once they're inside of the environment.

The three popular policy enforcement implementations today are:

  1. Open Policy Agent (OPA)
  2. Kyverno
  3. Admission Controllers

OPA and Kyverno are both third-party policy enforcers, so they aren't out of the box when it comes to Kubernetes. The biggest difference between the two is that OPA works for platforms outside of Kubernetes, while Kyverno does not.

Admission Controllers are similar to OPA and Kyverno from a technical and implementation perspective - in fact, both tools hook into Kubernetes through the admission control mechanism. Admission Controllers are great because they're built into Kubernetes. The biggest reason you'll see engineers reach for OPA or Kyverno instead is that custom built-in Admission Controllers must be written in Go.
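For context, tools like OPA (via Gatekeeper) and Kyverno register themselves with the cluster as dynamic admission webhooks. Below is a minimal sketch of a ValidatingWebhookConfiguration - the service name, Namespace, and path are hypothetical, and a real configuration also needs a caBundle for the webhook's TLS certificate:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-tag-check
webhooks:
  - name: image-tag-check.example.com
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        namespace: policy-system # hypothetical
        name: image-tag-webhook # hypothetical
        path: /validate
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail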

As an example, let's see how OPA (via Gatekeeper) works to create a policy that disallows the latest tag on a container image.

First, there's the Constraint configuration. The config specifies what resources the constraint applies to - in this case, Pods. (When applying these to a cluster, the ConstraintTemplate shown next has to be created first, since it's what defines the Constraint's kind.)

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: BlockLatestTag # the kind defined by the ConstraintTemplate below
metadata:
  name: nolatestcontainerimage
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    annotation: "no-latest-tag-used"

Next, you'd implement the ConstraintTemplate. The Template specifies the rule that you're setting up. In this case, it's to block the latest container image tag.

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: blocklatesttag
  annotations:
    description: Blocks container images from using the latest tag
spec:
  crd:
    spec:
      names:
        kind: BlockLatestTag # Gatekeeper requires metadata.name to be the lowercase form of this kind
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package blocklatesttag
        violation[{"msg": msg, "details": {}}] {
          input.review.object.kind == "Pod"
          imagename := input.review.object.spec.containers[_].image
          # Matches explicit ":latest" tags; untagged images (which
          # default to latest) would need an additional check.
          endswith(imagename, ":latest")
          msg := "Images with the tag \"latest\" are not allowed"
        }
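For comparison, here's roughly what the same rule looks like in Kyverno - a sketch modeled on Kyverno's declarative validate pattern, with an illustrative policy name and message:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce
  rules:
    - name: validate-image-tag
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images with the tag 'latest' are not allowed."
        pattern:
          spec:
            containers:
              - image: "!*:latest"

No Rego is involved - Kyverno policies are plain YAML, which is the usability trade-off most teams weigh against OPA's flexibility.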

Policy Enforcement is great because it gives you granular control over your Kubernetes environment, for everything from securing resources to enforcing best practices.
