How To Install a Kubernetes Container Storage Interface Driver

Michael Levan - Sep 7 '22 - Dev Community

When you want to work with storage inside of Kubernetes, that is, when you want to utilize Volumes, you need a way to communicate with those volumes. That way, you can do things like store Pod data on a hard drive or run stateful applications.

In this blog post, you’ll learn about what CSI is and how you can implement a CSI driver in your Kubernetes cluster.

What’s CSI?

CSI stands for Container Storage Interface, which is one of the many interfaces that you can use in Kubernetes. An interface is essentially a standard for how you interact with resources outside of Kubernetes. For example, the Container Network Interface (CNI) is a standard for how you get networking components up and running in a Kubernetes cluster. A CSI is a standard for how you get storage components up and running in a Kubernetes cluster.

With CSI’s, you can interact with storage on a local server or even outside of the cluster itself. For example, there’s a CSI for interacting with Azure Storage and a CSI to interact with AWS S3. The CSI’s allow you to use Azure Storage or AWS S3 to store Pod data. That way, if a Pod goes away (because they’re ephemeral), the data still exists.

What a CSI really does for vendors, like Azure and AWS, is give them a standard way to expose storage resources to Kubernetes. That way, Kubernetes can utilize the storage.

There are a lot of CSIs, not just for the cloud. For example, there's a CSI for NetApp.

By definition, a Container Storage Interface is a standard to expose arbitrary block and file storage systems to containerized workloads.

Before CSI

Even before the Container Storage Interface existed, you could still interact with volumes and vendors that had storage solutions. The biggest problem was that, to do this, the Kubernetes release itself, as in the actual core platform code, needed to include the storage interface. Without it, the storage interface wouldn't work. This approach was called "in-tree". For example, if a vendor found a bug in its storage interface, it had to wait until a new Kubernetes release to get the bug fix out to users. That was a hassle for the vendors, but it was also a hassle for the Kubernetes maintainers, who had to do a ton of extra work to manage the storage interfaces.

With CSI, all of those problems go away. With what's now called the "out-of-tree" approach, vendors can create their own storage drivers and release them outside of the core Kubernetes platform code, giving them the ability to manage and maintain their own release cycles. The only thing they have to ensure is that they follow the CSI standard and its requirements when building a driver.

Installing A CSI Driver

Now that you know why you’d want to use a CSI, let’s learn how to work with them.

First and foremost, installing a CSI driver is going to be different for almost every vendor.

For example, you can install a CSI driver in AKS with the following command:

az aks update -n myAKSCluster -g myResourceGroup --enable-disk-driver --enable-file-driver --enable-snapshot-controller
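
To confirm the drivers were enabled, you can check the cluster's storage profile. A quick sanity check, assuming the same cluster and resource group names as above:

az aks show -n myAKSCluster -g myResourceGroup --query storageProfile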

You can install a CSI driver for EKS with the following command:

eksctl create addon --name aws-ebs-csi-driver --cluster my-cluster --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKS_EBS_CSI_DriverRole --force
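
To verify the add-on was created, you can ask eksctl for it, again assuming the same cluster name as above:

eksctl get addon --name aws-ebs-csi-driver --cluster my-cluster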

You can install a CSI driver for GKE with the following command:

gcloud container clusters create CLUSTER-NAME \
    --addons=GcePersistentDiskCsiDriver \
    --cluster-version=VERSION
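
However the driver ends up installed, you can sanity-check that it registered with the cluster by listing the CSIDriver objects. For the GKE example above, you should see pd.csi.storage.gke.io in the output:

kubectl get csidriver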

As you can see, it’s going to vary based on the vendor. The best thing that you can do is literally Google “how to install CSI for xvendor”

Here’s a list of drivers that are currently supported: https://kubernetes-csi.github.io/docs/drivers.html

The good news is, there's one thing that's almost always going to be the same: how you use the CSI. The way to use a CSI is to create a StorageClass inside of your Kubernetes cluster that points to the CSI driver.

For example, here’s what it would look like for Azure.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: custom-managed-premium
provisioner: disk.csi.azure.com
reclaimPolicy: Delete
parameters:
  storageAccountType: Premium_LRS

And here’s what it would look like for AWS

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp2
  csi.storage.k8s.io/fstype: ext4

As you can see, it’s utilizing the same kind along with API spec and similar maps.

When you’re using a CSI, there are two things to remember:

  • Research how to get the CSI driver installed for your specific vendor.
  • Initialize the CSI driver with a StorageClass.

That’s how you can get started with CSI! Thanks for reading.
