Deploying Linkerd In The Cloud

Michael Levan - Apr 30 '23 - - Dev Community

Where are you running your Managed Kubernetes Cluster?

Azure? AWS? GCP?

Regardless of where you’re running it, you can deploy Linkerd in a straightforward and engineering-focused fashion.

In this blog post, you’ll see the various deployment methods inside the cloud to deploy Linkerd.

Generating Certificates

The goal of this blog post is to show a few different production-ready methods for deploying Linkerd in the cloud.

With some of the methods will come the need for your own certificates for mTLS.

Because you need certificates, you need a way to create them. You can create certificates any way you're comfortable with, whether that's with openssl or Let's Encrypt.

For the purposes of this blog post, you’ll see how to do it with step. If you don’t already have step installed, you can do so here.

Depending on the Operating System you’re on, the step installation varies. For example, on macOS, you can use Homebrew.



brew install step



If you’re running on a Linux distro, like Ubuntu, you can use the following Debian method:



wget https://dl.step.sm/gh-release/cli/docs-cli-install/v0.23.1/step-cli_0.23.1_amd64.deb
sudo dpkg -i step-cli_0.23.1_amd64.deb



Once installed, you can create the certificate and key needed for mTLS with the command below.



step certificate create root.linkerd.cluster.local ca.crt ca.key \
--profile root-ca --no-password --insecure




Next, create the intermediate certificate and key used to sign the Linkerd proxies’ certificate signing requests (CSRs).



step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
--profile intermediate-ca --not-after 8760h --no-password --insecure \
--ca ca.crt --ca-key ca.key




Now that the certificates and keys are created, you can start using them in various environments.
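Before moving on, it can be helpful to sanity-check what was generated. Assuming the files from the commands above are in your current directory, you can inspect and verify them with step:

```shell
# Print a human-readable summary of the root CA certificate
step certificate inspect ca.crt --short

# Confirm the intermediate certificate chains up to the root
step certificate verify issuer.crt --roots ca.crt
```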

Azure Kubernetes Service (AKS)

With the certificates and keys now created, you can start to think about various installation methods when it comes to Linkerd.

The first installation method is the traditional installation method with the Linkerd CLI.

Please note that if you install Linkerd with the CLI, you don’t have to generate your own certificates and keys; the CLI can generate them for you. Generating your own is the recommended approach for production, because you know exactly which certificates and keys you’re using, but it isn’t strictly required.
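For completeness, a minimal sketch of that certificate-free path looks like the following; when the identity flags are omitted, the Linkerd CLI generates its own trust anchor and issuer credentials:

```shell
# Install the CRDs, then the control plane with auto-generated certificates
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -
```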

First, ensure that the Custom Resource Definitions for Linkerd are installed properly.



linkerd install --crds | kubectl apply -f -



To check and confirm that the CRDs were installed successfully, you can run the following command using kubectl.



kubectl get crd



The output should include the Linkerd CRDs, such as serviceprofiles.linkerd.io.

Once the CRDs are installed successfully, you can install Linkerd using the CLI.



linkerd install \
  --identity-trust-anchors-file ca.crt \
  --set proxyInit.runAsRoot=true \
  --identity-issuer-certificate-file issuer.crt \
  --identity-issuer-key-file issuer.key \
  | kubectl apply -f -



The output shows all of the Linkerd resources being created.
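To confirm that the control plane is healthy after the CLI install, you can run linkerd check, which validates the deployment end to end:

```shell
# Validates cluster prerequisites, the control plane, and certificate status
linkerd check
```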

To test Linkerd with an application, you can use the Emojivoto app with the linkerd inject command, which adds the Linkerd proxy sidecar to each Pod.



curl -sL https://run.linkerd.io/emojivoto.yml | linkerd inject - | kubectl apply -f -



Once applied, the output shows that the Deployments were “injected”, which means the sidecar container was deployed alongside each application container.
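You can confirm the injection by listing the Pods; assuming the app landed in its default emojivoto Namespace, each Pod should report two ready containers (the app plus the linkerd-proxy sidecar):

```shell
# Each Pod should show 2/2 in the READY column
kubectl get pods -n emojivoto

# Run Linkerd's data plane checks against the injected Pods
linkerd check --proxy -n emojivoto
```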

Elastic Kubernetes Service (EKS)

Now that you’ve deployed Linkerd using the Linkerd CLI, which is great for development environments and even local environments using a tool like Minikube or KinD, let’s take a look at how you can deploy Linkerd using Helm.

If you’d like to re-deploy the Emojivoto app, you can use the same steps at the end of the Azure Kubernetes Service (AKS) section as the deployment of the application will not change.

The EKS cluster running in this example has three Worker Nodes.

First, add the Linkerd Helm repo.



helm repo add linkerd https://helm.linkerd.io/stable


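After adding the repo, it’s worth refreshing your local chart index so you pull the latest chart versions:

```shell
# Refresh the locally cached index for all added Helm repos
helm repo update
```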

Next, install the Custom Resource Definitions (CRDs) for Linkerd.



helm install linkerd-crds linkerd/linkerd-crds -n linkerd --create-namespace




The last step is to use Helm to install Linkerd. The helm install command will consist of:

  • The certificates that you created in the first section for mTLS.
  • The keys that you created in the first section for mTLS.
  • The linkerd Namespace to install Linkerd in.


helm install linkerd-control-plane -n linkerd \
  --set-file identityTrustAnchorsPEM=ca.crt \
  --set-file identity.issuer.tls.crtPEM=issuer.crt \
  --set-file identity.issuer.tls.keyPEM=issuer.key \
  linkerd/linkerd-control-plane



💡 If you’re using EKS with Kubernetes version 1.23 (which is still available and is the default version for EKS at the time of writing), you’ll see an error that states:

there are nodes using the docker container runtime and proxy-init container must run as root user

The fix is to add the following flag to the installation:
--set proxyInit.runAsRoot=true

Once installed, you should see the Helm output stating that the installation was successful.


To confirm that the installation was successful, run the following command:



linkerd check




Google Kubernetes Engine (GKE)

In the last section, you’ll learn how to deploy Linkerd using the same Helm approach, except this time it will be with Linkerd High Availability (HA).

If you’d like to re-deploy the Emojivoto app, you can use the same steps at the end of the Azure Kubernetes Service (AKS) section as the deployment of the application will not change.

For the GKE cluster, you should use a minimum of three Worker Nodes for Kubernetes high availability.


First, add the Helm repo (you probably already have it if you ran through the previous section for EKS).



helm repo add linkerd https://helm.linkerd.io/stable



Next, install the Custom Resource Definitions (CRDs) for Linkerd.



helm install linkerd-crds linkerd/linkerd-crds -n linkerd --create-namespace



For the High Availability (HA) config, there’s a values-ha.yaml file available in the Helm chart. You can retrieve it by using the helm fetch command against the repository.



helm fetch --untar linkerd/linkerd-control-plane


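If you’re curious what the HA profile actually changes, you can skim the fetched file. As an assumption based on the chart at the time of writing, it raises the control-plane replica count and enables Pod anti-affinity; a quick way to spot those settings:

```shell
# Peek at the HA overrides pulled down by helm fetch
grep -iE 'replicas|antiaffinity' linkerd-control-plane/values-ha.yaml
```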

The last step is to install Linkerd with Helm. Notice the key difference from the EKS installation: the -f flag pointing to the values-ha.yaml file.



helm install linkerd-control-plane -n linkerd \
  --set-file identityTrustAnchorsPEM=ca.crt \
  --set-file identity.issuer.tls.crtPEM=issuer.crt \
  --set-file identity.issuer.tls.keyPEM=issuer.key \
  -f linkerd-control-plane/values-ha.yaml \
  linkerd/linkerd-control-plane



Compared to the EKS deployment, there are now three replicas of each Linkerd Deployment instead of one.
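You can verify the replica counts by listing the Deployments in the linkerd Namespace:

```shell
# Each control-plane Deployment should report 3/3 ready replicas
kubectl get deployments -n linkerd
```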
