Running Serverless Applications on Kubernetes with Knative

Peter Mbanugo - Nov 29 '21 - Dev Community

Kubernetes provides a set of primitives to run resilient, distributed applications. It takes care of scaling and automatic failover for your application and it provides deployment patterns and APIs that allow you to automate resource management and provision new workloads.

One of the main challenges developers face is focusing on the details of their code rather than the infrastructure it runs on. Serverless is one of the leading architectural paradigms for addressing this challenge. There are various platforms that let you run serverless applications, either deployed as single functions or running inside containers, such as AWS Lambda, AWS Fargate, and Azure Functions. These managed platforms come with some drawbacks, such as:

  • Vendor lock-in
  • Constraint in the size of the application binary/artifacts
  • Cold start performance

You could be in a situation where you're only allowed to run applications within a private data center, or you may be using Kubernetes but would like to harness the benefits of serverless. There are open-source platforms, such as Knative and OpenFaaS, that use Kubernetes to abstract the infrastructure from the developer, allowing you to deploy and manage applications using serverless architectures and patterns. Using any of these platforms addresses the drawbacks mentioned above.

This article will show you how to deploy and manage serverless applications using Knative and Kubernetes.

Serverless Landscape

Serverless computing is a development model that allows you to build and run applications without having to manage servers. It describes a model where a cloud provider handles the routine work of provisioning, maintaining, and scaling the server infrastructure, while the developers can simply package and upload their code for deployment. Serverless apps can automatically scale up and down as needed, without any extra configuration by the developer.

As stated in a white paper by the CNCF serverless working group, there are two primary serverless personas:

  1. Developer: Writes code for, and benefits from, the serverless platform, which gives them the impression that there are no servers and that their code is always running.
  2. Provider: Deploys the serverless platform for an external or internal customer.

The provider needs to manage servers (or containers) and will have some cost for running the platform, even when idle. A self-hosted system can still be considered serverless: Typically, one team acts as the provider and another as the developer.

In the Kubernetes landscape, there are various ways to run serverless apps. It can be through managed serverless platforms like IBM Cloud Code Engine and Google Cloud Run, or open-source alternatives that you can self-host, such as OpenFaaS and Knative.

Introduction to Knative

Knative is a set of Kubernetes components that provides serverless capabilities. It provides an event-driven platform that can be used to deploy and run applications and services that can auto-scale based on demand, with out-of-the-box support for monitoring, automatic renewal of TLS certificates, and more.

Knative is used by a lot of companies. In fact, it powers the Google Cloud Run platform, IBM Cloud Code Engine, and Scaleway serverless functions.

The basic deployment unit for Knative is a container that can receive incoming traffic. You give it a container image to run, and Knative handles every other component needed to run and scale the application. The deployment and management of the containerized app is handled by one of the core components of Knative, called Knative Serving, which manages the deployment and rollout of stateless services, plus their networking and autoscaling requirements.
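
As a sketch of what that looks like in practice, here is a minimal Knative Service manifest for the sample image used later in this article (the name hello is an illustrative choice; the fields follow the serving.knative.dev/v1 API):

# Knative Serving derives the Route, Configuration, Revisions, and
# underlying Deployment from this single resource.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          ports:
            - containerPort: 8080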

The other core component of Knative is called Knative Eventing. This component provides an abstract way to consume Cloud Events from internal and external sources without writing extra code for different event sources. This article focuses on Knative Serving but you will learn about how to use and configure Knative Eventing for different use-cases in a future article.

Development Set Up

In order to install Knative and deploy your application, you'll need a Kubernetes cluster and the following tools installed:

  • Docker
  • kubectl, the Kubernetes command-line tool
  • kn CLI, the CLI for managing Knative applications and configuration

Installing Docker

To install Docker, go to the URL docs.docker.com/get-docker and download the appropriate binary for your OS.

Installing kubectl

The Kubernetes command-line tool kubectl allows you to run commands against Kubernetes clusters. Docker Desktop installs kubectl for you, so if you followed the previous section to install Docker Desktop, you should already have kubectl installed and can skip this step. If you don't have kubectl installed, follow the instructions below to install it.

If you're on Linux or macOS, you can install kubectl using Homebrew by running the command brew install kubectl. Ensure that the version you installed is up to date by running the command kubectl version --client.

If you're on Windows, run the command curl -LO https://dl.k8s.io/release/v1.21.0/bin/windows/amd64/kubectl.exe to download kubectl, and then add the binary to your PATH. Ensure that the version you installed is up to date by running the command kubectl version --client. You should have version v1.20.x or v1.21.x, because in a later section you're going to create a cluster running Kubernetes version 1.21.x.

Installing kn CLI

The kn CLI provides a quick and easy interface for creating Knative resources, such as services and event sources, without the need to create or modify YAML files directly. kn also simplifies completion of otherwise complex procedures, such as autoscaling and traffic splitting.

To install kn on macOS or Linux, run the command brew install kn.

To install kn on Windows, download and install a stable binary from https://mirror.openshift.com/pub/openshift-v4/clients/serverless/latest. Afterward, add the binary to the system PATH.
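
Once installed, you can confirm that kn is available by checking its version (the exact output varies by release):

# Prints the kn client version and the supported Knative Serving/Eventing API versions
kn version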

Creating a Kubernetes Cluster

You need a Kubernetes cluster to run Knative. For this article, you're going to work with a local Kubernetes cluster running on Docker. You should have Docker Desktop installed.

Create a Cluster with Docker Desktop

Docker Desktop includes a standalone Kubernetes server and client. This is a single-node cluster that runs within a Docker container on your local system and should be used only for local testing.

To enable Kubernetes support and install a standalone instance of Kubernetes running as a Docker container, go to Preferences > Kubernetes and then click Enable Kubernetes.

Click Apply & Restart to save the settings and then click Install to confirm, as shown in the image below.

Figure 1: Enable Kubernetes on Docker Desktop

This instantiates the images required to run the Kubernetes server as containers.

The status of Kubernetes shows in the Docker menu and the context points to docker-desktop, as shown in the image below.

Figure 2: kube context

Alternatively, Create a Cluster with Kind

You can also create a cluster using kind, a tool for running local Kubernetes clusters using Docker container nodes. If you have kind installed, you can run the following command to create your kind cluster and set the kubectl context.

curl -sL https://raw.githubusercontent.com/csantanapr/knative-kind/master/01-kind.sh | sh
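
Whichever option you chose, it's worth confirming that kubectl points at the right cluster before moving on. A quick sanity check (the context name will be docker-desktop or a kind-prefixed name, depending on your setup):

# Shows the active kubectl context
kubectl config current-context
# Lists the cluster nodes; a single Ready node is expected for a local cluster
kubectl get nodes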

Install Knative Serving

Knative Serving manages service deployments, revisions, networking, and scaling. The Knative Serving component exposes your service via an HTTP URL and has safe defaults for its configurations.

For kind users, follow these instructions to install Knative Serving:

  1. Run the command curl -sL https://raw.githubusercontent.com/csantanapr/knative-kind/master/02-serving.sh | sh to install Knative Serving.
  2. When that's done, run the command curl -sL https://raw.githubusercontent.com/csantanapr/knative-kind/master/02-kourier.sh | sh to install and configure Kourier.

For Docker Desktop users, run the command curl -sL https://raw.githubusercontent.com/csantanapr/knative-docker-desktop/main/demo.sh | sh.
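
Before deploying anything, you can check that the Knative Serving control-plane components are up. They run in the knative-serving namespace:

# All pods should eventually reach the Running state
kubectl get pods -n knative-serving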

Deploying Your First Application

Next, you'll deploy a basic Hello World application so that you can learn how to deploy and configure an application on Knative. You can deploy an application using a YAML file and the kubectl command, or using the kn command and passing the right options. For this article, I'll be using the kn command. The sample container image you'll use is hosted on gcr.io/knative-samples/helloworld-go.

To deploy an application, you use the kn service create command, and you need to specify the name of the application and the container image to use.

Run the following command to create a service called hello using the image gcr.io/knative-samples/helloworld-go.

kn service create hello \
--image gcr.io/knative-samples/helloworld-go \
--port 8080 \
--env TARGET=World \
--revision-name=world

The command creates and starts a new service using the specified image and port. The TARGET environment variable is set using the --env option.

The revision name is set to world using the --revision-name option. Knative uses revisions to maintain the history of each change to a service. Each time a service is updated, a new revision is created and promoted as the current version of the application. This feature allows you to roll back to a previous version of the service when needed. Specifying a name for each revision makes it easier to identify.

When the service is created and ready, you should get the following output printed in the console.

Service hello created to latest revision 'hello-world'
is available at URL: http://hello.default.127.0.0.1.nip.io

Confirm that the application is running by running the command curl http://hello.default.127.0.0.1.nip.io. You should get the output Hello World! printed in the console.
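
You can also inspect the deployed service with kn to see its URL, revisions, and readiness conditions:

# List all Knative services in the current namespace
kn service list
# Show details of the hello service, including its revisions and conditions
kn service describe hello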

Update the Service

To update the service, you use the kn service update command. Each change creates a new revision, and Knative directs all traffic to the new revision once it has started and is healthy.

Update the TARGET environment variable by running the command:

kn service update hello \
--env TARGET=Coder \
--revision-name=coder

You should get the following output when the command has completed.

Service 'hello' updated to latest revision
'hello-coder' is available at
URL: http://hello.default.127.0.0.1.nip.io

Run the curl command again and you should get Hello Coder! printed out.

~ curl http://hello.default.127.0.0.1.nip.io
Hello Coder!

Traffic Splitting and Revisions

A Knative Revision is similar to a version control tag or label, and it's immutable. Every Knative Revision has a corresponding Kubernetes Deployment associated with it, which allows the application to be rolled back to any of the previous revisions. You can see the list of available revisions by running the command kn revision list. This prints a list of available revisions for every service, with information on how much traffic each revision receives, as shown in the image below. By default, each new revision receives 100% of the traffic when it's created.

Figure 5: Revision list

With revisions, you may wish to deploy applications using common deployment patterns such as canary or blue-green. You need more than one revision of a service in order to use these patterns. The hello service you deployed in the previous section already has two revisions, named hello-world and hello-coder respectively. You can split traffic 50/50 between the two revisions using the following command:

kn service update hello \
--traffic hello-world=50 \
--traffic hello-coder=50

Run the curl http://hello.default.127.0.0.1.nip.io command a few times to see that you get Hello World! sometimes, and Hello Coder! other times.

Figure 6: Traffic Splitting
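
When you're done experimenting with the split, you can send all traffic back to the latest ready revision using the @latest target. A minimal sketch (adjust the percentages to suit your rollout strategy):

kn service update hello \
--traffic @latest=100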

Autoscaling Services

One of the benefits of serverless is the ability to scale up and down to meet demand. When there's no traffic coming in, the service should scale down, and when traffic peaks, it should scale up to meet demand. Knative scales out the pods for a Knative Service based on inbound HTTP traffic. After a period of idleness (by default, 60 seconds), Knative terminates all of the pods for that service. In other words, it scales down to zero. This autoscaling capability is managed by the Knative Pod Autoscaler (KPA) by default; Knative can also be configured to use the Kubernetes Horizontal Pod Autoscaler instead.

If you haven't accessed the hello service for more than one minute, the pods should have already been terminated. Running the command kubectl get pod -l serving.knative.dev/service=hello -w should show an empty result. To see the autoscaling in action, open the service URL in the browser, then check back to see the pods start up and respond to the request. You should get an output similar to what's shown below.

Figure 3: Scaling Up

Figure 4: Scaling Down

There you have the awesome autoscaling capability of serverless.

If you have an application that is badly affected by cold-start latency and you'd like to keep at least one instance of the application running, you can do so by running the command kn service update <SERVICE_NAME> --scale-min <VALUE>. For example, to keep at least one instance of the hello service running at all times, use the command kn service update hello --scale-min 1.
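
In the same way, you can bound scaling in the other direction or tune how many concurrent requests each pod handles before another is added. A quick sketch (the values here are arbitrary examples, not recommendations):

# Cap the service at five pods and target 50 concurrent requests per pod
kn service update hello \
--scale-max 5 \
--concurrency-target 50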

What's Next?

Kubernetes has become a standard tool for managing container workloads. A lot of companies rely on it to build and scale cloud native applications, and it powers many of the products and services you use today. Although companies are adopting Kubernetes and reaping its benefits, many developers aren't interested in its low-level details; they want to focus on their code without worrying about the infrastructure it runs on.

Knative provides a set of tools and a CLI that developers can use to deploy their code and have Knative manage the infrastructure requirements of the application. In this article, you saw how to install the Knative Serving component and deploy services to run on it. You also learned how to manage service configuration, traffic splitting, and autoscaling using the kn CLI. If you want to learn more about the kn CLI, check out this free cheat sheet I made at cheatsheet.pmbanugo.me/knative-serving.

In a future article, I'll show you how to work with Knative Eventing and how your application can respond to Cloud Events in and out of your cluster.

In the meantime, you can get my book How to build a serverless app platform on Kubernetes. It will teach you how to build a platform to deploy and manage web apps and services using Cloud Native technologies. You will learn about serverless, Knative, Tekton, GitHub Apps, Cloud Native Buildpacks, and more!

Get your copy at books.pmbanugo.me/serverless-app-platform

Originally published on Code Magazine
