GCP Cloud Run vs Kubernetes

Michael Levan - May 29 - Dev Community

The world of Kubernetes is intertwined with several different platforms, software, and third-party services. Aside from things like GitOps and Service Mesh, the core purpose of Kubernetes is to help with one thing: orchestrating and managing containers.

In this blog post, we’ll get back to the original concept of orchestration and talk about the key differences between a service like Cloud Run and a full-blown orchestrator like Kubernetes.

What Is Kubernetes

The question of “What is Kubernetes?” is long, vast, and a talk in itself (that’s why there are tens of books available on Kubernetes), but what was the original need for Kubernetes?

The long and short of it is that Kubernetes was created to manage and scale containers.

Kubernetes is built around several pluggable interfaces, and the one responsible for running containers is the Container Runtime Interface (CRI). Why is the plugin necessary? Because Kubernetes does not know how to run/stop/start containers by itself; it delegates that work to a container runtime, such as containerd or CRI-O, that implements the CRI.
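
You can see which runtime a cluster is using from the CONTAINER-RUNTIME column of the node listing (the exact value, for example containerd or CRI-O, will vary by cluster):

kubectl get nodes -o wide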

What Is GCP Cloud Run

Cloud Run, much like Kubernetes, is an orchestrator. It gives you the ability to deploy a container, scale that container, set up autoscaling, configure resource optimization, and manage container health and revisions.

The key to remember is that with Cloud Run, you’re using a GCP-based solution for orchestration.

In the next few sections, you’ll see how to run containers on Kubernetes (where Pods contain containers), how to run containers on Cloud Run, and how resource optimization works in each environment.

Deploying A Pod On Kubernetes

To deploy a Pod on Kubernetes, you’ll use a Kubernetes Manifest, which is a YAML (or JSON) configuration to deploy containers within Pods.

Below is an example of using a higher-level controller called a Deployment, which creates and manages the Pods for you.



apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginxdeployment
  replicas: 2
  template:
    metadata:
      labels:
        app: nginxdeployment
    spec:
      containers:
      - name: nginxdeployment
        image: nginx:latest
        ports:
        - containerPort: 80




Once you write the Manifest, you send it to the Kubernetes API server (an HTTPS POST request under the hood) with the kubectl apply command.



kubectl apply -f deployment.yaml


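Once applied, you can confirm that the Deployment and its two replicas are running (the label selector below matches the app: nginxdeployment label from the Manifest above):

kubectl get deployment nginx-deployment
kubectl get pods -l app=nginxdeployment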

Deploying A Container on Cloud Run

Aside from using the GCP console, you can use the gcloud CLI to deploy a container in Cloud Run.

💡 Both GCP Cloud Run and Kubernetes are declarative in nature and use YAML configurations. You can go into the YAML of a deployed service and re-configure it if you’d like.


gcloud run deploy gowebapi --image adminturneddevops/golangwebapi


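To see the declarative configuration behind the service, you can export its Knative-style YAML with the gcloud CLI (this assumes the gowebapi service deployed above and your configured default region):

gcloud run services describe gowebapi --format export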

Once the container is deployed, you can go in and edit it via the blue EDIT & DEPLOY NEW REVISION button.


If you click the YAML button, you can edit your container configuration.


💡 Cloud Run implements the Knative Serving API, the open-source standard for running Serverless workloads on Kubernetes. The fully managed version of Cloud Run doesn’t run on a cluster you manage, but because it speaks the Knative API, one could say Cloud Run has Kubernetes on the backend, and its configurations are portable to Knative running on Kubernetes.
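
Because Cloud Run speaks the Knative Serving API, you can also deploy from a YAML file instead of CLI flags. Below is a minimal sketch of such a service definition (the containerPort of 8080 is an assumption about the image), which you could apply with gcloud run services replace service.yaml:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: gowebapi
spec:
  template:
    spec:
      containers:
      - image: adminturneddevops/golangwebapi
        ports:
        # Assumption: the web API listens on 8080
        - containerPort: 8080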

Resource Optimization On Kubernetes

When it comes to ensuring that Pods run as performantly and efficiently as possible, engineers must implement resource optimization.

There are a few methods for resource/performance optimization on Kubernetes.

First, there are ResourceQuotas. ResourceQuotas are a way to set limits for memory and CPU on a Namespace. When using ResourceQuotas, a Namespace is only allowed X amount of CPU and X amount of memory.



apiVersion: v1
kind: ResourceQuota
metadata:
  name: memorylimit
  namespace: test
spec:
  hard:
    requests.memory: 512Mi
    limits.memory: 1000Mi




A second ResourceQuota can also cap total CPU, memory, and Pod count for the Namespace (note the distinct name so it doesn’t collide with the quota above):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: computequota
  namespace: test
spec:
  hard:
    cpu: "5"
    memory: 10Gi
    pods: "10"


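To try either quota, create the Namespace, apply the Manifest (the file name below is just an example), and check usage against the limits:

kubectl create namespace test
kubectl apply -f resourcequota.yaml
kubectl describe resourcequota -n test

The describe output shows Used next to Hard for each resource, so you can see how much of the quota the Namespace has consumed.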

You can also set resource requests and limits within the Deployment/Pod configuration itself.



apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: webapp
spec:
  selector:
    matchLabels:
      app: nginxdeployment
  replicas: 2
  template:
    metadata:
      labels:
        app: nginxdeployment
    spec:
      containers:
      - name: nginxdeployment
        image: nginx:latest
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
        ports:
        - containerPort: 80

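After the Deployment is applied, you can confirm that the requests and limits landed on the Pods; they show up in the Requests and Limits sections of the describe output:

kubectl describe pods -n webapp -l app=nginxdeployment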




Resource Optimization On Cloud Run

Within Cloud Run, you can manage resources from a CPU and memory perspective just like you can within Kubernetes.


You can also configure request-based CPU allocation (CPU is only allocated while the service is processing requests), which helps if you want to implement cost optimization (FinOps).
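
The same settings are available from the gcloud CLI. A minimal sketch, assuming the gowebapi service from earlier:

gcloud run services update gowebapi --memory 512Mi --cpu 1

If you want CPU allocated at all times instead of only during request processing, there is the --no-cpu-throttling flag; request-only allocation is the default and is typically cheaper.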


From an autoscaling perspective, Kubernetes has the Horizontal Pod Autoscaler (HPA) and the Vertical Pod Autoscaler (VPA). Cloud Run allows you to scale instances out and in, just like you can scale Pods with the HPA.
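
On the Cloud Run side, the instance bounds are set with flags (again assuming the gowebapi service):

gcloud run services update gowebapi --min-instances 1 --max-instances 10

On the Kubernetes side, the rough equivalent for the Deployment above is an HPA resource; a minimal sketch, assuming the metrics-server is installed in the cluster so CPU metrics are available:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
  namespace: webapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        # Scale out when average CPU usage exceeds 70% of the requested CPU
        averageUtilization: 70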


If you don’t want to use button clicks, you can also edit the YAML configuration for resource optimization.


Closing Thoughts

When you’re deciding on Kubernetes vs a “serverless container” solution like GCP Cloud Run, the biggest things you need to think about are:

  1. Do you need a full orchestration platform that can run many kinds of workloads? Go with Kubernetes.
  2. Does your organization have the capacity for engineers to train on Kubernetes? If not, a managed service like Cloud Run is the faster path.
  3. Do you have a smaller application stack to manage? If so, go with Cloud Run.

Overall, the concept of what Kubernetes does (orchestrate containers) and what Cloud Run does (orchestrate containers) is the same, but the key difference is the size and shape of the workloads. Cloud Run is great for smaller, truly decoupled workloads, and Kubernetes is great for running large fleets of containers at scale.
