Google Anthos: The Kubernetes Manager

Michael Levan - Jan 25 '23 - Dev Community

Managing Kubernetes clusters, whether it’s one or ten, is a hassle.

What tools do you use? How do you manage users? How do you have one location to manage the entire fleet of clusters?

Every engineer needs an easy, concise, and straightforward way to manage clusters all while using tools that are native to Kubernetes. That’s where Google Anthos can come into play.

In this blog post, you’ll learn what Google Anthos is and how you can manage not only Kubernetes clusters, but also how you interact with them, all from one location.

Why Anthos? (It’s Not Just For GCP)

The five-second pitch of Anthos is: Run containerized apps in Kubernetes from anywhere, managed in one place.

Kubernetes isn’t the easiest platform in the world to implement, and containers aren’t the easiest to understand. However, many engineers and organizations want to run both Kubernetes and containers. The thing is, it’s typically never because “they want Kubernetes”. It’s because they want what Kubernetes gives us - the ability to schedule and self-heal containers, all while managing them in a declarative fashion.

Because of that realization, Kubernetes in today’s world is less about managing the underlying components and more about managing workloads with a layer of abstraction. Think about managed Kubernetes services like Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), and Elastic Kubernetes Service (EKS) from AWS. With these services, the control plane is abstracted away from you, which makes Kubernetes “easier” to adopt.

The whole idea behind Anthos is to give us exactly that: just enough abstraction to manage all Kubernetes clusters, deployments, and various tools like monitoring and service mesh, while still knowing what’s happening within our deployed workloads.

With Anthos, you can:

  • Migrate applications that aren’t containerized to containers.
  • Utilize Service Mesh, Ingress, and other security/routing Kubernetes capabilities.
  • Manage identity.

And Anthos covers various other components of Kubernetes that you would otherwise have to piece together in your cluster.

Not only that, but it gives you the ability to keep your deployments consistent across clusters as you can deploy all resources from the same location (Anthos).

When you’re thinking of Anthos, think of it like your Control Plane for various Kubernetes clusters.

What About Outside Of GCP?

In today’s world, you may have workloads running in multiple locations. In fact, it’s pretty common for engineering departments to have accounts for various cloud providers.

With Anthos, you can manage the following:

  • Clusters running in AWS.
  • Clusters running in Azure.
  • Bare-metal clusters.
  • Clusters running in VMware.
  • Clusters running at the edge.
  • Serverless Kubernetes clusters using GKE Autopilot.

This makes Anthos not just a GCP solution, but one that casts the wide net needed to manage Kubernetes clusters and containers wherever they run.
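
Under the hood, bringing a non-GCP cluster into Anthos means registering it with your fleet (via the Connect agent). As a rough sketch, registering an existing external cluster can look like the gcloud command below; the membership name, kubeconfig context, and service account key path are placeholders you’d swap for your own:

# Register a non-GKE cluster (for example, EKS or on-prem) with the fleet.
gcloud container fleet memberships register my-external-cluster \
  --context=my-external-cluster-context \
  --kubeconfig=/path/to/kubeconfig \
  --service-account-key-file=/path/to/connect-sa-key.json

On older gcloud versions, the same command lives under gcloud container hub memberships register.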

On-Prem

A key aspect to point out about Anthos is that it can be used as a hybrid solution. This means you can either create an instance of Anthos in GCP or you can deploy it on-prem. For example, you can deploy an Anthos cluster on a bare-metal server.

Managing GKE With Anthos

Now that you know the theory behind Anthos, what it’s used for, and why you’d reach for a solution like it (among the various other solutions in the space), let’s dive into the hands-on piece of this blog post.

First, you’ll see how you can deploy a GKE cluster in Anthos with Terraform. After that, you’ll see how to do the same thing in the UI.

Terraform

Create a main.tf file that will contain the code below.

To start the Terraform configuration, first set the GCP provider and ensure that you add in the Project and Region for the provider (the variables will be set later).



terraform {
  required_providers {
    # The google-beta provider is required for the GKE Hub (fleet) membership resource used later.
    google-beta = {
      source  = "hashicorp/google-beta"
      version = ">= 3.67.0"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
}

# Configure the google-beta provider with the same project and region.
provider "google-beta" {
  project = var.project_id
  region  = var.region
}



Next, create the Terraform resource for the GKE cluster.



resource "google_container_cluster" "primary" {
  name     = var.cluster_name
  location = var.region

  remove_default_node_pool = true
  initial_node_count       = 1

  network    = var.vpc_name
  subnetwork = var.subnet_name
}



With the GKE cluster resource in place, create the node pool for the cluster. This will contain the machine type and node count for the worker nodes.



resource "google_container_node_pool" "nodes" {
  name       = "${google_container_cluster.primary.name}-node-pool"
  location   = var.region
  cluster    = google_container_cluster.primary.name
  node_count = var.node_count

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]

    labels = {
      env = var.project_id
    }

    disk_size_gb = 50

    machine_type = var.machine_type
    tags         = ["gke-node", "${var.project_id}-gke"]
    metadata = {
      disable-legacy-endpoints = "true"
    }
  }
}



Finally, register the GKE cluster with Anthos by creating a fleet (GKE Hub) membership.



resource "google_gke_hub_membership" "anthos_registration" {
  provider      = google-beta
  project = var.project_id
  membership_id = "${var.cluster_name}-fleet"
  endpoint {
    gke_cluster {
      resource_link = "//container.googleapis.com/${google_container_cluster.primary.id}"
    }
  }
}



Once the main.tf file is complete with the code above, create the variable.tf file, which will contain the variables for the GKE/Anthos Terraform code.

Create the variables below, which are needed to run the configuration in a repeatable fashion, and add default values that reflect your GCP account.



variable "project_id" {
  type = string
  default = ""
}

variable "region" {
  type = string
  default = ""
}

variable "machine_type" {
  type = string
  default = ""
}

variable "vpc_name" {
  type = string
  default = ""
}

variable "subnet_name" {
  type = string
  default = ""
}

variable "node_count" {
  type = string
  default = 2
}

variable "cluster_name" {
  type = string
  default = ""
}


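If you’d rather not hard-code values in the variable defaults, you can supply them in a terraform.tfvars file instead. Below is a minimal sketch with placeholder values; the project ID, region, machine type, network, and cluster name are examples you’d replace with your own:

project_id   = "my-gcp-project"
region       = "us-east1"
machine_type = "e2-standard-2"
vpc_name     = "default"
subnet_name  = "default"
node_count   = 2
cluster_name = "anthos-gke-cluster"

Terraform automatically loads terraform.tfvars from the working directory when you run plan or apply.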

Run the following commands to deploy the GKE and Anthos configuration.



terraform init
terraform plan
terraform apply --auto-approve



After the Terraform configuration is deployed, you can log into GCP, go to the Anthos pane, and you’ll be able to see that the GKE cluster was deployed successfully.

[Screenshot: the Anthos Clusters pane showing the registered GKE cluster]
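
You can also verify the registration from the CLI. As a quick check, assuming the same project ID used in the Terraform variables:

gcloud container fleet memberships list --project=my-gcp-project

The membership created by Terraform (the cluster name with the -fleet suffix) should appear in the output. On older gcloud versions, the command is gcloud container hub memberships list.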

UI

In the GCP console, on the left pane, scroll down until you see the Anthos option.

Under Anthos, click Clusters —> CREATE CLUSTER.

[Screenshot: the Anthos Clusters page with the CREATE CLUSTER button]

Under the Create a cluster pane, you’ll see several options for both GKE and Kubernetes clusters created outside of GCP. You can either choose the Standard implementation, which creates a regular GKE cluster, or use the Autopilot implementation for Serverless Kubernetes.

[Screenshot: the Create a cluster pane with the Standard and Autopilot options]

The last step is to configure your cluster and click the blue CREATE button. The example screenshot below shows an Autopilot configuration.

[Screenshot: an example Autopilot cluster configuration with the CREATE button]

Congrats! You officially have a GKE cluster up and running in Anthos.
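
From here, you can connect to the cluster and start deploying workloads. As a quick sanity check (the cluster name, region, and project ID below are placeholders):

gcloud container clusters get-credentials anthos-gke-cluster --region us-east1 --project my-gcp-project
kubectl get nodes

Once the cluster is fully provisioned, kubectl get nodes should return the cluster’s worker nodes in a Ready state.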
