How to Manage On-premise Infrastructure with Terraform

Spacelift team - Jun 26 - Dev Community

Terraform is a popular infrastructure as code (IaC) tool generally associated with managing cloud infrastructure, but its capabilities extend far beyond the cloud. It is versatile enough to use in on-premises environments, VCS providers, Kubernetes, and more.

Can you use Terraform on-premise?

Terraform works with the APIs of various service providers and systems, so technically, if your tool exposes an API, you can use Terraform with it. This means Terraform can also be used with on-premise systems.

Some of the most popular on-premise providers are:

  • VMware vSphere
  • OpenStack
  • Kubernetes (this can be used with cloud services, too)
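
Declaring an on-premise provider works exactly like declaring a cloud one. Here is a minimal sketch of a required_providers block pulling the providers listed above (the source addresses are the usual registry namespaces; pin versions as you see fit):

terraform {
 required_providers {
   vsphere = {
     source = "hashicorp/vsphere"
   }
   openstack = {
     source = "terraform-provider-openstack/openstack"
   }
   kubernetes = {
     source = "hashicorp/kubernetes"
   }
 }
}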

Differences between managing cloud and on-premise resources

There are no technical differences between cloud and on-premise resource management, but using on-premise infrastructure limits you in some areas, such as:

  • Resource availability - On-premise resources are finite and must be managed carefully.
  • Scalability - Scaling means adding physical hardware capacity, so you cannot simply scale out from Terraform the way you can in the cloud.
  • Maintenance - In the cloud, the provider usually handles maintenance; on-premise, it becomes your responsibility.


How to use Terraform on-premise?

As previously mentioned, there are no technical differences between on-premise and cloud use of Terraform: you still use a Terraform provider and write the code as usual. We will review three examples:

  • Configuring Terraform for virtualization platforms
  • Configuring Terraform for bare metal servers
  • Setting up Terraform with Kubernetes on-premise

Example 1: Configuring Terraform for virtualization platforms

For this example, we will use the Terraform VMware vSphere provider. As I don't have a vSphere account, I will use a mock server that mimics vSphere's API and run a terraform plan against it.

First, you need to install and configure a couple of prerequisites:

  • Go
  • VCSIM (this will mimic vSphere's API)
go install github.com/vmware/govmomi/vcsim@latest

Add the Go binaries directory to your PATH:

export PATH=$PATH:$(go env GOPATH)/bin
# To make this persistent across shells, add the line above to your ~/.bashrc, then reload it:
source ~/.bashrc

Now, let's start the mock server:

vcsim

export GOVC_URL=https://user:pass@127.0.0.1:8989/sdk GOVC_SIM_PID=58373


When the mock server starts, it prints a GOVC_URL containing everything we need to connect to it:

  • username is user
  • password is pass
  • the mock server address is localhost:8989

Now we are ready to write the Terraform code:

provider "vsphere" {
 user           = "user"
 password       = "pass"
 vsphere_server = "localhost:8989"

 # Accept the self-signed certificate used by vcsim
 allow_unverified_ssl = true
}

Before we can create our first virtual machine, we need to get some information from our cluster. We will do that using the following data sources:

data "vsphere_datacenter" "dc" {
 name = "DC0"
}

data "vsphere_compute_cluster" "cluster" {
 name          = "DC0_C0"
 datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_datastore" "datastore" {
 name          = "LocalDS_0"
 datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "network" {
 name          = "VM Network"
 datacenter_id = data.vsphere_datacenter.dc.id
}

The names used for the data center, compute cluster, datastore, and network are the default ones vcsim uses, so you won't need to make any changes if you plan to test this using a mock server.
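
If you later point this configuration at a real vSphere environment, you could turn these names into input variables instead of hard-coding them. A minimal sketch for the datacenter (the variable name is illustrative):

variable "datacenter_name" {
 description = "Name of the vSphere datacenter to use"
 type        = string
 default     = "DC0"
}

data "vsphere_datacenter" "dc" {
 name = var.datacenter_name
}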

Now we are ready to create the code for the vSphere virtual machine by leveraging the above data sources:

resource "vsphere_virtual_machine" "vm" {
 name             = "example_vm"
 resource_pool_id = data.vsphere_compute_cluster.cluster.resource_pool_id
 datastore_id     = data.vsphere_datastore.datastore.id

 num_cpus = 2
 memory   = 4096
 guest_id = "otherGuest"

 network_interface {
   network_id   = data.vsphere_network.network.id
   adapter_type = "vmxnet3"
 }

 disk {
   label            = "disk0"
   size             = 20
   eagerly_scrub    = false
   thin_provisioned = true
 }
}

Let's run a terraform init:

terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/vsphere...
- Installing hashicorp/vsphere v2.8.1...
- Installed hashicorp/vsphere v2.8.1 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.


As you can see, we have successfully initialized the backend and installed the latest version of the vSphere provider.
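
Because we didn't declare a version constraint, init simply grabbed the newest release of the provider. If you want repeatable runs, you can pin it in a required_providers block; a small sketch:

terraform {
 required_providers {
   vsphere = {
     source  = "hashicorp/vsphere"
     version = "~> 2.8"
   }
 }
}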

Now, let's see a terraform plan in action:

terraform plan
data.vsphere_datacenter.dc: Reading...
data.vsphere_datacenter.dc: Read complete after 0s [id=datacenter-2]
data.vsphere_datastore.datastore: Reading...
data.vsphere_network.network: Reading...
data.vsphere_compute_cluster.cluster: Reading...
data.vsphere_network.network: Read complete after 0s [id=network-7]
data.vsphere_datastore.datastore: Read complete after 0s [id=datastore-52]
data.vsphere_compute_cluster.cluster: Read complete after 0s [id=domain-c27]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # vsphere_virtual_machine.vm will be created
  + resource "vsphere_virtual_machine" "vm" {
      + annotation                              = (known after apply)
      + boot_retry_delay                        = 10000
      + change_version                          = (known after apply)
      + cpu_limit                               = -1
      + cpu_share_count                         = (known after apply)
      + cpu_share_level                         = "normal"
      + datastore_id                            = "datastore-52"
      + default_ip_address                      = (known after apply)
      + ept_rvi_mode                            = "automatic"
      + extra_config_reboot_required            = true
      + firmware                                = "bios"
      + force_power_off                         = true
      + guest_id                                = "otherGuest"
      + guest_ip_addresses                      = (known after apply)
      + hardware_version                        = (known after apply)
      + host_system_id                          = (known after apply)
      + hv_mode                                 = "hvAuto"
      + id                                      = (known after apply)
      + ide_controller_count                    = 2
      + imported                                = (known after apply)
      + latency_sensitivity                     = "normal"
      + memory                                  = 4096
      + memory_limit                            = -1
      + memory_share_count                      = (known after apply)
      + memory_share_level                      = "normal"
      + migrate_wait_timeout                    = 30
      + moid                                    = (known after apply)
      + name                                    = "example_vm"
      + num_cores_per_socket                    = 1
      + num_cpus                                = 2
      + power_state                             = (known after apply)
      + poweron_timeout                         = 300
      + reboot_required                         = (known after apply)
      + resource_pool_id                        = "resgroup-26"
      + run_tools_scripts_after_power_on        = true
      + run_tools_scripts_after_resume          = true
      + run_tools_scripts_before_guest_shutdown = true
      + run_tools_scripts_before_guest_standby  = true
      + sata_controller_count                   = 0
      + scsi_bus_sharing                        = "noSharing"
      + scsi_controller_count                   = 1
      + scsi_type                               = "pvscsi"
      + shutdown_wait_timeout                   = 3
      + storage_policy_id                       = (known after apply)
      + swap_placement_policy                   = "inherit"
      + sync_time_with_host                     = true
      + tools_upgrade_policy                    = "manual"
      + uuid                                    = (known after apply)
      + vapp_transport                          = (known after apply)
      + vmware_tools_status                     = (known after apply)
      + vmx_path                                = (known after apply)
      + wait_for_guest_ip_timeout               = 0
      + wait_for_guest_net_routable             = true
      + wait_for_guest_net_timeout              = 5

      + disk {
          + attach            = false
          + controller_type   = "scsi"
          + datastore_id      = "<computed>"
          + device_address    = (known after apply)
          + disk_mode         = "persistent"
          + disk_sharing      = "sharingNone"
          + eagerly_scrub     = false
          + io_limit          = -1
          + io_reservation    = 0
          + io_share_count    = 0
          + io_share_level    = "normal"
          + keep_on_remove    = false
          + key               = 0
          + label             = "disk0"
          + path              = (known after apply)
          + size              = 20
          + storage_policy_id = (known after apply)
          + thin_provisioned  = true
          + unit_number       = 0
          + uuid              = (known after apply)
          + write_through     = false
        }

      + network_interface {
          + adapter_type          = "vmxnet3"
          + bandwidth_limit       = -1
          + bandwidth_reservation = 0
          + bandwidth_share_count = (known after apply)
          + bandwidth_share_level = "normal"
          + device_address        = (known after apply)
          + key                   = (known after apply)
          + mac_address           = (known after apply)
          + network_id            = "network-7"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

We won't be able to apply the code because this is just a mock server. If you have an existing vSphere cluster and want to test this automation, you will need to make a couple of changes to the provider and the data sources.

Because I used a mock server, I didn't worry about securing my credentials, but against a real cluster you should at least read them from environment variables and remove the corresponding entries from the provider block (see the sketch after this list):

  • VSPHERE_USER - will load your vSphere username
  • VSPHERE_PASSWORD - will load your vSphere password
  • VSPHERE_SERVER - will load your vSphere server
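
With those environment variables exported, the provider block no longer needs any credential arguments. A minimal sketch:

provider "vsphere" {
 # user, password, and vsphere_server are read from the
 # VSPHERE_USER, VSPHERE_PASSWORD, and VSPHERE_SERVER environment variables
 allow_unverified_ssl = true
}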

Example 2: Configuring Terraform for bare metal servers

Leveraging Terraform to manage your bare metal servers on-premise allows you to apply IaC principles in your own data centers.

Several Terraform providers can be leveraged for this, depending on what you are running inside your infrastructure. Let's take a look at how you could create an example Terraform configuration for MaaS (Metal as a Service). We will assume you have MaaS running on the host that runs your Terraform code:

terraform {
 required_providers {
   maas = {
     source  = "maas/maas"
     version = "2.2.0"
   }
 }
}

provider "maas" {
 api_url = "http://your-maas-server/MAAS/api/2.0"
 api_key = "your-api-key"
}

We first declare a terraform block to specify the MaaS provider and its version. Next, in the provider block, we configure a couple of parameters:

  • api_key - the MaaS API key
  • api_url - the MaaS API url
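
Hard-coding the API key is fine for a quick test, but you could instead pass it in as a sensitive Terraform variable; a minimal sketch (the variable name is illustrative):

variable "maas_api_key" {
 type      = string
 sensitive = true
}

provider "maas" {
 api_url = "http://your-maas-server/MAAS/api/2.0"
 api_key = var.maas_api_key
}

You can then supply the value through a terraform.tfvars file or a TF_VAR_maas_api_key environment variable instead of committing it to version control.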

Next, let's define a configuration that will allow us to create a MaaS instance:

resource "maas_instance" "kvm" {
 allocate_params {
   hostname      = "my_hostname"
   min_cpu_count = 1
   min_memory    = 2048
 }
 deploy_params {
   distro_series = "focal"
   user_data     = <<EOF
#cloud-config
users:
- name: ubuntu
  ssh_authorized_keys:
    - ${file("~/.ssh/id_rsa.pub")}
  sudo: ALL=(ALL) NOPASSWD:ALL
  groups: sudo
  shell: /bin/bash
EOF
 }
}

The above configuration will set up a MaaS instance. We've added a cloud-init script to it that creates an ubuntu user, adds an SSH public key to its ssh_authorized_keys, and adds the user to the sudo group with passwordless sudo.

Example 3: Setting up Terraform with Kubernetes on-premise

For this example, you can set up your Kubernetes cluster however you want -- I will use kind.

Let's first create a kind cluster:

kind create cluster --name onprem
Creating cluster "onprem" ...
✓ Ensuring node image (kindest/node:v1.26.3) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-onprem"
You can now use your cluster with:

kubectl cluster-info --context kind-onprem

Not sure what to do next? 😅  Check out https://kind.sigs.k8s.io/docs/user/quick-start/

Then, let's create a Kubernetes namespace, a deployment for an NGINX container, and a service that exposes that container using Terraform:

provider "kubernetes" {
 config_path = "~/.kube/config"
}

resource "kubernetes_namespace" "example" {
 metadata {
   name = "nginx-ns"
 }
}

resource "kubernetes_deployment" "nginx" {
 metadata {
   name      = "nginx-deployment"
   namespace = kubernetes_namespace.example.metadata[0].name
 }

 spec {
   replicas = 1

   selector {
     match_labels = {
       app = "nginx"
     }
   }

   template {
     metadata {
       labels = {
         app = "nginx"
       }
     }

     spec {
       container {
         image = "nginx:latest"
         name  = "nginx"

         port {
           container_port = 80
         }
       }
     }
   }
 }
}

resource "kubernetes_service" "nginx" {
 metadata {
   name      = "nginx-service"
   namespace = kubernetes_namespace.example.metadata[0].name
 }

 spec {
   selector = {
     app = "nginx"
   }

   port {
     port        = 80
     target_port = 80
   }
 }
}

When we created the kind cluster, the kubeconfig file was automatically updated with the credentials for the cluster, and the kubectl context was automatically switched to it. So, for the provider configuration, we can simply point to the kubeconfig file.
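
If your kubeconfig contains several clusters, you can also pin the context explicitly rather than relying on whichever one is currently active; a small sketch using the context kind created for this example:

provider "kubernetes" {
 config_path    = "~/.kube/config"
 config_context = "kind-onprem"
}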

You declare the rest of the resources just as you would in Kubernetes; the only difference is that we are now writing HCL instead of YAML.

Let's apply the code:

Plan: 3 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
 Terraform will perform the actions described above.
 Only 'yes' will be accepted to approve.

 Enter a value: yes

kubernetes_namespace.example: Creating...
kubernetes_namespace.example: Creation complete after 0s [id=nginx-ns]
kubernetes_service.nginx: Creating...
kubernetes_deployment.nginx: Creating...
kubernetes_service.nginx: Creation complete after 0s [id=nginx-ns/nginx-service]
kubernetes_deployment.nginx: Creation complete after 3s [id=nginx-ns/nginx-deployment]

To check whether our service is working properly, we can start a temporary pod and access our application from inside it:

kubectl run -i --tty --rm debug --image=busybox --restart=Never -- sh

We've created a temporary pod based on the busybox image; it will be deleted as soon as we exit the shell. Let's access the NGINX app:

/ # wget -qO- http://nginx-service.nginx-ns.svc.cluster.local
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
   body {
       width: 35em;
       margin: 0 auto;
       font-family: Tahoma, Verdana, Arial, sans-serif;
   }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

As you can see, everything is working smoothly.

Managing Terraform with Spacelift

Spacelift takes managing Terraform to the next level by giving you access to a powerful CI/CD workflow and unlocking features such as:

  • Policies (based on Open Policy Agent) - Control how many approvals a run needs, which kinds of resources can be created and with which parameters, and what happens when a pull request is opened or merged.
  • Multi-IaC workflows - Combine Terraform with Kubernetes, Ansible, and other IaC tools such as OpenTofu, Pulumi, and CloudFormation, create dependencies among them, and share outputs.
  • Build self-service infrastructure - Use Blueprints to build self-service infrastructure; simply complete a form to provision infrastructure based on Terraform and other supported tools.
  • Integrations with any third-party tools - Integrate with your favorite third-party tools and even build policies for them.

Spacelift enables you to create private workers inside your infrastructure, which helps you execute Spacelift-related workflows on your end. For more information on how to configure private workers, you can look into the documentation.

Key points

Terraform may not have been designed specifically for managing on-premise infrastructure, but it remains a viable way to avoid the manual work associated with provisioning it.

If you want to elevate your Terraform management, create a free account for Spacelift today or book a demo with one of our engineers.

Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open-source. OpenTofu is an open-source version of Terraform that expands on Terraform's existing concepts and offerings. It is a viable alternative to HashiCorp's Terraform, being forked from Terraform version 1.5.6.

Written by Flavius Dinu.
