Driftctl and Terraform, they're two of a kind!

Alexis - Jun 30 '21 - Dev Community

Driftctl is an Open Source project developed to help operations teams maintain their infrastructure.

Driftctl website / Driftctl source code

Terraform: a brief introduction

Let's start with a reminder, or an explanation, of what Terraform is. Terraform is Open Source software supported by HashiCorp, just like Vault, Packer, Consul and many others.

The aim of Terraform is to build, change and version your infrastructure. You declare the infrastructure in Terraform files.

Terraform website

Why use Driftctl?

Infrastructure as code is awesome, but there are too many moving parts: codebase, state file, actual cloud state. Things tend to drift.

Drift can have multiple causes: from developers creating or updating infrastructure through the web console without telling anyone, to uncontrolled updates on the cloud provider side. Handling infrastructure drift vs the codebase can be challenging.

You can't efficiently improve what you don't track. We track coverage for unit tests, so why not track infrastructure as code coverage too?

Driftctl tracks how well your IaC codebase covers your cloud configuration. Driftctl warns you about drift.

official documentation

What does Driftctl do?

It reads a tfstate file to understand how your project is built, then reports whether your real infrastructure still matches it, so you can catch out-of-band modifications before you have to fix things from scratch.

You can add these checks to a CI pipeline to get results at your convenience.

In this post I will try to show you how to use it.
For the moment Driftctl only works on AWS, but it is still under development and GCP support will come :)

tfstate?

A Terraform state file represents your infrastructure and configuration. Terraform uses it to map real-world resources to your configuration.

tfstate
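
Once Terraform has applied something, you can list the resources tracked in the state. A minimal sketch (the resource addresses shown below are just what this post's configuration would produce):

$ terraform state list
aws_vpc.main_vpc
aws_subnet.eu-west-3a-public
aws_instance.web-1
# (output truncated)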

First of all! Deploy your infra.

I chose to deploy a LAMP (Linux, Apache, MySQL, PHP) infra with a bastion host, a web server and a database.

provider.tf

provider "aws" {
  profile = "default"
  region  = "eu-west-3"
}
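Optionally, on recent Terraform versions you may also want to pin the AWS provider. This is just a sketch; the version constraint below is an assumption, not something this deployment requires:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0" # assumed constraint, adjust to your setup
    }
  }
}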

We can declare our instances:

instance.tf

resource "aws_instance" "nat" {
    ami = "ami-08755c4342fb5aede" #Red Hat Enterprise Linux 8 
    instance_type = "t2.micro"
    key_name = "${var.aws_key_name}"
    vpc_security_group_ids = ["${aws_security_group.nat.id}"]
    subnet_id = "${aws_subnet.eu-west-3a-public.id}"
    associate_public_ip_address = true
    source_dest_check = false

    tags = {
        Name = "VPC NAT"
    }
}

resource "aws_instance" "web-1" {
    ami = "${lookup(var.amis, var.aws_region)}"
    instance_type = "t2.micro"
    key_name = "${var.aws_key_name}"
    vpc_security_group_ids = ["${aws_security_group.web.id}"]
    subnet_id = "${aws_subnet.eu-west-3a-public.id}"
    associate_public_ip_address = true
    source_dest_check = false


    tags = {
        Name = "Web Server 1"
    }
}

resource "aws_instance" "db-1" {
    ami = "${lookup(var.amis, var.aws_region)}"
    instance_type = "t2.micro"
    key_name = "${var.aws_key_name}"
    vpc_security_group_ids = ["${aws_security_group.db.id}"]
    subnet_id = "${aws_subnet.eu-west-3a-private.id}"
    source_dest_check = false

    tags = {
        Name = "DB Server 1"
    }
}

Now let's declare the network:

vpc.tf

resource "aws_vpc" "main_vpc" {
    cidr_block = "${var.vpc_cidr}"
    enable_dns_hostnames = true
}

resource "aws_eip" "nat" {
    instance = "${aws_instance.nat.id}"
    vpc = true
}

resource "aws_eip" "web-1" {
    instance = "${aws_instance.web-1.id}"
    vpc = true
}

resource "aws_route_table" "eu-west-3a-public" {
    vpc_id = "${aws_vpc.main_vpc.id}"

    route {
        cidr_block = "0.0.0.0/0"
        gateway_id = "${aws_internet_gateway.ig-main.id}"
    }

    tags = {
        Name = "Public Subnet"
    }
}

resource "aws_route_table_association" "eu-west-3a-public" {
    subnet_id = "${aws_subnet.eu-west-3a-public.id}"
    route_table_id = "${aws_route_table.eu-west-3a-public.id}"
}

resource "aws_route_table" "eu-west-3a-private" {
    vpc_id = "${aws_vpc.main_vpc.id}"

    route {
        cidr_block = "0.0.0.0/0"
        instance_id = "${aws_instance.nat.id}"
    }

    tags = {
        Name = "Private Subnet"
    }
}

resource "aws_route_table_association" "eu-west-3a-private" {
    subnet_id = "${aws_subnet.eu-west-3a-private.id}"
    route_table_id = "${aws_route_table.eu-west-3a-private.id}"
}

resource "aws_subnet" "eu-west-3a-public" {
    vpc_id = "${aws_vpc.main_vpc.id}"

    cidr_block = "${var.public_subnet_cidr}"
    availability_zone = "eu-west-3a"

    tags = {
        Name = "Public Subnet"
    }
}

resource "aws_subnet" "eu-west-3a-private" {
    vpc_id = "${aws_vpc.main_vpc.id}"

    cidr_block = "${var.private_subnet_cidr}"
    availability_zone = "eu-west-3a"

    tags = {
        Name = "Private Subnet"
    }
}

resource "aws_internet_gateway" "ig-main" {
    vpc_id = "${aws_vpc.main_vpc.id}"
}

Add rules to make sure that, for example, only your bastion host is reachable over SSH from the outside:

security_groups.tf

resource "aws_security_group" "nat" {
    name = "vpc_nat"
    description = "Can access both subnets"

    ingress {
        from_port = 80
        to_port = 80
        protocol = "tcp"
        cidr_blocks = ["${var.private_subnet_cidr}"]
    }
    ingress {
        from_port = 443
        to_port = 443
        protocol = "tcp"
        cidr_blocks = ["${var.private_subnet_cidr}"]
    }
    ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
    ingress {
        from_port = -1
        to_port = -1
        protocol = "icmp"
        cidr_blocks = ["0.0.0.0/0"]
    }
    egress {
        from_port = 80
        to_port = 80
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
    egress {
        from_port = 443
        to_port = 443
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
    egress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
    egress {
        from_port = -1
        to_port = -1
        protocol = "icmp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    vpc_id = "${aws_vpc.main_vpc.id}"

    tags = {
        Name = "NATSG"
    }
}

resource "aws_security_group" "web" {
    name = "vpc_web"
    description = "Allow incoming HTTP connections."

    ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["${var.vpc_cidr}"]
    }
    ingress {
        from_port = 80
        to_port = 80
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
    ingress {
        from_port = 443
        to_port = 443
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
    ingress {
        from_port = -1
        to_port = -1
        protocol = "icmp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    egress {
        from_port = 80
        to_port = 80
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
    egress {
        from_port = 443
        to_port = 443
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
    egress { # MySQL
        from_port = 3306
        to_port = 3306
        protocol = "tcp"
        cidr_blocks = ["${var.private_subnet_cidr}"]
    }

    vpc_id = "${aws_vpc.main_vpc.id}"

    tags = {
        Name = "WebServerSG"
    }
}

resource "aws_security_group" "db" {
    name = "vpc_db"
    description = "Allow incoming database connections."

    ingress { # MySQL
        from_port = 3306
        to_port = 3306
        protocol = "tcp"
        security_groups = ["${aws_security_group.web.id}"]
    }

    ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["${var.vpc_cidr}"]
    }
    ingress {
        from_port = -1
        to_port = -1
        protocol = "icmp"
        cidr_blocks = ["${var.vpc_cidr}"]
    }

    egress {
        from_port = 80
        to_port = 80
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
    egress {
        from_port = 443
        to_port = 443
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    vpc_id = "${aws_vpc.main_vpc.id}"

    tags = {
        Name = "DBServerSG"
    }
}

Input variables serve as parameters for a Terraform module, allowing aspects of the module to be customized without altering the module's own source code, and allowing modules to be shared between different configurations.

Three of them are not used in this deployment but could be useful:

  • aws_access_key
  • aws_secret_key
  • aws_key_path

vars.tf

variable "aws_access_key" {
    default = "ACCESS"
}
variable "aws_secret_key" {
    default = "SECRET"
}
variable "aws_key_path" {
    default = "~/.ssh/id_rsa.pub"
}
variable "aws_key_name" {
    default = "ansible"
}

variable "aws_region" {
    description = "EC2 Region for the VPC"
    default = "eu-west-3"
}

variable "amis" {
    description = "AMIs by region"
    default = {
        eu-west-3 = "ami-08755c4342fb5aede" #Red Hat Enterprise Linux 8 
    }
}

variable "vpc_cidr" {
    description = "CIDR for the whole VPC"
    default = "10.0.0.0/16"
}

variable "public_subnet_cidr" {
    description = "CIDR for the Public Subnet"
    default = "10.0.0.0/24"
}

variable "private_subnet_cidr" {
    description = "CIDR for the Private Subnet"
    default = "10.0.1.0/24"
}
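If you want to override one of these defaults without editing vars.tf, you can pass the value on the command line. A quick sketch (the key name is just an illustration):

$ terraform apply -var="aws_key_name=my-key"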

All files are available here: https://gitlab.com/aaurin/lamp

You can now launch your infrastructure managed by Terraform.

Provider and module initialization (AWS here):

$ terraform init

If you want to see what Terraform will deploy, you can plan your deployment first (not mandatory in our case):

$ terraform plan

Then deploy your LAMP infra:

$ terraform apply

...
...

aws_vpc.main_vpc: Creating...
aws_vpc.main_vpc: Still creating... [10s elapsed]
aws_vpc.main_vpc: Creation complete after 12s [id=vpc-0c45226d21c59070a]
aws_internet_gateway.ig-main: Creating...
aws_subnet.eu-west-3a-public: Creating...
aws_subnet.eu-west-3a-private: Creating...
aws_security_group.web: Creating...
aws_security_group.nat: Creating...
aws_subnet.eu-west-3a-private: Creation complete after 1s [id=subnet-04cd40add9de539dc]
aws_subnet.eu-west-3a-public: Creation complete after 1s [id=subnet-0d41d297b9ed3d325]
aws_internet_gateway.ig-main: Creation complete after 1s [id=igw-09faa5fd0eb66734d]
aws_route_table.eu-west-3a-public: Creating...
aws_route_table.eu-west-3a-public: Creation complete after 1s [id=rtb-0e182dc2f4256fd80]
aws_route_table_association.eu-west-3a-public: Creating...
aws_route_table_association.eu-west-3a-public: Creation complete after 0s [id=rtbassoc-0efc5f42b16f742ef]
aws_security_group.nat: Creation complete after 2s [id=sg-0f17e55a720cc32e9]
aws_instance.nat: Creating...
aws_security_group.web: Creation complete after 2s [id=sg-02b9aeddf0653d42c]
aws_instance.web-1: Creating...
aws_security_group.db: Creating...
aws_security_group.db: Creation complete after 2s [id=sg-01f95dcabb4f176b2]
aws_instance.db-1: Creating...
aws_instance.nat: Still creating... [10s elapsed]
aws_instance.web-1: Still creating... [10s elapsed]
aws_instance.db-1: Still creating... [10s elapsed]
aws_instance.nat: Still creating... [20s elapsed]
aws_instance.web-1: Still creating... [20s elapsed]
aws_instance.db-1: Still creating... [20s elapsed]
aws_instance.web-1: Creation complete after 23s [id=i-02658d8ceccbcd8e4]
aws_instance.nat: Creation complete after 23s [id=i-0a296ff78695c550a]
aws_eip.web-1: Creating...
aws_eip.nat: Creating...
aws_route_table.eu-west-3a-private: Creating...
aws_route_table.eu-west-3a-private: Creation complete after 1s [id=rtb-0215aaa607d739fce]
aws_route_table_association.eu-west-3a-private: Creating...
aws_eip.web-1: Creation complete after 1s [id=eipalloc-01a1f6f45b5de1323]
aws_eip.nat: Creation complete after 1s [id=eipalloc-0dd787a99bf35fdcf]
aws_route_table_association.eu-west-3a-private: Creation complete after 0s [id=rtbassoc-030ac110c46b505ff]
aws_instance.db-1: Still creating... [30s elapsed]
aws_instance.db-1: Still creating... [40s elapsed]
aws_instance.db-1: Creation complete after 43s [id=i-052fe5f0b1dd45dd6]

Apply complete! Resources: 16 added, 0 changed, 0 destroyed.

Once the apply is OK, we can install driftctl.

driftctl installation

Download the binary using curl or wget:

# Linux (amd64)
$ curl -L https://github.com/cloudskiff/driftctl/releases/latest/download/driftctl_linux_amd64 -o driftctl

# x86
$ curl -L https://github.com/cloudskiff/driftctl/releases/latest/download/driftctl_linux_386 -o driftctl

# macOS
$ curl -L https://github.com/cloudskiff/driftctl/releases/latest/download/driftctl_darwin_amd64 -o driftctl

Make it executable

$ chmod +x driftctl

Finally move it into your PATH:

$ sudo mv driftctl /usr/local/bin/
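You can then check that the binary is in place (the version shown is just a placeholder):

$ driftctl version
v0.x.x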

You can also use brew:

$ brew install driftctl

Or you can use a container:

$ docker run -t --rm \
  -v ~/.aws:/root/.aws:ro \
  -v $(pwd):/app:ro \
  -v ~/.driftctl:/root/.driftctl \
  -e AWS_PROFILE=non-default-profile \
  cloudskiff/driftctl scan

  • -v ~/.aws:/root/.aws:ro (optional) mounts your ~/.aws containing your AWS credentials and profile
  • -v $(pwd):/app:ro (optional) mounts your working dir containing the Terraform state
  • -v ~/.driftctl:/root/.driftctl (optional) prevents driftctl from downloading the provider at each run
  • -e AWS_PROFILE=non-default-profile (optional) exports the non-default AWS profile name to use
  • cloudskiff/driftctl:<VERSION_TAG> runs a specific driftctl tagged release

Now run your first check:

driftctl scan --from tfstate://terraform.tfstate

...
...

Found 139 resource(s)
 - 15% coverage
 - 21 covered by IaC
 - 118 not covered by IaC
 - 0 missing on cloud provider
 - 0/21 changed outside of IaC

Let's explain this output:

  • % coverage -- percentage of your infrastructure covered by IaC
  • covered by IaC -- number of resources managed by IaC
  • not covered by IaC -- number of resources found on the cloud provider but not managed by IaC
  • missing on cloud provider -- number of resources declared in IaC but not found on the remote
  • changed outside of IaC -- !! Drift !! number of changes on managed resources

We can see that I'm not alone on this account, so my coverage is far from full, but nothing managed by my IaC has changed.
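
Note that the state does not have to be a local file: driftctl can also read it from a backend. A hedged example with a state stored in S3 (the bucket and key are made up):

$ driftctl scan --from tfstate+s3://my-bucket/path/to/terraform.tfstate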

Now I will make a manual change: open SSH access from everywhere on the web security group.
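
Such an out-of-band change could be made through the web console or the AWS CLI, for example (a sketch; the security group ID comes from the apply output above, yours will differ):

$ aws ec2 authorize-security-group-ingress \
    --group-id sg-02b9aeddf0653d42c \
    --protocol tcp --port 22 --cidr 0.0.0.0/0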

Then I run the scan again:

driftctl scan --from tfstate://terraform.tfstate
...
Found changed resources:
  - Table: rtb-0215aaa607d739fce, Destination: 0.0.0.0/0 (aws_route):
    ~ InstanceOwnerId: "" => "836683081860" (computed)
    ~ NetworkInterfaceId: <nil> => "eni-0aea178a98f2cd0f5" (computed)
Found 140 resource(s)
 - 15% coverage
 - 21 covered by IaC
 - 119 not covered by IaC
 - 0 missing on cloud provider
 - 1/21 changed outside of IaC

We can now see the change, and in this case it could be a security vulnerability.

Conclusion

I really like this tool; it can be very helpful for catching changes that Terraform does not handle, or simply for keeping an eye on your infra.
You can add it to your CI tool and build workflows around the Driftctl output, as in the sketch below. Find all compatible CI systems here
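
A minimal sketch of what a CI step could look like, assuming AWS credentials and the state file are available in the job, and that driftctl exits with a non-zero code when drift is found (so the job fails on drift):

#!/bin/sh
# Fail the pipeline when drift is detected
set -e
driftctl scan --from tfstate://terraform.tfstate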
