How to create a flexible Dev Environment with Vagrant and Docker

Rolf Streefkerk - Jun 25 - Dev Community

Today we're discussing how you can create a fully automated, virtualized development environment. One that's customizable, and ready to use in minutes.

In this article, I'll show you how to set up such an environment using PHP and Laravel, though the principles can be applied to your preferred tech stack.

We'll dive into creating a robust setup powered by VirtualBox and Vagrant, featuring:

  • Apache Gateway
  • PHP with Apache for front-end
  • Laravel API
  • Redis Cache with Redis Commander
  • MySQL database with PHPMyAdmin

By the end of this guide, you'll have a secure, containerized environment accessible via specific routes, with core services in a private Docker network.

Furthermore, it’s very easy to connect Visual Studio Code to VirtualBox and start your programming workflow.

Let's get started!

How does it all work?

TLDR: Skip to step 4.A - Installation to get right into setting this up on your system.

There are 4 parts to this solution I’ll discuss today. I’ll finish with the Development workflow.

Vagrant apache deployment diagram

Would you like more in-depth articles on Apache, Docker, or Vagrant?

Let me know in the comments below, and connect with me on Twitter.


1. Apache as a Gateway (Reverse Proxy)

Apache 2 is an HTTP server with a modular setup. An HTTP server, in essence, serves files according to the HTTP protocol, of which two major versions are currently used across the web: HTTP/1.1 and HTTP/2.

Apache is typically used as an HTTP server to serve files; in this case, however, we’re using it primarily in proxy “mode”, specifically as a Reverse Proxy (also known as a Gateway).

The reverse proxy acts like a regular web server: it decides where to send each request and then returns the content as if it were the origin server.

Typical use-cases:

  • Load balancing, which is a topic for another day.
  • Providing access to servers behind a firewall. This is what we’re doing today. For example:
ProxyPass "/foo" "http://foo.example.com/bar"
ProxyPassReverse "/foo" "http://foo.example.com/bar"

ProxyPass maps a local path to a remote server. Apache treats this mapping as a “worker”: an object that holds its own connections and the configuration associated with them.

ProxyPassReverse ensures that the headers returned by the backend are rewritten to point to the Gateway path instead of the origin server.

These are the two basic directives for creating mappings that appear to originate from the Reverse Proxy. With the example above, a request for /foo/quux is forwarded to http://foo.example.com/bar/quux, and redirect headers in the response are rewritten back to the /foo path.

Now you need to map these to specific Locations.

<Location "/api/">
    ProxyPass http://api:80/
    ProxyPassReverse http://api:80/
</Location>

A Location directive operates on the URL space (the request paths), not on file-system paths.

In this example, api is a named service in the docker-compose file that exposes port 80.

Within a Docker network, you can reference containers by their name.

The api service needs to be accessible on the reverse proxy at the URL path /api/. That means you can access the API in your browser at http://your-ip/api/

To make sure that the api and the other Locations are available on port 80, we need to wrap the Location, ProxyPass, and ProxyPassReverse directives in a VirtualHost directive like this:

<VirtualHost *:80>
    ServerName ${SERVER_NAME}
    ProxyPreserveHost On

    # Laravel Frontend app server on root path
    ProxyPass / http://frontend/
    ProxyPassReverse / http://frontend/

    <Location "/api/">
        ProxyPass http://api:80/
        ProxyPassReverse http://api:80/
    </Location>

    # ... other directives ....
</VirtualHost>

The VirtualHost is a grouping based on an IP address (or the match-all wildcard *) and a port number. Within that grouping you can override global directives such as Location and Directory.

ProxyPreserveHost On: This directive ensures that the original Host header is passed to the backend server, which can be important for applications that rely on the Host header for their logic.
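Once the gateway container is up, a quick sanity check of these mappings from your host machine could look like this (a sketch; 192.168.56.10 is just a placeholder for your VM’s IP address):

# request headers only (-I) through the gateway on port 80
curl -I http://192.168.56.10/          # root path, proxied to the frontend container
curl -I http://192.168.56.10/api/      # /api/ Location, proxied to the api container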


2. Docker with Docker Compose

Fun note about their logo: it’s a whale carrying shipping containers, which matches the core idea of the product perfectly.

Docker is software that can package other software in a concept called a Docker Container. This container is portable across many different operating environments such as Linux and Windows.

Docker provides a way to isolate the container environment (the Guest machine) from the operating system (the Host machine).

This allows many different kinds of software and operating environments to run within the containers without interfering with the host machine.

What is Docker

As seen in this diagram, Docker relies heavily on the Host Operating System. The benefit is that containers can stay small, since much of the code and the work happens in the layers underneath them.

Docker Compose

Docker Compose is the “composition” document standard used to create a Docker network with one or more Docker containers.

Below is an example of a docker-compose YAML document. For both the gateway and db services we configure:

  • image: references an image, pulled from Docker Hub by default.
  • ports: is an “outside:inside” mapping, where outside means outside the Docker network (your host machine, or in this case the Virtual Machine). Here we use the same port on both sides.
  • expose: similar to ports, except the port is not available outside the Docker network.
  • volumes: are mounts, again as an “outside:inside” mapping. ./apache-config is the git repo directory where our httpd.conf lives; it’s mapped straight onto the Apache 2 configuration file inside the container.
  • depends_on: this service waits until the other named services are online.
  • environment: environment variables available inside the container, typically used to set passwords, ports, and usernames.
services:
  gateway:
    image: httpd:2.4
    container_name: gateway
    ports:
      - "80:80"
    volumes:
      - ./apache-config/httpd.conf:/usr/local/apache2/conf/httpd.conf
    depends_on:
      - frontend
      - api
      - redis
      - redis-ui
      - phpmyadmin
    environment:
      SERVER_NAME: ${APACHE_SERVER_NAME}

  db:
    image: mysql:5.7
    container_name: db
    expose:
      - "3306"
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - db-data:/var/lib/mysql

  # ... other services here

volumes:
  db-data:
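The ${...} placeholders are read from environment variables; with Docker Compose they are commonly supplied through a .env file that sits next to the docker-compose file and is picked up automatically. A minimal sketch with placeholder values (the variable names mirror the snippet above; the values are yours to choose):

# .env
APACHE_SERVER_NAME=localhost
MYSQL_ROOT_PASSWORD=change-me-root
MYSQL_DATABASE=laravel
MYSQL_USER=laravel
MYSQL_PASSWORD=change-me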

To run this file we can use: docker-compose up --build -d
This will: bring the containers online (up), build the images (--build), and run everything in the background (-d).

To take all of the containers offline, we simply type: docker-compose down

To manage containers easily from the CLI, I use dry. It provides an easy way to see statistics for all containers (memory and CPU usage), as well as a searchable log viewer, and much more.

Summary of useful Docker commands:

  • docker ps: show all running Docker containers.
  • docker-compose up --build -d: build the Docker images, bring the containers online, and run them in the background.
  • docker-compose down: take the containers offline.
  • docker-compose stop: stop the containers.
  • docker-compose restart <name>: restart the specified Docker container.
  • dry: run the Docker manager and monitoring/logs command-line utility.
    • F2: show all containers (stopped and running).
    • Select a container, Enter > Fetch logs > Enter > f to tail the logs.
    • Select a container, Enter > Fetch logs > 30m > f to show logs from the last 30 minutes and tail.
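A few more commands I find handy for this particular stack (a sketch; the container names match the compose file above):

docker-compose logs -f --tail=100 gateway    # follow the last 100 log lines of the gateway
docker exec -it db mysql -uroot -p           # open a MySQL shell inside the db container
docker exec -it gateway httpd -t             # syntax-check the Apache configuration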

3. Vagrant with VirtualBox

Vagrant can create complete development environments on Virtual Machines (VMs), in the cloud or on your local machine, with a simple workflow.

Vagrant provides consistent environments such that code works regardless of what kind of systems your team members use for their development, or creative, work.

Vagrant is, in my opinion, ideal for quickly setting up development environments that require different dependencies. And because it’s all in code, you can easily adapt the environments to match evolving requirements for your use cases.

For this solution we use a base image, generic/ubuntu2204, to build the Virtual Machine using a Vagrantfile.

A Vagrantfile is similar to a Dockerfile: it provides instructions that tell Vagrant how to create the virtual image (using the base box and provisioners) and what to run it on (a provider, in this case VirtualBox).

A quick rundown of what this all means, using a (shortened) example:

  • Vagrant.configure("2") denotes the configuration version we’re using: 2.
  • config.vm.box = "generic/ubuntu2204": the box specification, running Ubuntu 22.04.
  • config.vm.provider "virtualbox": we use VirtualBox to run the VM. This block also holds VirtualBox-specific settings such as the CPU count, memory, and name of the virtual machine.
  • config.vm.provision "shell", inline: we use bash scripts to install our environment.
  • config.vm.synced_folder: enables VirtualBox’s built-in folder sharing; requires the VirtualBox Guest Additions to work.
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|

  # Variables
  git_user_email = 'youremail@mail.com'

  # boxes at https://vagrantcloud.com/search.
  config.vm.define "docker-apache" do |dockerApache|
    config.vm.box = "generic/ubuntu2204"

    # forward port 80; add host_ip: "127.0.0.1" to restrict access to the local machine
    config.vm.network "forwarded_port", guest: 80, host: 80
    config.vm.network "public_network"

    # Share an additional folder to the guest VM.
    config.vm.synced_folder "./data", "/vagrant_data"

    config.vm.provider "virtualbox" do |vb|
    # Display the VirtualBox GUI when booting the machine
    # vb.gui = true

    # Customize the amount of memory on the VM:
      vb.memory = "8192"
      vb.name = "docker-apache"
      vb.cpus = 6
    end

    config.vm.provision "shell", inline: <<-SHELL
      sudo apt-get update
      sudo apt-get install -y apt-transport-https ca-certificates curl

      # run the following as the vagrant user; the heredoc terminator must start at column 0
      su - vagrant << 'EOF'
        # clone repo
        mkdir -p /home/vagrant/docker-apache
        cd /home/vagrant/docker-apache
        git clone https://github.com/rpstreef/docker-apache-reverse-proxy .
EOF
    SHELL
  end
end

To create a VM from a Vagrantfile we can simply run vagrant up. This downloads the box and then runs the provisioning step (the bash scripts) on the VirtualBox provider.

When it’s all finished, connect using vagrant ssh and start using it!

Summary of useful Vagrant commands

  • vagrant up this will create the Virtual Machine.
  • vagrant validate used to verify the Vagrantfile is semantically correct.
  • vagrant halt to stop the VM, use --force to shut it down immediately.
  • vagrant ssh connects to the VM via the command-line.
  • vagrant destroy completely remove the Virtual Machine.
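A few more standard sub-commands that come in handy while iterating on the Vagrantfile (a sketch):

vagrant status              # show the state of the machine defined in the Vagrantfile
vagrant reload              # restart the VM and re-apply the Vagrantfile settings
vagrant reload --provision  # restart and re-run the provisioning scripts
vagrant ssh-config          # print the SSH configuration used by "vagrant ssh"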

4. Development workflow

A - Installation

Now that it’s clear how the solution parts work, let’s go and install it:

  1. git clone https://github.com/rpstreef/flexible-dev-environment into your local projects directory.
  2. Install Vagrant using these instructions.
  3. Install VirtualBox using these instructions.
  4. Edit the Vagrantfile:
    1. Do you want to use GitHub on the Virtual Machine?
      1. Change the git_user_email and git_user_name values.
      2. There’s an additional step to complete after the VM is installed. See D - Updating code with GitHub below on how to set up your Personal Access Token (PAT).
    2. Check the machine settings in the config.vm.provider "virtualbox" block. Adjust vb.memory = "8192" (memory in MB) and vb.cpus = 6 (number of processors).
  5. From the Git folder, run vagrant up. This will set up the Virtual Machine with VirtualBox.
    • When asked which network, choose the adapter with internet access.
  6. Connect to the Virtual Machine: run vagrant ssh.
    • Take note of the IP address and use it to connect with your browser. On the CLI, type ip address and look for the network adapter name you chose earlier.
      1. For the Web Landing-page: http://virtual-machine-ip/
      2. For the Redis Commander UI: http://virtual-machine-ip/cache
      3. For PHPMyAdmin: http://virtual-machine-ip/phpmyadmin
      4. For the Laravel API: http://virtual-machine-ip/api
  7. When connected to the VM:
    • Execute dry on the command line and you should see several containers running.
    • Refer to the summary of useful Docker commands section above for more guidance with Docker commands, and to the summary of Vagrant commands.

B - Accessing the web application services

The Apache Gateway provides access via port 80 with your browser to only the parts that need to be exposed to the outside:

  • Front-end → http://ip-address/
  • Redis Commander → http://ip-address/cache/
  • Laravel API → http://ip-address/api/
  • PHPMyAdmin → http://ip-address/phpmyadmin/

That means the Redis and MySQL services are not directly accessible from outside the Docker network (they stay on the private network).
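You can verify this isolation from your host machine: connections to the gateway port succeed, while the database and cache ports are refused. A sketch, assuming nc (netcat) is installed and 192.168.56.10 stands in for your VM’s IP:

nc -zv 192.168.56.10 80     # succeeds: the Apache gateway is exposed
nc -zv 192.168.56.10 3306   # fails: MySQL is only exposed inside the Docker network
nc -zv 192.168.56.10 6379   # fails: Redis is only exposed inside the Docker network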

C - Developing with VSCode

1. Configure SSH Access

Configuring SSH access to the VM with Vagrant is really easy: run vagrant ssh-config and copy-paste the output into your ~/.ssh/config file.
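If you prefer a one-liner over copy-pasting, appending the generated entry from the Vagrant project folder works too (the Host entry takes the machine name from the Vagrantfile, docker-apache):

vagrant ssh-config >> ~/.ssh/config    # append the generated Host entry to your SSH config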

From the command line, anywhere, you can now connect to your VM with ssh docker-apache.

To get the IP address of your VM, run ip address inside it, check which adapter you used for network access, and take note of it.

2. Connect to VirtualBox with VSCode

When this works, we can connect VSCode:

  1. Open up VSCode
  2. Click on the lower-left icon, choose Connect Current Window to Host, then enter docker-apache. This name comes from the configuration we did in step 1.
  3. This will install the VSCode server files on the virtual machine.
  4. Open the directory /home/vagrant/docker-apache.
  5. Choose “Yes, I trust the authors” when asked.

To get GitHub to work, we just need to add our Personal Access Token in the next step.

D - Updating code with GitHub

To set up the Personal Access Token for GitHub, do the following:

  1. Create a PAT here: https://github.com/settings/tokens
  2. For most cases, repository access alone is sufficient (unless you also want to use GitHub Actions):
    • Check the repo checkbox.
  3. Execute the following on the command-line to store your Personal Access Token for GitHub (replace <your-username> and <personal-access-token> with your own values, and the hostname if you use a different Git provider):
git credential-store store <<EOF
protocol=https
host=github.com
username=<your-username>
password=<personal-access-token>
EOF

The credentials are stored in plain text in ~/.git-credentials (view them with cat ~/.git-credentials).
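For git to actually read that file, the store credential helper needs to be enabled (the provisioning scripts may already do this). Afterwards, any authenticated operation against GitHub should succeed without a password prompt; a quick check, with a placeholder repository path:

git config --global credential.helper store                       # make git read ~/.git-credentials
git ls-remote https://github.com/<your-username>/<your-repo>.git  # should list refs without prompting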

E - (Extra) GitOps with GitHub Actions

With GitHub Actions you can automate, for example:

  • docker-compose deployment to your favorite public cloud,
  • converting docker-compose to Kubernetes and then deploying to your cluster,
  • or other automation based on the standard workflows offered by GitHub. To get started:
    1. Fork my repository: https://github.com/rpstreef/flexible-dev-environment
    2. Then go to Actions; you’ll be presented with a list of all kinds of ready-made automations.

If you’d like more details on how to automatically deploy using GitHub Actions (GitOps), give me a shout on Twitter or in the comments.


Conclusion

Setting up a virtualized development environment might seem daunting at first, but the benefits are well worth the effort. With this setup, you've gained a powerful, flexible, and secure platform for your development work.

I'm curious to hear about your experiences down in the comments:

  • How does this compare to your current development workflow?
  • Do you see yourself adopting a similar setup, or have you already implemented something like this?
  • What other tools or services would you add to enhance this environment?

If you found this guide helpful, consider following me on Twitter for more tech tips and discussions.

For IT professionals looking to balance career growth with personal well-being, I invite you to join our community, The Health & IT Insider. We cover a range of topics from DevOps and software development to maintaining a healthy lifestyle in the tech industry.

Thanks for reading, and see you in the next one!
