Why We Chose NGINX + HashiStack Over Kubernetes for Our Service Discovery Needs

Athreya aka Maneshwar - Oct 6 - Dev Community

We recently switched from Kubernetes to Nomad to manage our infrastructure. At first, with two nodes and multiple services, we had a hard time getting request routing to work reliably.
In this post, I’ll walk through how we built an efficient, low-cost service discovery solution for our infrastructure, and why it could benefit others facing similar routing issues.

Spoiler: You can achieve smooth results without needing NGINX Plus, thanks to NGINX’s robust features and the power of open-source modules.

The Routing Problem: A Snapshot of Our Setup

At the core of our infrastructure lies a typical setup: a browser making requests to our server, an NGINX reverse proxy forwarding those requests, and Nomad managing the services on multiple nodes.

Initially, we had hardcoded the node IPs into our NGINX configuration, which caused a major problem when services were redeployed to different nodes.

Every redeployment required manual NGINX configuration updates.
This quickly became unsustainable as the number of services grew.

That’s when we decided to integrate service discovery into our stack.

Service Discovery: Why It Matters

Service discovery is the process of automatically detecting services on a network.

In a dynamic, multi-node setup like ours, services constantly shift between nodes and deployments. Without proper service discovery, it’s impossible to route requests to the right service on the right node.

Without it, a request arriving at our server might hit the wrong service or, worse, fail completely because NGINX has no idea where the service is running.

Nomad alone doesn’t solve this issue. That’s where Consul comes in.

Consul tracks the location and port of each service deployed via Nomad, and NGINX uses this data to ensure requests reach their intended destinations. This is key if you want scalable, robust routing without hardcoding IP addresses or relying on static configurations.
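
To make this concrete, here is a rough sketch of the kind of service block a Nomad job can declare; Nomad registers it in Consul along with the node address, the allocated port, and a health check. The health-check path is a placeholder, and this is not our exact job file.

service {
  name = "echo-server-lovestaco-com"  # the name NGINX will later look up in Consul
  port = "http"                       # label of the dynamically allocated port

  check {
    type     = "http"
    path     = "/health"              # hypothetical health endpoint
    interval = "10s"
    timeout  = "2s"
  }
}

Once a job with a stanza like this is running, Consul always knows which node and port each healthy instance lives on, which is exactly the data NGINX needs for routing.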

Why Should You Care?

This isn’t just a neat technical trick; it’s a necessary move if you want to avoid downtime by making sure requests go to healthy services.

Whether you’re a startup or running a large-scale app, the right service discovery mechanism helps reduce complexity, improve reliability, and keep your infrastructure flexible.

Think of it like moving from a fragile system to something far more robust without the need for heavyweight orchestration like Kubernetes.

NGINX and Service Discovery: Exploring the Options

While NGINX offers built-in service discovery features, they are part of NGINX Plus, the enterprise version.

We opted to explore open-source alternatives and found a pre-built open-source NGINX module that allows NGINX to retrieve service location data from Consul.

Here’s why we made this choice:

  1. Cost Considerations: NGINX Plus requires a paid subscription, which means ongoing license management.
  2. Feature Set: NGINX Plus comes with a broad set of features, but many of them were unnecessary for our specific use case.
  3. Full Control: By using a purely FOSS solution, we maintain 100% control over our infrastructure, without relying on external enterprise solutions.

Why This Solution Worked

We’re now serving all internal and customer-facing Hexmos apps using this custom NGINX and HashiStack setup. The custom NGINX build is necessary because of legacy configurations we carried over from Kubernetes, which makes our case particularly relevant for others facing a similar transition.

A lot of smaller teams are using NGINX with PM2 to manage their processes. While that works, it doesn’t scale easily if you’re trying to handle multiple nodes or containers.

For teams using NGINX+PM2, moving to NGINX + HashiStack is a more robust and flexible solution—a great fit for startups looking for scalability without the complexity of Kubernetes.

In fact, many startups are likely using PM2, and very few truly need Kubernetes.

Moving to NGINX+HashiStack

Larger organizations like Zerodha and Cloudflare are using Nomad to manage their infrastructure. Both companies have substantial setups but avoid Kubernetes, showing that Nomad + Consul can scale effectively without the overhead of Kubernetes.

For startups, HashiStack is like PM2 on steroids, providing multi-node and Docker control. It lets you easily manage different workloads, whether binaries or Docker containers, across multiple nodes, while staying lightweight enough for smaller operations.

Kubernetes is often overkill unless you’re running at a very large scale. HashiStack with custom NGINX offers a much simpler, cost-effective, and scalable alternative.
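
To give a feel for what that looks like in practice, here is a minimal, hypothetical Nomad job that mixes a Docker container and a plain binary; the image, binary path, and counts are placeholders rather than our production setup.

job "mixed-workloads" {
  datacenters = ["dc1"]

  group "web" {
    count = 2                  # Nomad spreads these allocations across available nodes

    task "api" {
      driver = "docker"        # containerized workload
      config {
        image = "nginx:1.24"
      }
    }

    task "worker" {
      driver = "exec"          # plain binary on the host, no container required
      config {
        command = "/usr/local/bin/worker"   # placeholder binary path
      }
    }
  }
}

A single binary (nomad) schedules both kinds of workload; there is no separate container platform to operate.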


Our transition from Kubernetes to Nomad was eye-opening. Here’s why this setup could be a good fit for teams considering a similar move:

  • Simplicity

    • HashiStack + NGINX: Lightweight and easy to manage with just two binaries—Nomad (orchestration) and Consul (service discovery). The custom NGINX module integrates seamlessly without complex setups.
    • Kubernetes: A full-featured but complex platform, requiring numerous services and configurations. Often needs a dedicated team for ongoing management.
  • Flexibility and Scale

    • HashiStack + NGINX: Supports a range of workloads (containers, binaries, VMs) and scales smoothly across nodes and regions. Ideal for startups or teams seeking flexible deployment management.
    • Kubernetes: Excels in container-heavy environments but can be overkill for smaller setups. Its complexity makes scaling harder to manage.
  • Cost Efficiency and Operational Effort

    • Both HashiStack + NGINX and Kubernetes are open-source, offering flexibility without upfront licensing fees. However, their cost efficiency varies when factoring in operational complexity and labor hours.
    • HashiStack + NGINX: Free and open-source, avoiding enterprise license costs (like NGINX Plus). Easier to set up and maintain, making it a cost-effective solution for smaller teams with limited DevOps resources.
    • Kubernetes: Also open-source, but it often requires additional tools (e.g., Ingress controllers), increasing operational complexity. Its steep learning curve and management demands can lead to higher labor costs.

Options for Service Discovery Integration with NGINX

When comparing Nomad's template stanza with Consul Template, the choice largely depends on your use case, but both have their strengths and challenges. Let’s break down the pros and cons of each approach:

1. Nomad Template Stanza

  • Usage: The template stanza in Nomad is often used for injecting dynamic content (like load balancer configs) directly into tasks. It relies heavily on the integration with Consul to fetch service details and generate configurations dynamically.
  • Pros
    • Tight integration: Works seamlessly with Nomad jobs and Consul service discovery. It automatically reconfigures services when Nomad or Consul detects changes.
    • No extra processes: Since it's native to Nomad, you don’t need to run a separate daemon for templates to update.
    • Signal-based reload: Can signal the containerized service (e.g., NGINX) to reload configurations on updates (SIGHUP signal).
    • All-in-one Job Spec: Everything is packed into the same Nomad job file (code, template logic, service configuration), which could simplify management for some.
  • Cons
    • Complexity: The inline template can get quite complex and difficult to maintain, especially as your configuration grows. Writing Nomad templates with complex range statements to handle service discovery (like the upstream block for NGINX) quickly becomes cumbersome. For example, {{ range service "echo-server" }} ... {{ else }}server 127.0.0.1:65535;{{ end }} gets tricky for large applications; a fuller sketch follows this list.
    • Limited portability: The template configuration is tied to Nomad’s job files, which can make it harder to migrate or adapt to environments where Nomad is not in use.
    • Steeper learning curve: The embedded logic in the template stanza can feel overwhelming. For newcomers, this can make understanding and debugging more difficult.
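
Here is a minimal sketch of what that inline approach can look like, assuming a hypothetical job that runs NGINX as a Docker task and renders an upstream block from Consul’s view of echo-server; the names and paths are illustrative, not our production configuration.

job "edge-proxy" {
  datacenters = ["dc1"]

  group "proxy" {
    task "nginx" {
      driver = "docker"

      config {
        image   = "nginx:1.24"
        volumes = ["local/upstreams.conf:/etc/nginx/conf.d/upstreams.conf"]
      }

      template {
        destination   = "local/upstreams.conf"
        change_mode   = "signal"    # ask Nomad to signal the task instead of restarting it
        change_signal = "SIGHUP"    # NGINX reloads its configuration on SIGHUP
        data          = <<EOF
upstream echo_server {
  {{ range service "echo-server" }}server {{ .Address }}:{{ .Port }};
  {{ else }}server 127.0.0.1:65535;  # no healthy instances: fail fast instead of crashing
  {{ end }}
}
EOF
      }
    }
  }
}

Everything lives in one job file, which is convenient, but the template body grows quickly once you add more services and options.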

2. Consul Template Daemon

  • Usage: Consul Template is a standalone daemon that fetches data from Consul, Nomad, or Vault and renders it into templates, offering more flexibility for updating service configurations. It can be used independently or alongside Nomad; a minimal sketch follows this list.
  • Pros

    • Separation of concerns: The configuration and template management are decoupled from Nomad, so you can manage templates independently. This is useful when you have multiple services and configurations that need to be updated based on Consul data.
    • Powerful templating features: Consul Template can handle more complex scenarios and logic than the Nomad template stanza due to its broader templating syntax.
    • Run custom commands: It can run any arbitrary command after rendering a template (like restarting a service), offering more flexibility in how you manage updates.
    • Cross-system: Consul Template can be used for other systems as well (e.g., Vault or just plain Consul), making it more versatile and portable.
  • Cons

    • Extra daemon: You need to run an additional process (consul-template) which adds operational overhead.
    • Manual setup and management: It requires setting up configuration and managing the lifecycle of the daemon. You’ll also need to configure reload logic manually, which could be overkill for smaller systems.
    • Reloading complexity: You have to configure signals or restart logic to handle service restarts correctly, and incorrect configurations could lead to service downtime or stale configurations.
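
As a point of comparison, here is a minimal sketch of how the standalone daemon might be wired up; the paths and addresses are assumptions for illustration rather than our actual configuration.

# Hypothetical consul-template configuration, e.g. /etc/consul-template.d/nginx.hcl
consul {
  address = "127.0.0.1:8500"   # local Consul agent
}

template {
  source      = "/etc/nginx/templates/upstreams.conf.ctmpl"  # Go-template source file
  destination = "/etc/nginx/conf.d/upstreams.conf"           # rendered NGINX config
  command     = "nginx -s reload"                            # command run after each render
}

The .ctmpl file would contain the same {{ range service ... }} block shown in the previous sketch; the daemon watches Consul and re-renders the file whenever the service catalog changes.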

3. DNS Service Discovery with NGINX Plus

We wish we could tell you more about NGINX Plus, but it’s a paid tool and we haven’t had a chance to try it. From what we’ve heard, it’s a really smooth experience: it automatically keeps track of where your services are and sends traffic to the right places. If you’re looking for a hassle-free solution and don’t mind spending a bit extra, NGINX Plus might be a great fit.

4. NGINX’s ngx_http_consul_backend_module

ngx_http_consul_backend_module is an NGINX add-on that we’ve found incredibly useful for establishing a direct connection between NGINX and Consul. The module uses the official Consul Go API client to discover and route to healthy services efficiently.


  • Pros

    • No need for NGINX reloads: Since NGINX queries the Consul Go API client for healthy services on each request, there’s no need to reload NGINX whenever a service moves between nodes or when new instances are added.
    • Simplified service discovery: The module routes each request directly through Consul, ensuring traffic is always directed to healthy services. NGINX fetches the healthy backends without needing custom health checks, external scripts, or manual intervention.
    • Improved reliability: Since the Consul backend only provides information about healthy hosts, there is no risk of requests being sent to dead or unhealthy services.
    • Efficient connection pooling: By using the official Consul Go API client, the module benefits from efficient connection management, contributing to faster and more reliable service discovery.
    • Familiar configuration interface: The setup with Consul and NGINX is relatively straightforward, and familiar configuration directives (like proxy_pass and $backend) make it easy to integrate into existing NGINX configurations.
  • Cons

    • Need to rebuild NGINX from source: The biggest downside is that you need to rebuild NGINX from source with this module. This adds an extra step to your deployment process and makes updates or migrations slightly more cumbersome. If you’re using packaged NGINX versions from repositories, this could be a hassle.
    • Maintenance overhead: Rebuilding from source means you’ll need to maintain your own version of NGINX, handle upgrades, and ensure compatibility with other NGINX modules you may want to use.

Workflow

  1. A request arrives at NGINX and matches a location block that includes the consul directive:
    location / {
        consul $backend echo-server-lovestaco-com;
        add_header X-Debug-Backend $backend;
        proxy_pass http://$backend;
    }
  2. NGINX then calls the ngx_http_consul_backend function, providing it with two pieces of information.
  • The first piece of information is a variable where the result will be stored (for example, $backend).
  • The second piece of information is the name of the Consul service to which the request should be routed (like echo-server-lovestaco-com).
  3. The ngx_http_consul_backend function uses dlopen to load the shared C library (the .so file) and calls the Go function defined within that library.

  4. This Go function interacts with Consul using the official API client library. It gathers a list of available IP addresses and selects one to return.

  5. The chosen IP address is sent back to the ngx_http_consul_backend function and assigned to $backend.

  6. Finally, NGINX's built-in proxy_pass directive forwards the traffic to the selected host.

(Diagram: the flow of a request using Consul)

Step-by-Step Guide on How We Made It Work by Rebuilding NGINX from Source

1. Install the Essential Build Tools

apt-get -yqq install build-essential curl git libpcre3 libpcre3-dev libssl-dev zlib1g-dev

2. Download and Extract NGINX from Source

cd /tmp
curl -sLo nginx.tgz https://nginx.org/download/nginx-1.24.0.tar.gz
  • Extract the downloaded tarball to access the NGINX source code
tar -xzvf nginx.tgz

3. Download and Extract the NGINX Development Kit (NDK)

  • Download the ngx_devel_kit module, which is required for building the backend.
curl -sLo ngx_devel_kit-0.3.0.tgz https://github.com/simpl/ngx_devel_kit/archive/v0.3.0.tar.gz
tar -xzvf ngx_devel_kit-0.3.0.tgz

4. Clone the ngx_http_consul_backend_module Repository

git clone https://github.com/hashicorp/ngx_http_consul_backend_module.git /go/src/github.com/hashicorp/ngx_http_consul_backend_module

5. Change Ownership of the NGINX Extensions Directory

sudo mkdir -p /usr/local/nginx/ext/   # create the extensions directory if it doesn't exist yet
sudo chown -R $(whoami):$(whoami) /go/src/github.com/hashicorp/ngx_http_consul_backend_module
sudo chown -R $(whoami):$(whoami) /usr/local/nginx/ext/

6. Tidy Go Modules

cd /go/src/github.com/hashicorp/ngx_http_consul_backend_module
go mod tidy

7. Compile the Go Code as a Shared C Library That NGINX Will Dynamically Load

  • Set the CGO flags to include the ngx_devel_kit directory
CGO_CFLAGS="-I /tmp/ngx_devel_kit-0.3.0/src" \
go build \
  -buildmode=c-shared \
  -o /usr/local/nginx/ext/ngx_http_consul_backend_module.so \
  ./ngx_http_consul_backend_module.go
  • This compiles the shared library to /usr/local/nginx/ext/ngx_http_consul_backend_module.so, which NGINX will load at runtime.

8. Configure NGINX with Required Paths and Modules

  • To add a module during the NGINX build process, use the following configuration command
cd /tmp/nginx-1.24.0

CFLAGS="-g -O0" \
./configure \
  --with-debug \
  --prefix=/etc/nginx \
  --sbin-path=/usr/sbin/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --pid-path=/var/run/nginx.pid \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --add-module=/tmp/ngx_devel_kit-0.3.0 \
  --add-module=/go/src/github.com/hashicorp/ngx_http_consul_backend_module

Common Configuration Options

  • --prefix=/etc/nginx: Installation directory for NGINX binaries and configuration files.
  • --sbin-path=/usr/sbin/nginx: Path to the NGINX binary executable.
  • --conf-path=/etc/nginx/nginx.conf: Path to the main NGINX configuration file.
  • --pid-path=/var/run/nginx.pid: Path to the NGINX process ID file.
  • --error-log-path=/var/log/nginx/error.log: Path to the NGINX error log file.
  • --http-log-path=/var/log/nginx/access.log: Path to the NGINX access log file.
  • Add other desired built-in modules with --with-<module>_module.
  • Make sure to include an --add-module option for each third-party module you want to compile statically into NGINX.

9. Build and Install NGINX

make
sudo make install

10. Verify NGINX Installation and Configuration

/usr/sbin/nginx -V

Hardcoded Backend vs. Consul-driven Backend

Let's compare two scenarios

1. Hardcoded Backend

This is the traditional approach: you manually specify the IP address and port of the backend server in your NGINX configuration. Here's an example:

server {
  listen        80;
  server_name   one.example.com www.one.example.com;

  location / {
    proxy_pass     http://127.0.0.1:8080/;  # Hardcoded IP and port

    proxy_set_header  Host      $host;
    proxy_set_header  X-Real-IP  $remote_addr;
  }
}

This approach has limitations

  • Static Configuration: If the backend server IP or port changes, you need to manually update the NGINX configuration and reload NGINX.
  • Scalability Issues: Manually managing configurations becomes cumbersome as your infrastructure grows.

2. Consul-driven Backend with ngx_http_consul_backend_module

The ngx_http_consul_backend_module simplifies backend management by leveraging Consul's service discovery capabilities. Here's how it works:

  • Consul Service Listing: First, list the available services registered in Consul using the consul catalog services -tags command. This will display service names and tags for easier identification.
ubuntu@master:~$ consul catalog services -tags
consul           consul
one-example-com   one-example-com,primary
dns              primary
echo-server-1
nomad            http,rpc,serf
nomad-client     http
python-http-server  http,python-http-server
  • NGINX Configuration: Update your NGINX configuration to utilize the consul directive within the location block. This directive retrieves the healthy backend server information for the specified service name and stores it in a variable (e.g., $backend).
server {
  listen        80;
  server_name   one.example.com www.one.example.com;

  location / {
    consul        $backend one-example-com;  # Retrieve backend from Consul
    proxy_pass     http://$backend/;          # Use retrieved backend address

    proxy_set_header  Host      $host;
    proxy_set_header  X-Real-IP  $remote_addr;
  }
}

Benefits of Consul-driven Backend

  • Dynamic Configuration: NGINX automatically discovers healthy backend servers registered in Consul, eliminating the need for manual configuration updates.
  • Scalability: As your infrastructure grows with more backend servers, NGINX seamlessly adjusts to route traffic to healthy instances.

Additional Notes

  • Remember to install and configure ngx_http_consul_backend_module for this approach to work.
  • Refer to the module's documentation for advanced configuration options.

By employing ngx_http_consul_backend_module, you can achieve a dynamic and scalable backend management system for your NGINX server, simplifying configuration and enhancing overall application reliability.

Conclusion: A Lightweight, Flexible Solution

Switching from Kubernetes to Nomad allowed us to streamline our deployments, but it also required better service discovery to ensure smooth routing between services.

By using Consul and an open-source NGINX module, we avoided the complexity and cost of NGINX Plus while still getting an efficient, scalable solution.

For anyone currently running NGINX with PM2 or those looking for a simpler alternative to Kubernetes, NGINX with the HashiStack (Nomad + Consul) is a flexible, powerful, and cost-effective solution.
It’s lightweight, robust, and much easier to manage at scale.

If you're exploring service discovery for a similar setup, give it a try—it might be the neat solution you need.
