Installing Gloo Gateway on HashiCorp Nomad

Motivation

HashiCorp Nomad is a popular workload scheduler that can be used in place of, or in combination with, Kubernetes as a way of running long-lived processes on a cluster of hosts.

Nomad supports native integration with Consul and Vault, making configuration, service discovery, and credential management easy for application developers.

This guide shows how to run Gloo as an Ingress/Gateway for the Nomad platform, using Consul for service discovery/configuration and Vault for secret storage.

Installing on Nomad

Installation on Nomad requires the following tools: Docker, Nomad, Consul, Vault, and Levant.

Note: This Nomad job is experimental and is designed to be used with a specific Vault + Consul + Nomad setup.

Inputs for the job can be tweaked by modifying the variables/variables-*.yaml files.

Download the Installation Files

This tutorial uses files stored on the Gloo GitHub repository.

To install on Nomad, clone the repository:

git clone --branch v0.18.23 https://github.com/solo-io/gloo
cd gloo/install/nomad

Installation files live in the install/nomad directory.

The Gloo Nomad job lives at install/nomad/jobs/gloo.nomad. Levant variables for the Gloo Nomad job live at install/nomad/variables/variables-{mac,linux}.yaml.

Run the complete Demo

If your environment is set up with Docker, Nomad, Consul, Vault, and Levant, you can simply run demo.sh to run a local demo of Gloo routing to the PetStore Nomad job.
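
Before running demo.sh, it can help to confirm that each of those binaries is available. A minimal sketch, assuming the tools are installed under their standard names on your PATH:

for tool in docker nomad consul vault levant; do
    command -v "$tool" >/dev/null || echo "missing: $tool"
done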

Running Nomad, Consul and Vault

Running Nomad Locally

If you’ve installed Nomad/Consul/Vault locally, you can use launch-consul-vault-nomad-dev.sh to run them on your local system.
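
Once the script has started the three agents, you can confirm they are reachable before deploying any jobs. A quick sketch using each tool's standard status command (assuming default local addresses):

nomad server members    # should list the local dev server as alive/leader
consul members          # should list the local Consul agent
vault status            # should report the dev Vault as initialized and unsealed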

If running locally (without Vagrant) on macOS, you will need to install the Weave Net Docker Plugin:

docker swarm init # if your docker host is not currently a swarm manager
docker plugin install weaveworks/net-plugin:latest_release --grant-all-permissions
docker plugin enable weaveworks/net-plugin:latest_release
docker network create --driver=weaveworks/net-plugin:latest_release --attachable weave
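
To confirm the plugin and network were created, you can list them with Docker:

docker plugin ls                        # weaveworks/net-plugin should show as enabled
docker network ls --filter name=weave   # the attachable "weave" network should be listed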

If running locally on Linux, you'll need to disable SELinux in order to run the demo (or grant Docker containers permission to access / on their filesystem):

sudo setenforce 0
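
You can verify that SELinux is no longer enforcing with:

getenforce    # should print "Permissive"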

Running Nomad Using Vagrant

The provided Vagrantfile will run Nomad, Consul, and Vault inside a VM on your local machine. First, download and install HashiCorp Vagrant.

Then run the following command:

vagrant up

Ports will be forwarded to your local system, allowing you to access services on the following ports (on localhost):

| service | port |
| ------- | ---- |
| nomad | 4646 |
| consul | 8500 |
| vault | 8200 |
| gloo/http | 8080 |
| gloo/https | 8443 |
| gloo/admin | 19000 |
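
Once vagrant up completes, a quick way to confirm the forwards are in place is to hit each service's status endpoint from your host. A minimal sketch using the standard HTTP APIs on the ports above:

curl -s localhost:4646/v1/status/leader    # Nomad
curl -s localhost:8500/v1/status/leader    # Consul
curl -s localhost:8200/v1/sys/health       # Vault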

Installing Gloo on Nomad (Linux)

levant deploy \
    -var-file variables/variables-linux.yaml \
    -address <nomad-host>:<nomad-port> \
    -consul-address <consul-host>:<consul-port> \
    jobs/gloo.nomad

If running locally or with Vagrant, you can omit the address flags:

levant deploy \
    -var-file variables/variables-linux.yaml \
    jobs/gloo.nomad
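
After Levant reports a successful deployment, you can inspect the job's allocations with Nomad (this assumes the job registers under the name gloo, as defined in jobs/gloo.nomad):

nomad job status gloo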

Installing Gloo on Nomad (Mac)

This option assumes you have Nomad, Consul, Vault, and Docker installed and running on your local macOS workstation.

levant deploy \
    -var-file variables/variables-mac.yaml \
    jobs/gloo.nomad
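
Once the job is running, the proxy's Envoy admin interface should be reachable on the gloo/admin port (19000, per the table above). A quick readiness check, assuming it is exposed on localhost:

curl -s localhost:19000/ready    # Envoy's admin readiness endpoint; prints LIVE when the proxy is up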

Example

To run a test example, let's deploy the PetStore application to Nomad as well:

Deploy the PetStore on Nomad (Linux)

levant deploy \
    -var-file variables/variables-linux.yaml \
    -address <nomad-host>:<nomad-port> \
    -consul-address <consul-host>:<consul-port> \
    jobs/petstore.nomad

If running locally or with Vagrant, you can omit the address flags:

levant deploy \
    -var-file variables/variables-linux.yaml \
    jobs/petstore.nomad

Deploy the PetStore on Nomad (Mac)

This option assumes you have Nomad, Consul, Vault, and Docker installed and running on your local macOS workstation.

levant deploy \
    -var-file variables/variables-mac.yaml \
    jobs/petstore.nomad
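
To verify that the PetStore is running and has registered with Consul for service discovery, check both schedulers (this assumes the job name petstore from jobs/petstore.nomad):

nomad job status petstore    # the allocation should be running
consul catalog services      # "petstore" should appear in the service list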

Create a Route to the PetStore

We can now use glooctl to create a route to the PetStore app we just deployed:

glooctl add route \
    --path-prefix / \
    --dest-name petstore \
    --prefix-rewrite /api/pets \
    --use-consul
{"level":"info","ts":"2019-08-22T17:15:24.117-0400","caller":"selectionutils/virtual_service.go:100","msg":"Created new default virtual service","virtualService":"virtual_host:<domains:\"*\" > status:<> metadata:<name:\"default\" namespace:\"gloo-system\" > "}
+-----------------+--------------+---------+------+---------+-----------------+--------------------------------+
| VIRTUAL SERVICE | DISPLAY NAME | DOMAINS | SSL  | STATUS  | LISTENERPLUGINS |             ROUTES             |
+-----------------+--------------+---------+------+---------+-----------------+--------------------------------+
| default         |              | *       | none | Pending |                 | / -> gloo-system.petstore      |
|                 |              |         |      |         |                 | (upstream)                     |
+-----------------+--------------+---------+------+---------+-----------------+--------------------------------+

The --use-consul flag tells glooctl to write configuration to Consul's key-value store.
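
If you'd like to inspect what was written, you can list the keys that glooctl created. A sketch assuming Gloo's default Consul key prefix of gloo:

consul kv get -keys gloo/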

Finally, curl the Gateway Proxy:

curl <nomad-host>:8080/

If running on macOS or with Vagrant:

curl localhost:8080/

If running on Linux, use the host's IP on the docker0 interface:

curl 172.17.0.1:8080/
[{"id":1,"name":"Dog","status":"available"},{"id":2,"name":"Cat","status":"pending"}]

Congratulations! You’ve successfully deployed Gloo to Nomad and created your first route.

Most of the existing Gloo tutorials can be reused with Nomad; however, glooctl commands should be run with the --use-consul flag.