HinD - Hashistack-in-Docker

       ___                                              
      /\  \                      ___                    
      \ \--\       ___          /\  \        __ __      
       \ \--\     /\__\         \ \--\     / __ \__\    
   ___ /  \--\   / /__/     _____\ \--\   / /__\ \__\   
  /\_ / /\ \__\ /  \ _\    / ______ \__\ / /__/ \ |__|  
  \ \/ /_ \/__/ \/\ \ _\__ \ \__\  \/__/ \ \__\ / /__/  
   \  /__/         \ \/\__\ \ \__\        \ \__/ /__/   
    \ \ _\          \  /_ /  \ \__\        \ \/ /__/    
     \ \__\         / /_ /    \/__/         \  /__/     
      \/__/         \/__/                    \/__/      

install

Installs nomad, consul, and caddyserver (router) together as a mini cluster running inside a single podman container.

Nomad jobs will run as podman containers on the VM itself, orchestrated by nomad, leveraging /run/podman/podman.sock.

The brilliant consul-template will be used as “glue” between consul and caddyserver – turning caddyserver into an always up-to-date reverse proxy router from incoming requests’ Server Name Indication (SNI) to running containers :)
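Conceptually, the glue is a consul-template template that re-renders a Caddyfile site block for each service in consul's catalog and reloads caddy when the catalog changes. A purely illustrative sketch, not HinD's actual template (the domain and layout here are assumptions):

```
{{ range services }}
{{ .Name }}.example.com {
  reverse_proxy {{ range service .Name }}{{ .Address }}:{{ .Port }} {{ end }}
}
{{ end }}
```

Each time a job registers or deregisters a consul service, consul-template re-renders the file, so caddy always routes `<service>.example.com` to the live container instances.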

Setup and run

This will “bootstrap” your cluster with a private, unique NOMAD_TOKEN, then `sudo podman run` a new container with the hind service in the background. (source)

curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh

Minimal requirements:

https

The ideal experience is that you point a dns wildcard at the IP address of the VM running your hind system.

This allows a CI/CD pipeline’s [deploy] stage to combine the [git group/organization + repository name + branch name] into a nice semantic DNS hostname for your webapps to run as and load from – and everything will “just work”.

For example, with a *.example.com DNS wildcard pointing to the VM where hind is running, https://myteam-my-repo-name-my-branch.example.com will “just work”.
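To sanity-check the wildcard, you can resolve a made-up subdomain and confirm it answers with the VM's IP. A small sketch (example.com is a placeholder for your domain):

```sh
# Compose a throwaway hostname under the wildcard domain; resolving it should
# return the hind VM's IP if the wildcard record is in place.
probe_host() {
  echo "hind-probe-$$.$1"
}
# e.g.: dig +short "$(probe_host example.com)"   # expect: your VM's IP
```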

We use caddy (which incorporates zerossl and Let’s Encrypt) to create single-host https certs on demand, as service discovery from consul announces new hostnames.
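Under the hood this leans on caddy's on-demand TLS feature. A minimal illustrative Caddyfile shape (the ask endpoint URL is a placeholder, not what HinD configures):

```
{
  on_demand_tls {
    # caddy queries this endpoint with ?domain=<hostname>;
    # an HTTP 200 response permits cert issuance for that name
    ask http://localhost:8000/check
  }
}

https:// {
  tls {
    on_demand
  }
}
```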

build locally - if desired (not required)

This is our Dockerfile

git clone https://github.com/internetarchive/hind.git
cd hind
sudo podman build --network=host -t ghcr.io/internetarchive/hind:main .

Setting up jobs

We suggest you use the same approach mentioned in nomad repo README.md which will ultimately use a templated project.nomad file.

Nicely Working Features

We use this in multiple places for nomad clusters at archive.org. We pair it with our fully templatized project.nomad.

Working nicely:

Nomad credentials

Get your nomad access credentials so you can run nomad status anywhere you have the nomad binary downloaded (including your home mac/laptop, etc.)

From a shell on your VM:

export NOMAD_ADDR=https://$(hostname -f)
export NOMAD_TOKEN=$(sudo podman run --rm --secret NOMAD_TOKEN,type=env hind sh -c 'echo $NOMAD_TOKEN')

Then, nomad status should work. (Download the nomad binary to the VM or your home dir if/as needed.)
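If the machine lacks the binary, hashicorp publishes zips per version and platform; a sketch for composing the download URL (1.7.7 is just an example version; check releases.hashicorp.com for the current one):

```sh
# Build the release URL for a given nomad version (linux/amd64 assumed).
nomad_zip_url() {
  local v="$1"
  echo "https://releases.hashicorp.com/nomad/${v}/nomad_${v}_linux_amd64.zip"
}
# e.g.: curl -sSO "$(nomad_zip_url 1.7.7)" && unzip -o nomad_*.zip && sudo mv nomad /usr/local/bin/
```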

You can also open the NOMAD_ADDR (above) in a browser and enter your NOMAD_TOKEN.

You can try a trivial website job spec from the cloned repo:

# you can manually set NOMAD_VAR_BASE_DOMAIN to your wildcard DNS domain name if different from
# the domain of your NOMAD_ADDR
export NOMAD_VAR_BASE_DOMAIN=$(echo "$NOMAD_ADDR" |cut -f2- -d.)
nomad run https://internetarchive.github.io/hind/etc/hello-world.hcl
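For a feel of what such a job spec contains, here is a hypothetical minimal job in the same spirit (the names, port, and image are illustrative; see the repo’s etc/hello-world.hcl for the real file):

```hcl
job "hello-world" {
  datacenters = ["dc1"]

  group "web" {
    network {
      port "http" { to = 80 }
    }

    # registering with consul is what makes caddy route the hostname
    service {
      name = "hello-world"
      port = "http"
    }

    task "app" {
      driver = "podman"
      config {
        image = "docker.io/library/nginx:alpine"
        ports = ["http"]
      }
    }
  }
}
```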

Optional ways to extend your setup

Here are a few environment variables you can pass to your initial install.sh run above, e.g.:

curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh -s -- -e ON_DEMAND_TLS_ASK=...

GUI, Monitoring, Interacting

nom-tunnel () {
  [ "$NOMAD_ADDR" = "" ] && echo "Please set NOMAD_ADDR environment variable first" && return
  local HOST=$(echo "$NOMAD_ADDR" |sed 's/^https*:\/\///')
  # forward localhost:8500 to consul's UI/API on the cluster VM
  ssh -fNA -L 8500:localhost:8500 $HOST
}

Add more Virtual Machines to make a HinD cluster

The process is very similar to setting up your first VM. This time, you pass in the first VM’s hostname (already in the cluster), copy 2 secrets, and run the installer. You essentially run the shell commands below on your 2nd (or 3rd, etc.) VM.

FIRST=vm1.example.com
# copy secrets from $FIRST to this VM
ssh $FIRST 'sudo podman run --rm --secret HIND_C,type=env hind sh -c "echo -n \$HIND_C"' |sudo podman secret create HIND_C -
ssh $FIRST 'sudo podman run --rm --secret HIND_N,type=env hind sh -c "echo -n \$HIND_N"' |sudo podman secret create HIND_N -

curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh -s -- -e FIRST=$FIRST
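Afterwards you can confirm the new VM joined by counting ready clients. A small helper sketch that parses `nomad node status` output:

```sh
# Count nomad clients whose status column (last field) is "ready".
ready_nodes() {
  awk '$NF == "ready" { n++ } END { print n + 0 }'
}
# e.g.: nomad node status | ready_nodes    # expect: the number of VMs in your cluster
```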

Inspiration

Docker-in-Docker (dind) and kind:

for caddyserver + consul-connect:

VM Administration

Here are a few helpful admin scripts we use at archive.org – some might be helpful for setting up your VM(s).

Problems?

If /run/podman/podman.sock is missing or unresponsive, (re)enable the podman socket:

sudo systemctl enable --now podman.socket

to persist across reboots:

echo '
net.core.netdev_max_backlog=30000
net.core.rmem_max=134217728
net.core.wmem_max=134217728' |sudo tee /etc/sysctl.d/90-tcp-memory.conf

and to apply the settings now, without a reboot:

sudo sysctl --system



# Miscellaneous
- client IP addresses will be in request header 'X-Forwarded-For' (per `caddy`)
- pop inside the HinD container:
```sh
sudo podman exec -it hind zsh
```

Maintenance:

Self-Signed or Internal CA

When you use Caddy’s tls internal directive, caddy automatically creates its own Certificate Authority (CA) with:

This happens automatically on first run. The root CA cert is stored at:

/pv/CERTS/pki/authorities/local/root.crt
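Before distributing it, you can confirm what you are handing out. A sketch using openssl (the path is the one above):

```sh
# Print the subject, issuer, and expiry date of a CA certificate.
ca_info() {
  openssl x509 -in "$1" -noout -subject -issuer -enddate
}
# e.g.: ca_info /pv/CERTS/pki/authorities/local/root.crt
```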

This is exactly how Let’s Encrypt works - you trust their root CA once (built into browsers), and any cert they sign “just works.”

What Devs Need To Do (One Time Setup)

  1. Get the root cert from your Caddy server:
    # On the Caddy VM
    cat /pv/CERTS/pki/authorities/local/root.crt
    
  2. Devs install it in their OS/browser:
    • macOS:
      sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain root.crt
      
    • Windows: Double-click root.crt → Install Certificate → Local Machine → Place in “Trusted Root Certification Authorities”
    • Linux (Chrome/Chromium):
      sudo cp root.crt /usr/local/share/ca-certificates/caddy-local.crt
      sudo update-ca-certificates
      
    • Firefox: Preferences → Privacy & Security → Certificates → View Certificates → Authorities → Import
  3. Done forever
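Devs can then verify the import worked against any app hostname behind caddy. A sketch (the hostname in the usage line is a placeholder):

```sh
# curl exits non-zero (CA verification failure) when the server's CA
# isn't trusted by the local trust store.
trust_check() {
  if curl -sS -o /dev/null "https://$1"; then
    echo "trusted"
  else
    echo "NOT trusted (did you import root.crt?)"
  fi
}
# e.g.: trust_check myteam-my-repo-name-my-branch.example.com
```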

Superior to clicking through certificate warnings, which:

The internal CA approach is the professional way to handle internal dev HTTPS. You give devs a Slack message with instructions; your devs install one cert.