# Hashistack-IN-Docker (single container with nomad + consul + caddy)

Installs nomad, consul, and caddyserver (router) together as a mini cluster running inside a single podman container.
Nomad jobs will run as podman containers on the VM itself, orchestrated by nomad, leveraging /run/podman/podman.sock.
The brilliant consul-template is used as “glue” between consul and caddyserver, turning caddyserver into an always up-to-date reverse proxy that routes incoming requests, by Server Name Indication (SNI), to running containers :)
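To make the “glue” idea concrete, here is a hypothetical consul-template fragment (not the template hind actually ships; the `.example.com` suffix is an assumption) that renders one caddy site per consul service:

```
{{/* hypothetical sketch: one HTTPS site + upstream list per consul service */}}
{{ range services }}
https://{{ .Name }}.example.com {
    reverse_proxy {{ range service .Name }}{{ .Address }}:{{ .Port }} {{ end }}
}
{{ end }}
```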
This will “bootstrap” your cluster with a private, unique NOMAD_TOKEN, and `sudo podman run` a new container with the hind service into the background:

```sh
curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh
```
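Once the installer returns, you can confirm the service container is up (it is named `hind`, as used by later commands in this doc):

```sh
# the installer backgrounds a single container running the whole mini cluster
sudo podman ps --filter name=hind
```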
Once you can access the VM (ssh into it, etc.), make sure the needed ports are open from the VM to the world (see the ferm example below for the minimal set of HTTP/TCP/UDP ports we open).
The ideal experience is to point a DNS wildcard at the IP address of the VM running your hind system.
This allows automatically-created hostnames from a CI/CD pipeline’s [deploy] stage to use [git group/organization + repository name + branch name] to create a nice semantic DNS hostname for your webapps to run as and load from – and everything will “just work”.
For example, *.example.com DNS wildcard pointing to the VM where hind is running, will allow https://myteam-my-repo-name-my-branch.example.com to “just work”.
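For illustration only (hind’s exact name-mangling rules may differ), the derivation amounts to something like:

```sh
# hypothetical sketch of [group + repo + branch] -> hostname
GROUP=myteam REPO=my-repo-name BRANCH=my-branch
echo "https://${GROUP}-${REPO}-${BRANCH}.example.com"
# => https://myteam-my-repo-name-my-branch.example.com
```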
We use caddy (which incorporates ZeroSSL and Let’s Encrypt) to create single-host HTTPS certs on demand, as service discovery from consul announces new hostnames.
This is our Dockerfile. To build the image yourself:

```sh
git clone https://github.com/internetarchive/hind.git
cd hind
sudo podman build --network=host -t ghcr.io/internetarchive/hind:main .
```
To set up jobs, we suggest the approach described in the nomad repo README.md, which ultimately uses a templated project.nomad file. We use this in multiple places for nomad clusters at archive.org, paired with our fully templatized project.nomad, and it works nicely.
Get your nomad access credentials so you can run `nomad status` anywhere you have the nomad binary downloaded (including a home mac/laptop, etc.).

From a shell on your VM:

```sh
export NOMAD_ADDR=https://$(hostname -f)
export NOMAD_TOKEN=$(sudo podman run --rm --secret NOMAD_TOKEN,type=env hind sh -c 'echo $NOMAD_TOKEN')
```
Then, `nomad status` should work. (Download the nomad binary to the VM or your home dir if/as needed.)

You can also open the NOMAD_ADDR (above) in a browser and enter your NOMAD_TOKEN.
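For example, from a laptop (both values below are placeholders; use what you captured on the VM):

```sh
export NOMAD_ADDR=https://vm1.example.com   # your VM's hostname
export NOMAD_TOKEN=...                      # token echoed on the VM above
nomad status        # list jobs
nomad node status   # list client nodes
```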
You can try a trivial website job spec from the cloned repo:
```sh
# you can manually set NOMAD_VAR_BASE_DOMAIN to your wildcard DNS domain name
# if different from the domain of your NOMAD_ADDR
export NOMAD_VAR_BASE_DOMAIN=$(echo "$NOMAD_ADDR" |cut -f2- -d.)
nomad run https://internetarchive.github.io/hind/etc/hello-world.hcl
```
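If the job deploys cleanly, and assuming the job name matches the spec filename (`hello-world`) and your wildcard DNS is in place, you can verify it end-to-end:

```sh
nomad status hello-world   # allocation should reach status "running"
# fetch just the response status line over HTTPS (hostname scheme is an assumption):
curl -sI "https://hello-world.${NOMAD_VAR_BASE_DOMAIN}" | head -1
```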
Here are a few environment variables you can pass to your initial install.sh run above, e.g.:

```sh
curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh -s -- -e ON_DEMAND_TLS_ASK=...
```
- `-e TRUSTED_PROXIES=[CIDR IP RANGE]` – range of IPs allowed to set `X-Forwarded-*` headers; otherwise defaults to `private_ranges` (more info)
- `-e NOMAD_ADDR_EXTRA=[HOSTNAME]`
- `-e ON_DEMAND_TLS_ASK=[URL]` – caddy on_demand_tls; URL to use to respond with 200/400 status codes
- `-e CERTS_SELF_SIGNED=true` – caddy `tls internal`; this will make self-signed certs, with caddy creating an internal Certificate Authority (CA). @see #self-signed-or-internal-ca below
- `-e ACME_DNS=true`
- `-e CLIENT_ONLY_NODE=true`
- ...
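For instance, several of these can be combined in a single install run (the values here are placeholders):

```sh
curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh -s -- \
  -e TRUSTED_PROXIES=10.0.0.0/8 \
  -e CERTS_SELF_SIGNED=true
```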
These options get passed through to the container’s `podman run` invocation in install.sh.

You can ssh tunnel through your VM so that you can see consul in a browser, e.g.:

```sh
nom-tunnel () {
  [ "$NOMAD_ADDR" = "" ] && echo "Please set NOMAD_ADDR environment variable first" && return
  local HOST=$(echo "$NOMAD_ADDR" |sed 's/^https*:\/\///')
  ssh -fNA -L 8500:localhost:8500 $HOST
}
```
Run `nom-tunnel`, and you can then see consul with a browser at http://localhost:8500/

Adding more VMs to the cluster: the process is very similar to when you set up your first VM. This time, you pass in the first VM’s hostname (already in the cluster), copy 2 secrets, and run the installer. You essentially run the shell commands below on your 2nd (or 3rd, etc.) VM.
```sh
FIRST=vm1.example.com

# copy secrets from $FIRST to this VM
ssh $FIRST 'sudo podman run --rm --secret HIND_C,type=env hind sh -c "echo -n \$HIND_C"' |sudo podman secret create HIND_C -
ssh $FIRST 'sudo podman run --rm --secret HIND_N,type=env hind sh -c "echo -n \$HIND_N"' |sudo podman secret create HIND_N -

curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh -s -- -e FIRST=$FIRST
```
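Once the installer finishes on the new VM, it should show up in the cluster; from any machine with your nomad credentials exported:

```sh
nomad node status   # expect one additional node, status "ready"
```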
Related inspiration: Docker-in-Docker (dind) and kind; see also prior work pairing caddyserver + consul-connect.
Here are a few admin scripts we use at archive.org – some might be helpful for setting up your VM(s):
- ferm – here you can see how we open the minimum number of HTTP/TCP/UDP ports we need to run.
- how we handle the `/pv/` disk across our nomad VMs (when the cluster is 2+ VMs).
- some older distros (e.g. ubuntu focal) may not enable podman.socket. If bootstrapping fails, on linux, you can run:

```sh
sudo systemctl enable --now podman.socket
```
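To double-check the socket afterwards (plain systemd and filesystem checks, nothing hind-specific):

```sh
systemctl status podman.socket --no-pager
ls -l /run/podman/podman.sock   # the socket nomad jobs are orchestrated through
```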
If `podman run` is not completing, check your podman version to see how recent it is. The nomad binary inside the setup container can segfault due to a perms change. You can either upgrade your podman version or try adding this install.sh CLI option:

```sh
--security-opt seccomp=unconfined
```
Does `docker push` repeatedly fail with deep “running out of memory” errors? Try:
```sh
sudo sysctl net.core.netdev_max_backlog=30000
sudo sysctl net.core.rmem_max=134217728
sudo sysctl net.core.wmem_max=134217728

# persist the settings across reboots:
echo 'net.core.netdev_max_backlog=30000
net.core.rmem_max=134217728
net.core.wmem_max=134217728' |sudo tee /etc/sysctl.d/90-tcp-memory.conf
```
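To apply the persisted file without a reboot (standard sysctl usage):

```sh
sudo sysctl --system   # re-read all /etc/sysctl.d/*.conf files
```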
# Miscellaneous
- client IP addresses will be in request header 'X-Forwarded-For' (per `caddy`)
- pop inside the HinD container:
```sh
sudo podman exec -it hind zsh
```
- consul services:
```sh
wget -qO- 'localhost:8500/v1/catalog/services?tags=1' | jq .
```
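You can also drill into a single service from the catalog (the service name here is a placeholder):

```sh
# list where one service's instances are registered
wget -qO- 'localhost:8500/v1/catalog/service/hello-world' | jq '.[] | {Node, ServiceAddress, ServicePort}'
```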
- caddy config:
```sh
wget -qO- localhost:2019/config/ | jq .
```
If podman cannot allocate locks for new containers, see the `num_locks` part in install.sh and consider increasing it, or opening a GitHub issue. Renumbering podman’s existing locks can also help:

```sh
# https://docs.podman.io/en/latest/markdown/podman-system-renumber.1.html
podman -r system renumber
```
Similarly, if jobs hit the container’s process-count ceiling, see the `--pids-limit` CLI arg part in install.sh and consider increasing it, or opening a GitHub issue.

```sh
# check HinD container's current pids limit:
cat /sys/fs/cgroup/$(podman inspect --format '{{.State.CgroupPath}}' hind)/pids.max
```
# Self-signed or internal CA
With `-e CERTS_SELF_SIGNED=true`, the caddy config uses `tls internal`:

```
https://*.example.com {
	# use caddy's internal certificate authority -- no ACME challenges needed
	tls internal
	reverse_proxy ...
}
```
When you use caddy’s `tls internal`, caddy automatically creates its own Certificate Authority (CA) on first run. The root CA cert is stored at:

```
/pv/CERTS/pki/authorities/local/root.crt
```
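To sanity-check the generated root cert (standard openssl usage, nothing hind-specific):

```sh
# print the CA cert's subject and validity window
openssl x509 -in /pv/CERTS/pki/authorities/local/root.crt -noout -subject -dates
```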
This is exactly how Let’s Encrypt works - you trust their root CA once (built into browsers), and any cert they sign “just works.”
To trust this CA on your own machines, grab the root cert:

```sh
# On the Caddy VM
cat /pv/CERTS/pki/authorities/local/root.crt
```
macOS:
```sh
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain root.crt
```
Windows: Double-click root.crt → Install Certificate → Local Machine → Place in “Trusted Root Certification Authorities”

Linux (Chrome/Chromium):
```sh
sudo cp root.crt /usr/local/share/ca-certificates/caddy-local.crt
sudo update-ca-certificates
```
Firefox: Preferences → Privacy & Security → Certificates → View Certificates → Authorities → Import
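After importing the root cert, HTTPS to any hind-routed app should verify cleanly (hostname is a placeholder):

```sh
curl -sI https://hello-world.example.com | head -1   # should succeed without -k/--insecure
```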
This is superior to clicking through certificate warnings. The internal CA approach is the professional way to handle internal dev HTTPS: send your devs a Slack message with instructions, and they install one cert.