# HinD: Hashistack-IN-Docker (single container with nomad + consul + caddy)
Installs `nomad`, `consul`, and `caddyserver` (router) together as a mini cluster running inside a single `podman` container. Nomad jobs will run as `podman` containers on the VM itself, orchestrated by `nomad`, leveraging `/run/podman/podman.sock`.
The brilliant `consul-template` will be used as “glue” between `consul` and `caddyserver` – turning `caddyserver` into an always up-to-date reverse proxy that routes incoming requests’ Server Name Indication (SNI) to running containers :)
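To make the “glue” concrete, here is a minimal sketch of the idea, assuming hypothetical template/output paths and an `example.com` placeholder domain (hind ships its own template; this is not it): `consul-template` re-renders a Caddy config snippet whenever `consul`'s service catalog changes, then hot-reloads `caddy`.

```sh
# Hypothetical sketch only: paths, domain, and template are illustrative, not hind's shipped config.
cat > /tmp/services.caddy.tmpl <<'EOF'
{{ range services }}
{{ .Name }}.example.com {
    reverse_proxy {{ range service .Name }}{{ .Address }}:{{ .Port }} {{ end }}
}
{{ end }}
EOF

# Re-render on every catalog change, then tell caddy to reload its config.
consul-template \
  -template '/tmp/services.caddy.tmpl:/etc/caddy/services.caddy:caddy reload --config /etc/caddy/Caddyfile'
```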
Installing will “bootstrap” your cluster with a private, unique `NOMAD_TOKEN`, and `sudo podman run` a new container with the `hind` service in the background ([source](https://internetarchive.github.io/hind/install.sh)):

```sh
curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh
```
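Once the installer finishes, a quick sanity check (the service container is named `hind`, as the `podman exec` example near the end of this doc assumes):

```sh
# the hind container should now be running in the background
sudo podman ps --filter name=hind

# follow its bootstrap/startup logs
sudo podman logs -f hind
```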
Also make sure the needed ports are open from the VM to the world (`ssh` into your VM and use `ferm` or your preferred firewall tooling, etc.): at minimum, `caddy` needs 80 (HTTP) and 443 (HTTPS) to serve traffic and complete cert challenges.
The ideal experience is to point a DNS wildcard at the IP address of the VM running your `hind` system. This allows automatically-created hostnames from a CI/CD pipeline's [deploy] stage to combine [git group/organization + repository name + branch name] into a nice semantic DNS hostname for your webapps to run as and load from, and everything will “just work”. For example, a `*.example.com` DNS wildcard pointing to the VM where `hind` is running will allow https://myteam-my-repo-name-my-branch.example.com to “just work”.
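A quick way to verify the wildcard is live (substitute your real domain for the `example.com` placeholder): any name under it should resolve to your VM's IP.

```sh
# both of these should print the same address: your VM's IP
dig +short anything.example.com
dig +short myteam-my-repo-name-my-branch.example.com
```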
We use `caddy` (which incorporates `zerossl` and Let's Encrypt) to create single-host HTTPS certs on demand, as service discovery from `consul` announces new hostnames.
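If you want to see this happening, one way (the hostname is a placeholder; the `grep` pattern matches `curl`'s verbose TLS output) is to inspect the cert `caddy` serves for a freshly announced hostname:

```sh
# print the subject/issuer of the on-demand cert (placeholder hostname)
curl -sv https://myteam-my-repo-name-my-branch.example.com -o /dev/null 2>&1 | grep -E 'subject:|issuer:'
```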
This is our `Dockerfile`. To build the container image locally:

```sh
git clone https://github.com/internetarchive/hind.git
cd hind
sudo podman build --network=host -t ghcr.io/internetarchive/hind:main .
```
For CI/CD, we suggest you use the same approach mentioned in the nomad repo's README.md, which ultimately uses a templated `project.nomad` file. We use this in multiple places for nomad clusters at archive.org, pairing it with our fully templatized `project.nomad`, and it works nicely.
Get your nomad access credentials so you can run `nomad status` anywhere that you have the `nomad` binary downloaded (including a home mac/laptop, etc.). From a shell on your VM:

```sh
export NOMAD_ADDR=https://$(hostname -f)
export NOMAD_TOKEN=$(sudo podman run --rm --secret NOMAD_TOKEN,type=env hind sh -c 'echo $NOMAD_TOKEN')
```

Then `nomad status` should work. (Download the `nomad` binary to the VM or your home dir if/as needed.)
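For example, from a laptop (the hostname is a placeholder; paste the token value you extracted on the VM):

```sh
# on your mac/laptop, with the nomad binary installed locally
export NOMAD_ADDR=https://vm1.example.com   # your VM's FQDN
export NOMAD_TOKEN=...                      # the token echoed by the podman command above

nomad status        # lists jobs (or reports "No running jobs")
nomad node status   # your VM should appear as a ready client node
```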
You can also open the `NOMAD_ADDR` (above) in a browser and enter your `NOMAD_TOKEN`.
You can try a trivial website job spec from the cloned repo:

```sh
# you can manually set NOMAD_VAR_BASE_DOMAIN to your wildcard DNS domain name if different from
# the domain of your NOMAD_ADDR
export NOMAD_VAR_BASE_DOMAIN=$(echo "$NOMAD_ADDR" |cut -f2- -d.)
nomad run https://internetarchive.github.io/hind/etc/hello-world.hcl
```
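Assuming the job name in that spec is `hello-world` (check the HCL if unsure), you can watch it deploy and then request it through your wildcard domain:

```sh
# watch the allocation come up (job name assumed from the spec's filename)
nomad job status hello-world

# then fetch it via the hostname registered under your wildcard DNS
curl -sI https://hello-world.$NOMAD_VAR_BASE_DOMAIN
```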
Here are a few environment variables you can pass to your initial `install.sh` run above (a combined example follows the list), e.g.:

```sh
curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh -s -- -e REVERSE_PROXY=...
```
- `-e TRUSTED_PROXIES=[CIDR IP RANGE]` - range of proxies to trust for `X-Forwarded-*` headers; otherwise defaults to `private_ranges`
- `-e UNKNOWN_SERVICE_404=[URL]`
- `-e NOMAD_ADDR_EXTRA=[HOSTNAME]`
- `-e REVERSE_PROXY=[HOSTNAME]:[PORT]` - `reverse_proxy` mappings to internal ports (CSV format). This is helpful if you have additional backends you want proxy rules added into the Caddy config. Examples:
  - `-e REVERSE_PROXY=example.com:81` - make https://example.com & http://example.com (with auto-upgrade) reverse proxy to localhost:81
  - `-e REVERSE_PROXY=https://example.com:81` - make https://example.com reverse proxy to localhost:81
  - `-e REVERSE_PROXY=http://example.com:81` - make http://example.com reverse proxy to localhost:81
  - `-e REVERSE_PROXY=https://example.com:82,http://example.com:82` - make https://example.com reverse proxy to localhost:82; http://example.com reverse proxy to localhost:82 (no auto-upgrade)
- `-e ON_DEMAND_TLS_ASK=[URL]` - for `on_demand_tls`, URL to use to respond with 200/400 status codes....
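A hypothetical combined run (values are placeholders; this assumes the installer accepts repeated `-e` flags):

```sh
# hypothetical values: trust a private proxy range and add one extra backend mapping
curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh -s -- \
  -e TRUSTED_PROXIES=10.0.0.0/8 \
  -e REVERSE_PROXY=grafana.example.com:3000
```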
You can customize the `podman run` invocation, `ssh` into the container, set up customized deploys, and more.

Here's an `ssh` tunnel through your VM so that you can see `consul` in a browser, eg:

```sh
nom-tunnel () {
  [ "$NOMAD_ADDR" = "" ] && echo "Please set NOMAD_ADDR environment variable first" && return
  local HOST=$(echo "$NOMAD_ADDR" |sed 's/^https*:\/\///')
  ssh -fNA -L 8500:localhost:8500 $HOST
}
```
Run `nom-tunnel` and you can see with a browser:

- `consul`: http://localhost:8500/

# Adding more VMs

The process is very similar to when you set up your first VM. This time, you pass in the first VM's hostname (already in the cluster), copy 2 secrets, and run the installer. You essentially run the shell commands below on your 2nd (or 3rd, etc.) VM:
```sh
FIRST=vm1.example.com

# copy secrets from $FIRST to this VM
ssh $FIRST 'sudo podman run --rm --secret HIND_C,type=env hind sh -c "echo -n \$HIND_C"' |sudo podman secret create HIND_C -
ssh $FIRST 'sudo podman run --rm --secret HIND_N,type=env hind sh -c "echo -n \$HIND_N"' |sudo podman secret create HIND_N -

curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh -s -- -e FIRST=$FIRST
```
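Once the installer completes, the new VM should show up alongside the first:

```sh
# from anywhere your NOMAD_ADDR/NOMAD_TOKEN work: both VMs should list as ready client nodes
nomad node status
```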
Related setups that informed this project: Docker-in-Docker (dind) and `kind`, plus prior work combining `caddyserver` with `consul-connect`.
Here are a few admin scripts we use at archive.org; some might be helpful for setting up your VM(s):

- we use `ferm`, and here you can see how we open the minimum number of HTTP/TCP/UDP ports we need to run
- how we share a `/pv/` disk across our nomad VMs (when the cluster is 2+ VMs)

# Troubleshooting

Some linux distros (eg: `ubuntu` `focal`) may not enable `podman.socket` by default. If bootstrapping fails, on linux, you can run:

```sh
sudo systemctl enable --now podman.socket
```
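To verify the socket is live (the REST path follows the podman docs' examples; the API version segment can vary by release):

```sh
systemctl status podman.socket

# quick API ping over the unix socket
curl -s --unix-socket /run/podman/podman.sock http://d/v4.0.0/libpod/info | jq .host.hostname
```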
If the initial `podman run` is not completing, check how recent your `podman` version is. The `nomad` binary inside the setup container can segfault due to a perms change. You can either upgrade your `podman` version or try adding this `install.sh` CLI option:

`--security-opt seccomp=unconfined`
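In full, that looks like re-running the installer from above with the option appended:

```sh
# same install command as before, plus the seccomp workaround
curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh -s -- --security-opt seccomp=unconfined
```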
Does `docker push` repeatedly fail with deep “running out of memory” errors? Try:

```sh
sysctl net.core.netdev_max_backlog=30000
sysctl net.core.rmem_max=134217728
sysctl net.core.wmem_max=134217728

echo '
net.core.netdev_max_backlog=30000
net.core.rmem_max=134217728
net.core.wmem_max=134217728' |sudo tee /etc/sysctl.d/90-tcp-memory.conf
```
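To confirm the new values took effect:

```sh
# read back the live kernel settings
sysctl net.core.netdev_max_backlog net.core.rmem_max net.core.wmem_max
```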
# Miscellaneous

- client IP addresses will be in request header `X-Forwarded-For` (per `caddy`)
- pop inside the HinD container:
  ```sh
  sudo podman exec -it hind zsh
  ```
- get list of `consul` services:
  ```sh
  wget -qO- 'localhost:8500/v1/catalog/services?tags=1' | jq .
  ```
- get `caddy` config:
  ```sh
  wget -qO- localhost:2019/config/ | jq .
  ```