www.garrlab.it

All the content of this website (HTML, CSS, images, etc.) is served by a lighttpd daemon running inside a container, reverse-proxied by a local Traefik container that terminates the TLS connections, which is in turn reached through an external HAProxy performing SNI-based routing. The container is built by a pipeline, triggered by pushes to our self-hosted GitLab instance, which takes care of the static-site-generation process, handled by Hugo.
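
Roughly speaking, a request travels along this chain (a simplified sketch of what is detailed in the sections below):

```
browser --HTTPS (TCP 443)--> border HAProxy     (SNI-based routing, TLS passed through)
        -------TCP---------> Traefik            (TLS termination, Let's Encrypt certificates)
        -------HTTP--------> lighttpd container (static files generated by Hugo)
```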

1 - How the content is written

The content of this website has been written mostly in Markdown files, using an IDE (in my case, VSCodium).

That set of files has been compiled with Hugo, which kindly gave us back a bunch of (static) HTML and CSS files.
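
Just to give an idea (a minimal sketch; the actual options and repo layout may differ), a plain Hugo build boils down to:

```
# run from the root of the site's git repo (where the Hugo configuration lives)
hugo --minify     # renders the Markdown sources into static HTML/CSS under ./public/
ls public/        # the generated site, ready to be served by any web server
```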

Hugo provides an embedded web server (for development purposes) supporting browser hot-reload via some websocket magic. So, while I was preparing this very content, I had a shell on my Linux box running `hugo server`, which kindly re-rendered the whole website in a matter of a couple of seconds and let my (local) browser reload the page, so that I could constantly check what was going on. In other words:

  1. edit a Markdown file (or a CSS one)

  2. press CTRL-S

  3. take a look at the browser window
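
When running locally, that loop boils down to something like this (the flags are just an example):

```
# run from the repo root; Hugo serves the site on http://localhost:1313
# and live-reloads the browser on every save (CTRL-S)
hugo server -D    # -D also renders drafts; drop it to preview published content only
```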

When I am satisfied with the new content, I commit the modification to my local git repo and then push that commit to our self-hosted GitLab service.
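
Nothing fancy there (remote and branch names below are just placeholders):

```
git add content/                                  # or whatever was touched
git commit -m "content: update the about page"    # hypothetical commit message
git push origin main                              # off it goes to our self-hosted GitLab
```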

2 - How the content is shipped on-line

As soon as GitLab receives that push, a pipeline gets triggered (I know… I know… we should distinguish between `main` and `devel`, but that's another story!)

The pipeline:

  • fires up a container with a pre-installed Hugo instance;

  • gives that container access to the sources of the website (the related git repo, indeed);

  • lets the container "compile" the sources, storing the results… somewhere;

  • fires up a new container with a lighttpd daemon;

  • adds the output of the compilation process to that new container;

  • assembles the resulting container image.

Most of the above happens thanks to Buildah, Podman and good old Make.
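
To give a flavour of it, here is a hypothetical sketch of those steps with Podman and Buildah (not our actual Makefile; image names and paths are assumptions):

```
# 1. "compile" the sources: run Hugo inside a throw-away container that sees the repo
podman run --rm -v "$PWD":/src -w /src docker.io/klakegg/hugo:latest --minify

# 2. assemble the final image: start from a small base, add lighttpd, copy the rendered site in
ctr=$(buildah from docker.io/library/alpine:latest)
buildah run "$ctr" -- apk add --no-cache lighttpd
buildah copy "$ctr" ./public /var/www/localhost/htdocs    # Alpine's default lighttpd docroot
buildah config --port 80 --cmd "lighttpd -D -f /etc/lighttpd/lighttpd.conf" "$ctr"
buildah commit "$ctr" www-garrlab-it:latest
```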

The resulting container image (which, by the way, has been "filled" with references to the underlying commit short hash and date) is pushed to the GitLab registry.
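
Within a GitLab CI job, that part looks more or less like this (a sketch relying on GitLab's predefined CI variables; the image name is hypothetical):

```
buildah login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
buildah tag www-garrlab-it:latest "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
buildah push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```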

In order to publish it on-line, a manual "click" is performed within the Portainer web UI, so that the related "stack" gets reloaded with the updated container image.
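
That manual click is, roughly, the Portainer equivalent of doing the following on the docker host (the service name is hypothetical):

```
docker compose pull www     # fetch the freshly pushed image from the GitLab registry
docker compose up -d www    # recreate the container with the updated image
```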

3 - How the content is served to browsers

When your browser sends an HTTPS request to www.garrlab.it, the TCP connection on port 443 arrives at our border firewall, where an HAProxy extracts the SNI and, based on it, relays the TCP connection to the GARRLab main application server (let's call it vm-garristini), a docker host running several containers, including the one assembled above.
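
The SNI (Server Name Indication) is the hostname your browser sends inside the TLS ClientHello, before any HTTP request is exchanged; you can see it being set explicitly with openssl, for example:

```
# ask for the certificate of www.garrlab.it, sending the SNI explicitly
openssl s_client -connect www.garrlab.it:443 -servername www.garrlab.it </dev/null
```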

On that VM, the connection is taken care of by Traefik, our cloud reverse proxy. Traefik properly manages the TLS connection (BTW: while also ensuring the proper and automatic renewal of the various Let's Encrypt certificates…) and extracts the requested URL. At this stage, finally, Traefik sees that the request is for this very website and proxies it (via plain HTTP) to the container sitting alongside, the one assembled above.
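
A quick way to observe that from the outside: the certificate presented for www.garrlab.it should indeed be a Let's Encrypt one, for example:

```
# -v prints the TLS handshake details (issuer, subject, expiry) on stderr
curl -vI https://www.garrlab.it 2>&1 | grep -i -E 'issuer:|subject:|expire'
```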

That’s it!

Are you interested in the above process? Would you like to improve it? Would you like to take it a step further (like auto-deploy)?

Join us, and let us know!

P.S.: if you're still curious about other technicalities from us, feel free to take a look at the last technical problem we just solved!