Famous last words

“It’s done!”

Hopefully that statement doesn’t come back to bite me in a tender area. Everything is migrated. After too many WxSim data problems and forum issues, things seem to have settled down and systems seem to be running smoothly. It’s IT, so it’s bound to break again at some point, but for now it’s quiet and I’m looking forward to some gentle monitoring activities rather than a hardcore implementation project.

This journey started months ago with a feeling that I could do things better, and cheaper. I guess that suggests that my IT Architect background is still active in my brain. I knew where the world had gone - containerisation - but I’d never really been there. However, I relish a challenge so I set out to singlehandedly do what would normally be done by a team. I’ve learned a lot of new skills in the last few months. It’s not been easy, but I’ve had fun doing it. I suspect I’ve not made it as much fun for you though! I apologise for making things difficult along the way.

The journey has completely changed my computing environment. Here’s a brief overview of how I progressed and where I ended up…


Servers

I started out with two ‘big’ dedicated servers. Each had a 12-core (24-thread) CPU, 128GB RAM and 2×7TB disks. They were expensive to run and my apps were poorly balanced between them, with 80% on one and 20% on the other. I’d intended it to be more balanced, but I started out from the wrong place and never got to where I hoped to be.

In my first migration iteration I moved to 3 cloud servers, each with 4 cores, 8GB RAM and a 160GB disk. These worked remarkably well, but ultimately proved to have insufficient disk space to operate reliably without me tidying up the disks every day. I then moved to 3 bigger cloud servers: 8 cores, 16GB RAM and a 250GB disk. This gave me more room, but storage space was still an issue (see Storage below!), so I finally moved to 3 dedicated servers: 4 cores/8 threads, 64GB RAM and 2×512GB NVMe SSDs. These are fairly beefy, but I got them in the Hetzner auction for little more per month than the previous 3 cloud servers.

Operating System

My starting point here was Debian 11. I dabbled briefly with Ubuntu, but found it too feature-rich. I’m trying to run servers with minimal software and Ubuntu had a bit too much in the base build. There’s probably a minimal Ubuntu version I could have used, but ultimately I ended up back on Debian, now at 12.5. For the cloud servers I also learned a new skill…I created my own server snapshots with the common components I needed built in, so I could deploy a server exactly how I wanted it. I also learned how to script the creation and configuration of servers using Terraform. It was weird to run a script and 30 minutes later have a configured, containerised environment running on 3 cloud servers.
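As a sketch of what that Terraform looks like, something along these lines creates one Hetzner Cloud server from a custom snapshot (the names, server type and location here are illustrative, not my actual configuration):

```hcl
terraform {
  required_providers {
    hcloud = {
      source = "hetznercloud/hcloud"
    }
  }
}

provider "hcloud" {
  token = var.hcloud_token
}

# One of the three cloud servers, built from a custom snapshot
# that already contains the common components
resource "hcloud_server" "node1" {
  name        = "node1"                 # placeholder name
  server_type = "cx41"                  # illustrative 4 vCPU / 16GB type
  image       = "my-debian-snapshot"    # the pre-built custom snapshot
  location    = "fsn1"
  ssh_keys    = ["my-key"]
}
```

Run `terraform apply` three times over (or use `count = 3`) and the servers appear without anyone clicking through a web console.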


Containerisation

My starting point was Docker. Discourse runs on Docker, but that was all I used it for. I quickly realised that Docker on its own wasn’t going to get me where I wanted to be. So I experimented with Kubernetes (K8s), which proved way too complex for me. It’s designed for enterprises and has all the components you need for a 1,000-server farm, but that’s not my world. I then tried K3s, a cut-down version of Kubernetes. I made more progress, but it was still a step too far for what I wanted.

I finally settled on Docker Swarm. I’d probably have started out with this, but I’d read comments about support for it being dropped. Eventually I realised that referred to the separate ‘classic’ Docker Swarm product…what I’m using is Swarm mode, built into Docker. Swarm allows me to deploy containers across three servers and even run multiple copies of a container on all three servers at the same time (although this doesn’t make sense in all cases). It also handles server failures, so if a container is running on server 1 and server 1 crashes, Swarm will restart the container on another server. This had storage implications though…see below!
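For anyone curious, a minimal Swarm stack file along these lines (illustrative, not one of my real stacks) is all it takes to get a service replicated across the cluster, with Swarm rescheduling it if a node dies:

```yaml
# stack.yml - illustrative sketch
version: "3.8"
services:
  web:
    image: php:8.2-apache
    deploy:
      replicas: 3                # one copy per server
      restart_policy:
        condition: on-failure    # restart/reschedule on crash
    ports:
      - "8080:80"                # published on Swarm's routing mesh
```

Deployed with `docker stack deploy -c stack.yml web`.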

Containers were also new to me. I’ve had to learn how to deploy ‘standard’ containers, e.g. Apache+PHP, and also build my own, e.g. Apache+PHP+Perl+OpenGrADS. This took me into the territory of GitHub and GitLab. My own containers are hosted on GitLab, and if I update them the Swarm system can download new container versions automatically from GitLab (or not automatically, if that’s my choice).
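A custom image of that sort starts from a Dockerfile along these lines (a sketch only - the real OpenGrADS install involves rather more steps than shown here):

```dockerfile
# Start from the standard Apache+PHP image and layer extras on top
FROM php:8.2-apache

# Add Perl (package selection here is illustrative)
RUN apt-get update \
 && apt-get install -y --no-install-recommends perl \
 && rm -rf /var/lib/apt/lists/*

# ...OpenGrADS and site-specific configuration would be added here...
```

Built and pushed with something like `docker build -t registry.gitlab.com/<group>/<image>:latest .` followed by `docker push`, after which the Swarm nodes can pull it like any other image.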


Storage

I started off using the standard disks in the dedicated servers, formatted as Linux system partitions. They were huge - 7TB of RAID1 when I needed less than 1TB. Big, but a waste of space. With K8s/K3s/Swarm there was an additional need: persistent storage. Simply put, a container has only internal storage where it can conduct its own operations, so for example a web server might use some internal disk as a cache. If the container is restarted it recreates an empty cache. So where do the web files (HTML, PHP, etc.) go? They go on volumes. These can be managed by Swarm and are retained from one run of the container to the next, i.e. they are persistent. I wanted to go a step further…if a container started on server 1, and server 1 then died and the container was moved to server 2, I needed the same persistent files available, in the same directory structure, on server 2 so that things could continue (more or less) from where they left off.
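As a minimal sketch (names and paths are illustrative), a named volume in a stack file is what keeps the web files across container restarts:

```yaml
# Illustrative fragment: the web files live in a named volume,
# not inside the container, so restarts don't lose them
services:
  web:
    image: php:8.2-apache
    volumes:
      - site-data:/var/www/html   # persistent across container runs
volumes:
  site-data:
```

A plain local volume like this only persists on the server it was created on, though - which is exactly why the same files needed to exist on every server.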

This brought a requirement for replicated file storage. Replicated means that all files exist on each server. I started by looking at GlusterFS which I’d used in a Raspberry Pi cluster. It worked, but it had previously been supported by Red Hat and they’d dropped support. Releases seemed to be drying up. I didn’t want to implement something that might go end of life in the near future!

I then found SeaweedFS, which seemed to be supported and did what I wanted. I set it up for the two cloud server environments. It initially worked well, but ultimately wasn’t suited to running my production cluster. Unfortunately, it had a way of operating that liked to consume disk space…it never deleted anything…any data writes were classed as appends, i.e. it put a new copy of the file onto disk and left the previous version available but marked as old. It did tidy up from time to time, but with very limited disk space this meant always living too close to the edge. On a number of occasions I effectively ran out of disk space, which led to file corruption and outages whilst I fixed things.

Back to the drawing board…I reviewed my previous notes and went back to look at Ceph. I’d originally discounted it for two reasons: firstly, it seemed to need a minimum of 4 servers, and secondly, it required dedicated disks for its storage. Cloud servers (at least the ones I was using) could only have a single disk. A deeper read of the documentation suggested that it would run happily on just 3 servers, as long as I accepted that there might be some limitations. Then I looked at the Hetzner server auction and found I could replace my 3 cloud servers with three much better spec’d dedicated servers - with two NVMe disks - for little more than the cloud servers cost. That was it…time to go back to dedicated servers and deploy Ceph. It’s working well (so far!)
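I won’t walk through the whole Ceph setup here, but by way of illustration, with CephFS the end result on each server is the same replicated filesystem mounted at the same path - e.g. an /etc/fstab entry like this (hostnames and key path are placeholders):

```
# Mount the shared CephFS on every server so containers see the same
# files wherever Swarm schedules them (illustrative names only)
node1,node2,node3:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0
```

Swarm volumes (or bind mounts) pointing under /mnt/cephfs are then identical on all three servers, so a rescheduled container picks up exactly where it left off.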


Database

This has been a simpler journey. I started out running MariaDB v10, which worked well, but only on one server. I could have run MariaDB on one server and had all containers on all servers access it, but if that server crashed I’d have a single point of failure. I briefly tried running MariaDB in containers on each server, but that wasn’t ideal.

A bit of digging later turned up Galera. It’s a layer on top of MariaDB that lets you run a MariaDB cluster, i.e. run the database on three servers which then talk amongst themselves to make sure that all servers have an identical copy of the database. In my setup one server acts as ‘master’ for all database updates, and Galera gracefully handles server crashes: if the master dies then another server takes over as master. You can read from any server, e.g. reading from the local database will be quicker than reading over the network. I’ve also got another layer above Galera - Galera Load Balancer (GLB). This runs on each server and applications talk to it rather than to the database directly. GLB knows the status of all servers, so it routes reads and writes appropriately and the applications don’t need to worry about switching database servers.
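The key Galera settings live in the MariaDB configuration; a sketch looks like this (server names and the cluster name are placeholders, not my real values):

```ini
# Galera section of a MariaDB server config - illustrative only
[galera]
wsrep_on                 = ON
wsrep_provider           = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name       = "my-cluster"
wsrep_cluster_address    = "gcomm://node1,node2,node3"
# Settings Galera requires for replication to work correctly
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
```

With that on all three servers, the first node bootstraps the cluster and the other two join it via the gcomm:// address list.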

Interestingly, the database shows one of the advantages of using containers. One of the apps I use can’t (currently) work with MariaDB v11, so the container that runs that app creates its own MariaDB v10 database for its own exclusive use.
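As an illustrative sketch (image names and credentials are placeholders), the app’s stack simply carries its own pinned database container alongside it:

```yaml
# Illustrative: an app stack bundling its own MariaDB v10
services:
  legacy-app:
    image: registry.gitlab.com/example/legacy-app:latest
    environment:
      DB_HOST: db                  # talks to its private database
  db:
    image: mariadb:10.11           # pinned to v10 for compatibility
    environment:
      MARIADB_ROOT_PASSWORD: change-me
    volumes:
      - legacy-db:/var/lib/mysql   # keep the data across restarts
volumes:
  legacy-db:
```

The v10 instance is invisible to everything else; the rest of the apps carry on using the Galera cluster.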

Management and Monitoring

I’ve used Webmin for server management and the Linux ‘mon’ tool for process monitoring for many years. Moving to containers meant that these were no longer appropriate. I dabbled with some tools for Kubernetes (K9s and others), but once I decided on Docker Swarm I had to choose appropriate tools for that environment. Early on I happened across Portainer, which allows management of a Docker Swarm environment. It’s a freemium type of product and, whilst it’s good, I don’t really want to pay for it year on year. So I’m now using a combination of Swarmpit and good old command-line tools (Swarmpit only manages Swarm stacks and services - not individual containers). I can’t find anything better for now, but I’m still looking.

For monitoring I’m using Grafana + Prometheus (you’ve seen examples of Grafana in a previous post). Prometheus gathers data and makes it available to Grafana, which has dashboards to display the data as graphs, etc. There are also additional data sources, e.g. Blackbox, which makes it easier to monitor web site availability and SSL certificate expiry, and there’s even a Discourse data source so I can plot metrics from the forum. There’s also alerting built in, so I can set rules that say “If the WxSim data hasn’t updated for more than 7 hours then send me a notification”. Notifications can be sent using a number of mechanisms; I’m currently sending them to Telegram so I see them pretty quickly. I only have minimal alerts for now - adding more is my next activity.
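As an example of what such a rule looks like, here’s an illustrative Prometheus alerting rule (the metric name is hypothetical - the real one depends on what the exporter publishes):

```yaml
# Illustrative Prometheus alerting rule
groups:
  - name: wxsim
    rules:
      - alert: WxSimDataStale
        # Fire when the last update is more than 7 hours old
        expr: time() - wxsim_last_update_timestamp_seconds > 7 * 3600
        for: 10m                  # must persist before alerting
        labels:
          severity: warning
        annotations:
          summary: "WxSim data hasn't updated for more than 7 hours"
```

Alertmanager (or Grafana’s own alerting) then takes care of turning that into a Telegram notification.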


Network

I started with a basic external Ethernet connection to the Internet, with Cloudflare acting as a DNS service and cache. Traffic between the server and Cloudflare travelled in the open across the Internet, and I connected to the servers using SSH.

With the cloud servers and the latest dedicated servers there’s an internal 1Gbps network that allows them to talk to each other. Swarm runs across this network so all inter-server traffic is on a private network (VLAN).

I’ve also extended my use of Cloudflare with a Zero Trust tunnel. This uses a piece of software on each server that creates a secure/encrypted connection to Cloudflare. I also have the same software running on my laptop, so I have a private network (10.x.x.x) connecting the servers and my laptop through Cloudflare. The tunnel also provides the link for all external access to the servers: all web traffic goes to Cloudflare and then down the tunnel to the servers. The tunnel software and Swarm route the incoming connection to the server that’s running the application being accessed. The tunnel connection originates from the servers, so the firewalls don’t need any inbound ports open, which massively improves the security of the servers.
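The cloudflared configuration is pleasantly short; a sketch with a placeholder tunnel ID and hostname looks like this:

```yaml
# /etc/cloudflared/config.yml - illustrative only
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  - hostname: app.weather-watch.com    # placeholder hostname
    service: http://localhost:8080     # forwarded into the Swarm ingress
  - service: http_status:404           # required catch-all rule
```

Because cloudflared dials out to Cloudflare, nothing needs to listen on a public port - the firewall can simply drop all inbound traffic.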

So that’s it. I’ve moved from a single container (Discourse) and a bunch of other applications coaxed into working together on the same server, to 61 active containers (plus some other paused containers) running all the weather-watch.com services, plus a bunch of personal applications and some applications for a local charity that I do some work for. That’s where it’s staying for now too! I’m sure I’ll move on at some point, but I’ve no intentions of doing anything major in the near future.


:+1: :+1: :+1: Thanks for all the hard work Chris!!!


Ooops lot of work !!!

Thanks !
