I know I’ve been quiet. I’m sure you all missed me! I’m slowly trying to catch up on what I’ve missed.
It’s been a busy time with Dad’s Taxi runs to airports, train stations and other places. International visitors going home, international visitors arriving. Entertaining international visitors. Readying a house for sale. Watching the aurora. Not a lot going on really!
I’ve also been continuing to work on my master plan for ruling the weather-watch.com (and associated domains) universe in a cloud environment when I’ve had a few moments to spare! It’s taken a lot of Google-fu and experiments to achieve but I think I now have a working environment that I can start to move things across to. What I’ve ended up with is:
- 3 cloud servers (4-core CPU, 16GB RAM, 160GB disk)
- Each server runs Debian 12.5
- Each server has the latest version of Docker installed, running in Swarm mode. Docker Swarm allows multiple containers (think ‘little virtual servers’) to be run on each cloud server and lets containers be switched between servers should a server crash or have to be rebooted.
- Each server has a replicated disk storage area which is accessible from all the other servers. Think of this like a shared directory on Windows. Putting files here, e.g. a web site, is part of what allows containers to jump from server to server: they always have the files they need in what looks like a local directory.
- Each server is also running a clustered version of the MariaDB database (using Galera replication). This means that any database can be accessed from any server. This is another feature that allows containers to move from server to server, because there’s always a live copy of the database available to access. The database also has a load balancer in front of it, so a container just accesses the load balancer, which routes the database access to the most lightly loaded server.
- The servers are accessed from the Internet through a Cloudflare tunnel. When you connect to one of my web sites, you actually connect to the Cloudflare end of the tunnel. The tunnel is encrypted from Cloudflare to my cloud servers, and at my end your connection pops out into the appropriate server. This mechanism also provides a private network that I can use. From my laptop I have an encrypted connection to all of my servers, with the servers effectively having local IP addresses on my home LAN. The tunnels and private network mean that the firewall protecting my cloud servers only allows a single port through, the tunnel port. So all the dangerous ports that hackers like to attack are inaccessible.
- Pretty much all of this environment can be re-created by running a script. The script can create and configure networks, servers, firewalls, etc., and it installs and configures additional software as required. So I can rebuild pretty much the whole environment in about 30 minutes. This will make it easier for me to do major upgrades in future: I can build a new parallel environment using the latest software and then (hopefully) use Docker Swarm to migrate the containers from the ‘old’ environment to the new one when I’ve finished testing. I suspect I’ll have some more work to do the first time I attempt this, but I’ve got the building blocks in place.
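To give a feel for the Swarm side of this, here’s a rough sketch of a stack file. The service name, image and replica count are just illustrative examples, not my actual configuration:

```shell
# Hypothetical example: service name, image and ports are invented for
# illustration, not taken from my real setup.

# Write a minimal Swarm stack file for a replicated web service.
cat > stack.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:stable
    deploy:
      mode: replicated
      replicas: 2              # Swarm keeps 2 copies running across the nodes
      restart_policy:
        condition: on-failure  # reschedule on another node if one crashes
    ports:
      - "8080:80"
EOF

# On the first node (commands shown for illustration only):
#   docker swarm init
#   docker stack deploy -c stack.yml web
# The other nodes join using the token that `docker swarm init` prints.
```

Swarm then keeps the declared number of replicas running, restarting them on a surviving node if one of the servers goes down.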
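I haven’t named the replicated-storage software above, but as an illustration of the idea, with something like GlusterFS (one common choice; the volume and host names here are made up) the setup would look roughly like this:

```shell
# Illustrative only: GlusterFS is an assumed example of replicated storage,
# and the node/volume names are invented.

# Commands like these (run once, on one node) would create a 3-way
# replicated volume spanning the three servers:
#   gluster volume create shared replica 3 \
#     node1:/bricks/shared node2:/bricks/shared node3:/bricks/shared
#   gluster volume start shared

# Each node then mounts the volume at the same path, so a container sees
# identical files whichever server it lands on. An fstab entry might be:
cat > fstab.example <<'EOF'
localhost:/shared  /mnt/shared  glusterfs  defaults,_netdev  0 0
EOF
```

The key point is the identical mount path on every server, which is what lets a container move and still find its files.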
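For the database cluster, the Galera side boils down to a few MariaDB settings. This is a sketch with invented cluster name and addresses, not my real config:

```shell
# Illustrative Galera settings; cluster name and IP addresses are invented.
cat > galera.cnf <<'EOF'
[mysqld]
binlog_format = ROW
innodb_autoinc_lock_mode = 2
wsrep_on = ON
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = example-cluster
wsrep_cluster_address = gcomm://10.0.0.1,10.0.0.2,10.0.0.3
wsrep_node_address = 10.0.0.1
EOF
# A file like this (with its own node address) goes on each of the three
# servers; the first node bootstraps the cluster and the others join it.
```

Every node holds a full live copy of the data, which is why any server can answer any database query and the load balancer is free to pick the quietest one.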
If I’ve missed anything important on the forum, please let me know.