

I run it in a container on Kubernetes. Definitely recommend.


More than a PDP-11, but I would recommend against both.
I wonder how the “first computer” stats break down when some people (like me) were born before laptops and tablets. (Also, now I feel old…)


So I’ve done public DNS zone hosting, and you can use Let’s Encrypt for certs and such, but basically, Cloudflare is free and I get all of that. If I find another place that’s better, I’m open to jumping ship.
That’s pretty neat, what adapter is that?


“Best Friends Forever, Forever”?


Currently, I have a Proxmox cluster of three 1L Dell nodes with 6 Kubernetes nodes on it (3 masters, 3 workers). It lets me do things like migrate services off a host so I can take it out, do upgrades/maintenance, and put it back without hearing about downtime from family and friends.
For storage, I’ve got a Synology NAS with NFS set up, and the pods are configured to use that for their storage if they need it (so Jellyfin, Immich, etc.). I do regular backups of the NAS with rsync, so if it goes down, I can restore or stand up a new NAS with NFS and it’ll be back to normal.
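For anyone curious what the NFS wiring looks like, here’s a minimal sketch of a static PersistentVolume/PersistentVolumeClaim pair pointing at a NAS export. The server address, export path, name, and size are all made-up placeholders, not the setup described above:

```yaml
# Hypothetical example: static NFS PersistentVolume backed by the NAS.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jellyfin-media
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany          # multiple pods can mount the same NFS export
  nfs:
    server: 192.168.1.10     # placeholder NAS address
    path: /volume1/media     # placeholder Synology export path
---
# Claim that binds to the static PV above, referenced from the pod spec.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-media
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""       # empty string disables dynamic provisioning
  volumeName: jellyfin-media # bind explicitly to the PV above
  resources:
    requests:
      storage: 500Gi
```

The nice part of static NFS PVs for this kind of setup is that the data outlives the cluster: kill the cluster, re-apply the manifests, and the pods mount the same files.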


I feel like, for me at least, GitOps for containers is peace of mind. I run a small Kubernetes cluster as my home lab, and all the configs are in git. If need be, I know (because I tested it) that if something happens to the cluster and I lose it all, I can spin up a new cluster, apply the configs from git, and be back up and running. Because I do deployments directly from git, I know that everything in git is up to date and versioned, so I can roll back.
I previously ran a set of Docker containers with Compose and then Swarm, and I always worried something wouldn’t be recoverable. Adding GitOps here reduced my “what if?” quotient tremendously.
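As a sketch, the rebuild-from-git flow described above boils down to a couple of commands. The repo URL and layout here are hypothetical, and this assumes a fresh cluster already reachable via kubectl:

```shell
# Hypothetical recovery flow: fresh cluster + manifests straight from git.
git clone https://git.example.com/homelab/manifests.git
cd manifests

# Apply everything, recursing into per-service directories.
kubectl apply --recursive -f .

# Rolling back a bad change is just reverting the commit and re-applying.
git revert HEAD
kubectl apply --recursive -f .
```

Tools like Flux or Argo CD automate the apply step by watching the repo, but even this bare version gives you the “everything is in git” guarantee.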
I use an rsync job to do it. By default, rsync uses the file’s metadata to determine whether a file has changed and updates based on that; you can opt to use checksums instead if you’d rather. IIRC, you can do it with a Synology scheduled task, or just run it yourself on the command line. I’ve got Jenkins set up to run it so I can gather the logs and not have to remember the command every time (I use it for other periodic jobs as well), but it’s pretty straightforward on its own.
If you were able to capture some traffic, you could probably figure out what it’s hitting and the response it’s looking for, then override that DNS entry and fake the endpoint from your homelab or your own cloud-hosted app/lambda/API.
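For the DNS-override half, a dnsmasq-style rule (which Pi-hole will pick up from a custom config file) is enough to point the device at your replacement service. The hostname and IP below are made-up examples:

```
# Hypothetical dnsmasq override: answer queries for the vendor's hostname
# with the address of your own replacement service on the LAN.
address=/api.vendor.example.com/192.168.1.50
```

After that, any local client asking your resolver for that hostname gets your server instead of the vendor’s, and your fake endpoint just has to return whatever response the captured traffic showed the device expecting.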
Honestly, if you are diving into Kubernetes, just add some more of those 1L PCs in there. I tend to find them on eBay cheaper than Pis. Last year I snagged 4x 1L Dells with 16GB RAM for $250 shipped. I swapped some RAM around, added some new SSDs, and now have 3x Kube masters, 3x Kube worker nodes, and a few VMs running on a Proxmox cluster across 3 of the 1Ls with 32GB RAM and a 512GB SSD each, and it’s been great. The other one became my wife’s new desktop.
Big plus: there are so many more x86_64 containers out there compared to Pi-compatible ARM ones.


I second this, especially for the PiHole access. It’s also handy as it covers any of my self-hosted stuff.
Instead of building our own clouds, I want us to own the cloud. Keep all of the great parts about this feat of technical infrastructure, but put it in the hands of the people rather than corporations. I’m talking publicly funded, accessible, at cost cloud-services.
I worry that this will quickly follow this path:


I’ve heard lots of good things about Ghost. I’ve also hosted Grav for a while, and it’s pretty solid. You can do WordPress, but I’d stay away, as it gets bad fast and there are better alternatives. If you need even more scale, MediaWiki is self-hostable too.
What do you use for your music server if I might ask?
… by a simple majority in the case of purely internal affairs, but by a two-thirds majority in the case of more major…