

Tie it to your internet bandwidth usage, so that the bulb starts dimming when utilization goes up and maybe flickers a bit, as if you're drawing too much power off the grid when you're downloading stuff.


To add to that: health checks in Docker containers are mostly for self-healing purposes. Think about a system where you have a web app running in many separate containers across some number of nodes. You want to know if one container has become too slow or non-responsive so you can restart it before the rest of the containers are overwhelmed, causing more serious downtime. So, a health check allows Docker to restart the container without manual intervention. You can configure it to give up if it restarts too many times, and then you would have other systems (like a load balancer) to direct traffic away from the failed subsystems.
It’s useful to remember that containers are “cattle not pets”, so a restart or shutdown of a container is a “business as usual” event and things should continue to run in a distributed system.
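To make that concrete, here's a minimal Dockerfile-style sketch; the app, port, and /healthz endpoint are made up for the example. One caveat: plain dockerd only marks the container unhealthy, and the automatic restart comes from an orchestrator (Swarm, Kubernetes) or a small watcher container acting on that status.

    # Hypothetical Dockerfile; the app, port, and /healthz endpoint are assumptions.
    FROM python:3.12-slim
    COPY app.py /app/app.py
    EXPOSE 8000
    # Docker runs this check periodically; after --retries consecutive failures
    # the container is marked unhealthy.
    HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
        CMD ["python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/healthz', timeout=2)"]
    CMD ["python", "/app/app.py"]

docker ps will then show the container as healthy or unhealthy, and docker inspect exposes the full check history.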
True open source products are your best bet. TrueNAS and Proxmox are popular options, but you can absolutely set up a vanilla Debian server with Samba and call it a NAS. Back in the old days we just called those "file servers".
Most importantly, just keep good backups. If you have to choose between investing in RAID or a primary + backup drive, choose the latter every time. RAID will save you recovery time, but it's not a backup.
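For the Debian-plus-Samba route, the setup is roughly: install the samba package, add a share to /etc/samba/smb.conf, and give your user a Samba password. A minimal sketch, where the share name, path, and username are placeholders:

    # /etc/samba/smb.conf (minimal sketch; share name, path, and user are placeholders)
    [global]
        server role = standalone server
        map to guest = bad user

    [storage]
        path = /srv/storage
        read only = no
        valid users = alice

Then add the user with smbpasswd -a alice and restart the smbd service.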
This is a qualified truth. In theory what you're saying is true, but with Synology, for example, they use their own RAID format, and while they ostensibly use Btrfs they overlay their own metadata system on top.
Gotcha. I face similar issues with Synology. Their Hyper Backup format doesn't seem to be standard. I'm considering setting up Borg Backup for offsite so I can restore it onto non-Synology devices later.
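A Borg workflow for that is roughly: initialize a repo on the offsite host, create dated archives, and prune old ones. A hedged sketch, where the remote host, repo path, and source directories are made up:

    # Initialize an encrypted repo on the offsite host (one-time).
    borg init --encryption=repokey-blake2 ssh://backup-host/./borg/nas
    # Create a dated archive of the shares you care about.
    borg create --stats --compression zstd \
        ssh://backup-host/./borg/nas::'{hostname}-{now}' /volume1/homes /volume1/photo
    # Keep a rolling window of old archives.
    borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6 ssh://backup-host/./borg/nas

Since Borg's format is open, the archives can be mounted or extracted on any Linux box later, which is exactly the portability Hyper Backup lacks.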
Was this Synology by chance?
Remote, because my commute would be 140 miles round-trip again. Otherwise I mostly enjoy working in an office with people and I don’t mind going in every few months or so.
Remote is also nice because it actually makes it easier to collaborate with other developers when we can both be at our own keyboards and share screens.
I work well alone, but I spend a lot of time in calls, either work meetings or collaborating on code. I do enjoy the social aspect of that as well.
I use AI pretty much every day, but mostly as a search engine/SO replacement. I rarely let it write my code for me, since I've had overall poor results with that. Besides, I have to verify the code anyway. I do use it for simple refactoring or code generation like "create a C# class mapped to this table with Entity Framework".
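For context, the kind of boilerplate that prompt produces looks roughly like this; the table and column names are invented for illustration:

    // Hypothetical EF Core entity for an assumed "customers" table; names are invented.
    using System;
    using System.ComponentModel.DataAnnotations;
    using System.ComponentModel.DataAnnotations.Schema;

    [Table("customers")]
    public class Customer
    {
        [Key]
        [Column("customer_id")]
        public int CustomerId { get; set; }

        [Column("full_name")]
        [MaxLength(200)]
        public string FullName { get; set; } = string.Empty;

        [Column("created_at")]
        public DateTime CreatedAt { get; set; }
    }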
I have a DS923+ with four Seagate 8TB drives in it that I really like. It’s easy to use and offers a lot of services.
However, like others have said, I do not recommend it for new purchases. If I were to do it again I would most likely set up an old PC as a server (though I went with the Synology mainly for power use reasons).
Synology is getting increasingly customer-hostile, and from what I've read online their Linux version is so full of bespoke patches that they have painted themselves into a corner that will be hard to get out of. So they're likely to fall behind on keeping up with third-party software. Their software is usually pretty slick and easy to use, but they discontinue things every few cycles.
The main thing I still use of theirs is Synology Drive, which was a pretty seamless move from Google Drive. On the flipside, their stuff is proprietary, so getting off of their platform can be challenging.
For my self-hosting needs I try not to tie anything to the Synology and just use it as a plain NAS. I use my Raspberry Pi or a VM instead.


YAML editor? Business therapist? Email author? Paid meeting actor? Scrum participant? Office cynic? Idk.
I know it’s ELI5, but this is a common misconception and will lead you astray. They do not have the same level of isolation, and they have very different purposes.
For example, containers are disposable cattle. You don't back up containers. You back up volumes and configuration (see the sketch below), but not the containers themselves.
Containers share the kernel with the host, so your container needs to be compatible with the host (though most dependencies are packaged with images).
For self hosting maybe the difference doesn’t matter much, but there is a difference.
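To make the "back up volumes, not containers" point concrete, here's a hedged one-liner that tars up a named volume; the volume name and output path are placeholders:

    # Archive the contents of the named volume "app_data" into the current directory.
    docker run --rm \
        -v app_data:/data:ro \
        -v "$(pwd)":/backup \
        busybox tar czf /backup/app_data-$(date +%F).tar.gz -C /data .

Restoring is the same trick in reverse: mount an empty volume and untar into it.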
See if a lightweight Kubernetes installation is for you. Secrets are first-class citizens in k8s. You can maintain secrets in a number of different ways, but they are exposed to containers the same way: they can become files or environment variables, whichever you need.
I recommend looking at k3s to run on your Pi and see if that works for you. You can add vault software on top of that later without changing your containers.
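As a rough sketch of what that looks like (the secret name, key, and image are placeholders), the same Secret can be surfaced as an environment variable or as files:

    # Hedged k8s sketch; secret name, key, and image are assumptions.
    apiVersion: v1
    kind: Secret
    metadata:
      name: app-secrets
    stringData:
      DB_PASSWORD: change-me
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: app
    spec:
      containers:
        - name: app
          image: nginx:alpine
          env:
            - name: DB_PASSWORD          # exposed as an environment variable
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: DB_PASSWORD
          volumeMounts:
            - name: secrets              # or mounted as files under /etc/secrets
              mountPath: /etc/secrets
              readOnly: true
      volumes:
        - name: secrets
          secret:
            secretName: app-secrets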


The distinction is between bare metal and a virtual machine. Most cloud deployments will be hosted in a virtual machine, inside which you host your containers.
So the nested dolls go: bare metal → virtual machine → container → your application.


I have one of my domains on Cloudflare and was thinking of moving the rest of them there. What makes it harder to move name servers away from Cloudflare than other places?
Both MySQL and MariaDB are named after the developer's daughters.