

deleted by creator


deleted by creator
Cloud backups.


deleted by creator


How isolated could it really be as a docker container vs a separate machine or proxmox?
You can get much better isolation with separate machines but that gets very expensive very fast.
It’s not that it provides complete isolation - but it provides enough isolation very cheaply. You still compete with other applications for compute resources but you run in your own little filesystem jail and can run that janky python version that your application needs and not worry about breaking yum. Or you can bundle that old out-of-support version of libaio that your application requires. All of your dependencies are bundled with your application so you don’t affect the rest of the system.
And since containers are standardized, they let you move between physical computers without any modification or server setup beyond installing docker or podman. You can run on Amazon Linux, RedHat, Ubuntu, etc. If it can run containers it can run your application. Containers can also be multi-platform, so you can run on both ARM64 and AMD64 seamlessly.
And given that isolation you can run on a Kubernetes cluster, or Amazon ECS with Fargate instances, etc.
But that starts to get very enterprisey. For the home-gamer there is still a ton of benefit to just having file-system isolation and an easy way to run an application regardless of the local system version and installed packages. It’s a bit of an “experience” thing to truly appreciate it I suppose. Like I said - if you’ve tried running a lot of services on a system in the past without containers it gets kinda complicated rather fast. Especially if they all need databases (with containers you can spin up one db for each application easily).
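To make that concrete, here's a rough sketch (the image, paths, and ports are just placeholders) of running an app that needs an ancient Python without touching the host's packages:

```
# Hypothetical example: run an app that needs an old Python without
# installing anything on the host besides docker itself.
# App state lives in a host directory, the container is disposable,
# and only port 8080 is exposed to the rest of the network.
docker run -d \
  --name legacy-app \
  -v /srv/legacy-app/data:/data \
  -p 8080:8080 \
  python:2.7-slim \
  python /data/app.py

# The same command works on Amazon Linux, RedHat, Ubuntu, etc.,
# and for multi-arch images docker pulls the ARM64 or AMD64 build automatically.
```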


But what do you actually gain from this?
Isolation. The number one reason to use docker is isolation. If you’ve not tried to run half a dozen services on a single server then this may not mean much to you but it’s a “pretty big deal.”
I have no idea how the Synology app store works from this pov - maybe it’s docker under the covers. But in general I despise the idea of a NAS being anything other than a storage server. So running Nextcloud, Immich, etc. on a NAS is pretty anathema to me either way.
This is… Pretty stupid. There are things to be careful about but it’s pretty straightforward to use iptables.
But absolutely none of the issues you listed are issues with iptables.
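For what it’s worth, a basic host firewall in iptables is only a handful of rules. A rough sketch (the ports, interface, and subnet are just examples):

```
# Minimal host firewall sketch (example ports only).
# Allow established traffic, loopback, SSH, and HTTPS; drop everything else inbound.
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -P INPUT DROP

# The one docker-specific thing to be careful about: traffic to published
# container ports bypasses INPUT, so restrict it in the DOCKER-USER chain.
# Example: only allow the local LAN to reach containers via eth0.
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
```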
point is, firewalld and iptables are for amateur hour and hobbyists.
Which is weird for you to say since practically all of the issues you list are mistakes that amateurs and hobbyists make.
Containers run “on bare metal” just as much as non-containerized applications.


guess what, I know how these work.
Neat. I don’t care.


so please tell me “how to do things right”, or shut up if you can’t tell any useful info
WTF? I’m not trying to tell you how to do anything. I’m sick of selfhosted twerps bitching about “how hard it is to self host” when they think everything should be like an app on their phone. You need to learn how networks, dhcp, dns, ssl, certificates, etc. work.


They’re cheap. You can also generate your own certs and use your own CA. But otherwise yes - quit yer bitching and learn how to do things right.
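If you go the private-CA route it’s a few openssl commands. A rough sketch with placeholder file names and hostname:

```
# Create a private CA (placeholder names throughout).
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -sha256 -days 3650 -out ca.crt -subj "/CN=Home Lab CA"

# Key + CSR for the server, then sign it with the CA.
# Include a SAN since browsers ignore the CN these days.
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr -subj "/CN=nas.home.lan"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -sha256 -out server.crt \
  -extfile <(printf "subjectAltName=DNS:nas.home.lan")

# Then import ca.crt into the trust store of every device that should trust it.
```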


You don’t need to if you’re just using things locally.
But also - domains are cheap.


That’s a lot easier said than done for hobbyists that need a certificate for their home server.
If you’re going to self host you need to learn. I have no time for kids who just want “Google but free” and don’t want to spend any time learning what it takes to make that happen.


It’s being driven by the browsers. Shorter cert lifetimes mean less time for a compromised certificate to be causing trouble.
https://cabforum.org/working-groups/server/baseline-requirements/requirements/


Will we need to log in every morning and expect to refresh every damn site cert we connect to soon?
Automate your certificate renewals. You should be automating updates for security anyway.
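With certbot (or any other ACME client) that’s a single cron entry or systemd timer. A sketch, assuming you want nginx reloaded after a renewal:

```
# Crontab sketch: let certbot check twice a day; it only renews certs near expiry.
# (Most certbot packages install a systemd timer that does this for you anyway.)
0 3,15 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"
```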
“Bare metal” has traditionally meant without any OS at all. Your code executes directly on hardware and has direct control over everything. Like a microcontroller.
Code in a container executes on the hardware in exactly the same way as code not running in a container - with the OS as an intermediary.
“not running in a container” is not “running on bare metal”. It’s just running outside a container.


enough, a lot, more demanding.
You need to give some sort of guidance here.
Flatpaks are similar, but more aimed at desktop applications. Docker containers are made for services and give more isolation on the network side.
Docker containers get their own IP addresses, they can discover each other internally, you get port forwarding, etc. Additionally you get volume mounts for persistent storage and other features.
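A quick sketch of what that looks like with plain docker commands (all names here are made up):

```
# Containers on the same user-defined network get their own IPs and can
# find each other by name via docker's built-in DNS.
docker network create appnet
docker volume create dbdata

docker run -d --name db --network appnet \
  -e MYSQL_ROOT_PASSWORD=example \
  -v dbdata:/var/lib/mysql \
  mariadb

docker run -d --name web --network appnet -p 8080:80 nginx

# "web" can reach the database at the hostname "db" on port 3306;
# only port 8080 on the host is reachable from outside.
```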
Docker compose allows you to bring up multiple dependent containers as a group and manage connections between them, with persistent volumes. It’ll handle lifecycle issues (restarting crashed containers) and health checks.
An example - say you want a Nextcloud service and an Immich service running on the same host. You can create two docker-compose files, each launching one of them with its own supporting database, and give each db and application persistent volumes for storage. Your applications can be exposed to the network while the databases are only exposed internally to other containers. You don’t need to worry about port conflicts internally since each container gets its own IP address, so those two MySQL DBs won’t conflict with each other. All you need to do is ensure that publicly available services have a unique port forwarded to them. So there’s less to keep track of.
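Here’s a rough sketch of what one of those two compose files might look like - the Nextcloud half, with placeholder passwords and ports; the Immich file would follow the same pattern with its own db and volumes:

```yaml
# docker-compose.yml for the Nextcloud half of the example (values are placeholders).
services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme
    volumes:
      - dbdata:/var/lib/mysql
    restart: unless-stopped

  app:
    image: nextcloud
    ports:
      - "8080:80"          # the only thing exposed outside the compose network
    environment:
      MYSQL_HOST: db       # reachable by service name on the internal network
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme
    volumes:
      - appdata:/var/www/html
    depends_on:
      - db
    restart: unless-stopped

volumes:
  dbdata:
  appdata:
```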