AI is so much faster than reading docs. And you get context-specific responses that you can drill into. When used correctly it’s very useful.
This was using it… incorrectly though…
The drive got whipped [sic]
Oh, it was just sitting there and “got wiped”? Not because of a command you ran?
Sorry to be snarky, but when asking for help you need to provide what you did, what error message you see now, what you expected to happen, and what is actually happening. Mentioning which OS you’re using would also be helpful.
Presumably you should be able to get the drive back into a usable state - but I’m not familiar with SAS drives.
Am I the only one who has no idea what their problem is now? I get that there was an error about DIF, but… what’s the actual issue?


Links to lms, navidrome, gonic, ampache, nextcloud, airsonic, the previous post… But none to the thing you posted about?


I’ve run a publicly accessible, low-legitimate-traffic website (indexed by Google and others) from my home network for over 20 years without anything buckling so far. I don’t even have a great connection (30 Mbps upstream).
Maybe I’m just lucky?


I ran a fairly popular RTCW server back in the day… Insta-gib and sniper rifles only. Good times.


They’re good at different things.
Terraform is better at “here is a configuration file - make my infrastructure look like it” and Ansible is better at “do these things on these servers”.
In my case I use Terraform to create Proxmox VMs, and then Ansible provisions and configures the software on those VMs.
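As a rough sketch of that workflow (the directory layout, inventory, and playbook names here are made up):

```sh
# Terraform: declare the Proxmox VMs and make reality match the config
cd infra && terraform init && terraform apply

# Ansible: install and configure software on the VMs Terraform just created
cd ../config && ansible-playbook -i inventory.ini site.yml
```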


Terraform and Ansible. Script service configuration and use source control. Containerize services where possible to make them system-agnostic.


Flatpaks are similar, but more aimed at desktop applications. Docker containers are made for services and give more isolation on the network.
Docker containers get their own IP addresses, they can discover each other internally, you get port forwarding, etc. Additionally you get volume mounts for persistent storage and other features.
Docker Compose allows you to bring up multiple dependent containers as a group and manage the connections between them, with persistent volumes. It’ll also handle lifecycle issues (restarting crashed containers) and health checks.
An example - say you want a Nextcloud service and an Immich service running on the same host. You can create two docker-compose files, one launching each, with its own supporting database, and give every database and application a persistent volume for storage. The applications can be exposed to the network while the databases are only reachable internally by other containers. You don’t need to worry about internal port conflicts since each container gets its own IP address, so those two MySQL DBs won’t conflict with each other. All you need to do is ensure that each publicly available service has a unique port forwarded to it. So there’s less to keep track of.
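A rough sketch of the Nextcloud half of that (images, ports, and paths are just placeholders; an Immich stack in its own directory would look analogous, publishing a different host port):

```sh
mkdir -p ~/stacks/nextcloud && cd ~/stacks/nextcloud
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: mariadb:10.11
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: nextcloud
    volumes:
      - db_data:/var/lib/mysql      # persistent DB storage
    # no "ports:" entry - the DB is only reachable by other containers
  app:
    image: nextcloud:latest
    depends_on:
      - db
    ports:
      - "8080:80"                   # the only thing published to the network
    volumes:
      - app_data:/var/www/html      # persistent app storage
volumes:
  db_data:
  app_data:
EOF
docker compose up -d   # Compose handles startup order, restarts, etc.
```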


deleted by creator
Cloud backups.


How isolated could it really be as a docker container vs a separate machine or proxmox?
You can get much better isolation with separate machines but that gets very expensive very fast.
It’s not that it provides complete isolation - but it provides enough isolation very cheaply. You still compete with other applications for compute resources but you run in your own little filesystem jail and can run that janky python version that your application needs and not worry about breaking yum. Or you can bundle that old out-of-support version of libaio that your application requires. All of your dependencies are bundled with your application so you don’t affect the rest of the system.
And since containers are standardized, you can move between physical computers without any modification or server setup beyond installing Docker or Podman. You can run on Amazon Linux, Red Hat, Ubuntu, etc. If it can run containers, it can run your application. Containers can also be multi-platform, so you can run on both ARM64 and AMD64 seamlessly.
And given that isolation you can run on a Kubernetes cluster, or on Amazon ECS with Fargate, etc.
But that starts to get very enterprisey. For the home-gamer there is still a ton of benefit to just having file-system isolation and an easy way to run an application regardless of the local system version and installed packages. It’s a bit of an “experience” thing to truly appreciate it I suppose. Like I said - if you’ve tried running a lot of services on a system in the past without containers it gets kinda complicated rather fast. Especially if they all need databases (with containers you can spin up one db for each application easily).
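To make the portability point concrete, a rough sketch (the image name and volume are made up):

```sh
# the exact same command works on any host with docker or podman installed
# (Amazon Linux, Red Hat, Ubuntu, a Raspberry Pi, ...)
docker run -d --name jankyapp \
  -v jankyapp_data:/data \
  -p 8000:8000 \
  ghcr.io/example/jankyapp:1.0
# if the image is built multi-arch, the same tag pulls the right
# ARM64 or AMD64 variant automatically
```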


But what do you actually gain from this?
Isolation. The number one reason to use docker is isolation. If you’ve not tried to run half a dozen services on a single server then this may not mean much to you but it’s a “pretty big deal.”
I have no idea how the Synology app store works from this POV - maybe it’s docker under the covers. But in general I despise the idea of a NAS being anything other than a storage server. So running Nextcloud, Immich, etc. on a NAS is pretty anathema to me either way.
This is… pretty stupid. There are things to be careful about, but it’s pretty straightforward to use iptables.
But absolutely none of the issues you listed are issues with iptables.
point is, firewalld and iptables is for amateur hour and hobbyists.
Which is weird for you to say since practically all of the issues you list are mistakes that amateurs and hobbyists make.
Containers run “on bare metal” just as much as non-containerized applications.
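To back up the “straightforward” claim, a basic default-deny inbound ruleset is only a handful of lines (a sketch - the ports are examples, and the DROP policy goes last so you don’t lock yourself out):

```sh
iptables -A INPUT -i lo -j ACCEPT                                       # loopback
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT  # replies to outbound traffic
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                           # ssh
iptables -A INPUT -p tcp --dport 443 -j ACCEPT                          # https
iptables -P INPUT DROP                                                  # drop everything else
```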


guess what, I know how these work.
Neat. I don’t care.
SSH port forwarding and SOCKS proxying. Unless they block port 22.
Edit: If they do block port 22, run your SSH server on port 443.
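Roughly, assuming an SSH server reachable at home.example.com (hostnames and ports are placeholders):

```sh
# forward one internal service: localhost:8080 -> internal-host:80 via home
ssh -L 8080:internal-host:80 user@home.example.com

# or a SOCKS proxy on localhost:1080 - point your browser's proxy settings at it
ssh -D 1080 user@home.example.com

# if outbound 22 is blocked, have sshd also listen on 443 at home, then:
ssh -p 443 -D 1080 user@home.example.com
```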