

How much money are you willing to spend? Resiliency is expensive.


Self-hosting is trivial and everyone can do it.
So is open heart surgery. Unless you want it to end successfully.


Have you forgotten that you too started at 0?
Not at all. In fact I remember the day my server was hacked because I’d left a service running that had a vulnerability in it. I remember changing passwords, calling my bank to ensure there had been no fraudulent charges, etc. I remember “war driving” to find vulnerable WiFi networks. I remember changing default passwords on a service set up by a client of mine.
As I said - it’s not gate-keeping, it’s experience.
Yes, it sometimes can be difficult and frustrating, but so long as someone, anyone, is willing to try and learn and fail and retry, they can get my help.
Teaching is “gate-keeping” apparently. You can’t tell somebody that they need to learn something! You just need to give them a URL and say “run this thing as root and your stuff will work - totally not a scam tho”.


“Has anyone noticed that medical doctors gate-keep people doing open heart surgery?”
Why do you assume self-hosting is, or even can be, trivial? It is NOT for everybody. You should have some base level of technical knowledge, and you should expect to need to learn some things. It’s not a badge of honor, it’s experience.
My project focuses on building a tool that makes self-hosting more accessible without sacrificing data ownership
Good luck with that. Don’t get your users pwned in the process. You’re now responsible for the security of people who think “opening a command line” is too difficult.


I’m happy you’re discovering the Linux CLI, but this is pretty ridiculous. mpv, VLC, MPlayer, etc. all serve very different purposes than Jellyfin does.


I don’t.


Clearly you don’t know.


If I wanted to run updates frequently I would run arch lmao. Even if I did apt update every day, debian stable doesn’t get that many updates.
You’re not updating for features; you’re updating for bug and security fixes. That’s why Debian stable doesn’t have many updates, but the ones it does ship are typically important.
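If you want to see what’s actually pending on a box (assuming an apt-based system), a quick check:

```
# Refresh package lists, then list pending upgrades; on Debian stable
# the origin column will show most of them coming from -security
sudo apt update
apt list --upgradable
```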


That’s… not how it works… Debian is “stable”, not “secure”. You use Debian so that it’s easier to run updates frequently, since they’ll be unlikely to break things.


All systems, daily, via a single Ansible script. That’s apt update, upgrade, and reboot if needed (some systems are set to only reboot via a separate script so I can handle them separately).
Rarely have any sort of problems.
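The gist of it, sketched as Ansible ad-hoc commands (my real setup is a playbook, and the inventory group here is a placeholder):

```
# Update package lists and apply pending upgrades on every host
ansible all -b -m ansible.builtin.apt -a "update_cache=yes upgrade=dist autoremove=yes"
# On hosts that flag a needed reboot, schedule one in a minute so the
# SSH session can disconnect cleanly first
ansible all -b -m ansible.builtin.shell -a "test -f /var/run/reboot-required && shutdown -r +1 || true"
```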


Sounds like you bookmarked the whole flippin’ Internet.


Something that can make troubleshooting DNS issues a real pain is that there can be a lot of caching at multiple levels. Each DNS server can cache, the OS can cache (nscd), the browsers cache, etc. Flushing all those caches can be a real nightmare. I recently had problems with nscd causing issues kinda like what you’re seeing. You may or may not have it installed, but if you do, purging it may help.
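A few commands that can help, assuming a systemd-based Debian/Ubuntu box (your setup may differ):

```
# Flush systemd-resolved's cache, if you're using it
sudo resolvectl flush-caches
# Restart nscd to drop its cache
sudo systemctl restart nscd
# Or remove nscd entirely if nothing depends on it
sudo apt purge nscd
```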


If it’s not resolving, poke around with dig a bit to troubleshoot: https://phoenixnap.com/kb/linux-dig-command-examples
I’d start with “dig @your.providers.dns.server your.domain.name” to query the provider’s servers directly and see whether the provider actually responds for your entry.
If it does, it may be that the provider isn’t properly set up as authoritative for your domain (e.g. the NS records at your registrar don’t point to it). Next, query @8.8.8.8 or one of the root servers. If they don’t resolve it, then they don’t know where to send your query.
If they do, the problem is probably closer to home: either your local network or your Internet provider.
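Concretely, the sequence might look like this (server and domain names are placeholders):

```
# 1. Ask the provider's authoritative server directly
dig @ns1.your-dns-provider.com your.domain.name
# 2. Ask a public resolver to check the delegation end to end
dig @8.8.8.8 your.domain.name
# 3. Compare with what your local resolver returns
dig your.domain.name
# Bonus: walk the whole delegation chain from the roots down
dig +trace your.domain.name
```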


This is an awful analogy…
squeezing every last drop of resource from tired old hardware
This is such a myth. 99% of the time your hardware is sitting there doing nothing, even when running “bloated” services.
Nextcloud, for example, uses practically zero CPU and a few tens of MB of memory when sitting idle, yet people avoid it for “bloat”.
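It’s easy to check for yourself. For example, if it’s running in a container (the container name here is just an example):

```
# One-shot snapshot of CPU and memory usage for a running container
docker stats --no-stream nextcloud
# Or eyeball per-cgroup resource usage across all services
systemd-cgtop
```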


Oh for sure - containers are fantastic. Even if you’re just using them as glorified chroot jails they provide a ton of benefit.


Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list FFS. They’re just running in different cgroups that limit access to resources.
Yes, I’ll die on this hill.
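Easy to demonstrate, assuming Docker and a throwaway nginx container:

```
# Start a container, then look for it from the host
docker run -d --name hill nginx
ps aux | grep '[n]ginx'    # the container's nginx shows up in the host's process list
# And here's the cgroup it's confined to
cat /proc/"$(docker inspect -f '{{.State.Pid}}' hill)"/cgroup
```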


Could last years? Or months? Depends on a lot of factors. Fans may not like running 24x7, memory could fail, etc.
Just be prepared for what you’ll do if it fails.


Since it’s a public instance you’d want to be sure to keep it pretty up-to-date with new system patches and the latest stable versions of Nextcloud. If you’re comfortable with automating updates with ansible, k8s, docker-compose, etc. then it’s not a big deal. If you’re ssh’ing to a server to manually update things then it’s going to be a lot of overhead and likely forgotten.
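For the container route, the automation can be as small as a cron job (paths and names here are illustrative):

```
#!/bin/sh
# e.g. dropped in /etc/cron.weekly - pin a major-version image tag in the
# compose file so this only pulls in patch releases, not major upgrades
cd /opt/nextcloud || exit 1
docker compose pull && docker compose up -d
docker image prune -f
```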
Old hardware may also bring its own issues, and you’ll need backups, since old hardware (especially consumer-grade stuff) can fail very unexpectedly. And providing support for users is a whole… other thing…
I like the idea of starting with the “old laptop in a basement” approach as a way to get things going and see whether the service provides benefit, then looking to migrate to a more stable platform in the future.
enough, a lot, more demanding.
You need to give some sort of guidance here.