• 1 Post
  • 46 Comments
Joined 8 days ago
Cake day: January 6th, 2026

  • I’m guilty of a few of these and sorry not sorry but this is not changing.

    Often these are written with local dev and testing in mind, and the expectation is that self-hosters will look through them, probably customize them, and be responsible for their own firewalls and proxies before deploying them to a public-facing server. Larger deployments sometimes have internal load balancers on separate machines, so even in a configuration reflecting production, exposing on 0.0.0.0 or running with network=host can be normal.

    Never just run compose files for user services on a machine directly exposed to the internet.
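    To make the difference concrete, here is a minimal compose sketch (service name and image are illustrative, not from any particular project) contrasting an all-interfaces binding with a loopback-only one:

```shell
# Write an illustrative compose file showing both port-binding styles.
cat > compose.yaml <<'EOF'
services:
  web:
    image: nginx
    ports:
      # - "8080:80"           # binds 0.0.0.0: reachable from outside the host
      - "127.0.0.1:8080:80"   # loopback only: local access or via a reverse proxy
EOF
```

    With the loopback form, only processes on the host itself (such as your reverse proxy) can reach the service, regardless of what the firewall does.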


  • One related story: I had the arguable pleasure of operating a stateful, WebSockets/HTTP2-heavy, horizontally scaled “microservice” API built with Rails and even more Ruby, plus gRPC services written in other stacks. Pinning instances based on auth headers and sessions, weighting based on subpaths, stuff like that. It was originally deployed with Traefik. When it went from “beta” to having to handle heavier traffic consistently and reliably on the public internet, Traefik didn’t cut it anymore, and after a few rounds of evaluation we settled on HAProxy, which IIRC we never regretted. A friend’s company had HAProxy in front of one of the country’s busiest online services at the time, a pipeline largely built in PHP. I’ve seen similar patterns play out at other times in other places.

    Outside of $work I’ve had them all running side by side or layered (should consolidate some but ain’t nobody got time for that) over 5+ years so I think I have a decent feel for their differences.

    I’m not saying HAProxy is perfect, always the best pick, has the most features, or without tradeoffs. It does take a lot more upfront learning and tweaking to get what you need from it. But I can’t square your claims with lived experience, especially when you specifically contrast it with Traefik, which I would say is easy to get started with, has popular first-class support for containers, and loved by small teams - but breaks at scale and when you hit more advanced use-cases.

    Not that anything either of us has mentioned so far is relevant whatsoever to a budding homelabber asking how to do domain-based HTTP routing.

    I think you are just baiting now.




  • The main problem with UFW, besides being based on legacy iptables (instead of modern nftables, which is easier to learn and manage), is the config format. Keeping track of your changes over time is hard, and even with tools like Ansible it easily becomes a mess where the live rules fall out of sync with what you expect.

    Unless you need iptables for some legacy system or have a weird fetish for it, nobody needs to learn iptables today. On modern Linux systems, the iptables command isn’t backed by the legacy kernel code anymore but is a CLI shim that actually talks to the nftables backend.

    Misconfigured UFW resulting in getting pwned is very common. For example, with default settings, Docker will bypass UFW completely for incoming traffic.

    I strongly recommend firewalld, or rawdogging nftables, instead of ufw.

    There used to be limitations with firewalld, but policies maturing and replacing the deprecated “direct” rules, together with other general improvements, have made it a good default choice by now.
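    For reference, “rawdogging nftables” for a basic host firewall is less scary than it sounds. A minimal default-drop ruleset (run as root; the SSH port is just an example) looks roughly like:

```shell
# Create a table and an input chain with a default-drop policy,
# then allow loopback, established/related traffic, and SSH.
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
nft add rule inet filter input iifname lo accept
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input tcp dport 22 accept

# Persist the running ruleset so it survives a reboot (path per the
# stock nftables.service on Debian-family systems):
nft list ruleset > /etc/nftables.conf
```

    The same rules can of course live directly in /etc/nftables.conf instead of being entered one at a time.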


  • Firewalld

    sudo apt-get install firewalld  
    sudo systemctl enable --now firewalld # SSH (port 22) allowed, but most other inbound traffic blocked by default  
    sudo firewall-cmd --get-active-zones  
    sudo firewall-cmd --info-zone=public  
    sudo firewall-cmd --zone=public --add-port=1234/tcp  
    sudo firewall-cmd --runtime-to-permanent  
    

    There are some decent guides online. Also take a look in /etc/firewalld/firewalld.conf and see if you want to change anything. Pay attention to the part about Docker.

    You need to know about zones, ports, and interfaces for the basics. Services are optional. Policies are more advanced.
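    To illustrate how those pieces relate (interface name below is hypothetical): a service is just a named bundle of ports, so the first two commands open the same thing, and an interface determines which zone's rules apply to traffic arriving on it.

```shell
# Opening the "http" service and opening 80/tcp are equivalent:
sudo firewall-cmd --zone=public --add-service=http
sudo firewall-cmd --zone=public --add-port=80/tcp

# Assign an interface to a zone (replace eth0 with yours):
sudo firewall-cmd --zone=public --change-interface=eth0

# See which predefined services exist:
sudo firewall-cmd --get-services
```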


  • The right nginx config will do this. Since you already have Nginx Proxy Manager, you shouldn’t need to introduce another proxy in the middle just for this.

    Most beginners find Caddy a lot easier to learn and configure than Nginx, BTW.

    Another thing I rarely see mentioned: since the SNI (domain name) is sent unencrypted in HTTPS (unless ECH is used, which is still not common), you can proxy and route HTTPS requests based on domain without terminating TLS or involving HTTP at all. sniproxy does exactly that and is available in the Debian repos. If all you need is passing requests through to downstream proxies or to a service that terminates TLS itself, it works nicely.

    https://github.com/ameshkov/sniproxy
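    As an illustration of the same idea (not sniproxy's own config format), nginx's stream module can route on the SNI without terminating TLS via ssl_preread. Hostnames and backends below are hypothetical:

```shell
# Write an illustrative nginx stream config that forwards TLS
# connections by SNI, leaving the TLS session intact end to end.
cat > sni-routing.conf <<'EOF'
stream {
    map $ssl_preread_server_name $backend {
        app.example.com     10.0.0.10:443;
        cloud.example.com   10.0.0.11:443;
        default             10.0.0.10:443;
    }
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}
EOF
```

    The backends here do the actual TLS termination; the router only peeks at the ClientHello.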



  • Filling some gaps:

    systemctl enable --now firewalld unattended-upgrades  
    

    Read through /etc/firewalld/firewalld.conf, especially the part about how containers might bypass your firewall if you don’t change the defaults.

    Also, rootless Podman should run well out of the box as a mostly drop-in replacement for Docker (Docker also does rootless now, for that matter) and lets you run the container runtime unprivileged. That is more secure than adding your user to the docker group, which is effectively root. Setting up autostart by writing systemd .service unit files works the same for both Docker and Podman.
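    A minimal user-level unit for autostarting a rootless Podman container might look like this (container name and image are hypothetical; newer Podman also offers Quadlet files, but a plain .service works for both Docker and Podman):

```shell
# Write a user unit that runs a container at login/boot (with
# lingering enabled) and stops it cleanly on shutdown.
mkdir -p "$HOME/.config/systemd/user"
cat > "$HOME/.config/systemd/user/mycontainer.service" <<'EOF'
[Unit]
Description=My container
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/podman run --rm --name mycontainer -p 127.0.0.1:8080:80 docker.io/library/nginx
ExecStop=/usr/bin/podman stop mycontainer
Restart=on-failure

[Install]
WantedBy=default.target
EOF
# Then: systemctl --user daemon-reload && systemctl --user enable --now mycontainer.service
```

    For a rootless setup you'll also want `loginctl enable-linger $USER` so the unit starts without an active login session.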


  • Had a fun one when I put an x8 card forking into two NVMe drives into a motherboard I thought was compatible. No matter what, only one of them would connect. Turned out:

    • The x8 slot didn’t bifurcate at all
    • The secondary x16 slot could only do x8/x4/x4, which is the same as no bifurcation for an x8 card in that slot
    • The GPU only works in the primary slot

    You think you think of everything…


  • I have a few different makes of these and have been surprised by how big a PSU I had to use (versus the measured at-the-wall wattage) for them to stop occasionally failing at random and cutting a drive off until reboot. I guess it’s power spikes they don’t handle well. Besides that, the cards themselves obviously add some overhead in that department. Something to consider if low power is a priority.

    There have also been one or two drives that just wouldn’t work at all with either card but were fine in individual slots. I vaguely suspect drive firmware there.

    They do serve their purpose well; I just want to add some catches for anyone eyeing them. StarTech is the brand I had the fewest glitches with, FWIW, but keep in mind that’s just one anecdote.

    Also ask yourself if you really need PCIe 4.0, because the PCIe 3.0 models are quite a bit cheaper, cooler, and more stable.

    Oh, and make sure your motherboard supports PCIe bifurcation. Especially on older computers that’s not always a given.
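    One way to sanity-check after installing such a card is to see what the kernel actually negotiated (the bus address below is hypothetical; take it from the first command's output):

```shell
# List NVMe controllers the system can see.
lspci | grep -i 'non-volatile'

# Check the negotiated link width for one of them; each drive behind a
# bifurcated slot should typically report "Width x4".
sudo lspci -vv -s 01:00.0 | grep -i 'LnkSta:'

# If only one of the two drives shows up in the first listing at all,
# the slot likely isn't bifurcating.
```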



  • Odroid H4+ (Intel N97, 4 cores; comparable to the CPU of that Protectli) and H4 Ultra (Intel N300, 8 cores) are also worth considering. Versatile units from a small, established Korean maker.

    https://www.hardkernel.com/shop/odroid-h4-plus/

    https://www.hardkernel.com/shop/h3-h2-net-card-2/

    If you plan on virtualizing or running a bunch of containers on it, it’s worth looking at the higher-core models and more RAM. If it’s just for OPNsense, a 4-core model with 8 GB should be plenty.

    Also, if you can afford it, I strongly suggest getting two of whatever you go for and not running anything important on the secondary. It really sucks to hit some unexpected issue (hardware failures and OS regressions can happen to anything) and have nothing on hand to replace your main router with. Since you’ll be labbing, it can also be very freeing to have a testing/dev/staging/playground/debugging device with the same hardware, so messing around won’t take down your production network. IMO this is a higher priority than better specs if you have to make tradeoffs.


  • USB enclosures do tend to be less reliable than plain SATA in general, but the claim quoted below is just FUD. Software RAID isn’t any harder on an enclosure than running the same enclosure without RAID.

    The main argument against it is, I believe, mechanical: more moving parts means things might, well, move, unseating cables and leading to janky connections and possible resulting failures.

    You will kill your USB controller, and/or the IO boards in the enclosures

    wat.jpeg

    Source: 10+ years of ZFS and mdadm RAID on USB-SATA adapters of varying dodginess in harsh environments. Of course errors happen (99% of the time it’s a jiggly cable, buggy firmware/drivers, or a normal drive failure), but nothing close to what you describe.

    Your hardware is not going to become damaged from doing software RAID over USB.
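    To make that concrete: mdadm neither knows nor cares that the member disks hang off USB. A sketch with hypothetical device names (verify with lsblk first, since --create is destructive):

```shell
# Build a RAID1 out of two USB-attached disks, exactly as you would
# with SATA disks. Replace /dev/sdX and /dev/sdY with real devices.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY

# Watch the initial resync.
cat /proc/mdstat

# Persist the array definition across reboots (Debian-family path).
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
```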

    That aside, the whole project of buying new 4 TB HDDs for a laptop today just seems misguided. I know times are tight, but JFC, why not get either SSDs or bigger drives instead, or at least a proper enclosure if nothing else.



  • The OP is about hosting a forwarding or recursive DNS server for lookups, not authoritative DNS hosting (which would be at least one more separate server).

    I count two servers (one clusterable for HA). How is that a lot for a small LAN?

    More would also be normal for serving one domain both internally and publicly. Each of these can be a separate server:

    • Internal authoritative for the internal domain
    • Internal resolvers for internal machines
    • Internal source-of-truth for serving your zone publicly (may or may not be an actual DNS server)
    • Public-facing authoritative for your zone serving the above
    • Secondary for the above
    • Recursing resolver of external domains for internal use

    Some people then add another forwarding resolver like dnsmasq on each server.


    It seems the DHCP is handing out the firewall’s IP, 100.100.100.1, for the DNS server. Is that the expected behavior, since dnsmasq should be forwarding to TDNS at 100.100.100.333? Why not just hand out the TDNS address?

    You could, and that should work, but then it’s no longer forwarding. It does forwarding because that’s what you configured. Both approaches are valid.
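    In dnsmasq terms, the forwarding setup amounts to a couple of lines (the upstream address is illustrative, standing in for the Technitium server; the DHCP option hands clients the firewall's own IP, matching what you observed):

```shell
# Write an illustrative dnsmasq fragment: answer clients locally,
# relay anything unanswered to the upstream resolver.
cat > dnsmasq-forward.conf <<'EOF'
# Ignore /etc/resolv.conf; use only the server= upstream below.
no-resolv
# Forward unanswered queries to the upstream resolver (Technitium).
server=100.100.100.2
# Tell DHCP clients to use dnsmasq itself (the firewall IP) for DNS.
dhcp-option=option:dns-server,100.100.100.1
EOF
```

    Pointing clients at dnsmasq like this keeps local caching and host records working even when the upstream is briefly unavailable, which is the usual reason to prefer it over handing out the upstream address directly.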

    I have an opnsense firewall with DNSmasq performing DHCP and DNS forwarding to the Technitium server