What’s going on on your servers?
I had to bite the bullet and buy new drives after the old ones filled up. I went for used enterprise SSDs on eBay and eventually found some at an okay price, though they cost much more than the last batch I bought. Combined with Hetzner’s hefty price increase a few months ago, my hobby has become a bit more expensive again, thanks to the ever-growing appetite of companies building more data centers to churn through more energy.
Anyways, the drives are in, and my Ansible playbook to properly encrypt them and make them available in Proxmox worked, so that was smooth (ignoring the part where I pulled the Lenovo Tiny from the rack, opened it, took the SSD out, put an SSD in, closed it up, and racked it again, only to realize I had put the old SSD back in).
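For the curious, the playbook boils down to something like the sketch below. The device name, keyfile path, and storage ID are placeholders, and it assumes the community.crypto, community.general, and ansible.posix collections are installed:

```yaml
# Hypothetical sketch: LUKS-encrypt a data disk, open it, put a
# filesystem on the mapper device, mount it, and register the mount
# point as a Proxmox directory storage.
- hosts: pve
  tasks:
    - name: Create the LUKS container on the new SSD
      community.crypto.luks_device:
        device: /dev/sda                  # placeholder device
        state: present
        keyfile: /root/keys/data.key      # placeholder keyfile

    - name: Open it
      community.crypto.luks_device:
        device: /dev/sda
        state: opened
        name: data_crypt
        keyfile: /root/keys/data.key

    - name: Create a filesystem on the mapped device
      community.general.filesystem:
        fstype: ext4
        dev: /dev/mapper/data_crypt

    - name: Mount it
      ansible.posix.mount:
        path: /mnt/data
        src: /dev/mapper/data_crypt
        fstype: ext4
        state: mounted

    - name: Register the mount as a Proxmox directory storage
      ansible.builtin.command: pvesm add dir data --path /mnt/data
      register: pvesm_result
      # not idempotent as written; pvesm errors if the ID already exists
      failed_when: pvesm_result.rc != 0 and 'already defined' not in pvesm_result.stderr
```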
Any changes in your hardware setups? Did the price increase make you reconsider some design decisions? Let us know!
Just upgraded my CPU to an EPYC 7852 and installed an Intel 310 Eco. Also just cried and bought 64GB of DDR4 RAM to add to it.
Another week of not updating my Proxmox from Proxmox 7, which has been outdated for like two years now, because I just can’t be bothered, tbh.

Before y’all freak out: it’s isolated, and only two containers are accessible from outside, behind a proxy. So no need to panic.
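For when the motivation strikes: Proxmox ships a preflight checker for the 7-to-8 jump. A rough sketch of the path, assuming the standard bullseye-to-bookworm upgrade underneath:

```bash
# run the built-in checklist tool and fix anything it flags
pve7to8 --full

# point every apt source at bookworm / the PVE 8 repos, e.g.:
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list

# then do the actual upgrade
apt update && apt dist-upgrade
```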
So this week I was getting ready for my workday when my son told me Crafty Controller was inaccessible. I tried to SSH into the box that the service is pinned to… nada, dead. Tried to power-cycle it, nada.
Now, this node was a B450M-A mobo with a Ryzen 7 2700X and some hodgepodge scrap RAM I’d had running in it (RAM birthday was 2019). I hooked it up to a mini monitor and a keyboard, but it didn’t POST at all, just a blue screen of no signal. Unfortunately the B450M-A mobo doesn’t feature POST debug lights, nor does it use Q-LED; it apparently relies on the PC speaker, and my machine wasn’t telling any tales. Since I had no real idea as to the root cause, and reseating the RAM and the GPU and fiddling with it got me nowhere, I got my partner to approve the spend for a replacement motherboard so that I could have actual debug indicators.
Thursday the ROG B550-F Gaming WiFi II mobo arrived, as did the Ryzen 9 5900XT and the Nautilus 360 RS cooler. I spent the evening assembling the mobo, CPU, GPU, RAM, and all the related wiring; figured I would do the cooler the next day. Yesterday I got the cooler in place with some serious hardware acrobatics. I then fired it up and got a yellow LED: DRAM issue. So I unseated all of the RAM and plugged in one of the hodgepodge sets (I had 4x8GB RAM sticks); neither set worked, so I went to just trying a single stick at a time. Of the 4 sticks, only 1 was able to get past the yellow LED and into a completed POST.
So the RAM was shot, and I’m not going to run containers on a machine with only 8GB of RAM. I ordered up some Vengeance LPX 2x16GB sticks, and they arrived this morning! I just finished slotting them in and then wrestling with Gentoo’s understanding of where all the hardware was. It was a lot of fiddling with the Gentoo kernel config and installing the NVIDIA drivers, but after all of that was done, the system booted up successfully! I’ve now got it back in its residence, connected up to UPS power, about to shunt Docker containers back to the newly improved machine with 2x the CPU capacity.
Was a wild ride, but the cool part was that when the system shat itself, it was part of a 3-node Docker Swarm, and I had recently migrated to a NAS for persistence of my container data. The other 2 nodes aren’t as overbuilt as this thing, though, so I did have to do some memory wrangling and disable my lower-priority services in order to restore service, but I was able to keep all the necessary services running during the outage, and I got some learning in from a couple of services that didn’t port as cleanly as I would’ve liked. All in all, fun times in system administration! lol.
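For anyone curious, the juggling is mostly Swarm’s per-service limits and placement rules. A sketch with made-up names and numbers:

```yaml
# docker-compose.yml fragment for a Swarm stack: cap memory per service
# and keep heavy services off the small nodes via a (hypothetical) label.
services:
  jellyfin:
    image: jellyfin/jellyfin
    deploy:
      resources:
        limits:
          memory: 2G             # hard cap so one service can't starve a node
      placement:
        constraints:
          # label set beforehand with: docker node update --label-add size=big <node>
          - node.labels.size == big
```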
I’ve crossed that threshold in Dunning-Kruger where I see how much I don’t know, and it’s simultaneously disheartening and stressful. But hell, what am I going to do now? Quit?
I’m trying to properly learn VLANs and set them up so that I’ve got “self-hosted services exposed to the internet” and “everything else”. So far, the only thing I need to isolate is a NAS with Jellyfin and Komga, but I plan to add more services via a mini PC later. The thing that has made this whole journey frustrating is that every time I try to learn something, even laser-targeted, I don’t get the full answer from the first thing I find, and the next answer I find introduces more complexity. I think what I need is a managed switch from my local Micro Center, like a Netgear GS108Tv3, to replace the switch currently in my office. Then, if I understand correctly, I need to put the NAS (and eventually the mini PC) on their own subnet and use VLAN rules to allow traffic to that subnet but not from that subnet to the rest of my LAN. But it’s hard to determine if I’ve even got that right.
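If it helps anyone sanity-check me, here’s the rule logic I’m aiming for, written as an nftables sketch. The subnets and the WAN interface name are assumptions; on most home routers you’d express the same thing in the firewall UI:

```
# forward-chain policy between a trusted LAN and a services VLAN:
# the LAN may initiate into the VLAN, the VLAN may answer and reach
# the internet, but it may not initiate into the LAN.
table inet filter {
  chain forward {
    type filter hook forward priority 0; policy drop;

    ct state established,related accept                       # replies to allowed traffic
    ip saddr 192.168.1.0/24  ip daddr 192.168.20.0/24 accept  # LAN -> services VLAN
    ip saddr 192.168.20.0/24 ip daddr 192.168.1.0/24  drop    # services -> LAN: blocked
    ip saddr 192.168.20.0/24 oifname "wan0" accept            # services -> internet
  }
}
```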
> every time I try to learn something, even laser-targeted, I don’t get the full answer from the first thing I find, and the next answer I find introduces more complexity
Can empathize. Read a tutorial and think, ‘Well, that seems pretty straightforward.’ Read another tutorial about the same topic… ‘Jebus, that does not seem straightforward.’
It took me about 50 YouTube videos to get to a point where I believe I now understand what a reverse proxy is and how I should use one.
Yeah that’s about right.
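For anyone midway through those 50 videos: the core of it fits in a dozen lines. A minimal nginx sketch, with a made-up hostname and backend address:

```nginx
# Forward requests for one public hostname to an internal service,
# passing along the original host and client IP.
server {
    listen 80;
    server_name jellyfin.example.home;        # placeholder hostname

    location / {
        proxy_pass http://192.168.20.10:8096; # placeholder backend
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```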
I’m in the unenviable position of basically doing self hosting for work and all I want to do on the weekend is get greasy under my car.
Gonna change the oil later and replace a MAF
Everything’s fucking terrible, infested with AI slop. I don’t know what the fuck software to run. Everything is bad.
If only it were just AI, but the idiotic crawlers everywhere are getting worse by the day, it feels like.
I still have some ancient RPi running a basic homepage with some reverse proxies. A few weeks ago, after years of not caring about that thing, I realized that the access log, which had just been happily sitting there for years without reaching any relevant size, had suddenly grown by nearly 1GB, most of it in the last 6-8 months, because I never bothered to set up logrotate.

But hey… I had wanted to test setting up Anubis for quite some time. So now I can watch them run circles in the (still experimental) honeypot feature, reading pages and pages of nonsensical babbling 😂
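For reference, a logrotate drop-in this small would have prevented the 1GB surprise (the path is a placeholder for wherever your web server logs):

```
# /etc/logrotate.d/homepage (hypothetical): weekly rotation, keep four
# compressed rotations, skip silently if the log is missing or empty.
/var/log/nginx/access.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```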
Dug up an old RPi 3 for the sole purpose of running a bulletproof sandbox for Claude Code. At 1GB of RAM it’s a little slow, but quite tolerable for running `--dangerously-skip-permissions` and one-shot prompts.

Network isolated, Tailscale ACLs in place, a sudoless user created, and service account tokens set up with minimal permissions to GitHub. It should function almost exactly like Claude Code remote. Excited to start testing it.
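The ACL side is the part worth showing. A trimmed sketch of the policy file in Tailscale’s HuJSON format, with made-up tag and user names; since ACLs are default-deny, the sandbox can be reached but can’t reach anything else:

```
// Hypothetical tailnet policy: I can SSH to the sandbox Pi, and the
// default-deny means the sandbox can't initiate connections to any
// other device on the tailnet.
{
  "tagOwners": {
    "tag:sandbox": ["autogroup:admin"]
  },
  "acls": [
    { "action": "accept", "src": ["me@example.com"], "dst": ["tag:sandbox:22"] }
  ]
}
```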
Right at this moment, I’m rebuilding my homelab after a double HDD failure earlier this year.
The previous build had a RAID 5 array of three 1TB Seagate Barracudas that I picked out of the scrap pile at work. I knew what I was getting into and only kept replaceable files on it. When one of the drives started doing the death rattle, I decided to yank some harder-to-acquire files to my 3TB desktop HDD before trying to resilver the entire array. Guess which device was the next to fail. I could mount and read it, but every operation took 2-5 minutes, and SMART showed a reallocation count in the thousands. That drive contained some important files that I couldn’t replace, which were backed up to the (now dead) server.

Fortunately, `ddrescue` managed to recover damn near everything, and I only lost 80 kilobytes out of the entire disk. That was a very expensive lesson that I’ve learned very cheaply.
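For anyone who ends up in the same spot, the invocation was essentially the standard two-pass pattern (device and file names here are placeholders):

```bash
# First pass: grab everything readable, skipping bad areas quickly.
# The mapfile records progress so runs can be resumed and refined.
ddrescue -d -n /dev/sdb disk.img disk.map

# Second pass: go back and retry the bad areas a few times.
ddrescue -d -r3 /dev/sdb disk.img disk.map

# Then mount the image read-only and copy files out.
mount -o loop,ro disk.img /mnt/rescue
```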
The new setup has a RAIDz1 pool of 3x 4TB IronWolf disks (constrained by the available SATA ports on the motherboard), plus a new SSD for the OS and 16GB of RAM (upgraded from literally the first SSD I ever bought and 10GB of mismatched DDR3).

Mounting it was a bit of a dilemma. The previous array was simply mounted to the filesystem from `fstab` and bind-mounted into the containers. I definitely wanted the storage to be managed from Proxmox’s web UI and to be able to create virtual disks and LXC volumes on it. Some community members helped me choose ZFS over LVM-on-RAID5. Setting up the correct permissions wasn’t as much of a headache as last time. I’ve just managed to get a Samba+NFS+HTTP file server and Jellyfin running and talking to each other. Forgejo and Nextcloud will be next.

Fixed headaches with my Proxmox Backup Server. It has a SAS controller and 4 spinning drives running backups in a detached garage, and the old Fujitsu desktop I dug out of the office dumpster pile just kept crashing. I flashed the controller to IT firmware, updated the BIOS on the motherboard, and did everything else I could figure out, but the system just lost the drives pretty much daily and required a hard reset. Turns out, or at least that’s my conclusion, that the PSU in the machine just didn’t have enough juice for the whole setup, and that caused instability. I dug out an old (2010 or so) desktop from my own pile and threw a 600W PSU in the box; it’s now been stable for at least a week.
I would’ve liked to keep the Fujitsu machine, as it’s in a more compact case and has a couple-of-generations-newer CPU, but that thing has a proprietary power supply, so it was easier to swap out the whole system and just move the drives from one to the other. So the current setup consumes maybe a bit more electricity, but at least it’s doing what it is supposed to.
Like last time, no real changes. Just maintaining and enjoying.
I did the monthly Arch upgrade and had to reboot to get some service to come up. Took an extra 5 minutes this month.
I made the right decision to get two HP EliteDesk G4 minis instead of one, and to give them both plenty of RAM (before the shortage).
I temporarily “lost” one of them after a power outage revealed that its CMOS battery was dead. It’s back up and running, but it was nice to be able to simply import the containers on the remaining good unit while I fixed it up.
Test your backups! It’ll save you time and headache, plus you can feel superior to others.
Currently planning to add a third hardware node to my setup, along with proper distributed storage, and to enable HA for all my applications.
What kind of distributed storage do you want to use, Ceph? What kind of orchestration/hypervisor do you use? I also have two nodes currently (Proxmox) with pseudo-shared storage (ZFS replication).
I’m planning on using Proxmox with Ceph. I’m already using Proxmox for virtualization of my k8s nodes, but with three hardware nodes it’s finally time to enable Ceph as well as HA.
HA is possible with 2 nodes (+ a QDevice) and ZFS replication, but I’ll look for a third one as well sooner or later. I haven’t used Ceph, but everyone tells me how much of an overhead it has.
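For what it’s worth, the QDevice route is only a couple of commands; the IP is a placeholder, and the third box just needs to run corosync-qnetd, not Proxmox:

```bash
# on the small third machine (any Debian-ish box):
apt install corosync-qnetd

# on the two cluster nodes:
apt install corosync-qdevice
pvecm qdevice setup 192.168.1.50   # run once, from one node
```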
Yeah, I’m aware of the overhead and hope I won’t have to scale back to something lighter, but if it turns out to be too demanding, I’ll fall back to NFS or think of something else.
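If you do go the Ceph route, the Proxmox-side bootstrap is pleasantly short. A hedged sketch (the network and device names are placeholders, and the exact subcommands vary a bit between versions):

```bash
# on every node: install the Ceph packages
pveceph install

# once, on the first node: initialize with the cluster network
pveceph init --network 10.0.0.0/24

# on each of the three nodes: a monitor, plus one OSD per data disk
pveceph mon create
pveceph osd create /dev/nvme1n1

# finally, a replicated pool that Proxmox registers as VM/CT storage
pveceph pool create vmpool --add_storages
```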
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| HA | Home Assistant automation software / High Availability |
| HTTP | Hypertext Transfer Protocol, the Web |
| LVM | (Linux) Logical Volume Manager for filesystem mapping |
| LXC | Linux Containers |
| NAS | Network-Attached Storage |
| NFS | Network File System, a Unix-based file-sharing protocol known for performance and efficiency |
| PSU | Power Supply Unit |
| RAID | Redundant Array of Independent Disks for mass storage |
| RPi | Raspberry Pi brand of SBC |
| SATA | Serial AT Attachment interface for mass storage |
| SBC | Single-Board Computer |
| SSD | Solid State Drive mass storage |
| SSH | Secure Shell for remote terminal access |
| ZFS | Solaris/Linux filesystem focusing on data integrity |
| k8s | Kubernetes container management package |
[Thread #247 for this comm, first seen 18th Apr 2026, 20:10]
Is it possible to transfer RAID drives from a Synology server to an Ubuntu server without losing the data?
Most likely no, unless the RAID controllers are identical and you aren’t using Synology’s hybrid RAID stuff.
DISAPPOINTED
It might be, but it is risky: if anything goes wrong, there goes that data.
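If you do try it: Synology’s standard RAID levels are plain Linux md under the hood (SHR adds LVM on top), so the low-risk first step on Ubuntu is a read-only assemble. A cautious sketch with placeholder device names:

```bash
# look for md superblocks (the data partition is usually the third one
# on Synology disks)
sudo mdadm --examine /dev/sd[bcd]3

# try to assemble without writing anything to the member disks
sudo mdadm --assemble --scan --readonly
cat /proc/mdstat

# mount read-only first; SHR volumes may show up as LVM LVs instead of /dev/md0
sudo mount -o ro /dev/md0 /mnt/recovery
```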
😮💨