I have a home lab I use for learning and to self-host a few services for me and my extended family:

  • Nextcloud instance with about 1TB
  • Couple of websites
  • Couple of game servers

I’m running an R430 with twin E5-2620 v3s, 128GB of RAM, and spinning-rust storage.

When I deployed Nextcloud I did not think it through, and I stored all the data locally, which makes the instance too big to back up normally.

As a solution, I’ve split the Nextcloud software into its own LXC and the NAS into another, and I’m thinking about adding a cheap NUC NAS to rsync the files to.
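
Roughly what I have in mind for the nightly sync job, as a sanity check (hostnames and paths are made up, and it assumes key-based SSH from the NAS LXC to the NUC is already set up):

```python
#!/usr/bin/env python3
"""Nightly one-way sync of the Nextcloud data directory to the NUC NAS.

Just a sketch: hostnames and paths are placeholders.
"""
import subprocess
import sys

SRC = "/mnt/ncdata/"                     # Nextcloud data dir (trailing slash = copy contents)
DST = "backup@nuc-nas:/srv/nc-backup/"   # hypothetical NUC NAS target

result = subprocess.run([
    "rsync",
    "-a",          # archive mode: preserve perms, times, symlinks
    "--delete",    # mirror deletions so the copy doesn't grow forever
    "--partial",   # keep partially transferred files across interruptions
    SRC,
    DST,
])
sys.exit(result.returncode)
```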

I would also like to distribute the load of my server across separate nodes so I can get more familiar with HA and possibly hyperconverged infrastructure.

I would also like to have two nodes locally so I can work on one without bringing down services.

Any advice / tips?

Should I skip the NAS and go straight into Ceph?

Would 3x NUCs with Intel i5s or i7s and 32GB of RAM each be enough?

Would I be better off with 3x pizza box servers like R220s or DL20s?

Storage-wise, I’m trying to decide between an M.2-to-SATA adapter like this []() and a mixture of SSDs and spinning rust.

Or would I be better off with SFF?

Otherwise, I was considering a single 24-bay disk array with an LSI card in IT mode, but I’m inexperienced with those and I’m not sure about power usage / noise (the rack does sit next to my workstation).

And yes, surprisingly, you can put an LSI card in a NUC (this looks like a VERY fun project): https://github.com/NKkrisz/HomeLab/blob/main/markdown%2FLenovo_M720Q_Setup.md

Plus, most likely I would not expand the storage past 5 or 10 TB on each node.

Additionally, I’m looking at power cost: the current server runs at 168W 90% of the time, while those tiny NUCs look like they run at about 25W, the SFF boxes at 50-75W depending on what they have, and the shallow-depth servers idle at 25-50W depending on storage and processor options.
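
Back-of-the-envelope math on what those numbers mean per year; the $0.15/kWh rate and the 60W SFF midpoint are assumptions, plug in your own:

```python
# Annual energy cost from average draw. Draws are my own
# measurements/estimates above; the rate is an assumption.
RATE = 0.15  # $/kWh, assumed

def annual_cost(watts: float) -> float:
    return watts * 24 * 365 / 1000 * RATE  # W -> kWh/year -> $

print(f"R430 @ 168W:        ${annual_cost(168):.0f}/yr")     # ~$221/yr
print(f"3x NUCs @ 25W each: ${annual_cost(3 * 25):.0f}/yr")  # ~$99/yr
print(f"3x SFF @ 60W each:  ${annual_cost(3 * 60):.0f}/yr")  # ~$237/yr
```

Notably, three SFF boxes at the midpoint draw would cost about the same as the single R430 does now, while three NUCs come in at less than half.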

I also have a 12U rack at home and I would very much like to keep things racked and neat. It seems a lot easier to rack the NUCs than it would be with SFF cases.

Obviously I’m OK with buying new hardware (I’ll be selling the current server once I migrate); that’s part of the “learning” experience.

Any advice or experience you can share would be highly appreciated.

Thanks /c/selfhosted

  • Carrot@lemmy.today · 1 day ago
    I subscribe to the philosophy that each server should handle one thing. I’ve got a NAS that stores all data for every other server, other than boot drives. I’m using only spinning rust for data, so the network speed is never the bottleneck for my system. My NAS is a 24-bay chassis with the LSI card in IT mode. I got an LSI card that is powered from the PCIe slot itself, and power usage for the card seems negligible, but spinning 24 drives takes a decent bit of power. I’ve got 20 drives in it now and it’s pretty loud, but substantially quieter than the Dell R720 it replaced. It’s in my basement so it doesn’t bother me, but if sound is an issue and you don’t need a ton of space, definitely go with SSDs.

    I’ve also got a media server that handles all media streaming (movies/TV/audiobooks/music/ebooks/comics/manga/roms). It reads/writes its data to the NAS. I’ve got another server running my personal cloud (Nextcloud, password manager, testing new SH services). Again, the Nextcloud data is on the NAS. Both servers store backups to the NAS, as well as to a second local drive. I’ve also got a handful of Raspberry Pis running the smart-house stuff, and one running the Ubiquiti Controller. All are running the PoE hat with the M.2 port for stable boot drives, and store their backups on the NAS.

    I’ve kind of stopped running Proxmox + virtualization since I switched off the R720, as I find that running Debian on bare metal with btrfs backups is simpler for me, and I run almost all of my services in Docker. I’ve had a motherboard go out on my media server, and was able to swap the motherboard and get everything back up and running in a little under 2 hours. The longest part was the motherboard swap itself.
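
    Roughly what the btrfs backup amounts to, if you’re curious. This is just a sketch, not my actual script; paths are placeholders and incremental sends / pruning are left out:

    ```python
    #!/usr/bin/env python3
    """Take a read-only btrfs snapshot, then btrfs-send it to the NAS mount."""
    import subprocess
    from datetime import date

    SUBVOL = "/srv/appdata"                  # subvolume holding service data
    SNAP_DIR = "/srv/.snapshots"             # local snapshot directory
    NAS_DIR = "/mnt/nas/backups/media-box"   # NAS mounted here (e.g. over NFS)

    snap = f"{SNAP_DIR}/appdata-{date.today().isoformat()}"

    # Read-only snapshot -- required before `btrfs send` will accept it
    subprocess.run(["btrfs", "subvolume", "snapshot", "-r", SUBVOL, snap], check=True)

    # Stream the snapshot to a file on the NAS; the second local copy is
    # done the same way. Adding `-p <parent>` makes later runs incremental.
    with open(f"{NAS_DIR}/{date.today().isoformat()}.btrfs", "wb") as out:
        subprocess.run(["btrfs", "send", snap], stdout=out, check=True)
    ```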

    • EpicFailGuy@lemmy.world (OP) · 1 day ago
      Interesting, thanks for sharing.

      Any clue what the power draw on the disk array is? I did some basic measurements with the Kill A Watt: a spinner takes about 6-7W whereas an SSD takes about 2W. The price difference is too much for my use case though; on performance per watt per TB, I’m better off with a single 6TB disk (or a mirrored pair) in spinning rust.
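
      The trade-off I’m weighing, roughly. Wattages are my Kill A Watt readings; the prices are ballpark assumptions, not quotes:

      ```python
      drives = {
          # name:          (capacity TB, watts, price USD -- assumed)
          "6TB HDD":       (6, 6.5, 110),
          "4TB SATA SSD":  (4, 2.0, 220),
      }

      for name, (tb, watts, price) in drives.items():
          print(f"{name}: {watts / tb:.1f} W/TB, ${price / tb:.0f}/TB")
      # 6TB HDD:      1.1 W/TB, $18/TB
      # 4TB SATA SSD: 0.5 W/TB, $55/TB
      ```

      So the SSD actually wins on watts per TB, but at three times the price per TB it doesn’t pay for itself at my scale.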

      I’m not particularly concerned about data security since I’m syncing everything 3 ways. Whenever one of the drives fails I’ll consider it a “surprise disaster recovery exercise” XD

      • Carrot@lemmy.today · 24 hours ago
        Fair, I’ll try to get my Kill A Watt plugged in to check next time the server powers down and report back. Power is fairly cheap where I live, and I’ve got solar, so that’s never been a huge concern for me. I’d have to check, but I’ve always assumed it’s pulling ~10 watts per drive at normal times, and as far as I know my power bill pretty much reflects this. (Due to how my data pool works, all drives need to be spinning when in use, and my drives get basically zero downtime.)

        And that’s good. When I first got into self-hosting I was greedy for storage and didn’t have the money to pay for redundancy, and I got bit a few times. Now my media server is running on two 8-drive pools, each with two drives of parity, which ends up being around 200TB of usable space. I don’t have backups of my media pools; right now I’m using 24TB drives and the cost to back that up just doesn’t make sense. I do, however, have my personal cloud on mirrored drives with a backup at my brother’s house, also on mirrored drives, so it’d be pretty unlikely for me to lose the important stuff.
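
        The double-parity math, if you’re curious. This is the idealized figure with uniform drive sizes assumed; real pools come out lower:

        ```python
        # Usable space in a parity pool: (drives - parity) * drive size
        def usable_tb(drives: int, parity: int, size_tb: int) -> int:
            return (drives - parity) * size_tb

        per_pool = usable_tb(drives=8, parity=2, size_tb=24)
        print(f"per pool: {per_pool} TB, both pools: {2 * per_pool} TB")  # 144 / 288 TB
        ```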