Hi! I’ve never had a server, except for a raspberry that I use as a DNS (pi-hole), but I’ve been wanting it for a long time. The other day I found something that is kinda old, but very cheap, and I’ve been thinking about buying it since then.
It’s an IBM System x3500 M4. It has an E5-2620, 32 GB of DDR3, seven wonderful 900 GB SAS drives (I don’t know whether they’re spinning disks or SSDs), which would cover all of my Linux ISO needs for at least the next year (probably a bit more), and a ServeRAID M5110 RAID controller. All for 210 euros, which I think is very cheap.
From what I know, the E5 is power hungry for modern standards, and the SAS drives are not exactly friendly for replacement parts. How much would that (mostly the SAS part) be a problem?
Also, what can I expect concerning RAID? That is definitely the most concerning thing for me, as I’ve never worked with it.
Another huge part is, I do not care about accessing it from the outside, but I’d be sharing this system with my brother, in another city, so we would have to figure out a way of doing it. Normally I’d use port forwarding, but we’re both behind CG-NAT. Is there any way of not using a third party server as a proxy/VPN/whatever? If not, what service would you recommend for this purpose?
Another thing: my brother just happens to have a probably-working 16 GB ECC DDR3 stick lying around, except that it’s 1600 MHz, and the CPU only supports up to 1333 MHz. I’m pretty sure that if I put in two sticks with different frequencies, the CPU would run both at the lower one, but is that still the case when the CPU doesn’t support the frequency of one of the sticks? (In short: would the other stick work?)
If you have any other pointers or anything, let me know. Thank you :)
I wouldn’t get something that old. Get a workstation or small form factor machine that is 3-10 years old.
How expensive is your kWh? What about noise? What about cooling in summer? Your SAS drives will fail, so make sure you have spares.
I personally would look into suitable used Lenovo TinyPCs and a passively cooled switch to cluster them. Consider your nodes expendable. Cluster storage at node scale rather than RAID.
You have to think about the extra costs, like power use, loudness, etc. I have no idea what your server would be like, but it’s something to think about. FWIW, I reused my old desktop (it’s pretty beefy) as a server, but I run a Pi now, because it was costing me a fair bit in power usage.
loudness etc
What fan noise? A benefit of being clinically deaf.
I’d definitely skip this in favor of something consumer-grade. You can find used Dell Optiplexes all over the place cheap and stick a large drive inside/outside of it and use it for a couple of years.
A big old server is just going to drain your wallet on both power and parts with equal or worse performance and a lot more complexity for what 99% of home users will use it for.
It sounds like your main goal is probably a media server, and an Optiplex will give you an i5 or i7 with QuickSync, which works excellently for processing video. RAID isn’t really necessary here because you can just download more Linux ISOs if these are lost, though it can be great later if you buy a bunch more drives and expand into areas where data is less replaceable.
Can’t say on access behind CG-NAT, as I haven’t ever dealt with it, but Tailscale might work as a free third-party option though that’s just a guess.
Yeah, same.
I don’t know which version this 2620 is (they’ve been around since 2012), but any modern consumer CPU can outpace it by an order of magnitude - though with fewer PCIe lanes (and possibly less memory throughput, but I’m not sure) - and it doesn’t sound like you need them anyway.
Also, idling makes quite a lot of difference even with cheap electricity because it’s 24/7 (e.g. 20 or 40 monies more a month on a rig that costs 200 monies either way is perhaps worth considering). Those “old” servers don’t really power down much (the CPU voltage and other parts, like server mobos, generally consume more power).
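To put rough numbers on that 24/7 idle cost (a sketch only - the wattages and the 0.30 €/kWh rate are my assumptions, plug in your own numbers):

```python
# Rough running-cost estimate for an always-on box.
# Wattages and electricity price below are assumptions, not measurements.

def monthly_cost_eur(avg_watts: float, eur_per_kwh: float, hours: float = 24 * 30) -> float:
    """Energy cost for one month (30 days) of continuous operation."""
    kwh = avg_watts * hours / 1000
    return kwh * eur_per_kwh

price = 0.30  # EUR per kWh (assumed rate; check your own bill)
old_server = monthly_cost_eur(150, price)  # assumed idle draw of an old dual-socket Xeon box
tiny_pc = monthly_cost_eur(15, price)      # assumed idle draw of a used Tiny/Optiplex

print(f"old server: {old_server:.2f} EUR/month")
print(f"tiny PC:    {tiny_pc:.2f} EUR/month")
print(f"difference over a year: {(old_server - tiny_pc) * 12:.0f} EUR")
```

With those assumed numbers the gap over a year is on the order of the purchase price itself, which is the whole point of the comparison.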
Also, if quiet matters to you, making an old server quiet is harder than making a quiet “PC server”.
Also worth considering: whatever proprietary (“closed sauce”) admin tools the server depends on.
And this is just me, but I also don’t need RAID for my use case (and drive prices), for the same price as one HDD (a few years back that is) I would rather add that drive in an additional separate server/PC & have a backup system instead of just a backup drive (and it can be in a remote location in case of other emergencies/damages, like fire or whatever).
Old servers always seemed cheap, but after thinking about the alternatives they’ve always turned out to be just fairly priced in the end. Which makes sense, since it’s not noobs selling them, nor are they at fire-sale/liquidation prices (those get sold through intermediaries).
So an old consumer PC (top of the line of its era if you like, or one of those corpo prebuilt machines), a good drive, and a good (efficient & safe) PSU is my usual homelab recommendation, unless it wouldn’t cover very specific use cases.
Just for fun ballpark info:
- a 2620 (a hexacore from 2012) has a geekbench score of 390 (Single-Core Score) & 1902 (Multi-Core Score),
- an i3 12th gen (a classic quad-core from 2022 with a fraction of the power/heat and twice the max memory bandwidth) has 2147 (Single-Core Score) & 6766 (Multi-Core Score).

An old desktop PC would be ideal, for exactly the problems you pointed out. The only thing making me consider this one is the already-included drives, which is where I’d end up spending most of my money (especially now, thanks AI).
I mean, the same capacity would cost WAY more on its own. For a similar capacity plus the system I’d have to spend at least double :(
@CmdrShepard49 @orsetto Also, don’t use hardware RAID. If the controller ever fails, you lose all your data unless you can replace it with the same controller. If you do have a RAID controller, configure it for JBOD so you can use software RAID (or SnapRAID); at that point, you can do the same thing with consumer hardware.
Also, what can I expect concerning RAID? That is definitely the most concerning thing for me, as I’ve never worked with it.
Generally speaking it’s recommended these days to use a software RAID rather than relying on hardware. If anything happens to that RAID controller you will need to replace it with a duplicate in order to mount your drives. Software RAID is controlled by the Linux OS and would be much easier to recover. There used to be a bit of a performance penalty for a software RAID but these days it’s negligible.
I have an x3500 M4 but found it used way too much energy for my requirements. A regular PC does the job for less than 25% of the electricity.
So, I’d say check your needs and the footprint. The electricity bill comes every month, and something running 24/7 adds up real quick.
Do you have an estimate on the energy consumption?
I fired it up for you… Powered down it draws 12 W, which is just the PSU and the IMM (I had one power supply connected). Powered up, it draws 100 W.
The IMM info, one PSU with 230 Volt feed:

Bear in mind I had no VMs running…
I don’t know exactly what you want to host, but I have an old Fujitsu Q920 i5 with 16 GB of RAM and a 2 TB SSD which cost me around 200 euros.
I’m serving around 20 services on it including Nextcloud, Navidrome, Paperless NG, Emby, Matrix, Friendica, Wireguard, Zabbix, DNS blocker and a few VMs. Processor utilization is around 5% most of the time and it uses about 8 GB of RAM.
I admit that Nextcloud could be more responsive and faster but for the family it is enough.
I haven’t exactly checked how much electricity I’m using but I would have noticed a bigger change on my bill.
For backups I have a remote node at another location that retrieves ZFS snapshots periodically.
Anyway. I have this setup for 2 years now and I’m happy. It does what I expect it to do.
As a side note. i use FreeBSD with jails and bhyve for this. A Proxmox running VMs with nested Docker apps may need something more performant so be aware.
Yup, desktop components are what I was originally looking for, but this is 200 euros for a lot of storage, which would be way more expensive if I were to buy it separately.
Speaking of Nextcloud, if you use for backing up photos, have you tried Immich?
I haven’t. They only provide a Docker deployment, which I’m not fond of. If they offered an alternative installation method I might. I’ve heard a lot of good things about it.
At present I use Nextcloud client on my phone to backup photos and the Memories app to nicely browse them.
Servers are just expensive hardware. You can accomplish 95% with a consumer grade desktop without all the extra power/heat/noise and dependencies on specific hardware.
Yup. I was using my old desktop as a server. Thankfully, it has 32 GB of RAM and an 8-core CPU. I use a Raspberry Pi as a server now instead, because it’s far less power hungry.
Regarding CGNAT and Port Forwarding: I too am behind CGNAT with my ISP, and my solution to this is renting a cheap VPS (I use Contabo, others might have different recommendations) and installing Pangolin. It’s a tunneling software that uses some UDP fuckery to hole-punch straight through the network with their Newt tunnels. I use this so my friends and family can access my Plex and Overseerr requests.
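For the curious, the hole-punching trick those tunnels rely on is simple enough to sketch. This is a toy version with both “peers” on localhost purely to show the packet exchange; in a real setup a rendezvous server tells each peer the other’s public endpoint, and all names here are illustrative:

```python
# Toy sketch of UDP hole punching (the idea behind tools like
# Tailscale or Pangolin's Newt tunnels). Both peers run locally here;
# behind real NATs, the outbound packets are what open the mappings
# that let the replies through.

import socket

peer_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for s in (peer_a, peer_b):
    s.bind(("127.0.0.1", 0))
    s.settimeout(2)

addr_a = peer_a.getsockname()  # in reality: the public address the rendezvous server saw
addr_b = peer_b.getsockname()

# Each side fires a packet at the other's endpoint simultaneously.
peer_a.sendto(b"punch from A", addr_b)
peer_b.sendto(b"punch from B", addr_a)

msg_b, _ = peer_b.recvfrom(1024)
msg_a, _ = peer_a.recvfrom(1024)
print(msg_a.decode())  # punch from B
print(msg_b.decode())  # punch from A
```

Caveat: CG-NAT is often symmetric NAT, where the punched mapping differs per destination, so direct punching frequently fails there. That’s exactly why these tools keep a relay fallback, and why a small VPS is usually the practical answer.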
3.60 €/month is not bad. However, I’d look for something even cheaper, given that the lowest plan gives 4 cores and 8 GB of RAM, which I probably won’t need.
Two random things you may already know, but just in case:
- Oracle (yes, the evil company) gives you one VPS for free
- Jellyfin is an alternative to Plex which would work perfectly with your setup
Oracle (yes, the evil company) gives you one VPS for free
I hate that it’s oracle…
I’d like to spin up Jellyfin, but Plex has TV apps for the non-techies in my group ;/
Jellyfin also now has apps for most common TV OSes: webOS, some Samsung models (more to come as soon as Samsung approves them), Android TV.
Used/refurb SAS drives aren’t that expensive. Can someone with better memory than I please link to that site for second hand server components?
The reason SAS drives are usually more expensive isn’t that the tech itself costs more (it’s largely just a different interface), but rather that “enterprise grade” hardware goes through a few additional QA steps, such as running a break-in cycle at the factory to weed out defective units.
While a server such as the one you described is slightly power hungry, it’s not that bad. Plus, if you wanna get into servers long term, it could serve as a useful way to get used to the hardware involved.
Server hardware is at its core not that different from consumer hardware, but it does often come with some nice and useful additions, such as:
- Botswana drive bays (I tried to write “hotswap”, but autocorrect is probably correct)
- IPMI/iDRAC or equivalent for headless management
- Dual PSUs
- Rack mount capability
- Easy maintenance access to most hardware
- A ridiculous amount of sensors with automated warnings.
RAID is entirely optional. I seem to be the only one in here who actually likes hardware RAID, as software RAID is more popular in the self-hosting community. Whether you use it at all depends on your use case, though. If you want to live without it, use JBOD mode and access each drive normally. Alternatively, pool as many disks as you want into RAID6 and you get one large storage device with built-in redundancy. RAIDs can be managed either from the BIOS or from the OS using tools such as storcli.
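As a rough sketch of what those layouts would mean for the seven 900 GB drives in the listing (simplified: parity-drive counting only, ignoring filesystem overhead and decimal-vs-binary units):

```python
# Usable capacity vs. fault tolerance for common layouts.
# RAID5 spends one drive's worth of space on parity, RAID6 two;
# JBOD spends none but survives no failures.

def usable_gb(n_drives: int, drive_gb: float, layout: str) -> float:
    """Usable capacity in GB for a given layout (simplified model)."""
    parity = {"jbod": 0, "raid5": 1, "raid6": 2}[layout]
    return (n_drives - parity) * drive_gb

for layout in ("jbod", "raid5", "raid6"):
    failures = {"jbod": 0, "raid5": 1, "raid6": 2}[layout]
    print(f"{layout}: {usable_gb(7, 900, layout):.0f} GB usable, "
          f"survives {failures} drive failure(s)")
```

So RAID6 over all seven drives costs about 1.8 TB of raw space in exchange for surviving any two simultaneous drive failures.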
You’re the only one giving me a positive answer!
I do not mind RAID, so I won’t lose data when a drive fails (the most valuable thing will be backups, which, I mean, are backups, so not that critical if I can make a new one, but even losing other data would be a bit annoying). I think I’d do software RAID though, so if the controller breaks I wouldn’t have to find the same model, which could be hard for old hardware, I guess.
The other comments scared me about power consumption, so I’ll have to investigate that more.
I got 3 Seagate Exos X16 14 TB drives for only $140 each (refurbished) at the end of 2022. I’ve got them in TrueNAS as a zfs array and they work great.
Mine were the SATA version, which isn’t currently in stock. The SAS version of my drives go for $299 now. The SATA X14 version is $350.
So prices for the same refurbished drives are more than double what they were like 3.5 years ago, so they really are expensive! I paid like $10 per TB, but they’re all $15-25 per TB now! I was looking for drives for a friend who wants to get started self hosting, and I was shocked by how much refurb drives had gone up.
This is all from https://serverpartdeals.com/ by the way, I’m assuming that’s the site you mean too.
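The per-TB math above, as a quick sketch (prices are the ones quoted in this thread, capacities in decimal TB):

```python
# Price-per-terabyte comparison for the refurb drives discussed above.

def price_per_tb(price_usd: float, capacity_tb: float) -> float:
    """Dollars per terabyte for a given drive."""
    return price_usd / capacity_tb

print(price_per_tb(140, 14))            # the 2022 refurb SATA X16 price
print(round(price_per_tb(299, 14), 2))  # the current refurb SAS X16 price
print(round(price_per_tb(350, 14), 2))  # the current SATA X14 price
```

That works out to roughly $10/TB then versus $21-25/TB now, which matches the “more than double” observation.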
Released in 2012 (Product Spec). It certainly has potential, but as you say, it will consume more power than something more modern. There are things you can do to tame the beast. You might even adopt something I’ve been doing: shutting down the server before I retire for the evening. I am the only user, so otherwise it would just sit there eating up electricity while I sleep. I also have no midnight mass Linux ISO downloads, so there’s that. Good news though: RAM will be pretty cheap.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
CGNAT: Carrier-Grade NAT
DNS: Domain Name Service/System
NAT: Network Address Translation
PCIe: Peripheral Component Interconnect Express
PSU: Power Supply Unit
Plex: Brand of media server package
RAID: Redundant Array of Independent Disks for mass storage
SATA: Serial AT Attachment interface for mass storage
SSD: Solid State Drive mass storage
UDP: User Datagram Protocol, for real-time communications
VPS: Virtual Private Server (opposed to shared hosting)
ZFS: Solaris/Linux filesystem focusing on data integrity
[Thread #101 for this comm, first seen 17th Feb 2026, 20:41]