I did something similar last year. Market for mITX NAS boards is pretty bad. I went for ASRock N100DC-ITX – it has 2x SATA ports, but there's also PCIe 3 x4.
The main benefits of this board were:
* it's not from an obscure Chinese company
* integrated power supply – just plug in DC jack, and you're good to go
* passive cooling
Really hope they make an Intel N150 version.
The motherboard seems quite expensive.
HDDs have to be bought new, as does anything mechanical (e.g. fans). But for motherboards, CPUs, RAM and SSDs, there is great value in buying used enterprise hardware on eBay. It is generally durable hardware that spent a quiet life in a temperature-controlled datacentre, and server motherboards from 5 years ago are absolute aircraft carriers in terms of PCIe lanes and functionality. Used enterprise SSDs are probably more durable than a new retail SSD, plus you get power loss protection and better performance.
The only downside is slightly higher power consumption. But I just bought a 32-core 3rd-gen Xeon CPU + motherboard with 128GB RAM, and it idles at 75W without disks, which isn't terrible. And you can build a more powerful NAS for a third of the price of a high-end Synology; it's unlikely that the additional 20-30W of idle power consumption will cost you more than that difference.
Maybe 75 W without disks is not terrible, but it's not good either. My unoptimized ARM servers idle at about 3 or 4 W and add another 1 to 10 W when their SSDs or HDDs are switched on.
75 W probably needs active cooling. 4 W does not.
Anyway, you can probably do many more things with that 75 W server.
I wouldn't say that being new is an absolute requirement. I recently upgraded my ZFS pool from SATA to SAS HDDs. Since SAS HDDs have much better firmware for early error detection and monitoring, I decided to buy 50% refurbished. Even if I lost half of them, I would still be safe. I also have offsite backups. This setup worked really well for me, and I feel completely confident that my data is safe while not wasting unnecessary resources. Whether to use new or used equipment therefore depends on the setup.
Agreed, but that's taking a risk with your data (whereas if a motherboard fails, you likely just need to replace it and your data is fine), and HDDs kind of have a finite number of hours in them. Where buying them used makes sense, I think, is for a backup server that you leave off except for the few hours a week when you do an incremental backup. Then it doesn't really matter that the drives have already been running for 3 or 4 years.
75W idle is 650kWh a year, that's quite significant in the context of a home.
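For anyone who wants to plug in their own numbers, here is a rough sketch of that arithmetic; the $0.30/kWh rate is an assumed example, not a figure from the thread:

```python
# Rough annual energy use and cost of an always-on server at a given idle draw.
# The $0.30/kWh rate is only an example; substitute your own tariff.

def idle_cost(idle_watts: float, price_per_kwh: float = 0.30) -> tuple[float, float]:
    kwh_per_year = idle_watts * 24 * 365 / 1000  # W -> kWh over a year
    return kwh_per_year, kwh_per_year * price_per_kwh

for watts in (4, 75):
    kwh, cost = idle_cost(watts)
    print(f"{watts:>3} W idle ~ {kwh:5.0f} kWh/year ~ ${cost:6.0f}/year at $0.30/kWh")
# 75 W works out to roughly 657 kWh/year, matching the ~650 kWh figure above.
```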
Wait. You build a new one every -year-?! How does one establish the reliability of the hardware (particularly the AliExpress motherboard), not to mention data retention, if its maximum life expectancy is 365 days?
How else is one to get the clicks?
Looks like they built a new NAS but kept using the same drives, which, given the number of drive bays in the NAS, probably make up a large majority of the overall cost in something like this.
Edit: reading comprehension fail - they bought drives earlier, at an unspecified price, but they weren't from the old NAS. I agree, when the lifetimes of drives are measured in decades and huge amounts of TBW, it seems pretty silly to buy new ones every time.
MB and other elements are more concerning than the drives.
Built a NAS last winter using the same case. Temps for the HDDs used to be in the mid-50s C with no fan and about 40 with the stock fan. The case-native backplane thingamajig does not provide any sort of PWM control if the fan is plugged into it, so it's either full blast or nothing. I swapped the fan for a Thermalright TL-B12 and the HDDs are now happily chugging along at about 37, with the fan barely perceptible. Hddfancontrol ramps it up based on the output of smartctl.
The case can actually fit a low-profile discrete GPU; there's about a half-height card's worth of space.
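Hddfancontrol does all of this for you, but as a rough sketch of the idea (not its actual implementation), something like the following works; the device names, PWM sysfs path, and temperature thresholds are placeholders you would adjust for your own box:

```python
# Minimal temperature-based fan curve: read HDD temps via smartctl's JSON
# output, map the hottest drive onto a PWM duty cycle (0-255), write it to
# the fan's sysfs node. Needs root; pwm1_enable may also need manual mode.
import json
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb"]            # placeholder device names
PWM_PATH = "/sys/class/hwmon/hwmon2/pwm1"    # placeholder PWM node for your board

def drive_temp(dev: str) -> int:
    # smartctl 7+ can emit JSON; most drives report a "temperature" block.
    out = subprocess.run(["smartctl", "-A", "-j", dev],
                         capture_output=True, text=True).stdout
    data = json.loads(out)
    return data.get("temperature", {}).get("current", 0)

def temp_to_pwm(temp: int, low: int = 30, high: int = 50) -> int:
    # Minimum duty below `low`, full blast above `high`, linear in between.
    frac = min(max((temp - low) / (high - low), 0.0), 1.0)
    return int(60 + frac * (255 - 60))

hottest = max(drive_temp(d) for d in DRIVES)
with open(PWM_PATH, "w") as f:
    f.write(str(temp_to_pwm(hottest)))
```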
I would like to point people to the Odroid H4 series of boards. N97 or N355, 2*2.5GbE, 4*SATA, 2 W in idle. Also has extension boards to turn it into a router for example.
The developer hardkernel also publishes all relevant info such as board schematics.
I am building a NAS using the Odroid H4+ and a 3d printable case design. I selected the Odroid board for the in-band ECC and low power consumption: https://www.printables.com/model/1257966-odroid-h4-nas
And the best feature is they have in-band ECC, which can correct one-bit and detect two-bit errors. No other Alder Lake-N or Twin Lake SBC exposes this feature in UEFI.
I also have an older Odroid HC4; it has been running smoothly for years. Not only can I not spend $1000 on a NAS as the current post implies, but the power consumption also seems crazy to me for mere disk-over-network usage (a 500W power supply).
I like the extensive benchmarks from Hardkernel. The only issue is that any ARM-based product is very tricky to boot, and the only savior is Armbian.
> HDDs have to be bought new
In a DC environment, sure. In a home NAS, not so much. I'm on Unraid and just throw WD recertified drives of varying sizes at it (plus some shucked external drives when I find them on offer); that's one of its strengths and makes it much cheaper to run.
I think the worry about power consumption is a bit overblown in the article. My NAS has an i5-12600 + Quadro P4000 and uses maybe 50% more power than the one in this article under normal conditions. That works out to maybe $4/month more cost. Given the relatively small delta, I'd encourage picking hardware based on what services you want to run.
Less power, less heat. Less heat, less cooling required. At some point that allows you to go fanless, and that's very beneficial if you have to share a room with the device.
It depends how much electricity costs where you live. I’m quite pleased mine idles at ~15W.
I'm with you, but my "NAS" is also really just a server, running tons of other services, so that justifies the power consumption (it's my old 2700X gaming rig, sans GPU).
But I do have to acknowledge that the US has relatively low power costs, and my state in particular has even lower costs than that, so the equation is necessarily different for other people.
Very sad that HDDs, SSDs, and RAM are all increasing in price now, but I just made a 4 x 24 TB ZFS pool with Seagate Barracudas on sale at $10/TB [1]. That seems like a pretty decent price, even though the Barracudas are only rated for 2400 power-on hours per year [2] - though that is the same spec the refurbished Exos drives are rated for.
By the way, it's interesting to see that OP has no qualms about buying cheap Chinese motherboards, but splurged on an expensive Noctua fan when the Thermalright TL-B12 performs just as well for a lot less (although the Thermalright could be slightly louder and perhaps have a slightly more annoying frequency spectrum).
Also, it is mildly sad that there aren't many cheap low-power (< 500 W) power supplies in the SFX form factor. The SilverStone SX500-G 500W SFX that was mentioned retails for the same price as 750 W and 850 W SFX PSUs on Amazon! I've heard good things about getting Delta Flex 400 W PSUs from Chinese websites --- some companies (e.g. YTC) mod them to be fully modular, and they are supposedly quite efficient (80 Plus Gold/Platinum) and quiet, but I haven't tested them myself. On Taobao, those are like $30.
[1] https://www.newegg.com/seagate-barracuda-st24000dm001-24tb-f...
[2] https://www.seagate.com/content/dam/seagate/en/content-fragm...
>I just made a 4 x 24 TB ZFS pool
How much RAM did you install? Did you follow the 1GB per 1TB recommendation for ZFS? (i.e. 96GB of RAM)
> $10 / TB
That's a remarkably good price. If I had $1.5k handy I'd be sorely tempted (even tho it's Seagate).
It's a good price but the Barracuda line isn't intended for NAS use so it's unclear how reliable they are. But it's still tempting to roll the dice given how expensive drive prices are right now.
I've recently shucked some Seagate HAMR 26TB drives; hopefully they last.
Not surprised by the fan; once I went Noctua I didn't go back.
Q - assuming the NAS was strictly used as NAS and not as a server with VMs, is there a point in having a large amount of RAM? (large as in >8GB)
I'm not sure what the benefit would be since all it's doing is moving information from the drives over to the network.
I am not at all an expert, I can only share my anecdotal unscientific observations!
I'm running a TrueNAS box with 3x cheap shucked Seagate drives.*
The TrueNAS box has 48GB RAM, is using ZFS and is sharing the drives as a Time Machine destination to a couple of Macs in my office.
I can un-confidently say that it feels like the fastest TM device I've ever used!
TrueNAS with ZFS feels faster than Open Media Vault(OMV) did on the same hardware.
I originally set up OMV on this old gaming PC, as OMV is easy. OMV was reliable, but felt slow compared to how I remembered TrueNAS and ZFS feeling the last time I set up a NAS.
So I scrubbed OMV and installed TrueNAS, and purely based on seat-of-pants metrics, ZFS felt faster.
And I can confirm that it soaks up most of the 48GB of RAM!
TrueNAS reports ZFS Cache currently at 36.4 GiB.
I don't know why or how it works, and it's only a Time Machine destination, but there we are; those are my metrics and that's what I know LOL
* I don't recommend this. They seem unreliable and report errors all the time. But it's just what I had sitting around :-) I'd hoped by now to be able to afford to stick 3x 4TB/8TB SSDs of some sort in the case, but prices are tracking up on SSDs...
ZFS uses a large amount of RAM; I think the old rule of thumb was 1GB of RAM per 1TB of storage.
That's only for deduplication.
https://superuser.com/a/993019
I do like to deduplicate my BitTorrent downloads/seeding directory with my media directories so I can edit metadata to my heart's content while still seeding forever without having to incur 2x storage usage. I tune the `recordsize` to 1MiB so it has vastly fewer blocks to keep track of compared to the default 128K, at the cost of any modification wasting very slightly more space. Really not a big deal though when talking about multi-gibibyte media containers, multi-megapixel art embeds, etc.
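As a quick illustration of why the larger recordsize matters for the dedup table, here is the block-count arithmetic for a hypothetical 50 GiB media file:

```python
# Number of blocks (and therefore dedup-table entries) ZFS has to track
# for a single large file at the default vs. a 1 MiB recordsize.
file_size = 50 * 2**30                         # example: a 50 GiB media file

for recordsize in (128 * 2**10, 1 * 2**20):    # 128 KiB default vs 1 MiB
    blocks = file_size // recordsize
    print(f"recordsize {recordsize // 2**10:>4} KiB -> {blocks:>6} blocks")
# 409600 blocks at 128 KiB vs 51200 at 1 MiB: 8x fewer entries to deduplicate.
```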
Have you considered "reflinks"? Supported as of [OpenZFS 2.2](https://github.com/openzfs/zfs/pull/13392).
Haven't used them yet myself but seems like a nice use case for things like minor metadata changes to media files. The bulk of the file is shared and only the delta between the two are saved.
cross-seed: https://www.cross-seed.org/
I believe they are saying they literally edit the media files to add / change metadata. Cross-seeding is only possible if the files are kept the same.
ZFS also uses RAM for a read-through cache, aka the ARC. However, I'm not sure how noticeable the effect of increased RAM would be - I assume it mostly benefits read patterns with high data reuse, which is not that common.
Huh. More than just the normal page cache on other filesystems?
Yes. Parent's comment matches everything I've heard. 32GB is a common recommendation for home lab setups. I run 32 in my TrueNAS builds (36TB and 60TB).
You can run it with much less. I don't recall the bare minimum but with a bit of tweaking 2GB should be plenty[1].
I recall reading about some people running it on a 512MB system, but that was a while ago, so I'm not sure if you can still go that low.
Performance can suffer though, for example low memory will limit the size of the transaction groups. So for decent performance you will want 8GB or more depending on workloads.
[1]: https://openzfs.github.io/openzfs-docs/Project%20and%20Commu...
ZFS will eat up as much RAM as you give it as it caches files in memory as accessed.
All filesystems do this (at least all modern ones, on linux)
If you use ZFS you might need more RAM for performance?
ZFS cache.
Caching files in ram means they can be moved to the network faster - right?
Depends on the network speed. At 1Gbps a single HDD can easily saturate the network with sequential reads. A pair of HDD could do the same at 2.5Gbps. At 10Gbps or more, you would definitely see the benefits of caching in memory.
Not as much as expected. I have several toy ZFS pools made out of ancient 3TB WD Reds, and anything remotely home-grade (striped mirrors; 4-, 6-, 8-wide raidz1/2) saturates the disks before 10-gig networking does. As long as it's sequential, 8GB or 128GB doesn't matter.
Makes sense. I didn't know if the FS used RAM for this purpose without some specialized software. PikachuEXE and Mewse mentioned ZFS. Looks like it has native support for caching frequent reads [0]. Good to know
[0]: https://www.truenas.com/docs/references/l2arc/
As others have said already, if you have more RAM you can have more cache.
Honestly it's not that necessary, but if you really use 10Gbit+ networking, then every second of transfer is on the order of a gigabyte (10 Gbit/s is roughly 1.25 GB/s). So depending on your usage you might never exceed 15% utilization, or have it almost maxed out if you constantly run something on it, e.g. torrents or using it as a SAN/NAS for VMs on some other machine.
But for rare, occasional home usage, neither 32GB nor this kind of monstrosity and complexity makes sense - just buy some 1-2 bay Synology and forget about it.
I won’t be able to sleep having my data just on 1 disk
I would have chosen the i3-N305 version of that motherboard because it has In-Band ECC (IBECC) support - great for ZFS. IBECC is a very underrated feature that doesn't get talked about enough. It may be available for the N150/N355, but I have never seen confirmation.
Can you explain why ECC is great for ZFS in particular as opposed to any other filesystem? And if the data leaves the NAS to be modified by a regular desktop computer then you lose the ECC assurance anyway, don't you?
ZFS is about end-to-end integrity, not just redundancy. It stores checksums of data when writing, checks them when reading, and can perform automatic restores from mirror members if mismatches occur. During writes, ZFS generates checksums from blocks in RAM. If a bit flips in memory before the block is written, ZFS will store a checksum matching the corrupted data, breaking the integrity guarantee. That’s why ECC RAM is particularly important for ZFS - without it you risk undermining the filesystem’s end-to-end integrity. Other filesystems usually lack such guarantees.
The oversimplified answer is that ZFS’ in-memory structures are not designed to minimize bitflip risk, as some file systems are. Content is hashed when written to memory cache, but it can be a long time before it then gets to disk. Very little validation is done at that point to protect against writing bad data.
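A toy illustration of that failure mode, using SHA-256 as a stand-in for ZFS's internal checksum, just to show why a flip that happens before checksumming goes undetected later:

```python
# If a bit flips in the buffer *before* the checksum is computed, the stored
# checksum matches the corrupted data, so a later read/scrub sees nothing
# wrong. A flip *after* checksumming would be caught on read instead.
import hashlib

block = bytearray(b"some file contents destined for disk")

# Bit flip in RAM prior to the write path computing the checksum:
block[5] ^= 0x01

checksum = hashlib.sha256(block).hexdigest()   # checksum of already-corrupted data

# Later, a read or scrub recomputes the checksum over what is on disk:
assert hashlib.sha256(block).hexdigest() == checksum
print("checksum verifies, so the corruption goes unnoticed")
```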
What is the impact on performance? Does it require special RAM? I just heard about this here.
Sorry for the German comment - ECC is mandatory!
Obligatory pasta: "16GB of RAM is mandatory, no ifs and buts. ECC is not mandatory, but ZFS is designed for it. If data is read near the beach and something somehow gets into RAM, an actually intact file on the disk could be 'corrected' with an error. So yes to ECC. The problem with ECC is not the ECC memory itself, which costs only a little more than conventional memory; it's the motherboards that support ECC. Watch out with AMD: it often says ECC is supported, but what is meant is that ECC memory runs while the ECC function is not actually used. LOL. Most boards with ECC are server boards. If you don't mind used hardware, you can get a bargain with, for example, an old socket 1155 Xeon on an Asus board. Otherwise the ASRock Rack line is recommended. Expensive, but power efficient. A general downside of server boards: the boot time takes an eternity. Consumer boards spoil you with short boot times; servers often need 2 minutes before the actual boot process even begins. So Bernd's server consists of an old Xeon, an Asus board, 16GB of 1333MHz ECC RAM, and 6x 2TB drives in a RaidZ2 (Raid6). 6TB are usable net. I somehow like old hardware. I like to push hardware until it won't go any further. The drives are already 5 years old but don't act up. Speed is great, 80-100MB/s over Samba and FTP. By the way, I don't leave the server running; I switch it off when I don't need it. What else? Compression is great. Even though I mainly store data that can't be compressed any further (music, videos), the built-in compression saved me 1% of storage space. On 4TB that's about 40GB of space saved. The Xeon still gets a bit bored. As a test I tried gzip-9 compression, and that did make it break a sweat."
This has been discussed on HN a few times before. User xornot looked at the ZFS source code and debunked "faulty RAM corrupts more and more on scrub"; for more details see https://news.ycombinator.com/item?id=14207520
The Jonsbo N3 case, which holds 8x 3.5" drives, has a smaller footprint than this one, which might be better for most folks. It needs an SFX PSU though, which is kind of annoying.
If you get an enterprise-grade ITX board with a PCIe x16 slot that can be bifurcated into 4 M.2 form-factor PCIe x4 connections, it really opens up options for storage:
* A 6x SATA card in M.2 form factor from ASMedia or others will let you fill all the drive slots even if the logic board only has 2/4/6 ports on it.
* The other ports can be used for conventional M.2 NVMe drives.
That's what I built! It's a great case, the only components I didn't already have lying around were the motherboard and PSU.
It's very well made, not as tight on space as I expected either.
The only issue is as you noted, you have to be really careful with your motherboard choice if you want to use all 8 bays for a storage array.
Another gotcha was making sure to get a CPU with integrated graphics; otherwise you have to waste your PCIe slot on a graphics card and have no space for the extra SATA ports.
I upgraded my home backup server a couple of months ago to a Minisforum N5 Pro, and am very happy with it. It only has 4 3.5” drive slots, but I only use two with 2x20TB drives mirrored, and two 14TB external drives for offsite backups. The AMD AI 370 CPU is plenty fast so I also run Immich on it, and it has ECC RAM and 10G Ethernet.
Are there any NAS solutions for 3.5" drives, homebrew or purchased, that are slim enough to stash away in a wall enclosure? (This sort of thing: https://www.legrand.us/audio-visual/racks-and-enclosures/in-... , though not that particular model or height.) I'd like to really stash something away and forget about it. Height is the major constraint, you can only be ~3.5" tall. And before anyone says anything about 19" rack stuff, don't bother. It's close but just doesn't go, especially if it's not the only thing in the enclosure.
> And before anyone says anything about 19" rack stuff, don't bother. It's close but just doesn't go, especially if it's not the only thing in the enclosure.
Do you have to use that particular wall enclosure thing? A 1U chassis at 1.7” of height fits 4 drives (and a 2U at ~3.45” fits 12), and something like a QNAP is low-enough power to not need to worry about cooling too much. If you’re willing to DIY it would not be hard at all to rig up a mounting mechanism to a stud, and then it’s just a matter of designing some kind of nice-looking cover panel (wood? glass in a laser-cut metal door? lots of possibilities).
I guess my main question is, what/who is this for? I can’t picture any environment that you have literally 0 available space to put a NAS other than inside a wall. A 2-bay synology/qnap/etc is small enough to sit underneath a router/AP combo for instance.
> Do you have to use that particular wall enclosure thing?
It's already there in the wall. All the Cat5e cabling in the house terminates there, so all the network equipment lives in there, which makes me kind of want to also put the NAS in there.
1-liter PCs (Tiny/Mini/Micro), or some N100-type build plus an external bay, are likely your best bet. If the space is really that small, you might have heat issues.
I researched a bunch of cases recently and the Jonsbo, while it looked good, came up as having a ton of issues with airflow to cool the drives. Because of this, I ended up buying the Fractal Node 804 case, which seemed to have a better overall quality level and didn't require digging around AliExpress for a vendor.
lol same. All my parts arrived except the 804. The supply chain for these cases appears to be imploding where I live (Hungary). The day after I ordered, it either went out of stock or went up by +50% in all the webshops that are reputable here.
I'm still a bit torn on whether getting the 804 was the right call, or whether the 304 would have been enough for a significantly smaller footprint and 2 fewer bays. Hard to tell without seeing them in person lol.
Are you satisfied with it? Any issues that came up since building?
I have been running my NAS on the 304 for 5 years. It fits natively 6 HDDs but I think it is possible to cram two more with a bit of ingenuity. It is tucked away in an Ikea cabinet that I have drilled the back of for airflow.
I recently got a used QNAP TS-131P for cheap, that holds one 3.5" drive for offsite backup at a friend's house. It's compact and runs off a common 12V 3A power supply.
There is no third-party firmware available, but at least it runs Linux, so I wrote an autorun.sh script that kills 99% of the processes and phones home using ssh+rsync instead of depending on QNAP's cloud: https://github.com/pmarks-net/qnap-minlin
I too was in the market recently for a NAS, downgrading from a 12-bay server because of YAGNI - it was far too big, too loud, ran hot, and used way too much energy. I was also tempted by the Jonsbo (it's a very nice case), but prices being what they are, it was actually better to get a premade 4-bay model for under $500 (batteries included, HDDs not). It's small, quiet, power efficient, and didn't break the bank in the process. Historically DIY has always been cheaper, but that's no longer the case (no pun intended).
I have built 2 NAS boxes that borrow ideas from his blog. One uses the Silverstone CS382 case (6x 6TB SAS) and the other uses a Topton N5105 Mini-ITX board (6x 10TB SATA). I'm quite happy with both.
ref: https://blog.briancmoses.com/2024/07/migrating-my-diy-nas-in...
Obligatory comment every time one of these threads comes up that Synology, sure, the hardware is a bit dated but… as far as set and forget goes:
I’ve run multiple Synology NAS at home, business, etc. and you can literally forget that it’s not someone else’s cloud. It auto updates on Sundays, always comes online again, and you can go for years (in one case, nearly a decade) without even logging into the admin and it just hums along and works.
Until you get the blue flashing light of death. Luckily I was able to source an identical old model off eBay to transfer the disks to.
What makes you think that Synology hardware is special in that sense?
Most quality hardware will easily last decades. I have servers in my homelab from 2012 that are still humming along just fine. Some might need a change of fans, but every other component is built to last.
It’s the software and stability of the software (between updates for example) that’s impressive.
I wonder how many consumer-level HDDs in RAID5 it will take to saturate a 10Gbps connection. My napkin math says that out of 1,250 MB/s we can achieve around 1,150 MB/s after network overhead, so it would take about 5 Red Pro/IronWolf Pro drives (reading at about 250-260 MB/s each) in RAID5 to saturate the connection.
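Redoing that napkin math explicitly; the per-drive throughput and overhead figures are taken from the comment above, and the assumption that sequential reads scale linearly with the number of drives reading in parallel is a simplification:

```python
# How many drives it takes to saturate a link, assuming sequential reads scale
# roughly linearly with the drives reading in parallel (a simplification; real
# RAID5/RAIDZ read scaling depends on layout and workload).
import math

link_gbps = 10
usable_mb_s = link_gbps * 1000 / 8 * 0.92   # ~8% protocol overhead -> ~1150 MB/s
per_drive_mb_s = 255                        # Red Pro / IronWolf Pro class

drives = math.ceil(usable_mb_s / per_drive_mb_s)
print(f"target ~{usable_mb_s:.0f} MB/s -> about {drives} drives reading in parallel")
# ~5 drives, matching the estimate above; add one more if you assume only the
# data portion of a RAID5 stripe contributes to reads.
```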
I thought RAID5 is highly discouraged
I can't remember the details, but wasn't that specifically for hardware RAID controllers? 2000s style.
I think for home use with mdadm or raidz2 on ZFS it's just gucci. It's cost effective.
What's the plan if your house burns down?
Ideally: off-site backup and archive-tier object storage in the cloud.
The loss of your vacation photos will be the least of your worries
TL;DR - please stop wasting tons of resources putting together new servers every year and turning this into yet another outlet for "I have more money than sense and hopefully I can buy myself into happiness". Just get old random hardware and play around with it and you'll learn so much that you will be able to truly appreciate the difference between consumer and enterprise hardware.
This seems awfully wasteful. One of the main reasons I built my own home server was to reduce resource usage. One could probably argue that the carbon footprint of keeping your photos in the cloud and running services there is lower than building your own little copy of a datacentre locally - and where would we be if everyone built their own server, then what? Well, I think that paying Google/Apple/Oracle/whoever money so that they continue their activities has a bigger carbon footprint than me picking up old used parts and running them on a solar/wind-only electricity plan. I also realize I'm going a bit overboard with this, and I'm not suggesting you vote with your wallet, because that doesn't work. If you want real change, it needs to come from the government. You not buying a motherboard won't stop a corporation from making another 10 million.
Anyway, except for the hard drives, all components were picked up used. I like to joke it's my little Frankenstein's monster, pieced together from discarded parts no one wanted or had any use for. I've also gone down the rabbit hole to build the "perfect" machine, but I guess I was thinking too highly of myself and the actual use case. The reason I'm posting this is to help someone who might not build a new machine because they don't have ECC and without ECC ZFS is useless and you need Enterprise drives and you want 128 GB of RAM in the machine and you could also pick up used enterprise hardware and you could etc...
If you wish to play around with this, the best way is to just get into it. The same way Google started with consumer-level hardware, so can you. Pick up a used motherboard, pick up some used RAM and a used CPU, throw them into a case and let it rip. Initially you'll learn so much that that alone is worth every penny. When I built my first machine, I wasn't finding any decent used former office desktop from HP/Lenovo/Dell, so I found a used i5 8500T for about $20, 8 GB of RAM for about $5, a used motherboard for $40, a case for $20 and a PSU for $30. All in all the system was $115, and for storage I used an old 2.5-inch SSD as a boot drive and 2 new NAS hard drives (which I still have, btw!). This was amazing. Not having ECC, not having a server motherboard/system, not worrying about all that stuff allowed me to get started.

The entry bar is even lower now, so just get started, don't worry. People talk about flipped bits as if they happen all day every day. If you are THAT worried, then yeah, look for a used server barebone or even a used server with ECC support and do use ZFS. But I want to ask: how comfortable are you making the switch 100% overnight without having ever spent any time configuring even the most basic server that NEEDS to run for days/weeks/months? Old/used hardware can bridge this gap, and when you're ready it's not like you have to throw out the baby with the bathwater. You now have another node in a Proxmox cluster. Congrats! The old machine can run LXCs, VMs, it could be a firewall, it could do anything, and when it fails, no biggie.
Current setup for those interested:
i7 9700t
64 GB DDR4 (2x32)
8, 10, 12, 12, 14 TB HDDs (snapraid setup and 14 TB HDD is holding parity info)
X550 T2 10Gbps network card
Fractal Design Node 804
Seasonic Gold 550watts
LSI 9305 16i
The author is not suggesting anyone should rebuild their NAS every year. Instead he is investigating which options make sense in year X. I remember reading his recommendations back when I built my NAS in 2021 but that doesn't mean I bought new hardware since then.
It's a bit patronizing to tell people what to do with their money. If you care more about the environment than enjoying technology, then go ahead and do what you suggest. If you want to be really green, how about giving up technology altogether? Go full vegan, abandon all possessions, and all that? Or if you really want to help the planet, have you considered suicide?
There's always more you can do. I'd rather enjoy my life, and not tell others how to enjoy theirs, unless it's impacting mine. Especially considering that the impact of a single middle-class individual pales in comparison to the impact of corporations and absurdly wealthy individuals. Your rant would be better served to representatives in government than tech nerds.
> It's a bit patronizing to tell people what to do
on this website?!
> with their money
in this economy?!