Radxa's SATA HAT makes compact Pi 5 NAS (jeffgeerling.com)
32 points by ingve 45 days ago | 36 comments



Be aware that there are N100 boards with 6x SATA, 2x M.2 NVMe, and 4x 2.5GBASE-T around this price point. For example:

https://www.aliexpress.us/item/3256806198066931.html

The N100 processor offers 2-3x the performance and lots of additional PCIe lanes compared to a Raspberry Pi 5 despite having half the TDP (6W vs 12W).


Not to ding that board at all (because it is quite nice), but adding on a power adapter (like a Pico PSU for $30) and a cheap NVMe SSD (a Kingston for $20) brings it up to $175, which is not that much more than the $127 entry for the 4GB Pi option, but it is closer to a $200 price point.

The overall build footprint will also be larger, and you'll have to pick up a set of SATA cables ($10 or so for four) and possibly a case or bracket to support the drives ($10-20).
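
Tallied up, a rough sketch (the ~$125 board price is inferred from the $175 figure above; the other numbers are the estimates mentioned here):

    N100 board (inferred)     ~$125
    Pico PSU                   ~$30
    Cheap NVMe boot SSD        ~$20
    4x SATA cables             ~$10
    Case/bracket for drives  ~$10-20
    Total                  ~$195-205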

It does have 16 GB of RAM (the Pi tops out at 8GB for $147), the CPU is faster, and it has 4x 2.5 GbE, which is a huge upgrade!

But I wouldn't quite put a mini ITX motherboard in the same price class, even though it's a better all-round board for a 'homelab in one box' type of setup.


The N100 will also last longer and be more reliable. You can run a regular Linux distro and it will be supported forever since it's x86. You will get better performance with the NAS too (6 drives can go pretty fast!)


I really would like one of those with dual 10 Gigabit Ethernet. My home lab needs are a little more advanced but my budget isn't. :-)


I know GoWin has started releasing some boxes with 10 and even 25 GbE adapters; it'd be nice if all manufacturers were able to add on 2.5 + 10 GbE, since modern low-end chips can finally get those speeds.


4x 2.5GbE? Wonder if they make a 1x 10GbE version as well? :)

Couldn't see one in quick searching though. :(


Sadly, the 8 TB Samsung QVO SSDs have jumped pretty badly in price since the beginning of this year. (At least here in Germany.)


That was crazy, too—last year I saw them get down to the $350 range (almost bought a couple more at that price)... and yesterday when I went to check on them I noticed they're up to $530!

When I bought my drives a couple years ago they were around $400. I wish there were more options for 4-8 TB SSDs that went for capacity over performance. Honestly I would be happy with some larger lower-cost SSDs that ran at 100-200 MB/sec just for silent, low power capacity.


Agreed. A low-bandwidth SSD with no rotational or seek latency would be a very good fit for home users and NAS storage devices. Add in the reduced power, heat, and noise, and it's even better.


Good luck finding a reasonably priced PCIe 3 drive with cache these days.

The only non-junk PCIe3 option that's even advertised here recently is the overpriced WD Red SN700.

Other than that it's just lemons and more power-hungry PCIe 4-5.

I still have better operational cost-performance from 5 year old drives compared to anything on the market today.


> The only non-junk PCIe3 option that's even advertised here recently is the overpriced WD Red SN700.

Those WD drives seem to have some real issues, at least with ZFS and btrfs. :(

https://github.com/openzfs/zfs/discussions/14793


That's the WD Black SN770, which is targeted towards gamers. The WD Red SN700 is the prosumer/SMB NAS range: lower performance and >2x the price.


Thanks. I tend to avoid WD so didn't know they were different. :)


Why would anyone put SSD at the end of a slow network attached storage device? It's a waste of money for very little space. A couple of HDDs would give you way more space, longer lifetime (writes), and cost incomparably less. It seems like this is just for aesthetics and kind of cargo culting.


To add to the other two comments: also because of noise. If you want a completely silent device, an HDD kind of ruins that compared to an SSD.

As always, there are trade-offs between all choices and not everything is a no-brainer :)


> Why would anyone put SSD at the end of a slow network attached storage device?

Far lower latency, and 2.5GbE isn't that bad if you're not already used to something faster.

SSD storage is even an improvement over 1GbE when you're working with lots of small files, or really any scenario that's substantially random access.


Windows Explorer runs like a dog if any mapped network drives are not on SSDs; it's especially painful if one of the HDDs spins down.

Putting mapped drives on even garbage QLC drives is a good QoL improvement; the remaining unmapped shares can stay on HDDs.


One reason is physical space and power draw. Not everything is a cost optimization.


Power. You need quite a solid PSU to withstand the power-up of a couple multi-TB drives.

Space. You trade off the physical space to get more storage space.

Random IO bandwidth. Until you have 6+ HDDs (not counting parity/mirrors), there is no point even talking about random IO bandwidth.

> longer lifetime (writes)

To each their own. You need a lot of writes to deplete the TBW of a modern drive, even MLC/QLC ones.

> kind of cargo culting

I would say that's not the right term here. But hey, dismissing everyone who uses SSDs in a NAS as doing it "because reasons" is totally the right way. Right?


In terms of power for an equivalent amount of storage the SSDs require much more (SSD and HDD use about the same amount of power). But you're talking about transient spin-up power, of course, which is significant and might require an external power supply if you have more than a handful of HDDs. But you don't even need 4 HDDs. The amount of money saved by the SSD's average power draw versus the HDD's average power draw is insignificant and would not total up to anything even over the SSD's lifetime (which is limited compared to an HDD's).

Also, if you turn off the device and let the data sit without power, the SSDs' charge traps will leak and the data will bit rot (as in an off-site scenario).

The physical space required is not significantly different. They're both small-form-factor things; even if HDDs are 2 to 3 times the volume, it's 2 to 3 times an insignificant amount. Plus you need only about 1/5 the HDDs to match SSD storage.

Random IO bandwidth doesn't matter at all when you're putting it at the end of an ethernet connection to an rpi. You could certainly benefit from it if you put them in your actual computer though.

By cargo culting I meant the entire concept of a NAS for home use in the first place. Just put the drives in your desktop PC and it'll be much, much faster, much cheaper, and significantly less likely to break. The power cost of leaving your desktop on for the lifetime of such a thing will be less than the extra cost of the NAS parts.


> In terms of power for an equivalent amount of storage the SSDs require much more (SSD and HDD use about the same amount of power).

O RLY?

    Toshiba X300
                  4TB     16TB    10TB      8TB
    Operating W   6.81    6.91    9.48      8.41
    Idle      W   4       4.03    7.22      5.61

    Samsung 870 QVO     8TB
    Operating W         2.4/3.3 (R/W)
    Idle      W         0.045
> But you're talking transient spin-up power of course which is significant and might require an external power supply which costs money

You need an external PSU for 3.5" HDDs because they require 12V and 5V. The 870 QVO only requires 5V.

> The physical space required is not significantly different

Come on:

    3.5" HDD: 26 x 147 x 100 = 382200 mm3 (Toshiba X300)
    2.5" SSD:  7 x  70 x 100 =  49000 mm3 (Kingston A400)
    382200 / 49000 = 7.8
870 QVO is listed as "100 X 69.85 X 6.8 (mm)" so essentially the same

> Random IO bandwidth doesn't matter at all when you're putting it at the end of an ethernet connection to an rpi.

Ah, yes, because everyone is only doing sequential IO to NAS, like you?

Sorry, but you're arguing in bad faith here, and without even a bit of basic knowledge.


Yes, really. What you're showing is that the Toshiba X300 8TB HDD uses about 4 watts more than the SSD, idle or operating. Notably you don't separate read vs write, because SSD writes actually use more power than HDD writes, and 20-watt SSD writes do happen. You chose to characterize it by relative amount, ignoring the insignificant absolute amounts. In the real world, 4 extra watts of power just doesn't matter. If you have 3 8TB X300 HDDs, that's about $20 in power per year over the SSDs. Not really something to base your entire decision around.

In comparison, 3 8TB Samsung 870 QVOs cost $1500 total, or ~$1000 more than 3x $150 X300 8TBs. So it will take 50 years for the HDDs' $20/yr extra energy cost to close that gap.
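
Roughly, for reference (assuming ~5 W extra per HDD, which is in line with the idle figures quoted above, at $0.15/kWh):

    5 W extra per HDD x 3 drives = ~15 W
    15 W x 8760 h/yr = ~131 kWh/yr
    131 kWh/yr x $0.15/kWh = ~$20/yr
    3 x ($500 - $150) = ~$1000 price gap
    $1000 / ($20/yr) = ~50 years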

> Ah, yes, because everyone is only doing sequential IO to NAS, like you?

This is a mischaracterization (or misunderstanding) of my statement. I said it doesn't matter because behind an rpi ethernet link it literally doesn't. The latency of the link and the rpi's computational power dominate. You could have an Optane drive behind the rpi ethernet link and it'd be about the same as something fairly slow. SSDs in an rpi 1 gig ethernet NAS are like using sports cars with trailers to haul gravel. If it were connected via InfiniBand it might make sense.

I don't think you're making your arguments in bad faith but I do think you're a bit confused about what 4 watts is. And the physical volume thing is the same: very small things with large relative differences make for small absolute differences.


What's the $/kWh value you're basing your results on?


I pay $0.111 per kilowatt-hour here in Wisconsin, USA, but I calculated for $0.15 per kilowatt-hour. Even at $0.30 per kilowatt-hour, that's still 25 years of operation before the HDDs' extra power cost equals the SSDs' extra capital cost.
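
Spelled out with the same ~15 W delta and ~$1000 price gap I assumed above:

    at $0.111/kWh: 131 kWh x $0.111 = ~$15/yr -> $1000 / $15 = ~70 years
    at $0.15/kWh:  131 kWh x $0.15  = ~$20/yr -> $1000 / $20 = ~50 years
    at $0.30/kWh:  131 kWh x $0.30  = ~$39/yr -> $1000 / $39 = ~25 years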


True, but the NAS can serve multiple devices, including the TV.


A network attached storage computer is just a computer with limited capabilities and size. A normal computer can do it too.


And limited noise & power generation, and it's tucked out of the way somewhere


> Power. You need quite a solid PSU to withstand the power-up of a couple multi-TB drives.

This is overstated. You need a lot more than "a couple" of drives to trouble even the most modest ATX power supply. Now if you were trying to run this off the RPi's power I would agree. But since this requires a separate PSU anyway it's a non-issue.


I'll add two more:

Noise.

Feeling. Solid state devices have always felt more durable to me. I know I won't do many writes on the NAS, mostly reads.

And then there's "rare" access time: if you don't want to spend power keeping disks spinning, you have to wait for them to spin up when you access your NAS.


If the raspberry pi had ECC RAM (and maybe a faster NIC or multiple NICs) this would be an excellent solution. Are there any (hobbyist/non-industrial $$$$) ARM NAS boards with ECC?


The Helios64 looked really promising until it was abandoned.

https://blog.kobol.io/2021/08/25/we-are-pulling-the-plug/

https://wiki.kobol.io/helios64/intro/

An RK3588 version of that with some more PCIe connectivity (another m2 slot or two?) would have been pretty much ideal.


ASRock Rack is selling an Ampere motherboard... full size, and over $1k including an Ampere CPU. But that's about it for now.


... of course this drops after I spent some righteous clams on a Synology NAS.

For a NAS newbie, how would it compare to a DS224?


Honestly Synology's software and support is going to be nicer than what you get with something like OMV (and TrueNAS isn't available on the Pi).

The hardware is fairly similar in terms of what you'll get for performance and efficiency, though you get hot swap bays on the Synology, which is nice. Plus a case... Radxa hasn't told me when they're releasing the case, just that they're working on producing it.


A Synology NAS just works; it has a nice UI and lots of plugins. However, it's far more expensive and the hardware can be underwhelming. It's less flexible as well.


"Righteous clams" - going to start saying this immediately along with "What's the clam-age" and any other variants I can think of



