ROG Strix Z690-E Gaming: terrible write performance on RAID 5

sververda
Level 8
Hey all,

I'm getting terrible write performance on my RAID 5 setup with the following hardware:

The system has an i7-12700K processor and 32GB of RAM in a dual kit (Corsair CMK32GXM2A4800C40).
I have an NVMe drive (Corsair MP600 Core, PCIe link width x4, PCIe link speed 8000 MB/s) as my non-RAID system disk.
The RAID 5 array is 4x WD Red Plus WD40EFZX HDDs, SATA 3, 6 Gb/s. Data disk cache is enabled on each disk; on the RAID volume, Write-cache buffer flushing is enabled and the cache mode is Read Only.

What I already noticed while copying data from my old SATA disks (temporarily connected to the remaining 2 SATA ports) is that I got 70 to 80 MB/s for a couple of seconds, after which it dropped to 15 to 20 MB/s for the remainder of the copy (even with very large files).

So now I started doing some tests from the NVMe to the RAID 5 and vice versa.
While copying 4 files of around 1.1GB from the NVMe to the RAID 5, I got around 1 GB/s for the first 2 seconds, after which it again dropped to 20 MB/s.
When copying 9 files of around 2.2GB each from the RAID 5 to the NVMe, I got around 380 MB/s on average, with no peak at the beginning and no drop after the start.
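In case it helps anyone reproduce the numbers outside of Explorer's copy dialog, something like the following could time a sustained write to the array (just a sketch; the drive letter, file size, and block size are placeholders, not my actual setup):

```python
# Rough sequential-write timing sketch (not a proper benchmark tool).
# The target path, file size and block size are placeholders -- point
# TARGET at the RAID 5 volume and make SIZE_MB large enough to outlast
# any initial burst/cache behaviour.
import os
import time

TARGET = r"E:\bench_test.bin"   # hypothetical drive letter for the RAID 5 volume
SIZE_MB = 8192                  # write ~8 GiB in total
BLOCK = 1024 * 1024             # 1 MiB per write call

buf = os.urandom(BLOCK)
start = time.perf_counter()
with open(TARGET, "wb", buffering=0) as f:
    for _ in range(SIZE_MB):
        f.write(buf)
    os.fsync(f.fileno())        # make sure the data actually reaches the disks
elapsed = time.perf_counter() - start
print(f"{SIZE_MB / elapsed:.1f} MB/s sustained write")
os.remove(TARGET)
```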

I was not expecting huge speeds, especially from the old SATA disk, but I was expecting more from the NVMe to the RAID 5... the overall drop to 20 MB/s is hugely disappointing in my opinion and is FAR below what this setup should be capable of.

I found a Synology NAS test with the same drives on a 10Gb Ethernet connection that achieved around 250MB/s write speeds in a continuous loop of a 1GB file.
I know there is a difference between hardware and software RAID, but with the latest chipset and processor the drop still should not be THAT huge.

What am I missing here to get better performance?

Forgot to mention: BIOS is on 0811, OS is Windows 10 Pro 64-bit.
Intel RST VMD driver: 19.0.0.1067

xeromist
Moderator
The Synology test might be using different RAID options as well. Was it also for 4 drives in RAID 5? Did it mention if they were using similar caching?

Also, remember that RAID 5 does parity calculations, which will slow things down. If you don't *need* the extra space I would run them as RAID 10 (or 1+0). That would be way faster for both reads & writes while still allowing for a single disk failure. The downside, of course, is that usable capacity is half the raw capacity rather than n-1 disks.
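To put rough numbers on that trade-off (just the textbook RAID arithmetic, nothing measured on this particular board):

```python
# Back-of-the-envelope comparison for 4 x 4 TB drives (textbook RAID math,
# not measurements from the Z690 controller).
drives, size_tb = 4, 4

raid5_capacity = (drives - 1) * size_tb    # n-1 disks usable -> 12 TB
raid10_capacity = (drives // 2) * size_tb  # half the disks usable -> 8 TB

# Classic small-write penalty: disk I/Os needed per random write
raid5_write_ios = 4    # read old data + read old parity + write data + write parity
raid10_write_ios = 2   # write data + write its mirror

print(f"RAID 5 : {raid5_capacity} TB usable, ~{raid5_write_ios} I/Os per small write")
print(f"RAID 10: {raid10_capacity} TB usable, ~{raid10_write_ios} I/Os per small write")
```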

Yes, the Synology test was also with 4 of the exact same drive type (WD Red Pro), but with 2TB drives instead of the 4TB ones, in a RAID 5 configuration.

I do realize that the parity calculation will of course make writes slower than reads, but it should not be by this much.

I chose RAID 5 because I need the amount of space and I'm not expecting HUGE speeds, but certainly higher than what I'm getting now. The parity just gives it a bit more redundancy, and of course I do keep backups of the most important data. For high-speed operations I will still use the NVMe; the RAID 5 is just for storage, but I'm convinced that the current 20 MB/s is way, way below spec.

xeromist
Moderator
I'm not saying it can't be faster. I'd encourage you to try different options such as disabling caching entirely if you can. I haven't used desktop motherboard RAID in years so I'm not sure how much you can tweak but sometimes options you think should make things better actually do the opposite.

I'm not sure what you are expecting from bottom-of-the-line WD Red HDDs.

You need to disable Write-cache buffer flushing and enable the Write-back cache mode.

RavenMaster wrote:
I'm not sure what you are expecting from bottom-of-the-line WD Red HDDs.


Are these SMR drives? I know WD failed to disclose the downgrade until people called out the BS but I believe it should be labeled now. Either way, my understanding is that SMR doesn't start to kill performance until there is a heavier transactional load. A straight write to empty disks shouldn't be impacted. I don't know if I'll ever trust WD again after all that but I don't think it has anything to do with sververda's concerns.

xeromist wrote:
Are these SMR drives? I know WD failed to disclose the downgrade until people called out the BS but I believe it should be labeled now. Either way, my understanding is that SMR doesn't start to kill performance until there is a heavier transactional load. A straight write to empty disks shouldn't be impacted. I don't know if I'll ever trust WD again after all that but I don't think it has anything to do with sververda's concerns.


The WD drives are actually CMR, not SMR. It's hard to say what to expect from these drives, as there are no real tests I can compare against for the motherboard RAID on the Z690 chipset. So my expectations are very loosely based on what I saw in the Synology tests with the same drives (they actually tested both the CMR and the SMR versions), minus a slight performance drop (from what I have read so far) due to the software RAID. But to be honest, I would already be really happy if I could achieve even half the write speeds that I saw on that Synology.

Thanks so far for the answers. I do know, of course, that other RAID options like RAID 10 would give better performance... and hey, who knows, in the end I may go for a different setup, but right now I prefer the capacity over the write speed. Read speeds from this setup while copying files back to the NVMe are satisfactory for me, and overall I will do many more reads than writes on the RAID 5 (it was just very annoying while moving all my data from the older disks to the array).

I will certainly do some more tests on the RAID setup with the various caching options disabled and will let you know the results.

I am curious to know if you ever got this issue resolved. I have the exact same issue. I have 1TB and 2TB Samsung 980 Pro NVMe drives and a RAID 5 consisting of 3x12TB HGST drives; transferring files TO the NVMe drives performs as I would expect, but transferring FROM the NVMe drives to the RAID 5 gives the same 25-50MB/s result.

For the record, I have the Z690-A Prime board... I have had an open ticket with ASUS for over 2 months now. They have swapped the board for me, but I get the same result.

I just wanted to follow up on this: I think I just figured out my own problem. I changed the RAID 5 stripe size from 128k (the ASUS default, I guess) down to 64k, and now my write performance is 400-450MB/s versus the 32-50MB/s I was getting.
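For anyone curious why the stripe size change could matter this much, here is a toy calculation of full-stripe writes versus read-modify-write (this is my own reasoning with an assumed 128 KiB write request size, not anything taken from Intel's RST documentation):

```python
# Toy illustration of full-stripe writes vs read-modify-write on RAID 5.
# The 128 KiB request size is an assumption about typical copy-engine
# writes, purely for illustration.
drives = 3                       # the 3x12TB array from the post above
request_kib = 128                # assumed size of each write request

for stripe_kib in (128, 64):
    full_stripe_kib = (drives - 1) * stripe_kib   # data held by one full stripe
    if request_kib % full_stripe_kib == 0:
        # Whole stripe rewritten: parity is computed from the new data alone.
        ios = drives             # one write per member disk
        mode = "full-stripe write"
    else:
        # Partial stripe: old data and old parity must be read back first.
        ios = 4                  # read data, read parity, write data, write parity
        mode = "read-modify-write"
    print(f"{stripe_kib:>3} KiB stripe -> {full_stripe_kib} KiB full stripe, "
          f"{mode}, ~{ios} disk I/Os per {request_kib} KiB request")
```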