Rampage VI Optimal Storage Configuration

ratzofftoya
Level 7
Hey all,

Excited to be installing my Rampage VI this weekend. I mostly game, edit videos, and work in CAD on my computer. What do you think is the optimal storage configuration that uses the Rampage's capabilities (Hyper M.2 card, DIMM.2, VROC?) to their fullest? I plan on using two SSDs in RAID for my scratch disk and four HDDs in RAID for long-term project storage.

But where do I stick all my m.2 drives and how do I configure them for the best OS and game/application drive setup?

Thanks!

Brighttail wrote:
It is really sad that the 4k numbers don't benefit from 8 m.2 drives. 😞


Given they are the most important for most users, it is sad. But for some usages those other numbers make a huge difference; when I say users I mean the types of things they do with the rig. Some other interesting facts on the Intel 900P: you never need to factory reset the drive, ever. Most NVMe drives slow down once they start getting filled; the 900Ps don't. Most important, though, are the IOPS, which aren't shown in DiskMark. I haven't found a good tool to show those, but I saw one in a video on the AMD platform that looked like a speedometer, and the IOPS were crazy high on a RAID 0 Hyper x16 with 4 drives.

My builds with unit and integration tests of a huge Visual Studio solution with 80+ projects take about 6:40. The fastest reported by others on this project (and there are many folks) is around 11+ minutes, and most folks are running 20+ minutes. So for me it is a huge difference.
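
If you want a rough number yourself, here is a quick Python sketch for 4K random-read IOPS; not a real benchmark tool. The path is a placeholder, the test file needs to be much bigger than RAM so the OS cache doesn't inflate the result, and it only measures queue depth 1:

import os, random, time

# Rough 4K random-read IOPS estimate at queue depth 1.
# PATH is a placeholder: point it at a file you created yourself,
# much larger than RAM, on the drive you want to test.
PATH = r"D:\iops_test.bin"
BLOCK = 4096
DURATION = 10  # seconds

blocks = os.path.getsize(PATH) // BLOCK
fd = os.open(PATH, os.O_RDONLY | getattr(os, "O_BINARY", 0))
ops = 0
end = time.perf_counter() + DURATION
while time.perf_counter() < end:
    os.lseek(fd, random.randrange(blocks) * BLOCK, os.SEEK_SET)
    os.read(fd, BLOCK)
    ops += 1
os.close(fd)
print(f"~{ops / DURATION:,.0f} IOPS (4K random read, QD1)")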

BTW, I think the issue is partly related to spanning 2 controllers; RAID itself also causes slowness on 4K.

IMHO most users should stick with a single drive, no RAID.
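
Rough math on why, assuming ~100 µs per 4K read at queue depth 1 (plug in your own drive's latency):

# At QD1, 4K throughput is latency-bound, and striping across more
# drives does not shorten the latency of a single 4K request.
LATENCY_US = 100  # assumed per-request latency in microseconds
iops = 1_000_000 / LATENCY_US
mbps = iops * 4096 / 1e6
print(f"QD1: ~{iops:,.0f} IOPS = ~{mbps:.0f} MB/s, whether 1 drive or 8")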

Notice my boot drive is a single Samsung 960 Pro.

One last thing: the faster you OC, the faster those numbers get, even the 4K ones. This is because the 3 VMD controllers are on the CPU.

CharlieH wrote:
Given they are the most important for most users, it is sad. [...] Notice my boot drive is a single Samsung 960 Pro.


Thanks, Charlie. Do you stick that 960 Pro on the DIMM.2, or just in the m.2 slot under the armor?

ratzofftoya wrote:
Thanks, Charlie. Do you stick that 960 Pro on the DIMM.2, or just in the m.2 slot under the armor?


I run it on the DIMM.2, so as you can see in the IRSTe pic it is on VMD 3. That way everything runs under VROC. I disable my PCH SATA in the BIOS.

You might try the native PCH slot (the M.2 slot under the armor). Given it's one NVMe drive, the bandwidth should be fine (I think the PCH can handle the bandwidth of 2 NVMe drives without saturating). I haven't had time to try that. It would offload the CPU usage for the OS drive, so the more I think about it, this might be a better option. I'll have to try it; it would probably allow multiple simultaneous reads/writes given they're really two different controllers (well, 4 controllers: PCH, VMD1, VMD2, VMD3). Usually the more controllers the better. But if you OC the CPU, the VROC VMD controller might give better performance. This all needs to be vetted out.
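
One quick-and-dirty way to vet the multiple-controller question: read a big file on each drive at the same time and see if the per-drive speed holds up versus reading them one at a time. A rough Python sketch (the paths are placeholders for large files on the two drives you're comparing):

import threading, time

# Placeholder paths: one large file per drive, each much bigger than
# RAM so the OS cache doesn't skew the result.
FILES = [r"C:\big_on_pch.bin", r"D:\big_on_vmd.bin"]
CHUNK = 1 << 20  # 1 MiB sequential reads

def read_all(path, results):
    total, start = 0, time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    results[path] = total / (time.perf_counter() - start) / 1e6  # MB/s

results = {}
threads = [threading.Thread(target=read_all, args=(p, results)) for p in FILES]
for t in threads:
    t.start()
for t in threads:
    t.join()
for path, mbps in results.items():
    print(f"{path}: {mbps:,.0f} MB/s while both run concurrently")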

BTW, the DIMM.2 can be configured to run on the VROC VMD controller (this is the default) or it can be set to run on the PCH. So you can still use the DIMM.2 for PCH and not have to mess with taking the MB apart to get to that other NVMe slot. But once again, the difference between the native PCH slot and DIMM.2-on-PCH performance probably needs to be vetted out too.

For those not wanting to use a Hyper x16 card, the DIMM.2 does RAID on PCH or VROC.

Very important: there are two different IRST drivers, depending on whether you are using PCH or VROC. PCH needs the non-enterprise drivers, the regular IRST drivers, which I recommend getting from Intel's site:
https://downloadcenter.intel.com/product/55005/Intel-Rapid-Storage-Technology-Intel-RST-
And I posted the links for the VROC IRSTe drivers I use in a previous post above (there is a newer version, v5.3.1.1020, that I haven't tried yet as I write this post).
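
If you're ever unsure which driver actually loaded, you can dump the installed storage driver versions from Python via PowerShell (a quick sketch; the name filter is just my guess at matching the Intel entries, adjust as needed):

import subprocess

# Assumes Windows with PowerShell available; lists driver names/versions
# so you can confirm whether the regular RST or enterprise RSTe loaded.
query = ("Get-CimInstance Win32_PnPSignedDriver | "
         "Where-Object { $_.DeviceName -like '*RAID*' -or "
         "$_.DeviceName -like '*Volume Management Device*' } | "
         "Select-Object DeviceName, DriverVersion")
out = subprocess.run(["powershell", "-Command", query],
                     capture_output=True, text=True)
print(out.stdout)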

And one last thing: VROC RAID 0 and 1 are free; 5 & 6 cost/need the key.

CharlieH wrote:
Given they are the most important for most users, it is sad. [...] One last thing: the faster you OC, the faster those numbers get, even the 4K ones.


Awesome information here.

Mother of God at those sequential rates though - truly impressive!

1080 Ti's (EK blocks) in SLI and a Sound Blaster ZxR prevent me from "becoming the ultimate storage IO bad-ass" like yourself ; )
| Corsair 1000D | i9-10980XE @ 4.8 GHz 24/7 (All-Core) | Rampage VI EE |
| 32GB (4x8) Corsair Dominator RGB 3600 Kit (16-16-16-32 T1) |
| EVGA RTX 3090 FTW3 Ultra with EK block | Corsair AXi1600 PSU |
| 3x Samsung 970 PRO NVMe | Custom Liquid Cooling (Dedicated CPU/EK Monoblock)

Protocol wrote:
Awesome information here.

Mother of God at those sequential rates though - truly impressive!

1080 Ti's (EK blocks) in SLI and a Sound Blaster ZxR prevent me from "becoming the ultimate storage IO bad-ass" like yourself ; )


I love SLI'd GPUs; I have a different rig that has that. This rig is for my work and I run VMs on it, so the x8 1080 GPU I have is fine. I am never physically on the rig except for low-level updates like the BIOS. I use Hyper-V Manager to access the VMs from my laptop or my other SLI'd rig with 2 monitors.

EK makes some great products, though my SLI'd rig doesn't have its GPUs under water. I used to do that and loved the Danger Den full nickel-plated blocks. They were very heavy, though. And with all the LED stuff, I would think you need acetal to get the LEDs on the blocks anyway.

When I water cool I use solid black Tygon tubing like this:
http://www.performance-pcs.com/tygon-r3400-1-2-id-3-4-od-uv-resistant-tubing-black.html
a blacked-out reservoir (I tape over a clear EK one), distilled water only, and silver coils like these:
https://www.amazon.com/gp/product/B00A66HMRC/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1
This keeps the light out. I also like the Iwaki RD-30 pump, like this one:
http://www.performance-pcs.com/iwaki-rd-30-24-vdc-318gph-1200-l-hr-canned-motor-pump.html
With the right voltage (24 V) you can get some very high throughput. The rig we are talking about, the one with the RAID 0 x8 Intel 900Ps, has this exact configuration for my CPU, and my idle temps are 23-25°C. I haven't checked load temps because I haven't benched this rig. This loop would handle 2 GPUs in addition to the CPU fine; I would just add another radiator.

The reason is I am lazy and never want to have to do maintenance on my loop. When I run this configuration I never touch the loop; it runs for years. It definitely isn't for show, though, since it's all blacked out.

GhostWorks
Level 9
Hi all,

I'm looking into upgrading my MB (Rampage V Extreme) along with the CPU, cooler, and storage, and giving a friend my current stuff so he can build a machine around it for his gaming room/office and weekly gaming.

I'm looking into storage options and have heard of issues with VROC and the Samsung M.2 970 Pro...
I'm hoping to have a bootable system drive on an Intel Optane 900P 480GB PCIe x4 card, and a 2TB game drive via 2 x 1TB Samsung 970 Pros on the DIMM.2 card in VROC RAID 0, to get 2TB that is faster than PCH would be.

The scope is the fastest possible system drive and the fastest possible 2TB game drive.
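
My rough math on why I think the VROC route should beat PCH RAID for the game drive (back-of-the-envelope; the ~3500 MB/s per drive and ~3.9 GB/s DMI ceiling are ballpark figures I'm assuming, not measured):

# Back-of-the-envelope sequential-read ceilings (assumed ballpark specs).
DRIVE_MBPS = 3500   # approx. 970 Pro sequential read
DMI3_MBPS = 3940    # approx. DMI 3.0 (x4 Gen3) uplink shared by the PCH
N_DRIVES = 2

vroc_ceiling = DRIVE_MBPS * N_DRIVES                  # lanes off the CPU
pch_ceiling = min(DRIVE_MBPS * N_DRIVES, DMI3_MBPS)   # capped by DMI
print(f"VROC RAID 0: ~{vroc_ceiling} MB/s possible")
print(f"PCH RAID 0:  ~{pch_ceiling} MB/s (DMI-limited)")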

Thanks
Corsair 900D