Bootable VROC RAID 10, VMD domains, and PCIe lanes

User5437
Level 7
Intended Hardware/RAID Setup:

As I understand it from this picture, bootable RAID arrays support a maximum of 4 drives because they must reside within a single VMD domain, while non-bootable arrays can span multiple VMD domains.

[attached diagram]

I don't know how VMDs are distributed on this board so I don't know if the arrays can be configured in this way:

Onboard:
M.2_1 (lower M.2 slot, too short for a 905P) Data drive 7

M.2_2 (upper M.2 slot, shares bandwidth with the U.2 port and may be PCH only) No drive


DIMM.2:
M.2_1 Boot drive 1

M.2_2 Boot drive 2


M.2 x16 Card 1:
M.2_1 Boot drive 3

M.2_2 Boot drive 4

M.2_3 Data drive 1

M.2_4 Data drive 2


M.2 x16 Card 2:
M.2_1 Data drive 3

M.2_2 Data drive 4

M.2_3 Data drive 5

M.2_4 Data drive 6


Will this configuration work?
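
For anyone who wants to sanity-check a layout like this against the single-VMD-domain rule, here is a minimal sketch; the slot-to-VMD mapping in it is purely hypothetical, since the real mapping depends on how the board wires its CPU-attached M.2/U.2 ports:

```python
# Minimal sketch of the VMD rule described above: a bootable array must stay
# within one VMD domain and (per the picture) holds at most 4 drives, while
# non-bootable arrays may span domains.
# NOTE: the slot-to-VMD mapping below is purely hypothetical - the real
# mapping depends on the board.

planned_arrays = {
    # name: (bootable?, [(slot, vmd_domain), ...])
    "boot_raid10": (True,  [("DIMM.2_1", 0), ("DIMM.2_2", 0),
                            ("Card1_M2_1", 1), ("Card1_M2_2", 1)]),
    "data_raid10": (False, [("Card1_M2_3", 1), ("Card1_M2_4", 1),
                            ("Card2_M2_1", 2), ("Card2_M2_2", 2),
                            ("Card2_M2_3", 2), ("Card2_M2_4", 2)]),
}

for name, (bootable, members) in planned_arrays.items():
    domains = {vmd for _, vmd in members}
    ok = not bootable or (len(members) <= 4 and len(domains) == 1)
    print(f"{name}: {len(members)} drives, {len(domains)} VMD domain(s) -> "
          f"{'OK' if ok else 'NOT bootable as planned'}")
```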

Also, the CPU supports 44 PCIe lanes and there will be an x16 graphics card; how many 970 EVOs would need to be dropped to keep the graphics card running at a full x16?

GrievousAngel
With the ASUS X299 Deluxe Ed 30 + i9-10900X CPU: What is the better pathway to RAID 0?

I've invested in an Intel VROC Standard MM951605 key, an ASUS Hyper M.2 x16 Gen 4 card, and (4) Samsung EVO Plus 250GB M.2 drives so far. NO RAID YET. Wrong M.2 modules, I've learned.

Suggestions please? Looking at the HighPoint 7103 bootable M.2 PCIe x16 card now.

This is my second similar post.

GrievousAngel wrote:
With the ASUS X299 Deluxe Ed 30 + i9-10900X CPU: What is the better pathway to RAID 0? [...]

OK, there are a few threads on this topic on this forum, so I'll try to summarize exactly what works.

Motherboards based on the X299 chipset, like the Rampage VI Extreme or ASUS X299 Deluxe:
1. Intel VROC on the X299 chipset works ONLY with high-end Intel drives such as Optane 905P or 900P drives in M.2 or U.2 format, or drives from Intel's Data Center enterprise line. See here: https://www.intel.com/content/dam/su...onfigs_6-3.pdf.
2. ONLY the "Intel SSD only" key, called VROCISSDMOD (MM#956822), works on motherboards with the X299 chipset; the two other types of keys do NOT work on X299 boards! The VROCISSDMOD key can be bought for $20 here: https://www.evga.com/products/product.aspx?pn=W002-00-000066
3. With this key installed, you can create RAID arrays at level 0, 1, 10, or 5 (set up in the BIOS for bootable arrays, or in Windows using RSTe for non-bootable arrays).
4. If you plan to ONLY use RAID 0, like putting two 905Ps in RAID 0, you DO NOT need any key at all. For all other RAID levels you need the MM#956822 key, per #3.

For motherboards with the enterprise chipsets like C422 or C621 and a Xeon processor, like the ROG Dominus Extreme:
5. VROC works with the specific list of NVMe drives here: https://www.intel.com/content/dam/su...onfigs_6-3.pdf - including certain non-Intel drives, BUT only the ones on the list.
6. You can use the Premium (MM#951606) key to create a RAID array at level 0, 1, 10, or 5 with these drives.
7. With the Standard key (MM#951605) you can only create RAID 0, 1, or 10.

That is the net of it all.
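
To keep the key and RAID-level combinations straight, here is a small lookup sketch that just restates the summary above (my restatement, not an official Intel matrix, so verify against the compatibility PDF):

```python
# Lookup restating the key/RAID-level summary above.
# Values mirror the post, not an official Intel compatibility matrix.

vroc_support = {
    # (platform, key): (allowed RAID levels, drive restriction)
    ("X299", "no key"):                  ({0},           "high-end Intel SSDs only"),
    ("X299", "VROCISSDMOD MM#956822"):   ({0, 1, 5, 10}, "high-end Intel SSDs only"),
    ("C422/C621", "Standard MM#951605"): ({0, 1, 10},    "drives on Intel's VROC list"),
    ("C422/C621", "Premium MM#951606"):  ({0, 1, 5, 10}, "drives on Intel's VROC list"),
}

def can_build(platform: str, key: str, level: int) -> bool:
    levels, _restriction = vroc_support[(platform, key)]
    return level in levels

print(can_build("X299", "no key", 5))                    # False - needs VROCISSDMOD
print(can_build("C422/C621", "Standard MM#951605", 10))  # True
```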

I have this VROCISSDMOD installed in a Rampage VI Extreme and a Rampage VI Extreme Encore and it works well:
- VROC RAID scales very well with the number of cores (I have a 10980XE) and CPU speed, so if you OC you'll get even better IOPS and throughput on top of the CPU-speed gains.
In my experience, once you put high-performance compatible Intel drives (like the 905P) in VROC, you get close to theoretical max throughput and close to 100% linear scaling. One thing, though: Windows 10 can only do about 1 million IOPS (I think that's the cap), so if you put four 905Ps in RAID 0 you start hitting Windows' limitations and the IOPS don't scale 100% at the top end.

The storage speeds are so fast that the bottleneck moves to other parts of the system (I think Windows 10 is not designed to take 100% advantage of all this bandwidth and throughput).
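
As a rough sanity check on the "close to theoretical max" claim, using Intel's quoted sequential-read figure of roughly 2.6 GB/s for the 905P (an approximation on my part):

```python
# Back-of-the-envelope estimate for a 4-drive 905P RAID 0.
# The per-drive figure is the quoted sequential read (~2.6 GB/s) and is
# approximate; real results depend on queue depth, CPU clock, and OS overhead.

seq_read_per_drive_gbs = 2.6  # GB/s, Optane 905P sequential read (approx.)
drives = 4

print(f"Theoretical RAID 0 sequential read: ~{seq_read_per_drive_gbs * drives:.1f} GB/s")
# ~10.4 GB/s - at this point Windows and the CPU, not the drives,
# become the limiting factor.
```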

Nice problem to have 🙂

Given my experience, it may just be an exercise in saying "I did it!"

Is the performance increase remarkable?

Jahmen
Level 7
I was going to ask how you could physically get two Hyper M.2 cards onto the board's PCIe slots along with the graphics card. Then I saw this was posted back in 2017, back when graphics cards were smaller.

My ROG Rampage VI Extreme Encore motherboard with the ASUS 2080 Ti graphics card can't handle the physical configuration for 2 Hyper M.2 cards.
Best is the Hyper card in the 1st PCIe slot (1) for the x16/x8 lanes and the graphics card in the 2nd PCIe slot (2).
The graphics card won't fit in the 3rd, bottom PCIe slot (3).

You can't put the graphics card in the top 1st slot (1) with a Hyper M.2 card under it in the 2nd PCIe slot (2) position, because it covers the intake fans for the graphics card.

Does anyone know how to set up a hardware-based VROC key module bootable RAID 10 on the CPU with the Hyper M.2 card and 4x 7600p series SSDs?
Yeah, I figure probably not. I've been asking ASUS Support to give me that information for a couple of weeks now.

Feel free to help out and post those steps if you know them, and if you do, please include the driver & application file steps.

The ASUS RAID Guide has some steps, but not those for the HW key BIOS setup or the driver & application files.

There appears to be no consensus yet on exactly which of those key files (RSTe or VROC) to use for the HW key.
Go do crazy some place else, we're all stocked up here!

Jahmen wrote:
I was going to ask how you could physically get two Hyper M.2 cards onto the board's PCIe slots along with the graphics card. [...] Does anyone know how to set up a hardware-based VROC key module bootable RAID 10 on the CPU with the Hyper M.2 card and 4x 7600p series SSDs? [...]

So I have this setup:
ASUS ROG RAMPAGE VI EXTREME
1x ASUS HYPER M.2 X16 CARD V1
4x Intel Optane 900P connected to the ASUS HYPER M.2 X16 CARD
1x EVGA 2080 Ti graphics card
1x Intel VROC Intel SSD Only key (VROCISSDMOD) installed
+ a number of other SSDs etc. (I even tried the U.2 port with a 6.4TB Intel P4600 - that works too, BUT you cannot use the onboard M.2 slot (riser card) in CPU mode if you do; you need to route the M.2 slot through the chipset's shared x4 lanes. I was experimenting with these configurations back and forth with various BIOS settings, but I do not use the U.2 port anymore - might as well route everything through the ASUS HYPER M.2 X16 CARD and continue to use the M.2 slot.)

I've had both of these configurations set up and working:
1. Graphics card in slot 3 and Hyper x16 card in slot 1 (had to use this setup initially because my CPU air cooler was too big and conflicted with any card in PCIe slot 1)
2. Graphics card in slot 1 and Hyper x16 card in slot 3 (delidded my 7980XE CPU and changed to water cooling, allowing me to put the graphics card back in PCIe slot 1)


Both configurations work, but 2 is better in my view.

I currently have configuration 2 running 24/7 - the best setup for best performance... but both work 24/7, confirmed...

I see no reason why it would not work to add an additional ASUS HYPER M.2 X16 CARD in PCIe slot 4, BUT if you do, you can only use 8 of its 16 lanes. The reason is that you simply do not have enough PCIe lanes connected to the CPU, even with a 7980XE (44 is the max).

Simple PCIe lane math: 16 lanes (in slot 1 for the graphics card) + 16 lanes (in slot 3 for the Hyper x16 card) + 8 lanes (in slot 4 for the 2nd Hyper x16 card) = 40, plus another 4 lanes for the chipset etc. (cannot change that) = 44, which is all the 7980XE supports...

So that's the best you can do on an R6E with a 7980XE.
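
The same lane budget written out as a quick calculation (the numbers mirror the post; the 4 "reserved" lanes are the fixed allocation the post describes):

```python
# The lane-budget arithmetic from the post, for a 44-lane CPU on the R6E.
# The 4 "reserved" lanes follow the post's description and cannot be reassigned.

CPU_LANES = 44
allocation = {
    "slot 1 (graphics card)":          16,
    "slot 3 (Hyper M.2 x16 card)":     16,
    "slot 4 (2nd Hyper M.2 card)":      8,
    "chipset/other (fixed, per post)":  4,
}

for item, lanes in allocation.items():
    print(f"{item}: x{lanes}")
print(f"Total: {sum(allocation.values())} of {CPU_LANES} CPU lanes")  # 44 of 44 - nothing left over
```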

Jahmen wrote:
[...] There appears to be no consensus yet on exactly which of those key files (RSTe or VROC) to use for the HW key.

The only known working key is VROCISSDMOD, which you don't need for RAID 0 but do for everything else. This is confirmed with the original RVIE only, though; I don't imagine they changed that. The change-up in PCIe slots is what kept me from upgrading. I'll stick with my two AIC 900P drives in the other two slots that are not available on the Omega or Encore, as I'm using the 2 x16 slots for GPUs.

As for the question of why put two x16 AIC drive cards in: some people need fast file transfer in and right back out. Put 4 together and you get better than 10 GB/s.

If you go back towards the beginning, there is a step-by-step of which drivers to use, in what order, and where to get them on winraid. It takes two drivers during the install at the F6 prompt.

BigJohnny wrote:
The only known working key is VROCISSDMOD, which you don't need for RAID 0 but do for everything else. [...]

YES, VROCISSDMOD is the only key that works, AND it ONLY works with high-end Intel SSDs like the Optane 90x series.

A key can be bought for $20 here: https://www.evga.com/products/product.aspx?pn=W002-00-000066

Lots of other sites charge crazy prices, and EVGA is the cheapest I know of currently.

Might as well buy the key to allow you to configure multiple RAIDs in various modes: RAID 0 for fast access and super-high IOPS, and RAID 5 or 1 for redundant storage. $20 is peanuts given how much a system like this costs in total.

I have found that VROC scales very well with the CPU, i.e. 18 cores and a good OC improve the disk access/IO scores (CrystalDiskMark, etc.).

If you are concerned about space limitations on the Intel Optane M.2 drives, get the U.2 model and use U.2-to-M.2 converter cables - problem solved...

Also, based on 2 years of VROC experience, I can say that VROC is rock solid - it just works, period. (The downside is it only works with Intel high-end drives on X299 and requires the VROCISSDMOD key for any RAID level other than RAID 0.)

Aysberg
Level 10
Just out of curiosity, what are you doing with such a setup? Is this just a "because I can" scenario? If I needed that massive storage performance I would simply switch platforms and could achieve more with much less hassle.

I am running an Omega with 2 x 2080 Ti, doing 3D renderings and post-production, and a RAID 0 with two Samsung NVMe drives can handle the load and is still not at its limit. The OS boots from another NVMe and the rest is stored on classic SSDs or even a spinning drive.

machproo
Level 7
Hello, I have been using a RAID 0 VROC array of 3 x Western Digital Black SN750 1TB SSDs on my ROG Dominus Extreme motherboard for the past year, and the WD Black SN750 SSD series is perfectly VROC compatible. I just ordered an ANU28PE16 controller board built around the Broadcom PLX 8748 chip. This card is VROC compatible and would allow you to create RAID 0 to RAID 5 with 8 x U.2 SSDs, or 16 x U.2 SSDs if using QNAP-U2MP enclosures. The equipment is ordered; I'll post the test results.

machproo wrote:
Hello, I have been using a RAID 0 VROC array of 3 x Western Digital Black SN750 1TB SSDs on my ROG Dominus Extreme motherboard for the past year [...]

The Dominus is a Socket 3647 Xeon board using the C621 chipset (not a Core i9 Extreme X/XE part with the X299 chipset), so things written in this thread do not apply to Xeon/C621.
You do need a VROC module too, but you have a choice: Premium or Standard version.
Also, Xeon/C621 is not limited to Intel SSDs only. There is a much larger range of SSDs to choose from, listed on Intel's web site: https://www.intel.com/content/www/us/en/support/articles/000030310/memory-and-storage/ssd-software.h...
This is the relevant page from the November 2021 version. Pay special attention to the last two sentences on the page: "Intel VROC on X299 only supports the Intel SSDs listed above. The Third Party Vendor SSDs List is NOT supported by Intel VROC on X299 platforms"
[attached screenshot]