RVIEE VROC on DIMM.2

BigJohnny
Level 13
Anyone running VROC on DIMM.2 with a pair of 905P M.2 drives?

If so, did it work using the F6 drivers during the Windows install?

deboyzfun
Level 7
BigJohnny wrote:
Anyone running VROC on DIMM.2 with a pair of 905P M.2 drives?

If so, did it work using the F6 drivers during the Windows install?


Yes. VROC driver: Intel_VROC_win_6.2.0.1239_pv, but I used 760p drives.
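
If it helps with the F6 question: once Windows is up, you can sanity-check that the VROC driver actually loaded. A minimal Python sketch (driverquery is a built-in Windows tool; matching on "vroc"/"iastor" in the module name is just my assumption about how the package registers itself):

```python
import csv
import subprocess

# driverquery lists loaded kernel drivers; /v adds detail and /fo csv
# makes the output machine-parseable.
out = subprocess.run(
    ["driverquery", "/v", "/fo", "csv"],
    capture_output=True, text=True, check=True,
).stdout

rows = list(csv.reader(out.splitlines()))
header, entries = rows[0], rows[1:]

for entry in entries:
    # "vroc"/"iastor" as module-name substrings is an assumption about how
    # the Intel VROC/RST package registers itself -- adjust if needed.
    if any(key in entry[0].lower() for key in ("vroc", "iastor")):
        print(dict(zip(header, entry)))
```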

deboyzfun wrote:
Yes. VROC driver: Intel_VROC_win_6.2.0.1239_pv, but I used 760p drives.




I have an R6EE with a Core i9-10980XE, which should have 48 PCIe lanes.

I could not get VROC to work with two 380GB 905Ps in the DIMM.2 slot.
Several issues:
1. In x16, x16, x4 mode I only see one of the 905Ps.
To even get the second 905P visible in the BIOS I had to set the mode to x16, x8, x8; then I see both.

2. In x16, x8, x8 mode I can create a VROC RAID 0, BUT the two 905Ps for some reason end up on two different VMDs, AND
according to the BIOS you cannot create a bootable VROC RAID 0 for Windows 10 that spans two VMDs. So while I can create a RAID 0, I cannot make it bootable (which is what I want).

If you have a good answer for what I need to do, I would be very appreciative.

Thanks

Int8bldr wrote:


I have an R6EE with a Core i9-10980XE, which should have 48 PCIe lanes.

I could not get VROC to work with two 380GB 905Ps in the DIMM.2 slot.
Several issues:
1. In x16, x16, x4 mode I only see one of the 905Ps.
To even get the second 905P visible in the BIOS I had to set the mode to x16, x8, x8; then I see both.

2. In x16, x8, x8 mode I can create a VROC RAID 0, BUT the two 905Ps for some reason end up on two different VMDs, AND
according to the BIOS you cannot create a bootable VROC RAID 0 for Windows 10 that spans two VMDs. So while I can create a RAID 0, I cannot make it bootable (which is what I want).

If you have a good answer for what I need to do, I would be very appreciative.

Thanks


I have the same issue (see also the discussion in https://rog.asus.com/forum/showthread.php?117161-Rampage-VI-Extreme-Encore-DIMM-2-PCIE-Lane-Allocati...).

This seems to be "expected" according to the manual: on page ix, in the footnotes marked with (*) and (**), it says that running the first two PCIe slots in x16 mode disables one of the DIMM.2 M.2 slots. The result is that 4 of the PCIe lanes coming from the CPU are "magically" inaccessible on this board. I have contacted Asus support, but so far they just replied with a screenshot of the manual that they cropped right before the footnotes, so it seems they don't even know (or care) what they wrote in their own manual.
In summary, I have no idea how all CPU lanes can be used on the "Extreme Encore", so maybe we need another "Extreme Encore Again" revision that has good VRMs and makes all CPU lanes accessible.


BenJW wrote:
I have the same issue (see also the discussion in https://rog.asus.com/forum/showthread.php?117161-Rampage-VI-Extreme-Encore-DIMM-2-PCIE-Lane-Allocati...).

This seems to be "expected" according to the manual: on page ix, in the footnotes marked with (*) and (**), it says that running the first two PCIe slots in x16 mode disables one of the DIMM.2 M.2 slots. The result is that 4 of the PCIe lanes coming from the CPU are "magically" inaccessible on this board. I have contacted Asus support, but so far they just replied with a screenshot of the manual that they cropped right before the footnotes, so it seems they don't even know (or care) what they wrote in their own manual.
In summary, I have no idea how all CPU lanes can be used on the "Extreme Encore", so maybe we need another "Extreme Encore Again" revision that has good VRMs and makes all CPU lanes accessible.


Thank you for confirming my suspicions!

I think the board was not made for a 48-PCIe-lane CPU, or they never enabled the BIOS to handle all 48 PCIe lanes of the Core i9-10980XE.

The manual says this:

[attached: screenshot of the manual's PCIe slot / DIMM.2 lane configuration table]

Anyone reading this would be led to believe that you can run both DIMM.2_1 and DIMM.2_2 in x16, x16, x4 mode, BUT DIMM.2_2 does not show up.

And even if you change to x16, x8, x8 and get DIMM.2_2 to show in the BIOS, you cannot create a bootable VROC RAID 0 from them.

My next attempt is to use my HYPER M.2 X16 CARD V2 and put them there instead, to see if I can somehow make that work.

I'm thinking of keeping x16, x16, x4 (two graphics cards in PCIEX16_1 and PCIEX16_2) but putting the HYPER M.2 X16 CARD V2 with one 380GB M.2 905P in slot PCIEX16_3 and leaving the other 380GB M.2 905P in the DIMM.2_1 slot. What a waste of the HYPER M.2 X16 CARD 😞

We'll see. The other option is to forget about running two graphics cards altogether and just put the HYPER M.2 X16 CARD in the PCIEX16_2 slot 😞

Int8bldr wrote:
Thank you for confirming my suspicions!

I think the board was not made for a 48-PCIe-lane CPU, or they never enabled the BIOS to handle all 48 PCIe lanes of the Core i9-10980XE.

Anyone reading this would be led to believe that you can run both DIMM.2_1 and DIMM.2_2 in x16, x16, x4 mode, BUT DIMM.2_2 does not show up.

And even if you change to x16, x8, x8 and get DIMM.2_2 to show in the BIOS, you cannot create a bootable VROC RAID 0 from them.

My next attempt is to use my HYPER M.2 X16 CARD V2 and put them there instead, to see if I can somehow make that work.

I'm thinking of keeping x16, x16, x4 (two graphics cards in PCIEX16_1 and PCIEX16_2) but putting the HYPER M.2 X16 CARD V2 with one 380GB M.2 905P in slot PCIEX16_3 and leaving the other 380GB M.2 905P in the DIMM.2_1 slot. What a waste of the HYPER M.2 X16 CARD 😞

We'll see. The other option is to forget about running two graphics cards altogether and just put the HYPER M.2 X16 CARD in the PCIEX16_2 slot 😞


Thanks for the insight about the VMDs. It's really annoying, since the obvious way to build a two-drive M.2 RAID would indeed be to put both SSDs into the DIMM.2 slots. What's worse, imho, is that one of the main selling points of Cascade Lake-X over the previous-generation CPUs is the 4 extra CPU lanes, but the Encore nullifies that advantage by keeping 4 lanes inaccessible with any 44- or 48-lane CPU, even though it is already the third incarnation of the Rampage VI Extreme series.

If it helps, there are also simple M.2-to-PCIe-x4 adapters (on eBay or AliExpress) that are cheaper than the Hyper X16 card and would probably be sufficient to run a single M.2 drive in PCIEX16_3.

BenJW wrote:
Thanks for the insight about the VMDs. It's really annoying, since the obvious way to build a two-drive M.2 RAID would indeed be to put both SSDs into the DIMM.2 slots. What's worse, imho, is that one of the main selling points of Cascade Lake-X over the previous-generation CPUs is the 4 extra CPU lanes, but the Encore nullifies that advantage by keeping 4 lanes inaccessible with any 44- or 48-lane CPU, even though it is already the third incarnation of the Rampage VI Extreme series.

If it helps, there are also simple M.2-to-PCIe-x4 adapters (on eBay or AliExpress) that are cheaper than the Hyper X16 card and would probably be sufficient to run a single M.2 drive in PCIEX16_3.


So I went with putting both 380GB 905Ps in my HYPER M.2 X16 CARD V2 and installed it in slot PCIEX16_2 (for now), and that worked. Both end up on the same VMD (#0) and I can create a bootable VROC RAID 0 drive.

The conclusion is that the DIMM.2 slot is useless if you want a bootable VROC RAID 0. You can still use it for individual M.2 drives, and even VROC-RAID them, but the array will not be bootable.

The other implicit conclusion is that if you want to use all 48 PCIe lanes, you have to use the PCIe slots to do so (forget about the DIMM.2 slots). x16, x16, x8 mode worked!
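
If anyone wants to double-check the VMD grouping from the OS side, here is a rough Python sketch for a Linux live stick (the assumption here is that VMD-attached drives show up under their own non-zero PCI domain in lspci, which is how it typically looks):

```python
import subprocess
from collections import defaultdict

# lspci -D prints the PCI domain in each address (dddd:bb:dd.f). NVMe drives
# behind an Intel VMD controller typically appear under their own non-zero
# domain (e.g. 10000:...), so two drives sharing a domain suggests one VMD.
lspci = subprocess.run(
    ["lspci", "-D"], capture_output=True, text=True, check=True
).stdout

by_domain = defaultdict(list)
for line in lspci.splitlines():
    if "Non-Volatile memory controller" in line:
        addr = line.split(" ", 1)[0]
        domain = addr.split(":")[0]   # "0000", "10000", ...
        by_domain[domain].append(addr)

for domain, devs in sorted(by_domain.items()):
    print(f"domain {domain}: {len(devs)} NVMe device(s) -> {devs}")
```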

Int8bldr wrote:
So I went with putting both 380GB 905Ps in my HYPER M.2 X16 CARD V2 and installed it in slot PCIEX16_2 (for now), and that worked. Both end up on the same VMD (#0) and I can create a bootable VROC RAID 0 drive.

The conclusion is that the DIMM.2 slot is useless if you want a bootable VROC RAID 0. You can still use it for individual M.2 drives, and even VROC-RAID them, but the array will not be bootable.

The other implicit conclusion is that if you want to use all 48 PCIe lanes, you have to use the PCIe slots to do so (forget about the DIMM.2 slots). x16, x16, x8 mode worked!


Thanks for sharing your knowledge. One more question:
x16 + x16 + x8 = 40, plus 4 lanes from the one working DIMM.2 slot makes 44 lanes. That's still 4 lanes short of 48 (the DMI is counted separately). Do you know how the other 4 lanes are used/accessible on the Encore?
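
To make the bookkeeping explicit, here is a throwaway Python tally of the configurations discussed so far (slot widths as reported in this thread; "unaccounted" is exactly the mystery):

```python
# Rough lane bookkeeping for the Encore with a 48-lane Cascade Lake-X CPU.
# The DMI link to the PCH is counted separately from the CPU lanes.
CPU_LANES = 48  # Core i9-10980XE

configs = {
    "x16/x16/x4 + one DIMM.2":         [16, 16, 4, 4],
    "x16/x8/x8 + both DIMM.2":         [16, 8, 8, 4, 4],
    "x16/x16/x8 (Hyper) + one DIMM.2": [16, 16, 8, 4],
}

for name, widths in configs.items():
    used = sum(widths)
    print(f"{name}: {used} lanes used, {CPU_LANES - used} unaccounted")
```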

Cheers
Ben

BenJW wrote:
Thanks for sharing your knowledge. One more question:
x16 + x16 + x8 = 40, plus 4 lanes from the one working DIMM.2 slot makes 44 lanes. That's still 4 lanes short of 48 (the DMI is counted separately). Do you know how the other 4 lanes are used/accessible on the Encore?

Cheers
Ben


I think the last 4 PCIe lanes go to the x4 slot (PCIEX4) OR to M.2_2 over the PCH (you can configure this in the BIOS, but I have not tested that).

One thing to notice is that you cannot move the 380GB 905P M.2s to M.2_1 or M.2_2; they are too long to fit (the 905P M.2 is a 22110-length device). Even if they did fit, they would be slow because the M.2 slots are limited by the PCH bandwidth, and you can only RAID them with IRST (not VROC).

Int8bldr wrote:
I think the last 4 PCIe lanes go to the x4 slot (PCIEX4) OR to M.2_2 over the PCH (you can configure this in the BIOS, but I have not tested that).

One thing to notice is that you cannot move the 380GB 905P M.2s to M.2_1 or M.2_2; they are too long to fit (the 905P M.2 is a 22110-length device). Even if they did fit, they would be slow because the M.2 slots are limited by the PCH bandwidth, and you can only RAID them with IRST (not VROC).


Yes, that's correct, but it does not solve the mystery of where the 4 CPU lanes went: the x4 links to PCIEX4 (as well as to M.2_1 and M.2_2) each come from the PCH. The PCH can provide up to 24 PCIe lanes. It is connected to the CPU via the DMI, which essentially behaves like an x4 PCIe link, but that link is counted separately from the CPU lanes, as indicated in the block diagram below (the diagram is for Skylake-X with max. 44-lane CPUs):

[attached: X299 platform block diagram (Skylake-X, up to 44 CPU lanes)]

This also means that PCIEX4 as well as M.2_1 and M.2_2 cannot be used for VROC at all, since they are not connected directly to the CPU via CPU lanes. As you write, they can only be used for an IRST chipset RAID, which would bottleneck the bandwidth at the shared DMI x4 link.
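
To put rough numbers on that bottleneck, a quick back-of-the-envelope in Python (all three figures are approximations: ~985 MB/s usable per PCIe 3.0 lane, DMI 3.0 treated as four such lanes, and ~2600 MB/s sequential read per 905P from Intel's spec sheet):

```python
# Why a chipset (IRST) RAID behind the PCH is DMI-bound, approximately.
PCIE3_LANE_MBPS = 985
DMI_MBPS = 4 * PCIE3_LANE_MBPS        # ~3940 MB/s, shared by the whole PCH
DRIVE_READ_MBPS = 2600                # per 905P, sequential read

raid0_ideal = 2 * DRIVE_READ_MBPS     # ~5200 MB/s if the lanes were free
print(f"DMI ceiling:     ~{DMI_MBPS} MB/s")
print(f"2x 905P RAID 0:  ~{raid0_ideal} MB/s ideal")
print(f"Shortfall:       ~{raid0_ideal - DMI_MBPS} MB/s (before any other PCH traffic)")
```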

In summary, it is still an unsolved mystery what Asus did with the remaining 4 CPU lanes.