R6EE VROC on DIMM.2

BigJohnny
Level 13
Anyone running VROC on DIMM.2 with a pair of 905P M.2 drives?

If so, did it take using F6 drivers at Windows install?

BigJohnny
Level 13
I didn't have an issue with different VMDs on the R6E with two 900P AIC cards.
I do remember that setting it in the BIOS was worthless: it never did show up right, and the boot partition just said Windows Boot. I had to do the F6 drivers; then it worked perfectly. They take x4 each with a direct connection to the CPU, so as long as nothing else is populated it shouldn't be an issue. The x16/x16 should then apply to PCIe cards, and that covers the GPUs. Will see what I get.

I do recall having to install only the iaStorE and iaVROC drivers first and add the RSTe software later, just so I could see what was going on and whether it was correctly seeing my key.
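A side note: if you'd rather not feed the F6 drivers at install time, one alternative is to inject the extracted driver package into an offline Windows image with DISM beforehand. A minimal sketch, assuming the package is already extracted and the image is mounted (both paths below are hypothetical placeholders, not real defaults):

```python
# Sketch: pre-inject the extracted VROC F6 driver package (iaStorE/iaVROC
# .inf files) into an offline Windows image so Setup already has it.
# Both paths are hypothetical placeholders.
import subprocess

DRIVER_DIR = r"C:\drivers\vroc"   # where the F6 package was extracted (assumed)
MOUNT_DIR = r"C:\mount\windows"   # image mounted beforehand with dism /Mount-Image

subprocess.run(
    ["dism", f"/Image:{MOUNT_DIR}", "/Add-Driver",
     f"/Driver:{DRIVER_DIR}", "/Recurse"],  # /Recurse picks up .inf in subfolders
    check=True,  # raise CalledProcessError if DISM reports a failure
)
```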


Thanks for the input

BigJohnny
Level 13
You can run both with only x8 on the AIC card, but you're still in the same boat.
This only applies to the DIMM.2, and once you set it to x16/x8/x4 it's all happy. You can run two AIC cards, one in the x4 slot (the small one that disables M.2_2) and one in the bottom slot, at x16/x16/x4. Then you still lose the DIMM.2_2 and the M.2_2 under the cover.

Yeah, they give you 48 lanes you can't use. I'm on a 44-lane CPU but don't have the option to disable the slots I don't use. No matter: x16 + x8 for the GPUs, x8 for two M.2 drives, x8 for both DIMM.2 drives, and there's your 40. The other 4 go to the DMI, which makes 44, which is where my 9940X is at. What good is another 4 lanes that can't be used? My preference would be the ability to switch off the lanes of the bottom two slots and use them where I need them.
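For what it's worth, that lane arithmetic in one small snippet (the allocations are exactly the ones just listed; nothing board-specific is assumed beyond them):

```python
# Lane budget for a 44-lane HEDT CPU, per the breakdown above.
allocations = {
    "GPU 1": 16,
    "GPU 2": 8,
    "two M.2 drives (x4 each)": 8,
    "two DIMM.2 drives (x4 each)": 8,
}
used = sum(allocations.values())  # 40 lanes on slots and drives
dmi = 4                           # plus 4 lanes to the DMI
print(used, used + dmi)           # 40, 44: the whole budget is spent
```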

BigJohnny
Level 13
It's up and operational. The only downside is the latency of the VROC RAID, but Q1 4K is still more than twice as fast as a standalone Sammy 970 Plus, and sequentials are past 500. I was going to use my single U.2 905P, but there's no U.2 port and no way to make it tidy. The other caveat is slower boot times, but I can deal with that. If I wasn't using my GPU's vertical mount I'd have just used the AIC 900P drives, which didn't limit any lanes. At least you can use the DIMM.2 for VROC; the R6E has one of its DIMM.2 drives on the PCH. So these two, a 960 Pro and a 970 EVO Plus, 2x 1TB 850 EVOs, and 3x 1TB 860 EVOs. That should do me for local storage.

rosefire
Level 7
Okay, so if I understand this right, ASUS sells the R6EE board as made for the i9-109XX and "Ready for the latest Intel® Core™ X-series processors to maximize connectivity", but with a lot of constraints:

1. The M.2_1 and M.2_2 slots and the PCIe x4 slot are chipset lanes, not CPU lanes, so they are inaccessible to VROC but can be used for IRST.

2. My 905P M.2 SSDs are 22110s, too long to fit in M.2_1 and M.2_2, so they can't be set up with IRST.

3. To use my 905Ps in a RAID configuration, I must therefore install them on the DIMM.2 and buy a certain type (?) of VROC key for $100+.

4. Installing them on the DIMM.2 means PCIe_4 runs at only x4, but the PCIe x4 slot and M.2_2 remain unshared.

5. I have two x16 graphics cards, which can be installed in PCIe_1 x16 and PCIe_2 x16.

What VROC key is needed for this board?
Platform.......Rampage VI Extreme Encore / i9-10940X
Memory.........G.Skill F4-4266C17Q-32GTZR 32GB kit
Graphics.......Radeon Pro Vega 56
Boot drive.....2x Intel 905P 380GB M.2 SSD
Storage........2x Samsung 970 EVO 1TB M.2 SSD
Cooling........MCP355 pump, Swiftech SKF block, EK360 60mm radiator



rosefire wrote:
Okay, so if I understand this right, ASUS sells the R6EE board as made for the i9-109XX, but with a lot of constraints (points 1-5 above)...

What VROC key is needed for this board?



1. Correct.
2. Yes, they do not fit in M.2_1 and M.2_2. You can still potentially use IRST, though, if you put them somewhere else...
3-5. More complicated answer:
a) You can install them in the DIMM.2_x slots and use a VROC key, BUT they end up on two different VMDs (VMD0 and VMD1), AND you cannot create a BOOTABLE VROC RAID 0 volume that spans two VMDs, so if that is what you want, this solution won't work either (see the sketch after this list).
b) You have to use Intel Optane drives to create the VROC volume (900P and 905P are confirmed to work, but non-Intel drives do not).
c) You can get a VROC key relatively "cheap" for $20 at EVGA here: https://www.evga.com/products/product.aspx?pn=W002-00-000066
d) If you only want VROC RAID 0 you do not need a VROC key, but I would recommend getting one anyway because it gives you flexibility for the future, and for $20 it's not a big deal...
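To make the constraint in (a) concrete, here is a tiny sketch of the rule; the slot-to-VMD mapping in it is a made-up placeholder, since (as described below) the real assignment has to be found by trial:

```python
# Bootable VROC RAID 0 cannot span VMD domains: every member drive must
# sit on the same VMD. The mapping below is a hypothetical placeholder.
VMD_OF_SLOT = {
    "DIMM.2_1": 0,
    "DIMM.2_2": 1,
    "PCIEX16_3": 0,  # assumption; it could just as well be VMD1
}

def bootable_raid_possible(slots: list[str]) -> bool:
    """True if all member drives share one VMD domain."""
    return len({VMD_OF_SLOT[s] for s in slots}) == 1

print(bootable_raid_possible(["DIMM.2_1", "DIMM.2_2"]))   # False: VMD0 + VMD1
print(bootable_raid_possible(["DIMM.2_1", "PCIEX16_3"]))  # True under this mapping
```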

To create a bootable VROC RAID 0 on two CPU x4 lanes, you can:

i) Use one of the DIMM.2_x slots (you need to try out which one is tied to VMD0 and which to VMD1) and put the other drive in the PCIEX16_3 slot you have left. For instance, since you are running two graphics cards in, I assume, PCIEX16_1 and PCIEX16_2, you could put an M.2 adapter card in PCIEX16_3. I am not sure whether PCIEX16_3 is tied to VMD0 or VMD1, but it should be one or the other and could be matched with one of the DIMM.2_x slots - you need to try...
Also, if you use PCIEX4_1, make sure you do not (plan to) use M.2_2, because they share bandwidth (the same x4 lanes) and you can only have one of them enabled at a time in BIOS. AND I am not even sure you can VROC PCIEX4_1 given its PCH connection.
Here is an example of an x4 PCIe adapter (that I have not tried): https://www.amazon.com/EZDIY-FAB-Express-Adapter-Support-22110/dp/B01GCXCR7W

ii) Get an ASUS Hyper x16 card with four M.2 slots and put it in the last slot, PCIEX16_3. BUT its usability is going to depend on the CPU you have; see the table on page 1-8 in the user manual... The 10980XE should allow for x16/x16/x8, but in my experience it only does x16/x16/x4 (maybe a BIOS issue), so in the end you will at most be able to use two of the four slots on the ASUS Hyper x16 card (e.g., if you downshift PCIEX16_2 to x8 for graphics card 2). If you have a "lesser" CPU with 44 or fewer PCIe lanes you have even fewer choices; again, see the table on page 1-8.
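The usable-slot count falls straight out of bifurcation: each M.2 drive on a Hyper-type card needs its own x4 group, so slot lanes divided by four gives the drive count. A quick illustration (the slot widths are just the ones discussed above, not a transcription of the manual's table):

```python
# Each M.2 drive on a Hyper-style carrier card needs its own x4
# bifurcation group, so usable drives = slot lanes // 4.
def usable_m2_slots(slot_lanes: int) -> int:
    return slot_lanes // 4

for lanes in (16, 8, 4):
    print(f"slot at x{lanes}: {usable_m2_slots(lanes)} of 4 M.2 slots usable")
# x16 -> 4 drives, x8 -> 2 drives, x4 -> 1 drive
```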

I have the R6EE with a 10980XE, 256GB of RAM, etc.
I had the same idea as I think you had: two graphics cards in PCIEX16_1 and PCIEX16_2, and two Intel 905P 380GB M.2 drives in the DIMM.2 slot as a VROC RAID for the system (and I still have space in PCIEX4_1 and PCIEX16_3 for the future). AND it does not work, because they end up on two different VMDs!!!

So in the end I used the ASUS Hyper x16 V2 card that I already had, bought two more 905Ps, forgot about the second graphics card (waiting for the 3080 Ti, which is coming "real soon now"), and put the Hyper x16 card in PCIEX16_2. This works 100%, with blazing 10GB/s speeds in writes and reads, massive IOPS, and so on...
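That 10GB/s is roughly what ideal RAID 0 scaling predicts. A back-of-the-envelope check, assuming ~2.6GB/s sequential read per 905P (a spec-sheet ballpark quoted from memory, so treat it as an assumption):

```python
# Rough ideal RAID 0 sequential-read estimate; the per-drive number is an
# assumed ballpark for the Optane 905P, not a measured value.
PER_DRIVE_SEQ_GBPS = 2.6
drives = 4  # four 905Ps on the Hyper x16 card

print(f"~{drives * PER_DRIVE_SEQ_GBPS:.1f} GB/s ideal")  # ~10.4 GB/s
```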

I now have space for future expansion and am not maxed out on PCIe slots. I'm quite happy and think I made the right decision using one 2080 Ti while waiting for the 3080 Ti...

The R6EE is a pleasure to work with (compared to the R6E), and the 10980XE is easy to overclock on this board: almost effortless to sustain 4.6GHz 24/7, and a bit more work to get to 5GHz, though not sustainable 24/7. I used Folding@home to really stress test the system for a sustainable OC over days. Some of the work units are far more stressful than the standard suite (AIDA64, Cinebench, Pi...), and they load both the graphics card and the CPU simultaneously, with heavy use of AVX that can drive temps and power usage, both peak and sustained, really, really high.

Good luck!