Okay, so if I understand this right, ASUS sells the R6EE board as made for the i9-109XX series and "Ready for the latest Intel® Core™ X-series processors to maximize connectivity", but with a lot of constraints:
1. The M.2_1 and M.2_2 slots, and the PCIEX4_1 slot, use chipset (PCH) lanes, not CPU lanes, so they are inaccessible to VROC but can be used for IRST.
2. My 905P M.2 SSDs are 22110 form factor, too long to fit in M.2_1 and M.2_2, so they can't be set up with IRST.
3. To use my 905Ps in a RAID configuration, I must therefore install them in the DIMM.2 module and buy a certain type (?) of VROC key for $100++.
4. Installing them in the DIMM.2 module means PCIe_4 runs at only x4, but the PCIEX4_1 slot and M.2_2 remain unshared.
5. I have two x16 graphics cards, which can be installed in PCIEX16_1 and PCIEX16_2.
What VROC key is needed for this board?
2. Yes, they do not fit in M.2_1 and M.2_2. You can still potentially use IRST, though, if you put them somewhere else...
3-5. More complicated answer:
a) You can install them in the DIMM.2_x slots and use a VROC key, BUT they end up on two different VMDs (VMD0 and VMD1), AND you cannot create a BOOTABLE VROC RAID 0 volume that spans two VMDs, so if that is what you want, this solution won't work either.
b) You have to use Intel Optane drives to create a VROC volume (900P and 905P are confirmed working, but non-Intel drives do not work).
c) You can get a VROC key relatively "cheap" for $20 at EVGA here: https://www.evga.com/products/product.aspx?pn=W002-00-000066
d) If you only want to use VROC RAID 0, you do not need a VROC key, but I would recommend getting one anyway because it gives you flexibility for the future, and for $20 it's not a big deal...
To create a bootable VROC RAID 0 on 2 CPU-attached x4 lanes, you can:
i) Use one of the DIMM.2_x slots (you need to try out which is which; one is tied to VMD0 and one to VMD1)
and put the other drive in the PCIEX16_3 slot you have left. For instance, since you are running 2 graphics cards in, I assume, PCIEX16_1 and PCIEX16_2, you could put an M.2 adapter card in PCIEX16_3. I am not sure if PCIEX16_3 is tied to VMD0 or VMD1, but it should be one or the other and could be matched with one of the DIMM.2_x slots; you need to try (see the Linux sketch after this list for one way to check).
Also, if you use PCIEX4_1, make sure you do not (plan to) use M.2_2, because they share the same x4 lanes of bandwidth and you can only have one enabled at a time in the BIOS. AND I am not even sure you can VROC PCIEX4_1, given its PCH connection.
Here is an example of an x4 PCIe adapter (that I have not tried): https://www.amazon.com/EZDIY-FAB-Express-Adapter-Support-22110/dp/B01GCXCR7W
ii) Get an ASUS Hyper M.2 x16 card with 4 M.2 slots and put it in the last slot, PCIEX16_3. BUT its usability is going to depend on the CPU you have; see the table on page 1-8 in the user manual... The 10980XE should allow for x16/x16/x8, but in my experience it only does x16/x16/x4 (maybe a BIOS issue), so in the end you will at most be able to use 2 slots of the Hyper card (e.g., if you downshift PCIEX16_2 to x8 for graphics card 2). If you have a "lesser" CPU with 44 or fewer PCIe lanes, you have even fewer choices; again, see the table on page 1-8.
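By the way, if you can boot a Linux live USB, there is a quicker way to map slots to VMDs than swapping drives and reinstalling: NVMe drives that sit behind a VMD controller show up in a separate PCI domain in sysfs. Rough sketch below; the 10000/10001-style domain numbering is an assumption from what I've seen on other VMD systems, not something out of the R6EE manual, so double-check on your own box:

#!/usr/bin/env python3
# Rough sketch (Linux only): list each NVMe controller and the Intel
# VMD domain it sits behind, if any. Assumes VMD-attached drives get
# a separate PCI domain like 10000:xx:xx.x (verify on your system).
import glob
import os
import re

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    # Resolve the sysfs symlink to the full PCI device path.
    path = os.path.realpath(os.path.join(ctrl, "device"))
    # A VMD-attached drive's path passes through an extra PCI domain
    # segment such as "pci10000:00"; a plain CPU/PCH-attached drive
    # stays in domain 0000.
    m = re.search(r"pci(10[0-9a-f]{3}):", path)
    name = os.path.basename(ctrl)
    if m:
        print(f"{name}: behind VMD domain {m.group(1)}")
    else:
        print(f"{name}: not behind a VMD domain")

Put one drive in each candidate slot, run it once, and you can tell which slots share a VMD in a single boot.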
I have the R6EE with a 10980XE and 256 GB of RAM, etc.
I had the same idea as I think you had: use 2 graphics cards in PCIEX16_1 and PCIEX16_2 and put 2 Intel 905P 380GB M.2 drives in the DIMM.2 slots in a VROC RAID for the system drive (and I would still have space in PCIEX4_1 and PCIEX16_3 for the future). AND it does not work, because they end up on two different VMDs!!!
So in the end I used my ASUS Hyper M.2 x16 V2 card that I already had, bought 2 more 905Ps, forgot about the 2nd graphics card (waiting for the 3080 Ti, which is coming "real soon now"), and put the Hyper x16 card in PCIEX16_2. This works 100%, with blazing 10 GB/s speeds in writes and reads, massive IOPS, and so on...
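For what it's worth, the ~10 GB/s figure checks out against back-of-the-envelope math from the 905P spec sheet (these are approximate rated numbers, not my measurements):

# Quick sanity check on the ~10 GB/s claim: RAID 0 sequential
# throughput scales roughly linearly across members.
drives = 4
read_per_drive = 2.6   # GB/s, approx. rated sequential read of a 905P
write_per_drive = 2.2  # GB/s, approx. rated sequential write

print(f"RAID 0 read  ~ {drives * read_per_drive:.1f} GB/s")   # ~10.4
print(f"RAID 0 write ~ {drives * write_per_drive:.1f} GB/s")  # ~8.8

So ~10 GB/s reads is right where 4 drives should land.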
I now have space for more future expansion and am not maxed out on PCIe slots. I'm quite happy and think that I made the right decision using one 2080 Ti while waiting for the 3080 Ti...
The R6EE is a pleasure to work with (compared to the R6E), and the 10980XE is easy to overclock on this board: almost effortless to achieve a sustainable 24/7 4.6 GHz (and a bit more work to get to 5 GHz, but that's not sustainable 24/7). I used Folding@home to really stress test the system for sustainable OC for days. Some of the work units are super stressful, more so than the standard suite (AIDA64, Cinebench, Prime95...), and they test well for sustained load on both the graphics card and the CPU simultaneously, with heavy use of AVX that can drive temps and power usage, both peak and sustained, really, really high.
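If you want hard numbers on temps during one of those multi-day runs, a little logger alongside Folding@home does the trick. Minimal sketch, assuming the standard Linux hwmon sysfs layout (sensor names and paths vary by board and driver, so check yours):

#!/usr/bin/env python3
# Minimal temperature logger for long stress runs (Linux hwmon).
# Stop it with Ctrl-C.
import glob
import time

def read_temps():
    temps = {}
    for f in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
        try:
            # hwmon reports millidegrees Celsius.
            temps[f] = int(open(f).read()) / 1000.0
        except (OSError, ValueError):
            pass  # some sensors fail to read; skip them
    return temps

while True:
    temps = read_temps()
    if temps:
        print(f"max {max(temps.values()):.1f} C across {len(temps)} sensors")
    time.sleep(5)  # poll every 5 seconds for the duration of the run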