Issues with trying to configure an 8 drive array

AviatorDuck
Level 7
Ok folks, what am I missing here? The Intel Rapid Storage Technology platform seems to only be able to use 6 of my 8 available SATA ports in a single array. The UEFI BIOS/IRST won't let me configure 8 drives in an array: when I select 8 drives for RAID 5, the option to create the array is unavailable (greyed out), and the same happens with 7 drives. When I select only 6 drives, the button to create the array becomes available (white). All 8 drives show up as available.

If I configure two 4-drive arrays, or any other array configuration that uses all 8 drives, creation is successful and the arrays show "Normal" and healthy in the BIOS, but whichever array was created last fails a cyclic redundancy check in Win10. Grrr... If I create multiple arrays using only 6 drives, then everything works as advertised. Again, it doesn't matter which drives I choose.
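(For a sense of what that 6-drive ceiling costs in capacity, here's a quick back-of-the-envelope sketch, assuming textbook RAID 5 overhead of one member's worth of parity and the 6 TB drives listed in my configuration further down; a real IRST volume will report a bit less after metadata and rounding.)

# RAID 5 usable capacity for different member counts of 6 TB drives.
DRIVE_TB = 6

def raid5_usable(members: int, drive_tb: float = DRIVE_TB) -> float:
    """Usable capacity of a RAID 5 volume: (n - 1) * member size."""
    if members < 3:
        raise ValueError("RAID 5 needs at least 3 members")
    return (members - 1) * drive_tb

for n in range(3, 9):
    print(f"{n} x {DRIVE_TB} TB RAID 5 -> {raid5_usable(n)} TB usable")
# 6 members (the most the BIOS would accept here) -> 30 TB
# 8 members (what was attempted)                  -> 42 TB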

Bypassing IRST and using AHCI, I can install Win10 on my NVMe drive with no issues, and all 8 SATA drives plus my NVMe drive show up in Disk Manager as alive and healthy. BUT if I enable IRST in the UEFI BIOS AND have all 8 drives plugged into the PCH controller, Win10 will NOT install: I get a BSOD for iaStorAV.sys while trying to boot into the installation routines. It never really starts, as it BSODs before the first configuration screens are launched.

So is this a configuration issue where I have missed setting something up correctly, or possibly a bug in the UEFI BIOS/IRST, or a limitation of IRST?

At first, I thought maybe it was one of those issues with sharing between M.2_1 and SATA, but I believe I have the BIOS configured so that M.2_1 is on PCIe and not using PCH SATA at all (evidenced by using AHCI mode and having 9 drives show up in Disk Manager: my 8 SATA drives and my NVMe drive).
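(To double-check that drive count without opening Disk Manager, here's a minimal sketch, assuming a Windows 10 host where the stock wmic tool is still present:)

# Count the physical disks the OS enumerates -- the same check as
# eyeballing Disk Manager. Assumes "wmic" is available (it ships with Win10).
import subprocess

out = subprocess.run(
    ["wmic", "diskdrive", "get", "Model,Size,InterfaceType"],
    capture_output=True, text=True, check=True,
).stdout

rows = [line for line in out.splitlines() if line.strip()]
print(out)
print(f"Physical disks enumerated: {len(rows) - 1}")  # minus the header row
# Expected here: 9 (8 SATA drives on the PCH + 1 NVMe on M.2_1)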


Here is my configuration:

Asus ROG Strix x299 Motherboard
8 SATA Ports on PCH controller (8 Seagate 6TB 7200RPM drives)
M.2_1 on PCIe (Samsung 960 Pro 2TB NVMe)
NOTE: Dual posting on both Asus and IRST forums for assistance!

G75rog
Level 10
Mayhaps you have some small clunkers you could toss in there for an 8 drive test?

G75rog wrote:
Mayhaps you have some small clunkers you could toss in there for an 8 drive test?


Unfortunately I do not..... 😞

CharlieH
Level 8
I have 8 drives but I use the following:
RAID 0
VROC
2x HyperV x16 cards
8x Intel 900ps

xeromist
Moderator
Ah, that explains it. Still doesn't explain why it's not really documented or explained but at least you know you aren't crazy.

So what are you going to do now that you have more drives than your controller will support?

xeromist wrote:
Ah, that explains it. Still doesn't explain why it's not really documented or explained but at least you know you aren't crazy.

So what are you going to do now that you have more drives than your controller will support?


I sure wish I could roll back the hands-on time on this one... I would have done things WAY differently! I picked this board because it had 8 SATA ports! I would have still gone with an X299 MB, but might have chosen the X299 Prime board instead. Anyway, I LOVE this board so far! But that is minor in the grand scheme of things. I would have gone with SSDs instead of traditional disks, and that would have changed my case layout so I could have fit bigger radiators for my dual-loop cooling system. That is a bigger challenge now, since I drilled holes in the case metal to run tubing based on the current radiator size! 😞 I could have run bigger rads and had better cooling for when I start to OC this rig! 🙂 But it is what it is and now I move on... maybe a year from now I will change things around, but for now I will go with what I have!

So, long story short, I will run two four-drive arrays: 4 drives configured as RAID 10 and 4 drives configured as RAID 5. I'm keeping MOST of my "active" VMs on the 2 NVMe drives and using these 2 arrays for less performance-hungry VMs, VM archiving, and system/VM backups. Initialization took less than 8 hours for both arrays, so that was a pleasant surprise!
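(For reference, here's a quick sketch of the usable space that layout works out to, assuming 6 TB members and textbook overheads: mirroring halves RAID 10, and RAID 5 loses one member's worth of capacity to parity.)

DRIVE_TB = 6

def raid10_usable(members: int) -> float:
    return members * DRIVE_TB / 2        # half the raw space holds mirror copies

def raid5_usable(members: int) -> float:
    return (members - 1) * DRIVE_TB      # one member's worth of parity

r10 = raid10_usable(4)   # 12 TB, survives one failure per mirror pair
r5 = raid5_usable(4)     # 18 TB, survives a single drive failure
print(f"RAID 10 (4 x 6 TB): {r10} TB usable")
print(f"RAID  5 (4 x 6 TB): {r5} TB usable")
print(f"Total: {r10 + r5} TB usable out of {8 * DRIVE_TB} TB raw")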