
Z690 Extreme / PCIEX16_1 lanes at x8 if I use M.2_1

simon_leuener
Level 10
As in the title: if I use
1 graphics card in PCIEX16_1 and 1 M.2 in M.2_1,
the graphics card's lanes are reduced to x8.

If I insert my M.2 into M.2_2 instead, I get the full x16 lanes on the graphics card.
I don't think the board is broken...

How would you ever use the Gen 5 M.2_1 slot if it halves the graphics card lanes to x8???
I don't understand ASUS, if this is normal behaviour. It's a damn 1200 EUR board.

Should I put the M.2 in M.2_1 and the graphics card in the second slot, PCIEX16_2?
What is a solution (besides getting a new or different board lol)?

Thanks for the help everyone 🙂
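(If anyone wants to double-check what their own slot negotiated, GPU-Z shows it under "Bus Interface" on Windows; on Linux the kernel exposes the same thing in sysfs. A minimal sketch, assuming a Linux box; the device address 0000:01:00.0 is just an example, check yours with `lspci`:)

```python
# Read the negotiated PCIe link width/speed for the GPU from sysfs.
# current_link_width / current_link_speed are standard kernel attributes;
# the PCI address below is an assumption -- substitute your GPU's address.
from pathlib import Path

gpu = Path("/sys/bus/pci/devices/0000:01:00.0")
width = (gpu / "current_link_width").read_text().strip()   # e.g. "8" or "16"
speed = (gpu / "current_link_speed").read_text().strip()   # e.g. "32.0 GT/s PCIe"
print(f"GPU link: x{width} @ {speed}")
```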

Spicedaddy
Level 9
It's a limitation of the CPU. Alder Lake has 16 PCIe 5.0 lanes plus 4 PCIe 4.0 lanes.

No motherboard will give you x16 PCIe 5.0 graphics and x4 PCIe 5.0 NVMe at the same time.

It's an odd one how the Extreme board has M.2_1 sharing lanes with PCIe_1 and PCIe_2, all from the CPU. M.2_1 is linked to the PCIe_1 and PCIe_2 slots, the same as if you popped in the M.2 expansion card or another GPU in PCIe_2: you would then be running x8/x8.



It was another reason I went with the Apex board: all M.2 slots (only 2 onboard vs 3 on the Extreme) on the board and on the DIMM.2 expansion card are strictly linked to the PCH (chipset).

The CPU has 20 lanes.
The PCH adds 12 lanes (PCIe 4.0) plus 16 lanes (PCIe 3.0)...

See the link below and check out the diagram for the lane allocation.

The easy solution is to use the other M.2 slots on the Extreme board. I believe that board has 3 onboard M.2 slots, so you should be able to use M.2_2 and M.2_3 for your NVMe drives while maintaining x16 on PCIe_1 (top slot, GPU).

You may also be able to use your DIMM.2 expansion card populated with NVMe drives, as those also come off the PCH.

https://amp.hothardware.com/news/intel-z690-alder-lake-chipset-specs-pcie-5-ddr5-gear-4-mode
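(To summarise the sharing behaviour described above as a toy model: an illustration of what this thread reports, not ASUS firmware logic. The slot names mirror the ones used in this thread.)

```python
# Toy model of the CPU lane mux on this board, per the posts above:
# the CPU's 16 Gen5 lanes bifurcate to x8/x8 whenever M.2_1 or the
# second full-length slot needs lanes.
def cpu_lane_split(m2_1_populated: bool, pcie_2_populated: bool) -> dict:
    if m2_1_populated or pcie_2_populated:
        return {"PCIe_1": 8, "PCIe_2 / M.2_1": 8}
    return {"PCIe_1": 16}

print(cpu_lane_split(m2_1_populated=True,  pcie_2_populated=False))  # x8/x8
print(cpu_lane_split(m2_1_populated=False, pcie_2_populated=False))  # full x16
```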

captaintrips
Level 7
So just to be clear (as I'm building out a 12900K system on my Maximus Z690 Extreme as we speak): if I use a Gen 4 M.2 NVMe in the M.2_1 slot, it should still allow me to have x16 on my PCIE5_1 slot, correct?

It will only impact it if I use a Gen 5 M.2 NVMe in that M.2_1 slot, right?

I see the warning that the M.2_1 slot shares PCIe 5.0 bandwidth with the PCIE5_1 and _2 slots, but if I'm using a Gen 4 M.2 NVMe in that slot, it shouldn't halve the lanes on the PCIE5_1 slot for my GPU?

captaintrips wrote:
So just to be clear (as I'm building out a 12900K system on my Maximus Z690 Extreme as we speak): if I use a Gen 4 M.2 NVMe in the M.2_1 slot, it should still allow me to have x16 on my PCIE5_1 slot, correct?

It will only impact it if I use a Gen 5 M.2 NVMe in that M.2_1 slot, right?

I see the warning that the M.2_1 slot shares PCIe 5.0 bandwidth with the PCIE5_1 and _2 slots, but if I'm using a Gen 4 M.2 NVMe in that slot, it shouldn't halve the lanes on the PCIE5_1 slot for my GPU?
No, any NVMe drive in that slot will drop you to x8. I'm in the same boat: same board, and all 3 of my M.2 slots are occupied. Besides, in most cases you'll never perceive the difference between x8 and x16; the GPU doesn't come close to saturating all that bandwidth. Through testing, I've not seen any differences worth mentioning in 3DMark benchies with a drive in M.2_1 or not. I will say, I didn't buy a 1k mobo to not eventually run a Gen 5 SSD in slot 1; you'd want your fastest SSDs in those slots (off the CPU) and everything else off the PCH.
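(To put rough numbers behind "doesn't come close to saturating": a back-of-the-envelope sketch, my own arithmetic rather than anything from this thread. Per-lane throughput is the raw GT/s rate scaled by the 128b/130b encoding used from Gen 3 onward.)

```python
# Approximate per-direction PCIe bandwidth per link, in GB/s.
GTS = {3: 8.0, 4: 16.0, 5: 32.0}  # raw line rate per lane, GT/s

def bandwidth_gbs(gen: int, lanes: int) -> float:
    # GT/s x 128b/130b encoding efficiency / 8 bits per byte x lane count
    return GTS[gen] * (128 / 130) / 8 * lanes

for gen in (3, 4, 5):
    print(f"Gen{gen}: x8 = {bandwidth_gbs(gen, 8):5.1f} GB/s,"
          f" x16 = {bandwidth_gbs(gen, 16):5.1f} GB/s")
# Gen4 x8 (~15.8 GB/s) already matches Gen3 x16, which current GPUs
# barely saturate -- hence the negligible 3DMark deltas.
```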

captaintrips
Level 7
Ugh.... Well, I've got three Gen 4 NVMes.

Guess I'll move two to the PCH riser card, and the OS one I'll bump down to M.2_2 then 😕

I know, I know... there's no performance hit between x8 and x16, and there likely won't be in the near future... it's just the principle of the matter.

Fortunately I'm in the process of doing the cabling and haven't booted it up yet, which means those M.2 thermal pads should still be reusable.

rjbarker
Level 11
It would be nice if ASUS included a detailed explanation of how the M.2 drives share lanes with the PCIe GPU slots... something more detailed within the manuals.

I would like to see benchmark comparisons of x16 vs x8, as I have read (but not tried it) that the 3080 Ti / 3090 are in the category of GPUs now saturating x8 lanes...

rjbarker wrote:
It would be nice if ASUS included a detailed explanation of how the M.2 drives share lanes with the PCIe GPU slots... something more detailed within the manuals.

I would like to see benchmark comparisons of x16 vs x8, as I have read (but not tried it) that the 3080 Ti / 3090 are in the category of GPUs now saturating x8 lanes...
This may be true with PCIe 3.0 x8 vs x16, but it's not even close on PCIe 4.0, and even less so on 5.0. I'll try to get some results up soon. PM me, rj, if you like, and I'll work with you on this.

jking63 wrote:
This may be true with PCIe 3.0 x8 vs x16, but it's not even close on PCIe 4.0, and even less so on 5.0. I'll try to get some results up soon. PM me, rj, if you like, and I'll work with you on this.

Thanks JKing for the offer, much appreciated!!
I'm not too concerned, as I know I can populate both of my onboard M.2 slots on my Apex board and still have x16 on the top GPU slot...

For me it's more my OCD 🙂 ... if that makes sense!!!!

Cheers

Mangonz
Level 9
Old-ish thread, but I checked the PCIe lanes when I did my build.

M.2_1 shares the 16 PCIe 5.0 lanes with the GPU, so don't use it. It would only be useful with a PCIe 5.0 SSD, of which none currently exist.
M.2_2 has 4 dedicated PCIe 4.0 lanes direct to the CPU that are not shared with anything. Use this one for your main/OS SSD, as it has the best performance.
M.2_3 goes through the PCH, so it shares the chipset's x8 PCIe 4.0 DMI uplink with everything else on the chipset: 10G LAN, 2.5G LAN, USB, Thunderbolt, Wi-Fi, Bluetooth, and the ROG DIMM.2 SSDs. If you use a lot of them at once it will slow down, but not as much as in past generations thanks to the increased DMI bandwidth (rough math in the sketch after this list).
ROG DIMM.2 M.2_1 shares the PCH path with M.2_3.
ROG DIMM.2 M.2_2 likewise shares the PCH's DMI bandwidth.
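(A minimal sketch of that last point: worst-case demand of the PCH-attached devices listed above vs. the DMI 4.0 x8 uplink. The per-device peak figures are my rough assumptions for illustration, not measurements.)

```python
# Compare the Z690 PCH's DMI 4.0 x8 uplink against the nominal peak
# demand of everything hanging off the chipset (assumed figures).
DMI_X8_GEN4_GBS = 8 * 16.0 * (128 / 130) / 8  # ~15.8 GB/s per direction

pch_devices_gbs = {
    "M.2_3 (Gen4 x4 SSD)":       7.9,
    "DIMM.2 slot 1 (Gen4 x4)":   7.9,
    "DIMM.2 slot 2 (Gen4 x4)":   7.9,
    "10G LAN":                   1.25,
    "2.5G LAN":                  0.31,
    "USB / Thunderbolt / misc":  4.0,
}
demand = sum(pch_devices_gbs.values())
print(f"DMI uplink: {DMI_X8_GEN4_GBS:.1f} GB/s, worst-case demand: {demand:.1f} GB/s")
print("Oversubscribed" if demand > DMI_X8_GEN4_GBS else "Fits")
# Everything at full tilt roughly doubles the uplink, so heavy
# simultaneous use slows down -- but light, staggered use is fine.
```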