MVF PCIe slot 1 stuck at x1 bus width

skruffs01
Level 7
Hello Everyone,

This is my first post on ROG. Hopefully someone has a little insight into this issue.

I have an MVF mobo with two GTX 780s in 2-way SLI. In the BIOS, the GPU in slot one (top) reads as running at x1 bus width and the GPU in slot two at x8. Both should read x8 since I am in SLI. This is confirmed by CPU-Z and GPU-Z. I also ran the render test in GPU-Z to see if it was just a reduced PCIe power state, and nothing changes (GPU1: 3.0 x16 @ x1, GPU2: 3.0 x16 @ x8).
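In case anyone wants to reproduce the check outside GPU-Z, here is a rough Python sketch using NVIDIA's NVML bindings (assumes the pynvml / nvidia-ml-py package is installed; just a sanity check I would expect to agree with GPU-Z, not anything official from ASUS or EVGA):

import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        # Current values can drop (e.g. to x1/gen1) while the card idles;
        # the max values show what the slot actually negotiated at link-up.
        cur_w = pynvml.nvmlDeviceGetCurrPcieLinkWidth(handle)
        max_w = pynvml.nvmlDeviceGetMaxPcieLinkWidth(handle)
        cur_g = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle)
        max_g = pynvml.nvmlDeviceGetMaxPcieLinkGeneration(handle)
        print("GPU %d: gen%d x%d now, gen%d x%d max" % (i, cur_g, cur_w, max_g, max_w))
finally:
    pynvml.nvmlShutdown()

If the max width on GPU1 reads x1 even under load, the link really negotiated at x1 rather than just idling down.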

Here's the rub: my scores in 3DMark 11 Performance and Valley seem to be about right (could be better) for this setup, both at stock settings and with a slight OC on the GPUs and CPU. I am not sure if anyone else here has seen this issue on any of the ASUS Z77 ROG boards. There are a lot of discussions out there, but they all seem to stop before the OP says whether it was fixed or not (not this time).
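For anyone wondering how much x1 should matter, a back-of-the-envelope estimate using the standard PCIe 3.0 rates (treat this as rough math, not a measurement):

PCIe 3.0 = 8 GT/s per lane with 128b/130b encoding
per-lane throughput = 8e9 x (128/130) / 8 bits = approx. 985 MB/s
x1 = approx. 0.98 GB/s vs. x8 = approx. 7.9 GB/s

So a card genuinely stuck at x1 would have roughly one eighth of the slot bandwidth, which is why the normal-looking scores are confusing.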

Any insight would help, since if I am actually missing some performance it would be nice to know. The next step is to tear down the water-cooling loop to investigate.

Items I have tried so far:
- checked the CPU socket pins on the mobo - all OK
- removed and reseated the GPUs (3 times) - OK
- updated to the latest chipset drivers
- reinstalled the video drivers
- cleared CMOS
- reverted to a previous BIOS (same problem present)

System info
3770K (stock and 4.5 GHz)
MVF (BIOS 1903)
2x EVGA ACX SC 780s (driver 320.49)
Seasonic 860W PSU


Chino
Level 15
Welcome to the ROG forums, skruffs01.

Have you tried running just one GTX 780 in the first PCIe slot to see if it runs at x16?

Chino wrote:
Welcome to the ROG forums, skruffs01.

Have you tried running just one GTX 780 in the first PCIe slot to see if it runs at x16?


Thanks!

This is the next step. I have to drain the loop before I can remove the cards. I plan to check both card 1 and card 2 alone in slot 1. I really didn't want to break down the loop, but I guess it's the only option now.

Chino
Level 15
Oh, you're using a custom loop. That is good to know. Before you go draining it, we can try some other workarounds to see if they work for you.

You mentioned downgrading your BIOS to an older version. Which versions have you tried?

Chino wrote:
Oh, you're using a custom loop. That is good to know. Before you go draining it, we can try some other workarounds to see if they work for you.

You mentioned downgrading your BIOS to an older version. Which versions have you tried?


1803 and 1903 - both show the same information in the BIOS (DIMM POST info and NB PCIe config): slot 1 x1 native, slot 2 x8 native.

Flashed 1408, but it shows the same information in the BIOS and Windows as 1803 and 1903: x1 native on slot 1, x8 native on slot 2.

Yes, I tried Gen1, 2, and 3 for both cards, including the three PCIe presets (stability settings, I think); no difference.

Chino
Level 15
Well, it looks like the safest way to check whether the first PCIe slot is defective is taking apart your SLI setup. Is it possible for you to remove your second GPU without having to disassemble your custom loop? You know, like just pull it out, disconnect the PCIe power cables, and hold the GPU in mid-air with your hand while powering up your system and going into the BIOS?