
X670 resource

Shamino
Moderator

I'll use this thread to collect some new test BIOSes for the boards, and maybe also to explain some less understood options

to disable cores per CCD, go here and choose the CCD xx bit map down core option.
each bit stands for an enabled core
best to disable from the back, ie:
110000
instead of 001100
after selection press downcore apply changes, or discard if you made a mistake
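As a rough illustration of how such a bitmap reads (a hypothetical sketch, not the BIOS code; the function name and the left-to-right bit order are my assumptions):

```python
# Hypothetical illustration of the downcore bitmap: each bit stands for
# one core, '1' = enabled, '0' = disabled. Bit order is assumed to run
# left-to-right from core 0 upward.

def enabled_cores(bitmap: str) -> list:
    """Return the indices of cores left enabled by a downcore bitmap."""
    return [i for i, bit in enumerate(bitmap) if bit == "1"]

# Disabling "from the back" keeps the lowest-numbered cores active:
print(enabled_cores("110000"))  # [0, 1] - recommended pattern
print(enabled_cores("001100"))  # [2, 3] - middle cores, not recommended
```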

ocpak/octools

FAQ:
7950X not boosting past 5.5 GHz -> check that C-States are not disabled
Detailed Explanation of the C-State Boot Limiter


Test BIOSes:

new:
X3D OC Preset for those MBs with async BCLK support: (a simple slight perf boost for X3D)
97792

DOCP/EXPO Tweaked: (for simple timings tightening)
97793

strixe-e 1515 

strixe-f 1515 

strix e a 1515 

crosshair hero 1515 

crosshair gene 1515 

crosshair extreme 1515 

creator 670 1515

creator b650 1515

strix 650E I

strix 670 itx

For Crosshair and Strix E-E:

Explanation of Segment 2 Loadline:

dualseg.jpg

Customize a heterogeneous loadline for a dual-segment workload range.

The example above shows loadline Level 6 when the current is in the 0~40 A range, and Level 4 when the current is above 40 A.
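In other words (a minimal sketch, assuming the threshold and levels from the example above; the function and parameter names are invented):

```python
def loadline_level(current_a: float, threshold_a: float = 40.0) -> str:
    """Pick the loadline level for a dual-segment configuration:
    Level 6 while current is in the 0~40 A range, Level 4 above it."""
    return "Level 6" if current_a <= threshold_a else "Level 4"

print(loadline_level(25.0))  # light load  -> Level 6 (flatter droop)
print(loadline_level(60.0))  # heavy load  -> Level 4
```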


Additions for X3D

Dynamic CCD priority switch with Core Flex; OS/driver agnostic, so Win10 and Win11 are both OK

97403

97404

Algo as follows:
If the condition is reached and CCD0 is specified, check whether current mem/cache activity > threshold and the hysteresis count is reached; if fulfilled, switch
If the condition is reached and CCD1 is specified, check whether current mem/cache activity <= threshold and the hysteresis count is reached; if fulfilled, switch
Default hysteresis = 4

Multiple algos can be combined for CCD priority, so the possible combinations are wide

Works on non-X3D too, but of course it is pointless there. Detailed explanation here.
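The switching logic above can be sketched roughly like this (an assumption-laden Python model, not the firmware implementation; the class and parameter names are invented, only the threshold/hysteresis behaviour follows the description):

```python
# Sketch of the described CCD priority switch: when a Core Flex condition
# fires, the switch to the target CCD only happens after the mem/cache
# activity check has held for `hysteresis` consecutive samples (default 4).
# CCD0 (the V-cache die on X3D) wants high mem/cache activity; CCD1 low.

class CcdPrioritySwitch:
    def __init__(self, threshold: float, hysteresis: int = 4):
        self.threshold = threshold
        self.hysteresis = hysteresis
        self.count = 0        # consecutive qualifying samples so far
        self.active_ccd = 0   # currently prioritized CCD

    def sample(self, condition_met: bool, target_ccd: int, activity: float) -> int:
        """Feed one monitoring sample; return the currently active CCD."""
        if condition_met and target_ccd != self.active_ccd:
            ok = (activity > self.threshold if target_ccd == 0
                  else activity <= self.threshold)
            self.count = self.count + 1 if ok else 0
            if self.count >= self.hysteresis:
                self.active_ccd = target_ccd
                self.count = 0
        else:
            self.count = 0
        return self.active_ccd

# With threshold 50 and default hysteresis 4, four consecutive
# qualifying samples are needed before the switch happens:
sw = CcdPrioritySwitch(threshold=50.0)
for _ in range(4):
    active = sw.sample(condition_met=True, target_ccd=1, activity=10.0)
print(active)  # 1 after the fourth qualifying sample
```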


You can check in HWinfo:

Screenshot 2024-01-05 011029.png

Yeah, it shows 4x2 for the sink.2 slot. Both drives are Samsung 990 Pros, and the primary one identifies correctly as 4x4. When moved to another system it's recognized as 4x4.

I'm currently in a chat with Asus Support.

7950X3D — Gene X670E — 2x32GB 6200MT/s CL 32 Corsair Dominator Titanium 64GB kit — MBA 7900XTX — Aorus FO32U2P QD-OLED — 2x4T 990 Pro NVMe — Corsair Link (AIO+Fans) — Corsair 2500X Case — Asus ROG Loki 1000W PSU

DIMM.2 (not sink.2). Stupid auto correct 🙂


Hi, I checked, and it is correct: x4.

drv_4x.jpg

74lobster

ksenchy
Level 11

All 3 of my NVMes are at 4.0 x4 🤷

TheRatCalledRat
Level 8

I have noticed that PCIe lanes are not assigned properly with BIOS 1807 on a ProArt X670E:

The motherboard's PCIe slot 1 and the AMD Radeon Pro VII inserted into it should both support PCIe 4.0 x16. However, on warm and cold (re)boots I have seen assigned lane numbers including x1, x2, x4, x8 and x16. The number of lanes is indicated by the AMD Radeon Pro driver interface and HWInfo64 in Windows 11, as well as by Linux. 3DMark's PCIe feature test then also indicates the bandwidth expected for the respective number of lanes (from just below 2 GB/s for x1 up to just below 30 GB/s for x16).

All BIOS settings are at default; resetting the CMOS or assigning x16 to slot 1 in the BIOS interface does not change the erratic lane assignment. Other PCIe devices also appear to be affected when their number of lanes is below the expected value (e.g. the M.2 NVMe drives or network devices at x1).

Besides the on-board PCIe devices and the graphics card there are two other devices connected to the PCIe bus: a Crucial T700 drive in M.2 slot 1 and a WD SN770 drive in slot 2. (The memory consists of two KSM48E40BD8KM-32HM, the CPU is a 7950X)

After downgrading the BIOS to version 1602, the lane assignment appears to work properly again: PCIe 4.0 x16 every time for the graphics card, as well as the expected numbers for the other PCIe devices. The computer appears stable under load (Prime95, y-cruncher, Cinebench, FurMark etc.) and at idle with both BIOS 1807 and 1602. Only with firmware 1807 are the PCIe lanes assigned inconsistently at boot time.

I suspect the PCIe lane assignment is somehow defective with BIOS version 1807. Has anyone else noticed erratic behavior with the firmware 1807 / the AGESA 1.1.0.1 firmware on their board?


I'm running AGESA 1.0.8.0 and firmware 1710 on my ProArt X670E-CREATOR WIFI and haven't experienced this issue, but thanks for documenting it. SSDs in both slots (MP700 Pro & 990 Pro), RTX 4090 FE and 7950X with 64 GB of QVL Corsair CMT64GX5M2B6000C40 in EXPO profile, and a Dell X550 dual 10 Gbit NIC in the bottom slot.

Hi mum!

I posted the same Link Width issue with the same motherboard (ProArt X670E) and an Intel ARC A750 when running the latest 1807 BIOS here.  I haven't noticed the problem occur on a cold boot, only warm boots (restarts).  

I also have a 7950X, but I have 3 M.2 NVMe drives: two WD SN850X in the PCIe 5.0 slots and one Inland NVMe in the non-shared PCIe 4.0 slot. M.2_1, M.2_2, and M.2_4 are populated.

I'm hoping with others reporting, this will be resolved soon.  Reverting to the 1710 BIOS fixes the issue.  I have seen other board manufacturers having similar issues being reported in their forums with the latest AGESA update.

Concerning the low link width and inconsistency, BIOS 1904 seems to be a substantial improvement for me. So far it has an almost perfect score of coming up with x16: only once in the ~50 boots or reboots I have tried did the GPU link width come up with x8 (from power off). So it seems to be back at the level of 1602(/1710) (I wonder if a future version will always come up with x16 at some point; until then I am still a bit confused). What has changed to make the difference is unfortunately not explained in the changelog or otherwise, so I hope the difference I observe is really based on some change, and not just chance.

Also in other respects, BIOS 1904 seems to work well.

I hope you see an improvement too, SparkyBoy006

To follow up on my previous post, today I checked whether the PCIe link width problem presents itself again after updating to 1807... It does indeed: almost always x1, sometimes x2 or x4, and one time x8 or x16. After that, I reverted to 1710, which showed x16 on the great majority of boots, but occasionally also x8 (never below x8). 1602 appears to have the lowest "error rate". I could see no clear pattern in the link width when booting after complete disconnection from power versus after a reboot.

With Linux, I investigated the PCI bus with "lspci -vvv". When the link width is below x16, the same is true for the PCI bridge, but I do not really comprehend the output... The lower bandwidth indicated by the 3DMark PCIe feature test would then be caused by a bottleneck in the bridge(?)
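For scripting such checks on Linux, sysfs exposes the same negotiated width as lspci's LnkSta line (a small sketch; the BDF address in the comment is only an example, and the `sysfs` parameter exists just to make the helper testable):

```python
from pathlib import Path

def link_width(bdf: str, sysfs: str = "/sys/bus/pci/devices") -> str:
    """Read the negotiated link width for one PCI device,
    e.g. link_width("0000:01:00.0") on a healthy x16 link returns "16"."""
    return Path(sysfs, bdf, "current_link_width").read_text().strip()
```

Logging this value at every boot would make it easy to tally the "error rate" per firmware version instead of eyeballing it.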

A diff between the output for a cold boot of firmware 1710 that ended up with x16 and a cold boot of firmware 1807 that ended up with x1 shows few differences. Only the link width for the PCI bridge, some IRQs, and I/O- vs I/O+ for the Thunderbolt interface appeared to differ... I will list the exact differences below.

Could this point to a defective I/O die which comes to light because of different firmware settings in the newer firmware versions? Or some kind of noise on the PCIe bus that for some reason is more disturbing in more recent firmware? 

Differences between lspci -vvv output for 1710 and 1807:
for ID 00:01.1 PCI bridge:
1710: LnkSta: Speed 16GT/s, Width x16
1807: LnkSta: Speed 16GT/s, Width x1
for ID 01:00.0 PCI bridge:
1710: LnkSta: Speed 16GT/s, Width x16
1807: LnkSta: Speed 16GT/s, Width x1 (downgraded)
for ID 03:00.0 GFX card in slot 1:
1710: Interrupt: pin A routed to IRQ 131
1807: pin A routed to IRQ 165
for ID 04:00.0 NVME in M2_1:
1710: Interrupt: pin A routed to IRQ 56
1807: Interrupt: pin A routed to IRQ 98
for ID 10:00.0 Thunderbolt:
1710: Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
1807: Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
for ID 6e:00.0 iGPU
1710: Interrupt: pin A routed to IRQ 149
1807: Interrupt: pin A routed to IRQ 81
for ID 6e:00.3 USB controller:
1710: Interrupt: pin D routed to IRQ 140
1807: Interrupt: pin D routed to IRQ 72
for ID 6e:00.4 USB controller:
1710: Interrupt: pin A routed to IRQ 149
1807: Interrupt: pin A routed to IRQ 81

I have no clue why the bridge would not run at full width, while the GFX card would allow that.