08-09-2025 12:46 PM
I have a problem: I'm unable to enable Native ASPM on this board. When I set it to Enabled, it resets itself back to Auto, which causes ASPM to be managed by the BIOS instead of the OS. What's the point of an Enabled option if it doesn't stick? I understand that this might cause issues with power management of various devices, but I want to minimize my PC's power usage, so I'd like to try it anyway.
Is there a way to solve this?
08-09-2025 05:04 PM
What OS are you using? Do you have a dual-boot setup or a single OS installed?
08-12-2025 07:01 PM
Hi, thank you for taking the time to reply. I'm using CachyOS Linux. I have several NVMe disks, but let's pick the one connected through the M2_3 slot (lspci reports ASPM substates as supported only for devices connected through the PCH). When I put an NVMe disk into an M.2 slot, the PCIe port always reports itself as active, regardless of whether any traffic goes through it at all. This is probably why ASPM doesn't really work. To make sure, I tested this with the NVMe disk unmounted from the system. As for why Enabled doesn't stick for Native ASPM, I have no idea; it seems like a bug in the UEFI? If you need anything else, let me know. PCIe device tree (cut for brevity):
lspci -tv
-[0000:00]-+-00.0 Intel Corporation Raptor Lake-S Host Bridge/DRAM Controller
+-1d.4-[07]----00.0 Samsung Electronics Co Ltd NVMe SSD Controller S4LV008[Pascal]
lspci -s 07:00.0 -vv |grep DLA (DLActive- means idle)
TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
lspci -s 00:1d.4 -vv |grep DLA (DLActive+ means active)
TrErr- Train- SlotClk+ DLActive+ BWMgmt- ABWMgmt-
lspci -s 00:1d.4 -vv |grep ASPM
LnkCap: Port #13, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, LnkDisable- CommClk+
L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
L1SubCtl1: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+
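As a side note, the kernel-side ASPM policy can also be inspected, and in some cases overridden, from Linux without relying on the BIOS option. A minimal sketch using standard sysfs paths (the 0000:07:00.0 address is the endpoint from the tree above; the link/ attributes are only exposed on reasonably recent kernels built with ASPM sysfs support):

```shell
# Global ASPM policy chosen by the kernel
# (one of: default performance powersave powersupersave)
cat /sys/module/pcie_aspm/parameters/policy

# Per-link ASPM toggles for the endpoint, if the kernel exposes them
# (l1_aspm, l1_1_aspm, l1_2_aspm, ...)
ls /sys/bus/pci/devices/0000:07:00.0/link/ 2>/dev/null \
  || echo "link attributes not exposed on this kernel"
```

If the link/ attributes exist, writing 1 to one of them as root enables that state on the link, independent of what the firmware negotiated at boot.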
08-13-2025 12:57 AM - edited 08-13-2025 01:07 AM
Thank you for taking the time to reply. I'm using Linux, no dual boot. I have several NVMe drives connected to various PCIe ports (some through PCIe adapter cards); removing everything but one NVMe doesn't change anything. Only the M.2 slots connected through the PCH get ASPM substates (L1.1/L1.2) negotiated. M2_1 and the M.2 connected through an adapter in PCIE16X_1 do claim to have ASPM L1, but no substates (why?). What is strange to me is that the PCIe port connected to my NVMe device, for example in the M2_3 slot, is always seen as active (no matter the I/O load) and never goes to sleep, even when the device itself is idle. I have even unmounted the filesystem on it so reads and writes are impossible; still no dice. ASPM with all of the substates is negotiated successfully. APST is enabled on the drive (a 990 Pro), though I can't confirm it works. Do you have any idea why Native ASPM set to Enabled doesn't stick?
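For what it's worth, APST can be confirmed on the controller itself with nvme-cli rather than trusting the kernel default. A sketch, assuming nvme-cli is installed and the 990 Pro is /dev/nvme0 (adjust the device node for your system):

```shell
DEV=/dev/nvme0  # hypothetical node for the 990 Pro; adjust as needed
if command -v nvme >/dev/null 2>&1; then
    # Feature 0x0c = Autonomous Power State Transition;
    # -H decodes the APSTE bit and the per-state transition table
    sudo nvme get-feature "$DEV" -f 0x0c -H
    # Power states the controller advertises, with entry/exit latencies
    sudo nvme id-ctrl "$DEV" | grep -E '^ps +[0-9]'
else
    echo "nvme-cli not installed"
fi
```

If APSTE decodes as enabled and the idle timers in the transition table are non-zero, the drive should be dropping into its low-power states on its own; the link staying in DLActive+ would then point at the root port side rather than the SSD.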
08-25-2025 12:48 AM
Is it possible that your CMOS battery is dead and that's why your settings are reverting to defaults? If other settings stick as they should, then obviously that's not the problem, but it may be worth checking.
08-25-2025 03:23 AM
Not possible. All other settings are saved properly.