How far can I safely push VCCIO and VCCSA on the Maximus X Hero WIFI AC?

Nerror
Level 7
And by safely I mean no real degradation over the next 5 years.

System specs as listed in the postbit to the left.

I currently have a stable 5.1GHz OC with RAM at 3466MHz. VCCSA is manually set to 1.25V and shoots up to 1.28V under load. VCCIO is manually set to 1.20V and shoots up to 1.24V under load. Those are the lowest voltages I can run and still be stable.

From Raja's own Kaby Lake OC guide I can read this:

For memory speeds over DDR4-3600 or if using high-density memory kits, voltages up to 1.35V (VCCSA) may be required. Some CPUs have “weak” memory controllers that require elevated voltages to maintain stability. If possible, do not venture too far from 1.35V as a maximum.

And there's a table with suggested voltages on this page: https://edgeup.asus.com/2017/kaby-lake-overclocking-guide/3/, which suggests the same maximum for VCCSA and a max of 1.30V for VCCIO.

But then I read other guides like this one: http://www.overclock.net/t/1621347/kaby-lake-overclocking-guide-with-statistics#, where they write this:
Safe Voltages (Always TENTATIVE):
Vcore: 1.45v/1.37v
VCCIO: 1.25v/1.2v
System Agent (SA): 1.3v/1.25v
Vdimm: 1.4v/1.35v

The first value shows voltages a pretty ballsy person can use. The voltage after the forward slash shows voltages for regular users who don't want to live on the edge.


And another guide goes even lower: https://linustechtips.com/main/topic/773966-comprehensive-memory-overclocking-guide/
For VCCIO/VCCSA, I do not recommend exceeding a value of 1.25v for each. I personally use a value of 1.14v for VCCIO, and 1.15v for VCCSA. Going beyond 1.25v is silly, and may potentially damage your IMC or traces on your board.



While I am inclined to believe an ASUS guy like Raja more, that guide was mainly for Kaby Lake on Z270 boards. Also, it doesn't really touch on the consequences of going that high. So I am hoping there's someone here with actual knowledge of this on the 8700K + Z370 Maximus X Hero combo.

Menthol
Level 14
How could anyone know the long-term effects at this point in time?

Nerror
Level 7
Menthol wrote:
How could anyone know the long-term effects at this point in time?


Well, ASUS should know at least something, based on the components they use. And Intel should know about the CPU. I suppose I could write to them directly instead.

I've now opened tickets with both ASUS and Intel technical support. I guess I'll find out if either of them is able to give an actual direct answer. 🙂

janos666
Level 7
Is there any development in this area (the confusion about recommended and/or "commonly considered safe" IO and SA voltages on Coffee Lake)?

I was an early buyer and I am still a bit puzzled by this. The only stress test in which I can (fairly easily and relatively quickly) reproduce instability is LinX 9.x (http://hwtips.tistory.com/1611). Unless I set VCCIO and VCCSA above 1.3V, I always get differing "Residual" values when running the RAM with its XMP profile (2x16GB G.Skill F4-3200C14-16GVR); the test goes on without error, but seeing different Residual values between runs with the same settings is effectively a failure. I am still not sure about the exact numbers; I am currently using settings which result in HWInfo measuring SA=1.387V and IO=1.328V average under continuous Linpack load, and it's still not good enough.
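
In case anyone wants to automate the run-to-run comparison, here is a rough sketch of my own (purely illustrative): it assumes each LinX run is saved to a plain-text log whose data rows start with the numeric problem size and end with the Residual and Residual(norm) columns, which may need adjusting for your LinX version.

    import re
    import sys

    def residuals(path):
        # Collect the Residual column from every data row of a LinX log.
        # Assumes data rows start with a numeric problem size and end with
        # the Residual and Residual(norm) columns; adjust the column index
        # if your LinX version lays its table out differently.
        vals = []
        with open(path) as f:
            for line in f:
                cols = line.split()
                if len(cols) >= 6 and re.fullmatch(r"\d+", cols[0]):
                    vals.append(cols[-2])  # Residual (second-to-last column)
        return vals

    # Compare residuals across all logs given on the command line, e.g.:
    #   python check_residuals.py run1.txt run2.txt run3.txt
    seen = set()
    for log in sys.argv[1:]:
        seen.update(residuals(log))

    if len(seen) > 1:
        print("FAIL: residuals differ between runs:", sorted(seen))
    else:
        print("OK: identical residuals across runs:", sorted(seen))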

When people talk about these settings, many seem to fail to realize that:

  • relatively low frequencies with relatively tight timings can be just as demanding on the memory controller as relatively high clocks with relatively loose timings
  • higher-capacity modules are usually built with more chips and thus have more "ranks" (aka dual-rank), so 2x16GB is much more similar to 4x8GB than to 2x8GB in its "demands" (and this does make a difference; just try running with 4 sticks and you will see... there's a small illustration below)
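
As a back-of-the-envelope illustration of that second point (the per-DIMM rank counts are my assumption for typical kits of this era, including this G.Skill one):

    # Total ranks presented to the memory controller for common DDR4 configs.
    # Assumption: these 16GB DIMMs are dual-rank, the 8GB DIMMs single-rank.
    configs = {
        "2x8GB  (single-rank DIMMs)": 2 * 1,
        "2x16GB (dual-rank DIMMs)":   2 * 2,
        "4x8GB  (single-rank DIMMs)": 4 * 1,
    }
    for name, ranks in configs.items():
        print(f"{name}: {ranks} ranks")
    # 2x16GB and 4x8GB both load the IMC with 4 ranks, which is why
    # they stress it similarly, and noticeably more than 2x8GB.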


So, I am not entirely surprised that I need voltages high enough to make some people fret and call me crazy, but I am puzzled why it's so hard (seemingly impossible) to get these modules to work with their XMP profile even after going significantly above the Auto-set SA and IO voltages.

It feels like it could take forever to find a working pair of SA and IO voltages (there's a sketch of the search space after this list), because:

  • it's anecdotally possible to cause instability by setting either or both simply too high (so starting from the sky is not a fast path)
  • maybe it's important to keep a certain difference between the two (but the exact difference range has to be figured out individually)
  • it doesn't seem possible to tell which of the two is too high or too low (at least not without an insane amount of testing, trying various combinations)
  • testing takes hours on every turn (it can run 3+ hours before I spot an error, but I would prefer to use the PC for something other than testing from time to time)
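
Just to show how quickly the combinations pile up, here is a toy sketch of such a search grid; the bounds, the 50mV step, and the "keep SA at or slightly above IO" constraint are all just my assumptions, and the voltages still have to be dialed in manually in the UEFI for each round:

    import itertools

    # Candidate values in 50mV steps across a plausible window
    # (assumed bounds, not recommendations).
    SA_RANGE = [round(1.10 + 0.05 * i, 2) for i in range(6)]  # 1.10..1.35V
    IO_RANGE = [round(1.05 + 0.05 * i, 2) for i in range(6)]  # 1.05..1.30V
    HOURS_PER_TEST = 3  # an error can take ~3 hours to show up

    # Optional constraint some people suggest: SA at or slightly above IO.
    def plausible(sa, io):
        return 0.0 <= round(sa - io, 2) <= 0.15

    plan = [(sa, io) for sa, io in itertools.product(SA_RANGE, IO_RANGE)
            if plausible(sa, io)]

    print(f"{len(plan)} combinations, ~{len(plan) * HOURS_PER_TEST} hours of testing")
    for sa, io in plan:
        print(f"try VCCSA={sa:.2f}V  VCCIO={io:.2f}V")

Even this coarse grid works out to days of continuous Linpack time, which is exactly the problem.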


Of course, the RAM could be faulty, but I doubt that, since changing these voltages seems to have an impact on how long the test can run before an error (the higher the voltage, the longer it goes, although I don't want to find out where it starts to suffer permanent damage instead of finally stabilizing).

That said, I can use the PC for anything with no apparent issues despite the LinX failure (no BSODs, no random software crashes, no corrupted file systems; everything seems fine).

bass_junkie_xl
Level 12
I wonder too. I have my 16GB kit of G.Skill RGB 3000 14-14-14-34 1.35V, and its Auto XMP does 1.11 VCCSA / 1.31 VCCIO, lol. I have them @ 4000 18-19-19-38 1.4V with VCCIO at 1.2V (1.22V under load) and VCCSA at 1.2V (1.23V under load). A lot of those 4000-4600 memory kits do over 1.3V on both on Auto, so I don't think G.Skill would sell RAM that runs 1.3V+ on those if it would fry your CPU in a year. It would be nice if someone ran a CPU @ 5GHz 1.3V and RAM @ 4000 with VCCIO and VCCSA @ 1.35V each and Prime95'd it to see if degradation happens, but we will never know, lol.
Rig # 1 - 14900Ks SP-124 | 90 MC @ 6.0 GHZ | 5.2 R | 4.7 E | DDR5 48GB @ 8,600 c36 | Strix RTX 4090 | PG27AQN 1440P 27" 360 Hz G-Sync ULMB 2

Rig # 2 - 14900Ks-SP-118 | 89 MC @ 5.9 GHZ | 5.2 R | 4.7 E | DDR4 32GB @ 4,533 c16 | Strix RTX 3080 | Aoc 1080P 25" 240 Hz G-Sync

janos666
Level 7
I usually had everything at Auto in the "Tweaker's Paradise" sub-menu (or tried some minor adjustments to a few settings as per some random recommendations found on the internet, to no apparent avail). Today, I set all the voltages there to Standard (as indicated in the footnote in the Setup when a line is highlighted) + 0.1V, even the ones which I can't hope to understand or which look completely irrelevant (so I literally just cranked all the knobs up by the same notches).

This didn't seem to help, so I also set VCCIO and VCCSA to 1.35V (which yields ~1.38V under load; nothing I hadn't tried earlier, but it didn't do the trick, so I had tracked back to lower values and searched elsewhere), and I also pushed the VDIMM from 1.35V (the XMP preset) to 1.36V (sometimes I even pushed it slightly above 1.4V as a test, but it didn't seem to help in itself, so I didn't leave it that high).

And now, with all the voltages cranked up to the sky (or at least close to the clouds), it finally looked stable under LinX 9.3... until it didn't.

Is it possible my CPU has one of the worst memory controllers and simply can't do 3200MHz CL14, no matter what?

Maybe I should spare an entire day to run LinX again at fully default CPU and JEDEC RAM speeds. I remember running it with no error for 5+ hours in the past like that (2666MHz at whatever Auto timings, CPU Turbo disabled), but it probably would not hurt to check again (it could be a software fault). I even remember running it for 3+ hours the other day at slightly lowered RAM speeds (maybe 3000MHz C14), but XMP (or at least very close) settings rarely make it past 1-2 hours (could be random luck, though).

[screenshot of the LinX run]

(The GFLOPS values fluctuate because I ran it in the background at low CPU priority. These remain fairly stable when I am not doing anything else.)
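
(If anyone wants to reproduce that kind of background run: here is a minimal sketch of launching the test at reduced priority via Python's standard library; the executable path is a placeholder for wherever your LinX lives, and the priority constants are Windows-only, available since Python 3.7.)

    import subprocess

    # Launch a stress test at below-normal priority so the PC stays usable.
    # The path is a placeholder; point it at your own LinX install.
    # Use IDLE_PRIORITY_CLASS instead to drop the priority even further.
    proc = subprocess.Popen(
        [r"C:\tools\LinX\LinX.exe"],
        creationflags=subprocess.BELOW_NORMAL_PRIORITY_CLASS,
    )
    proc.wait()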

Juggla
Is the RAM on the ASUS compatibility list for your motherboard?
Have you run the CPU at stock default speed and tested the RAM?

janos666
Level 7
Juggla wrote:
Is the RAM on the ASUS compatibility list for your motherboard?

Yes, this kit is on the official QVL (latest document revision).
Juggla wrote:
Have you run the CPU at stock default speed and tested the RAM?


Not for a long time. Several motherboard UEFI updates and Win10 Spectre updates happened in the meantime, so I started a fresh run with the CPU fixed to its base clock (taking the value from Intel's ARK page for the 8600K). It indeed ran significantly longer than usual, but that could be influenced by many things (this time I left the PC completely alone instead of doing relatively light work in the foreground from time to time, plus random luck, plus lower heat and probably less VRM noise due to the lighter power demand).

[screenshot of the LinX run]

When I have time, I will start yet another fresh test with the CPU at reasonable OC speeds (to push the heat and VRM workload higher for a more stressful environment, rather than creating optimal conditions in this regard) and the RAM clocked to the highest officially supported JEDEC profile (2666MHz, if I am not mistaken), because I forgot to take a screenshot of a similar run and I am not sure how long it went before I stopped it (I don't fully trust my memory after trying so many random settings over a long period).

But I think I probably won't be able to find the problem on my own without temporarily replacing at least one component in the system. The reasons I haven't done so already are:

1: The system seems to be stable under Prime95 and normal everyday use, so why worry about the possibility of a hidden fire until there is a sign of smoke?

2: I saw several people having similar Residual differences with LinX 9.x, so it's still possible that nobody ever gets matching Residuals on Coffee Lake systems in this test, and it's just a matter of random luck and how long we let the test run (plus other conditions, like heat). Maybe I just falsely concluded "it ran long enough" when I got convinced matching residuals were possible under the right conditions.

So, there could be either some bug in Coffee Lake, or a bug in Linpack which only manifests itself with Coffee Lake, or the usual suspects: faulty memory, faulty CPU, faulty motherboard, etc. It's really hard to tell from here.

Edit:

Well, I couldn't reproduce my earlier 6+ hour run with no residual glitch, and I found several conversations in which people agreed that LinX > 6.5 behaves this way (at least with Skylake and later Intel CPUs), so my conclusion is that my one-time long run was simply luck and this software should not be used in this manner (it might still be good for stress testing, but one should not worry about checking the residuals, although that makes it less useful since the results aren't fully obvious).