
Help understanding overclocking

Murum
Level 7
INTRO

First of all, this is my first post and topic, so I'd also like to say hello to everyone and to say what a great wealth of information these forums are.

As the topic title says, I'm trying to understand overclocking, starting with software (GPU) overclocking, so I can draw out the maximum potential of my hardware. What I want to achieve with this topic is to post any questions I come across and see if anyone can enlighten me. I did search various OC forums and either found no satisfactory answer or did not fully understand the answers there, and I hope I'll understand yours. You may want to tell me to simply start a thread on one of those forums instead of here; true, I could and eventually will, but I see folks here with more than the average knowledge, so I'll pick your brains first, and who knows, through it you and I may both learn something new. I've also gone through the overclocking guide stickies here; they are helpful, but there are still things I don't understand, and I don't want to blindly apply settings without understanding how they work.

The reason I'm trying to understand overclocking is that, down the line, I'll be building a new PC and I'd like to safely gain the maximum performance I can out of it, so I want to use my current rig to learn.

With the introduction and the whys out of the way, let me begin.

My questions for now will mostly be about software overclocking of a GPU.

What I will be using and why

I will be using GPU Tweak II to do my OC, GPU-Z/Open Hardware Monitor for monitoring, and Unigine Heaven/Valley along with 3DMark/FurMark for benchmarking and stability testing.

The reason I'm using GPU Tweak and not Afterburner is that it came with my current GPU. They use pretty much identical settings with the same options; the only difference I've seen so far is the 2D/3D app profiling system that Afterburner has and GPU Tweak does not. If I'm missing something and I should switch over because of a clear performance advantage over GPU Tweak, please tell me why I should.

System Information

The GPU I will be using is the Strix 1050 Ti 4GB on stock cooling. It's not a GPU that is even worth overclocking, point known and understood; again, I'm trying to learn to overclock, not make a monster out of a 1050 Ti.

I'm on Windows 7 x64. As far as I can tell I have no CPU bottleneck on this old rig with its Phenom II 965 BE at a 3.7 GHz OC; as I understand it, you can spot a bottleneck when the first or second core sits at 100% usage while the GPU isn't fully utilized. I have 8GB of 1600 MHz CL9 RAM. It's a very old rig, nearing seven years old now, and like I said I'm in the process of building a new one; unfortunately I'm waiting for the new 2950X CPU and the next xx80/Ti series, so I'm making do with what I have.

Onward to helping me learn

GPU's default values

From gpu tweak

GPU boost clock: 1392 MHz
GPU voltage: +0
Mem clock: 7008 MHz effective (1752 MHz actual)
Fan speed: +0 / 0%
Power target: +0 / 100%
Temp target: +0 / 83 °C


What I've done so far

So far, from what I've read and understood, this is what I've done with GPU Tweak:

GPU boost clock: 1392 -> 1472 MHz (+80)
GPU voltage: +0
Mem clock: 7008 -> 7258 MHz effective (+250); 1815 MHz actual
Fan speed: +0 / 0% (using a custom fan curve: 30% at 40 °C ramping to 100% at 83 °C)
Power target: 100% -> 110% (+10)
Temp target: 83 °C -> 84 °C (+1)


This is my current profile. It's not fine-tuned, as I'd like to reach a final stable OC before I fine-tune it. I arrived at this by the following methodology:

GPU clock: upward increments of +10 MHz, benchmarking with Heaven/Valley after each step, until I noticed on the sensor tab that the PerfCap reason started hitting PWR, then dropped it by 10 MHz. After that I did the same for the memory clock, going up in 100 MHz increments and then dropping by 50 MHz once I hit the PWR reason again. Then I ran a stability test in Heaven for 30 minutes (from what I've seen, by that time it would have crashed if there were any issue; an earlier attempt at +150 MHz on the core and +1200 MHz on the memory at 125% power target with no voltage increase crashed 28 minutes into Heaven). Finally I played the most GPU-intensive game I had at the time (modded Skyrim... yes, modded; it's probably a better stability test than Heaven/Valley/FurMark/3DMark for me) for about an hour with no issues.
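Written out as pseudocode (the helpers below are just placeholders for what I do by hand in GPU Tweak and GPU-Z, not real APIs), the loop was roughly:

# Rough sketch of the manual tuning loop; apply_offset, run_benchmark and
# power_limit_hit stand in for steps done by hand in GPU Tweak / GPU-Z.
def find_offset(apply_offset, run_benchmark, power_limit_hit,
                step_mhz=10, backoff_mhz=10):
    """Raise the clock offset in small steps until the PerfCap reason shows PWR,
    then back off one step and keep that value."""
    offset = 0
    while True:
        offset += step_mhz
        apply_offset(offset)        # set the new offset in GPU Tweak
        run_benchmark()             # a Heaven/Valley pass while watching GPU-Z
        if power_limit_hit():       # PerfCap reason starts showing PWR
            offset -= backoff_mhz
            apply_offset(offset)
            return offset

The same loop covers the memory clock with step_mhz=100 and backoff_mhz=50.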

From my understanding, PWR showing up as the PerfCap reason indicates that the GPU is hitting the ceiling on the maximum power it can draw at those clocks, and would therefore need more power/voltage to make effective use of higher clocks. As such, I came to the conclusion that if I'm hitting the PWR reason there's no point clocking higher, since the card will just throttle down and not give me the full benefit.


Questions so far


  • I dread raising the voltage on a GPU without VRM sensors and without an aftermarket cooler that I know would cool the GPU's VRMs, or at least a simple liquid-metal TIM mod on the VRMs; my thinking is that I shouldn't overvolt otherwise. Is that a correct principle to follow?

  • Is my methodology correct, and am I understanding the PWR entry under PerfCap Reason correctly?

  • Can someone help me understand the power target slider in Afterburner/GPU Tweak?
    From what I can tell, it modifies the threshold, or the TDP, at which the card throttles down when it reaches either the power or the temperature it is set at. What I want to learn and understand is: A) where can I safely place it? B) how does it work? Does the card operate within a fixed power-draw range set by the BIOS, with the power target deciding the maximum it can draw from within that range?
    For example, most overclocking guides I've seen recommend just putting it at max, but their max is typically 110-125%, while mine is 200%, which is what got me confused. In GPU-Z, the NVIDIA BIOS info shows that the power limits of the card are 52.5 W minimum, 75 W default and 150 W maximum. Is that the safe range the manufacturer has determined the card can operate in? And does the power target simply map onto it, with anything above 100% (75 W) scaling up to 200% (150 W), and anything below 100% going down to 67% (52.5 W)? (A rough sketch of this arithmetic follows the list below.)

  • Continuing from the power target slider: what does it actually mean to change the TDP of the hardware? For example, can you keep pumping voltage/watts into a piece of hardware as long as there's sufficient cooling, and therefore clock it higher? Or does hardware have a voltage/wattage range beyond which it will literally burn up in a blaze of glory, no matter the cooling?
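To check that I'm reading the slider right, here's a minimal Python sketch of how I assume it maps to watts, using the 52.5 W / 75 W / 150 W figures GPU-Z reports for my card (this mapping is my assumption, not something I've confirmed):

# Assumption: the power target is a percentage of the default board power,
# clamped to the minimum/maximum limits the BIOS reports (GPU-Z figures below).
MIN_W, DEFAULT_W, MAX_W = 52.5, 75.0, 150.0

def power_limit_watts(target_percent):
    """Translate the power-target slider (%) into an approximate wattage cap."""
    watts = DEFAULT_W * target_percent / 100.0
    return max(MIN_W, min(MAX_W, watts))

print(power_limit_watts(100))  # 75.0  -> stock limit
print(power_limit_watts(110))  # 82.5  -> my current +10% setting
print(power_limit_watts(200))  # 150.0 -> slider maxed = the BIOS maximum
print(power_limit_watts(67))   # 52.5  -> 50.25 W, clamped up to the BIOS minimum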


As you can see, I'm no expert and don't really understand how the power side of things works, which is probably the main reason I hesitate and don't do anything.
I realize these are questions many of you will cringe at and/or find silly and dumb, but if I don't ask them I won't learn 😞

Thanks for taking the time to read all of this.


Notes*
Down the line, if I get more questions, I will bump the thread and update the original post with an [EDIT xx/xx/xx time.xx:x] tag, placing the new question above this note.

Silent_Scone
Super Moderator
1) You will have no problems increasing the voltage cap on NVIDIA cards as there is both a soft and hard amperage limit in place. NVIDIA's Greenlight program (restricting cards from breaching certain power conditions) makes it extremely difficult to damage the GPU through software overclocking.

2) The Power Limit is induced when the card reaches the maximum TDP limit. By increasing the Power Target to the maximum value you allow the card to draw the maximum amount of current. Whether this limit is reached depends on thermal load, how demanding the application is and the core and voltage offset. Note the voltage offset slider simply unlocks higher voltage points along the GPU Boost Curve, which you can look at for yourself in the User Defined settings for Core offset in GPU Tweak.

3) See above, as long as temperatures are within spec, you can safely raise the Power Target. The limit works as you suggest in your post.

4) GPU Boost 3.0 works on GPU temperature and power conditions. As long as there is headroom, the card will boost higher than what the vendors specify until either condition is breached. Pascal has a voltage limit of 1.093v, which you will rarely see due to the limits in place. This is true even when the voltage slider is increased, as most cards will reach a power limit before being able to utilise this end of the spectrum.
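To put those interactions in rough terms, here is a toy illustration (not NVIDIA's actual algorithm; the labels mirror what GPU-Z reports and the thresholds are simplified):

# Toy illustration: the card keeps boosting while it has power, thermal and
# voltage headroom, and whichever limit is reached becomes the PerfCap reason.
def perfcap_reasons(board_power_w, power_limit_w, gpu_temp_c, temp_limit_c,
                    voltage_v, voltage_limit_v=1.093):
    reasons = []
    if board_power_w >= power_limit_w:
        reasons.append("Pwr")    # power target reached; raising it buys headroom
    if gpu_temp_c >= temp_limit_c:
        reasons.append("Thrm")   # temperature target reached
    if voltage_v >= voltage_limit_v:
        reasons.append("VRel")   # top of the boost curve / voltage limit
    return reasons or ["none - the card keeps boosting"]

# A card sitting at its 75 W limit reports Pwr long before the 1.093v cap matters:
print(perfcap_reasons(board_power_w=75, power_limit_w=75,
                      gpu_temp_c=65, temp_limit_c=83, voltage_v=1.05))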

I recommend using Futuremark's Fire Strike stress test. This allows you to loop the test while also monitoring the boost clock for consistency. The test will tell you what your maximum potential overclock is before instability appears, and also whether the card is able to maintain a consistent boost clock. If temperature conditions change drastically, you'll be able to see on the chart (as with GPU Tweak) at what temperature GPU Boost 3.0 is dropping the boost clock.

Hope this helps.
13900KS / 8000 CAS36 / ROG APEX Z790 / ROG TUF RTX 4090

Murum
Level 7


Thank you for taking the time to read and reply. I was not aware of the Greenlight program (it's my first NVIDIA card in six-ish years, if not more), so I'll try to read up on it. The user-defined settings tip is much appreciated. I suppose once I understand software OC properly I will revisit voltage operations in terms of bypassing hardware limits (hardware mods)... however, that is definitely a subject beyond my understanding at this moment, so I won't even touch it.

Again, thanks for your explanations on the power limit and target; they put aside some of the hesitations I had. I will be swapping to Fire Strike per your suggestion, as the feedback you're describing on stability is exactly what I'm after, and I'll see what kind of results and adjustments I come up with.

As for the headroom, you're referring to the TDP thresholds, correct?

Silly question 😮 so wait, is the boost open-ended? Does the card simply continue to boost itself until it hits the TDP, and what you're setting in the software is just the starting boost clock?

My foundation knowledge is...all over the place...:confused:

Silent_Scone
Super Moderator



Boost clock headroom depends on TDP, voltage and thermal limits. After roughly 30-35 Celsius, the curve levels off in terms of frequency. This is why water-cooled cards are able to boost higher more easily.
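Roughly speaking, the effect looks like this toy example (the knee point and bin sizes here are illustrative assumptions, not measured figures):

# Illustration only: above roughly 30-35C the boost clock steps down a bin at a
# time as the core warms up (the bin sizes below are assumed, not measured).
def boost_with_temperature(max_boost_mhz, gpu_temp_c,
                           knee_c=35, bin_c=10, bin_mhz=13):
    if gpu_temp_c <= knee_c:
        return max_boost_mhz                  # a cool (e.g. watercooled) card holds full boost
    bins = (gpu_temp_c - knee_c) // bin_c     # temperature bins above the knee
    return max_boost_mhz - bins * bin_mhz

print(boost_with_temperature(1900, 30))   # 1900 - plenty of thermal headroom
print(boost_with_temperature(1900, 70))   # 1861 - a warmer air-cooled card gives up a few bins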
13900KS / 8000 CAS36 / ROG APEX Z790 / ROG TUF RTX 4090

Murum
Level 7


I suppose with water-cooled cards you would, or rather you would want to, adjust the curve differently? Or wouldn't it matter?

Silent_Scone
Super Moderator


Depends on the sample; the card will do what it will do at the applied voltage. Better cooling sometimes results in higher stable clocks, but this has become less applicable to Pascal than, say, Kepler - so it depends on the GPU architecture, too.
13900KS / 8000 CAS36 / ROG APEX Z790 / ROG TUF RTX 4090

Murum
Level 7


I see. In retrospect, wouldn't that mean you can go an extra mile with water cooling? But as you say, it depends on the sample; I suppose at the end of it all you play the hand you're dealt, or in this case the GPU/CPU you're dealt.

I've also started using the Unigine Superposition benchmark for stress-testing stability (using settings matched to my PC specs rather than overkill ones). It's probably overkill, but it shows crashes more readily than any of the other programs I've used.

Thanks again for all the help and info; the past couple of weeks have taught me a lot 😄 now the waiting game begins.

Silent_Scone
Super Moderator


You can go further on water cooling, yes. But with Pascal, without going sub-zero, most samples will top out around the same point, anywhere in the region of 2100-2150 MHz, with very few samples able to do the latter; this is simply down to the architecture.

No problem!
13900KS / 8000 CAS36 / ROG APEX Z790 / ROG TUF RTX 4090