
Strix 1080Ti Overclocking Guide

Silent_Scone
Super Moderator

ASUS Strix 1080TI Overclocking Overview


It’s no secret that the NVIDIA Pascal architecture is monumentally fast. In the last 3 years alone, the expectation we have of real-time rendering has grown exponentially. With faster hardware available to the masses, games developers are free to explore vastly more resource intensive visual effects. Of course, there are many cogs in the well-oiled machine that is the PC gaming industry, but without the hardware to drive it none of it would be possible.

Unlike the GTX 1080, the 1080Ti is based on big Pascal (GP102). Sporting the same 3584 CUDA cores as the Titan X and an even higher boost clock, it renders that card all but obsolete, putting the Ti firmly in the middle, just below the final full-fat iteration, the Titan Xp (3840 CUDA cores).
In an attempt not to tread on the Titan Xp's toes, NVIDIA has clipped the Ti's memory bus, leaving 11GB of GDDR5X instead of the 12GB found on the former. In terms of capacity, most of us familiar with modern demands will know both amounts are ample for gaming tasks, barring one or two eccentric scenarios. The Titan brand is also an NVIDIA-exclusive one, meaning it does not benefit from the refined cooling and designs the AIB market usually offers.

Let's up the ante

As enthusiasts, it's in our nature to want to get the most out of our hardware, and maintaining a constant frame rate is undeniably key. Advancements in display technology such as UHD and VR put the onus on ultra-low latency, which means pushing a high-end GPU is no longer just for the diehard; it's become a necessity to wring out those extra frames and hit the targets set by high-refresh displays. Paul Engemann’s ‘Push It to the Limit’ should be playing in the background at this point for maximum effect.

The ASUS ROG Strix 1080Ti is built around a custom PCB with a high-quality VRM (10+2 phases versus 7 on the Founders Edition) for clean, high-current power delivery, as well as two 8-pin power connectors to ensure ample supply. That’s topped with a brand-new 2.5-slot cooling design, which nets you 1683MHz out of the box compared to the Founders Edition’s 1582MHz. Moreover, we can raise this further to 1708MHz in the card’s OC mode. With this in hand, you’re already off to a good start if fluidity is the name of the game.
The enhanced cooler results in a more composed environment for the GPU cores. If you've lived with NVIDIA's vapour chamber cooler for any length of time, you'll understand just how appealing a capable thermal solution is when looking for an air-cooled card. In fact, the Strix’s triple Wing-Blade fan design remains entirely passive until reaching 55°C.

[image]


A quick perambulation

Dubbed MaxContact technology, the all-new 2.5 slot heat spreader design has 40% more surface coverage than the previous DirectCU offering.

On top sit three Wing-Blade fans with IPX5 dust resistance for a longer operational life.

[image]


ASUS FanConnect II allows us to connect two additional PWM or DC fans. These can either be controlled automatically in sync with the GPU fans, or manually via Fan Xpert. A welcome addition if you want to control chassis intake and exhaust fan speeds based on GPU temperature. The RGB header also makes a return, allowing us to sync lighting effects with other Aura Sync capable hardware.

Unlike the Founders Edition, the Strix includes a dual-link DVI for those who were disappointed in its absence from other models. Sitting alongside are two HDMI 2.0b connectors, and three DisplayPort 1.2a, giving us the capability to drive up to 4 displays.

[image]


Having quiet cooling is one thing, but the concrete alloy chokes help to reduce other sources of noise. Speaking personally, I've had the card on an open chassis sitting right next to me, and compared to the Founders Edition the Strix is certainly quieter under load in this regard, too.

NVIDIA’s Green Light programme imposes voltage limits that prevent us from subjecting our cards to voltages beyond what they deem safe for a long lifespan. Ultimately, this means that when looking for the best all-round solution, thermals and noise are the most important factors. When we breach the thermal target, our effective boost clock is reduced, and as a direct result so is performance.


Room for improvement

With introductions out of the way, the fun can start. Installing GPU Tweak II, we'll set out to see what's to be gained from overclocking.
The GPU Tweak II UI is intuitive and easy to navigate. By default, the UI is left in Simple Mode. This gives you the GPU speed increase over stock, GPU temperature, and video memory usage. The conventional profiles are laid out here, too.

[screenshot: GPU Tweak II Simple Mode]


OC Mode: This mode takes the card to 1708MHz whilst increasing the power target to 110%. If you want the most aggressive performance while jumping straight into games, this is the profile for you.

Gaming Mode: This is the Strix default preset. The GPU clock speed will run at 1683MHz.

Silent Mode: In this mode the GPU clock speed will run at 1658MHz, with a reduced power target of 90%.

My Profile: A quick select for your own predefined profiles, something we will be exploring shortly.

Game Booster: When performance is key, having anything unnecessary running in the background is frowned upon. This option allows us to automatically adjust Windows visual appearance for best performance, as well as turn off unnecessary Windows services that may be taking up resources.

0dB Fan: When paired with the Strix or other compatible cards, enabling this allows the card to remain passive until reaching 55°C.

Info: Where you can find information about the 1080Ti as well as GPU Tweak II.

Tools: From here we can install Aura Graphics, which gives us full control of the RGB lighting on the Strix. You can either run this independently, or install the regular Aura software to sync lighting with compatible hardware.

XSplit Gamecaster: Allows both streaming and recording through an intuitive interface, which includes an FPS and VRAM overlay - something GPU Tweak currently lacks (although RivaTuner is arguably still the application of choice for that purpose). In addition, it can change GPU Tweak profiles on the fly. Having used XSplit now for a week or so, I have no complaints. Personally, I think it feels less intrusive than alternatives such as NVIDIA Share. The snag, however, is that the free version limits recording to 720p and 30fps.


Professional Mode: This is where the magic happens. Some of you may be familiar with the interface already and simply want to get down to the number crunching. However, those new to GPU Tweak II should read on.

[screenshot: GPU Tweak II Professional Mode]


Monitor: Here we can keep an eye on the state of things. For the first few hours, you might find you're looking to this window a great deal. By clicking the expand button, we can either discard items from the window we do not wish to use or rearrange them. Logging is also available if you wish to look back over any of these statistics.

Profiles: Here we can select from the predefined profiles, or create and save our own after establishing an overclock.

GPU Boost Clock: Sets the offset for our GPU boost clock. The maximum clock at a set frequency will depend on our temperature and power settings. This means that the frequency we see under our applied offset is almost certainly not what we will see under load. At stock, expect closer to 1950MHz on the Strix. Far higher than the touted boost specs.

GPU Voltage: On Pascal, voltage increase is expressed on a percentage scale based on multiple voltage points. By default, the upper voltage points are locked. Once we increase the voltage offset, we unlock these upper points, giving us some additional headroom. Pascal does not respond particularly well to voltage in terms of obtainable clocks; however, once we increase core speeds, certain heavier scenarios may induce a voltage limit. This means GPU Boost 3.0 has reached the maximum performance for the default or set voltage. For this reason, we are better off leaving this at 100%. The obligatory overclocking disclaimers apply here: although there are throttling and amperage safeguards in place, one still needs to be aware that overclocking voids your warranty.

Memory Clock: This controls the offset for the memory clock. Although performance gains are limited, the new GDDR5X ICs can stretch their legs; my sample can achieve 6000MHz. Memory bandwidth has more impact when using heavy anti-aliasing techniques and higher resolutions, but it's best not to forget that core frequency is still king.

Memory related instability will normally manifest in an application or system hang. Like with tuning other aspects of the system, it’s best to overclock one thing at a time to avoid any red herrings and confusion.

Fan Speed: Automatic or manual control of the Wing-Blade fans. The default profile keeps the GPU cool even when overclocking; I've yet to see the card exceed 70°C. That said, if cool as possible is the name of the game, we can set a fixed fan speed or adjust the curve. Noise tolerance is entirely subjective, and the Strix is no different - you must see what works best for you.

Power Target: This allows us to increase the maximum power draw. Even if not looking to overclock the card, simply increasing the power target can net some additional performance. Thanks to the Strix cooler, you will hit the power limit on the 1080Ti long before hitting a thermal one - even when not overclocked. I'd recommend increasing this to 120% before starting.

FPS Target: No real introduction needed. With this set, the card will limit itself to the desired frame rate. Doing so can reduce temperature, and as a direct result help GPU Boost 3.0 to maintain a higher boost clock.

GPU Temp Target: This setting controls what temperature the card will maintain. Because of the Strix’s huge cooling capacity, we can safely set the priority to the power target. With the Strix cooler, load temps generally fall between 50 and 65°C (at 21°C ambient), which is well within the safe zone. Pascal also responds well to lower temperatures:




[benchmark charts: GPU temperature vs. performance, The Division benchmark]






With a quick adjustment of the fan curve, we can shave 10°C off GPU core temperatures. Better still, with a good water-cooling loop in tandem with an EK water block, we can keep things under 35°C. The benefits of doing so can be seen in the benchmarks above (temperatures taken from The Division benchmark).
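As a rough illustration of how GPU Boost responds to temperature, the sketch below models the throttling as shedding one ~13MHz "bin" per degree over the temperature target. This is a simplified model written for illustration only - not NVIDIA's actual algorithm - and the one-bin-per-degree rate is an assumption:

```python
# Simplified model of GPU Boost thermal throttling - illustration only,
# not NVIDIA's actual algorithm. The one-bin-per-degree rate is an
# assumption made for this example.

BIN_MHZ = 13  # approximate size of one Pascal boost bin

def effective_clock(max_boost_mhz, temp_c, temp_target_c):
    """Estimate the boost clock after thermal throttling."""
    if temp_c <= temp_target_c:
        return max_boost_mhz  # cool card: full boost
    bins_dropped = temp_c - temp_target_c  # assumed: one bin per degree
    return max_boost_mhz - bins_dropped * BIN_MHZ

print(effective_clock(1950, 60, 84))  # below target: 1950
print(effective_clock(1950, 88, 84))  # 4 degrees over: 1898
```

Even in this toy model, every few degrees shaved off the core is worth a boost bin or two, which is why better cooling lets the card hold its top clocks.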

Finding the limits

It should go without saying, but test the GPU at stock settings before you start overclocking, both to make sure everything is in order and to gauge baseline performance. CPU and memory stability should also be verified if they are overclocked. If we’re setting out to test the stability of our GPU, unstable sub-systems have the potential to create confusion during the process.

Power Target: Raise the power target to 120% so that we have the highest TDP limit available.

Core +70 Offset (1753MHz): Finding the max core frequency is straightforward, and most samples will land within 100MHz of each other. We start by applying a +70MHz offset and testing the card. If it proves stable, continue increasing the offset by +10MHz until we experience instability; if any crashing occurs, back the offset off by 10MHz each time. No harm will come to the GPU; the driver will simply recover and we can try again. In the event the system doesn't recover, we must reboot.
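The trial-and-error search above can be sketched as a small loop. `is_stable` here is a hypothetical stand-in for running your benchmark of choice at a given offset and reporting whether it passed:

```python
# Sketch of the +70 start / +-10MHz step search for the maximum stable
# core offset. `is_stable` is a hypothetical callback that runs a
# benchmark at the given offset and returns True if it passed.

def find_max_offset(is_stable, start=70, step=10, limit=200):
    offset = start
    # climb in +10MHz steps while the card keeps passing
    while offset + step <= limit and is_stable(offset + step):
        offset += step
    # back off in -10MHz steps if even the current offset crashes
    while offset > 0 and not is_stable(offset):
        offset -= step
    return offset

# Pretend this sample is stable up to a +110MHz offset:
print(find_max_offset(lambda mhz: mhz <= 110))  # 110
```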

Memory: +200 Offset (11210MHz): Most 1080Tis should be able to achieve this. This can be kept conservative for the time being; the gains are limited, and we don't want to produce any red herrings with unstable memory.

GPU Voltage 100%: Moving the voltage slider to 100% avoids hitting voltage limits when finding the max frequency.

Custom fan curve (room/case ambient 20°C): The screenshot below shows how I have set my fan curve. The Strix runs exceptionally cool even on the default curve; however, by increasing things slightly we can keep the card even cooler without much intrusion from the fans. The fans become audible at around 55%, but you will need to see what works best in your environment. If you prefer, you can also set the fans to 100% to find the maximum attainable frequency, though this isn't practical if noise is a concern.
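For the curious, a fan curve is nothing more than a handful of (temperature, duty-cycle) points with linear interpolation between them. The points below are an example curve of my own invention, not the one from my screenshot:

```python
# A fan curve as (temperature C, fan duty %) points with linear
# interpolation between them. Example values only - not my actual curve.

CURVE = [(40, 0), (55, 40), (65, 55), (75, 80), (85, 100)]

def fan_duty(temp_c, curve=CURVE):
    """Linearly interpolate the fan duty for a given GPU temperature."""
    if temp_c <= curve[0][0]:
        return curve[0][1]  # passive below the first point
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]  # pinned to 100% past the last point

print(fan_duty(40))  # 0 - passive in this example
print(fan_duty(60))  # 47.5 - halfway between the 55C and 65C points
```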

[screenshot: custom fan curve]


Put it to the test

3DMark's stress test feature lets us loop the initial graphics test up to 20 times, which takes roughly 10 minutes. What constitutes a pass or fail - assuming the test doesn't crash - depends on whether the card can maintain its performance throughout, which is no sweat whatsoever for the Strix cooler. I recommend using the Firestrike Extreme or Ultra settings for this (1440p and 4K respectively). The test resolution is down-sampled, so it can even be run on a 1080p panel.

From the screenshot below, we can see that the GPU started out at 45°C and finished at 64°C - well within the card's stock temperature target. This gives us a consistent average frame rate throughout, with a pass rate of 99% (anything under 97% is automatically a fail) and a GPU core frequency between 2038 and 2050MHz. You can keep raising the core offset and re-running the test until you experience instability.

[screenshot: 3DMark stress test result]
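As I understand it, 3DMark derives its frame rate stability figure from the best and worst loop results - roughly the lowest loop average divided by the highest. A quick sanity check with made-up per-loop numbers:

```python
# Approximation of 3DMark's frame rate stability metric: lowest loop
# result divided by the highest, as a percentage. A run under 97% is
# an automatic fail. The loop averages below are hypothetical.

def stability_pct(loop_fps):
    return min(loop_fps) / max(loop_fps) * 100

loops = [62.1, 61.9, 61.8, 61.7, 61.5]  # hypothetical per-loop averages
pct = stability_pct(loops)
print(round(pct, 1), "PASS" if pct >= 97 else "FAIL")  # 99.0 PASS
```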



Once we've passed consistently, I recommend moving away from synthetic tests. Now, simply use the card in games. Different titles stress the GPU in different ways, so there's little point running the same benchmarks over and over to test for unconditional stability. If we experience issues, we can relax settings to dial out the instability.

The extra mile - granular control

For some time now, we've seen the base clock and boost clock plastered on spec sheets. This doesn't tell us the whole story, especially with Pascal. GPU Boost uses a set frequency curve based on voltage points. If we set a fixed offset using the default curve positioned by NVIDIA, we're limiting our maximum frequency, because the offset is applied to every point along the curve. While this might be fine for some GPUs, on others the voltage may be insufficient to provide stability at every point. For example, if we’re stable at 2100MHz at the midpoint of the curve but only 2050MHz at the very top, then we’re limited to 2050MHz.

Control of this curve has previously been gated off from the user; however, with Pascal and GPU Boost 3.0 we can manually set the frequency at these voltage points, opening the path to fine-grained tuning. Through GPU Tweak II and the user-defined boost clock, we can set our core frequency at a given voltage point. For instance, my sample is comfortable with 2100MHz at around 1.035v. The catch, however, is that this method can take time. You're also not guaranteed to find any additional headroom, but if you don't try, you'll never know.
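To make that limitation concrete, here's a small sketch with invented numbers: a flat offset can only be as large as the weakest point on the curve allows, whereas per-point tuning lifts each point to its own tested limit.

```python
# Why a flat offset is bounded by the weakest point on the V/F curve.
# All voltages and frequencies below are invented for illustration.

stock_curve = {0.900: 1850, 1.000: 1950, 1.050: 2000}   # V -> stock MHz
stable_limit = {0.900: 1950, 1.000: 2100, 1.050: 2050}  # V -> tested max MHz

# Largest flat offset that keeps every point at or below its limit:
flat = min(stable_limit[v] - stock_curve[v] for v in stock_curve)
print("flat offset:", flat, "-> top point:", stock_curve[1.050] + flat)

# Per-point tuning lifts each point to its own limit instead:
print("best per-point frequency:", max(stable_limit.values()))
```

Here the 1.050V point only has 50MHz of headroom, so a flat offset tops out at 2050MHz even though the midpoint would happily run 2100MHz.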

[screenshot: GPU Boost voltage-frequency curve editor]


What does this mean for our experience?

It doesn’t matter what type of gamer you are; frame rate and latency are important. Whether you’re running across the war-torn fields of France wielding an M1892 rifle, or rallying through the forests of Finland in a four-door saloon, they’re a metric for those of us who want either the best experience the industry has to offer or a competitive edge. Whichever it is, there’s no denying that nudging the frequency slider for that little bit extra is a siren call, especially when it is so easy.

On this setup - in tandem with G-Sync - extracting performance through overclocking and keeping the GPU cool ultimately results in a smoother experience. For the 3440x1440 target resolution on my personal system, one 1080Ti copes admirably thanks to G-Sync. That said, the benefit of a second card would certainly be welcome in some titles.

Of course, most of you will be aware by now that FPS is not the only metric, and arguably not the most important. The Alienware slogan was once “FPS is life”, but we now know those numbers don't tell us the whole story. It's worth pointing out that these benchmarks are recorded in FRAPS, so we're seeing what's being captured at the rendering level.
13900KS / 8000 CAS36 / ROG APEX Z790 / ROG TUF RTX 4090

Very nice tutorial. As an OC noob I have a couple of questions, though, before I start tinkering with my 1080TiOC. In GPU Temp Target what do you mean by "we can safely set the priority to the power target"? What should I set the GPU Temp Target as?

It looks like there are really only two values to adjust: GPU Boost Clock and Memory Clock. Should I adjust only one until it crashes then go back to the last stable value and adjust the other one until it crashes? What's the correct method?

Once everything is stable is it OK to dial back the voltage and power levels? Should I leave them at max?

Aside, out of the box my card in OC mode gives me a constant clock of 2000 on the nose. OC'ing isn't so important right now because I'm using a Samsung 40" 4K TV and not a G-Sync monitor. Need to save up for that.

Thanks.

IainM wrote:
Very nice tutorial. As an OC noob I have a couple of questions, though, before I start tinkering with my 1080TiOC. In GPU Temp Target what do you mean by "we can safely set the priority to the power target"? What should I set the GPU Temp Target as?

It looks like there are really only two values to adjust: GPU Boost Clock and Memory Clock. Should I adjust only one until it crashes then go back to the last stable value and adjust the other one until it crashes? What's the correct method?

Once everything is stable is it OK to dial back the voltage and power levels? Should I leave them at max?

Aside, out of the box my card in OC mode gives me a constant clock of 2000 on the nose. OC'ing isn't so important right now because I'm using a Samsung 40" 4K TV and not a G-Sync monitor. Need to save up for that.

Thanks.


Hi, no problem.


1) Set the GPU temp target to whatever you're comfortable with. The Strix cooler is capable enough that temperatures will stay well under 80°C. You can safely ignore the temperature target and raise the power target to 120% as suggested.

2) Work on both buses independently until you find stability, starting with core, then moving on to memory. Keep in mind there are no real tangible gains on memory, as few games are bandwidth constrained. Core is king.

3) If dialing in the overclock with the voltage slider already at max, then I'd suggest leaving it there.
13900KS / 8000 CAS36 / ROG APEX Z790 / ROG TUF RTX 4090

Just overclocked my new Asus ROG Strix 1080 Ti, after checking this thread and also some suggestions from Der8auer's video in the following link:
https://youtu.be/dG2Az0PclQA

First finding was that Der8auer's suggestions of +150 for GPU clock, and +550 for Memory clock were too optimistic for my silicon lottery ticket. Superposition managed to run once for my card under those settings, but never again. After that it always crashed, so I started from lower and explored back and forth till finding the borderline best settings for my card to run Superposition consistently and with the best possible score without artifacts or crashing.

Those settings for me were +125, +460, which still allowed the card to beat the 10K score barrier in Superposition, as shown in the image below. That score of 10027 was really the very highest I got. With those same OC settings some other runs yielded 10016 and 10020, but still all consistently above 10K. Still have to run lots of other benchmarks to make sure these settings are stable enough across the board. (Btw my card with no OC had a score of 9068, so overall this OC gives it ~10.6% improvement in performance)

Would like to mention that I could still bring the GPU clock slightly higher, up to +128 and the benchmark could still run ok, but the score would actually get worse by a lot, it would go down to around 9800. And with GPU clock at +130 the benchmark would crash for sure on me. The Mem clock could also be increased up to 480-500, but it would also make the score worse. So for me +125, +460 was the optimal, at least with Superposition. +124 and +150-160 could still get in the 10K ballpark or extremely close, in fact once I got a score of exactly 9999 with those, so still lots better than when using too high settings yielding ~9800 or then crashing.

So the best OC parameters are not really the highest possible values that allow the card to run a benchmark without crashing. The maximum score can be achieved with parameters slightly lower than the max-yet-still-non-crashing settings. This seems similar to what I had seen with my 1070 previously; I also reported on that in the 1070 overclocking thread here in the forums.

PS. I don't have a 4K monitor, so all of the OC and 4K benchmarking mentioned above was done using 4.00x DSR on the Nvidia Control Panel settings, and using a 1080p monitor.

Oxizee
Level 7
@JayvH: Guess you and I didn't win the silicon lottery.

My card is at +52 GPU (1735 / 1974 boosted) with GPU Voltage 0, Memory Clock +62, Power Target +120. Still looking for the limit. When I tried boosting my GPU above +55/60, the driver started to crash. Going in steps now.

Oxizee
Level 7
I can't go higher than +50 (1733) on the GPU clock. Memory isn't at its limit yet. I am testing the memory clock at +120 (11124) and trying a little higher each day. Power target 120, GPU Temp 90°C, GPU Voltage 0%, Frame Rate Target 255.

Wish I could reach 2000MHz on the GPU. Guess I won't reach that.

JayvH wrote:
Yes, seems that we are not on the lucky side here.


Yes, I understand this so far. I was wondering why your default boost clock in Gaming Mode is that high. Mine is 1607MHz.
[screenshot]
But as I just rechecked, you have the OC version of the card and I use the non-OC version.



Yes that's the OC version 🙂


Oxizee wrote:
I can't go higher than +50 (1733) on the GPU clock. Memory isn't at its limit yet. I am testing the memory clock at +120 (11124) and trying a little higher each day. Power target 120, GPU Temp 90°C, GPU Voltage 0%, Frame Rate Target 255.

Wish I could reach 2000MHz on the GPU. Guess I won't reach that.



Temps can help with Pascal to an extent, as shown in the guide. If you're looking to push things as far as they'll go within the NVIDIA-defined amperage limits, watercooling is really the only option.
13900KS / 8000 CAS36 / ROG APEX Z790 / ROG TUF RTX 4090

Oxizee
Level 7
It's no temp issue here. I wrote GPU Temp 90°C, meaning that's the slider for maximum temperature.

When I stress the card, the core maxes out at 1962MHz at around 68°C, so it has enough breathing room. But I can't push more than +52 on the core; after that the driver starts crashing during stress tests. Haven't found the limit of the memory clock yet. I am at +200 on memory atm.

[screenshot]

Oxizee wrote:
It's no temp issue here. I wrote GPU Temp 90°C, meaning that's the slider for maximum temperature.

When I stress the card, the core maxes out at 1962MHz at around 68°C, so it has enough breathing room. But I can't push more than +52 on the core; after that the driver starts crashing during stress tests. Haven't found the limit of the memory clock yet. I am at +200 on memory atm.

[screenshot]



Breathing room isn't really the problem. Lowering the temps further, such as with watercooling, lets Pascal respond fairly well in terms of obtainable clocks. Beyond that it's down to the silicon; if you want to push further and are willing to put the time in, try adjusting the voltage curve manually.
13900KS / 8000 CAS36 / ROG APEX Z790 / ROG TUF RTX 4090

Oxizee
Level 7
I am at +226 memory atm. The GPU core can't be raised any further (1733 seems to be the max). What can I do with the GPU Voltage slider? Any tips?