Strix 1080Ti Overclocking Guide

Silent_Scone
Super Moderator

ASUS Strix 1080Ti Overclocking Overview


It’s no secret that the NVIDIA Pascal architecture is monumentally fast. In the last three years alone, our expectations of real-time rendering have grown exponentially. With faster hardware available to the masses, game developers are free to explore vastly more resource-intensive visual effects. Of course, there are many cogs in the well-oiled machine that is the PC gaming industry, but without the hardware to drive it, none of it would be possible.

Unlike the GTX 1080, the 1080Ti is based on big Pascal (GP102). Sporting the same 3584 CUDA cores as the Titan X and an even higher boost clock, it renders the Titan X all but obsolete, putting the Ti firmly in the middle, just below the final full-fat iteration, the Titan Xp (3840 CUDA cores).
In an attempt not to tread on the Titan Xp's toes, NVIDIA have clipped the Ti's memory bus, pairing it with 11GB of GDDR5X instead of the 12GB found on the former. In terms of capacity, most of us familiar with modern demands will know both amounts are ample for gaming, barring one or two eccentric scenarios. The Titan brand is also an NVIDIA exclusive, meaning it does not benefit from the refined cooling and design that the AIB market usually offers.

Let's up the ante

As enthusiasts, it's in our nature to want to get the most out of our hardware, and maintaining a constant frame rate is undeniably key. Advancements in display technology such as UHD and VR put the onus on ultra-low latency. That means pushing a high-end GPU is no longer just for the diehard; it's become a necessity to wring out those extra frames and hit the targets set by high-refresh displays. Paul Engemann’s ‘Push It to the Limit’ should be playing in the background at this point for maximum effect.

The ASUS 1080Ti ROG Strix comprises a custom PCB with a high-quality VRM (10+2 phases versus 7 on the Founders Edition) for clean, high-current power delivery, as well as two 8-pin power connectors to ensure ample supply. That’s topped with a brand new 2.5-slot cooling design, which nets you 1683MHz out of the box compared to the Founders Edition’s 1582MHz. Moreover, we can raise this further to 1708MHz with the card’s OC mode. With this in hand, you’re already off to a good start if fluidity is the name of the game.
The enhanced cooler results in a more composed environment for the GPU core. If you've lived with NVIDIA's vapour chamber cooler for any length of time, you'll understand just how appealing a capable thermal solution is when looking for an air-cooled card. In fact, the Strix’s triple Wing-Blade fan design remains entirely passive until reaching 55°C.



A quick perambulation

Dubbed MaxContact technology, the all-new 2.5 slot heat spreader design has 40% more surface coverage than the previous DirectCU offering.

On top sit three Wing-Blade fans with IPX5 dust resistance for longer operational life.



ASUS FanConnect II allows us to connect two additional PWM or DC fans. These can either be controlled automatically in sync with the GPU fans, or manually via Fan Xpert. A welcome addition if you want to control chassis intake and exhaust fan speeds based on GPU temperature. The RGB header also makes a return, allowing us to sync lighting effects with other Aura Sync capable hardware.

Unlike the Founders Edition, the Strix includes a dual-link DVI port for those disappointed by its absence on other models. Sitting alongside are two HDMI 2.0b connectors and three DisplayPort 1.2a outputs, giving us the ability to drive up to four displays.



Having quiet cooling is one thing, but the concrete-alloy chokes help to reduce other sources of noise. Speaking personally, I've had the card in an open chassis sitting right next to me, and compared to the Founders Edition, the Strix is certainly less audible under load in this regard, too.

NVIDIA’s Greenlight programme imposes voltage limits that prevent us from subjecting our cards to voltages beyond what NVIDIA deems safe for a long lifespan. Ultimately, this means that when you're looking for the best all-round solution, thermals and noise are what matter most. When we breach the thermal target, our effective boost clock is reduced, and as a direct result, so is performance.


Room for improvement

With introductions out of the way, the fun can start. After installing GPU Tweak II, we'll set out to see what's to be gained from overclocking.
The GPU Tweak II UI is intuitive and easy to navigate. By default, it opens in Simple Mode, which shows the GPU speed increase over stock, GPU temperature, and Vbuffer (VRAM) usage. The conventional profiles are laid out here, too.

[Screenshot: GPU Tweak II Simple Mode]


OC Mode: This mode takes the card to 1708MHz whilst increasing our power target to 110%. If you want the most aggressive performance while jumping straight into games, this is the profile for you.

Gaming Mode: This is the Strix default preset. The GPU clock speed will run at 1683MHz.

Silent Mode: In this mode the GPU clock speed will run at 1658MHz, with a reduced power target of 90%.

My Profile: A quick select for your own predefined profiles, something we will be exploring shortly.

Game Booster: When performance is key, having anything unnecessary running in the background is frowned upon. This option allows us to automatically adjust Windows visual appearance for best performance, as well as turn off unnecessary Windows services that may be taking up resources.

0dB Fan: When paired with the Strix or other compatible cards, enabling this allows the card to remain passive until reaching 55°C.

Info: Where you can find information about the 1080Ti as well as GPU Tweak II.

Tools: From here we can install Aura Graphics, which gives us full control of the RGB lighting on the Strix. You can either run this independently, or install the regular Aura software to sync lighting with compatible hardware.

XSplit Gamecaster: Allows both streaming and recording through an intuitive interface, including an FPS and VRAM overlay - something GPU Tweak currently lacks (although RivaTuner is arguably still the application of choice for that purpose). In addition, it can change GPU Tweak profiles on the fly. Having used XSplit for a week or so now, I have no complaints; personally, I find it less intrusive than alternatives such as NVIDIA Share. The snag, however, is that the free version limits recording to 720p and 30fps.


Professional Mode: This is where the magic happens. Some of you may be familiar with the interface already and simply want to get down to the number crunching. However, those new to GPU Tweak II should read on.

[Screenshot: GPU Tweak II Professional Mode]


Monitor: Here we can keep an eye on the state of things. For the first few hours, you might find you're looking at this window a great deal. By clicking the expand button, we can discard items from the window we don't wish to use, or rearrange them. Logging is also available if you wish to look back over any of these statistics.
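
If you'd rather keep logs outside of GPU Tweak II, a few lines of Python against NVIDIA's NVML library will do the job. This is a minimal sketch, assuming the nvidia-ml-py (pynvml) package is installed; it polls the core clock, temperature and board power once per second and writes them to a CSV (stop it with Ctrl+C):

import csv
import time

from pynvml import (
    NVML_CLOCK_GRAPHICS, NVML_TEMPERATURE_GPU,
    nvmlDeviceGetClockInfo, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetPowerUsage, nvmlDeviceGetTemperature,
    nvmlInit, nvmlShutdown,
)

nvmlInit()
gpu = nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    with open("gpu_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time_s", "core_mhz", "temp_c", "power_w"])
        start = time.time()
        while True:  # stop with Ctrl+C
            writer.writerow([
                round(time.time() - start, 1),
                nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_GRAPHICS),  # actual boost clock
                nvmlDeviceGetTemperature(gpu, NVML_TEMPERATURE_GPU),
                nvmlDeviceGetPowerUsage(gpu) / 1000,  # NVML reports milliwatts
            ])
            f.flush()
            time.sleep(1)
finally:
    nvmlShutdown()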

Profiles: Here we can select from the predefined profiles, or create and save our own after establishing an overclock.

GPU Boost Clock: Sets the offset for our GPU boost clock. The maximum clock at a given offset will depend on our temperature and power settings. This means the frequency shown for our applied offset is almost certainly not what we will see under load; at stock, expect closer to 1950MHz on the Strix, far higher than the touted boost specs.

GPU Voltage: On Pascal, voltage increase is expressed on a percentage scale across multiple voltage points. By default, the upper voltage points are locked. Once we increase the voltage offset, we unlock these upper points, giving us some additional headroom. Pascal does not respond particularly well to voltage in terms of obtainable clocks; however, once we increase core speeds, certain heavier scenarios may induce a voltage limit, meaning GPU Boost 3.0 has reached the maximum performance for the default or set voltage. For this reason, we are better off simply setting this to 100%. The obligatory overclocking disclaimers apply here: although there are throttling and amperage safeguards in place, be aware that overclocking voids your warranty.

Memory Clock: This controls the offset for the memory clock. Although performance gains are limited, the new GDDR5X ICs can stretch their legs; my sample can achieve 6000MHz. Memory bandwidth can have more impact when using heavy anti-aliasing techniques and at higher resolutions. However, it's best not to forget that core frequency is still king.

Memory-related instability will normally manifest as an application or system hang. As with tuning other aspects of the system, it's best to overclock one thing at a time to avoid red herrings and confusion.
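
For what it's worth, the offset maps onto the effective (data-rate) clock one-to-one, so the arithmetic is trivial. A quick sketch using figures quoted elsewhere in this guide (the stock effective clock is inferred from the +200 = 11210MHz example further down, so treat it as illustrative):

# Offset -> effective memory clock, per the figures used in this guide.
# The stock effective data rate below is inferred from the +200 -> 11210MHz
# example in the "Finding the limits" section; treat it as illustrative.
STOCK_EFFECTIVE_MHZ = 11010

def effective_memory_clock(offset_mhz: int) -> int:
    """Effective GDDR5X clock after applying a GPU Tweak II memory offset."""
    return STOCK_EFFECTIVE_MHZ + offset_mhz

print(effective_memory_clock(200))  # 11210MHz, as used later in this guide
print(effective_memory_clock(250))  # 11260MHz, as reported in the replies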

Fan Speed: Automatic or manual control of the Wing-Blade fans. The default profile keeps the GPU cool even when overclocking; I've yet to see the card exceed 70°C. That said, if running as cool as possible is the name of the game, we can set a fixed fan speed or adjust the curve. Noise levels are entirely subjective, and the Strix is no different: you must see what works best for you.
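
If you're wondering how the curve behaves between the points you drag around, it's straightforward linear interpolation. A small sketch with hypothetical curve points (these are not the Strix defaults - pick what suits your noise tolerance):

# Piecewise-linear fan curve: (temperature C, fan duty %). Hypothetical points,
# not the Strix defaults. Below the first point the fans stay parked (0dB mode).
FAN_CURVE = [(55, 30), (65, 55), (75, 80), (85, 100)]

def fan_duty(temp_c: float) -> float:
    """Interpolate fan duty (%) for a given core temperature."""
    if temp_c < FAN_CURVE[0][0]:
        return 0.0  # passive until ~55C
    for (t0, d0), (t1, d1) in zip(FAN_CURVE, FAN_CURVE[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return 100.0  # pinned at max beyond the last point

print(fan_duty(60))  # 42.5 with these example points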

Power Target: This allows us to increase the maximum power draw. Even if you're not looking to overclock the card, simply increasing the power target can net some additional performance. Thanks to the Strix cooler, you will hit the power limit on the 1080Ti long before hitting a thermal one - even when not overclocked. I'd recommend increasing this to 120% before starting.

FPS Target: No real introduction needed. With this set, the card will limit itself to the desired frame rate. Doing so can reduce temperature, and as a direct result help GPU Boost 3.0 to maintain a higher boost clock.

GPU Temp Target: This setting controls the temperature the card will try to maintain. Because of the Strix’s huge cooling capacity, we can safely set the priority to the power target. With the Strix cooler, load temperatures generally fall between 50 and 65°C (with a 21°C ambient), which is well within the safe zone. Pascal also responds well to lower temperatures:




[Benchmark charts: The Division benchmark results at varying GPU temperatures]






With a quick adjustment of the fan curve, we can shave 10°C off GPU core temperatures. Better still, with a good water-cooling loop and an EK water block, we can keep things under 35°C. The benefits of doing so can be seen in the benchmarks above (temperatures taken from The Division benchmark).
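
To put rough numbers on why cooler is faster: the commonly observed pattern on Pascal is that GPU Boost 3.0 sheds one ~13MHz bin roughly every 5°C once the core warms up. The constants in this sketch are rules of thumb rather than published specs, so treat them as assumptions:

# Rough model of Pascal's temperature-dependent boost behaviour. One ~13MHz
# bin shed roughly every 5C past ~40C is a commonly observed rule of thumb,
# not a published spec - treat these constants as assumptions.
BIN_MHZ = 13      # one GPU Boost 3.0 frequency bin
BIN_STEP_C = 5    # approximate temperature interval per shed bin
BASELINE_C = 40   # below this, the card tends to hold its top bin

def estimated_boost(cold_boost_mhz: int, temp_c: float) -> int:
    """Estimate the settled boost clock at a given core temperature."""
    bins_lost = max(0, int((temp_c - BASELINE_C) // BIN_STEP_C))
    return cold_boost_mhz - bins_lost * BIN_MHZ

print(estimated_boost(2050, 35))  # 2050 - water cooling holds every bin
print(estimated_boost(2050, 65))  # 1985 - five bins shed on air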

Finding the limits

It should go without saying, but test the GPU at stock settings before you start overclocking, to make sure everything is in order and to gauge baseline system performance. If the CPU and memory are overclocked, their stability should also be verified first. If we're setting out to test the stability of our GPU, having other subsystems unstable has the potential to create confusion should we experience instability during the process.

Power Target: Raise the power target to 120%, so that we have the highest TDP limit available.

Core +70 Offset (1753MHz): Finding the max core frequency is straightforward, and most samples will land within 100MHz of each other. We start by applying a +70MHz offset. Once we start testing the card, continue increasing the offset by +10MHz each run until we experience instability, then back off by 10MHz at a time until stable again (a sketch of this search loop follows after the list). No harm will come to the GPU when it crashes; the driver will simply recover and we can try again. In the event the system doesn't recover, we must reboot.

Memory: +200 Offset (11210MHz): Most 1080Ti samples should be able to achieve this. This can be kept conservative for the time being; the gains are limited, and we don't want to produce any red herrings with unstable memory.

GPU Voltage 100%: Moving the voltage slider to 100% avoids hitting voltage limits when finding the max frequency.

Custom fan curve (room/case ambient 20°C): The screenshot below shows how I have set my fan curve. The Strix runs exceptionally cool even on the default curve; however, by raising things slightly we can keep the card even cooler without much intrusion from the fans. The fans become audible at around 55%, but you will need to see what works best in your environment. If you prefer, you can also set the fans to 100% to find the maximum attainable frequency, though this isn't practical if noise is a concern.

[Screenshot: custom fan curve in GPU Tweak II]
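
Here is the core offset search from the list above expressed as a sketch. Both helpers are hypothetical stand-ins - GPU Tweak II exposes no public API of this kind - so wire them up to whatever tooling and stress test you actually use:

def apply_core_offset(mhz: int) -> None:
    """Hypothetical stand-in: apply the offset via your tuning tool of choice."""
    print(f"applying +{mhz}MHz core offset")

def run_stress_test() -> bool:
    """Hypothetical stand-in: loop a benchmark, return False on a crash/recovery."""
    return True  # replace with a real pass/fail check

def find_max_core_offset(start: int = 70, step: int = 10, ceiling: int = 200) -> int:
    """Start at +70MHz and move in 10MHz steps, as described in the list above."""
    offset = start
    best_stable = None
    while 0 <= offset <= ceiling:
        apply_core_offset(offset)
        if run_stress_test():
            best_stable = offset   # stable: push 10MHz higher
            offset += step
        elif best_stable is not None:
            break                  # first failure after a pass: we're done
        else:
            offset -= step         # unstable from the outset: back off
    return best_stable if best_stable is not None else 0

print(f"max stable core offset: +{find_max_core_offset()}MHz")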


Put it to the test

3DMark's stress test feature lets us loop the initial graphics test up to 20 times, which takes roughly 10 minutes. What constitutes a pass or fail - assuming the test doesn't crash - depends on whether the card can maintain its performance throughout, which is no sweat whatsoever for the Strix cooler. I recommend using the Fire Strike Extreme or Ultra presets for this (1440p and 4K respectively). The test resolution is down-sampled, so it can even be run on a 1080p panel.

From the screenshot below, we can see that the GPU started out at 45°C and finished at 64°C, well within the card's stock temperature target. This gives us a consistent average frame rate throughout, with a pass rate of 99% (anything under 97% is automatically a fail) and a GPU core frequency between 2038-2050MHz. You can run the test at progressively higher core offsets until you experience instability.

[Screenshot: 3DMark stress test result]
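
For clarity on how that pass mark is derived: frame-rate stability works out as the worst loop's score expressed as a percentage of the best loop's. A quick sketch with made-up loop scores:

# How the stress test grades a run: frame-rate stability is the worst loop's
# score as a percentage of the best loop's, with anything under 97% a fail.
# The per-loop scores below are made-up illustrative numbers.
def frame_rate_stability(loop_scores: list[float]) -> float:
    return min(loop_scores) / max(loop_scores) * 100

scores = [5510, 5498, 5472, 5465, 5459]
stability = frame_rate_stability(scores)
print(f"{stability:.1f}% - {'PASS' if stability >= 97.0 else 'FAIL'}")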



Once we've passed consistently, I recommend moving away from synthetic tests and simply using the card in games. Different titles stress the GPU in different ways, so there's little point running the same benchmark over and over to test for unconditional stability. If we experience issues, we can relax the settings to dial out the instability.

The extra mile - granular control

For some time now, we've seen the base clock and boost clock plastered on spec sheets. These don't tell us the whole story, especially with Pascal. GPU Boost uses a frequency curve based on voltage points. If we set a fixed offset using the default curve positioned by NVIDIA, we limit our maximum frequency, because the offset is applied to every point along the curve. While this might be fine for some GPUs, on others the voltage may be insufficient to provide stability at every point. For example, if we're stable at 2100MHz at the midpoint of the curve but only 2050MHz at the very top, then a flat offset limits us to 2050MHz.

Control of this curve was previously gated off from the user; however, with Pascal and GPU Boost 3.0 we can manually set the frequency at each of these voltage points, opening the path to fine-grained tuning. Through GPU Tweak II and the user-defined boost clock, we can set our core frequency at a given voltage point. For instance, my sample is comfortable with 2100MHz at around 1.035v. The catch, however, is that this method can take time. You're also not guaranteed to find any additional headroom, but if you don't try, you'll never know.
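
To make the difference concrete, here's a sketch comparing a flat offset against per-point tuning. The curve values are invented round numbers, apart from the 2100MHz and 2050MHz figures from the example above:

# Illustration of why a flat offset is capped by the weakest point on the
# voltage/frequency curve. All values are invented round numbers except the
# 2100MHz (midpoint) and 2050MHz (top) figures taken from the example above.
stock_curve = {0.900: 1860, 0.975: 1936, 1.050: 1987}  # volts -> stock MHz
stable_max  = {0.900: 1960, 0.975: 2100, 1.050: 2050}  # per-point stable ceiling

# Flat offset: every point shifts by the same amount, so the point with the
# least headroom sets the cap for the whole curve.
flat_cap = min(stable_max[v] - stock_curve[v] for v in stock_curve)
print(f"max flat offset: +{flat_cap}MHz")  # +63MHz, limited by the 1.050v point

# Per-point tuning (GPU Boost 3.0): each point sits at its own ceiling, so the
# 0.975v midpoint keeps its full 2100MHz instead of being dragged down.
for volts, mhz in stable_max.items():
    print(f"{volts:.3f}v -> {mhz}MHz")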

[Screenshot: GPU Boost 3.0 voltage/frequency curve in GPU Tweak II]


What does this mean for our experience?

It doesn’t matter what type of gamer you are; frame rate and latency are important. Whether you’re running across the war-torn fields of France wielding an M1892 rifle, or rallying through the forests of Finland in a four-door saloon, they’re the metrics for those of us who want either the best experience the industry has to offer, or a competitive edge. Whichever it is, there’s no denying that nudging the frequency slider for that little bit extra is a beckoning call, especially when it is so easy.

On this setup - in tandem with G-Sync - extracting performance through overclocking and keeping the GPU cool ultimately results in a smoother experience. For the 3440x1440 target resolution on my personal system, one 1080Ti copes admirably thanks to G-Sync. That said, the benefit of a second card would certainly be welcome in some titles.

Of course, most of you will be aware by now that FPS is not the only metric, and arguably not the most important one. The Alienware slogan was once “FPS is life”; we now know those numbers don't tell us the whole story. It's worth pointing out that these benchmarks were recorded with FRAPS, so we're seeing what's captured at the rendering level.

Oxizee wrote:
I am on +226 memory atm. GPU Core can't be upped (1733 seems to be the max). What can I do with the GPU Voltage slider? Any tips?


You'll have to try it and see. Every GPU is different in this regard. For instance, some may be stable at 2050MHz at 1.065v, whereas others may be unstable at that voltage yet stable at 1.050v at the same frequency. Sadly, there's no magic bullet with the adjustable voltage points.

Oxizee
Level 7
Does the voltage affect the GPU Core breathing room? So when I give more voltage, can I put the GPU Core a little higher again?

Edit: Upped GPU Voltage to max (+100). GPU still on +50 (1733) and Memory Clock on +250 (11260). When boosted, the GPU Core Clock maxes out at 1987, with an 11246 memory clock. Stable for now.

I just came across this thread and when I saw your clockspeeds I was quite surprised. My rog 1080ti of hits 2060 mhz on the core out of the box, is that normal? Cause you are talking about 2050 overcloked and mine goes higher without touching it.

Jorian123 wrote:
I just came across this thread and when I saw your clockspeeds I was quite surprised. My rog 1080ti of hits 2060 mhz on the core out of the box, is that normal? Cause you are talking about 2050 overcloked and mine goes higher without touching it.


I meant 1080 ti oc not of

Jorian123 wrote:
I just came across this thread and when I saw your clockspeeds I was quite surprised. My rog 1080ti of hits 2060 mhz on the core out of the box, is that normal? Cause you are talking about 2050 overcloked and mine goes higher without touching it.


That's how GPU Boost works until voltage and thermal limits are breached, and the bins are derived from ASIC quality. It's possible to hit higher clocks out of the box with some samples, but what's important is the average boost clock, which is why I've used the 3DMark stress test. As with most things of this nature, some samples are simply better than others.

Silent_Scone
Super Moderator
Worth noting from the above that voltage control is not enabled. You'll likely find that the reason setting a lower offset nets you better scores is that you're hitting a voltage limit.

Silent Scone wrote:
Worth noting from the above that voltage control is not enabled. You'll likely find that the reason setting a lower offset nets you better scores is that you're hitting a voltage limit.

Thanks, likely so, but I'm not touching that control yet since it wasn't included in Der8auer's video.

There's also the fact that the current OC parameters were stable for running Superposition and all the other benchmarks (FurMark, Valley, Heaven, Time Spy, and Fire Strike), but not under Heaven when running it long enough. After 20 minutes, Heaven started showing some artifacts, even though it wasn't crashing. So I started lowering the settings to find a Heaven-stable config. I'll post about it shortly.

Following up on my comment above. The previous settings ran the benchmarks problem-free, but when stressing with Heaven for long enough (more than 20 minutes) there were artifacts: hiccups and second-long black screens. Unigine recovered after those rather than crashing outright, but I stopped the run shortly afterwards anyway.

Lowered and kept testing:
GPU Clock: +100
Mem Clock: +450

Those are now my rock-solid, Heaven-stable OC settings, even for runs of more than an hour. A Mem Clock of +460 produced a Heaven score identical to +450, so I stayed with the lower number.

The card now reaches a score of up to 9941 in Superposition 4K, though most runs landed between 9920 and 9940, so the overall performance improvement is ~9.5%.

As mentioned by Silent Scone, note that this is a stable overclock that does not include any changes to the core voltage control.

I have the non-OC version, boost coming in at 1860MHz.

I currently have it at +105 core and +400 mem, which gives me a 1999.5 core and 5899 mem.

These values are stable in gaming, Heaven and Valley.

What I'm finding, though, is that during Heaven, Valley or gaming the core drops to about 1974 and sometimes even lower during a Heaven/Valley run.

Does this mean the +105 is not stable and it's downclocking to a stable OC?

I have the voltage set to 100% and the power limit to 120%.

I'd love some help in understanding this.

Thanks

xPepi wrote:
I have the non-OC version, boost coming in at 1860MHz.

I currently have it at +105 core and +400 mem, which gives me a 1999.5 core and 5899 mem.

These values are stable in gaming, Heaven and Valley.

What I'm finding, though, is that during Heaven, Valley or gaming the core drops to about 1974 and sometimes even lower during a Heaven/Valley run.

Does this mean the +105 is not stable and it's downclocking to a stable OC?

I have the voltage set to 100% and the power limit to 120%.

I'd love some help in understanding this.

Thanks


The boost clock will settle depending on voltage and temperature conditions, and clock speed is more sensitive to temperature on Pascal than on previous generations of GPU. Increasing the voltage slider unlocks the higher voltage points on the card; however, these aren't of any benefit if temperature conditions don't allow. Hence you can see the boost clock drop as the card gets warmer.