Silent_Scone
Super Moderator

Throughout its product cycle, the RTX 4090 has been praised for pushing the boundaries of what's possible in graphics technology. Powered by NVIDIA's Ada Lovelace architecture, the RTX 4090 boasts a remarkable leap in performance over its predecessors, leveraging advanced technologies like DLSS 3 and real-time ray tracing. With 24GB of GDDR6X memory, 16,384 CUDA cores, and a boost clock that can exceed 2.5 GHz, the RTX 4090 offers extraordinary performance. Nearly two years on, it still holds the performance crown by a substantial margin over the competition.

On a personal level, the RTX 4090 is probably one of my all-time favourite GPUs. The only iteration I can relate it to is NVIDIA's G80, or more specifically the 8800 Ultra. That absolute powerhouse was the first card I remember being able to slam games with 4x to 8x MSAA (multisample anti-aliasing), and it would just take it as if it were nothing. Ada, to me, delivers that same gob-smacking level of performance. It's easy to overlook certain things these days in an industry that's rife from top to bottom with bleeding-edge tech, but the fact you can run Cyberpunk 2077 in a fully path-traced mode is nothing short of insanity.


"I’ve Heard It’s a Power Hog!"

Interestingly, comments can still be found online, even today, ridiculing the 4090 for its power draw, painting it as a power-hungry beast that demands far more electricity than it's worth. While it's true that the RTX 4090 can draw a significant amount of power when pushed to its limits, sometimes over 500 W on the stock TUF BIOS, this aspect is often misunderstood or misrepresented. In reality, the 4090's default power targets sit well beyond its efficiency sweet spot and can be misleading: reduce the power target and, in many scenarios, you sacrifice very little performance while gaining a substantial power saving.

Many users, especially those new to high-end hardware, assume that higher power limits translate directly into better performance. While this is true up to a point, the gains diminish rapidly: as the power limit increases, the performance improvements plateau. Pushing the card to its maximum power draw yields minimal FPS gains in games and only slight improvements in professional applications, and is often not worth the additional power consumption and heat. Conversely, the energy savings from a reduced power target can add up over time for gamers and professionals who run their systems for extended periods. That translates to lower electricity bills, and a smaller environmental footprint if you're bothered about such things!
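If you want to try this yourself, the power target is just a software setting. The snippet below is a minimal sketch (not part of the testing in this article) that uses NVIDIA's NVML via the pynvml Python bindings to read the card's default power limit and apply a 70% target. It assumes GPU index 0 and needs admin/root rights; MSI Afterburner or GPU Tweak III will do the same job from a GUI.

```python
# Sketch: apply a 70% power target via NVML (pip install nvidia-ml-py).
# Assumes GPU index 0; setting the limit requires admin/root rights.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetPowerManagementDefaultLimit,
    nvmlDeviceGetPowerManagementLimitConstraints,
    nvmlDeviceSetPowerManagementLimit,
)

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)

    default_mw = nvmlDeviceGetPowerManagementDefaultLimit(gpu)  # milliwatts
    min_mw, max_mw = nvmlDeviceGetPowerManagementLimitConstraints(gpu)

    target_mw = int(default_mw * 0.70)                # 70% power target
    target_mw = max(min_mw, min(target_mw, max_mw))   # clamp to valid range

    nvmlDeviceSetPowerManagementLimit(gpu, target_mw)
    print(f"Power limit: {default_mw / 1000:.0f} W -> {target_mw / 1000:.0f} W")
finally:
    nvmlShutdown()
```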

In essence, the RTX 4090’s immense performance has reshaped the expectations and timelines within the GPU industry. Its ability to remain relevant and dominant has allowed NVIDIA to extend the gap between the 4000 and 5000 series, providing them with the opportunity to innovate at their own pace. As a result, the next generation of graphics cards is likely to bring even more groundbreaking advancements, making the wait worthwhile for enthusiasts and professionals alike.


Finding the Sweet Spot: Balancing Power and Performance

We’re starting from a baseline of a 70% power target and exploring the benefits and trade-offs as we increase it to 100% and 133% (against the 4090's 450 W reference limit, that's roughly 315 W, 450 W, and 600 W respectively). Additionally, we’ll explore the performance impact of pushing the card even further with an overclock and a voltage increase: a +285 MHz offset on the core and just over +1000 MHz on the memory.

NVIDIA began restricting the maximum voltage on the RTX 4090 with the introduction of the AD102-301 chip variant, which limits the core voltage to 1.07 V, compared with the 1.1 V allowed by the earlier AD102-300 variant. For this reason, we won't exceed 1.07 V.
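To see where your own card lands while testing, it's worth logging power draw and core clock as the benchmarks run. As a hypothetical example (again using pynvml, with GPU index 0 assumed), the sketch below polls both once a second; GPU-Z or HWiNFO will log the same sensors.

```python
# Sketch: poll GPU power draw and core clock once a second while a
# benchmark runs (pip install nvidia-ml-py). GPU index 0 is assumed.
import time
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetPowerUsage, nvmlDeviceGetClockInfo, NVML_CLOCK_GRAPHICS,
)

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)
    for _ in range(60):                                  # log for one minute
        watts = nvmlDeviceGetPowerUsage(gpu) / 1000      # NVML reports mW
        core_mhz = nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_GRAPHICS)
        print(f"{watts:6.1f} W  {core_mhz:5d} MHz")
        time.sleep(1)
finally:
    nvmlShutdown()
```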

1440p Power Target Impact

Shadow of the Tomb Raider

[Charts: Shadow of the Tomb Raider, 1440p power target scaling]

Wukong

[Charts: Wukong, 1440p power target scaling]

Forza Motorsport 

[Charts: Forza Motorsport, 1440p power target scaling]

Cyberpunk 2077

[Charts: Cyberpunk 2077, 1440p power target scaling]

At 1440p, reducing the RTX 4090's power target from 100% to 70% cuts power consumption by 7.81% on average, while performance drops by only 1.36%. Efficient power, negligible performance loss.

4K Power Target Impact

Shadow of the Tomb Raider

[Charts: Shadow of the Tomb Raider, 4K power target scaling]

Wukong

[Charts: Wukong, 4K power target scaling]

Forza Motorsport

[Charts: Forza Motorsport, 4K power target scaling]

Cyberpunk 2077

[Charts: Cyberpunk 2077, 4K power target scaling]

3DMark: Steel Nomad

[Charts: Steel Nomad power target scaling]

At 4K, across the batch of tests, reducing the RTX 4090's power target from 100% to 70% results in a substantial power saving of around 21.98%, with only a minor performance decrease of approximately 5.76%. Significant efficiency, minimal performance sacrifice.
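Put in performance-per-watt terms, those averages make the 70% target look even better. A quick back-of-the-envelope check, using only the figures quoted above:

```python
# Back-of-the-envelope perf-per-watt from the averages quoted above.
# Relative efficiency = (relative performance) / (relative power).
def perf_per_watt_gain(perf_drop: float, power_saving: float) -> float:
    """Efficiency gain of the 70% target vs. 100%, inputs as fractions."""
    return (1 - perf_drop) / (1 - power_saving) - 1

print(f"1440p: {perf_per_watt_gain(0.0136, 0.0781):+.1%}")  # ≈ +7.0%
print(f"4K:    {perf_per_watt_gain(0.0576, 0.2198):+.1%}")  # ≈ +20.8%
```

In other words, at 4K the card delivers roughly 21% more frames per watt at the 70% target than at stock.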

Considering the minimal impact on performance when lowering the power target, combined with the significant power savings, it becomes clear that the RTX 4090 is more than just a powerhouse: it's incredibly efficient below 350 W. For those who want incredible performance but are worried about either their environmental footprint or skyrocketing electricity bills, don't fret.