Silent_Scone
Super Moderator

 

AI rendering - What is "real"?

If you've spent time in local hardware groups or on social media, you've likely seen terminology such as "fake frames" pop up in discussions about NVIDIA's new frame generation technology. I've always found this perspective a little puzzling. What exactly defines a "real" frame, anyway? Does it truly matter if a frame isn't created through the traditional rendering pipeline, as long as the final result looks identical, or at least indistinguishably close, to the viewer? In my personal experience since the launch of the RTX 4090, I've rarely found a scenario where enabling DLSS impedes the experience enough that it isn't worth the trade. That said, early implementations of frame generation did suffer from a noticeable latency increase (The Witcher 3 remaster, for example), and even casual observation makes it clear that not all implementations are created equal from game to game. As a rendering technology, however, latency should be at the forefront of NVIDIA's concerns when introducing more frames into the pipeline, and judging by what we've seen thus far, they've done a lot to address it. NVIDIA has already gone on record to say that 80% of NVIDIA GPU users turn DLSS on, which is a bold claim indeed.

Interestingly, the concept of "fake" seems to hold far less weight in the handheld gaming space, especially with the growing popularity of devices like the ROG Ally. In this arena, AMD's APU takes centre stage, squeezing every drop of performance to handle the latest AAA games while leveraging cutting-edge technologies like AMD Fluid Motion Frames (FMF). In this context, innovations that enhance playability, whether labelled "real" or not, aren't just nice-to-haves; they're essential. After experiencing FMF2 firsthand, I can confidently say it's a transformative feature. Running Spider-Man, a game originally remastered for PC and PS5, at 1080p and 90 frames per second on a handheld device feels nothing short of miraculous.

Just a few years ago, if someone had told me we’d have playable, fully path-traced games within the next decade, I’d have thought they were dreaming. Yet here we are. Cyberpunk 2077’s recent updates have made that a reality. Achieving full path tracing at 4K and beyond requires an extraordinary level of rendering power—something that, for now, only NVIDIA’s frame generation can deliver. And such an achievement is certainly nothing to scoff at, regardless of how the experience is delivered.

 


DLSS Timeline

When NVIDIA first introduced Deep Learning Super Sampling (DLSS) in 2018 alongside the RTX 20-series GPUs, the goal was clear: leverage the power of machine learning to enhance gaming performance and image quality simultaneously. At its core, DLSS was designed to solve a pressing challenge in modern gaming—delivering high resolutions and advanced graphics effects, like ray tracing, without overwhelming the GPU.

DLSS has come a long way since its initial release. The original DLSS 1.0 was a bold attempt, but its results were often inconsistent, with limited game support and visual artifacts dampening its reception. However, DLSS 2.0 marked a significant turning point. AI-driven temporal upscaling achieved a balance of sharp visuals and higher frame rates, transforming how gamers experienced high-resolution and demanding titles.

Then came DLSS 3.0, which introduced a controversial new feature: frame generation. By injecting AI-generated frames into the rendering pipeline, NVIDIA enabled smoother gameplay even when hardware couldn't keep up with traditional frame rendering. While frame generation demonstrated its potential to enhance perceived performance, it also drew criticism from some gamers and purists who labelled these "fake frames" a departure from true rendering fidelity. Questions about input lag and visual authenticity further fuelled the debate.

Now, with DLSS 4.0 on the horizon, NVIDIA faces both anticipation and scrutiny. Can it build on its previous successes while addressing the criticisms surrounding AI-driven techniques? The evolution of DLSS is as much a story of technological breakthroughs as it is about the shifting expectations of what makes gaming "real."

DLSS 3, while revolutionary, struggles to deliver consistently better-than-native image quality. Artifacts with frame generation are often more noticeable than the fine-detail issues seen in DLSS 2. One of the most glaring problems occurs with UI elements, since frame generation attempts to reconstruct the entire image, including text and vector graphics; this is something DLSS 4 aims to address head-on.

 


Vision Transformer Model vs. CNN

  • DLSS 3.0: Utilized a convolutional neural network (CNN) to process localized pixel data and track changes over successive frames.
  • DLSS 4.0: Introduces a vision transformer model, which evaluates the importance of every pixel across the entire frame and over multiple frames. This approach enables greater stability, sharper detail, and improved anti-aliasing.

Key Difference: DLSS 4.0 employs a more advanced AI model that processes frames holistically, as opposed to the localized analysis used in DLSS 3.0.
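
To make that architectural difference concrete, here is a minimal PyTorch sketch, purely my own illustration and not NVIDIA's actual network: a convolution mixes information only within a small neighbourhood of each pixel, while self-attention lets every pixel weigh every other pixel in the frame in a single step.

```python
import torch
import torch.nn as nn

# Toy "frame": batch of 1, 8 feature channels, 32x32 pixels.
frame = torch.randn(1, 8, 32, 32)

# CNN-style pass (DLSS 2/3 era): each output pixel only sees a
# 3x3 neighbourhood, so context spreads a few pixels at a time.
conv = nn.Conv2d(in_channels=8, out_channels=8, kernel_size=3, padding=1)
local_features = conv(frame)                      # (1, 8, 32, 32)

# Transformer-style pass (DLSS 4 era): treat each pixel as a token
# and let self-attention compare every pixel against every other
# pixel across the whole frame at once.
tokens = frame.flatten(2).transpose(1, 2)         # (1, 1024, 8)
attn = nn.MultiheadAttention(embed_dim=8, num_heads=2, batch_first=True)
global_features, weights = attn(tokens, tokens, tokens)

print(local_features.shape)   # torch.Size([1, 8, 32, 32])
print(weights.shape)          # torch.Size([1, 1024, 1024]): all pixel pairs
```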


 

Multi-Frame Generation vs. Single Frame

NVIDIA is hinging many of the performance claims from its CES showcase on the new technologies behind multi-frame generation. In the DLSS 3.0 implementation, a single synthesised frame was interpolated between each pair of rendered frames; DLSS 4.0 allows for up to three generated frames in the pipeline.

 

  • DLSS 3.0: Supported single-frame generation, interpolating one frame between rendered frames.
  • DLSS 4.0: Expands this capability to interpolate two or three frames between rendered frames, providing a more significant boost in perceived frame rates.

Key Difference: DLSS 4.0 introduces multi-frame generation, a leap forward from the single-frame approach of DLSS 3.0.
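
As a back-of-the-envelope illustration of the arithmetic (my own numbers, not NVIDIA's benchmarks), the theoretical ceiling on output frame rate scales with the number of generated frames per rendered frame:

```python
def perceived_fps(rendered_fps: float, generated_per_rendered: int) -> float:
    """Upper bound on output frame rate when every rendered frame is
    followed by N AI-generated frames (ignores overhead and pacing)."""
    return rendered_fps * (1 + generated_per_rendered)

base = 60  # frames actually rendered per second
print(perceived_fps(base, 1))  # DLSS 3 style, one generated frame: 120.0
print(perceived_fps(base, 3))  # DLSS 4 "4x" mode, three generated: 240.0
```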



Ray Reconstruction Enhancements

  • DLSS 3.0: Lacked a dedicated AI denoiser for ray-traced effects; ray reconstruction only arrived later, with DLSS 3.5.
  • DLSS 4.0: Upgrades ray reconstruction to the new transformer model, upscaling ray-traced lighting, reflections, and shadows with greater clarity and fewer visual artifacts.

Key Difference: DLSS 4.0 explicitly improves ray-traced elements, making ray tracing more realistic and further reducing artifacts.
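
For context on what ray reconstruction replaces: real-time ray tracing fires only a handful of rays per pixel, leaving a noisy image that traditional pipelines clean up with hand-tuned denoisers. The sketch below shows one classic heuristic, temporal accumulation, in deliberately simplified form; ray reconstruction's pitch is to swap this kind of heuristic for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_ray_traced_frame(truth: np.ndarray) -> np.ndarray:
    """Simulate a low-sample-count ray-traced frame: truth plus noise."""
    return truth + rng.normal(scale=0.3, size=truth.shape)

def temporal_accumulate(history: np.ndarray, frame: np.ndarray,
                        alpha: float = 0.1) -> np.ndarray:
    """Hand-tuned denoising: blend each new noisy frame into an
    accumulated history. Small alpha = smoother but more ghosting."""
    return (1.0 - alpha) * history + alpha * frame

truth = np.ones((4, 4))          # a flat, evenly lit surface
history = noisy_ray_traced_frame(truth)
for _ in range(60):              # accumulate over 60 frames
    history = temporal_accumulate(history, noisy_ray_traced_frame(truth))

print(np.abs(history - truth).mean())  # residual noise shrinks toward 0
```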


Backwards Compatibility

  • DLSS 3.0: Limited compatibility; frame generation required an RTX 40-series GPU.
  • DLSS 4.0: The upgraded models are backwards compatible with existing and future DLSS integrations (75 games and apps at launch), enhancing visuals in titles using DLSS 2.0 or higher. However, multi-frame generation is exclusive to the RTX 50-series GPUs.

Key Difference: DLSS 4.0’s core improvements are accessible to a broader range of GPUs, while some features remain exclusive to newer hardware.
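
Condensing that support matrix into a lookup (my paraphrase of NVIDIA's announcement, not an official spec sheet):

```python
# Feature availability by GPU generation, as described above.
# Treat this as a reading of the announcement, not a spec sheet.
DLSS4_FEATURES = {
    "super_resolution_transformer": {"RTX 20", "RTX 30", "RTX 40", "RTX 50"},
    "ray_reconstruction_transformer": {"RTX 20", "RTX 30", "RTX 40", "RTX 50"},
    "frame_generation": {"RTX 40", "RTX 50"},
    "multi_frame_generation": {"RTX 50"},
}

def supported_features(gpu_generation: str) -> list[str]:
    return [f for f, gens in DLSS4_FEATURES.items() if gpu_generation in gens]

print(supported_features("RTX 30"))  # transformer models, no frame gen
print(supported_features("RTX 50"))  # everything, including multi-frame
```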

 

Latency and Responsiveness

  • DLSS 3.0: Relied on NVIDIA Reflex to offset latency introduced by single-frame generation.
  • DLSS 4.0: Introduces frame-pacing technology in the RTX 50-series hardware to ensure smoother and more consistent frame delivery, even with multi-frame generation.

Key Difference: DLSS 4.0 adds hardware-level enhancements for frame pacing, maintaining responsiveness despite generating multiple frames. Using Cyberpunk 2077, NVIDIA demonstrated that running DLSS 4.0 in 4x mode (three generated frames injected into the pipeline) carries only a very small latency penalty.
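
To see why pacing matters, consider where the generated frames need to land in time. This is my own illustration of the scheduling problem, not NVIDIA's hardware implementation: generated frames must be spaced evenly inside each render interval, and any drift reads as stutter.

```python
def present_times(render_interval_ms: float, generated: int) -> list[float]:
    """Evenly space N generated frames inside one render interval.
    Uneven spacing here is what players perceive as stutter, which is
    why DLSS 4 moves pacing into the RTX 50-series display hardware."""
    step = render_interval_ms / (generated + 1)
    return [round(step * i, 2) for i in range(1, generated + 2)]

# At 60 rendered FPS the render interval is ~16.67 ms.
print(present_times(16.67, 1))  # DLSS 3, 2x: [8.34, 16.67]
print(present_times(16.67, 3))  # DLSS 4, 4x: [4.17, 8.34, 12.5, 16.67]
```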

[Image: latency comparison, DLSS 2 vs. DLSS 3.5 vs. DLSS 4]


Future Scalability

  • DLSS 3.0: Delivered iterative improvements to its AI model but was limited by its CNN-based architecture.
  • DLSS 4.0: Built on a scalable vision transformer model, designed to deliver ongoing improvements through better AI training and refinement.

Key Difference: DLSS 4.0 is built for continuous evolution, offering greater potential for future enhancements than the iterative updates seen with DLSS 3.0.

 

Lower memory usage

NVIDIA has also emphasized that DLSS 4's frame generation has the potential advantage of reducing memory usage, which makes it particularly appealing for lower-end SKUs with limited VRAM. NVIDIA is currently claiming as much as a 30% reduction. This improvement may stem from the new transformer model, which could offer efficiency benefits over the traditional CNN approach, though the figures will need to be validated at launch.
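
As a quick sanity check on what a 30% saving would mean in practice (illustrative numbers only, not measured figures):

```python
def frame_gen_vram_after(current_mb: float, reduction: float = 0.30) -> float:
    """Hypothetical VRAM cost of frame generation after NVIDIA's
    claimed ~30% reduction; actual savings need validating at launch."""
    return current_mb * (1.0 - reduction)

# If frame generation cost ~1 GB of VRAM under DLSS 3 (illustrative):
print(frame_gen_vram_after(1024))  # ~716.8 MB under the claimed saving
```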

 


 

Time for a new perspective?

Ultimately, however these new performance innovations are delivered, the benefits of DLSS are becoming increasingly hard to overlook. When it comes to evaluating its impact, image quality and latency remain the two critical factors. As AI-generated frames inch closer to being indistinguishable from rasterized ones, the argument against so-called "fake frames" grows weaker. Only time will tell how far this technology can go!