Nvidia RTX DLSS: Everything you need to know
Alongside the fastest graphics cards ever built for consumers, Nvidia's Turing generation of GPUs also introduced some intriguing new features for gamers everywhere. Ray tracing is the easiest to wrap your head around, but deep learning supersampling, or DLSS, is a little more nebulous.
Even if it's more complicated to understand, though, DLSS has the potential to be the greatest feature of Nvidia's 2000-series graphics cards, improving visuals and increasing performance in the same breath. To help you understand just how it works, here's our guide to everything you need to know about Nvidia's RTX DLSS technology, so you can decide whether it's enough of a reason to upgrade to a new RTX 2080 or 2080 Ti.
What is DLSS?
Nvidia hasn't been very clear about what deep learning supersampling actually is, but it has provided a few broad-strokes descriptions. It revealed the following in its breakdown of the Turing architecture:
“DLSS leverages a deep neural network to extract multidimensional features of the rendered scene and intelligently combine details from multiple frames to construct a high-quality final image. DLSS uses fewer input samples than traditional techniques such as temporal anti-aliasing (TAA) while avoiding the algorithmic difficulties such techniques face with transparency and other complex scene elements.”
Aliasing creates the jagged edges you see on objects in a scene, and the process known as "anti-aliasing" helps mitigate that effect. All of its different forms, whether multi-sampling, fast approximate, or temporal, work by approximating what should appear in the gaps between pixels. DLSS works a little like that, but that's not the whole picture.
DLSS also leverages some form of supersampling to arrive at its eventual image. That involves rendering content at a higher resolution than it was originally intended for and using that extra information to create a better-looking image. But supersampling typically results in a big performance hit, because you're forcing your graphics card to do a lot more work. DLSS, however, appears to improve performance.
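To make the idea concrete, here's a minimal sketch of classic supersampling, written in Python with NumPy purely for illustration (the function name and resolutions are our own, not anything Nvidia has published): shade more pixels than the display needs, then average them down, which smooths jagged edges at the cost of a lot of extra GPU work.

```python
import numpy as np

def supersample_downscale(hi_res_frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Average factor x factor blocks of an oversized render down to the
    target size -- the basic idea behind supersampling anti-aliasing."""
    h, w, c = hi_res_frame.shape
    blocks = hi_res_frame.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

# Stand-in for a frame shaded at 4K so it can be averaged down to 1080p.
# The GPU does four times the shading work of a native 1080p frame, which
# is where the usual performance hit comes from.
rendered = np.random.rand(2160, 3840, 3)
final = supersample_downscale(rendered, factor=2)
print(final.shape)  # (1080, 1920, 3)
```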
What does DLSS actually do?
DLSS provides a better-looking overall image without using as much of the graphics card's main processing capability as traditional anti-aliasing techniques. In theory, that should free up those resources for additional rendering tasks, opening up the possibility of greater detail levels or higher frame rates, depending on the preferences of the user.
The technical director of Final Fantasy XV recently claimed that DLSS has a dramatic impact on performance when applied to the game, effectively giving gamers a better-looking and higher-frame-rate experience. We haven't seen any examples of this outside of that benchmark and Epic's Infiltrator demo yet. Final Fantasy XV isn't considered to have a great implementation of other AA techniques like TAA, either, but it's still encouraging.
With no actual games available to test these claims yet, it's hard to nail down exactly what DLSS is doing, but some have taken steps to figure it out all the same. Tom's Hardware performed a relatively detailed investigation using the Final Fantasy XV benchmark, which utilizes DLSS. The site was able to force the demo to run without anti-aliasing, providing a unique insight into how the demo compares with DLSS, with TAA, and with no anti-aliasing at all.
The results were far from cut and dried, with some instances where DLSS looked better than TAA and some where no anti-aliasing at all seemed preferable. In most cases, though, DLSS performed best at 4K resolution and seemed to improve image quality the longer a scene ran and the more information it had to draw from to create its composite images.
The testing also found that a scene running at 4K with DLSS enabled actually produced higher frame rates than the same scene with no anti-aliasing at all, providing a hint of what DLSS might be doing under the hood.
How does DLSS work?
Nobody outside Nvidia is quite sure how DLSS works just yet, but we do have some hints. Going by Nvidia's description, we could surmise that it uses AI to render the final image seen by gamers, drawing from multiple rendered frames to construct an altogether cleaner image with less overhead. But it's not as simple as that.
Tom's Hardware's testing suggests that DLSS might actually be rendering a scene at a lower resolution than it was set to, then upscaling certain elements of it to give the impression of a better overall image. That would explain the higher frame rates when DLSS was enabled versus no AA solution at all, and could be why TAA occasionally results in a better-looking image: with TAA, the game is technically rendering at a higher native resolution.
It also appears that DLSS might employ some measure of anti-aliasing as part of its rendering process, which would explain why games that already support TAA have been the first to offer it as a feature.
It may be that with DLSS, what we're looking at is a real-time implementation of Nvidia's screenshot-enhancing Ansel technology. It renders the image at a lower resolution to provide a performance boost, then applies various effects — including, it seems, anti-aliasing — to deliver an overall effect relatively comparable to raising the resolution. The tensor-core-powered AI component is effectively inferring how the final image should look and creating that from its lower-resolution source.
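If that reading is right, the flow would look something like the sketch below: shade fewer pixels, then reconstruct a full-resolution frame. This is a hypothetical illustration in Python with NumPy; the nearest-neighbour upscale is a crude stand-in for the trained network Nvidia is believed to run on the Tensor cores, and the function name and resolutions are ours, not Nvidia's.

```python
import numpy as np

def upscale_nearest(low_res: np.ndarray, factor: int = 2) -> np.ndarray:
    """Naive nearest-neighbour upscale. In the hypothesized DLSS pipeline,
    this step would instead be a neural network inferring the detail a
    native-resolution render would have produced."""
    return low_res.repeat(factor, axis=0).repeat(factor, axis=1)

# Hypothetical frame loop: render internally at 1080p (a quarter of the
# pixels of native 4K), then reconstruct a 4K-sized output frame.
low_res_frame = np.random.rand(1080, 1920, 3)
output_frame = upscale_nearest(low_res_frame, factor=2)
print(output_frame.shape)  # (2160, 3840, 3)
```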
Uncertain, but intriguing
DLSS still isn't fully understood. We'll need to see a number of additional real-time gaming examples to truly tell what it does, how it does it, and whether it's something worth upgrading our hardware for. Even at this early stage, though, it is intriguing.
Deep learning supersampling has the potential to give gamers who can't quite reach comfortable frame rates at resolutions above 1080p the ability to do so through inference. If that turns out to be true, DLSS could end up being the most impactful feature of the new generation of Nvidia's RTX Turing cards. The cards aren't as powerful as we might have hoped, and the ray tracing effects are pretty but could take a big toll on performance, yet DLSS could give us the best of both worlds: better-looking games that perform better, too.
The best place for this kind of technology could be in lower-end cards like the rumored GTX 2060 or GTX 2050. If DLSS gives them the ability to render at higher resolutions and detail levels than their GPU core and memory would typically allow, that could make them very desirable, especially considering the inflated price of the higher-end alternatives.
However, the problem remains that this is an Nvidia technology that requires new hardware and compatible software. At this time, the latter component is notably absent. The list of games that will introduce the feature currently sits at 25, which is great if you plan to play ARK: Survival Evolved, We Happy Few, Darksiders III, or PUBG, among others, but it does leave us a little concerned about the future of DLSS.
It could be that in a year or two it's a commonplace feature in most games, thanks to its ease of implementation and the dominance of RTX GPUs in gamers' systems. But if game developers don't implement DLSS en masse, it may end up as something far more niche and unsupported. It could end up like the often (and surprisingly) controversial Nvidia HairWorks: nice to have, but not a must-have feature.