When the graphics card knows better what the game should look like. DLSS 2.0 is hard to distinguish from witchcraft

Artificial intelligence in games has so far been associated with simulating our opponents. However, it is increasingly used to generate graphics. The latest algorithms are almost inexplicably good.

GeForce RTX chips appear to be the biggest technical leap in graphics cards since the very first GeForce, the GeForce 256. That legendary card was the first to introduce Transform & Lighting (T&L), relieving the computer's central processor of almost all graphics calculations, rather than accelerating only part of the rendering process, which was the specialty of the 3dfx cards competing with Nvidia's.

The next significant revolution was the introduction of shaders to graphics cards, although that was not entirely Nvidia's own idea. Shaders allowed pixels to be manipulated programmatically, making it possible to generate complex visual effects at a relatively low performance cost.

And the rest? Many will point to solutions such as HairWorks, G-Sync and plenty of others that improved how games were received. Still, I do not think I would do Nvidia an injustice by saying that after T&L and the introduction of shaders, progress in graphics chips mostly came down to growing raw computing power.

GeForce RTX proves that Nvidia can still call itself an innovation leader.

The RTX chips introduced to the market almost two years ago offered much more than just even higher computing power and relative trinkets such as the hair simulations mentioned above or synchronizing frames with the monitor. Nvidia included technologies that can boldly be called groundbreaking, at least on the consumer market.

It was the RTX chips that brought ray tracing to the masses: an image rendering technique in which light is fully simulated (which significantly simplifies building the game and, as a result, increases its visual realism). They also introduced Variable Rate Shading, which dynamically varies the level of detail across the image, as well as Mesh Shaders and Sampler Feedback, which allow far-reaching optimizations in the rendering process.

As of today, Nvidia is the only manufacturer offering hardware-level implementations of these techniques. That will certainly change, as the competition is not asleep: AMD, the technical partner of Sony and Microsoft, will provide its own implementations of these techniques within universal APIs such as DirectX 12 Ultimate. How it will implement them remains a mystery, while the RTX cards have been offering them for a long time. This means Nvidia has almost certainly been working on new technologies for a while now, and they will probably appear soon in its next graphics cards.

The biggest revolution, however, is DLSS. This algorithm was originally created to support the ray tracing process.

One might wonder: why is ray tracing itself not the revolution? The question would be very reasonable, since it is ray tracing that will ultimately have the greatest impact on the visual quality of games. For now, though, we are seeing only the first steps of this technique on devices consumers can afford. Ray tracing was developed many decades ago. So why is it only now showing up in our computers?

Tracing light rays, i.e. simulating their paths, their reflections and how they interact with objects of different textures and reflectivity, is an extremely complex task that requires enormous computing power. It is a huge effort even for the fastest GeForce RTX cards on the market, even though they contain computational cores dedicated exclusively to accelerating ray tracing.
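To see why, it helps to look at the arithmetic. A single ray-object intersection test is cheap, but it has to be repeated for every pixel, every sample and every bounce. The toy Python sketch below (an illustration only, not how a real renderer is structured, with assumed sample and bounce counts) tests a ray against one sphere and counts how many such tests a single 4K frame would need:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance to the nearest intersection of a ray with a
    sphere, or None if the ray misses. Solves |o + t*d - c|^2 = r^2."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# Even this single test must run once per pixel, per sample, per bounce.
# At 4K with, say, 2 samples per pixel and 3 bounces that already means:
rays = 3840 * 2160 * 2 * 3
print(rays)  # ~50 million intersection tests per frame, for one sphere
```

A real scene contains millions of triangles rather than one sphere, which is why even dedicated ray-tracing cores struggle to do this for the whole image at playable frame rates.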

DirectX Raytracing

No wonder that, so far, the games that apply ray tracing to almost the entire rendering process are productions like Quake II or Minecraft. More visually complex titles apply ray tracing only to certain elements of the image, for example to simulate light reflections in pools of liquid, while shading the rest with simplified methods. Even that has a significant impact on image quality, but the path to full ray tracing is still long. Worse, even this partial simulation puts a heavy strain on the graphics chip.

So Nvidia developed a very unusual solution. It is called DLSS (Deep Learning Super Sampling), and its purpose is to free up GPU resources. Using deep learning, it can reconstruct a lower-resolution image to the native resolution. In other words: in theory the game is rendered at a lower resolution, which significantly lightens the load on the graphics card, and DLSS makes the end result look as if the game had been producing frames at native resolution from the very beginning.
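The savings DLSS aims for follow directly from pixel counts. A quick back-of-the-envelope calculation makes the scale of the relief clear:

```python
# Back-of-the-envelope: how much shading work rendering at a lower
# resolution saves before reconstruction to native size.
resolutions = {
    "Full HD": (1920, 1080),
    "1440p":   (2560, 1440),
    "4K":      (3840, 2160),
}

native = resolutions["4K"][0] * resolutions["4K"][1]
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels, {native / pixels:.2f}x fewer than 4K")
```

A frame rendered in Full HD therefore asks the GPU to shade only a quarter of the pixels of a native 4K frame; filling in the rest becomes the reconstruction step's job.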

How did it go? At first, DLSS did not impress.

The DLSS 1.0 technique made a great impression with its innovation and the very idea behind it. However, players and reviewers were divided on its effectiveness. DLSS 1.0 was available in only a few games, initially Battlefield V and Metro Exodus, and required training separately for each game, so it was impossible to switch the solution on for an arbitrary title. The results were also not that impressive.

DLSS 1.0 clearly worked. Using Tensor cores, first introduced with the Volta architecture, the algorithm applied supersampling (an anti-aliasing technique that smooths the jagged diagonal edges of polygons) corrected by a neural network trained for this purpose.

In theory the desired effect was achieved. The benefits of the GPU rendering significantly fewer pixels are obvious, and the performance gained can be spent on more accurate ray tracing (or on other purposes). The visual result, however, was less impressive. DLSS 1.0 struggled with complex geometric patterns, producing unwanted on-screen artifacts. Many players and testers concluded that, while the idea itself was great and its implementation impressive, simple upscaling might be the better choice.

Fortunately, Nvidia does not rest on its laurels. I still don't understand how DLSS 2.0 can work this well.

Before I justify the paean in the headline above, first a handful of technicalities. DLSS 2.0 was trained on DGX-1 servers using a slightly different method than the first-generation algorithm. Nvidia trained the neural network by comparing graphically perfect game frames, rendered on a supercomputer, with their low-resolution counterparts. This information is stored in the card's driver.

DLSS 2.0 uses new temporal feedback techniques and a new, faster AI model that uses the Tensor cores more efficiently, performing its task twice as fast as the previous version. This improves the frame rate and removes the previous limitations on supported GPUs, settings and resolutions. It also no longer requires training the algorithm separately for each game. The mechanism only needs information from the game engine (so the game must support it) about motion vectors, which describe the direction in which objects in the scene move from frame to frame.
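To illustrate what motion vectors buy a temporal algorithm, here is a deliberately simplified Python sketch. It merely reprojects pixels from the previous frame along the motion vectors; DLSS 2.0 feeds this kind of history into a neural network rather than copying colors directly, so treat this as a conceptual toy, not Nvidia's method:

```python
def reproject(prev_frame, motion_vectors):
    """For each pixel in the current frame, look up where that surface
    point was in the previous frame and reuse its value.
    prev_frame: 2D grid of values; motion_vectors: one (dx, dy) pair per
    pixel, pointing from the current pixel back to its previous position."""
    h, w = len(prev_frame), len(prev_frame[0])
    current = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = motion_vectors[y][x]
            px, py = x - dx, y - dy  # position in the previous frame
            if 0 <= px < w and 0 <= py < h:
                current[y][x] = prev_frame[py][px]  # history sample found
            # else: disocclusion -- no history, pixel must be shaded anew
    return current

# A 1x3 "frame" whose content moved one pixel to the right:
prev = [[10, 20, 30]]
vectors = [[(1, 0), (1, 0), (1, 0)]]
print(reproject(prev, vectors))  # [[None, 10, 20]]
```

The `None` entry shows why motion vectors alone are not enough: newly revealed pixels have no history, which is exactly where the trained network has to fill in plausible detail.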

The effects are amazing.

"Any sufficiently advanced technology is indistinguishable from magic," goes one of Arthur C. Clarke's three famous laws. And it is hard for me to comment any other way on the experiment conducted by Digital Foundry, the most reliable and respected outlet covering game technology.

The DF editors decided to test DLSS 2.0 in practice, using Control, a game that supports the mechanism. I do not want to retell the details of the test; I refer those interested to the original material. I would just like to highlight one observation that is crucial from our point of view: a Full HD image converted by DLSS 2.0 to Ultra HD showed more detail and higher graphical quality than the image rendered natively in Ultra HD. Let that sink in, and let me write it again: Nvidia's artificial intelligence produces better results than the game engine. It is absolutely amazing.

A demonstration of the power of DLSS 2.0 is Death Stranding, a game originally optimized for the graphics hardware of the PlayStation 4. Enabling the mechanism in this game delivers over 100 frames per second at 1440p, or over 60 fps at 4K, on every GeForce RTX graphics card. No wonder buyers of these cards receive it as a free add-on to test their computing power.

And extremely important.

The exploration game Deliver Us the Moon also supports DLSS 2.0 as well as ray tracing. On an RTX 2060 with ray tracing enabled and at Full HD resolution, it runs at an average of about 40 fps. With DLSS 2.0 enabled (i.e. when the game is rendered in HD and then reconstructed to Full HD by the algorithm), the frame rate rises to around 75, and that is with DLSS 2.0 set to its high-quality mode. DLSS 2.0 offers three image quality modes (Quality, Balanced, Performance) that control the rendering resolution. Performance mode can in theory multiply the frame rate with little effect on image quality.
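For context, the per-axis render scales commonly reported for the three modes are roughly 67, 58 and 50 percent; these figures are illustrative, not an official Nvidia specification. A short Python sketch shows what they mean for a 4K output:

```python
# Approximate per-axis render scales commonly reported for DLSS 2.0's
# quality modes (illustrative values, not an official specification).
modes = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

out_w, out_h = 3840, 2160  # target output resolution: 4K
for mode, scale in modes.items():
    w, h = round(out_w * scale), round(out_h * scale)
    shaded = scale * scale  # fraction of output pixels actually shaded
    print(f"{mode}: renders {w}x{h}, shades only {shaded:.0%} of the pixels")
```

In Performance mode the engine shades only a quarter of the output pixels, which is where the doubling (and more) of frame rates comes from.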

Considering that the challenges facing modern graphics cards are anything but small, DLSS 2.0 is truly revolutionary. Spoiled by new products and their marketing, we rightly expect comfortable play at Ultra HD resolution, at a minimum of 60 fps and, of course, with at least partial ray tracing. With traditional pixel-perfect rendering methods this is not yet possible. After all, it is the resolution, i.e. the number of pixels on screen, that determines the load on the graphics processor to the greatest extent.

DLSS 2.0 essentially solves this problem. The game engine can produce consecutive frames in Full HD or 1440p, with a quarter or less than half as many pixels to process, gaining a power reserve to spend on ray tracing or on a higher frame rate. The lost image detail is flawlessly filled in by DLSS 2.0, so we get smooth 4K gameplay on a graphics card whose price is not counted in tens of thousands of zlotys. And, as the tests show, DLSS 2.0 not only does not degrade quality, it even magically improves it.

GeForce RTX paved the way for the industry to follow.

The dream in which ray tracing is fully implemented in all games, at adequate fluidity and resolution, is still just that: a dream. For now, RTX chips have only opened the door to a new, better world in which light is simulated and the visuals gain unprecedented realism. As of today, they are the only processors able to take on this task at all.

If graphics innovation proceeded at its predictable pace, we would have to wait many more years and many generations of chips for ray tracing to be used for more than just key elements of the image. DLSS 2.0 and its subsequent versions will shorten that time significantly, and the performance it frees up can also be used for purposes other than ray tracing; just look at how it helps in the aforementioned Death Stranding. Today it makes games look beautiful and run quickly on graphics cards for the mass market. What's next? Given the unexpected effectiveness of DLSS 2.0, it is hard to predict. But there is certainly something to look forward to, and with quite a bit of excitement.

* The text was created in cooperation with morele.net


