
What is frame generation, and should you use it in your games?


Earlier this year, Nvidia announced its new 50-series line of GPUs with a hot new headline feature: "multi-frame generation." Building on earlier frame generation technology, these new GPUs let games create several video frames from a single frame rendered the normal way. But is that a good thing? Or are these just "fake frames"? Well, it's complicated.

At a very basic level, "frame generation" refers to the technique of using deep learning AI models to generate frames in between two images of a game rendered by the GPU. Your graphics card does the heavy lifting to create "frame one" and "frame three" based on 3D models, lighting, textures, and so on, and then the frame generation tools take those two images and make a best guess at what "frame two" should look like.
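To get a feel for the idea (and only the idea: Nvidia's actual models are trained neural networks that use motion data, not simple pixel blending), here's a toy sketch of producing an in-between frame from two rendered ones:

```python
# Toy illustration only: real frame generation uses trained neural networks
# and motion vectors, not a simple average. This just shows the core idea of
# guessing an in-between frame from two fully rendered ones.
import numpy as np

def naive_interpolate(frame_one: np.ndarray, frame_three: np.ndarray) -> np.ndarray:
    """Blend two rendered frames into a rough guess at the frame between them."""
    blended = (frame_one.astype(np.float32) + frame_three.astype(np.float32)) / 2
    return blended.astype(np.uint8)

# Two stand-in 1080p RGB frames (height x width x channels)
frame_one = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
frame_three = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)

frame_two = naive_interpolate(frame_one, frame_three)
print(frame_two.shape)  # (1080, 1920, 3)
```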

Multi-frame generation goes further. Instead of generating a single additional frame, it generates several. That means that at the highest settings, three out of every four frames can be generated rather than rendered directly. Whether that's a good thing, however, depends heavily on the kind of game you play and what you want your gaming experience to be.

What's the difference between upscaling and frame generation?

Nvidia's new multi-frame generation is part of its DLSS 4 announcement. DLSS stands for Deep Learning Super Sampling and, as the name suggests, its earlier iterations weren't about frame generation at all, but about upscaling (or super sampling).

In that version of the technology, the GPU renders a low-resolution version of a frame, say at 1080p, then upscales it to a higher resolution like 1440p or 2160p (4K). The "deep learning" in DLSS refers to training a machine learning model on each game individually to give the upscaler a better idea of what the higher-res frame should look like.
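The appeal is easy to see in the raw pixel counts. A quick back-of-the-envelope sketch (assuming the standard 16:9 versions of these resolutions):

```python
# How many pixels each resolution contains, and how much of a 4K image the
# GPU actually shades natively when it renders at 1080p and upscales.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}
for name, count in pixels.items():
    print(f"{name}: {count:,} pixels")

# Rendering at 1080p and upscaling to 4K means natively shading only a
# quarter of the pixels the display ultimately shows.
print(f"1080p / 4K = {pixels['1080p'] / pixels['4K']:.2f}")  # 0.25
```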

These days, DLSS refers more broadly to a whole suite of tools Nvidia uses to eke out better performance, and the method above is generally called Super Resolution. Frame generation, on the other hand, takes two complete frames and generates an entirely new frame between them from scratch.

Of course, it's also possible to use all of this technology at the same time. You can end up in situations where your GPU technically renders only one low-resolution frame for every two (or more, on the newest GPUs) full-resolution frames you see. If that sounds like a lot of extrapolation, well, it is. And, incredibly, it works pretty well. Most of the time.

When is frame generation useful?

In a relatively short time, the demands placed on GPUs have exploded. As mentioned above, 4K resolutions contain four times the pixel information of 1080p. On top of that, while media like film and television have stayed at a relatively consistent 24 to 30 frames per second, gamers more or less demand at least 60 frames per second as a baseline, and often push that even higher to 120fps, or 240fps on high-end machines. And don't get me started on the absurd Samsung monitor capable of hitting up to 500 frames per second.
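To put some rough numbers on that growth, here's a quick sketch of how many pixels a GPU would have to shade per second if every frame were fully rendered, with no upscaling or frame generation in the mix:

```python
# Raw pixel throughput at a couple of resolution/frame-rate combinations.
# Purely illustrative: real rendering cost depends on far more than pixel count.
def pixels_per_second(width: int, height: int, fps: int) -> int:
    return width * height * fps

baseline = pixels_per_second(1920, 1080, 30)   # 1080p at 30fps
modern = pixels_per_second(3840, 2160, 120)    # 4K at 120fps

print(f"1080p @ 30fps: {baseline:,} pixels/second")
print(f"4K @ 120fps:   {modern:,} pixels/second")
print(f"That's {modern / baseline:.0f}x the work.")  # 16x
```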

If your GPU had to calculate every pixel of a 4K image 120 (or 500) times every second, the resulting fire coming out of your PC would be visible from space, at least for games with the kind of detailed, ray-traced graphics we're used to from AAA titles.

From that point of view, frame generation isn't just useful, it's necessary. On the latest Nvidia GPUs, multi-frame generation can let a game push its frame rate to several hundred frames per second, even at 4K, while still looking pretty great. That's simply not a frame rate that's possible at that resolution without absurdly powerful hardware.

When it works (and we'll come back to that), frame generation can make for smoother motion and less eye strain. If you want a taste of the difference, this little tool lets you experiment with different frame rates (as long as your monitor supports them). Try comparing 30fps to 60fps or 120fps and follow each ball with your eyes. The effect becomes even more stark if you turn off motion blur, which, in many games, is on by default.

For chaotic games with a lot of motion, those extra frames can be a huge advantage, even if they're not exactly perfect. If you were to closely examine the footage frame by frame, you might see artifacts, but they should be less noticeable while actually playing. At least, that's how it's supposed to work in theory.

What are the downsides of frame generation?

In practice, how well this technology works can vary considerably on a per-game basis, as well as with the power of your machine. For example, going from 30fps to 60fps with frame generation can look jankier than going from 60fps to 120fps. That's due, at least in part, to the fact that at lower frame rates there's more time between the reference frames, which means more guesswork for the generated frames. That leads to more noise and artifacts.

Whether those artifacts will bother you is also highly subjective. For example, if you're swinging through the city in Spider-Man 2 and the trees in the background look a little stranger than they should, would you even notice? On the other hand, in slower, atmospheric games like Lake, where graphical detail and overall design matter more to the vibe, ghosting and smearing can feel more pronounced.


It's also worth noting that artifacts aren't necessarily inherent to all frame generation. For a start, better input frames can lead to better generated frames. Nvidia, for example, touts new models behind Super Resolution and Ray Reconstruction (a whole separate AI technology for improving ray tracing results that we simply don't have time to get into here) to improve the frames that get passed to the frame generation pipeline.

You can think of it a bit like a giant, complicated game of telephone. The only way to get the most accurate, detailed frames out of your game is to render them directly. The more steps you add to extrapolate extra pixels and frames, the more room there is for errors. That said, our tools are steadily getting better at minimizing those errors. So it's up to you to decide whether more frames or more detail is worth the trade-off.

Why frame generation is (probably) bad for competitive games

There's one major exception to all of this, and that's competitive games. If you play online games like Overwatch 2, Marvel Rivals, or Fortnite, then smooth motion isn't necessarily your main concern. You're likely more concerned about latency, meaning the delay between the moment you react to something and the moment your game registers your reaction.

Frame generation complicates latency problems because it requires creating frames out of order. Remember our earlier example: the GPU renders frame one, then frame three, and only then does the frame generator come up with what frame two should be. In that scenario, the game can't actually show you frame two until it has figured out what frame three should look like.

Now, in most cases, that's usually not a problem. At 120 frames per second, each frame is only on screen for about 8.33 milliseconds. Your brain can barely register that delay, so it's unlikely to cause a major issue. In fact, human reaction time is typically measured in the hundreds of milliseconds. For some entirely unscientific evidence, go ahead and try this reaction time test. Let me know when you get below 10 milliseconds. I'll wait.
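If you want to check that math yourself, it's just one second divided by the frame rate, compared against a rough ballpark figure for human reaction time (the 200ms below is an assumption for illustration, not a measured value):

```python
# Where the 8.33ms figure comes from: one second spread across the frames shown.
def frame_time_ms(fps: int) -> float:
    return 1000 / fps

for fps in (30, 60, 120, 240):
    print(f"{fps}fps -> {frame_time_ms(fps):.2f} ms per frame")

human_reaction_ms = 200  # rough assumed ballpark; varies a lot by person and task
ratio = human_reaction_ms / frame_time_ms(120)
print(f"A single 120fps frame lasts about 1/{ratio:.0f} of a typical reaction time.")
```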

However, it does become a problem in competitive games, because frame delays aren't the only latency you're dealing with. There's latency between your keyboard and your computer, between your computer and the server, and between the server and the other players.

Most of these individual links in the chain can be fairly small, but they all have to be synchronized somewhere. That somewhere is the game's tick rate: how often the game you're playing updates on the server. Overwatch 2, for example, runs at a tick rate of 64. That means that every second, the server updates what's happening in the game 64 times, or once every 15.63 milliseconds.
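That 15.63 milliseconds is just one second divided across 64 ticks, and it's on the same order of magnitude as a single frame at 120fps, which is why a frame held back by frame generation can plausibly straddle a server tick:

```python
# Tick-rate math, compared with the frame time from the previous sketch.
tick_rate = 64                       # Overwatch 2's server tick rate
tick_interval_ms = 1000 / tick_rate  # ~15.63 ms between server updates
frame_time_120fps_ms = 1000 / 120    # ~8.33 ms per frame at 120fps

print(f"Server tick interval: {tick_interval_ms:.2f} ms")
print(f"120fps frame time:    {frame_time_120fps_ms:.2f} ms")
```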

It's just enough that if, say, your game is still showing you our hypothetical frame two, where the enemy Cassidy is in your reticle, but hasn't yet updated to frame three, where he isn't, the server could have ticked before your screen updated. That could mean your shot registers as a miss even though it felt like it should have hit. It's also the kind of problem that can only get worse with multi-frame generation.

There are ways to mitigate this hit (Nvidia's Reflex technology, for example, reduces input latency in other areas), but it's not something that can be avoided entirely. If you're playing competitive online games, you're better off lowering your graphics settings to get a higher frame rate, rather than relying on frame generation, for now.

