
Intel's XeSS will get Frame Generation

winjer

Member

We introduce ExtraSS, a novel framework that combines spatial super sampling and frame extrapolation to enhance real-time rendering performance. By integrating these techniques, our approach achieves a balance between performance and quality, generating temporally stable and high-quality, high-resolution results.

Leveraging lightweight modules for warping and the ExtraSSNet for refinement, we exploit spatial-temporal information, improve rendering sharpness, handle moving shadings accurately, and generate temporally stable results. Computational costs are significantly reduced compared to traditional rendering methods, enabling higher frame rates and alias-free, high-resolution results.

Evaluation using Unreal Engine demonstrates the benefits of our framework over conventional individual spatial or temporal super sampling methods, delivering improved rendering speed and visual quality. With its ability to generate temporally stable high-quality results, our framework creates new possibilities for real-time rendering applications, advancing the boundaries of performance and photo-realistic rendering in various domains.


Frame extrapolation is another way to increase the framerate by using only the information from prior frames. Li et al. [2022] proposed an optical-flow-based method that predicts flow from previous flows and then warps the current frame to the next frame. ExtraNet [Guo et al. 2021] uses occlusion motion vectors with neural networks to handle disoccluded areas and shading changes with G-buffer information. These methods fail when the scene becomes complex and generate artifacts in disoccluded areas. Furthermore, they require higher-resolution inputs since they only generate new frames. We are the first to propose a joint framework that solves spatial super sampling and frame extrapolation together while staying efficient and high quality.
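To give a rough idea of what motion-vector-based forward warping looks like in practice, here is a minimal illustrative sketch (not ExtraSS itself; the function name and array layout are assumptions): the previous frame is pushed forward along per-pixel motion vectors, and the holes left behind are exactly the disoccluded areas that a refinement network like ExtraSSNet would have to fill.

```python
import numpy as np

def extrapolate_frame(prev_frame: np.ndarray, motion_vectors: np.ndarray) -> np.ndarray:
    """Toy forward-warp for frame extrapolation.

    prev_frame: (H, W, 3) color of the last rendered frame.
    motion_vectors: (H, W, 2) per-pixel screen-space motion in pixels,
    extrapolated toward the next frame (assumed given by the renderer).
    """
    h, w, _ = prev_frame.shape
    out = np.zeros_like(prev_frame)
    covered = np.zeros((h, w), dtype=bool)
    ys, xs = np.mgrid[0:h, 0:w]
    # Push each source pixel along its motion vector (nearest-pixel splat).
    tx = np.clip(np.rint(xs + motion_vectors[..., 0]).astype(int), 0, w - 1)
    ty = np.clip(np.rint(ys + motion_vectors[..., 1]).astype(int), 0, h - 1)
    out[ty, tx] = prev_frame[ys, xs]
    covered[ty, tx] = True
    # Uncovered pixels are disocclusions: no history exists for them, which is
    # where extrapolation methods tend to produce artifacts without refinement.
    out[~covered] = 0.0
    return out

if __name__ == "__main__":
    prev = np.random.rand(4, 4, 3).astype(np.float32)
    mv = np.ones((4, 4, 2), dtype=np.float32)  # everything moves one pixel down/right
    print(extrapolate_frame(prev, mv).shape)
```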

Interpolation vs. Extrapolation

Frame interpolation and extrapolation are the two key methods of temporal super sampling. Frame interpolation usually generates better results but also adds latency when generating the frames. Note that there are existing methods such as NVIDIA Reflex [NVIDIA 2020] that decrease latency by using a better scheduler for the inputs, but they cannot avoid the latency introduced by frame interpolation and are orthogonal to the interpolation and extrapolation methods.

Interpolation methods still have higher latency even with those techniques. Frame extrapolation has less latency but has difficulty handling disoccluded areas because it lacks information from the input frames. Our method proposes a new warping approach with a lightweight flow model to extrapolate frames with better quality than previous frame-generation methods and less latency than interpolation-based methods.
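To make the latency argument concrete, here is a tiny, purely illustrative timing model (the numbers and names are made up, not from the paper): an interpolated frame between N and N+1 cannot be shown until N+1 has been rendered, so real frames are held back by roughly one render interval, while an extrapolated frame is predicted only from frames up to N and adds no such wait.

```python
# Toy timing model: constant render time per real frame, frame generation itself
# treated as free. Only meant to show where interpolation's extra latency comes from.
RENDER_MS = 33.3  # hypothetical native render time (~30 fps)

def real_frame_present_times(num_frames: int, mode: str) -> list[float]:
    """Presentation timestamps (ms) of the real frames under each scheme."""
    times = []
    for n in range(1, num_frames + 1):
        rendered_at = n * RENDER_MS  # real frame n finishes rendering here
        if mode == "interpolation":
            # The in-between frame needs frame n+1 too, so frame n is shown late.
            times.append(rendered_at + RENDER_MS)
        else:
            # "extrapolation": frame n is shown immediately; the generated frame
            # after it is predicted from frames <= n, trading added latency for
            # possible artifacts in disoccluded regions.
            times.append(rendered_at)
    return times

if __name__ == "__main__":
    print("interpolation:", real_frame_present_times(3, "interpolation"))
    print("extrapolation:", real_frame_present_times(3, "extrapolation"))
```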

There is a lot more info in the article, so it's too big to post it all.
But one part of the info is that Intel also demoed the tech running on an AMD CPU with an Nvidia GPU.
So this probably means this tech will be open for all GPU vendors.

In some render-time performance tests, Intel showcased a system running an AMD Ryzen 9 5950X CPU with an NVIDIA GeForce RTX 3090 GPU. The RTX 3090 was also running the same Intel XeSS Frame-Generation (Extrapolation) method, which means this would be the second frame-gen technology besides AMD's FSR 3 to support all vendors, and once again shows Intel's commitment to being open-source friendly.
 

Crayon

Member
Nice. I'm still not totally sold on frame gen but I see the promise in it. I cap all my games and if you do that, you see the headroom it takes to keep those 1% lows in a reasonable spot. Most games actually have to average well over 60 to sustain that locked 60 when you look into a complex scene or a bunch of explosions go off. Letting the native fps average around 60 and having fake frames inserted to fill in the dips sounds great. I don't buy expensive GPUs, so that would make a big difference.
 

hinch7

Member
If it doesn't introduce horrible artifacts and generally gets results similar to the interpolation-based frame gen from AMD/Nvidia, this would be pretty awesome to have. Less latency, no need to rely on vendor lag-reduction tech (Reflex, Anti-Lag+), and being open source... plus being hardware agnostic and having better image quality than FSR. Yes, please.
 

Dorfdad

Gold Member
Having used it on my integrated NUC, I'm excited for the glorious 30fps!!

They need to get to parity with AMD before adding more features, imho.
 
Probably will be better than whatever AMD has right now. Can't believe that not only is Nvidia far ahead of AMD, but even Intel, who came later, has provided better scaling technology than AMD.

I love what Intel is doing here. I hope they launch a big flagship GPU next go around. Light a fire under leather jacket man’s ass.
I think the leather guy is more into AI now than even consumer hardware (though of course they will sell tons of it at high prices).
 

BennyBlanco

aka IMurRIVAL69
I think the leather guy is more into AI now than even consumer hardware (though of course they will sell tons of it at high prices).

Definitely, but they will still want to keep their stranglehold on the marketshare they have. Time will tell how much they care; they aren't gonna stop making GPUs.
 
Definitely, but they will still want to keep their stranglehold on the marketshare they have. Time will tell how much they care; they aren't gonna stop making GPUs.
True, but just like with Microsoft and AI, where they even buy Oracle's cloud and delay LinkedIn's Azure transition, they will have to find a balance between consumer sales and AI GPUs, as there is a limit on production capacity. And now more and more companies are entering the arms race too. If they invest more in AI, they might create a scarcity of consumer GPUs, but if they invest less in AI, then somebody else will grab some of that market, and so on.
 

winjer

Member
I'm sick of this new technology...

Optimize the game/engine or whatever... all this power for nothing makes me sick.

But the issue is not the upscaling tech, the issue is lazy devs and publishers.
For a while, these upscalers were just a nice extra to have in games. But most devs now are using them as a crutch to forego optimization.
This is why we can't have nice things....
 
Very, very nice, but FG itself is bad IMO.
Picture quality is awful no matter what you do, and it adds latency by design. We are going from crisp, lossless video output to compressed YouTube-video picture quality, with glitches and more latency.
 

Tarnished

Member
Great, I hope they nail it and launch a high-end GPU that can compete with Nvidia; it's the only way we'll get a decent price cut.
 