
Game Graphics Technology | 64bit, procedural, high-fidelity debating

dr_rus

Member
The slides are talking about the performance improvements that DX12's marketing promised.
Not that it would be easy, or do you mean more like DX12 marketing on games?

Because a lot of games, if I'm not mistaken, just did D3D11-to-D3D12 ports.

The slides are actually talking about the lack of performance improvements, and provide figures like +10% in GPU-limited scenarios, which are way more grounded in reality and practice than the outlandish claims of +XXX% gains we had from MS and h/w vendors a couple of years ago. And +10% in a GPU-limited scenario is pretty regular between IHV driver updates, for example.

A lot of games will be DX11-to-DX12 ports for a long time to come, since there are still a lot of people who don't use Windows 10 and don't even plan to. What's more important, it's absolutely not a fact at the moment that building a pure DX12 renderer will produce a faster renderer than "porting" an existing DX11 one to DX12 would. The potential is there, but the underlying hypothesis that DX11 is underutilizing current GPU h/w seems more and more shaky with each new DX12 release. Basically, unless your DX11 renderer is heavily CPU limited right now (is AotS the only example at the moment? and even there it's mostly true for AMD h/w only), you won't get much from DX12, no matter if it's a "port" or a "built from scratch for DX12" effort.
 

dr_rus

Member
Some Gameworks updates announced yesterday alongside the 1080 Ti seem to have been omitted from the livestream: GDC 2017: NVIDIA Gameworks goes DX12 and more!

A. All Gameworks libraries are DX12 in addition to DX11 now, including Flex, Flow, Hairworks and Turf Effects:

[slide images]


B. More GW libraries are being open sourced:

[slide image]
 

cripterion

Member
So from that pic, NVIDIA Turf Effects in GR: Wildlands?

Oddly enough, none of the betas had any driver updates from them. Hopefully it comes out polished and running well on the flagship cards.
 

dr_rus

Member
FLEX 1.1 now supports DX11 and DX12 solvers in addition to the old CUDA one: NVIDIA FLEX SDK 1.1 is available for download
Add support for DirectX, in addition to CUDA there is now a cross-platform DirectX 11 and 12 version of the Flex libraries that Windows applications can link against
So it should in theory run GPU accelerated on any DX11/DX12 capable h/w now.

For those who don't know what FLEX is: it's a GPU-accelerated physics solver which isn't part of PhysX per se but a newer, separate effort. I think it was last used in Fallout 4 when they added weapon debris?
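For context, picking the new D3D12 compute backend at initialization looks roughly like the sketch below. This is a reconstruction from memory of the FLEX 1.1 headers, not verified against the shipping SDK; the NvFlexInitDesc fields and the eNvFlex* enum names in particular are assumptions that should be double-checked.

```cpp
#include <NvFlex.h>  // FLEX 1.1 SDK header

// Sketch only: initialise FLEX with the new D3D12 compute backend instead
// of CUDA. Struct fields and enum values are quoted from memory of the
// 1.1 headers and may not match the shipping SDK exactly.
NvFlexLibrary* initFlexD3D12()
{
    NvFlexInitDesc desc = {};
    desc.computeType = eNvFlexD3D12;  // or eNvFlexD3D11 / eNvFlexCUDA

    // NV_FLEX_VERSION guards against a header/DLL version mismatch
    return NvFlexInit(NV_FLEX_VERSION, /*errorFunc=*/nullptr, &desc);
}
```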

Upd: Unity's future rendering presentation: http://aras-p.info/texts/files/2017_GDC_UnityScriptableRenderPipeline.pdf
 

dno_1966

Member
Apologies if this isn't the right place, but does anyone have any suggestions for adding flowing water (for streams) to something we're working on? The best we've seen is The Last of Us; is there anything better?
 

nOoblet16

Member
Ok so I have a question regarding shader-based foliage interaction as seen in UC4. It looked amazing in that game, where you could have the tip of a character interact with the tip of a piece of foliage and it'd move correctly. I honestly thought it was physics based.

So how does it work? I cannot understand how shaders can simulate physical interaction with actual polygons. When it comes to water we can simply use shaders like parallax mapping to give the illusion of depth-based interaction, but the depth itself doesn't actually exist... it's an illusion. The polygons themselves that make up the water body are unaffected by this interaction.

So how is it that when it comes to foliage we can use shaders to actually make polygons move? And what are the drawbacks of using shader-based interaction over actual physics?

Lastly, is Turf Effects just that, i.e. shader-based foliage interaction, or is it something else?
 
Ok so I have a question regarding shader-based foliage interaction as seen in UC4. [...]

Sounds like something you would do in the geometry shader part of the pipeline.
 

Jux

Member
Sounds like something you would do in the geometry shader part of the pipeline.

Nope, you don't need a geometry shader to do that. Simple vertex shader animation will do the trick. Nobody uses geometry shaders anyway; performance is pretty crappy for anything but the simplest cases.
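For illustration, the kind of per-vertex animation meant here can be as simple as the sketch below. The shader logic is written as plain C++ for readability (in practice it runs per vertex in HLSL/GLSL), and all parameter names are invented for the example.

```cpp
#include <cmath>

// Illustrative vertex-shader-style sway animation, written as plain C++.
// All names and constants here are invented for the example.
struct Vec3 { float x, y, z; };

// pos:        object-space vertex position
// swayWeight: per-vertex weight (0 at the root, up to 1 at the tip),
//             typically stored in a vertex colour channel
// time:       elapsed time in seconds, passed as a shader constant
Vec3 swayVertex(Vec3 pos, float swayWeight, float time)
{
    // Phase offset derived from position so neighbouring plants
    // don't move in lockstep.
    float phase  = pos.x * 0.7f + pos.z * 1.3f;
    float offset = std::sin(time * 2.0f + phase) * 0.1f * swayWeight;
    return { pos.x + offset, pos.y, pos.z + offset * 0.5f };
}
```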
 

Laiza

Member
The checkerboarding technique is pretty cool. Makes me wonder if NVidia and AMD can make that a generic part of their drivers instead of (or in addition to) regular downsampling.

I'd certainly appreciate the performance boost. Shaving off 5ms on 1800p render time is no joke.
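Conceptually (a rough sketch, not any shipping engine's method), a checkerboard renderer rasterizes only half the pixels each frame in an alternating checker pattern, and a resolve pass fills the gaps from the previous frame's reprojected result, falling back to neighbours where the history is invalid:

```cpp
#include <cstdint>
#include <vector>

// Illustrative checkerboard resolve. Buffer layout and the validity mask
// are assumptions for the example; real implementations reproject history
// with motion vectors and reject it using depth/ID comparisons.
struct Color { float r, g, b; };

static Color average(const Color& a, const Color& b) {
    return { (a.r + b.r) * 0.5f, (a.g + b.g) * 0.5f, (a.b + b.b) * 0.5f };
}

// curHalf:   full-size buffer where only this frame's checker pixels are valid
// prevFull:  last frame's fully resolved image (already reprojected)
// prevValid: per-pixel flag from motion/ID rejection
void resolveCheckerboard(const std::vector<Color>& curHalf,
                         const std::vector<Color>& prevFull,
                         const std::vector<uint8_t>& prevValid,
                         std::vector<Color>& outFull,
                         int width, int height, int frameParity)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int idx = y * width + x;
            bool renderedThisFrame = ((x + y) & 1) == frameParity;
            if (renderedThisFrame) {
                outFull[idx] = curHalf[idx];      // fresh sample
            } else if (prevValid[idx]) {
                outFull[idx] = prevFull[idx];     // reuse reprojected history
            } else {
                // History rejected: both horizontal neighbours were rendered
                // this frame (checker pattern), so average them instead.
                int xl = (x > 0) ? x - 1 : x + 1;
                int xr = (x < width - 1) ? x + 1 : x - 1;
                outFull[idx] = average(curHalf[y * width + xl],
                                       curHalf[y * width + xr]);
            }
        }
    }
}
```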
 

RoboPlato

I'd be in the dick
The checkerboarding technique is pretty cool. [...] Shaving off 5ms on 1800p render time is no joke.
Mass Effect is 13ms(!) faster using checkerboard. Holy shit.

Curious to see if Guerrilla is going to do a presentation on their CB implementation. Being able to get a full 2160c with improved textures and AF is really impressive, and shows great management of bandwidth.
 

Laiza

Member
Yeah, wow. Hitting that kind of performance with those kinds of results... Hope we can see something like that on PC sometime.

This slide is pretty informative:
Interesting to see how many things render at half-resolution. A lot of them are pretty typical and expected (HBAO, SSS, etc.), and most games already do the same thing where they can get away with it.

What's most interesting is all the stuff that actually renders at 3200x1800. Seems to me like this is something that really needs to be handled on a game-by-game basis, rather than something that can easily be applied generically across all games. But what do I know? I'm no graphics programmer.
 

RoboPlato

I'd be in the dick
Yeah, wow. Hitting that kind of performance with those kinds of results... [...]

Interesting to see how many things render at half-resolution. [...] What's most interesting is all the stuff that actually renders at 3200x1800. [...]
I assume the half-res stuff doesn't play well with a sparse render field so it has to be condensed.

Surprised at how much is natively rendered at 1800p too.
 

gamerMan

Member
Ok so I have a question regarding shader-based foliage interaction as seen in UC4. [...] And what are the drawbacks of using shader-based interaction over actual physics?

The two shader stages that matter here are vertex and pixel shaders. Vertex shaders give you the ability to transform vertices. The process of animating foliage in the vertex shader is called touch bending. You can control how much the vertices bend by using detail bending to make portions of the foliage bend more or less.

Then you animate the foliage by offsetting the vertices of the quad in the vertex shader when a collision is detected. You can implement it similarly to a wind shader. Here is what touch bending looks like: https://www.youtube.com/watch?v=rGZTLnxofBw
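A minimal sketch of that touch-bending offset, with the shader logic again written as plain C++ for clarity; the toucher position, radius, and weight names are all invented for the example rather than taken from any particular engine.

```cpp
#include <cmath>

// Illustrative touch-bending displacement; in a game this logic runs
// per vertex in the vertex shader (HLSL/GLSL). All names and parameters
// here are invented for the example.
struct Vec3 { float x, y, z; };

// vertexPos:   object-space position of the foliage vertex
// bendWeight:  per-vertex "detail bending" weight (0 at the root so it
//              stays planted, up to 1 at the tip), often painted into
//              a vertex colour channel
// toucherPos:  position of the colliding object (a character limb, etc.),
//              passed in as a shader constant each frame
// touchRadius: influence radius around the toucher
Vec3 touchBend(Vec3 vertexPos, float bendWeight, Vec3 toucherPos, float touchRadius)
{
    float dx = vertexPos.x - toucherPos.x;
    float dy = vertexPos.y - toucherPos.y;
    float dz = vertexPos.z - toucherPos.z;
    float d  = std::sqrt(dx*dx + dy*dy + dz*dz);
    if (d >= touchRadius || d <= 0.0f)
        return vertexPos;                     // outside the influence: no bend

    float falloff = 1.0f - d / touchRadius;   // 1 at contact, 0 at the edge
    float push    = falloff * bendWeight / d; // normalise direction and scale
    return { vertexPos.x + dx * push,
             vertexPos.y + dy * push,
             vertexPos.z + dz * push };
}
```

Note that the mesh is only displaced for rendering; the collision query against the toucher is the only "physics" involved, which is why it's so much cheaper than a real simulation.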
 

tuxfool

Banned
The checkerboarding technique is pretty cool. [...] Shaving off 5ms on 1800p render time is no joke.

Technically there is nothing preventing it on PC, though I doubt it could work at a generic driver level; you need way too much information that only exists at the engine level. Siege offers its own take on this; it isn't as robust, but it works.
 
Technically there is nothing preventing it on PC, though I doubt it could work at a generic driver level [...]

Also, it doesn't work as well without the hardware buffers Sony added to Polaris. You also need Vulkan or DX12.
 

Laiza

Member
Just saw this presentation of character creation middleware that's being used for Star Citizen, and I gotta say, I'm bloody well impressed. The use of scanned models that can be interpolated between is quite a good solution for solving the problem of players creating absolute monstrosities with a more unrestrained morph system, and the quality level is extremely high.

Just look at the facial expressions when he plays the audio file! Puts Bioware's work to shame (not that that's very difficult, mind you, but it certainly solidifies my disappointment at the work they've done with Andromeda).

Pretty cool stuff. Looking forward to seeing it implemented into Star Citizen proper.
 
Just saw this presentation of character creation middleware that's being used for Star Citizen, and I gotta say, I'm bloody well impressed. [...]

It's certainly an impressive tech demo. Let's see it in the game first, and as a character creator, before we use it to poo-poo ME:A, though.

Also, mind you, this is a middleware solution. It's hardly a surprise that they're doing a better job of it when that's the only thing they do.
 
Yeah, wow. Hitting that kind of performance with those kinds of results... [...]

I assume the half-res stuff doesn't play well with a sparse render field so it has to be condensed.

Surprised at how much is natively rendered at 1800p too.

Well, technically that slide does not necessarily mean that the effects actually render internally at the resolution listed there. Rather, that is the "step" of the rendering pipeline, and that step's output resolution, where they are calculated/rendered. Something like motion blur or DOF could still use a different internal resolution or a lower sampling rate relative to the output resolution of that step. In fact, something like TAA does, as is mentioned in the slides. The CB resolve + TAA in ME:A is nearly twice as expensive as the CB resolve + TAA in BF1, in spite of them running at the same resolution step.
Just saw this presentation of character creation middleware that's being used for Star Citizen [...]

It's certainly an impressive tech demo. Let's see it in the game first [...]
Yeah, I think it definitely looks great, and I look forward to the implementation in SC.
Here is an FXGuide link all about it.
Here is a nice collection of links on the subject:
http://momentsingraphics.de/?p=127

Also a nice one:
http://loopit.dk/banding_in_games.pdf

BTW, thanks for linking these :D
 

wxxd

Neo Member

RoboPlato

I'd be in the dick
Does anyone have any links with decent info on temporal injection? I find it to be a very, very impressive reconstruction technique and would love to learn more about it.
 
Sebbi (Sebastian Aaltonen) of RedLynx (Trials) fame just had the game he's been working on announced. It seems to have a very interesting rendering makeup, according to him:
Claybook
It uses a heavily modified version of UE4.
This is the game I have been working on for the last 1.5 years. SDF based geometry, GPU based physics simulation and ray traced visuals.
Sculpted meshes converted to SDF "brushes". We combine them at runtime. We also edit the SDF every frame (based on character interaction).
Yes. Our ray tracer outputs g-buffer, shadow masks and AO. UE does post processing and direct lighting. Plus lots of async compute physics.
We do lots of GPGPU work (physics, world & character modification, etc). Haven't had time to investigate whether a port could be possible.
The background scene (outside the play area) is made out of polygons...
https://giant.gfycat.com/MarriedArtisticAzurevasesponge.webm
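For readers unfamiliar with SDF ray tracing, the core idea is sphere tracing: the signed distance field tells you how far you can safely step along a ray without crossing a surface. A minimal sketch in plain C++ against a single analytic sphere (Claybook composites many sculpted SDF "brushes", so its real tracer is far more involved):

```cpp
#include <cmath>

// Minimal sphere tracing against a signed distance field (SDF).
// A single analytic sphere stands in for a real composited scene.
struct Vec3 { float x, y, z; };
static Vec3  add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  mul(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
static float len(Vec3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// Signed distance to a unit sphere at the origin:
// negative inside, zero on the surface, positive outside.
static float sdfScene(Vec3 p) { return len(p) - 1.0f; }

// March along the ray; the SDF value at a point is a safe step size,
// so we can always advance by it without skipping past the surface.
bool sphereTrace(Vec3 origin, Vec3 dir, float maxDist, Vec3* hit)
{
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxDist; ++i) {
        Vec3 p = add(origin, mul(dir, t));
        float d = sdfScene(p);
        if (d < 1e-4f) { *hit = p; return true; }  // close enough: surface hit
        t += d;
    }
    return false;  // ray escaped the scene or ran out of iterations
}
```

The same loop, run from shading points toward lights or over a hemisphere, is what gives the "shadow masks and AO" outputs sebbi mentions.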
 

Newboi

Member

tuxfool

Banned
Absolutely amazing! I'm curious though: what separates their ray-tracing solution from the distance field shadows and ambient occlusion techniques that UE4 already employs?

I suspect this technique is very similar to how Media Molecule is using SDF for Dreams.
 