
Game Graphics Technology | 64bit, procedural, high-fidelity debating

Parsnip

Member
1) Scaling different portions of the screen at different pixel densities. For instance, take, say, the outside edges of the screen and render them at a lower pixel density than the centre, then upscale them. Essentially, the outside edges would be blurrier than the centre because fewer pixels are natively rendered there, but human eyes tend to focus much less on the edges than the centre.

This sounds like foveated rendering, I think. The video seems like an extreme example.
I could see it being a pretty handy option for traditional gaming, not just VR.
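A minimal sketch of the idea in Python (the ring radii and scale factors are made-up illustrations, not from any shipping implementation):

import math

def render_scale(x, y, width, height):
    """Fraction of native pixel density to render at, based on
    distance from the screen centre (eccentricity)."""
    cx, cy = width / 2, height / 2
    ecc = math.hypot(x - cx, y - cy) / math.hypot(cx, cy)  # 0 at centre, ~1 at corners
    if ecc < 0.3:    # central region: full native resolution
        return 1.0
    elif ecc < 0.6:  # mid periphery: half resolution per axis
        return 0.5
    else:            # far periphery: quarter resolution per axis
        return 0.25

print(render_scale(960, 540, 1920, 1080))  # centre: 1.0
print(render_scale(0, 0, 1920, 1080))      # corner: 0.25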
 

missile

Member
There is a slight misconception about out-of-fovea vision in computer graphics, I guess. People think vision just gets blurred outside the fovea (like we see in the demo above), but that's not the full story. For example: look straight ahead, put one of your hands way to the right or left, and spread your fingers. You can make out the fingers pretty well; they aren't all that blurred. Hence, out-of-fovea rendering needs, in addition to some blurring, an edge detector to enhance some of the edges.
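Something along these lines, maybe (a rough sketch in Python; blur_sigma and edge_gain are arbitrary illustrative values):

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def peripheral_filter(img, blur_sigma=3.0, edge_gain=0.5):
    """Blur the image, then add back the gradient magnitude so strong
    contours stay legible, mimicking peripheral vision."""
    blurred = gaussian_filter(img, sigma=blur_sigma)
    edges = np.hypot(sobel(img, axis=0), sobel(img, axis=1))  # simple edge detector
    return np.clip(blurred + edge_gain * edges, 0.0, 1.0)

frame = np.random.rand(256, 256)  # stand-in for a grayscale frame
out = peripheral_filter(frame)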
 

Javin98

Banned
So I know I brought up Hairworks and similar implementations last time, but I'm still looking for the answer to one thing, with little success. I've been reading the Nvidia tweak guides, and every time Hairworks (or Purehair, in the case of RotTR) is mentioned, it just says "tens of thousands of hair strands", sometimes even tessellated hair strands. What I'm wondering is: how many polygons are actually used in these hair implementations? I've been looking around the Internet for answers for at least two weeks now, but no dice.
 

pottuvoi

Banned
So I know I brought up Hairworks and similar implementations last time, but I'm still looking for the answer to one thing, with little success. I've been reading the Nvidia tweak guides, and every time Hairworks (or Purehair, in the case of RotTR) is mentioned, it just says "tens of thousands of hair strands", sometimes even tessellated hair strands. What I'm wondering is: how many polygons are actually used in these hair implementations? I've been looking around the Internet for answers for at least two weeks now, but no dice.
If the tessellation is done by the standard DX11 pipeline and the source is a quad of 2 triangles, the maximum poly count per hair is 128. (DX tessellation has a maximum amplification factor of only 64, and you need 2 triangles for a camera-facing quad.)
Of course the hair can be pre-tessellated, so the maximum can be higher.

If I have understood correctly, both approaches scale tessellation with distance, as well as the number of strands. (They increase the width of the hairs to keep the volume similar.)

Due to that scaling, saying 200,000 strands for a creature is feasible.
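A quick sanity check on those numbers in Python (the 64x cap and 2-triangle quads are from the post above; the 30,000-strand count is just an example):

MAX_TESS_FACTOR = 64  # DX11 tessellation amplification cap
TRIS_PER_QUAD = 2     # each segment expands to a camera-facing quad

def polys_per_strand(tess_factor):
    """Triangles per hair strand at a given tessellation factor."""
    return min(tess_factor, MAX_TESS_FACTOR) * TRIS_PER_QUAD

print(polys_per_strand(64))           # 128, the per-strand maximum
print(30_000 * polys_per_strand(64))  # 3,840,000 triangles, worst case for 30k strands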
 

Javin98

Banned
If the tessellation is done by the standard DX11 pipeline and the source is a quad of 2 triangles, the maximum poly count per hair is 128. (DX tessellation has a maximum amplification factor of only 64, and you need 2 triangles for a camera-facing quad.)
Of course the hair can be pre-tessellated, so the maximum can be higher.

If I have understood correctly, both approaches scale tessellation with distance, as well as the number of strands. (They increase the width of the hairs to keep the volume similar.)

Due to that scaling, saying 200,000 strands for a creature is feasible.
So assuming that the tessellated hair strands are 128 polygons each, and a character model like Lara Croft has 30,000 individual strands of hair, doesn't that make the poly count 3,840,000 for the hair alone? Doesn't that seem insanely high?

I think I've definitely misunderstood something somewhere. Can you please elaborate a bit more? :p
 
So assuming that the tessellated hair strands are 128 polygons each, and a character model like Lara Croft has 30,000 individual strands of hair, doesn't that make the poly count 3,840,000 for the hair alone? Doesn't that seem insanely high?

I think I've definitely misunderstood something somewhere. Can you please elaborate a bit more? :p

Does Purehair even use tessellation? I think it's unlikely. If it does, it's definitely at a very low level.
 

Javin98

Banned
Does Purehair even use tessellation? I think it's unlikely. If it does, it's definitely at a very low level.
Hmm, you're probably right that it doesn't. How about The Witcher 3, then? I recall reading that Geralt's beard and hair use tessellated hair strands. Similarly, the hair strand count would be tens of thousands, and if tessellated to the same degree, that would be millions of polygons as well. Am I getting something wrong here?
 

Frozone

Member
I think the cost is in the shaders, not the geometry throughput. That's why all this hair doesn't look good: if you can't self-shadow the hair, it's not going to look right (even with a diffuse lobe).
 
Hmm, you're probably right that it doesn't. How about The Witcher 3, then? I recall reading that Geralt's beard and hair use tessellated hair strands. Similarly, the hair strand count would be tens of thousands, and if tessellated to the same degree, that would be millions of polygons as well. Am I getting something wrong here?

Hairworks uses isoline tessellation. Not a good solution for today's GPUs. Purehair is compute-based and much more efficient/better-looking.
 

Javin98

Banned
I think the cost is in the shaders, not the geometry throughput. That's why all this hair doesn't look good: if you can't self-shadow the hair, it's not going to look right (even with a diffuse lobe).

Hairworks uses isoline tessellation. Not a good solution for today's GPUs. Purehair is compute-based and much more efficient/better-looking.
Now, if you guys don't mind, I'm wondering about the raw poly count only. I'm not talking about how efficient or demanding a solution is. What's been killing me for the last two weeks is the polygon count of these hair implementations.

I believe that in FFXV the hair is composed of 20K polygons, and it uses a mix of polygon strips and hair strands.
 
Now, if you guys don't mind, I'm wondering about the raw poly count only. I'm not talking about how efficient or demanding a solution is. What's been killing me for the last two weeks is the polygon count of these hair implementations.

I believe that in FFXV the hair is composed of 20K polygons, and it uses a mix of polygon strips and hair strands.

Hairworks results in micro-polygons, which is bad even for Nvidia GPUs. Don't know the exact numbers, though.
 

pottuvoi

Banned
So assuming that the tessellated hair strands are 128 polygons each, and a character model like Lara Croft has 30,000 individual strands of hair, doesn't that make the poly count 3,840,000 for the hair alone? Doesn't that seem insanely high?

I think I've definitely misunderstood something somewhere. Can you please elaborate a bit more? :p
Yes, it's very high. (For the Lara model I'm sure they do not tessellate that high, as it's mostly straight hair.)

Although I'm quite sure that the tessellation is rarely at 64x (at least it shouldn't be at 64x the whole time).
Nvidia hardware is quite good at discarding small polygons that do not contribute to the image; AMD Polaris should be quite good as well.

Here are some tutorials on Hairworks etc. (shows LoD etc.):
http://www.cgmeetup.net/home/nvidia-hairworks-presentation-and-tutorial/
https://developer.nvidia.com/content/hairworks-authoring-considerations

Documentation.
http://docs.nvidia.com/gameworks/content/artisttools/hairworks/product.html

TressFX
http://gpuopen.com/tressfx-3-1/
http://amd-dev.wpengine.netdna-cdn....The-Fast-and-The-Furry-Nicolas-Thibieroz.ppsx

Purehair/TressFX3 in Deus Ex
http://cdn.wccftech.com/wp-content/...-in-Deus-Ex-Universe-Projects-TressFX-3-0.pdf
 

Javin98

Banned
Yes, it's very high. (For the Lara model I'm sure they do not tessellate that high, as it's mostly straight hair.)

Although I'm quite sure that the tessellation is rarely at 64x (at least it shouldn't be at 64x the whole time).
Nvidia hardware is quite good at discarding small polygons that do not contribute to the image; AMD Polaris should be quite good as well.

Here are some tutorials on Hairworks etc. (shows LoD etc.):
http://www.cgmeetup.net/home/nvidia-hairworks-presentation-and-tutorial/
https://developer.nvidia.com/content/hairworks-authoring-considerations

Documentation.
http://docs.nvidia.com/gameworks/content/artisttools/hairworks/product.html
Thanks for the links. I may finally get the answer I seek!

Anyway, do we have any idea roughly how many polygons Lara's hair uses with Purehair? Pardon me if it's in those links; I can't read them right now, I'm in class.

Okay, so assuming the tessellation factor is 4, that would be 4x2 = 8 polygons per hair strand. 30,000 hair strands would then be 240,000 polygons for the hair alone. Even that seems kind of high for XB1 hardware.
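The same arithmetic as the earlier sketch, at the lower factor (the 30,000-strand count is again just the example from above):

def polys_per_strand(tess_factor, cap=64, tris_per_quad=2):
    """Triangles per strand: tessellation factor times the 2-triangle quad."""
    return min(tess_factor, cap) * tris_per_quad

print(polys_per_strand(4))           # 8 triangles per strand
print(30_000 * polys_per_strand(4))  # 240,000 triangles total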
 

dr_rus

Member
Hairworks uses isoline tessellation. Not a good solution for today's GPUs. Purehair is compute-based and much more efficient/better-looking.

Not good for GCN GPUs, you mean; NV GPUs are perfectly fine with it, as subpixel triangles will be discarded. Purehair doesn't use any compute as far as I'm aware (unless you mean for animation/physics); it's purely VS/PS based.
 

MarkV

Member
Adam executable download (size: 3 GB; Windows DX11 only)

It is expected to run at 30 fps, v-synced, at 1440p on a GeForce GTX 980 and an Intel Core i7.

There are two quality settings included: ‘Fantastic’ and ‘Good’. ‘Fantastic’ is the main target, intended for reasonably powerful gaming computers, whereas ‘Good’ is intended for less powerful desktops and gaming laptops. Keep in mind, though, that to preserve the look of the film, there’s no ‘low’ spec for this demo, and as such it might run slowly on older hardware or non-gaming laptops.

You can play it back as a film, pause and rotate the camera to look around (within a restricted area), and you can move light sources in real time.

https://blogs.unity3d.com/2016/11/01/adam-demo-executable-and-assets-released/
 

dr_rus

Member
We are introducing the VK_NVX_device_generated_commands (DGC) Vulkan extension, which allows the GPU to generate the most frequent rendering commands on its own. Vulkan therefore now has functionality similar to DirectX 12's ExecuteIndirect; however, we added the ability to change shaders (pipelines) on the GPU as well. This means that for the first time an open graphics API provides functionality to create compact command buffer streams on the device, avoiding the worst-case state setup of previous indirect drawing methods.
https://developer.nvidia.com/device-generated-commands-vulkan
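As a toy model of what that enables (plain Python, purely illustrative; none of these names are the actual Vulkan API):

from dataclasses import dataclass, field
from typing import List

@dataclass
class GpuCommand:
    pipeline: str    # with DGC the stream can switch pipelines per draw
    draw_args: tuple

@dataclass
class CommandStream:
    commands: List[GpuCommand] = field(default_factory=list)

def gpu_generate_commands(visible_objects):
    """Stand-in for GPU-side generation: emit a compact stream with a
    per-object pipeline instead of binding worst-case state up front."""
    stream = CommandStream()
    for obj in visible_objects:
        stream.commands.append(GpuCommand(obj["shader"], (obj["mesh"],)))
    return stream

stream = gpu_generate_commands([
    {"shader": "opaque_pbr", "mesh": "rock"},
    {"shader": "foliage", "mesh": "tree"},
])
print([c.pipeline for c in stream.commands])  # per-draw pipeline switches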
 

HTupolev

Member
So, if I'm understanding this right, the GI solution in The Division uses the typical (?) precomputed irradiance transfer, but can be considered real-time because it is updated every frame?
Updating the lighting result in real time for cheap GI is the entire point of precomputed radiance transfer.

In The Division, there's essentially an ultra-low-precision representation of the game world that probes use. Using this simplified representation, the game can render an ultra-blurry "cubemap" of sorts at the location of each probe every frame.
These probes then get used to create a lighting volume, which is used when rendering the actual scene.

The simplified world representation can be lit by the sky and sun, by local dynamic lights, and by the probe results from the last frame (this results in light bounce). However, it's only aware of geometry that existed at the time of precomputation; dynamic objects are lit by the results of the probes, but they do not affect the creation of those results. This is the "not real time" aspect of it: the representation of the scene that light bounces through is baked.
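As a drastically simplified toy of that loop (Python; a single scalar stands in for each probe's blurry cubemap, and the constants are made up), the key point is the feedback from last frame's probe results, which is what accumulates light bounce:

from dataclasses import dataclass

@dataclass
class Probe:
    position: tuple
    irradiance: float = 0.0  # toy scalar standing in for a blurry cubemap

def relight_probes(probes, sun_intensity, bounce_factor=0.5):
    """One frame of probe relighting over the baked proxy scene: each
    probe gets direct light plus a share of last frame's probe results."""
    prev = [p.irradiance for p in probes]
    avg_bounce = sum(prev) / len(prev)  # stand-in for sampling other probes
    for p in probes:
        direct = sun_intensity          # stand-in for shading the proxy scene
        p.irradiance = direct + bounce_factor * avg_bounce

probes = [Probe((0, 0, 0)), Probe((1, 0, 0))]
for frame in range(4):
    relight_probes(probes, sun_intensity=1.0)
    print(round(probes[0].irradiance, 3))  # 1.0, 1.5, 1.75, 1.875: bounce converging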
 

Javin98

Banned
Updating the lighting result in real time for cheap GI is the entire point of precomputed radiance transfer.

In The Division, there's essentially an ultra-low-precision representation of the game world that probes use. Using this simplified representation, the game can render an ultra-blurry "cubemap" of sorts at the location of each probe every frame.
These probes then get used to create a lighting volume, which is used when rendering the actual scene.

The simplified world representation can be lit by the sky and sun, by local dynamic lights, and by the probe results from the last frame (this results in light bounce). However, it's only aware of geometry that existed at the time of precomputation; dynamic objects are lit by the results of the probes, but they do not affect the creation of those results. This is the "not real time" aspect of it: the representation of the scene that light bounces through is baked.
Ah, interesting. Thanks for the explanation. There are so many methods of "real-time" GI that I should make a list of all of them, haha.
 
There was a paper / blog entry, I think not too long ago, about using a blue noise filter instead of the usual white noise or dither patterns you see for things like screen-space ambient occlusion, among other diffuse stuff. Anyone have a link to that?

I cannot find it :/
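From what I remember, the appeal is that blue noise pushes the error into high frequencies that filter away more gracefully than white noise. A very hand-wavy approximation in Python (a real generator would use something like void-and-cluster; this just high-pass filters white noise):

import numpy as np
from scipy.ndimage import gaussian_filter

def approximate_blue_noise(shape, sigma=1.5, seed=0):
    """Very rough blue-noise-like mask: white noise minus its own
    low frequencies, leaving mostly high-frequency energy."""
    rng = np.random.default_rng(seed)
    white = rng.random(shape)
    high_pass = white - gaussian_filter(white, sigma)
    high_pass -= high_pass.min()       # rescale to [0, 1) so it can be
    return high_pass / high_pass.max() # used like a dither/offset texture

mask = approximate_blue_noise((64, 64))
# e.g. rotate per-pixel SSAO sample kernels by mask[y, x] * 2*pi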
 

dr_rus

Member
Is DirectX 12 Worth the Trouble?


This is where the marketing bullshit around DX12 probably ends.
 

KKRT00

Member
In the paper they test light trees with 65k lights, but does anyone have any idea how many lights current-gen games average and max out at per frame?

I think some CryEngine games had up to 5k per frame.
I think Frostbite supported even more, but I don't know about real usage.
 

KKRT00

Member
Based on how abysmal most DX12 implementations have been, I'm not surprised in the slightest.

The only good DX12 game I can think of is Gears of War 4, and it was a Microsoft-owned studio that created it, lol.
Because most studios just convert some of their engine code to DX12 and call it a day.

The Star Citizen devs said that they are first eliminating all the legacy code that had been there since DX9, converting everything to DX11, and then converting the core components from there to DX12.
They have been working on it for a year already and still don't have an ETA, because it's a really big change for such a complex engine.
 

Frozone

Member
In the paper they test light trees with 65k lights, but does anyone have any idea how many lights current-gen games average and max out at per frame?

The number of lights could be any number if using deferred rendering (limited only by the framebuffer size). The actual cost of lighting is in creating shadows for each one. I've only seen at most 2 shadow-casting lights within a good radius of the camera. IMO, this is the biggest hurdle to jump for that next step in realism.
 

Jux

Member
The number of lights could be any number if using deferred rendering (limited only by the framebuffer size). The actual cost of lighting is in creating shadows for each one. I've only seen at most 2 shadow-casting lights within a good radius of the camera. IMO, this is the biggest hurdle to jump for that next step in realism.

Not quite. Shadows are an added (potentially big) cost, but the lighting is nowhere near free even in a deferred-lighting engine. The metric usually used is lighting overdraw (how many lights can I afford per pixel?). Depending on how efficient your deferred engine is, acceptable values may vary widely.
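To make the metric concrete (made-up numbers): lighting overdraw here is just total light-volume pixel coverage divided by screen pixels.

def lighting_overdraw(light_pixel_counts, screen_pixels):
    """Average per-pixel light evaluations: total pixels touched by
    all light volumes divided by the screen size."""
    return sum(light_pixel_counts) / screen_pixels

# e.g. 50 lights whose screen-space volumes each cover ~100k pixels
# on a 1080p target:
counts = [100_000] * 50
print(lighting_overdraw(counts, 1920 * 1080))  # ~2.4 lights per pixel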
 

pottuvoi

Banned
The number of lights could be any number if using deferred rendering (limited only by the framebuffer size). The actual cost of lighting is in creating shadows for each one. I've only seen at most 2 shadow-casting lights within a good radius of the camera. IMO, this is the biggest hurdle to jump for that next step in realism.
Framebuffer size doesn't limit the number of lights.
Doom had quite a nice number of shadow-casting lights; it used shadow-map caching to reduce unnecessary work.

I wouldn't be surprised if modern games have up to tens of thousands of lights in a normal frame.

Uncharted had a fun little trick with 64k lights for the bounce light from the flashlight. (Lights distributed in a dither pattern within the frame, TAA to combine.)
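If I understand the trick right, here's a toy version (Python; the 2x2 dither, light values, and buffer sizes are all made up): each pixel evaluates only the one light its dither slot picks this frame, and a per-pixel running average (standing in for TAA) integrates the full set over time.

def shade_frame(frame_index, width, height, lights, history):
    """Evaluate one light per pixel, chosen by a dither pattern that
    rotates each frame; accumulate a running average per pixel."""
    n = len(lights)
    for y in range(height):
        for x in range(width):
            slot = ((x & 1) + 2 * (y & 1) + frame_index) % n  # rotated 2x2 dither
            lit = lights[slot] * n  # this frame's (rescaled) contribution
            history[y][x] += (lit - history[y][x]) / (frame_index + 1)
    return history

lights = [0.1, 0.4, 0.2, 0.3]          # toy per-light intensities, sum = 1.0
history = [[0.0] * 4 for _ in range(4)]
for f in range(64):
    shade_frame(f, 4, 4, lights, history)
print(round(history[0][0], 6))         # 1.0: converged to the full 4-light sum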
 