
Game Graphics Technology | 64bit, procedural, high-fidelity debating

dr_rus

Member
ceYb.png

http://advances.realtimerendering.com/s2016/Filmic SMAA v7.pptx

Pretty interesting that newer versions of SMAA are faster than FXAA already. It saddens me when I see some game using FXAA in 2016 - like NMS, for example - when SMAA 1x is hardly slower and provides much better quality.
 
Haven't read them yet, but here are some SIGGRAPH presentations from developers under Activision. The subdivision one I will get to first.

http://c0de517e.blogspot.fr/2016/08/activision-siggraph-2016.html?m=1
It really saddens me that people still call out COD for using slightly modified id Tech 3 when in reality they push forward-thinking, innovative tech.
Agreed.
Some truly great work there.
The newest iterations of their technology have been rather awesome. Boasting a wide range of modern effects while keeping frametimes generally really low (especially on PC when you have the rig for it, their games run very well in my own experience).

It makes me wonder what they will cook up next and when we may perchance see that character rendering research make its way into production games, because atm the latest COD does not seem to take advantage of everything presented in that original paper.
ceYb.png

http://advances.realtimerendering.com/s2016/Filmic SMAA v7.pptx

Pretty interesting that newer versions of SMAA are faster than FXAA already. It saddens me when I see some game using FXAA in 2016 - like NMS, for example - when SMAA 1x is hardly slower and provides much better quality.

Yeah, was just about to post that. The breadth of all the edge cases that are looked at, compared, and handled to a great extent in that presentation is pretty rad.

It is extremely disappointing when a game these days ships with FXAA and FXAA only (it is a nice option to include, I guess, for people who for some odd reason prefer it...).
 
The improvements around SMAA are really nice. Apart from something very performance-heavy like SGSSAA, I think the SMAA implementation in Ryse is the AA that impressed me the most. It really did manage to give that filmic, sleek look to the game. Something as good looking would be the UE4 or DOOM TAA implementation.

Just looked at the latest FFXV gameplay video. While there are definitely some improvements regarding performance and IQ since the Platinum demo, we are still not there. They are still using some bad oversharpening solution to counterbalance the sometimes kinda soft look of the image, aaand it can be really rough:

xpJOWaQ.png


4eg6vd7.jpg


Too bad some people like such a thing. When I see NMS players being happy to apply a ReShade sharpening preset and claiming that IQ has improved very much, I know I will never understand how people can be happy with sharpening as a whole when it is a complete IQ destroyer. Same thing with The Witcher 3.

And I think I mentioned some time ago that I had some problems with the way they are doing SSR:

u0oObTd.jpg


It looks like their solution is not taking depth into account at all. From what we have seen up until now, it can sometimes really break a shot as a whole. A shame.
But given what they showed us at SIGGRAPH, I doubt they would choose such an implementation without a reason, and performance might well be one of them, since SSR can easily become quite costly.

It is really infuriating to me that the game is somehow missing the last step after so much effort to implement some really cool pieces of technology. Hope they manage something before the release date with their day-one patch.


EDIT: Higher-quality screenshots from the Gamersyde video.
 

squidyj

Member
i don't know what you're talking about in that reflection shot, the arm blocks the SSR because it is in the buffer over what would have been reflected; dunno why you have an arrow pointing to the fishing line.
 
Did.

From my experience, this looks like an HDR (tone mapping) issue.

Possible indeed.

i don't know what you're talking about in that reflection shot, the arm blocks the SSR because it is in the buffer over what would have been reflected; dunno why you have an arrow pointing to the fishing line.

I wasn't actually pointing at the fishing line but the fish, my bad. And I know that it is a common problem in some SSR implementations, it does not make it less inelegant. It feels especially distracting in motion.
 

pottuvoi

Banned
I wasn't actually pointing at the fishing line but the fish, my bad. And I know that it is a common problem in some SSR implementations, it does not make it less inelegant. It feels especially distracting in motion.
Yes, it's quite distracting in most games.

Would be nice to see deep Z-buffer approaches become more common.
Even if it is just a different layer for characters and the environment.
 
You mean the halos around the sparks?

No, the sparks themselves are way too sharpened and are producing artifacts around them. But maybe that is what you mean by "halo".

I actually think Zettai has it right here. The haloing around the sparks reminds me quite a bit of the typical results from oversharpening on hot white areas of an image. You get that typical "darkened halo" / "ringed" look.

For example, a game's spark particle effects with no sharpening and TAA:
vlcsnap-2016-08-17-11ytpnb.png


Those same sparks with over-sharpening and TAA:
vlcsnap-2016-08-17-11jjqw1.png


Notice how the thinnest particles have the obvious blackened outline around them.
 

squidyj

Member
Possible indeed.



I wasn't actually pointing at the fishing line but the fish, my bad. And I know that it is a common problem in some SSR implementations, it does not make it less inelegant. It feels especially distracting in motion.

This is the sort of issue that would normally be 'solved' by a cubemap reflection which doesn't seem to be getting it done here. From reading up they don't have local cubemaps outdoors but they do indoors.

Yes, it's quite distracting in most games.

Would be nice to see deep Z-buffer approaches become more common.
Even if it is just a different layer for characters and the environment.

Doesn't help to just have a deep Z-buffer; you need a deep G-buffer in order to be able to correctly color the reflection. In a way you're making a sort of camera-space voxelization of the scene.
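To illustrate why a single-layer depth buffer produces that artifact, here's a toy 1D ray march; the scene layout, function signature, and step values are all invented for illustration, not anyone's actual implementation:

```python
# Minimal 1D sketch of screen-space reflection marching against a single
# depth buffer. Everything here (scene layout, thresholds) is illustrative.

def trace_ssr(depth_buffer, start_x, start_depth, step_dx, step_dz, max_steps=64):
    """March a reflected ray through screen space; return the hit column or None.

    With only one depth value per column, any nearer surface (e.g. a
    character's arm) terminates the ray, even though the geometry the ray
    'wanted' to reflect lies behind it -- the classic SSR occlusion artifact.
    """
    x, z = start_x, start_depth
    for _ in range(max_steps):
        x += step_dx
        z += step_dz
        xi = int(round(x))
        if xi < 0 or xi >= len(depth_buffer):
            return None  # ray left the screen: fall back to cubemap/fog
        if depth_buffer[xi] <= z:
            return xi    # the first surface the buffer knows about wins
    return None

# Background wall at depth 10, a foreground arm covering columns 4-5 at depth 2.
depth = [10, 10, 10, 10, 2, 2, 10, 10]
hit = trace_ssr(depth, start_x=0, start_depth=5.0, step_dx=1.0, step_dz=0.0)
# The ray stops at the arm (column 4) instead of reaching the wall behind it.
```

A deep (layered) depth buffer would let the march skip past the arm's layer; a deep G-buffer would additionally supply the color to reflect, as noted above.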
 

pottuvoi

Banned
This is the sort of issue that would normally be 'solved' by a cubemap reflection which doesn't seem to be getting it done here. From reading up they don't have local cubemaps outdoors but they do indoors.
It might have helped some, yes, though placing and aligning those might be hard.

Would love to see some form of distance field method for midrange and out-of-screen reflections (a very low-res representation to keep tracing fast).
Might be enough to have a texture from an overhead view, or input from local cubemaps, to get a half-decent result when SSR messes up etc.
Doesn't help to just have a deep Z-buffer; you need a deep G-buffer in order to be able to correctly color the reflection. In a way you're making a sort of camera-space voxelization of the scene.
Agreed.
 

Rogas

Banned
Bold question but..

What graphics would be quickly achievable on the Neo or Scorpio if they were exclusively developed by a first party team and at 1080p/30fps?
 

Javin98

Banned
Bold question but..

What graphics would be quickly achievable on the Neo or Scorpio if they were exclusively developed by a first party team and at 1080p/30fps?
That's a pretty tough question to answer, honestly, but I'll state my opinion anyway. I'm confident if Uncharted 4 was developed exclusively for the Neo, all those demanding technical features such as MSAA, really high res shadows and fully alpha blended hairs would all be achievable at 1080/30. Sure, it's arguable that the final product came pretty close to that teaser, but the graphics tech used in many aspects is significantly inferior.

In general, though, I'd say shadow resolution would be the top priority if the games were developed exclusively for Neo/Scorpio. I'm quite sure the standard polycount for main characters would also be above 150K at LOD0. Just my two cents anyway. We can't really be sure unless Microsoft is just feeding us misleading PR. I'm quite certain Sony won't let devs develop exclusively for Neo.
 

Rogas

Banned
That's a pretty tough question to answer, honestly, but I'll state my opinion anyway. I'm confident if Uncharted 4 was developed exclusively for the Neo, all those demanding technical features such as MSAA, really high res shadows and fully alpha blended hairs would all be achievable at 1080/30. Sure, it's arguable that the final product came pretty close to that teaser, but the graphics tech used in many aspects is significantly inferior.

In general, though, I'd say shadow resolution would be the top priority if the games were developed exclusively for Neo/Scorpio. I'm quite sure the standard polycount for main characters would also be above 150K at LOD0. Just my two cents anyway. We can't really be sure unless Microsoft is just feeding us misleading PR. I'm quite certain Sony won't let devs develop exclusively for Neo.

It's quite honestly a waste; I don't buy new systems to play jacked-up last-gen games. I hope Sony and Microsoft revise their policy a year or so after launch, to be fair to the OG PS4 and X1 userbase.
 

Javin98

Banned
It's quite honestly a waste; I don't buy new systems to play jacked-up last-gen games. I hope Sony and Microsoft revise their policy a year or so after launch, to be fair to the OG PS4 and X1 userbase.
If you're expecting games to be developed exclusively for the new systems, I suggest you drop that expectation now and don't buy those systems. I don't know about Scorpio since Microsoft's PR is deceptive and misleading as hell, but Sony has clarified many times that the Neo is meant for the hardcore gamers. I recall rumours that a Slim revision wouldn't benefit Sony much, so they decided to take it a step further and make it a mid gen hardware upgrade. It makes sense for those of us who are obsessed with the technical capabilities of the hardware, but for 95% of gamers, they don't know or can't even see the difference. I believe Yoshida has also said that Neo won't affect the PS4's lifecycle, implying that a true next gen console is coming a few years down the line.
 

illamap

Member
Bold question but..

What graphics would be quickly achievable on the Neo or Scorpio if they were exclusively developed by a first party team and at 1080p/30fps?

It is hard to say. I suppose Star Citizen is the closest thing we have to devs targeting more powerful hardware, and IMO the differences aren't huge, though of course SC looks a little bit better, especially in scale.

Also, going forward, I think the whole first party = best graphics thing isn't true anymore, even now. For example, IIRC (could be completely misremembered) the Frostbite team has 300 devs, when even bigger first-party studios like Naughty Dog have under 50-100 programmers including the ICE team. And the hardware architecture is similar, as is the amount of memory. There is just no way first parties can be as effective with much smaller dev teams. We have yet to see a big-budget 30fps Frostbite game this gen.

A bit off your question, but I am a little bit skeptical of an already great-looking PS4/Xbone game being made substantially better with "effects" etc. on Neo/Scorpio. Common suggestions like better AA, less pop-in/longer draw distance etc. are fairly minor improvements.

AA is already good with state-of-the-art TAA, and if you have a lot of pop-in on a current-gen machine, IMO the devs have failed at giving a pleasant image. Longer draw distance requires more submitted draw calls from the CPU, so I have no idea how much the CPU overclock will help on that front. Particle effects have the most potential to be improved on Neo/Scorpio, but they aren't always visible and might also affect gameplay visibility, so in a way you could have a worse experience on Neo/Scorpio as far as visibility is concerned. AO could see improvements with things like capsules in games that don't use them, but again these improvements are quite minor IMO.

Less shadow acne is definitely a bigger improvement for providing a more pleasing image, but I also think there are still opportunities to reduce shadow acne on current-gen machines. After all, The Order: 1886 provides mostly acne-free shadows. For now I guess it is just a question of how much of the frame budget you give to shadows.

TL;DR: I don't think there is an overabundance of "effects" you can just add to a game to make it look a lot better.

Anyways these are just my noob thoughts on the subject. Feel free to disagree :D
 

Javin98

Banned
Less shadow acne is definitely a bigger improvement for providing a more pleasing image, but I also think there are still opportunities to reduce shadow acne on current-gen machines. After all, The Order: 1886 provides mostly acne-free shadows. For now I guess it is just a question of how much of the frame budget you give to shadows.
This is mostly because The Order has static environments, relatively few dynamic light sources and fixed time of day. The Order manages to have really high shadow resolutions because a lot of the shadowing is baked.

Also, Dictator, I sent a PM more than a week ago. :p
 

KKRT00

Member
nice post
I disagree with SC. SC still hasn't shown most people what it is capable of, because they haven't shown fully polished scenes from the singleplayer :) I think people will be surprised, maybe not this Friday, but definitely when they show the vertical slice or story trailer of S42 at CitizenCon in October :)
People really do not realize how much detail goes into the game's polished elements. For example, guns are made from 60K polys and have additional POM detail. That's more than many current-gen games use for their full hero characters.

But I fully agree that 1st party is not relevant. 3rd parties have surpassed 1st in technology for years now, though they mostly do more open-ended or dynamic games that are not as fully art-polished as 1st-party titles.
I mean, Battlefront's level geometry still hasn't been beaten by any game, and Battlefield 1 just seems to be a notch above in every department.
I can't wait for Mass Effect Andromeda, because it's an IP with such amazing art that it managed to make UE games look quite good. And that will be a 30fps Frostbite title.

What I personally expect from Scorpio and Neo games is more dynamic lighting and very high quality shading. Current-gen consoles are already capable of good AA and post-processing. Now we have to scale up shading and lighting.
 

gofreak

GAF's Bob Woodward
Great for AMD, not so much for Nvidia. That 380 is awfully close to a 970. The deferred numbers are more in line with what you'd expect wrt the perf differential. Presentations like this explain poor Nvidia performance in cross-platform AAA titles as of late, especially Vulkan/DX12 titles.
 

Durante

Member
Might well be a result of the NV tiling discovered recently, which should be particularly effective in reducing the external memory BW requirements in a deferred shading workload.

It's also pretty wild how, according to that benchmark, a 970 can do 2160p with 2xMSAA in the same time it takes an XB1 to do 1080p with no AA. What? I know it's slow, but that's a ridiculous difference on the face of it.
 
Might well be a result of the NV tiling discovered recently, which should be particularly effective in reducing the external memory BW requirements in a deferred shading workload.

It's also pretty wild how, according to that benchmark, a 970 can do 2160p with 2xMSAA in the same time it takes an XB1 to do 1080p with no AA. What? I know it's slow, but that's a ridiculous difference on the face of it.

1080p is going to magnify all the weaknesses in the Xbone architecture: the ESRAM amount isn't huge, the 16 ROPs are a huge bottleneck, slow DDR3 RAM.
 

dr_rus

Member
Great for AMD, not so much for Nvidia. That 380 is awfully close to a 970. The deferred numbers are more in line with what you'd expect wrt the perf differential. Presentations like this explain poor Nvidia performance in cross-platform AAA titles as of late, especially Vulkan/DX12 titles.

If by "close" you mean ~30% behind then sure, it's "close". This is very much a GCN optimization again targeted at modern console platforms. NV h/w doesn't need s/w triangle culling and I wonder if Polaris won't show as big gains as Tonga there as well. And I believe that I've read somewhere that a visibility buffer like this can also be implemented much more efficiently by using FL12_1 features - which they completely omit in their research. So yeah, another completely GCN-centric optimization approach. Dat console wins, eh?
 
Slides are up for the GDC Europe '4K Breakthrough' presentation:

http://www.confettispecialfx.com/gdce-2016-the-filtered-and-culled-visibility-buffer-2/

http://www.conffx.com/Visibility_Buffer_GDCE.pdf

zoiX2Ew.png


inNLI95.png


Memory footprint comparison:

ZPdyifP.png


jjEDl2k.png


A performance comparison:

tZbHSqT.png


Seems like AMD hardware might benefit in particular from it at higher resolutions.

Of course, we'll have to wait and see if anyone adopts this approach and how it does in real world applications. But it seems promising.
This reminds me of Eidos Montreal's Deus Ex Deferred+ idea: another interesting way of saving bandwidth by taking advantage of the new APIs.
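A rough back-of-the-envelope on why the memory comparison favours the visibility buffer; the per-pixel byte counts below are illustrative assumptions of mine, not the numbers from the slides:

```python
# Back-of-the-envelope memory comparison: fat G-buffer vs. visibility buffer.
# Byte counts per pixel are illustrative assumptions, not the slides' figures.

WIDTH, HEIGHT = 3840, 2160  # 4K

# Typical deferred G-buffer: albedo, normal, roughness/metal, motion, depth...
gbuffer_bytes_per_pixel = 4 + 4 + 4 + 4 + 4   # five 32-bit render targets

# Visibility buffer: one 32-bit triangle/draw ID plus depth.
visbuffer_bytes_per_pixel = 4 + 4

def footprint_mib(bpp):
    return WIDTH * HEIGHT * bpp / (1024 * 1024)

print(f"G-buffer:          {footprint_mib(gbuffer_bytes_per_pixel):.1f} MiB")
print(f"Visibility buffer: {footprint_mib(visbuffer_bytes_per_pixel):.1f} MiB")
```

The gap (and the bandwidth to fill those targets every frame) grows with resolution, which fits the observation that the benefit shows up most at 4K.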
 
If by "close" you mean ~30% behind then sure, it's "close". This is very much a GCN optimization again targeted at modern console platforms. NV h/w doesn't need s/w triangle culling and I wonder if Polaris won't show as big gains as Tonga there as well. And I believe that I've read somewhere that a visibility buffer like this can also be implemented much more efficiently by using FL12_1 features - which they completely omit in their research. So yeah, another completely GCN-centric optimization approach. Dat console wins, eh?

How did you get 30%? I'm getting a best-case scenario of 22% @ 1080p 4xAA, with other res/settings as low as 11%.
 

dr_rus

Member
How did you get 30%? I'm getting a best-case scenario of 22% @ 1080p 4xAA, with other res/settings as low as 11%.

Well, it's probably closer to 20%; I looked at the deferred results when I compared the frame times. Still, 20% isn't "close", especially considering that this research does jack shit to optimize the code for NV h/w.
 

Javin98

Banned
Just a brief rationale before I get to the main point. As some of you already know, I just got a new gaming PC a few days ago, with an i5 6500 and GTX 1060. Anyway, I've been looking at benchmarks of some of the most demanding games today to see what my PC is really capable of.

So looking at the Nvidia tweak guides for The Division and GTA V, PCSS is one of the options available for shadow quality in both games. The guides state that PCSS is the most accurate method of shadowing in the latter (HFTS in the former). My question is, does anyone really think so? I mean, I'm not a big fan of super sharp shadows. I do, in fact, prefer soft and diffused-looking shadows, but in my opinion, at least, how the hell is PCSS "accurate" in any way? Look at this:
http://international.download.nvidia.com/geforce-com/international/comparisons/grand-theft-auto-v/grand-theft-auto-v-soft-shadows-interactive-comparison-1-nvidia-pcss-vs-softest.html

This is in comparison to the Softest option. I much prefer it over PCSS, but a middle ground between the two would have been perfect. To me, PCSS almost looks too blurry and undefined. Another example:
http://international.download.nvidia.com/geforce-com/international/comparisons/grand-theft-auto-v/grand-theft-auto-v-soft-shadows-interactive-comparison-2-nvidia-pcss-vs-softest.html

Look at the shadows on the right side. It's almost impossible to make out the shapes IMO. Now, I know PCSS is technically more impressive, and I apologize if this is an unpopular opinion, but I would like to see what you guys think about this.

Slightly off topic, let's just say I do eventually pick up GTA V on my PC. What settings would be ideal for my PC? I'm targeting 1080/60, but I don't mind occasional dips to mid or low 50's.
 

nOoblet16

Member
It's accurate because it takes the distance of the object into account; shadows lose definition over distance, and as such will look softer and more diffused the further they are from the object that is casting them. This happens because the ambient light starts interfering and makes the shadows lose their form.

If you look at a shadow cast by a tree under the sun, you'll find that you can't really make out the individual leaves like you can in those screenshots you posted.
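That distance-driven softening is exactly what PCSS estimates. A minimal sketch of the penumbra-size formula from the original NVIDIA PCSS technique (the depth and light-size numbers are invented):

```python
# Sketch of the PCSS penumbra estimate: the filter radius grows with the
# receiver-to-blocker distance, which is why distant shadows blur out.
# Formula follows the original NVIDIA PCSS description; values are made up.

def penumbra_width(d_receiver, d_blocker, light_size):
    """Estimated penumbra size, proportional to how far the receiving
    surface sits behind the average blocker (all in light-space depth)."""
    return (d_receiver - d_blocker) * light_size / d_blocker

# A leaf 1 unit from the light vs. ground 2 units away -> wide, soft shadow.
far = penumbra_width(d_receiver=2.0, d_blocker=1.0, light_size=0.5)
# Ground just behind the blocker -> nearly sharp contact shadow.
near = penumbra_width(d_receiver=1.05, d_blocker=1.0, light_size=0.5)
print(far, near)  # the far shadow's filter radius is much wider
```

The complaint about PCSS looking "too blurry" usually comes down to the assumed light size being tuned larger than the real sun's ~0.5 degree angular size.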
 

Javin98

Banned
It's accurate because it takes the distance of the object into account; shadows lose definition over distance, and as such will look softer and more diffused the further they are from the object that is casting them. The further the shadow is from the object casting it, the more the ambient light interferes and makes it lose its form.

If you look at a shadow cast by a tree under the sun, you'll find that you can't really make out the individual leaves like you can in those screenshots you posted.
I have actually observed shadows closely in real life, and while what you're saying is completely true, my issue is that PCSS overly softens the shadows and at times even loses certain details altogether. Examples from The Division this time:
http://images.nvidia.com/geforce-com/international/comparisons/tom-clancys-the-division/tom-clancys-the-division-shadow-quality-interactive-comparison-002-nvidia-hfts-vs-nvidia-pcss.html

Notice how some details seem non-existent when using PCSS. Most evidently, the shadows of the cables at the centre of the image seem gone. Here's a comparison of HFTS and High:
http://images.nvidia.com/geforce-com/international/comparisons/tom-clancys-the-division/tom-clancys-the-division-shadow-quality-interactive-comparison-002-nvidia-hfts-vs-high.html

In this comparison, the shadows of said cables are there, but clearly lower res and less resolved on High. If you ask me, High is actually closer to what HFTS is achieving than PCSS. Again, my personal preference. I'm just not a big fan of PCSS's overly diffused and soft look.
 

pottuvoi

Banned
I have actually observed shadows closely in real life, and while what you're saying is completely true, my issue is that PCSS overly softens the shadows and at times even loses certain details altogether. Examples from The Division this time:
http://images.nvidia.com/geforce-com/international/comparisons/tom-clancys-the-division/tom-clancys-the-division-shadow-quality-interactive-comparison-002-nvidia-hfts-vs-nvidia-pcss.html

Notice how some details seem non-existent when using PCSS. Most evidently, the shadows of the cables at the centre of the image seem gone. Here's a comparison of HFTS and High:
http://images.nvidia.com/geforce-com/international/comparisons/tom-clancys-the-division/tom-clancys-the-division-shadow-quality-interactive-comparison-002-nvidia-hfts-vs-high.html

In this comparison, the shadows of said cables are there, but clearly lower res and less resolved on High. If you ask me, High is actually closer to what HFTS is achieving than PCSS. Again, my personal preference. I'm just not a big fan of PCSS's overly diffused and soft look.
What you saw in that comparison is called peter panning, caused by the sampling offset used to reduce shadow acne (some blurring is also due to the low-resolution shadow map).
This is something HFTS helps with in The Division, and it is used together with PCSS (HFTS is blended out with distance and replaced by PCSS).
http://www.gdcvault.com/play/1023518/Advanced-Geometrically-Correct-Shadows-for

The actual softness of shadows from the sun is ~0.5 degrees (on Earth).
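A minimal numeric illustration of that bias trade-off (all values invented): a depth offset big enough to kill acne can exceed a thin caster's thickness, at which point its shadow vanishes entirely, which is what happens to those cables:

```python
# Tiny numeric illustration of the shadow-bias trade-off described above.
# A depth bias removes acne but makes thin casters (like those cables)
# disappear once the bias exceeds their light-space thickness. Values invented.

def in_shadow(receiver_depth, shadow_map_depth, bias):
    """Standard shadow-map test: the receiver is shadowed only if it is
    meaningfully behind the depth the light recorded."""
    return receiver_depth - bias > shadow_map_depth

cable_thickness = 0.01                # thin occluder, in light-space depth
shadow_map_depth = 1.0                # depth of the cable as seen by the sun
ground_depth = shadow_map_depth + cable_thickness  # ground right behind it

print(in_shadow(ground_depth, shadow_map_depth, bias=0.001))  # small bias: shadowed
print(in_shadow(ground_depth, shadow_map_depth, bias=0.02))   # big bias: shadow lost
```

Hybrid schemes like HFTS exist precisely to get the near-field contact shadows right without cranking the bias.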
 

dogen

Member
As people mentioned earlier, GDC Vault slides are starting to populate and we are getting some cool papers. I have yet to look at it, but "Cinematic Quality, Anti-Aliasing in Quantum Break" just came out. Let's see if that explains the horrible trailing the game has or if that was just the upscaling.
edit: I guess QB's horrible performance is somewhat contextualised there. Light pre-pass, ew.

Are light pre-pass techniques common this gen? I don't remember the tradeoffs versus regular deferred shading (smaller G-buffer? can't remember...), but rendering geometry twice per frame has to hit those older GCN GPUs pretty hard, I think.

edit - ahh, so smaller g-buffer, and easier to use msaa...
 

nOoblet16

Member
So I understand that games have been using HDR lighting since last gen, but since the screens were all standard range they had to convert the HDR image into standard range for it to be displayed properly, much in the same way cameras fake HDR by mixing three images of varying luminosity. HDR lighting in games is used to have light sources of varying levels of luminosity without the textures getting white/black crushed. But due to the inevitable conversion for display on a standard-range screen (and, last gen, due to low-quality HDR lighting), it would still end up being somewhat limited in nature. With an HDR television this conversion is not necessary and the output can have full range... Am I right so far?

If yes, then can someone explain to me just how exactly doing HDR output costs resources? (Since MS supposedly overclocked the One S for this.)
 

pottuvoi

Banned
So I understand that games have been using HDR lighting since last gen, but since the screens were all standard range they had to convert the HDR image into standard range for it to be displayed properly, much in the same way cameras fake HDR by mixing three images of varying luminosity. HDR lighting in games is used to have light sources of varying levels of luminosity without the textures getting white/black crushed. But due to the inevitable conversion for display on a standard-range screen (and, last gen, due to low-quality HDR lighting), it would still end up being somewhat limited in nature. With an HDR television this conversion is not necessary and the output can have full range... Am I right so far?

If yes, then can someone explain to me just how exactly doing HDR output costs resources? (Since MS supposedly overclocked the One S for this.)
Almost. Even current 'HDR' TVs cannot show the vast dynamic range needed to create realistic images.

Instead of tone mapping to an LDR TV and its 256 brightness values, we tone map to 1024 values (10-bit).
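A tiny sketch of that point: the tone-mapping step stays, only the quantization target changes. Reinhard here is just a stand-in for whatever curve a real game uses, and the sample value is arbitrary:

```python
# The pipeline still tone maps; HDR output just quantizes the result to
# 1024 (10-bit) instead of 256 (8-bit) levels. Reinhard is a stand-in curve.

def reinhard(x):
    return x / (1.0 + x)        # maps [0, inf) into [0, 1)

def quantize(x, bits):
    levels = (1 << bits) - 1
    return round(x * levels) / levels

hdr_sample = 1.5                 # linear scene luminance, arbitrary units
tone = reinhard(hdr_sample)      # 0.6
print(quantize(tone, 8))         # smallest step ~1/255
print(quantize(tone, 10))        # smallest step ~1/1023, finer gradients
```

Which is why, by itself, 10-bit output is nearly free on the GPU side; the costs people cite are mostly in the display pipeline and bandwidth, not the shading.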
 

nOoblet16

Member
So it shouldn't really require any extra performance at all, and the PS4's HDR would really be no different than the Pro's HDR, even with its outdated HDMI and its relatively limited bandwidth.
 


squidyj

Member
I've been doing some reading about HDR display technology,

I'm not entirely clear on how that interacts with the color spaces. Working with the existing gamma function, we have to make sure all the resources and calculations are in a linear color space, after which we apply gamma to make it look right on the display. I'm not entirely sure how this process changes when working with a display that's using something like Dolby's Perceptual Quantizer.
 

dr_rus

Member
I've been doing some reading about HDR display technology,

I'm not entirely clear on how that interacts with the color spaces. Working with the existing gamma function, we have to make sure all the resources and calculations are in a linear color space, after which we apply gamma to make it look right on the display. I'm not entirely sure how this process changes when working with a display that's using something like Dolby's Perceptual Quantizer.

Have you read those?
https://developer.nvidia.com/preparing-real-hdr
https://developer.nvidia.com/sites/default/files/akamai/gameworks/hdr/UHDColorForGames.pdf
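To make the question concrete, here's a sketch of the SMPTE ST 2084 (PQ) encode those references cover: rendering and lighting still happen in linear space, and the PQ curve simply replaces the final gamma encode. The constants are the published ST 2084 ones; the 10,000-nit normalization is part of the standard:

```python
# SMPTE ST 2084 (Dolby PQ) encode: linear light in, [0, 1] signal out.
# Unlike gamma, PQ is defined on absolute luminance, up to 10,000 nits.

M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits):
    """Map absolute luminance in nits to a [0, 1] PQ signal value."""
    y = max(nits, 0.0) / 10000.0
    ym = y ** M1
    return ((C1 + C2 * ym) / (1.0 + C3 * ym)) ** M2

print(pq_encode(100))    # SDR-ish peak white lands around 0.51
print(pq_encode(10000))  # full-range peak maps to exactly 1.0
```

So the working-space discipline doesn't change; what changes is that the encode step needs to know real luminance levels, which is why HDR pipelines carry a reference-white calibration instead of a single gamma exponent.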
 

Proelite

Member
On the topic of checkerboard rendering, does any game do dynamic checkerboard rendering? If the load permits, reconstruct from 1/2 of the pixels, or render native if the GPU is free to do so. Of course the historical pixel positions have to correspond 1:1.

So essentially this:
1/4 pixel checkerboard rendering. 50% GPU savings.
1/2 diagonal-line reconstructive rendering. Let's call it zebra rendering. 25% GPU savings.
Native rendering.
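Reading those tiers as shading 1/2 and 3/4 of the pixels respectively (my interpretation, since that is what the quoted savings imply), the arithmetic at 1080p looks like this:

```python
# Quick arithmetic behind the proposed tiers: fraction of pixels actually
# shaded per frame at 1080p, the rest reconstructed from history.
# The savings are upper bounds; real reconstruction has its own cost.

NATIVE = 1920 * 1080

modes = {
    "checkerboard (shade 1/2)": 0.5,    # ~50% GPU saving
    "'zebra' (shade 3/4)":      0.75,   # ~25% GPU saving
    "native":                   1.0,
}

for name, fraction in modes.items():
    shaded = int(NATIVE * fraction)
    print(f"{name}: {shaded} pixels shaded, {1 - fraction:.0%} saved")
```

The "correspond 1:1" requirement matters because the reconstruction pass needs stable sample positions across frames to reuse history without smearing.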
 

pottuvoi

Banned
On the topic of checkerboard rendering, does any game do dynamic checkerboard rendering? If the load permits, reconstruct from 1/2 of the pixels, or render native if the GPU is free to do so. Of course the historical pixel positions have to correspond 1:1.

So essentially this:
1/4 pixel checkerboard rendering. 50% GPU savings.
1/2 diagonal-line reconstructive rendering. Let's call it zebra rendering. 25% GPU savings.
Native rendering.
Currently, no.

Doom does use dynamic resolution with TAA reconstruction to 1080p. (At least on XO.)
 

Teeth

Member
Since fancy upscaling is the new hotness, I've been thinking about some non-uniform scaling options that I've wondered whether any other developers have looked into. Some things I've been wondering about:

1) Scaling different portions of the screen at different pixel densities. For instance, take, say, the outside edges of the screen and render them at a lower pixel density to be upscaled than the centre of the screen. Essentially, the outside edges would be blurrier than the centre due to fewer pixels being natively rendered, but human eyes tend to focus much less on the edges than the centre.

2) Rendering different portions of the screen at different pixel densities based on content. For instance, on a game like Uncharted, rendering the main avatar and local environment assets at full pixel density and then checkerboarding the remaining pixels. Basically creating a mask of prioritized elements, rendering them at native density and then everything outside of the mask to be upscaled.

3) Finding high-frequency edges and rendering them at native pixel density and then rendering the rest of the image at lower resolution to be upscaled. Basically, colour blobs and gradients would need much less pixel density to look okay upscaled.

Do any of these sound unfeasible?
 

pottuvoi

Banned
Since fancy upscaling is the new hotness, I've been thinking about some non-uniform scaling options that I've wondered whether any other developers have looked into. Some things I've been wondering about:

1) Scaling different portions of the screen at different pixel densities. For instance, take, say, the outside edges of the screen and render them at a lower pixel density to be upscaled than the centre of the screen. Essentially, the outside edges would be blurrier than the centre due to fewer pixels being natively rendered, but human eyes tend to focus much less on the edges than the centre.

2) Rendering different portions of the screen at different pixel densities based on content. For instance, on a game like Uncharted, rendering the main avatar and local environment assets at full pixel density and then checkerboarding the remaining pixels. Basically creating a mask of prioritized elements, rendering them at native density and then everything outside of the mask to be upscaled.

3) Finding high-frequency edges and rendering them at native pixel density and then rendering the rest of the image at lower resolution to be upscaled. Basically, colour blobs and gradients would need much less pixel density to look okay upscaled.

Do any of these sound unfeasible?
Depends really on how one can make it work with every part of the engine (post-processing etc.).
Also it must be a consistent saving without visible artifacts, which might be hard to do.

1. Pretty much what Sony did in the PSVR demos (masked quads depending on how far from the middle they were, and used TAA and motion vectors to reconstruct the missing pieces).

2. Been wondering the same, but mostly for areas in which the quality isn't needed anyway, i.e. parts of the screen which will get wide depth-of-field blurring, or really fast-moving objects blurred by motion blur.
Variable rate shading methods could be used for such; not sure which method would work best.

3. A lot of things are already rendered in low resolution (SSAO, some transparents etc.).
The MSAA trick could be used to have full 4K resolution for edges and keep shading at 1080p.

If texture space shading becomes common I expect such things to become a lot more used (as well as shading some things at a very low framerate).
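A toy version of the radial masking idea from point 1 (grid size and falloff thresholds are invented): tiles far from the screen centre get a lower shading rate, and the gaps would be filled from TAA history:

```python
# Toy radial shading-rate mask, in the spirit of the PSVR masked-quads trick:
# full rate at the centre, falling off toward the edges. All numbers invented.

def shading_rate_mask(tiles_x, tiles_y):
    """Return a grid of per-tile shading rates: 1.0 = full rate at the
    centre, 0.5 and 0.25 toward the periphery."""
    cx, cy = (tiles_x - 1) / 2, (tiles_y - 1) / 2
    max_r = (cx ** 2 + cy ** 2) ** 0.5
    mask = []
    for ty in range(tiles_y):
        row = []
        for tx in range(tiles_x):
            r = ((tx - cx) ** 2 + (ty - cy) ** 2) ** 0.5 / max_r
            row.append(1.0 if r < 0.5 else 0.5 if r < 0.8 else 0.25)
        mask.append(row)
    return mask

for row in shading_rate_mask(8, 4):
    print(row)
```

Hardware variable rate shading exposes essentially this as a per-tile rate image, which would make ideas 1 and 2 above much cheaper to implement than manual masked quads.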
 