
Inside PlayStation 4 Pro: How Sony made the first 4K games console

renzolama

Member
It's unfortunate that technical discussions between people who claim to be professionals turn into name-calling and insults so quickly, when they could be valuable exchanges of knowledge that would benefit everyone interested in the subject.
 
Wait, I'm a little confused. Are they saying the PS4 Pro is a legitimate, bona fide 4K games machine? I thought the PS4 Pro was only capable of upscaling retail games to 4K quality, unless they are counting native 4K indie, lower-budget-style games.
 

belvedere

Junior Butler
Wait, I'm a little confused. Are they saying the PS4 Pro is a legitimate, bona fide 4K games machine? I thought the PS4 Pro was only capable of upscaling retail games to 4K quality, unless they are counting native 4K indie, lower-budget-style games.

Via magic called checkerboard rendering, potentially. So far DF has spoken positively about it, saying that unless you're closer to the display than 3ft it's unnoticeable.
 
Thanks guys. I honestly had no idea that the PS4 could natively run games in 4K beyond indie titles. I assumed that everything was upscaled.
 
Thanks guys. I honestly had no idea that the PS4 could natively run games in 4K beyond indie titles. I assumed that everything was upscaled.

The bigger names will be 'upscaled', yes, but the DF article goes into the new type of scaling, which is quite a bit different from the methods of the past. Everyone who has seen it in person has been impressed - the games that aren't 'native' 4K apparently look very, very close to it via this checkerboard technique, and it can be combined with other up-rendering/scaling techniques as well. It's far better than, say, rendering a 1440p image and letting the TV do the scaling.

For all intents and purposes, the Pro is a 4K machine in that it can readily take advantage of 4K displays for significantly increased clarity.
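
To make the idea concrete, here's a toy Python sketch of the basic premise (illustrative only, not Sony's actual algorithm; real checkerboard reconstruction also uses motion vectors and the Pro's ID buffer to decide when pixel reuse is safe):

```python
import numpy as np

def checkerboard_mask(h, w, frame):
    """True where this frame shades fresh pixels; the two checker
    phases alternate every frame."""
    yy, xx = np.mgrid[0:h, 0:w]
    return (yy + xx) % 2 == (frame % 2)

def reconstruct(prev_frame, fresh, mask):
    """Merge newly shaded pixels with pixels reused from last frame."""
    out = prev_frame.copy()
    out[mask] = fresh[mask]
    return out

prev_frame = np.zeros((4, 8))  # stand-in for last frame's image
fresh      = np.ones((4, 8))   # stand-in for this frame's shading
print(reconstruct(prev_frame, fresh, checkerboard_mask(4, 8, frame=1)))
```

Only about half the pixels get freshly shaded each frame, which is why the technique costs far less than native 4K while resolving much finer detail than plain 1440p upscaling.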
 
That's a lot of significant upgrades over the OG PS4 specs. The more efficient GPU and delta colour compression for the memory are great extras.
 

dr_rus

Member
They didn't cut out features on their consumer cards so they could sell them in their high-margin pro cards?
Nvidia has been doing that for ages.

When someone "cuts out" a big engine from a cheaper car, is it artificial or not? The h/w isn't there in consumer cards; that was a market-positioning decision, not something artificial. Nv has been doing this for ages, and look where Nv is now and where their competitors are.

The biggest thing to take away from this is that IF FP16 begins to actually benefit the GeForce (gaming) market, they will be able to bring that h/w into their consumer products rather fast.
 

Tripolygon

Banned
When someone "cuts out" a big engine from a cheaper car, is it artificial or not? The h/w isn't there in consumer cards; that was a market-positioning decision, not something artificial. Nv has been doing this for ages, and look where Nv is now and where their competitors are.

The biggest thing to take away from this is that IF FP16 begins to actually benefit the GeForce (gaming) market, they will be able to bring that h/w into their consumer products rather fast.
Nvidia artificially restricts the performance of FP16 in their consumer cards in order to sell it in their pro cards. You are hung up on the word "artificial" as if it means the hardware support is there and they are using software to restrict it (yes, they have also done that before via software). That's not what the word artificial means here; it means they are creating market segmentation, which is good business.

I don't care about their market share; that's not what I was talking about. Someone said FP16 is slow on PC, using Nvidia cards as the example, and I simply pointed out that's because Nvidia restricts the performance of FP16 on their consumer cards.

You can stay hung up on the word artificial, but we are going to agree to disagree. I hate car analogies; they seldom make sense. Two cars with the same engine, but one car has a higher horsepower rating - this can be accomplished by restricting the exhaust and air inlets, changing valve timing, etc.
 

nemisis0

Member
Hmmm, shopto stock for the Pro has changed from pre-order to in stock. Probably a mistake, but I have never seen them do that with a console before unless they are selling some early.
 

Vashetti

Banned
Thing is, all the talk about how the same (more or less) h/w with a slight clock bump leads to compatibility issues shows that Sony don't really care about s/w compatibility with future platforms. Otherwise this shit would be abstracted behind APIs enough to not make any difference.

This kinda reaffirms Leadbetter's impression that PS5 may be a "clean slate" again, losing all compatibility with PS4/Pro s/w.

Even Cerny said as much to EG:

PS5 will be a new architecture that pushes things forward, without holding on to old hardware compatibility.

Uh, if I'm reading that right, it's kinda worrying.

I've been buying more digital titles on PS4 lately on the assumption that the x86 shift this gen would pretty much ensure these games will work on future-generation machines for the foreseeable future.

I'm going to be really disappointed if PS5 makes us start all over again. I thought MS and Sony were doing away with restarts every generation; I wanted to start building a proper digital library.
 

Kleegamefan

K. LEE GAIDEN
You have no clue how game development works on closed architecture on consoles.

None. Nada. Zip. Zilch.

You have no clue how low-level APIs work.

"throwing a better hardware to a game is precisely the easiest way to get that game to run flawlessly"

Oh, why sure it is! Sure, sure. I mean, forget the fact that people like Chubigans and I work on PS4 daily. We have no clue what we are talking about. We certainly don't have intimate knowledge of the system architecture or the APIs for OG/Pro, and can't cite the differences that led to the decision that 1:1 performance is the appropriate choice.

Wait, no... That's not it... Dammit, where are my PS4 developer cliff notes!

I don't even really need to explain the 360/XO compatibility, do I? I'll just ask and see if you can get to the answer yourself:

Does every 360 game run on XO right out of the box? Or do only certain games have compatibility?

There should be a light bulb going off above your head any second now, followed by an "ah ha!" and then a "doh!".


Oh snap
 

Kaako

Felium Defensor
You have no clue how game development works on closed architecture on consoles.

None. Nada. Zip. Zilch.

You have no clue how low-level APIs work.

"throwing a better hardware to a game is precisely the easiest way to get that game to run flawlessly"

Oh, why sure it is! Sure, sure. I mean, forget the fact that people like Chubigans and I work on PS4 daily. We have no clue what we are talking about. We certainly don't have intimate knowledge of the system architecture or the APIs for OG/Pro, and can't cite the differences that led to the decision that 1:1 performance is the appropriate choice.

Wait, no... That's not it... Dammit, where are my PS4 developer cliff notes!

I don't even really need to explain the 360/XO compatibility, do I? I'll just ask and see if you can get to the answer yourself:

Does every 360 game run on XO right out of the box? Or do only certain games have compatibility?

There should be a light bulb going off above your head any second now, followed by an "ah ha!" and then a "doh!".
Glad to see more actual devs who know what they're talking about shedding light. I did say god dayuuumn tho lol.
 

onQ123

Member
Uh, if I'm reading that right, it's kinda worrying.

I've been buying more digital titles on PS4 lately on the assumption that the x86 shift this gen would pretty much ensure these games will work on future-generation machines for the foreseeable future.

I'm going to be really disappointed if PS5 makes us start all over again. I thought MS and Sony were doing away with restarts every generation; I wanted to start building a proper digital library.

Question: do you want better PS4 games, or do you want a new generation of games with a better interface & better everything all around, instead of just better versions of what you already have?


From the last generation to this generation, Xbox & PlayStation didn't change much, but I think they are a lot better than they would have been if they had just improved on the older specs to keep all the old games compatible.
 

Vashetti

Banned
Question: do you want better PS4 games, or do you want a new generation of games with a better interface & better everything all around, instead of just better versions of what you already have?

I want better games, but I want all of my games from this gen to carry forward. I assumed this was one of the primary reasons for the shift to x86, so we don't have to start over again.
 

dr_rus

Member
Nvidia artificially restricts the performance of FP16 in their consumer cards in order to sell it in their pro cards. You are hung up on the word "artificial" as if it means the hardware support is there and they are using software to restrict it (yes, they have also done that before via software). That's not what the word artificial means here; it means they are creating market segmentation, which is good business.

I don't care about their market share; that's not what I was talking about. Someone said FP16 is slow on PC, using Nvidia cards as the example, and I simply pointed out that's because Nvidia restricts the performance of FP16 on their consumer cards.

You can stay hung up on the word artificial, but we are going to agree to disagree. I hate car analogies; they seldom make sense. Two cars with the same engine, but one car has a higher horsepower rating - this can be accomplished by restricting the exhaust and air inlets, changing valve timing, etc.

That's exactly what the word "artificial" means, and you're just grasping at straws now. There is no FP16 h/w beyond the single SP for CUDA compatibility in NV's consumer GPUs, and you don't know nearly enough to say that it's "artificial" and not in fact a conscious choice to make the consumer part simpler and better-performing where it actually matters - namely in FP32.

No, FP16 isn't slow on consumer parts because NV restricts anything. It's actually as fast as FP32 in gaming applications. The only part which is slow is CUDA-native FP16 code, which isn't even used in gaming anywhere and is there just to make sure that people who write stuff for big Tesla machines are able to debug the same stuff on their notebooks.

Uh, if I'm reading that right, it's kinda worrying.

I've been buying more digital titles on PS4 lately on the assumption that the x86 shift this gen would pretty much ensure these games will work on future-generation machines for the foreseeable future.

I'm going to be really disappointed if PS5 makes us start all over again. I thought MS and Sony were doing away with restarts every generation; I wanted to start building a proper digital library.

Same here. I was kinda hoping that Sony would try to build a software platform on x86, but it seems that they won't, and PS5 will likely break compatibility again.
 
Uh, if I'm reading that right, it's kinda worrying.

I've been buying more digital titles on PS4 lately on the assumption that the x86 shift this gen would pretty much ensure these games will work on future-generation machines for the foreseeable future.

I'm going to be really disappointed if PS5 makes us start all over again. I thought MS and Sony were doing away with restarts every generation; I wanted to start building a proper digital library.

I think you have nothing to worry about. The PS5 talk is in reference to the lack of forward compatibility with PS4. So it will be a new gen, with all the significant upgrades that implies, and with exclusive games that won't run on PS4/Pro. Backwards compatibility is a separate issue, and one I think is highly likely.
 
It's unfortunate that technical discussions between people who claim to be professionals turn into name-calling and insults so quickly, when they could be valuable exchanges of knowledge that would benefit everyone interested in the subject.

Seems like that's hard to do when armchair experts are just going to deflect the knowledge and not engage in actual tech speak, but instead bring up useless comparisons that make no sense in the context of what this system is doing.

I'm following along, and everything is alien speak.
 

Tripolygon

Banned
That's exactly what the word "artificial" means, and you're just grasping at straws now. There is no FP16 h/w beyond the single SP for CUDA compatibility in NV's consumer GPUs, and you don't know nearly enough to say that it's "artificial" and not in fact a conscious choice to make the consumer part simpler and better-performing where it actually matters - namely in FP32.
That's not what the word artificial means in my context, so don't tell me what it means. You sarcastically asked if that was what I meant and I said yes, so I don't know what you're still going on about.

artificial: "made or produced by human beings rather than occurring naturally"

I chose my words carefully: I said Nvidia artificially restricts them in their consumer cards. That is segmentation made by Nvidia to sell pro cards that have higher margins. They've been doing that with compute features in their consumer cards for ages.

OK, after reading the link that ethomaz quoted to support his claim:

AnandTech

Limiting the performance of compute-centric features in consumer parts is nothing new for NVIDIA. FP64 has been treated as a Tesla feature since the beginning, and consumer parts have either shipped with a very small number of FP64 CUDA cores for binary compatibility purposes, or when a GeForce card uses an HPC-class GPU, FP64 performance is artificially restricted. This allows NVIDIA to include a feature for software development purposes while enforcing strict market segmentation between the GeForce and Tesla products. However in the case of FP64, performance has never been slower than 1/32, whereas with FP16 we’re looking at a much slower 1/128 instruction rate. Either way, the end result is that like GP104’s FP64 support, GP104’s FP16 support is almost exclusively for CUDA development compatibility and debugging purposes, not for performant consumer use.
As for why NVIDIA would want to make FP16 performance so slow on Pascal GeForce parts, I strongly suspect that the Maxwell 2 based GTX Titan X sold too well with compute users over the past 12 months, and that this is NVIDIA’s reaction to that event. GTX Titan X’s FP16 and FP32 performance was (per-clock) identical its Tesla equivalent, the Tesla M40, and furthermore both cards shipped with 12GB of VRAM. This meant that other than Tesla-specific features such as drivers and support, there was little separating the two cards.

Clearly the author of the article didn't know nearly enough to make that claim.
 
There were a couple of helpful analogies earlier regarding the differences between FP16 and FP32 and how the PS4 Pro can benefit from them, but can someone answer this:

How will the increased capabilities of FP16 manifest in games? In pure layman's speak, what will this make shinier?

Just higher-quality shaders?
 
It's too early to tell whether the generational leap to PS5 means a completely clean slate from a library perspective.

What we know from Cerny's hints is that PS5's hardware will not be built with hardware-level BC for PS4 in mind. But that doesn't mean PS5 can't "emulate" PS4 games and keep BC via software emulation.

Obviously they're not going to make any promises at this stage, but considering the competition's intent to build a BC library moving forward, I can't imagine PS5's development at the moment not taking into account a PS4 emulator to simulate the PS4 platform.
 

Razgreez

Member
Obviously they're not going to make any promises at this stage, but considering the competition's intent to build a BC library moving forward, I can't imagine PS5's development at the moment not taking into account a PS4 emulator to simulate the PS4 platform.

Hit the nail on the head with the bolded. Sony will likely stick to x86 and remain partnered with AMD. However, to take full advantage of any new developments without being hamstrung by previous-generation drawbacks, the next hardware would likely start like the PS4 did - with a clean, developer-requirement-driven slate, and with PS4 BC/emulation as a relative priority.

I was one of those who presumed Sony was moving away from generations; however, Cerny's reasoning describing why that would not make sense is sound.
 

Avtomat

Member
There were a couple of helpful analogies earlier regarding the differences between FP16 and FP32 and how the PS4 Pro can benefit from them, but can someone answer this:

How will the increased capabilities of FP16 manifest in games? In pure layman's speak, what will this make shinier?

Just higher-quality shaders?

My understanding is that it will in fact lead to lower-quality shaders, but hear me out before you jump to conclusions.

FP16 processing allows devs to use lower-precision data for shading. In some cases the shaders do not require such high precision, and you can use the lower-precision data to have the computation done twice as fast.

So the point of FP16 is that in those cases - which the game devs have to figure out and identify - they can get work done twice as fast, freeing up resources sooner to do other things. The opportunities to implement this will vary game by game & scene by scene, so it's not 2x as fast everywhere.

So in fact you can have these lower-quality shaders (the user will never notice) but a higher frame rate, or you can use the time saved to have higher-quality shaders and prettier graphics elsewhere - where the user will notice.
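
To put a number on that trade-off, here's a minimal CPU-side sketch with NumPy's half-precision type (hypothetical shading math; it can only show the accuracy cost, since the 2:1 rate gain needs double-rate FP16 hardware):

```python
import numpy as np

rng = np.random.default_rng(0)
albedo = rng.random(100_000, dtype=np.float32)
ndotl  = rng.random(100_000, dtype=np.float32)

full = albedo * ndotl * 0.8 + 0.05                  # FP32 shading-style math
half = (albedo.astype(np.float16) * ndotl.astype(np.float16)
        * np.float16(0.8) + np.float16(0.05))       # same math in FP16

max_err = np.abs(full - half.astype(np.float32)).max()
print(max_err, 1 / 255)  # worst-case FP16 error vs. one 8-bit display step
```

For short, well-ranged expressions like this, the FP16 error stays below what an 8-bit output can even display - exactly the "user will never notice" case.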
 

onQ123

Member
There were a couple of helpful analogies earlier regarding the differences between FP16 and FP32 and how the PS4 Pro can benefit from them, but can someone answer this:

How will the increased capabilities of FP16 manifest in games? In pure layman's speak, what will this make shinier?

Just higher-quality shaders?


Say a dev made a game for the PS4 & used FP32 for a lot of tasks that they could have used FP16 for; that same dev could make that game for the PS4 Pro, use FP16, & get a little over 4x the performance.

This is where we will see native 4K PS4 Pro games vs. 1080p PS4 games.

http://www.neogaf.com/forum/showpost.php?p=215207877&postcount=3366
 

JohnnyFootball

GerAlt-Right. Ciriously.
I see people asking about the best SSD and many mentioning Samsung.

Yes, Samsung is pretty much the de facto brand for SSDs.

However.....


Don't waste your money getting the best SSD. Save that for a PC or laptop.

Get a decent SSD, as it will be many, many times faster than the HDD that ships with the console. Crucial or Mushkin brand SSDs are easy enough to recommend.

http://www.newegg.com/Product/Product.aspx?Item=N82E16820226596

Of course, everyone here is assuming that an SSD will even provide a worthwhile benefit, despite the fact that it provided a negligible benefit on the current PS4. Even on SATA2 an SSD should have blown away the stock HDD, but it didn't.
 

ethomaz

Banned
Say a dev made a game for the PS4 & used FP32 for a lot of tasks that they could have used FP16 for; that same dev could make that game for the PS4 Pro, use FP16, & get a little over 4x the performance.

This is where we will see native 4K PS4 Pro games vs. 1080p PS4 games.

http://www.neogaf.com/forum/showpost.php?p=215207877&postcount=3366
FP16 is not a compressed FP32.

You lose range and accuracy using FP16, which can end up causing a loss in IQ after processing.

And FP16 is only twice as fast as FP32 on Pro... that "over 4x performance" is bullshit again.

You can use FP16 in games where you don't need FP32's range of precision, and talking about real games, the boost from switching to FP16 where you can is around 20%, from what I've read.

Even the 2x performance boost is a fantasy, because you can't use FP16 in every single place.
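
For anyone who wants to see that range and accuracy loss directly, a quick NumPy sketch:

```python
import numpy as np

print(np.finfo(np.float16).max)           # 65504.0 - the largest finite FP16
print(np.float16(70000.0))                # inf - overflows past that ceiling
print(np.float16(0.1) * np.float16(3.0))  # 0.2998 - only ~3 decimal digits
print(np.float32(0.1) * np.float32(3.0))  # 0.3 - FP32 is good to ~7 digits
```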
 

Alej

Banned
FP16 is not a compressed FP32.

You lose range and accuracy using FP16, which can end up causing a loss in IQ after processing.

And FP16 is only twice as fast as FP32 on Pro... that "over 4x performance" is bullshit again.

You can use FP16 in games where you don't need FP32's range of precision, and talking about real games, the boost from switching to FP16 where you can is around 20%, from what I've read.

Even the 2x performance boost is a fantasy, because you can't use FP16 in every single place.

4x performance over the vanilla PS4 - I think you didn't understand what he is logically saying.

An FP32 task on the PS4 is four times slower than the same task processed in FP16 on the Pro (assuming the difference in FLOPs between PS4 and Pro is over 2x).
 

ethomaz

Banned
4x performance over the vanilla PS4 - I think you didn't understand what he is logically saying.

An FP32 task on the PS4 is four times slower than the same task processed in FP16 on the Pro (assuming the difference in FLOPs between PS4 and Pro is over 2x).
I see that, but the difference for a game using FP16 over FP32 is nowhere near 2x on a 2:1 GPU. And you need to ask whether the loss in quality is worth it.
 

Alej

Banned
I see that, but the difference for a game using FP16 over FP32 is nowhere near 2x on a 2:1 GPU. And you need to ask whether the loss in quality is worth it.

He didn't say that though. No one is saying that.

But let people say it's an 8.4 TFLOPs machine (in FP16), because they are right, even if it means nothing.

In reality, it's 4.5x PS4 > Pro > 2.25x PS4.
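
Spelling out the arithmetic behind those multipliers, using the rounded marketing TFLOPs figures:

```python
ps4_fp32 = 1.84          # TFLOPs, original PS4 (FP16 runs at the same rate)
pro_fp32 = 4.2           # TFLOPs, PS4 Pro
pro_fp16 = 2 * pro_fp32  # 8.4 TFLOPs via the Pro's double-rate FP16

print(round(pro_fp32 / ps4_fp32, 2))  # 2.28 - Pro vs. PS4, both in FP32
print(round(pro_fp16 / ps4_fp32, 2))  # 4.57 - Pro FP16 vs. PS4 FP32, best case
```

Real gains land somewhere between the two, depending on how much of a given shader can safely drop to FP16.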
 

onQ123

Member
FP16 is not a compressed FP32.

You lose range and accuracy using FP16, which can end up causing a loss in IQ after processing.

And FP16 is only twice as fast as FP32 on Pro... that "over 4x performance" is bullshit again.

You can use FP16 in games where you don't need FP32's range of precision, and talking about real games, the boost from switching to FP16 where you can is around 20%, from what I've read.

Even the 2x performance boost is a fantasy, because you can't use FP16 in every single place.

It all just went over your head.

PS4 Pro is already over 2x the PS4, & when you add in devs optimizing & using FP16 where they previously used FP32 (on the PS4, FP16 was limited to the same throughput as FP32), you get over 4x the performance.

Don't forget that PS4 Pro has extra hardware - like the advanced work distributor, ID buffer, color compression & so on - that will lighten the load vs. making a game on the PS4.
 

ethomaz

Banned
It all just went over your head.

PS4 Pro is already over 2x the PS4, & when you add in devs optimizing & using FP16 where they previously used FP32 (on the PS4, FP16 was limited to the same throughput as FP32), you get over 4x the performance.

Don't forget that PS4 Pro has extra hardware - like the advanced work distributor, ID buffer, color compression & so on - that will lighten the load vs. making a game on the PS4.
You are comparing to the PS4 - I misunderstood... I thought you were talking about 4x over the Pro with FP32.
 

ethomaz

Banned
BTW, a bit of GPU history.

Up to NV35 (FX 5900), nVidia GPUs didn't have a native FP32 unit, and because of that FP32 was way slower than FP16. ATI did have a better FP32 unit - still slower than FP16, but faster than nVidia's FP32 emulation. ATI also had a middle option called FP24, with performance close to FP16 and a better image-quality trade-off relative to FP32.

nVidia was accused of cheating because their drivers forced games to run in FP16, which was faster than ATI's solution, but the output image was way worse than ATI's FP24 option... the FP16 output was a really big trade-off in terms of image quality.

GPUs after that moved to native FP32 units, and all performance and image quality were measured in FP32.

Mobile GPUs use FP16 because a small screen can't show the difference in image quality, making it a trade-off worth taking.

Big-screen GPUs all use FP32 because of the better quality and optimal performance (the move to native FP32 units meant FP16 was emulated until Polaris, and so performed worse than FP32).

People who think FP16 is a new thing that will give a free performance boost to games are lying to themselves.

FP16 offers lower image quality compared to FP32, and not every part of the code can run in FP16 (most shaders can)... the gain won't be close to 2x performance, and it comes with degradation in image quality.

The industry needs to, and should, move to FP64 when the hardware implementation gets cheaper and free on performance :p
 

Colbert

Banned
BTW, a bit of GPU history.

Up to NV35 (FX 5900), nVidia GPUs didn't have a native FP32 unit, and because of that FP32 was way slower than FP16. ATI did have a better FP32 unit - still slower than FP16, but faster than nVidia's FP32 emulation. ATI also had a middle option called FP24, with performance close to FP16 and a better image-quality trade-off relative to FP32.

nVidia was accused of cheating because their drivers forced games to run in FP16, which was faster than ATI's solution, but the output image was way worse than ATI's FP24 option... the FP16 output was a really big trade-off in terms of image quality.

GPUs after that moved to native FP32 units, and all performance and image quality were measured in FP32.

Mobile GPUs use FP16 because a small screen can't show the difference in image quality, making it a trade-off worth taking.

Big-screen GPUs all use FP32 because of the better quality and optimal performance (the move to native FP32 units meant FP16 was emulated until Polaris, and so performed worse than FP32).

People who think FP16 is a new thing that will give a free performance boost to games are lying to themselves.

FP16 offers lower image quality compared to FP32, and not every part of the code can run in FP16 (most shaders can)... the gain won't be close to 2x performance, and it comes with degradation in image quality.

The industry needs to, and should, move to FP64 when the hardware implementation gets cheaper and free on performance :p

applause.gif
 

onQ123

Member
BTW, a bit of GPU history.

Up to NV35 (FX 5900), nVidia GPUs didn't have a native FP32 unit, and because of that FP32 was way slower than FP16. ATI did have a better FP32 unit - still slower than FP16, but faster than nVidia's FP32 emulation. ATI also had a middle option called FP24, with performance close to FP16 and a better image-quality trade-off relative to FP32.

nVidia was accused of cheating because their drivers forced games to run in FP16, which was faster than ATI's solution, but the output image was way worse than ATI's FP24 option... the FP16 output was a really big trade-off in terms of image quality.

GPUs after that moved to native FP32 units, and all performance and image quality were measured in FP32.

Mobile GPUs use FP16 because a small screen can't show the difference in image quality, making it a trade-off worth taking.

Big-screen GPUs all use FP32 because of the better quality and optimal performance (the move to native FP32 units meant FP16 was emulated until Polaris, and so performed worse than FP32).

People who think FP16 is a new thing that will give a free performance boost to games are lying to themselves.

FP16 offers lower image quality compared to FP32, and not every part of the code can run in FP16 (most shaders can)... the gain won't be close to 2x performance, and it comes with degradation in image quality.

The industry needs to, and should, move to FP64 when the hardware implementation gets cheaper and free on performance :p


This is why using FP16 for 4K makes sense: a pixel on a 4K TV is going to be 1/4 the size of a pixel on the same-size 1080p TV. Then you add in the advances in hardware & software development, & an 8.4 TF FP16 console will look amazing.


I spoke on this a few months ago


On the same-size TV, 4K pixels will be 1/4 the size of a 1080p pixel, so the 4K pixels don't need to have as much detail in them as a pixel from a 1080p game. It's probably going to be hard to tell the difference between pixels made using FP32 & FP16 at 4K resolutions.

In the paper they say they would rather devs use checkerboard rendering at 4K than have a normal 1440p game, because a 1440p game on a 4K TV doesn't look much different from a 1080p game on a 1080p TV.

Cheap 4K pixels will look better than high-quality 1440p pixels on a 4K TV. It's a middle ground: it's not going to look as good as 4K without the tricks, but it's going to look better than the next-best thing.
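
The pixel arithmetic behind that 1/4 figure:

```python
pixels_1080p = 1920 * 1080         # 2,073,600
pixels_uhd   = 3840 * 2160         # 8,294,400
print(pixels_uhd // pixels_1080p)  # 4 - four UHD pixels fit where one 1080p
                                   # pixel sat, so each covers 1/4 the area
```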
 

Tripolygon

Banned
BTW, a bit of GPU history.

Up to NV35 (FX 5900), nVidia GPUs didn't have a native FP32 unit, and because of that FP32 was way slower than FP16. ATI did have a better FP32 unit - still slower than FP16, but faster than nVidia's FP32 emulation. ATI also had a middle option called FP24, with performance close to FP16 and a better image-quality trade-off relative to FP32.

nVidia was accused of cheating because their drivers forced games to run in FP16, which was faster than ATI's solution, but the output image was way worse than ATI's FP24 option... the FP16 output was a really big trade-off in terms of image quality.
This is a recurring thing between ATI (AMD) and Nvidia. Nvidia also accused ATI of cheating using the same tactic.
 
So what does the 16-bit operation at twice the speed do? I've seen it described as 8.4 TF in certain cases, for certain operations.
 

ethomaz

Banned
This is a recurring thing between ATI (AMD) and Nvidia. Nvidia also accused ATI of cheating using the same tactic.
Yeah, AMD used FP16 in some game benchmarks recently, but it was only for the render buffer I guess... not shaders or other parts.

nVidia reached a 12% performance boost with the trick enabled.

nVidia's PR:

Important note if you are testing the following applications:

* Dawn of War 2
* Empire Total War
* Need for Speed: Shift
* Oblivion
* Serious Sam II
* Far Cry 1

AMD has admitted that performance optimizations in their driver alters image quality in the above applications. The specific change involves demoting FP16 render targets to R11G11B10 render targets which are half the size and less accurate. The image quality change is subtle, but it alters the workload for benchmarking purposes. The correct way to benchmark these applications is to disable Catalyst AI in AMD’s control panel. Please contact your local AMD PR representative if you have any doubts on the above issue.
NVIDIA’s official driver optimization’s policy is to never introduce a performance optimization via .exe detection that alters the application’s image quality, however subtle the difference. This is also the policy of FutureMark regarding legitimate driver optimizations.

NOTE: If you wish to test with Need for Speed: Shift or Dawn of War 2, we have enabled support for FP16 demotion – similar to AMD – in R260 drivers for these games. By default, FP16 demotion is off, but it can be toggled on/off with the AMDDemotionHack_OFF.exe and AMDDemotionHack_ON.exe files which can be found on the Press FTP.
For apples-to-apples comparisons with our hardware versus AMD, we ask that you run the AMDDemotionHack_ON.exe when performing your graphics testing with these games. In our own internal testing, speedups of up to 12% can be seen with our hardware with FP16 demotion enabled.

More info with picture comparison: http://www.geeks3d.com/20100916/fp1...-by-ati-to-boost-benchmark-score-says-nvidia/
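
For context on why that demotion helps, the storage arithmetic works out like this (a sketch; assuming the FP16 target is a four-channel RGBA16F buffer, versus R11G11B10 packing three float channels into 32 bits):

```python
width, height = 1920, 1080
rgba16f   = width * height * 4 * 2  # four 16-bit channels = 8 bytes/pixel
r11g11b10 = width * height * 4      # 11+11+10 bits packed into 4 bytes/pixel
print(rgba16f / 2**20)              # ~15.8 MiB per render target
print(r11g11b10 / 2**20)            # ~7.9 MiB - "half the size", as quoted
```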
 

lord pie

Member
FP16 is an ideal storage format for modern game engines and is widely supported as such - even the Vita supported FP16 render targets, after all - but it is very rarely an adequate computation format, which is why computation support is rare outside the mobile space: the computations involved have to be quite simple and (importantly) of limited value range, otherwise the loss of precision gets too great. Just a few calculations and you can end up with an effective precision of just a handful of bits.

Put it this way: in FP16 there isn't enough precision to calculate (10.0 + 1.0 / 500.0). The result is 10.0.
That's dramatic precision loss from a single MAD instruction, let alone the accumulated precision loss from the hundreds of instructions typical in a modern shader. This is why it typically isn't a good computation format.
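
That example is easy to check with NumPy's IEEE half-precision type:

```python
import numpy as np

a = np.float16(10.0)
b = np.float16(1.0) / np.float16(500.0)  # ~0.002, fine on its own
print(b)      # 0.002
print(a + b)  # 10.0 - the small addend vanishes entirely
```

Near 10.0, adjacent FP16 values are 2**-7 = 0.0078125 apart, so any addend under half that spacing rounds away to nothing.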

Furthermore, modern systems (even in the mobile space) are generally not computation limited. They are bandwidth and latency limited - hence why FP16 is great as a storage format, and also why the primary benefit of FP16 at the shader level is reduced register pressure, *not* the reduced instruction count.

Saying that supporting FP16 makes the machine twice as fast is *incredibly* misleading and shows staggering ignorance of modern game development. It's a useful tool to squeeze out, say, 15% or so where memory/register bottlenecks and precision constraints allow.
 

Patataboy

Neo Member
I'm always baffled when non-developers start complaining about hardware limitations (or what they think the limitations are).
Let's face it, there is no point in complaining about something that won't bother you in the end, that you won't use, or that you have zero knowledge of to begin with.

I'm a gamer, not a dev; I'm more concerned about the UI, the OS, and the services than the hardware itself.
When devs tell us that something is missing or is just great, why do I see so many players arguing?
Especially when those devs take the time to try to explain what they are doing, how they work around critical situations, and teach us the hardware architecture... it is freaking interesting and educational, but still the unelaborated complaints keep coming.

I hoped so many times that Sony would do the Yaroze experiment one more time, so more people (those who have the capacity and time to code on it) would really know what it is like to fiddle with this hardware.
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
The most interesting information to me was Sony doubling down on there being a PS5. That shows just how optional this unit is, and they can build on and use the techniques learned with this hardware in that generational shift.

But I think some people have it confused: just because Sony sees generations as something to continue doesn't mean backwards compatibility is unlikely. It just means they aren't mandating MS-style forward compatibility, which I personally think hurts game development.

I don't know where the viewpoint came from that because Sony still sees a huge upgrade, complete with a new generational standard, as a new gen, that automatically means leaving behind the previous generation. The entire point of moving to x86, I thought, was to make this kind of thing much easier, and even in three years, GCN, RAM and such are all going to be similar to how they operate today.
 

onQ123

Member
http://www.eurogamer.net/articles/d...tation-4-pro-how-sony-made-a-4k-games-machine

Geometry rendering is a simpler form of ultra HD rendering that allows developers to create a 'pseudo-4K' image, in very basic terms. 1080p render targets are generated with depth values equivalent to a full 4K buffer, plus each pixel also has full ID buffer data - the end result is that via a post-process, a 1080p setup of these 'exotic' pixels can be extrapolated into a 4K image with support for alpha elements such as foliage and storm fences (albeit at an extra cost). On pixel-counting, it would resolve as native 4K image, with 'missing' data extrapolated out using colour propagation from data taken from the ID buffer. However, there is a profound limitation.



That sounds familiar.


When the pixel counters go to count the pixels, it will be 8,294,400 pixels & not 2,073,600 pixels scaled across 8,294,400 pixels.
 
Say a dev made a game for the PS4 & used FP32 for a lot of tasks that they could have used FP16 for; that same dev could make that game for the PS4 Pro, use FP16, & get a little over 4x the performance.

This is where we will see native 4K PS4 Pro games vs. 1080p PS4 games.

http://www.neogaf.com/forum/showpost.php?p=215207877&postcount=3366
Yeah, I was interested in this ever since you first mentioned it. BTW, the PS4 is not capable of double-rate FP16, right? This is exclusively a Pro feature until Scorpio comes around?
 