
[Digital Foundry] New PS5 Pro GPU details emerge - including a 2.35GHz max boost clock

It's not really meaningless; it keeps our expectations in check.

If the CPU is Zen 2, who's to say the GPU isn't still RDNA 2? Only time will tell, I guess.

It becomes a debate over semantics. You guys love quoting Cerny, so what were his exact words? "We have a custom GPU based on the RDNA 2 architecture."

No one can even agree on what RDNA 2 is... is the Series X RDNA 2? Posters have argued it isn't because "true RDNA 2 has Infinity Cache". We can play this game all day.
 

Loxus

Member
It becomes a debate over semantics. You guys love quoting Cerny, so what were his exact words? "We have a custom GPU based on the RDNA 2 architecture."

No one can even agree on what RDNA 2 is... is the Series X RDNA 2? Posters have argued it isn't because "true RDNA 2 has Infinity Cache". We can play this game all day.
Yes, it's custom, but that custom hardware still has a version number, which is found in AMD's drivers and compiler.

It ain't that hard to understand. You can literally go and research it.

Locuza already did all the hard work.
[image: lswLPdr.jpeg]


You can even look at the die shot and see the PS5 chip was completed before RDNA2 was finalized. Even Mark Cerny said AMD is continuously revising their tech for RDNA2, but they (Sony) have their own roadmap to follow.
 

Mr.Phoenix

Member
Not really.

There are no standardized benchmark tests for consoles, so all we've seen is different levels of optimization from different developers for random games on each platform.
Which is a more reflective and accurate "real world" performance metric of the two platforms. I mean, we can look at benchmark scores all we want, but at the end of the day, how the game actually runs on the hardware is all that matters.

Especially when you consider that intangibles like ease of development, tool maturity, and the incentive to optimize are just as important as everything else, and none of that ever gets factored into a benchmark.

All that aside though, I tend not to focus on the PS5 (and in turn the PS5 Pro) as being RDNA 1, 2, 3, or X, because at the end of the day they are rightfully described, by both Sony and AMD, as semi-custom RDNA. And that could literally mean anything at all.

We are the ones trying to force them into classifications that even the people who make them never used, and doing so will always cause confusion.

The PS5 Pro could have the shaders from RDNA2, the front end of RDNA3, the cache structure of RDNA3, the AI units from RDNA4, the RT units from RDNA4, the dual-issue implementation used in RDNA4, cache scrubbers not found in any RDNA GPU, and some stuff not found in any RDNA GPU, while missing some of the I/O stuff found in every RDNA GPU from RDNA 2 and up, like an Infinity Cache... etc. That is literally what makes it a semi-custom GPU. So why are we here trying to call it RDNA2/3/4... etc.?

Yes, it's custom, but that custom hardware still has a version number, which is found in AMD's drivers and compiler.

It ain't that hard to understand. You can literally go and research it.

Locuza already did all the hard work.
[image: lswLPdr.jpeg]


You can even look at the die shot and see the PS5 chip was completed before RDNA2 was finalized. Even Mark Cerny said AMD is continuously revising their tech for RDNA2, but they (Sony) have their own roadmap to follow.
And the version number, when talking custom, means nothing more than: X GPU was finalized before Y GPU. Nothing about it says X GPU is using some tech from Z GPU that was finalized a year later than X GPU.
 
Which is a more reflective and accurate "real world" performance metric of the two platforms. I mean, we can look at benchmark scores all we want, but at the end of the day, how the game actually runs on the hardware is all that matters.

Especially when you consider that intangibles like ease of development, tool maturity, and the incentive to optimize are just as important as everything else, and none of that ever gets factored into a benchmark.

All that aside though, I tend not to focus on the PS5 (and in turn the PS5 Pro) as being RDNA 1, 2, 3, or X, because at the end of the day they are rightfully described, by both Sony and AMD, as semi-custom RDNA. And that could literally mean anything at all.

We are the ones trying to force them into classifications that even the people who make them never used, and doing so will always cause confusion.

The PS5 Pro could have the shaders from RDNA2, the front end of RDNA3, the cache structure of RDNA3, the AI units from RDNA4, the RT units from RDNA4, the dual-issue implementation used in RDNA4, cache scrubbers not found in any RDNA GPU, and some stuff not found in any RDNA GPU, while missing some of the I/O stuff found in every RDNA GPU from RDNA 2 and up, like an Infinity Cache... etc. That is literally what makes it a semi-custom GPU. So why are we here trying to call it RDNA2/3/4... etc.?


And the version number, when talking custom, means nothing more than: X GPU was finalized before Y GPU. Nothing about it says X GPU is using some tech from Z GPU that was finalized a year later than X GPU.

What really matters in this case is that the RT implementation seems to be RDNA4's, which should be a HUGE upgrade compared to the current consoles.

The rest of the GPU specs are not as significant.
 

Bojji

Member
I think the whole RDNA 1 and 2 debate is pretty much meaningless at this point. I mean... what are the implications, exactly?

We've already seen how PS5 and Series X perform respectively.

From a raster (and probably RT as well) perspective, we know that both consoles are very close, almost the same.

Things that Xbox fanboys said are partially true: features that the Xbox has (and the PS5 is missing) are not utilized because the PS5 is the dominant platform this gen. This stuff has been around since 2018 (Turing) and we have almost no games that use SFS, VRS (a few games, bad results) or mesh shaders (2 games). Even Microsoft doesn't give a fuck; Xbox exclusives don't have them either.

With the Pro in the picture, which has all of this, maybe some of this stuff will actually be used now.
 

twilo99

Member
Which is a more reflective and accurate "real world" performance metric of the two platforms. I mean, we can look at benchmark scores all we want, but at the end of the day, how the game actually runs on the hardware is all that matters.

Especially when you consider that intangibles like ease of development, tool maturity, and the incentive to optimize are just as important as everything else, and none of that ever gets factored into a benchmark.

The way a game "actually runs" is not entirely dependent on the hardware, but on a plethora of different things related to software. If you take the dev tools and the hardware and consider them a single package, then I guess benchmarking games makes sense, but even then most 3rd-party games are essentially PS5 games that are ported to DirectX.

I don't disagree with you, games are the most important thing, but they are certainly not an accurate representation of what the hardware is capable of, because of the human factor in developing those games.
 

Loxus

Member
And the version number, when talking custom, means nothing more than: X GPU was finalized before Y GPU. Nothing about it says X GPU is using some tech from Z GPU that was finalized a year later than X GPU.
Why are you guys acting like Sony isn't using AMD hardware?

What custom work did Sony do with RT?
Mark Cerny literally said they're using the same RT as AMD.

AMD's drivers say the PS5 RT version is 1.0 and XBSX/RDNA2 is 1.1. What is so hard to understand about that?

If you would just do some research, man, it ain't that hard.
 

ChiefDada

Gold Member
Why are you guys acting like Sony isn't using AMD hardware?

Why is this farfetched? We already know the PSSR architecture is custom, right?


What custom work did Sony do with RT?
Mark Cerny literally said they're using the same RT as AMD.

AMD's drivers say the PS5 RT version is 1.0 and XBSX/RDNA2 is 1.1. What is so hard to understand about that?

Ok, but what do the numbers mean? What's the logic, considering there is no AMD RT prior to RDNA 2? For all we know, RT 1.0 is "better" than RT 1.1, and the numbering only indicates the timing sequence of when the chip designs were completed.

If you would just do some research, man, it ain't that hard.

That's why we have you to break it down for us, brother❤️
 

ChiefDada

Gold Member
Are you sure it was AMD that said that, or Digital Foundry?
And who do you think designs the hardware that Sony uses?
It's AMD for the main SoC. And Marvell for the SSD controller.

It was from the MLiD docs. It's Sony saying it: "Fully custom design". On a necessary tangent, you can also see here that the 33TF performance is specifically related to the ML capabilities, and not the GPU raster.

[image: Fz6V5z3.jpeg]
 

SlimySnake

Flashless at the Golden Globes
I don't think so. Sony specifically says:
Dude. AMD is making the hardware for Sony. The tensor cores are likely lifted straight out of RDNA4, just like the RT cores.

Even the Cerny I/O was made by AMD, with AMD's engineers. Lisa Su has talked about it several times. Cerny just tells them what he wants and AMD does it for him.
 

Loxus

Member
Why is this farfetched? We already know the PSSR architecture is custom, right?




Ok, but what do the numbers mean? What's the logic, considering there is no AMD RT prior to RDNA 2? For all we know, RT 1.0 is "better" than RT 1.1, and the numbering only indicates the timing sequence of when the chip designs were completed.



That's why we have you to break it down for us, brother❤️
You could research Cyan Skillfish as well.
That's another PS5 AMD codename.
 

ChiefDada

Gold Member
Dude. AMD is making the hardware for Sony. The tensor cores are likely lifted straight out of RDNA4, just like the RT cores.

Even the Cerny I/O was made by AMD, with AMD's engineers. Lisa Su has talked about it several times. Cerny just tells them what he wants and AMD does it for him.

Ok, I guess completely ignoring the proof I just laid out in front of you, WHICH COMES DIRECTLY FROM SONY, is one way to go...

Edit: I'm not even sure what you're arguing. Sony architects it and AMD builds it. It is Sony's design, not AMD's. Same situation with the custom I/O and cache scrubbers, which are not in PC GPUs.
 

winjer

Gold Member
It was from the MLiD docs. It's Sony saying it: "Fully custom design". On a necessary tangent, you can also see here that the 33TF performance is specifically related to the ML capabilities, and not the GPU raster.

[image: Fz6V5z3.jpeg]

And it's designed by AMD.
Again, either WMMA or a tensor core.
 

SlimySnake

Flashless at the Golden Globes
Ok, I guess completely ignoring the proof I just laid out in front of you, WHICH COMES DIRECTLY FROM SONY, is one way to go...

Edit: I'm not even sure what you're arguing. Sony architects it and AMD builds it. It is Sony's design, not AMD's. Same situation with the custom I/O and cache scrubbers, which are not in PC GPUs.
They are utilizing AMD's tensor cores. If they add or remove stuff, it becomes custom. But the tensor cores are AMD's.
 

ChiefDada

Gold Member
And it's designed by AMD.
Again, either WMMA or a tensor core.

Huh? What are we even debating anymore? Multiple partners can be hired to assist development, but at the end of the day it is a custom architecture to be used only in Sony consoles. Sony is the client here. I've never heard anyone vehemently claim it was Toshiba who designed the Cell processor in the PS3 and not Sony.
 

winjer

Gold Member
Huh? What are we even debating anymore? Multiple partners can be hired to assist development, but at the end of the day it is a custom architecture to be used only in Sony consoles. Sony is the client here. I've never heard anyone vehemently claim it was Toshiba who designed the Cell processor in the PS3 and not Sony.

What partners? It's only AMD designing the SoC.
 

SlimySnake

Flashless at the Golden Globes
There is zero evidence to support this as of now.



The wording used is "Fully Custom Architecture", not "Semi-Custom Architecture".
I guess I need a refresher. What exactly is your position, and why is it so important for Sony to not utilize AMD tech?
 

ChiefDada

Gold Member
Looking over at the MLID docs, it's quite possible we've been underestimating the RT uplift. The docs state RT performance is "2x to 3x (and sometimes 4x)" over standard PS5, while many of us (myself included) incorrectly interpreted that as 2x-3x as fast as base PS5.

2x-3x faster = 3x-4x as fast as base PS5.

Going back to that DXR PT benchmark and using the 6700 XT as a proxy, that would place the Pro between the 4060 Ti and 4070 in pure RT. In the best-case scenario (4x faster), we are above 3090 Ti territory for pure RT. Insane.

[image: MAG30kb.png]
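To make the conversion explicit, here is a minimal sketch of the arithmetic being applied above; it uses nothing from the docs beyond the quoted multipliers:

```python
# "Nx faster" means N times faster *than* the baseline, i.e. (N + 1)x as fast.
def times_as_fast(n_faster: float) -> float:
    """Convert an 'Nx faster' claim into a total 'times as fast' multiple."""
    return n_faster + 1.0

for n in (2, 3, 4):
    print(f"{n}x faster than base PS5 -> {times_as_fast(n):.0f}x as fast")

# Output:
# 2x faster than base PS5 -> 3x as fast
# 3x faster than base PS5 -> 4x as fast
# 4x faster than base PS5 -> 5x as fast
```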


I guess I need a refresher. What exactly is your position

That Sony can embed custom architecture within an AMD SoC.

and why is it so important for Sony to not utilize AMD tech?

Not a clue. I guess Cerny will tell us in the coming months.
 

Bojji

Member
2x-3x faster = 3x-4x as fast as base PS5.

Going back to that DXR PT benchmark and using the 6700 XT as a proxy, that would place the Pro between the 4060 Ti and 4070 in pure RT. In the best-case scenario (4x faster), we are above 3090 Ti territory for pure RT. Insane.

[GIF: Michael Jordan laughing]


Dude, I know you are excited about this console, but this is getting ridiculous.

AMD is making what Sony wants, but it's based on their internal development; they are using tech from their GPU R&D (used later in RDNA2, 3, 4... etc.). Sony isn't making any hardware for the AMD SoC.
 

ChiefDada

Gold Member
[GIF: Michael Jordan laughing]


Dude, I know you are excited about this console, but this is getting ridiculous.

AMD is making what Sony wants, but it's based on their internal development; they are using tech from their GPU R&D (used later in RDNA2, 3, 4... etc.). Sony isn't making any hardware for the AMD SoC.

The gif is fun, but did I say anything that was actually incorrect in that quote?
 
Dude, I know you are excited about this console, but this is getting ridiculous.

AMD is making what Sony wants, but it's based on their internal development; they are using tech from their GPU R&D (used later in RDNA2, 3, 4... etc.). Sony isn't making any hardware for the AMD SoC.

I think we need more nuance here; it's not just a matter of Sony plucking features from AMD's roadmap. Like Cerny said, they have specific goals they want to achieve with the PlayStation hardware which may not be useful in the PC space. For example, when designing the I/O system of the PS5, I doubt any of the AMD engineers cared about things like "check in" and other such data-reading stages, so heavy input and collaboration would have been required from people like Cerny, who were designing the software stack that would take advantage of such hardware.

I feel like I'm stating the obvious here, but I don't think Sony "designs" the hardware per se; they do, however, have a big influence on how it is moulded. This was also true for ray tracing on RDNA 2: apparently AMD wanted a beefier RT solution across RDNA 2 cards, but at the time both Sony and Microsoft were being super tight about what their final silicon should be, and a beefier RT solution didn't make the cut.
 
Dude. AMD is making the hardware for Sony. The tensor cores are likely lifted straight out of RDNA4 just like the RT cores.

Even the cerny IO was made by AMD with AMD's engineers. Lisa Su talked about it several times. Cerny just tells them what he wants and AMD does it for him.
Remember when Sony asked for a specific number of ACEs for the PS4 compared to the XB1? And we know the PS4 Pro, PS5 (and even PS4) have specific features in the GPU (not the I/O stuff that's in the APU) still not found in any other AMD GPUs.

Sony needs 300 TOPS of 8-bit for PSSR (which is A LOT, the same as a 4080); that doesn't mean all RDNA4 GPUs will have the same amount of WMMA silicon, even if I expect they'll actually have the same amount, not in total but per SE.
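As a rough sanity check on that figure, here is a minimal back-of-the-envelope sketch; the 2.35 GHz clock is the max boost from the thread title, and treating the full 300 TOPS as sustained 8-bit throughput at that clock is an assumption:

```python
# Back-of-the-envelope: 8-bit ops per clock implied by ~300 TOPS at 2.35 GHz.
TOPS_8BIT = 300e12       # leaked PSSR requirement (8-bit ops per second)
BOOST_CLOCK_HZ = 2.35e9  # max boost clock from the thread title (assumed sustained)

ops_per_clock = TOPS_8BIT / BOOST_CLOCK_HZ
print(f"~{ops_per_clock:,.0f} int8 ops per clock across the whole GPU")
# ~127,660 int8 ops per clock -- which is why "same as a 4080" is A LOT.
```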
 

Bojji

Member
The gif is fun, but did I say anything that was actually incorrect in that quote?

I think you are interpreting it wrong: 1x = 1x PS5, 2x = 2x PS5, etc. So far everyone has been reading it like that.

The second thing is that the PS5 is not as fast as the 6700 XT; in pure TF it's even slower than the 6700:

[image: I8FWMUU.jpeg]


So we are looking at 11-12 FPS based on the graph you showed.

Now we have the leaked papers.

[image: cJSFn0F.jpeg]


"TO 3x" so it's up to 3x faster. 4x is only reserved to some cases, is PT case like that? We don't know.

2.5x is probably the average.

With this in mind:

[image: 5E6AFnX.jpeg]
 

ChiefDada

Gold Member
I think you are interpreting it wrong: 1x = 1x PS5, 2x = 2x PS5, etc. So far everyone has been reading it like that.

Yes, but it is 2x faster RT, not 2x the performance of PS5 RT. Just like 45% faster GPU rendering doesn't mean 0.45x PS5 performance.

The second thing is that the PS5 is not as fast as the 6700 XT; in pure TF it's even slower than the 6700.

Now we have the leaked papers.

[image: cJSFn0F.jpeg]


"TO 3x" so it's up to 3x faster. 4x is only reserved to some cases, is PT case like that? We don't know.

2.5x is probably the average.

With this in mind:

[image: 5E6AFnX.jpeg]

Lol, here we go again! Is English not your first language?

1. If a given metric is performing X amount better, then you have to add that multiple on top of the base amount. If car A is traveling 1x faster (100% faster) than car B traveling at 60 mph, then car A is traveling at 120 mph.

2. How on earth can you label the 3x metric as "max" and then go on to say "4x in some cases"? Lol! The 4x is the max and the 3x represents the higher end of the average range.

[GIF: Rodney Dangerfield thinking]
 
Yes, but it is 2x faster RT, not 2x the performance of PS5 RT. Just like 45% faster GPU rendering doesn't mean 0.45x PS5 performance.



Lol, here we go again! Is English not your first language?

1. If a given metric is performing X amount better, then you have to add that multiple on top of the base amount. If car A is traveling 1x faster (100% faster) than car B traveling at 60 mph, then car A is traveling at 120 mph.

2. How on earth can you label the 3x metric as "max" and then go on to say "4x in some cases"? Lol! The 4x is the max and the 3x represents the higher end of the average range.

[GIF: Rodney Dangerfield thinking]
No, they mean performance in this context.
 

Bojji

Member
Yes, but it is 2x faster RT, not 2x the performance of PS5 RT. Just like 45% faster GPU rendering doesn't mean 0.45x PS5 performance.



Lol, here we go again! Is English not your first language?

1. If a given metric is performing X amount better, then you have to add that multiple on top of the base amount. If car A is traveling 1x faster (100% faster) than car B traveling at 60 mph, then car A is traveling at 120 mph.

2. How on earth can you label the 3x metric as "max" and then go on to say "4x in some cases"? Lol! The 4x is the max and the 3x represents the higher end of the average range.

[GIF: Rodney Dangerfield thinking]

1. 1x = the same
2x = 100% faster
3x = 200% faster

2. Sony themselves used "TO 3x", so that's the max according to them. 4x is for special cases (whatever they are).
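The disagreement comes down to whether "Nx" already includes the baseline. A minimal sketch of the two readings side by side, using a hypothetical 10 FPS base (an illustration, not a number from the docs):

```python
# Two competing readings of the leaked "up to 3x (4x in some cases)" RT figure.
BASE_FPS = 10.0  # hypothetical PS5 baseline, for illustration only

def reading_as_fast(n: float) -> float:
    """'Nx' = N times the base PS5's RT performance (Bojji's reading)."""
    return BASE_FPS * n

def reading_faster(n: float) -> float:
    """'Nx faster' = (N + 1) times the base performance (ChiefDada's reading)."""
    return BASE_FPS * (n + 1.0)

for n in (2.0, 3.0, 4.0):
    print(f"{n:.0f}x: 'as fast' -> {reading_as_fast(n):.0f} FPS, "
          f"'faster' -> {reading_faster(n):.0f} FPS")

# 2x: 'as fast' -> 20 FPS, 'faster' -> 30 FPS
# 3x: 'as fast' -> 30 FPS, 'faster' -> 40 FPS
# 4x: 'as fast' -> 40 FPS, 'faster' -> 50 FPS
```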
 

winjer

Gold Member
Isn't the design Sony's, but the tech powering the SoC AMD's? Like one of those restaurants where you can make your own dish, but the ingredients are provided by the restaurant.

No. Sony and Microsoft just request a certain feature set.
AMD has a division that designs custom SoCs for such clients.
 

Bojji

Member
Cool, now what does "1x faster" =?



Great, now what does "2x faster" = ?



Awesome, now what does "3x faster" = ?



[GIF: "Oh no", Law & Order]

I don't get what you are trying to do here.

If, for example, the PS5 can get 10 FPS in a game with RT, the PS5 Pro can get 20-30 FPS (2x-3x). You think it will be faster than the 7900 XTX and 3090 Ti, so what do I know...
 

SlimySnake

Flashless at the Golden Globes
AMD is making what Sony wants, but it's based on their internal development; they are using tech from their GPU R&D (used later in RDNA2, 3, 4... etc.). Sony isn't making any hardware for the AMD SoC.
I agree with this. However, regarding the below:
Dude, I know you are excited about this console, but this is getting ridiculous.

I had done some calculations based on the ray-traced figures provided by the Avatar devs for both the 4080 and the XSX. Even at the low-end estimates direct from Sony for the RT trace speeds, we are looking at 4070 levels of RT performance. At 4x, we will come very close to the 4080, but that's unlikely, and the rasterization boost of the 4080 is too high for the PS5 Pro GPU to make up. But the actual RT performance RELATIVE to the base PS5 might indeed be larger than we think, due to the fact that the base PS5 was so weak at doing RT.

Saw this in the latest DF Direct. There is a great ray tracing benchmark here for consoles vs. PC that will help us figure out what the PS5 Pro RT improvements could be.

This is from the Avatar devs. Basically, it's how long the RT takes to calculate on the XSX, XSS and the RTX 4080. Note that this is just for the RT calculations, so don't extrapolate this to overall performance.

Before I go on with the math, here are my results.

TLDR:
  • Best Case PS5 Pro RT performance in Avatar: 274 ms - 2.6x faster
  • Worst Case PS5 Pro RT Performance in Avatar: 373 ms - 1.91x faster
  • RTX 4080 RT performance in Avatar: 258 ms - 2.77x Faster

[image: XjoRw1r.jpeg]


Basically, the RTX 4080 is 5.4x faster when doing the actual RT trace, which makes sense since the 4080 has way better dedicated hardware while RDNA2 is just kinda trash. Everything else, like lighting, is around 2x faster, which is great for us because that falls in line with the standard performance increase for the RTX GPU.

We can actually do A LOT with this info, especially since the XSX and PS5 are virtually identical in this game performance-wise.

For example, we can apply a standard 1.45x improvement to the bottom four rows, since that's what Sony is promising in terms of raw GPU performance. That gets us down from 319 ms to 175 ms.

Now, for the actual RT trace, we know the PS5 Pro is 2-4x faster. Worst-case scenario: that gets us 198 ms + 175 ms = 373 ms. That's 91% better, basically 2x what Sony promised when they said the GPU is 45% faster.

373 ms is still way behind the 4080, which is OK because no one is expecting 4080 performance from this. But we are actually around 4070 Super or 4070 Ti performance if we take the 13 TFLOPS 6700 XT as the PS5/XSX equivalent.

Now, where things get really interesting is if we take Sony's 4x number as the best-case scenario. The trace goes from 396 ms to 99 ms, very close to the 73 ms number of the RTX 4080. Adding the cost of the bottom four passes again, we get 99 + 175 ms = 274 ms. We are VERY close to the 258 ms 4080 number. Again, this is the best-case scenario.

So it's entirely possible that the 45% faster GPU could be 90%-160% faster in RT games.
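A minimal sketch re-running that math with the post's own figures (the 175 ms value is used as stated above; strictly, 319 ms / 1.45 ≈ 220 ms, so it implies a somewhat larger uplift on those passes):

```python
# Re-running the Avatar frame-time math with the figures quoted in the post.
BASE_TRACE_MS = 396.0      # base PS5/XSX: the actual RT trace
BASE_OTHER_MS = 319.0      # base PS5/XSX: the bottom four passes (lighting etc.)
PRO_OTHER_MS = 175.0       # post's figure for those passes after the GPU uplift
RTX_4080_TOTAL_MS = 258.0  # 4080 total from the same chart

base_total = BASE_TRACE_MS + BASE_OTHER_MS  # 715 ms

for label, trace_speedup in (("Worst case (2x trace)", 2.0),
                             ("Best case (4x trace)", 4.0)):
    pro_total = BASE_TRACE_MS / trace_speedup + PRO_OTHER_MS
    print(f"{label}: {pro_total:.0f} ms, {base_total / pro_total:.2f}x as fast as base")

print(f"RTX 4080: {RTX_4080_TOTAL_MS:.0f} ms, "
      f"{base_total / RTX_4080_TOTAL_MS:.2f}x as fast as base")

# Worst case (2x trace): 373 ms, 1.92x as fast as base
# Best case (4x trace): 274 ms, 2.61x as fast as base
# RTX 4080: 258 ms, 2.77x as fast as base
```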
 
Sony actually made custom hardware for the PS5, but obviously NOT for the GPU.

The I/O architecture is basically all Sony.

PSSR will be a proprietary software solution that uses AMD hardware; it makes no sense otherwise.
 

midnightAI

Member
Announcing the PS5 Pro before releasing it will just unduly dry up PS5 sales, which would be really bad since they're trying to clear inventory.
I don't think it will... unless people get wind of a price drop when the Pro is released. However, they could alleviate that by announcing a price drop with immediate effect as they announce the Pro.
 

Bojji

Member
I agree with this. However, regarding the below:


I had done some calculations based on the ray-traced figures provided by the Avatar devs for both the 4080 and the XSX. Even at the low-end estimates direct from Sony for the RT trace speeds, we are looking at 4070 levels of RT performance. At 4x, we will come very close to the 4080, but that's unlikely, and the rasterization boost of the 4080 is too high for the PS5 Pro GPU to make up. But the actual RT performance RELATIVE to the base PS5 might indeed be larger than we think, due to the fact that the base PS5 was so weak at doing RT.

Yeah, developer graphs are one thing and actual performance is another. Avatar is one of those games where the 6700 was faster than the PS5, so no secret magic rendering here:

[image: mtpRWLA.jpeg]


It performs close to the PC code despite not using PS/MS on PC. This game also runs well on RDNA2/3, so it's not the typical Nvidia vs. AMD difference in RT (like Cyberpunk).
 

ChiefDada

Gold Member
I don't get what you are trying to do here.

Help you understand what was written in the leaked docs we're all going off of.

If, for example, the PS5 can get 10 FPS in a game with RT, the PS5 Pro can get 20-30 FPS (2x-3x).

Based on the docs, 30-40 FPS.

You think it will be faster than the 7900 XTX and 3090 Ti, so what do I know...

I'm not hypothesizing anything. These are Sony's claims. Take that up with them.

What I am doing differently from you is comprehending words appropriately and not leaving out certain adverbs directly stated in the leaked docs.
 

Lysandros

Member
It's not that the PS5 can't be called RDNA2; it's just that the PS5 chip was completed before the XBSX's, which resulted in the XBSX getting a later version of the RT and ROPs.
What is different between a PS5 RDNA2 CU (officially called so by Sony and AMD)/intersection engine and an XSX CU/intersection engine besides the version number?
 

Loxus

Member
What is different between a PS5 RDNA2 CU (officially called so by Sony and AMD)/intersection engine and an XSX CU/intersection engine besides the version number?
Both the PS5 and XBSX are early RDNA2 designs from before AMD finalized RDNA2.
You can see the difference by looking at the CUs.

[image: iImgo2M.jpeg]


Regardless of how you guys feel, internally AMD, Sony and Microsoft had codenames for these chips.
PS5:
Navi10 Lite = GFX1000/1001
Navi12 Lite = GFX1013/1014 (Major Revision)

XBSX:
Navi21 Lite = GFX1020

The GFX number can tell you where the chip lies in the development timeframe.
[image: KtQBUuy.jpeg]


Notice that RDNA1 is in the GFX101X range while RDNA2 is in the GFX103X range.

At the end of the day, the API is the determining factor that makes these chips different.
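For anyone who wants to reproduce the ordering argument, here is a minimal sketch that just sorts the post's codename-to-GFX-ID list (the IDs are the post's claims as given above, not independently verified):

```python
# Codename -> GFX ID pairs as listed in the post above.
CHIPS = {
    "GFX1000": "Navi10 Lite (PS5)",
    "GFX1001": "Navi10 Lite (PS5)",
    "GFX1013": "Navi12 Lite (PS5, major revision)",
    "GFX1014": "Navi12 Lite (PS5, major revision)",
    "GFX1020": "Navi21 Lite (XSX)",
}

# Desktop RDNA1 sits in the GFX101X range and RDNA2 in GFX103X, so sorting
# by ID reproduces the timeline claim: PS5 silicon precedes the XSX chip,
# and both precede finalized desktop RDNA2.
for gfx_id in sorted(CHIPS):
    print(gfx_id, "->", CHIPS[gfx_id])
```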
 