
PS5 Pro Specs Leak is Real, Releasing Holiday 2024 (Insider Gaming)

Mr.Phoenix

Member
Initially I was pretty psyched for the PS5 Pro, but this gen has been lukewarm for me at best.

I am fucking hyped for the new Switch though; I will be getting one of those, and my next main gaming system will likely be whatever the new Steam Deck is when it launches, as I have been tinkering with my son's and it's pretty damn good.
I don't know... I always find posts like these somewhat hard to believe. Or hell even borderline disingenuous.

Ah well, to each their own I guess. Just doesn't make sense to me... and the only way it does is if I am looking at it from the perspective that the poster is just a fan of a specific platform. That way it makes sense, but if that's the case, then why make a post like that here to begin with?
 

Synless

Member
I don't know... I always find posts like these somewhat hard to believe. Or hell even borderline disingenuous.

Ah well, to each their own I guess. Just doesn't make sense to me... and the only way it does is if I am looking at it from the perspective that the poster is just a fan of a specific platform. That way it makes sense, but if that's the case, then why make a post like that here to begin with?
You can think what you will. If I took a picture of the 600+ games and of the numerous consoles I own you would find it to be overwhelmingly Sony centric.

The post is just an opposite viewpoint of the excitement others here have. I have my PS5, I’m not confident the pro version will add anything substantial to what is already offered. I’m just saying of the two upcoming releases, I am more excited for one than the other.
 
You can think what you will. If I took a picture of the 600+ games and of the numerous consoles I own you would find it to be overwhelmingly Sony centric.

The post is just an opposite viewpoint of the excitement others here have. I have my PS5, I’m not confident the pro version will add anything substantial to what is already offered. I’m just saying of the two upcoming releases, I am more excited for one than the other.
How is that OT though? What does that have to do with PS5 Pro specs, considering that it was always meant to be a home console and not portable. If we are just using this thread to shit on various platforms and post our personal buying preferences, here is my take:
I recently got into Monster Hunter thanks to the Steam Deck. Rise runs at a flawless 60 fps on the Deck and I was having fun playing with the bow and gun. Then I switched over to the hammer and finally realized one of the critical flaws of the Steam Deck: the vibration sucks ass and is utterly impotent. Now I'm playing through Rise with the hammer on PS4 and am absolutely shocked at the huge difference haptic feedback makes to the game feel. With the exception of the Xenoblade games, my investment in the Nintendo Switch has been utterly wasteful (hated both Zeldas and wasted $60 each on anemic crapware like Mario Party, Mario Tennis, Arms, etc.). I don't want to waste my money anymore. So I will not be buying the Switch 2 for myself, though I will definitely be scalping the shit out of it on release.
So yeah, curiously excited about the PS5 Pro, don't care about the Switch 2, and will think about the Deck 2 if Valve improves the vibration. Also, for all the talk of PC gamers having a backbone, the lengths they go to to defend Valve's shitty haptics in the Deck and Deck OLED are insane. So I do not see any improvements coming on that front; I might just buy a Windows handheld, probably the rumored Xbox handheld if it supports Steam and has better haptics.
 
Last edited:
I wonder if the previously mentioned 2.18GHz might clue us in as to where the base PS5's Continuous Boost scheme bottoms out when under heavy utilisation. I recall it being mentioned it only drops something like ~2% on the rare occasion it does drop. Perhaps it's the same here but the top end has just been pushed up; and it adheres a little more loosely.

Even having a patched title with lower clocks might complicate things.

I understand it's deterministic, so for example (under heavy utilisation/complex instruction mixes):

PS5: 2.23GHz with rare drops from the peak to 2.18GHz
PS5 Pro: 2.35GHz with more common drops from the peak as far as 2.23GHz, then drops occurring at the same rate as PS5 from there down to 2.18GHz

..?

That's what I think as well

2.18 GHz = Worst-case scenario
2.23 GHz = Standard scenario
2.35 GHz = Best-case scenario
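
Quick back-of-envelope on those clock states (Python; all figures are the rumored/recalled ones from this thread, nothing official):

# Clock states discussed above (GHz) - rumored/recalled, not official.
worst, standard, best = 2.18, 2.23, 2.35

# At a fixed CU count, GPU throughput scales roughly linearly with clock.
dip = (standard - worst) / standard * 100   # base PS5's dip from peak
gain = (best - standard) / standard * 100   # Pro's peak over base peak

print(f"2.23 -> 2.18 GHz dip:  {dip:.1f}%")   # ~2.2%, in line with the '~2%' recollection
print(f"2.23 -> 2.35 GHz gain: {gain:.1f}%")  # ~5.4% peak clock advantage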
 

Mr.Phoenix

Member
You can think what you will. If I took a picture of the 600+ games and of the numerous consoles I own you would find it to be overwhelmingly Sony centric.

The post is just an opposite viewpoint of the excitement others here have. I have my PS5, I’m not confident the pro version will add anything substantial to what is already offered. I’m just saying of the two upcoming releases, I am more excited for one than the other.
And therein lies my issue with comments like those. First, your excitement for a PS5 Pro vs. Switch 2 has nothing to do with a thread like this. Second, of course a whole new console "generation" like the Switch 2 should be more exciting than what is literally just a souped-up PS5. And lastly, I don't know what more substance you expect a PS5 Pro to add to the PS5... anything beyond better-looking, better-performing PS5 games is quite literally an unrealistic expectation. That's literally what the PS5 Pro is supposed to be and do: take your PS5 games and make them look a little better while performing better too.

You know... like having a 4060 GPU that you got with your pre-built PC and upgrading the GPU to a 4080.

But yeah, your preference or whatever you are excited about has nothing to do with this thread. Imagine me seeking out a Switch 2 specs thread and making a post saying, "I feel this is just going to be more of the same from Nintendo and doubt it would give me the kind of games I want, I am more excited for the upcoming PS5 Pro than I am for this new Switch"... see how out of place that looks?

Apples to oranges: AMD now uses dual-issue compute to calculate TFLOPs
As does Nvidia. So technically, that's an apples-to-apples comparison. Technically... but how AMD and Nvidia GPUs actually perform is just vastly different, even when the specs suggest they should be very similar.
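
For reference, a sketch of how dual-issue counting doubles the headline number, assuming the rumored 60 CUs and the standard RDNA CU layout (64 lanes, 2 ops per FMA):

# TFLOPs arithmetic sketch - rumored 60 CUs, standard RDNA CU layout assumed.
CUS, LANES, FMA_OPS = 60, 64, 2  # CUs, lanes per CU, ops per FMA

def tflops(clock_ghz: float, dual_issue: bool) -> float:
    ops_per_cycle = CUS * LANES * FMA_OPS * (2 if dual_issue else 1)
    return ops_per_cycle * clock_ghz / 1000

print(f"{tflops(2.18, dual_issue=False):.1f} TF")  # ~16.7 TF, single-issue counting
print(f"{tflops(2.18, dual_issue=True):.1f} TF")   # ~33.5 TF, the dual-issue headline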
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
Kepler is not always accurate with his info; you can go and check for yourself, the tweets are still there.

With that being said, so far these are the SE counts for RDNA4:
Navi44 - 16WGP (2SE, 32CU, 64ROPs)
Navi48 - 32WGP (4SE, 64CU, 128ROPs)

So the WGP per SE is still the same as RDNA3.
8WGP or 16CU per SE.
PS5 Pro having 4SE is more probable than 2SE.
2SE with that many CUs seems inefficient as well.

With higher end RDNA4 going by Navi44/48, it appears to be still 8WGP or 16CU per SE.
And it seems each SED houses 2 Shader Engines.
[image: RDNA4 SED block diagram]


Navi41
1SED = 2SE/16WGP(32CU)
6SED = 12SE/96WGP(192CU)
1AID = 2×64bit or (4×32bit GDDR PHY)
2AID = 4×64bit GDDR PHY = 256bit

Navi40
1SED = 2SE/16WGP(32CU)
9SED = 18SE/144WGP(288CU)
1AID = 2×64bit or (4×32bit GDDR PHY)
3AID = 6×64bit GDDR PHY = 384bit

The concept of 1SED having 2SE is the same as Zen4C CCD having 2CCX.
[image: Zen4c CCD diagram]

Here we can see the size of 2SE.
[image: die shot showing the size of 2SE]

Digital Foundry has new and VERY INTERESTING info about PS5 Pro GPU specs via DF Direct early access. I will not post their video or slide as I acknowledge they have to make a living, but confirmed specs below:

1. 30 WGPs = 60 active CUs
2. Configuration: 2 SEs / 4 SAs (8-7 8-7)
3. 2.35 GHz max boost clock
4. GL2 cache = 4 MB (same as PS5)
5. GL1 cache = 256 KB (PS5 = 128 KB)
6. GL0V cache = 32 KB (PS5 = 16 KB) "Sony specifically says this increase is to allow for better RT performance"

I'm still watching and will update you guys asap.
Compared to XSX we have 8 more active CUs (running at a considerably higher frequency).

XSX has more L2 cache (which is where you spill to when you want to write data from the compute units out to memory and share it with, say, CUs from another Shader Engine) at 5 MB, so it is not great to see 4 MB there. But then again, PS5 Pro has more game-dedicated RAM running at 576 GB/s vs XSX's 560 GB/s for one of its memory pools (the other one running at 336 GB/s). Also, PS5 Pro should have a somewhat more efficient memory controller (not sure exactly what they did, just going along with the spec leaks) to make better use of memory bandwidth, which is good, as GPUs love bandwidth ;).
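For anyone wondering where those figures come from, bandwidth is just pin rate times bus width. A sketch (the Pro's 18 Gbps on a 256-bit bus is the rumored configuration; XSX's 320-bit fast pool is public):

# GB/s = pin rate (Gbps per pin) x bus width (bits) / 8 bits-per-byte
def bandwidth_gbs(pin_rate_gbps: float, bus_bits: int) -> float:
    return pin_rate_gbps * bus_bits / 8

print(bandwidth_gbs(14, 256))  # 448.0 - base PS5
print(bandwidth_gbs(18, 256))  # 576.0 - PS5 Pro (rumored 18 Gbps GDDR6)
print(bandwidth_gbs(14, 320))  # 560.0 - XSX's fast 10 GB pool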

Doubling the per-shader-engine cache (L1) and the CUs' L0 cache (which is also what textures get loaded into, good for the texture/RT units) should help offset some of the additional burden the extra CUs will place (especially the L0 one; devs will be incentivised to avoid writing out to L1 as much as possible, as that just writes out to L2, but then again you do have more bandwidth). More L2 would be nice, but they do have a bandwidth boost over the XSX, so maybe that balances things out well enough.
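
A back-of-envelope on how thinly that L2 gets spread, using the leaked/rumored cache sizes and CU counts (not official):

# KB of L2 per CU - leaked/rumored cache sizes and CU counts, not official.
configs = {"PS5": (36, 4), "PS5 Pro": (60, 4), "XSX": (52, 5)}

for name, (cus, l2_mb) in configs.items():
    print(f"{name}: {l2_mb * 1024 / cus:.0f} KB of L2 per CU")
# PS5: 114 KB, Pro: 68 KB, XSX: 98 KB - the Pro has the least L2 per CU,
# which is presumably why the doubled L0/L1 and the bandwidth bump matter.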
 

Bojji

Member
It was always just a question of clock speeds, which I'll remind folks aren't 100% confirmed until Sony releases the official specs themselves. I've said many times that the clocks weren't going to be lower than the base system's. The absolute minimum was going to be 60 CUs with dual issue (RDNA3+) and no clock increase (2.23 GHz) = 34.3 TFLOPS. I was hoping we'd get at least a 10% clock bump, which would put it at 2.45 GHz (~38 TFLOPS). I'm still really curious why there isn't a fixed clock increase for either the CPU or GPU, assuming it's using RDNA 4 as rumored :pie_thinking: .
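
The arithmetic behind those figures, for anyone checking along (dual-issue FP32 counting, 60 CUs assumed throughout):

# Dual-issue FP32 TFLOPs at the clock scenarios discussed above (60 CUs assumed).
def tf(clock_ghz: float, cus: int = 60) -> float:
    return cus * 64 * 2 * 2 * clock_ghz / 1000  # lanes x FMA x dual-issue

print(f"{tf(2.23):.1f}")  # ~34.3 TF - no clock bump over the base PS5
print(f"{tf(2.45):.1f}")  # ~37.6 TF - the hoped-for ~10% bump
print(f"{tf(2.35):.1f}")  # ~36.1 TF - the 2.35 GHz max boost DF is reporting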

I'd just remind folks that 34-36 TFLOPs with 60CUs is ~6800XT level. That is in the 80-90% range for a real world FPS increase in games (assuming GPU bound). Nearly the 2x raster target that had been rumored for a while now:


[image: relative performance chart]


Now if someone could explain the +45% "rendering performance" metric....?

I've been ignoring this since I saw it. Not sure what it refers to but in terms of raw performance, I expect the Pro to perform 70-80% better than the regular PS5, which would put it in the ballpark of a 3080.

45% would only put it on the level of a 6800, even a bit slower, so this never made sense to me. 4 years and 2 generations won't lead to that small of a performance increase.


But 6800 level of performance in raster makes perfect sense.

33.5TF is slightly slower than 7700XT and 7700XT = 6800. And 6800 is...

[image: 6800 vs 6700 relative performance chart]


This is compared to 6700, 44% - pretty much what Sony said.

Numbers check out. Expecting magical 6800XT or even 3090 (lol) performance is setting yourself up for disappointment. I bet it will be much faster than a 6800 with RT, but 90% of games are still raster only.

That's the reality (based on rumors).
 
Last edited:

IDWhite

Member
How so? I figured it would be more efficient since the proportional cache increase of 100% for lower level cache is higher than the 67% CU increase. Or are you specifically referring to L2?

No. They doubled the cache... just not on the level that some of us predicted. That is, we expected the L2 cache to be doubled. They doubled the L1 and L0 cache instead. That should tell you where they feel/know their bottlenecks are.

They kept the L2 size the same, which sits between L1 and system memory. When an L1 miss occurs (L0 is more like registers) you need to access L2, so if your code isn't well optimized to hit L1 very often, you are going to suffer a huge bottleneck when lots of CUs are trying to access data in the L2 cache.

Of course Sony knows better than anyone where to invest the silicon space, but that doesn't change the fact that they also need to make sacrifices to fit the whole system on the smallest die possible to control power consumption, heat and price. Cache is usually what gets cut for those purposes because it takes up a lot of space, and that doesn't mean they don't need it.
 
Last edited:

Fafalada

Fafracer forever
They kept the L2 size the same, which sits between L1 and system memory. When an L1 miss occurs (L0 is more like registers) you need to access L2, so if your code isn't well optimized to hit L1 very often, you are going to suffer a huge bottleneck when lots of CUs are trying to access data in the L2 cache.
Let's make something clear - caches are designed for >90% hitrates. Making them larger doesn't dramatically change the hitrates (unless the balance was really broken somewhere) - it just smooths out the 'bumps'. And where and how often such 'bumps' typically happen is discovered through profiling and statistics, which as you say - Sony and other companies have tons of data on to base such decisions from.
E.g. PS360 CPUs had massive (for their time) L2 caches, but that made precious little difference because their memory subsystem design made them fall over on anything that missed L1. Even without going to main memory you could get code running slower than a 300MHz CPU of the previous gen.
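
To put rough numbers on the 'smoothing out the bumps' point, the textbook average-memory-access-time formula (all cycle counts here are illustrative assumptions, not any real GPU's figures):

# AMAT = hit time + miss rate x miss penalty (illustrative cycle counts).
def amat(hit_cycles: float, miss_rate: float, miss_penalty: float) -> float:
    return hit_cycles + miss_rate * miss_penalty

print(amat(4, 0.10, 30))   # 7.0 cycles at a 90% hitrate
print(amat(4, 0.05, 30))   # 5.5 cycles at 95% - halving misses buys ~20%
# But if the miss penalty is huge (a broken memory subsystem, as on PS360),
# no realistic hitrate improvement saves you:
print(amat(4, 0.05, 500))  # 29.0 cycles - still dominated by the penalty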

Of course Sony knows better than anyone where to invest the silicon space, but that doesn't change the fact that they also need to make sacrifices to fit the whole system on the smallest die possible to control power consumption, heat and price. Cache is usually what gets cut for those purposes because it takes up a lot of space, and that doesn't mean they don't need it.
Yea, but that's just stating the obvious. Caches themselves are a trade-off - if high-performance memory were plausible/not cost-prohibitive, hardware makers would gladly skip the caches altogether.
 

bitbydeath

Gold Member
But 6800 level of performance in raster makes perfect sense.

33.5TF is slightly slower than 7700XT and 7700XT = 6800. And 6800 is...

[image: 6800 vs 6700 relative performance chart]


This is compared to 6700, 44% - pretty much what Sony said.

Numbers check out. Expecting magical 6800XT or even 3090 (lol) performance is setting yourself up for disappointment. I bet it will be much faster than a 6800 with RT, but 90% of games are still raster only.

That's the reality (based on rumors).
36TF is the new rumour.
 

Panajev2001a

GAF's Pleasant Genius
Let's make something clear - caches are designed for >90% hitrates. Making them larger doesn't dramatically change the hitrates (unless the balance was really broken somewhere) - it just smooths out the 'bumps'. And where and how often such 'bumps' typically happen is discovered through profiling and statistics, which as you say - Sony and other companies have tons of data on to base such decisions from.
E.g. PS360 CPUs had massive (for their time) L2 caches, but that made precious little difference because their memory subsystem design made them fall over on anything that missed L1.
The more clients the L1 has, each potentially executing different pieces of code and loading/writing data from different memory regions, the more the cache risks falling over (different CUs might work on different things… but sure, that is the point of developers profiling and optimising their code :)).
Larger cache should help a lot if you have more CUs feeding off of it and inside each CU there is now 2x the local scratchpad to do work with without going out to cache.

Considering everything, if Sony did not go from 4 MB of L2 (the point people were making is that for code that writes a lot to memory, L1 is completely bypassed and L2 is the bottleneck, which now has even more producers than on XSX) to 5-6 MB, it is likely they think it does not make much difference relative to the cost, as they already committed to a sizeable bandwidth improvement and L0 cache improvements.

I wonder how the PPE would have worked in those scenarios with less L2 cache; would it have been a lot worse (worse than bad, hehe)?
 

Gaiff

SBI’s Resident Gaslighter
But 6800 level of performance in raster makes perfect sense.

33.5TF is slightly slower than 7700XT and 7700XT = 6800. And 6800 is...

[image: 6800 vs 6700 relative performance chart]


This is compared to 6700, 44% - pretty much what Sony said.

Numbers check out. Expecting magical 6800XT or even 3090 (lol) performance is setting yourself up for disappointment. I bet it will be much faster than a 6800 with RT, but 90% of games are still raster only.

That's the reality (based on rumors).
I wouldn't rely on RDNA3 PC parts to scale RDNA 3.5/4 console parts. I just don't see how increasing the CU count by 66.67%, perhaps increasing the clocks (the max boost, at least, is 5% higher), and increasing the bandwidth by 29% leads to just ~45% overall performance. Of course, there are as-yet-unknown details such as the ROPs, L2 cache, or memory chip speeds, but that just sounds like too meager an improvement for something that comes out 4 years later, after two die shrinks.
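
One crude way to sanity-check that skepticism (pure assumption on my part, no inside info): blend compute and bandwidth scaling with a geometric mean, a common rough heuristic when neither resource is the sole bottleneck:

# Crude uplift estimate - geometric mean of compute and bandwidth scaling.
compute_scale = (60 / 36) * (2.35 / 2.23)  # +66.67% CUs, ~5% higher max boost
bandwidth_scale = 576 / 448                # +~29% memory bandwidth

estimate = (compute_scale * bandwidth_scale) ** 0.5
print(f"~{(estimate - 1) * 100:.0f}% uplift")  # ~50% - already above the 45% figure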

If Heisenberg07's source is correct and the Pro performs on par with a 4070/3080, then this means it has a huge ray tracing advantage even over NVIDIA parts to make up for its deficit in rasterization, which, while not impossible, sounds a bit unlikely. I think it'd make more sense for it to have rasterization and ray tracing on par with Ampere at least, perhaps even Lovelace for RT.
 

Lysandros

Member
No. They doubled the cache... just not on the level that some of us predicted. That is, we expected the L2 cache to be doubled. They doubled the L1 and L0 cache instead. That should tell you where they feel/know their bottlenecks are.
onQ123 mode on:
I would imagine the new double-rate FP32 capable CUs with enhanced RT throughput would be even hungrier for inner GPU bandwidth. 15 CUs per shader array fighting for 128 KB of L1 cache would be a bloodbath compared to PS5's 9 CUs. The efficiency loss in real-world compute throughput could be significant, not to mention additional pressure from "128" ROPs. Then there is the side of pure rasterization throughput, which is tied to the number of prim units and rasterizers, which are in turn tied to the number of shader engines, of course...

One way to remediate this (at least to some degree) could be to increase GPU L1 cache to 256 KB per array like high-end RDNA3 parts, but I don't know how practical that is considering the APU die area and/or price implications.
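
The per-array arithmetic behind the 'bloodbath', using the CU-per-array counts above (the 256 KB figure is what the DF leak claims):

# L1 cache per CU within a shader array - figures from the thread.
for label, l1_kb, cus_per_sa in [("PS5", 128, 9),
                                 ("Pro @ 128 KB", 128, 15),
                                 ("Pro @ 256 KB", 256, 15)]:
    print(f"{label}: {l1_kb / cus_per_sa:.1f} KB of L1 per CU")
# PS5: ~14.2 KB/CU; Pro would drop to ~8.5 KB/CU if L1 stayed at 128 KB,
# and only the doubled 256 KB L1 (~17.1 KB/CU) restores the ratio.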
 
Last edited:

Bojji

Member
36TF is the new rumour.

Every day brings more power to PS5 Pro...

But even with that 7700XT is:

[image: 7700XT relative performance chart]


Nothing changes with 6800/7700XT comparison.

I wouldn't rely on RDNA3 PC parts to scale RDNA 3.5/4 console parts. I just don't see how increasing the CU count by 66.67%, perhaps increasing the clocks (the max boost, at least, is 5% higher), and increasing the bandwidth by 29% leads to just ~45% overall performance. Of course, there are as-yet-unknown details such as the ROPs, L2 cache, or memory chip speeds, but that just sounds like too meager an improvement for something that comes out 4 years later, after two die shrinks.

If Heisenberg07's source is correct and the Pro performs on par with a 4070/3080, then this means it has a huge ray tracing advantage even over NVIDIA parts to make up for its deficit in rasterization, which, while not impossible, sounds a bit unlikely. I think it'd make more sense for it to have rasterization and ray tracing on par with Ampere at least, perhaps even Lovelace for RT.

You can say the same thing about the 6700 and 6800: a 66.67% CU count increase and yet it's only 44% faster.

I can believe that the Pro will come close to a 4070/3080 in some heavy RT scenarios, but I doubt it will be there in pure raster; nothing about the specs suggests that so far.
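
For what it's worth, here's how the TF figures being compared line up (the 7700XT's 54 CUs and ~2.54 GHz boost are public specs; the Pro numbers are the rumored ones):

# Dual-issue FP32 TFLOPs: rumored Pro floor clock vs the 7700XT's public specs.
def tf_dual(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * 2 * clock_ghz / 1000  # lanes x FMA x dual-issue

print(f"{tf_dual(54, 2.544):.2f} TF")  # ~35.17 - 7700XT at its boost clock
print(f"{tf_dual(60, 2.18):.2f} TF")   # ~33.48 - Pro at the 2.18 GHz floor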
 

Gaiff

SBI’s Resident Gaslighter
You can say the same thing about the 6700 and 6800: a 66.67% CU count increase and yet it's only 44% faster.
Sure, but it’s the exact same technology. Obviously, compute won’t scale linearly. The Pro not only has all those advantages but is two generations ahead.
I can believe that the Pro will come close to a 4070/3080 in some heavy RT scenarios, but I doubt it will be there in pure raster; nothing about the specs suggests that so far.
That would mean that AMD has RT significantly faster than even Lovelace pound-for-pound which I highly doubt.
 
Last edited:

Bojji

Member
Sure, but it’s the exact same technology. Obviously, compute won’t scale linearly. The Pro not only has all those advantages but is two generations ahead.

That would mean that AMD has RT significantly faster than even Lovelace pound-for-pound which I highly doubt.

It's probably "same ballpark" scenario, not exact same performance as 4070. Difference between 45% (6800/7700XT) faster and 60% faster (4070) isn't massive but even using RDNA3 TFs - 33.5TF GPU shouldn't be faster than 35.17 7700XT - it defies logic (I'm talking about raster only here of course).
 
So base is max 2.23 and Pro is 2.35.
So let's say the Pro sits most of the time around 2.30; that would be amazing, faster than the base. The CPU will also have a better clock than the base model. With the features we still don't know about, this will be a good upgrade, or an amazing first console for people who buy it as their first PS5. The price will be 499 without the disc drive, the base model will be 399, and both will turn a profit on each unit sold.
 

HeisenbergFX4

Gold Member
I wouldn't rely on RDNA3 PC parts to scale RDNA 3.5/4 console parts. I just don't see how increasing the CU count by 66.67%, perhaps increasing the clocks (the max boost, at least, is 5% higher), and increasing the bandwidth by 29% leads to just ~45% overall performance. Of course, there are as-yet-unknown details such as the ROPs, L2 cache, or memory chip speeds, but that just sounds like too meager an improvement for something that comes out 4 years later, after two die shrinks.

If Heisenberg07's source is correct and the Pro performs on par with a 4070/3080, then this means it has a huge ray tracing advantage even over NVIDIA parts to make up for its deficit in rasterization, which, while not impossible, sounds a bit unlikely. I think it'd make more sense for it to have rasterization and ray tracing on par with Ampere at least, perhaps even Lovelace for RT.
I will add that when I was told 4070 levels of real-world performance, this was long before anyone heard a peep about PSSR. I don't know how that changes the equation and haven't asked.
 

Fafalada

Fafracer forever
Larger cache should help a lot if you have more CUs feeding off of it and inside each CU there is now 2x the local scratchpad to do work with without going out to cache.
Yea but this case is precisely where L1 being larger probably helps the most. Shader workloads are still generally coherent (compared to what we do on CPUs) which would make L2 cache not the primary bottleneck. But yea it's all guesswork on our end of course - without knowing the stats for the software that Sony/AMD have.

Considering everything, if Sony did not go from 4 MB of L2 (the point people were making is that for code that writes a lot to memory, L1 is completely bypassed and L2 is the bottleneck, which now has even more producers than on XSX) to 5-6 MB, it is likely they think it does not make much difference relative to the cost, as they already committed to a sizeable bandwidth improvement and L0 cache improvements.
If you have writes that go straight to memory, you're bandwidth bound anyway? Caches are primarily a latency mitigation; saving on bandwidth is not a common purpose (that's what scratchpads are for).

For in-order CPUs, like the X360 or PS3, cache is probably more important than for modern out-of-order ones.
Depends on the cache hierarchy we're talking about. OOOE can't absorb memory latency either - that's orders of magnitude too slow. But it can mitigate L1 misses where in-order wouldn't, yes.
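
Rough arithmetic on why the out-of-order window covers an L2 hit but not DRAM (figures are illustrative textbook orders of magnitude, not any specific CPU):

# How many cycles of latency an OoO window can plausibly absorb (illustrative).
rob_entries, sustained_ipc = 224, 4
hideable = rob_entries / sustained_ipc  # ~56 cycles before the window fills

for name, latency in [("L2 hit", 30), ("L3 hit", 90), ("DRAM", 300)]:
    verdict = "absorbed" if latency <= hideable else "stalls"
    print(f"{name:6s} ~{latency:3d} cyc -> {verdict}")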
 

Gaiff

SBI’s Resident Gaslighter
It's probably "same ballpark" scenario, not exact same performance as 4070. Difference between 45% (6800/7700XT) faster and 60% faster (4070) isn't massive but even using RDNA3 TFs - 33.5TF GPU shouldn't be faster than 35.17 7700XT - it defies logic (I'm talking about raster only here of course).
We'll see but as I said, 45% sounds insanely low for two generations and a 4-year gap. I know PSSR and better RT are the big selling points but still, you need a strong baseline to work with.

I will add that when I was told 4070 levels of real-world performance, this was long before anyone heard a peep about PSSR. I don't know how that changes the equation and haven't asked.
I have little doubt this will be the case. 3080/4070 performance is pretty much exactly like the PS5 was positioned relative to the 2070S/1080 Ti.

I simply think that using precedents and looking at the market is more revelatory than spec sheets. Perhaps Sony has figured out something about dual-issue? Perhaps there is some major bottleneck in most DX12 games preventing compute from scaling more efficiently? It could be anything. Whatever the case, Sony is satisfied with those specs, so either they're great to begin with or PSSR is doing some major heavy lifting. I just don't see how they'd squeeze only 45% better performance out of those parts.

As I said though, we'll see. I still believe 4070/3080 level in rasterization and a similar level in ray tracing. Perhaps not quite as strong as the 4070 but maybe better than the 3080 in ray tracing.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
Yea but this case is precisely where L1 being larger probably helps the most. Shader workloads are still generally coherent (compared to what we do on CPUs) which would make L2 cache not the primary bottleneck. But yea it's all guesswork on our end of course - without knowing the stats for the software that Sony/AMD have.
I am wondering what the typical temporal coherence is across an entire shader engine (across many CUs).

If you have writes that go straight to memory, you're bandwidth bound anyway? Caches are primarily a latency mitigation; saving on bandwidth is not a common purpose (that's what scratchpads are for).
I wonder if the local data share (LDS) and other on-chip scratchpads are increased per CU too.

RDNA docs describe the L1 as read-only, shared across the shader array to make L2 accesses easier (they all go through it). So a write would not be seen by other CUs until it reaches L2, at which point the other CUs take a miss and fetch the data from L2. I guess you want to minimise data that is rapidly shared across CUs.

L0 is write-through and L1 is read-only (writes invalidate the L1 cache line and the data is pushed to L2; shaders can specify a mode that bypasses L1 for writes, though).
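
A tiny toy model of that write path as described (simplified from the public RDNA ISA docs; real coherence behaviour is far more involved):

# Toy model of an RDNA CU write, per the description above (simplified).
def gpu_write(bypass_l1: bool = False) -> str:
    path = ["L0 (write-through)"]           # L0 forwards the write immediately
    if not bypass_l1:
        path.append("L1 invalidates line")  # L1 is read-only; it just drops the line
    path.append("L2 (where other CUs/SEs can finally see the data)")
    return " -> ".join(path)

print(gpu_write())                # default path
print(gpu_write(bypass_l1=True))  # shader-selectable mode that skips L1 for writes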
 
Last edited:

Banjo64

cumsessed
You can think what you will. If I took a picture of the 600+ games and of the numerous consoles I own you would find it to be overwhelmingly Sony centric.

The post is just an opposite viewpoint of the excitement others here have. I have my PS5, I’m not confident the pro version will add anything substantial to what is already offered. I’m just saying of the two upcoming releases, I am more excited for one than the other.
You’re not alone bud. Been a PS main for years but their current direction is boring and sub-par.

Yes yes, their style of games is selling better than ever and normies are lapping it up. Great for them, great for Sony. Does nothing for me though.
 

Lysandros

Member
I would be careful with DF's analysis of the clock frequency; I still remember they were struggling to wrap their heads around PS5's variable clocks, which I think even annoyed Mark Cerny.
100%, they have a very shaky history on this, especially on the Great Dictator's side but also Richard's. The whole DF team always tends to make the worst/most pessimistic assumptions when it comes to PlayStation hardware.
 
Last edited:
Double the L1 cache makes total sense with the new CU architecture, and it is exactly what I was expecting. The most likely reason XSX underperforms in many games compared to PS5 (or even comparable PC GPUs) is that it is L1 cache starved compared to PS5. XSX supposedly has (MS never disclosed the specs here, unsurprisingly) the same amount of L1 cache as PS5 (and clocked slower), with more CUs/TFs to feed.
 
Last edited:

Lunarorbit

Member
I don't know... I always find posts like these somewhat hard to believe. Or hell even borderline disingenuous.

Ah well, to each their own I guess. Just doesn't make sense to me... and the only way it does is if I am looking at it from the perspective that the poster is just a fan of a specific platform. That way it makes sense, but if that's the case, then why make a post like that here to begin with?
I feel the same way though. I like the PS5, but when I look at the last few games I've played, most are last-gen or not pushing the hardware that hard: Marvel Knights, Dredge, NieR: Automata, RDR2, BioShock...

Returnal is dope, but that was a launch-window game. Kinda like PSVR2: I really wanted it, but after all is said and done, I can wait.
 

Lysandros

Member
Digital Foundry has new and VERY INTERESTING info about PS5 Pro GPU specs via DF Direct early access. I will not post their video or slide as I acknowledge they have to make a living, but confirmed specs below:

1. 30 WGPs = 60 active CUs
2. Configuration: 2 SEs / 4 SAs (8-7 8-7)
3. 2.35 GHz max boost clock
4. GL2 cache = 4 MB (same as PS5)
5. GL1 cache = 256 KB (PS5 = 128 KB)
6. GL0V cache = 32 KB (PS5 = 16 KB) "Sony specifically says this increase is to allow for better RT performance"

I'm still watching and will update you guys asap.
Those two modifications (inherited from RDNA3) yet again show that there is more to RT performance than just the number of intersection engines/CUs. Cache amount/latency/bandwidth are all important factors.
 

winjer

Member
Depends on the cache hierarchy we're talking about. OOOE can't absorb memory latency either - that's orders of magnitude too slow. But it can mitigate L1 misses where in-order wouldn't, yes.

Out-of-order execution works in tandem with branch prediction and prefetching, so it can hide cache and memory latency by fetching data before the instructions that need it execute.
 

German Hops

GAF's Nicest Lunch Thief
Double the L1 cache makes total sense with the new CU architecture, and it is exactly what I was expecting. The most likely reason XSX underperforms in many games compared to PS5 (or even comparable PC GPUs) is that it is L1 cache starved compared to PS5. XSX supposedly has (MS never disclosed the specs here, unsurprisingly) the same amount of L1 cache as PS5 (and clocked slower), with more CUs/TFs to feed.
Yep.

This is basically Cerny's secret sauce. Games running on the Pro will look astonishing.
 

ChiefDada

Gold Member
Can those of you smarter than me (virtually all of you lol) explain how we should compare the chiplet-based 7800XT with the integrated PS5 Pro? Isn't it the case that since the console is integrated, the clocks should provide more performance pound for pound? So a 2.35GHz PS5 Pro should yield better results than 2.35GHz RDNA3, all else being equal?
 
Can those of you smarter than me (virtually all of you lol) explain how we should compare the chiplet-based 7800XT with the integrated PS5 Pro? Isn't it the case that since the console is integrated, the clocks should provide more performance pound for pound? So a 2.35GHz PS5 Pro should yield better results than 2.35GHz RDNA3, all else being equal?
Those cards shouldn't be compared directly. Look at the post above yours. PS5 Pro should (and will) be compared with RDNA4, not RDNA3.
 

Bojji

Member
We'll see but as I said, 45% sounds insanely low for two generations and a 4-year gap. I know PSSR and better RT are the big selling points but still, you need a strong baseline to work with.

That's why people (like me) were surprised that this power jump looks so small - the PS4 Pro on paper was over 100% faster than the PS4. But maybe it's understandable: there are almost no meaningful die shrinks left to bring power consumption down, and this console will probably go up to 250-300W; a stronger one would be over that, and you have to cool that shit down.

We will see the results. I think PSSR is the best thing about this console, and IQ should finally be decent to great in most games.

Can those of you smarter than me (virtually all of you lol) explain how we should compare the chiplet-based 7800XT with the integrated PS5 Pro? Isn't it the case that since the console is integrated, the clocks should provide more performance pound for pound? So a 2.35GHz PS5 Pro should yield better results than 2.35GHz RDNA3, all else being equal?

7800XT is not chiplet based I think - 7900GRE, 7900XT and 7900XTX are.
 
Last edited:

Radical_3d

Member
Now if someone could explain the +45% "rendering performance" metric....?
Not all of it is raster. My theory is that as we push further into the emperor's new clothes that is ray tracing with today's technology, you end up with inefficiencies, hence the 45% increase.
 

Evolved1

make sure the pudding isn't too soggy but that just ruins everything
I just want current games in quality mode but at 60 fps. Is that too much to ask, considering the new rumours?
I think mostly yes, unless their AI upscaling is actually really good.
 
Last edited: