
PS5 Pro Specs Leak Is Real, Releasing Holiday 2024 (Insider Gaming)

Gaiff

SBI’s Resident Gaslighter
Really? You held out so long, what's another 6 months? I traded mine in a month or so ago, with the cash earmarked for the PS5 Pro.
So you'll be without a gaming console for 6 months? I would probably have waited a few months, until around 2 months before its anticipated date perhaps. I was about to get a PS5 but just decided to wait for the Pro. I got a PC though, so it's not much of a problem.
 

Pedro Motta

Member
Motion blur is hot garbage and everyone I know who games on PC turns that shit off along with the other trash post-processing effects like Chromatic Aberration, Film Grain, and Lens Flare.
I game on PC and I leave those things on, cuz I’m not a cranky blind old man hahaha
 

ChiefDada

Gold Member
So you'll be without a gaming console for 6 months? I would probably have waited a few months, until around 2 months before its anticipated date perhaps. I was about to get a PS5 but just decided to wait for the Pro. I got a PC though, so it's not much of a problem.

Yeah man I feel like it's worth the wait to play these games in the absolute best quality a console can offer. I'm going to be in console gaming nirvana in 6 months.

Motion blur is hot garbage and everyone I know who games on PC turns that shit off along with the other trash post-processing effects like Chromatic Aberration, Film Grain, and Lens Flare.

(reaction GIF)
 

Ashamam

Member
Yeah man I feel like it's worth the wait to play these games in the absolute best quality a console can offer. I'm going to be in console gaming nirvana in 6 months.
Yeah I'm basically not buying anything at the moment to build up a backlog of games that will benefit from a Pro. As a bonus they will be cheaper by then as well.
 

S0ULZB0URNE

Member
It's the percentage performance loss moving from 1440p to 4K. So if it's 0% then you lose no performance, while if it's 33.3%, you lose a third of the performance etc.

You originally said that the performance "dies" at higher resolutions with RDNA 2 and that this "isn’t in line with other architectures". My point is that RDNA 2 scales similarly to RDNA 1 and Turing as per the benchmark data I presented. So it's not as if RDNA 2 went backwards in any way. Rather, Ampere improved Nvidia's performance scaling vs Turing, and this put AMD in a worse competitive situation. So I am not trying to deny that AMD's scaling vs. Ampere was worse than RDNA 1's scaling vs. Turing. I am saying this was not a failure of RDNA 2, unless you think every advancement your competitor makes that you don't is a "failure".

And with RDNA 3, based on the benchmark data, we seem to have a situation where the 7900 XTX achieves scaling parity with Ada/Ampere, the 7900 XT is between Ada/Ampere and RDNA 2, and the 7800 XT is in line with RDNA 2. (Check out the below meta review where we see the 6800 XT hold its ground against the 7800 XT)




The comparison is not between the 2080 Ti and the 5700 XT but between RDNA 2 and RDNA 1/Turing. Now it's fair to object that RDNA 1 was only used in the 5700 XT, which is a midrange card, so not designed for 4K. But at 1080p you will be CPU limited on RDNA 2, so that's not ideal either. If we go back to the Radeon VII, which has a lot more bandwidth, we still see it behind RDNA 2 in terms of scaling.


Yes, because Ampere scales better than RDNA 2 and Turing.



Higher resolutions are less CPU dependent.
This Pro should focus on higher resolutions.
 

Bojji

Member


Higher resolutions are less CPU dependent.
This Pro should focus on higher resolutions.


Except it's literally aiming to render games in 1080p (then reconstructed to 4k)...

I don't know what Jay said in this video, but if a game is CPU limited to (for example) 55FPS in 1440p, it will also be limited to that FPS in 1080p/720p etc. In 4K you are more likely to be GPU limited, so if your GPU is not fast enough it might not be able to get to that (55FPS) limit - but with enough GPU horsepower you will still be CPU limited in 4K, 8K and so on...
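To put the same point another way, here's a rough back-of-the-envelope sketch (all the numbers are made up for illustration): your frame rate is basically the minimum of the CPU cap and whatever the GPU can push at a given resolution, so dropping the resolution does nothing once you're already sitting at the CPU cap.

```python
# Rough illustration: frame rate = min(CPU cap, GPU throughput at that resolution).
# All numbers below are made up for the example, not measured.

CPU_CAP_FPS = 55  # CPU-limited ceiling, independent of resolution

# Hypothetical GPU-limited frame rates at each resolution
gpu_fps = {"1080p": 120, "1440p": 80, "2160p": 40}

for res, fps in gpu_fps.items():
    effective = min(CPU_CAP_FPS, fps)
    bottleneck = "CPU" if fps >= CPU_CAP_FPS else "GPU"
    print(f"{res}: {effective} fps ({bottleneck} limited)")

# 1080p: 55 fps (CPU limited)
# 1440p: 55 fps (CPU limited)
# 2160p: 40 fps (GPU limited)
```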
 

Gaiff

SBI’s Resident Gaslighter
Except it's literally aiming to render games in 1080p (then reconstructed to 4k)...

I don't know what Jay said in this video, but if a game is CPU limited to (for example) 55FPS in 1440p, it will also be limited to that FPS in 1080p/720p etc. In 4K you are more likely to be GPU limited, so if your GPU is not fast enough it might not be able to get to that (55FPS) limit - but with enough GPU horsepower you will still be CPU limited in 4K, 8K and so on...
It’s JayzTwoCents. Just ignore it.
 

S0ULZB0URNE

Member
Except it's literally aiming to render games in 1080p (then reconstructed to 4k)...

I don't know what Jay said in this video, but if a game is CPU limited to (for example) 55FPS in 1440p, it will also be limited to that FPS in 1080p/720p etc. In 4K you are more likely to be GPU limited, so if your GPU is not fast enough it might not be able to get to that (55FPS) limit - but with enough GPU horsepower you will still be CPU limited in 4K, 8K and so on...
No it isn't.

A 33.5TF custom GPU isn't going to be GPU limited running games made for a 10.3TF PS5 at higher resolutions.
 

FireFly

Member


Higher resolutions are less CPU dependent.
This Pro should focus on higher resolutions.

I am not sure what that has to do with what I said. However, with PSSR, developers can use the extra horsepower for adding additional graphical effects, while leaving AI upscaling to worry about the resolution.
 

Bojji

Member
No it isn't.

A 33.5TF custom GPU isn't going to be GPU limited running games made for a 10.3TF PS5 at higher resolutions.

It's 45% more powerful according to Sony; that 33.5TF number is relevant only in comparison with RDNA3 GPUs. Games are also going to have higher settings on the Pro, whether standard raster settings (shadow quality, AO quality etc.) or ray tracing settings. Higher resolution is also on the table, or not - maybe they will limit themselves to 1080p and have more power to use elsewhere if 1080p -> 4K upscaling looks good enough.

The Pro will be CPU limited in places where the standard PS5 is CPU limited; this laughable CPU clock bump won't change anything, but so far there are not many games with easily detected CPU problems.
 

S0ULZB0URNE

Member
It's 45% more powerful according to Sony; that 33.5TF number is relevant only in comparison with RDNA3 GPUs. Games are also going to have higher settings on the Pro, whether standard raster settings (shadow quality, AO quality etc.) or ray tracing settings. Higher resolution is also on the table, or not - maybe they will limit themselves to 1080p and have more power to use elsewhere if 1080p -> 4K upscaling looks good enough.

The Pro will be CPU limited in places where the standard PS5 is CPU limited; this laughable CPU clock bump won't change anything, but so far there are not many games with easily detected CPU problems.
Those are some major hoops you want people to jump through in this explanation.

That 33.5TF vs 10.3TF is super relevant when comparing GPUs, and the Pro being a newer architecture makes the difference even more significant.

Quit making up that they'll choose 1080p on the Pro, as this is trolling.
 

Bojji

Member
Those are some major hoops you want people to jump through in this explanation.

That 33.5TF vs 10.3TF is super relevant when comparing GPUs, and the Pro being a newer architecture makes the difference even more significant.

Quit making up that they'll choose 1080p on the Pro, as this is trolling.

When it comes to IPC raster performance there is no difference between RDNA1 -> 2 -> 3 (they get higher performance from clock uplifts), so a newer architecture means nothing here unless RDNA3.5/4 brings something new to the table. The biggest difference will be the addition of hardware support for VRS (probably useless), mesh shaders and SFS, which are missing from full RDNA2 in the PS5.

They changed how they measure TF with RDNA3, so this 33.5TF number means nothing compared to the PS5. You can see that a 16.17TF RDNA2 GPU (6800) = a 35.17TF RDNA3 GPU (7700XT).

(image: RX 6800 vs RX 7700 XT benchmark comparison)


Developers can obviously use whatever resolution they want, but let's take a game that runs at 1080p on PS5 in performance mode. With the power of the PS5 Pro they can:

- increase resolution by ~50% to somewhere around 2300x1300 (and then use PSSR) - rough math below
- use 1080p -> 4K PSSR reconstruction, get a very good 4K-like image and use the power that is left to enhance other things (like framerate or graphical settings)
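Quick sanity check on that first option (napkin math only: assuming you scale the 1080p pixel count by 1.5x and keep 16:9, nothing here is an official figure):

```python
import math

# Napkin math: bump the 1920x1080 pixel count by ~50% while keeping 16:9.
base_w, base_h = 1920, 1080
scale = 1.5  # ~50% more pixels

target_pixels = base_w * base_h * scale
new_h = math.sqrt(target_pixels * 9 / 16)
new_w = new_h * 16 / 9

print(f"{round(new_w)} x {round(new_h)}")  # ~2351 x 1323, i.e. "around 2300x1300"
```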

I have no doubt that there will be different "tiers" for PSSR, so upscaling from 1440p, 1253p, 1080p, maybe 900p (?) depending on how heavy the game is. It even supports dynamic resolution.

(image: PSSR slide)


They focus on that 1080p -> 4K upscale so this should be very popular.
 
The biggest difference will be the addition of hardware support for VRS (probably useless), mesh shaders and SFS, which are missing from full RDNA2 in the PS5.
The PS5 also has the hardware for Mesh Shaders. SFS and VRS are features which are hardly used by developers. UE5 and UE4 use their own texture streaming system (Virtual Textures), as do many other game engines.

The biggest difference will come from PSSR and the higher CU count = 45% more rendering performance.
 

Bojji

Member
The PS5 also has the hardware for Mesh Shaders. SFS and VRS are features which are hardly used by developers. UE5 and UE4 use their own texture streaming system (Virtual Textures), as do many other game engines.

The biggest difference will come from PSSR and the higher CU count = 45% more rendering performance.

Few games use VRS, even fewer use MS, and I don't know about any game with SFS. I said it was the biggest difference when it comes to raster hardware compared to PS5, but I doubt it will change much (just like Xbox is not benefiting much from them). The PS5 doesn't support MS in the "DX12" way; developers can use primitive shaders in a very similar fashion, but the only game so far that uses it performs much better on the Xbox that has MS, so maybe it's not as performant... (or maybe it is but Remedy dropped the ball, who knows).

The biggest difference between the PS5 and Pro is RT capabilities, and many people focus on that, but I think most forget that the VAST majority of PS5 games are raster only. So this 45% performance uplift is pretty much what we will get (at least at first) in most games.
 

ChiefDada

Gold Member
When it comes to IPC raster performance there is no difference between RDNA1 -> 2 -> 3 (they get higher performance from clock uplifts), so a newer architecture means nothing here unless RDNA3.5/4 brings something new to the table.

RDNA 3 failed to make much out of dual issue compute due to subpar compiler optimization, based on my understanding. That shouldn't be nearly as much of an issue in the console arena. Additionally, architectural changes addressing RDNA3 shortcomings have been confirmed by the likes of Kepler, which is why I think we should consider the Pro's 33TF more seriously than RDNA3's.
 

Bojji

Member
RDNA 3 failed to make much out of dual issue compute due to subpar compiler optimization, based on my understanding. That shouldn't be nearly as much of an issue in the console arena. Additionally, architectural changes addressing RDNA3 shortcomings have been confirmed by the likes of Kepler, which is why I think we should consider the Pro's 33TF more seriously than RDNA3's.

Yeah, but this conflicts with that 45% performance uplift from Sony's papers; there is no room here for the 3x more performance that the TF numbers would suggest. AMD wanted to have a higher TF number and be closer to Nvidia in this aspect, so they multiplied everything by 2x; there is not much else to see here.

45% puts it around the 6800 in raster performance.

(image: GPU raster performance benchmark chart)
 

S0ULZB0URNE

Member
When it comes to IPC raster performance there is no difference between RDNA1 -> 2 -> 3 (they get higher performance from clock uplifts), so a newer architecture means nothing here unless RDNA3.5/4 brings something new to the table. The biggest difference will be the addition of hardware support for VRS (probably useless), mesh shaders and SFS, which are missing from full RDNA2 in the PS5.

They changed how they measure TF with RDNA3, so this 33.5TF number means nothing compared to the PS5. You can see that a 16.17TF RDNA2 GPU (6800) = a 35.17TF RDNA3 GPU (7700XT).

(image: RX 6800 vs RX 7700 XT benchmark comparison)


Developers can obviously use whatever resolution they want, but let's take a game that runs at 1080p on PS5 in performance mode. With the power of the PS5 Pro they can:

- increase resolution by ~50% to somewhere around 2300x1300 (and then use PSSR)
- use 1080p -> 4K PSSR reconstruction, get a very good 4K-like image and use the power that is left to enhance other things (like framerate or graphical settings)

I have no doubt that there will be different "tiers" for PSSR, so upscaling from 1440p, 1253p, 1080p, maybe 900p (?) depending on how heavy the game is. It even supports dynamic resolution.

(image: PSSR slide)


They focus on that 1080p -> 4K upscale so this should be very popular.
You were the one that separated RDNA 2 and the Pro's RDNA 3.5/4 in the last post, and now you are trying to say there is no difference 😖

It's a newer, more powerful and efficient CUSTOM architecture, which nullifies all the PC part comparisons y'all try and post.

I see 33.5 TFs FP32 and 67 TFs FP16.



Nothing points to 1080p being the focus, with all the extra overhead granted by the rasterization increases the GPU brings, and it's silly to suggest it.
 

ChiefDada

Gold Member
Yeah, but this conflicts with that 45% performance uplift from Sony's papers; there is no room here for the 3x more performance that the TF numbers would suggest. AMD wanted to have a higher TF number and be closer to Nvidia in this aspect, so they multiplied everything by 2x; there is not much else to see here.

45% puts it around the 6800 in raster performance.

(image: GPU raster performance benchmark chart)

I honestly think that 45% figure represents the unoptimized case where the developer literally does nothing. In other words, the "ultra boost mode". We shall see.
 

Gaiff

SBI’s Resident Gaslighter
RDNA 3 failed to make much out of dual issue compute due to subpar compiler optimization, based on my understanding. That shouldn't be nearly as much of an issue in the console arena. Additionally, architectural changes addressing RDNA3 shortcomings have been confirmed by the likes of Kepler, which is why I think we should consider the Pro's 33TF more seriously than RDNA3's.
Huh? Yes, it will be very much an issue.
 

ChiefDada

Gold Member
Personally I think people should stop thinking about the 33.5TF number

(reaction GIF)

Don't think about it because actual results will end up looking far below typical expectations for a 33TF GPU? Or because end results are so awesome with PSSR that we should no longer be thinking in terms of TF, a la Mark Cerny's messaging?


Your statement really could go either way. You sneaky devil, you 😆
 

Gaiff

SBI’s Resident Gaslighter
Why do you say this? Developers apply deep optimization for consoles all the time, particularly 1st and 2nd party.
Because they'll need to repackage operations and do a whole lot of work to actually take advantage of dual-issue compute, and nobody even knows how this translates to game performance at all. The PS5 is still the primary console platform and it's doubtful anyone will be doing a whole lot to reprogram games just for the Pro. It's not a small amount of work. It's a lot more work and we have yet to see if this returns benefits. There's also the issue of the extra compute throughput possibly not making a difference if memory performance is the bottleneck. It's not exactly that compilers aren't optimized, they're just bad at this task. A person would be much better at spotting dual-issue compute possibilities than compilers, but this would also make development a lot more time-consuming for as-of-yet-unknown benefits. Even in synthetic compute benchmarks, the results don't scale linearly or even close.

It's not so much an issue of consoles making it easier to take advantage of dual-issue compute; they don't. It's an issue of "Will anyone bother to do the work?" and "Is it actually worth it?"
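To illustrate what "spotting dual-issue possibilities" means in spirit (a toy model only, not how AMD's compiler or the actual VOPD instruction pairing works): you can only co-issue two ops that don't depend on each other, so the theoretical 2x only shows up if the shader happens to have lots of independent FP32 work sitting side by side.

```python
# Toy model of dual-issue packing: pair adjacent ops that are independent.
# Purely illustrative; real hardware pairing has many more constraints.

# Each "instruction" is (destination register, set of source registers).
shader = [
    ("r0", {"a", "b"}),   # r0 = a * b
    ("r1", {"c", "d"}),   # r1 = c + d   (independent of r0 -> can pair)
    ("r2", {"r0", "r1"}), # r2 = r0 + r1 (depends on both -> cannot pair)
    ("r3", {"r2", "e"}),  # r3 = r2 * e  (depends on r2 -> cannot pair)
]

cycles = 0
i = 0
while i < len(shader):
    # Pair with the next op only if neither one reads the other's result.
    if i + 1 < len(shader) and shader[i][0] not in shader[i + 1][1] \
            and shader[i + 1][0] not in shader[i][1]:
        i += 2  # dual-issued in one cycle
    else:
        i += 1  # issued alone
    cycles += 1

print(f"{len(shader)} ops in {cycles} cycles")  # 4 ops in 3 cycles, not 2
```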
 
Once people start to see those games using PSSR with 4K-like IQ at 60fps (or 120fps!) they are going to forget about those numbers. And once they start to see what Insomniac is going to do with the new RT architecture, lots of crow is going to be eaten and plenty of people will be surprised once again by Cerny's accomplishment.

Those numbers (like the 45%) were only meant for developers, not PR purposes. The same way Cerny's presentation of the PS5 was, and dumb people with an agenda got stuck on "RT audio". Here people are so keen to believe this honest 45% number but are not keen to believe the up to 4x faster RT.
 
The PS5 doesn't support MS in the "DX12" way; developers can use primitive shaders in a very similar fashion, but the only game so far that uses it performs much better on the Xbox that has MS, so maybe it's not as performant... (or maybe it is but Remedy dropped the ball, who knows).

We don't have full details on how the PS5 software handles Mesh Shaders; however, it should support them fully on a hardware level. Remedy developers said they didn't have to make any specific optimisations for the PS5 in regards to their Mesh Shader implementation, which is interesting. They only mentioned the Meshlet sizes were different across the platforms.

Alan Wake isn't the only game that uses Mesh Shaders, by the way; so does Avatar FOP, and the performance difference on that is negligible between the PS5 and Series X, even though theoretically the Series X is supposed to perform better (Mesh Shaders scale well with compute).
 
I think the system is going to end up being very nice.

Yeah, sure. The CPU upgrade is arguably pointless, but maybe there aren't as many things that are super CPU intensive.

What can AI do to enhance the power of consoles beyond what they'd normally be capable of is what I'm excited to see.

What tasks can it off-load or enhance that takes pressure off of both the GPU and the CPU?

It's theoretically possible.
 
Don't think about it because actual results will end up looking far below typical expectations for a 33TF GPU? Or because end results are so awesome with PSSR that we should no longer be thinking in terms of TF, a la Mark Cerny's messaging?


Your statement really could go either way. You sneaky devil, you 😆

I don't think it's too hard to predict what the performance of the PS5 Pro will be; I'm sure many here have already made extremely accurate guesses.

I wouldn't be too keen on the whole dual issue compute thing. I'm sure in certain workloads it'll pull through, but the Pro will ultimately be limited by its memory bandwidth, which is nowhere near a 50% bump, let alone 2x or 3x.
 
Gran Turismo 7 runs in native 4k at 55~90fps on the PS5 Amateur

Come on Sony, show us the first game with path tracing on consoles: Gran Turismo 7 at 1080p (upscaled to 4K with PSSR) / 60fps.
 

bitbydeath

Member
Once people start to see those games using PSSR with 4K-like IQ at 60fps (or 120fps!) they are going to forget about those numbers. And once they start to see what Insomniac is going to do with the new RT architecture, lots of crow is going to be eaten and plenty of people will be surprised once again by Cerny's accomplishment.

Those numbers (like the 45%) were only meant for developers, not PR purposes. The same way Cerny's presentation of the PS5 was, and dumb people with an agenda got stuck on "RT audio". Here people are so keen to believe this honest 45% number but are not keen to believe the up to 4x faster RT.
This!
We know they’re aiming for 4K/60FPS+, 8K/30FPS. How they achieve it matters not.
 

hinch7

Member
Once people start to see those games using PSSR with 4K-like IQ at 60fps (or 120fps!) they are going to forget about those numbers. And once they start to see what Insomniac is going to do with the new RT architecture, lots of crow is going to be eaten and plenty of people will be surprised once again by Cerny's accomplishment.

Those numbers (like the 45%) were only meant for developers, not PR purposes. The same way Cerny's presentation of the PS5 was, and dumb people with an agenda got stuck on "RT audio". Here people are so keen to believe this honest 45% number but are not keen to believe the up to 4x faster RT.
Quite excited to see this from a console. A lot are going to be mighty impressed seeing the improvements in image quality alone.

Maybe not so impressive for people who have owned a relatively high end GPU from last generation Nvidia (and now midrange), but still a massive upgrade from the stock PS5 experience.
 

PaintTinJr

Member
I don't think it's too hard to predict what the performance of the PS5 Pro will be; I'm sure many here have already made extremely accurate guesses.

I wouldn't be too keen on the whole dual issue compute thing. I'm sure in certain workloads it'll pull through, but the Pro will ultimately be limited by its memory bandwidth, which is nowhere near a 50% bump, let alone 2x or 3x.
Dual issue will get used in all inference AI calculations AFAIK, but I agree - with game development, even for exclusives, being so generic (to get easy porting to PC), dual issue will likely go underutilised by 3rd parties, and even by Sony 1st party studios, going by how poorly Sony themselves have tapped out the OG PS5 hardware so far.

But... in the event that was just a poor leadership issue from Jim Ryan, then I could easily see AI inference even being used to rapidly build approximate BVHs in real time for moving objects, so as to leverage dual issue at bottlenecks in tasks that wouldn't have been able to exploit the dual issue benefit previously, and thereby let the hardware punch far above its weight through smarter software, like we used to get in spades before the lazy generic-engine PS4-cycle games washed away most of the benefits of console hardware, IMO.
 

Kataploom

Gold Member
No it isn't.

A 33.5TF custom GPU isn't going to be GPU limited running games made for a 10.3TF PS5 at higher resolutions.
This is off topic, but reading your comment I kinda feel like when we went from WotLK to Cataclysm in World of Warcraft: everyone suddenly had tons of health above the average builds of the previous expansion, but in this case it's TF lol. Newer GPU gens might be inflating that number lol
 

Bojji

Member
RDNA 3 failed to make much out of dual issue compute due to subpar compiler optimization, based on my understanding. That shouldn't be nearly as much of an issue in the console arena. Additionally, architectural changes addressing RDNA3 shortcomings have been confirmed by the likes of Kepler, which is why I think we should consider the Pro's 33TF more seriously than RDNA3's.

I think the only real shortcoming of RDNA3 compared to the hype before release was overall (raster) performance; it was not in the same place as AMD's slides, and they are usually accurate (like with RDNA2). I heard that they weren't able to achieve target clock speeds? This obviously doesn't translate to the Pro; we know that it will have a clock on the low side.

You were the one that separated RDNA 2 and the Pro's RDNA 3.5/4 in the last post, and now you are trying to say there is no difference 😖

It's a newer, more powerful and efficient CUSTOM architecture, which nullifies all the PC part comparisons y'all try and post.

I see 33.5 TFs FP32 and 67 TFs FP16.



Nothing points to 1080p being the focus, with all the extra overhead granted by the rasterization increases the GPU brings, and it's silly to suggest it.


There is no difference between the first three versions, so the 3.5 version or 4th version is more likely to show no difference in this department either. They are focusing on improving RT performance, and (so far) raster IPC stays more or less the same.





I honestly think that 45% figure represents the unoptimized case where the developer literally does nothing. In other words, the "ultra boost mode". We shall see.

Yeah, I don't doubt that it's just what they will get when running their PS5 games on the Pro; it's the same as putting a bigger GPU in a PC. Consoles aren't as complicated as they were; there is no secret sauce here, just pure raw power.

If they don't use RT they can always try to use the things that were missing from the PS5 to improve performance further - SFS and VRS. Problem is, developers have had those things on Series X from day one and are not using them. Maybe this will change...

We don't have full details on how the PS5 software handles Mesh Shaders; however, it should support them fully on a hardware level. Remedy developers said they didn't have to make any specific optimisations for the PS5 in regards to their Mesh Shader implementation, which is interesting. They only mentioned the Meshlet sizes were different across the platforms.

Alan Wake isn't the only game that uses Mesh Shaders, by the way; so does Avatar FOP, and the performance difference on that is negligible between the PS5 and Series X, even though theoretically the Series X is supposed to perform better (Mesh Shaders scale well with compute).

I heard about Avatar.

The PS5 doesn't support Mesh Shaders as per the DX12U spec, but developers can use Primitive Shaders to achieve similar results.

(image: Primitive Shader vs Mesh Shader comparison slide)



To put it bluntly, Primitive Shader and Mesh Shader have many similarities in functionality, although there are differences in implementation form.


This is off topic, but reading your comment I kinda feel like when we went from WotLK to Cataclysm in World of Warcraft: everyone suddenly had tons of health above the average builds of the previous expansion, but in this case it's TF lol. Newer GPU gens might be inflating that number lol

They are. First Nvidia inflated TF with Ampere, where compared to Turing it was 1 Turing TF = 0.72 Ampere TF.


But at least the system of measurement was the same and the majority of that TF "power" was usable.

But AMD just went crazy and invented some bullshit system; now you have to multiply by 4x instead of the 2x used in every other GPU family. 7700XT:

(image: 7700 XT spec sheet)


"Old" system of measuring TF: 3456 x 2 x 2544 = 17.58TF
"New" system of measuring TF: 3456 x 4 x 2544 = 35.17TF

It's 99% bullshit.
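For anyone who wants to check the arithmetic: the theoretical TF number is just shader count x FP32 ops per shader per clock x clock speed, and the only thing that changed with RDNA3's marketing is counting 4 ops per clock (dual issue included) instead of 2. A quick sketch using the 7700 XT numbers above:

```python
# Theoretical TFLOPS = shaders * FP32 ops per shader per clock * clock (Hz) / 1e12
def tflops(shaders: int, ops_per_clock: int, clock_mhz: float) -> float:
    return shaders * ops_per_clock * clock_mhz * 1e6 / 1e12

# RX 7700 XT: 3456 shaders, 2544 MHz boost clock
print(round(tflops(3456, 2, 2544), 2))  # 17.58 - "old" (RDNA2-style) counting
print(round(tflops(3456, 4, 2544), 2))  # 35.17 - "new" (RDNA3 dual-issue) counting
```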
 

PaintTinJr

Member
I think the only real shortcoming of RDNA3 compared to the hype before release was overall (raster) performance; it was not in the same place as AMD's slides, and they are usually accurate (like with RDNA2). I heard that they weren't able to achieve target clock speeds? This obviously doesn't translate to the Pro; we know that it will have a clock on the low side.



There is no difference between the first three versions, so the 3.5 version or 4th version is more likely to show no difference in this department either. They are focusing on improving RT performance, and (so far) raster IPC stays more or less the same.







Yeah, I don't doubt that it's just what they will get when running their PS5 games on the Pro; it's the same as putting a bigger GPU in a PC. Consoles aren't as complicated as they were; there is no secret sauce here, just pure raw power.

If they don't use RT they can always try to use the things that were missing from the PS5 to improve performance further - SFS and VRS. Problem is, developers have had those things on Series X from day one and are not using them. Maybe this will change...



I heard about Avatar.

The PS5 doesn't support Mesh Shaders as per the DX12U spec, but developers can use Primitive Shaders to achieve similar results.

(image: Primitive Shader vs Mesh Shader comparison slide)








They are. First Nvidia inflated TF with Ampere, where compared to Turing it was 1 Turing TF = 0.72 Ampere TF.


But at least the system of measurement was the same and the majority of that TF "power" was usable.

But AMD just went crazy and invented some bullshit system; now you have to multiply by 4x instead of the 2x used in every other GPU family. 7700XT:

(image: 7700 XT spec sheet)


"Old" system of measuring TF: 3456 x 2 x 2544 = 17.58TF
"New" system of measuring TF: 3456 x 4 x 2544 = 35.17TF

It's 99% bullshit.

In a world of AI acceleration they really didn't "go crazy and invent some bullshit system", as you say. When you are doing massive matrix multiplication, like you do in AI training or AI inference, being able to do twice the calculations per clock does yield that benefit consistently.

Against the backdrop of BVH8 (Pro) structures vs BVH4 (PS5) structures, that should also yield twice as much granular intersection testing per clock per level, leading to a lot more than twice the efficiency in RT, if I'm not mistaken, primarily because of memory accesses.
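A rough way to picture the BVH4 vs BVH8 point (toy numbers, assuming a balanced tree; real BVH layouts and the Pro's actual RT hardware are obviously more involved): a wider node means fewer levels to walk per ray, so fewer dependent memory round trips, even though each node tests more boxes.

```python
import math

# Toy comparison of a balanced BVH with branching factor 4 vs 8.
# Real BVHs are neither balanced nor this simple; this only shows the depth effect.

def bvh_depth(num_leaves: int, branching: int) -> int:
    return math.ceil(math.log(num_leaves, branching))

leaves = 1_000_000  # pretend primitive count
for branching in (4, 8):
    depth = bvh_depth(leaves, branching)
    print(f"BVH{branching}: ~{depth} levels, "
          f"~{depth * branching} box tests along one root-to-leaf path")

# BVH4: ~10 levels, ~40 box tests along one root-to-leaf path
# BVH8: ~7 levels, ~56 box tests along one root-to-leaf path
```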
 