Source is Locuza (same as Navi 10 anyways).
Navi 10 only had 64
> This 18% is not taking into account the fluctuations of the PS5's GPU clock speed. The XSX, having a static 1825 MHz GPU clock, will have some advantages beyond the raw compute advantage.

Are we still doing this?
> Navi 10 only had 64

Indeed. My mistake.
> Are we still doing this?

Doing what? The PS5 will lower GPU frequency to reduce power in demanding games. The XSX is designed so the GPU can remain at 1825 MHz.
> Doing what? The PS5 will lower GPU frequency to reduce power in demanding games. The XSX is designed so the GPU can remain at 1825 MHz.

It reduces the frequency by a few percent at most. Even a 5% reduction (I highly doubt it would downclock that much) would still keep it at over 2.1 GHz.
> Doing what? The PS5 will lower GPU frequency to reduce power in demanding games. The XSX is designed so the GPU can remain at 1825 MHz.

Yeah, you don't understand FLOPS at all.
Not that it really matters given how scalable engines are these days.
But people should not claim the PS5's compute performance is a constant; it's not. 10.28 TFLOPS is its maximum capability, not its constant. It will fluctuate between about 10 and 10.28 TFLOPS, while the XSX can run at 12.15 TFLOPS constantly.
> It reduces the frequency by a few percent at most. Even a 5% reduction (I highly doubt it would downclock that much) would still keep it at over 2.1 GHz.

Yes indeed.
That is, if it is supplied with a very, very high amount of work, something that would probably affect the Series X negatively as well.
> Yeah, you don't understand FLOPS at all.

Cerny said he expects the PS5 GPU to spend most of its time at or close to 2230 MHz.
The chance of a dev creating code that can hit the peak number of flops on PS5 or Xbox Series X is really low.
> That comment has nothing to do with what you said before about the Xbox Series X always being a 12 TFLOP console.

So you're saying the PS5 is always around 10.28 TFLOPS but the XSX is not always at 12.15 TFLOPS?
I'm saying that the numbers are "theoretical peak performance" numbers, and they don't change depending on whether you hit that peak or not.
Even if you render a Pong line, the theoretical peak performance number will still be the same.
Yes, of course power draw is dependent on load. But to say the PS5's GPU is 10.28 TFLOPS is not accurate. It's the first time a console has had this scalable clock feature.
And no console will run at peak power all the time, but we acknowledge the Xbox One was 1.31 TF, the PS4 was 1.84, and the Series X is 12.15. It's not the same with the PS5 because of this new scalable frequency method.
> That number has always been the "theoretical peak"; to say that it's not accurate to call the PS5 GPU 10.28 TFLOPS is crazy.

Are you seriously going to try and say the PS5's variable clock is the same as the last-gen consoles and the Xbox Series consoles? Because it isn't; the system architect has explained this in detail.
> I didn't say any of the stuff you just said

While the PS5's theoretical peak is 10.28 TFLOPS, it's not the same as the XSX's theoretical peak of 12.15 TF.
> While the PS5's theoretical peak is 10.28 TFLOPS, it's not the same as the XSX's theoretical peak of 12.15 TF.

You should seek help.
Says the guy who kept on suggesting that the XSX TFLOPS were FP16.
> Nope, that's not what I said. I was explaining that if the Xbox One X had fp16, its peak FLOPS number would be the fp16 one, and I let people get upset over their own misunderstanding lol

You should seek help.
1 year later and we're still having the same redundant argument. One's a 10.28 TF machine, the other's a 12 TF machine. Neither is practically ever, let alone constantly, going to run at or near its peak, because those totals are all theoretical. Game engines don't get written that way, because, well, the consoles would basically just shut off or heavily throttle. So in the end none of this matters, especially to the developers. Go play more games; honestly you'll have a way more fun time doing that than you will going in circles with this bunch of crap.
> 12.15tf

Whatever makes you feel more comfortable; my statement still stands.
Cerny said it will run at 2230 MHz most of the time, with a few percent (let's say 2%) of clock reduction necessary in that worst-case game. 2% is about 45 MHz, so it would be running at 2185 MHz under these heavy loads, which is 10.07 TFLOPS, or 2.08 TFLOPS weaker than the XSX and 20.6% weaker than the XSX. So the PS5's GPU has a range of power below the XSX's; it would be more accurate to say the PS5's GPU is 18-20% weaker. It could be more, depending on the boundaries of what Cerny meant by "a few percent".
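For reference, the peak-FLOPS arithmetic behind these numbers is just CUs times ALUs times clock; a quick sketch (the 2% downclock here is the hypothetical worst case discussed above, not a measured figure):

```python
# Peak FP32 throughput = CUs x 64 ALUs/CU x 2 ops/clock (FMA) x clock speed.
def tflops(cus: int, mhz: float) -> float:
    return cus * 64 * 2 * mhz * 1e6 / 1e12

xsx      = tflops(52, 1825)         # ~12.15 TFLOPS
ps5_peak = tflops(36, 2230)         # ~10.28 TFLOPS
ps5_2pct = tflops(36, 2230 * 0.98)  # ~10.07 TFLOPS at a 2% downclock

# XSX's compute advantage widens from ~18.2% to ~20.6% under that downclock:
adv_peak = (xsx / ps5_peak - 1) * 100
adv_2pct = (xsx / ps5_2pct - 1) * 100
print(f"{xsx:.2f} vs {ps5_peak:.2f} TF (+{adv_peak:.1f}%); downclocked {ps5_2pct:.2f} TF (+{adv_2pct:.1f}%)")
```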
> It's not about feeling more comfortable, it's about stating the truth.

Ooook then, 12.15 TF. Again, whatever makes you feel better. Again, my statement still stands. It's almost like both consoles were made with specific visions and goals in mind.
I thought we had this figured out last year. The point of the variable clock is to more often come close to the theoretical max performance. No system is ever going to hit its theoretical max with any sane game code, but with this feature they try to run the GPU and CPU closer to their maximum possible performance, because often you don't need both at maximum frequency; the GPU is still waiting for the CPU to do its thing. In that moment the GPU can clock down and let some power go to the CPU, so that the CPU finishes its calculations faster and hands them to the GPU, and vice versa.

That's the whole point of variable frequency. It's not to lower the frequency because the system is running hot; it's to clock the system up beyond what would otherwise be possible with a chip like this. The variable frequency is the whole reason the PS5 GPU can run this fast. They are basically getting more out of the same design, which otherwise might only be able to run a few hundred MHz lower at a constant frequency.
Both CPU and GPU running at fixed frequencies does not improve performance; it only wastes power and creates heat. Switching the power between CPU and GPU allows Sony to reach these high GPU frequencies to begin with, and allowed them to save some silicon, since they can now operate closer to the maximum possible teraflop figure.
Whether or not this is enough to beat the XSX I don't know, but I'm pretty sure the difference between the two will be so small that absolutely no one will be able to tell during normal gameplay, only in zoomed-in freeze frames. Which is why this discussion is pointless for anyone who is not deeply interested in the exact technical details and only wants to shout out teraflop numbers as if those mean anything.
TFs are only important for coding scenarios that are compute-driven, which isn't as important for games as it would be for, say, mining or raw data processing on servers.
Certain things like pixel fillrate, geometry culling, and triangle rasterization rate are not bound to the CUs explicitly, so the design with higher clocks tends to win out in those cases, which happens to be the PS5. Texture fillrate is trickier, because it is bound to the CUs in RDNA 2, so technically speaking, the more CUs active, the higher the texture/texel fillrate (and, due to how RDNA 2 is designed, the BVH traversal/intersection rates).
However, those things are still determined in some way by clock speeds, and are also bound to the saturation levels being loaded onto the CUs across the GPU. In the Series X's case you'd need 44 CUs regularly saturated with work on their TMUs (4 each) to match the texture/texel fillrate of the PS5 (321.1 Gtexels/s). That means a game would need to regularly keep 8 more CUs active on Series X to match the same texture/texel throughput as the PS5. So that is one of the downsides of having lower GPU clocks.
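As a sanity check on that CU-matching claim, here's the texel-rate arithmetic (assuming 4 TMUs per CU, as stated above, and fully saturated TMUs):

```python
# Texel fillrate = active CUs x 4 TMUs/CU x clock.
def gtexels_per_s(cus: int, mhz: float, tmus_per_cu: int = 4) -> float:
    return cus * tmus_per_cu * mhz * 1e6 / 1e9

ps5_rate   = gtexels_per_s(36, 2230)  # ~321.1 Gtexels/s at full saturation
per_cu_xsx = gtexels_per_s(1, 1825)   # ~7.3 Gtexels/s per Series X CU
cus_needed = ps5_rate / per_cu_xsx    # ~44 CUs busy to match the PS5's rate
print(f"PS5: {ps5_rate:.1f} Gtexels/s -> Series X needs ~{cus_needed:.0f} saturated CUs")
```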
Series X does make up for that in a way with larger GDDR6 memory bandwidth, but you also need to remember this is shared between the GPU (560 GB/s) and the CPU/audio (336 GB/s). So if, for a given percentage of frame time in a second (let's say 15%), the CPU and audio are using the GDDR6, then that's 15% of the second where the Series X memory is running at 336 GB/s, not 560 GB/s. So effective bandwidth in that scenario is actually closer to 560 GB/s * 0.85 = 476 GB/s. However, that's probably a more extreme scenario, since most PC CPUs equivalent to the Series X's use about 50 GB/s of DDR4 bandwidth IIRC, though the audio could potentially use another 20 GB/s on top of that (if it's around what the PS5 offers with the Tempest Engine), so typical GPU bandwidth on Series X is probably around 490 GB/s in most cases.
That's still higher than the PS5's memory bandwidth (448 GB/s), especially once CPU and audio usage is taken into account (which would bring the PS5's GPU bandwidth closer to 383 GB/s, assuming similar CPU and Tempest audio usage), but the Series X has to confine its GPU data to a 10 GB pool. Therefore, if there's GPU-related data sitting in the 6 GB CPU/audio pool that has to be moved to the 10 GB pool, hopefully that data is on the same GDDR6 module, or else there'll be an access-latency penalty for moving the data around within the GDDR6 memory (which would happen anyway just from moving data from the 6 GB pool to the 10 GB pool). I know this isn't hard-coded, so theoretically the GPU could use the 6 GB pool for graphics data as well, but contextually switching the application between the two pools due to their bandwidth differences is probably nowhere near easy for devs to maximize, I'd imagine.
With the PS5 there's no need for that type of data management, because its memory is fully unified and not virtually partitioned into two banks at different bandwidths. That helps save on latency and ensures the effective bandwidth is what the raw numbers state. The only thing I'm curious about is what kind of penalty there is for that more thorough data management on Series X. If it's within a margin of error, say a 2% penalty, that's still a potential further loss of 9.52 GB/s (of 476 GB/s) to 9.8 GB/s (of 490 GB/s), bringing those figures down to between 466.48 GB/s and 480.2 GB/s of likely actual GDDR6 bandwidth for Series X (under typical real-use cases where the CPU and audio are also being used).
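The hypothetical 2% management-overhead figures work out as:

```python
penalty = 0.02  # assumed data-management overhead, not a measured number
results = [bw * (1 - penalty) for bw in (476.0, 490.0)]
print(results)  # ~466.48 and ~480.2 GB/s
```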
While I could throw SSD transfers into that as well (any data going to or from the SSD on either system eats into the available memory bandwidth), that isn't too important considering they both have the same physical footprint of GDDR6 memory. However, the PS5 can decompress data at a higher rate, meaning that if a game needs, for example, 8 GB of new texture data, that could be done quicker on the PS5 (under one second), while on Series X you'd need a bit more than one second (since its peak for texture decompression is 6 GB/s). On the PS5, if it's particularly well-compressed texture data, that 8 GB could be delivered into system RAM in under half a second (since 8 GB is less than half of the 17 GB/s best case). Additionally, there are things related to data decompression and caching of SSD data on the PS5 that the Series X's CPU has to do a bit of work on, so that means less effective CPU bandwidth for game-specific tasks.
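The streaming-time comparison is just rate arithmetic; a sketch using the 6 GB/s (Series X peak) and 17 GB/s (PS5 best-case) figures from the paragraph above:

```python
# Time to deliver N GB of assets at a given decompressed-output rate.
def seconds_to_load(gb: float, decompressed_gbps: float) -> float:
    return gb / decompressed_gbps

xsx_time = seconds_to_load(8, 6)    # ~1.33 s at Series X's ~6 GB/s peak
ps5_best = seconds_to_load(8, 17)   # ~0.47 s at the quoted ~17 GB/s best case
print(f"Series X: {xsx_time:.2f} s, PS5 best case: {ps5_best:.2f} s")
```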
In some ways the Series X therefore benefits from having lower effective rates in areas like geometry culling, because it needs less CPU time to generate the GPU commands for creating the polygons. But that's still counterbalanced by other things on its own side, such as what I just mentioned, and on the PS5's side, such as the cache scrubbers that are not present on the Series systems (which help cut down the number of trips needed to system GDDR6 RAM, avoiding the access-latency penalty that comes with them), etc. So yeah, in terms of pure FLOPS the PS5 loses out if saturation is pushed on both it and the Series X, but that's clearly only one fraction of the whole pie, and not the most important one when it comes to gaming performance, either.
I'm guessing Microsoft is envisioning a big shift in the near future to fully programmable mesh shading (which, TBF, the PS5 only partially has with its Primitive Shaders), and some early benchmarks have shown huge gains in throughput performance there, but that's also hinged on a pretty big design-paradigm shift in the 3D pipeline, and possibly not one that benefits every type of game. Even there, it's not like the PS5 is a generation behind; while most of Sony's customizations were likely based on an update AMD themselves did, there isn't a massive gulf in capability between that and Mesh Shaders, though overall it is one of the areas where the Series X has an advantage (at least in terms of potential use cases).
Hopefully that clears some things up, although I also want to stress that both the PS5 and Series X are very future-proofed as far as this generation is concerned. I just don't think you're going to see any scenario where the latter is clearly blowing out the former in performance over the course of the generation. Just expect more of what we're generally seeing right now, with maybe a slight bias towards Series X depending on mesh-shader adoption rates. But yeah, don't get your hopes up for any PS2/OG Xbox or early PS3/360 levels of performance gulfs this gen. Even PS4/XBO levels of gulfs might be pushing it.
Wonder how many people/fanboys who cry about me being an "Xbox fanboy" or writing "big passages of nothing" are gonna try saying that again after this. Those folks, and they know who they are, can go hold their L's in the corner. I'm not in this for some console-war bullshit or being on "a side", especially considering I like all three, have always shown that, and will continue to do so. Keep that to yourselves and kick rocks.
> Yes indeed.

That is not what he said AFAIK; you are welcome to point me to the exact quote.
Cerny also stresses that power consumption and clock speeds don't have a linear relationship. Dropping frequency by 10 per cent reduces power consumption by around 27 per cent. "In general, a 10 per cent power reduction is just a few per cent reduction in frequency," Cerny emphasises.
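That non-linear relationship is consistent with the usual first-order model where dynamic power scales with frequency times voltage squared, and voltage itself scales roughly with frequency, so power goes roughly with the cube of frequency. This is a textbook approximation, not Sony's actual power model, but it reproduces Cerny's numbers:

```python
# Dynamic power ~ f * V^2, and V ~ f over the relevant range, so P ~ f^3 (roughly).
def power_reduction(freq_drop: float) -> float:
    return 1 - (1 - freq_drop) ** 3

print(f"10% frequency drop -> ~{power_reduction(0.10) * 100:.0f}% power drop")  # ~27%
```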
> It's not about feeling better. I mean, did including the decimal places for the PS5 and not the XSX make you feel better?

Not really; frankly I could have cared less if either had them. The .28 was just easier to remember than the .15, due to how the Xbox was marketed. I also don't necessarily think saying "a more demanding game" is more accurate than saying "a more terribly coded game", considering the latter is what would make more of a difference (see New World). Frankly, whether the frequency drops or changes is basically impossible for us to know, considering we aren't the ones making the games and we don't know what's being demanded by the engine per scene, per frame. That makes anything we say, or any assumptions we make, pure speculation.
> You should seek help.

Found it.
> I can't believe we are back on this. Forget TFs, forget model SoC power consumption, SmartShift, etc.

Nail on the head. We play games, not numbers. It's fun to know aspects of things, but lately, especially this past year, it's seemed much worse than previous years I've browsed. It's incredibly tiring, to say the least.
Series X will win most of the resolution battles; it's been proven. There may be some occurrences where the PS5 comes out on top, but all I can say is that the differences between the two machines are not big enough for me, nor most of the user base, to notice. It's getting tiring seeing the same shit on these boards.
Ok got it. Everything is speculation.
> Ok got it. Everything is speculation.

The honest answer to that is we really wouldn't know. Every game engine uses aspects differently; it's why you already get discrepancies between versions of games. One engine might use more compute, thus taking advantage of more CUs, while another might not. Every engine doesn't work the same, simple as that. It's also entirely likely that if they had done it the old way at that clock, it would cost more to produce and cool. Which, again, is just speculation, because I don't work for Sony or AMD or TSMC. Frankly, putting a simple % on it isn't accurate regardless of the side; whether you're on the "it's less than 18%" or the "more than 18%" line, both are wrong.
The variable frequency method was implemented in order to get more performance within the power-consumption and price budget.
But that's not really the point I was trying to make. If the PS5 had been made the "old way", with a non-variable clock of 2230 MHz, it would perform better than the PS5 as released, but they didn't do that because it was not within the power-consumption/heat/price budget.
I mean, if it means that much to people, call the difference between the GPUs "18%", but it's not accurate.
There's a good quote I tend to always remember with game development and it tends to always be pretty spot on. "No one statistic is a measure of power of a console, there are too many variables, and no one calculation to produce a result. It varies per game, per engine, per firmware, per development team, and per patch, it always has and it always will."
> We know that a higher clock rate provides more performance on the same chip.

That's assuming that every dev will catch everything, though; again, see New World being just an awfully coded game. The reality is, yes, there are performance benefits to both models. However, for any trade-offs, we really don't have access to definitive proof of what creates issues or uses more, unless we're the ones making those games or coding those engines. Point is, truth be told, both are great hardware for what they are, and both have trade-offs, which is a given. I'm kinda burnt out on this topic already, so this will likely be my last post in this thread unless I decide to chime in. At this point I just recommend people stop chasing narratives of one console being X over another. We play games, not numbers. Each does something better than the other, and each does things differently than the other. These aren't bad things. From a hardware standpoint there really isn't some massive gap, but narrowing it down to what it actually is, is frankly something we don't have the ability to gauge. Both, in the end, are great hardware that are going to give you great AND similar experiences with very little difference or trade-offs. Enjoy the games, really; there's a boatload of great ones coming to both, I'm sure.
If the GPU did not have to reduce frequency when that "worst-case game" was played, it would perform better.
The devs would design around it, though; in the real world it may mean slightly higher resolution or slightly more frames when the framerate is not locked.
> 12.15tf

12.147!
> I was just disputing that people keep saying there's an 18% difference in compute between the XSX and PS5 GPU; I don't think it's accurate, because the PS5 has to reduce its clocks by a few percent when running certain demanding games.

I think we're actually misunderstanding what's meant by "locked clocks" here. It's not that the Series X will always be at 1825 MHz for the GPU (or the PS5 at 2230 MHz), because there are going to be normal game scenarios that don't need clocks that high.
Maybe a better way to explain it: if the PS5 were like the PS4 and had a static clock of 2230 MHz, it would perform slightly better than the PS5 that was released, because it would not need to lower its clock in certain situations.
> That's the thing, the XSX doesn't have "20% more pure power" or "power (as a whole)"; it has 18% more compute power (along with texel fillrate) over the PS5, while actually being at around a 20% deficit in other GPU "power" metrics tied to fixed-function units, due to the frequency difference. Thus expecting a consistent 20% difference in resolution or FPS is illogical to begin with.

The XSX GPU has about 18% higher compute, texture, and raytracing power. CU count scales the compute, texture, and raytracing hardware. Raytracing denoise runs on compute, BVH traversal runs on compute, and raytracing intersection testing is hardware-accelerated.
And going by Cerny's statement that "when the triangles are small it's more difficult to feed the CUs with useful work", future next-gen titles with more complex geometry can actually favor the PS5's "deep" design more. There was also a post by Matt Hargett on Twitter alluding that optimized code (with a higher cache hit rate) will benefit the PS5's faster cache subsystem more. Furthermore, I really don't think the PS5's Geometry Engine and I/O complex are even remotely close to being maxed out.
> The XSX GPU has about 18% higher compute, texture, and raytracing power. CU count scales the compute, texture, and raytracing hardware.

I would not compare compute and raytracing capabilities by the number of CUs, because the CUs and the TMU+ray-accelerator blocks are designed differently. Thus performance and efficiency may also be different.
XSX GPU has 5 MB of L2 cache at 1825 MHz.
PS5 GPU has 4 MB of L2 cache at up to 2230 MHz.
The main purpose of the next-generation geometry pipeline is to scale with CU count!
RTX Ampere has a large increase in compute power for e.g. the next-generation geometry pipeline, PC DirectStorage decompression, and raytracing denoise. RTX has hardware-accelerated BVH traversal.
> It's not that hard, people. The Xbox Series X GPU is faster. It's that simple.

Faster at what?
I remember seeing someone claim that register occupancy is usually 60-80%. Though that doesn't tell the whole story either.
Regardless, the only way to get close to 100% of peak FLOPS is with a power virus without a framerate limiter, like FurMark. But that wouldn't pass Sony's or Microsoft's compliance tests for publication anyway.
In compute-limited scenarios the Series X should be up to ~20% faster, but in fillrate-limited scenarios the PS5 is up to ~20% faster. Which happens more often probably depends on the game, engine, scenario, etc.
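Those two "up to ~20%" figures fall straight out of the CU counts and clocks (this assumes both GPUs have the same number of ROPs, which matches the published specs as far as I know, so pixel-fillrate scales with clock alone):

```python
# Compute throughput scales with CUs x clock; ROP-/fixed-function-bound
# throughput scales with clock alone when ROP counts match.
compute_ratio = (52 * 1825) / (36 * 2230)  # ~1.18 -> XSX ~18% ahead on compute
pixel_ratio   = 2230 / 1825                # ~1.22 -> PS5 ~22% ahead when clock-bound
print(f"compute: +{(compute_ratio - 1) * 100:.0f}% XSX, fillrate: +{(pixel_ratio - 1) * 100:.0f}% PS5")
```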
> Games made in the current time, for the current consoles. I have no interest in looking at how a game functions that was built for the PS4, because why even bother buying a PS5 at that point? It's about today's games. And in today's games CUs are utilized without effort, simply because they are running at higher resolutions, which will already consume those CUs. 36 or 52 CUs aren't that many at the end of the day, especially for 4K; when you look at PC hardware that sits in a class above, it's kind of lowish.

ALUs will always be the last bottleneck; even at 4K you can't fill all those ALUs with tasks in a given frame. That's why async compute was created, to better utilize the CUs; they will always be bandwidth- and power-starved. So CU count matters up to the point at which you can utilize/parallelize them effectively, and you have to be a pretty amazing programming ninja to utilize them at high efficiency on average.
People here pretend and try to find evidence to support their narrative that CU count only matters in certain scenarios, while in reality every single game made today, especially at 4K, will use those CUs without effort. Now, will you notice the difference? That all depends on what parity the developer is aiming for. Which brings me back to my 30 fps remark.