
EDGE: "Power struggle: the real differences between PS4 and Xbox One performance"

Vizzeh

Banned
eSRAM, CPU can't access eSRAM.

Really? Dunno why I haven't seen more emphasis on that in the latency discussion. Obviously GDDR5 has been compared as having similar latency to DDR3, but if the CPU can't access the eSRAM then there's no latency benefit there, since low latency only really matters to the CPU; the GPU is more latency tolerant and more about bandwidth.
 

KKRT00

Member
According to Edge's sources it's a 50% difference at the moment. Raw performance numbers put the PS4 at just under 40% more powerful. Out of curiosity, what is your evidence to support the notion that it won't even be a 30% difference in real-world circumstances, despite what the OP and several other insiders have said?

There is a 500 GFLOPS difference between the two machines, and 500 GFLOPS is 35% of the Xbone's power, so the PS4 is up to 35% more powerful.
Given the deficiencies of multithreaded scaling, 30% would be the most you could get out of it in most cases.

===
It could be slightly bigger than 50% with PS4's superior Compute functionality being fully taken advantage of.

No. For example, if a game used 700 GFLOPS for compute on both machines and the PS4 were 30% more efficient in compute than the Xbone, that's still only ~15% more efficient utilization of the overall power. With all the other deficiencies, it would be at most 42-43% higher performance on the PS4 side. But in a real game they'll just decrease the precision of the calculations or the number of particles by 20-30% and get performance parity, and in most cases you won't even notice the difference, because you won't be counting thousands of particles individually :)
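A rough sketch of the blended-advantage math being described here. All of the numbers are illustrative assumptions (a ~1310 GFLOPS Xbox One budget, 700 GFLOPS of it spent on compute, a 30% PS4 edge on that portion), not figures from any devkit:

```c
#include <stdio.h>

/* Sketch of the blended-advantage argument above. All figures are
 * illustrative assumptions: ~1310 GFLOPS total budget, 700 GFLOPS of
 * it spent on compute, and a 30% PS4 edge on that compute portion. */
int main(void)
{
    double total_gflops   = 1310.0; /* assumed Xbox One GPU budget      */
    double compute_gflops = 700.0;  /* assumed share spent on compute   */
    double compute_edge   = 1.30;   /* assumed PS4 advantage on compute */

    /* Scale only the compute share by the advantage, leave the rest
     * as is, and compare against the original budget.                 */
    double effective = compute_gflops * compute_edge
                     + (total_gflops - compute_gflops);
    printf("blended advantage: %.1f%%\n",
           (effective / total_gflops - 1.0) * 100.0); /* prints ~16.0% */
    return 0;
}
```

With those assumed numbers the blended figure lands around 16%, the same ballpark as the ~15% claimed above.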
 

Amir0x

Banned
Don't bother Amirox.

Not one person who talks about 'the power of the cloud' has been able to give an answer to this simple and obvious question so far.

But shit, even using Forza 5 as the example, wouldn't that prove CLOUD isn't doing shit? It's 60fps, yes, but it is clearly sacrificing on the visual spectrum in many ways: it lacks dynamic lighting, its reflections are the cheap, as-seen-in-PS360-gen type, tons of things it's simply not doing that even DriveClub is.

So, can he point to what areas specifically he thinks benefited from POWER OF THE CLOUD™? Because if anything this would prove that CLOUD is not doing much to help
 

TheD

The Detective
There is a 500 GFLOPS difference between the two machines, and 500 GFLOPS is 35% of the Xbone's power, so the PS4 is up to 35% more powerful.
Given the deficiencies of multithreaded scaling, 30% would be the most you could get out of it in most cases.

There are no damn scaling problems with graphical loads!
Albert was just talking out his arse!

And you are also ignoring the 2x ROPs and much more usable bandwidth of the PS4.
 

Skeff

Member
There is a 500 GFLOPS difference between the two machines, and 500 GFLOPS is 35% of the Xbone's power, so the PS4 is up to 35% more powerful.
Given the deficiencies of multithreaded scaling, 30% would be the most you could get out of it in most cases.

More than 500 GFLOPS; it's >40% using just the TFLOPS.
 
There is a 500 GFLOPS difference between the two machines, and 500 GFLOPS is 35% of the Xbone's power, so the PS4 is up to 35% more powerful.
Given the deficiencies of multithreaded scaling, 30% would be the most you could get out of it in most cases.

We've been told by devs working on both consoles that the speed difference is 40-50%.

Yet you have used your own math to tell us they are in fact wrong and the difference is 30%. What PS4 and XB1 devkits are you using to benchmark and get 30%?

Oh hold on...

Stop doing it to yourself.
 
Hmm. It's 5 AM here, I've been up working in the hospital all night, and while perusing this thread in the brief moments of quiet I've had, for some reason I had flashbacks to another 'always on' console that never got its moment in the sun... Honestly, looking back, some of this was downright prophetic. I think the game sales thing was maybe 1 degree of separation from the 'power of the cloud' nonsense. Shit, I should sue.



Please forgive the woefully out of date forum references and "L337 BRO SP34K". This is from ~ 2006, as I recall.

The future am here? Is that ******s console? lol

Edit: his name is censored.
 

Finalizer

Member
Please forgive the woefully out of date forum references and "L337 BRO SP34K". This is from ~ 2006, as I recall.

dafuq am i looking at

No seriously, this is real? 'Cause now I'm remembering the Phantom console.

[Image: the Phantom console]
 

Chobel

Member
There is a 500 GFLOPS difference between the two machines, and 500 GFLOPS is 35% of the Xbone's power, so the PS4 is up to 35% more powerful.
Given the deficiencies of multithreaded scaling, 30% would be the most you could get out of it in most cases.

You're kidding, right? So you're comparing the total FLOPS in both systems, 1.94 TFLOPS and 1.44 TFLOPS. That's not how it works. For someone who knows a lot of technical stuff, you seem to forget there's something called Amdahl's law. The CPU doesn't take 50% of the processing time (usually a lot less).

Also, don't forget the Xbox One OS reserves 10% of the GPU.
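The Amdahl's-law point fits in one line: the overall gain is capped by the fraction of frame time the faster part actually covers. A minimal sketch, with made-up fractions purely for illustration:

```c
#include <stdio.h>

/* Amdahl's-law sketch of the point above: a speedup that applies only
 * to part of the frame lifts the whole frame by less than that amount.
 * Both numbers below are made-up illustrative values.                  */
int main(void)
{
    double gpu_fraction = 0.8; /* assumed share of frame time that is GPU-bound */
    double gpu_speedup  = 1.4; /* assumed PS4 advantage on that GPU-bound share */

    double overall = 1.0 / ((1.0 - gpu_fraction) + gpu_fraction / gpu_speedup);
    printf("overall speedup: %.2fx\n", overall); /* ~1.30x with these numbers */
    return 0;
}
```

The smaller the CPU-bound share, the closer the overall number sits to the raw GPU gap.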
 
But shit, even using Forza 5 as the example, wouldn't that prove CLOUD isn't doing shit? It's 60fps, yes, but it is clearly sacrificing on the visual spectrum in many ways: it lacks dynamic lighting, its reflections are the cheap, as-seen-in-PS360-gen type, tons of things it's simply not doing that even DriveClub is.

So, can he point to what areas specifically he thinks benefited from POWER OF THE CLOUD™? Because if anything this would prove that CLOUD is not doing much to help

Silly Ami. They haven't turned on The Cloud™ yet. Just wait for launch day when they can pump that secret sauce directly into our homes.
 

bonus_sco

Banned
Mind explaining more about the bolded part? Why do you need to swizzle?

Depends on what you're doing.

If you're sending data as a texture to the GPU for GPGPU work, the GPU is likely to want to access the "texels" in the rows above and below the element the current thread is working on. To help the texture cache perform optimally you "swizzle" the data, so you might have the first 16 texels from the top row, followed by the first 16 texels from the second row. That way the texture cache does less fetching. An example would be a GPGPU blur filter.

Converting to and from this tiled format is swizzling.

If you then want to access the same data on the CPU in a linear order you de-swizzle, and the CPU cache can prefetch data in the correct order and have fewer cache misses.

If you benchmark and find that different tiling formats benefit the CPU and GPU on the Xbox One you can use a coprocessor to copy the memory to & from ESRAM or two different DRAM locations and swizzle at the same time "for free".
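A minimal sketch of the linear-to-tiled copy being described, assuming a simple 16-texel-wide column-strip layout; real hardware tiling formats (and whatever the move engines actually do) are more involved than this:

```c
#include <stddef.h>

/* Illustrative linear-to-tiled ("swizzle") copy along the lines described
 * above: each output strip holds the first TILE_W texels of row 0, then
 * the first TILE_W texels of row 1, and so on, so vertical neighbours sit
 * TILE_W bytes apart instead of a full image row apart. Running the same
 * loops with the copy direction reversed de-swizzles.                    */
#define TILE_W 16

void swizzle_strips(const unsigned char *linear, unsigned char *tiled,
                    size_t width, size_t height)
{
    size_t out = 0;
    for (size_t strip = 0; strip < width; strip += TILE_W) {
        for (size_t y = 0; y < height; ++y) {
            for (size_t x = 0; x < TILE_W && strip + x < width; ++x) {
                tiled[out++] = linear[y * width + strip + x];
            }
        }
    }
}
```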
 
Wow, I can't believe anyone who posts on GAF has actually fallen for the cloud bullshit. It's dedicated servers and online storage. No more, no less.
 

Raist

Banned
But shit, even using Forza 5 as the example, wouldn't that prove CLOUD isn't doing shit? It's 60fps, yes, but it is clearly sacrificing on the visual spectrum in many ways: it lacks dynamic lighting, its reflections are the cheap, as-seen-in-PS360-gen type, tons of things it's simply not doing that even DriveClub is.

So, can he point to what areas specifically he thinks benefited from POWER OF THE CLOUD™? Because if anything this would prove that CLOUD is not doing much to help

Some time down the line, if people start asking questions, I'm kind of expecting that MS will go "well, we had to give up on always online because of some angry nerds, and thus we couldn't use the power of the cloud anymore."
 
Hmm. It's 5 AM here, I've been up working in the hospital all night, and while perusing this thread in the brief moments of quiet I've had, for some reason I had flashbacks to another 'always on' console that never got its moment in the sun... Honestly, looking back, some of this was downright prophetic. I think the game sales thing was maybe 1 degree of separation from the 'power of the cloud' nonsense. Shit, I should sue.



Please forgive the woefully out of date forum references and "L337 BRO SP34K". This is from ~ 2006, as I recall.
Ok im finna leave this board for a few days and play some gta...
 

Chobel

Member
Depends on what you're doing.

If you're sending data as a texture to the GPU for GPGPU work, the GPU is likely to want to access the "texels" in the rows above and below the element the current thread is working on. To help the texture cache perform optimally you "swizzle" the data, so you might have the first 16 texels from the top row, followed by the first 16 texels from the second row. That way the texture cache does less fetching. An example would be a GPGPU blur filter.

Converting to and from this tiled format is swizzling.

If you then want to access the same data on the CPU in a linear order you de-swizzle, and the CPU cache can prefetch data in the correct order and have fewer cache misses.

If you benchmark and find that different tiling formats benefit the CPU and GPU on the Xbox One you can use a coprocessor to copy the memory to & from ESRAM or two different DRAM locations and swizzle at the same time "for free".

Thank you for the explanation.
 

KKRT00

Member
More than 500 GFLOPS; it's >40% using just the TFLOPS.

0.5 out of 1.4 TF is 35.7%. Don't know how you count over 40%...

===
We've been told by devs working on both consoles that the speed difference is 40-50%.

Yet you have used your own math to tell us they are in fact wrong and the difference is 30%. What PS4 and XB1 devkits are you using to benchmark and get 30%?

Oh hold on...

Stop doing it to yourself.

What devs? What games? Name them. Show me a real example.

All we have now are raw numbers, and the raw numbers don't give a 50% advantage.

===
There are no damn scaling problems with graphical loads!
Albert was just talking out his arse!

And you are also ignoring the 2x ROPs and much more usable bandwidth of the PS4.

But there are scaling deficiencies in GPUs. They are small, but they exist. That's why having two times more CUs doesn't give you two times more raw performance in games. Yeah, I know there are other factors, but still.

===
But whatever, think what you want. I won't argue over a few percent; it's a waste of time.
Have fun with the next 50 pages of the same stuff.
 

Riky

$MSFT
Isn't it true that a simple Tflops comparison doesn't tell the whole story though? I seem to remember that in raw Tflops the PS3 was twice as powerful as the 360, which is a 100% difference, more than double what we are talking about here.
 

astraycat

Member
There is a 500 GFLOPS difference between the two machines, and 500 GFLOPS is 35% of the Xbone's power, so the PS4 is up to 35% more powerful.
Given the deficiencies of multithreaded scaling, 30% would be the most you could get out of it in most cases.

===


No. For example, if a game used 700 GFLOPS for compute on both machines and the PS4 were 30% more efficient in compute than the Xbone, that's still only ~15% more efficient utilization of the overall power. With all the other deficiencies, it would be at most 42-43% higher performance on the PS4 side. But in a real game they'll just decrease the precision of the calculations or the number of particles by 20-30% and get performance parity, and in most cases you won't even notice the difference, because you won't be counting thousands of particles individually :)
There are possible scenarios where the PS4 can perform over 40% faster than the XB1. Here are a couple of examples:

1. Heavy bandwidth load from main memory -- if your graphics pipe is starved due to some extremely bandwidth hungry algorithm such as voxel cone tracing, it's possible for the PS4 to perform at up to 3x the speed of XB1, due to its bandwidth to main memory.

2. Deferred MSAA with a very triangle dense scene -- it's unlikely that more than one (if any) 1080p MSAA render target will fit into ESRAM. Additionally, if the limiting factor is that there are simply too many fragments for the ROPs to process, then it's possible for the PS4 to perform at 2x the speed of the XB1, as it has 2x the ROPs.

There are probably more, but I think this is enough for a quick sketch of the possibilities.
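For reference, a back-of-envelope look at the commonly quoted peak figures behind those two scenarios. These are paper ratios only; what you actually get depends on access patterns, CPU contention on the shared DDR3 bus, and how much fits in ESRAM:

```c
#include <stdio.h>

/* Paper ratios behind the two scenarios above, using the commonly quoted
 * peak figures. Real-world numbers depend on access patterns, contention
 * on the shared DDR3 bus, and ESRAM usage.                               */
int main(void)
{
    double ps4_main_bw = 176.0; /* GB/s, PS4 GDDR5 main memory     */
    double xb1_main_bw = 68.0;  /* GB/s, Xbox One DDR3 main memory */
    int    ps4_rops    = 32;
    int    xb1_rops    = 16;

    printf("main-memory bandwidth ratio: %.1fx\n", ps4_main_bw / xb1_main_bw); /* ~2.6x */
    printf("ROP ratio: %dx\n", ps4_rops / xb1_rops);                           /*  2x   */
    return 0;
}
```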
 
Isn't it true that a simple Tflops comparison doesn't tell the whole story though? I seem to remember that in raw Tflops the PS3 was twice as powerful as the 360, which is a 100% difference, more than double what we are talking about here.

PS3 and Xbox 360 had substantially different architectures. It would be like comparing a diesel engine to a petrol engine.

This time we have two petrol engines, only one is a 1 liter and the other is a 1.5 liter with a turbo.
 

TheD

The Detective
0.5 out of 1.4 TF is 35.7%. Don't know how you count over 40%...

===


What devs? What games? Name them. Show me a real example.

All we have now are raw numbers, and the raw numbers don't give a 50% advantage.

===


But there are scaling deficiencies in GPUs. They are small, but they exist. That's why having two times more CUs doesn't give you two times more raw performance in games. Yeah, I know there are other factors, but still.

===
But whatever, think what you want. I won't argue over a few percent; it's a waste of time.
Have fun with the next 50 pages of the same stuff.

There are no scaling problems of note with normal graphics calculations (the stuff done by the ALUs).
The other factors (everything else not scaling to the same % as the ALUs) are what could cause non-linear scaling, and you can not just brush that aside!

It is just MS FUD.
 
But shit, even using Forza 5 as the example, wouldn't that prove CLOUD isn't doing shit? It's 60fps, yes, but it is clearly sacrificing on the visual spectrum in many ways: it lacks dynamic lighting, its reflections are the cheap, as-seen-in-PS360-gen type, tons of things it's simply not doing that even DriveClub is.

So, can he point to what areas specifically he thinks benefited from POWER OF THE CLOUD™? Because if anything this would prove that CLOUD is not doing much to help

I hope a dedicated, unbiased person decides to drive off the track every single race. Then let's see what his Drivatar does in Forza 5. PS: my point is that the whole Drivatar thing is probably gonna measure your skill by times mostly, or maybe a mix of ghosting with AI pathfinding. I wanna see if the Drivatar really goes off the track like the real player does.
 

kinggroin

Banned
Ceezar is right cloud makes the 50% difference swing back so that one is about 20% more powerful at max frames per second. Also the fact that it has three OS to render while playing games am still be 60fps skews things alot
 

astraycat

Member
I hope a dedicated, unbiased person decides to drive off the track every single race. Then let's see what his Drivatar does in Forza 5. PS: my point is that the whole Drivatar thing is probably gonna measure your skill by times mostly, or maybe a mix of ghosting with AI pathfinding. I wanna see if the Drivatar really goes off the track like the real player does.
I must know what the Drivatar will do. I shall have to pick Forza as my first XB1 game then.

Will it tell me to kill myself in shame if my Drivatar is better than I am?
 

bonus_sco

Banned
I hope a dedicated, unbiased person decides to drive off the track every single race. Then let's see what his Drivatar does in Forza 5. PS: my point is that the whole Drivatar thing is probably gonna measure your skill by times mostly, or maybe a mix of ghosting with AI pathfinding. I wanna see if the Drivatar really goes off the track like the real player does.

Drivatar won't go off the track in the same places as players. That's not what it's meant to do.

Drivatar is interesting and I'm genuinely interested in the types of features the cloud might allow.

http://research.microsoft.com/apps/mobile/showpage.aspx?page=/en-us/projects/drivatar/forza.aspx
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
http://research.microsoft.com/apps/mobile/showpage.aspx?page=/en-us/projects/drivatar/forza.aspx

Although I understand that the version of Drivatar in Forza 5 will use the collected set of human driving profiles from many drivers to compute AI driving profiles, the FAQ on this (old) site still makes me chuckle.

Q: Can I use my Drivatar in online races? Can I swap / trade Drivatars online?

A: Unfortunately not. Drivatars are offline buddies only, they are too shy for a public arena.
 
Right, but that's an extra on the Xbox. The DDR3 is coherent if you want it to be. The ESRAM should be used as a GPU scratch pad.
The whole point of hUMA is that GPU and CPU can look at the same data with the same pointers, certainly speeding work and potentially allowing new algorithms. Since on One only the DDR3 can be coherent, the GPU and CPU have to share only 68 GB/s. And if the data you want to share with the CPU happens to originate on the eSRAM scratchpad, you have to take the time to copy it over to main memory first. PS4 has neither of these restrictions.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
Wow, I can't believe anyone who posts on GAF has actually fallen for the cloud bullshit. It's dedicated servers and online storage. No more, no less.

I am really annoyed by Microsoft's shitty PR on this topic. Cloud-based infrastructures are indeed a great thing, not because they provide some sort of magical power equivalent to local hardware (they don't), but because they provide known resources at a significantly lower price and with radically more flexible business models. That alone makes the difference between a game having an online infrastructure and not having one.

But instead, they had to go into full bullshit mode.
 

Chobel

Member
Drivatar won't go off the track in the same places as players. That's not what it's meant to do.

Drivatar is interesting and I'm genuinely interested in the types of features the cloud might allow.

http://research.microsoft.com/apps/mobile/showpage.aspx?page=/en-us/projects/drivatar/forza.aspx

What I'm wondering is how MS is profiting from Forza 5. If Drivatar can't be calculated on the Xbox One and needs the cloud, then it must be eating a lot of resources. Anybody playing Forza 5 (online) will use more than $5/month of Azure services for Drivatar calculations, so the XBLG cost won't be enough to make a profit. MS is not a charity, so what am I missing here?
 

bonus_sco

Banned
The whole point of hUMA is that GPU and CPU can look at the same data with the same pointers, certainly speeding work and potentially allowing new algorithms. Since on One only the DDR3 can be coherent, the GPU and CPU have to share only 68 GB/s. And if the data you want to share with the CPU happens to originate on the eSRAM scratchpad, you have to take the time to copy it over to main memory first. PS4 has neither of these restrictions.

But...

If the data is in a layout which is sub-optimal for either the CPU or the GPU, it can be better to use a move engine to swizzle the data, carry out other work on both to hide the latency, then complete the processing you wanted to do on the data.

This makes the Xbox harder to program for in general but gives a good amount of scope for optimization.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
What I'm wondering is how MS is profiting from Forza 5. If Drivatar can't be calculated on the Xbox One and needs the cloud, then it must be eating a lot of resources.

It is not about computing power, but about having a much larger sample of data to derive AI profiles from. Instead of just having the driving profile of the "local" player, you have the profiles of all players in the world to analyze. Computing power is not really important since the computation of profiles is not time-critical. You can perform it literally overnight, and thus downscale the amount of cloud-based compute resources greatly. The only thing that needs to scale on the backend side is, of course, the storage for profile information. The other thing that needs to scale with the number of users is the frontend that receives new data from individual players, but that is something that only needs to happen, at most, once after each race. The latter is no more difficult than running a website.
 

Riky

$MSFT
PS3 and Xbox 360 had substantially different architectures. It would be like comparing a diesel engine to a petrol engine.

This time we have two petrol engines, only one is a 1 liter and the other is a 1.5 liter with a turbo.

Which was my point. Just comparing Tflops figures doesn't take into account a myriad of other things, just like it didn't with a 2Tflop PS3 compared to a 1Tflop 360.
 
But...

If the data is in a layout which is sub-optimal for either the CPU or the GPU, it can be better to use a move engine to swizzle the data, carry out other work on both to hide the latency, then complete the processing you wanted to do on the data.

This makes the Xbox harder to program for in general but gives a good amount of scope for optimization.
But even perfect optimization this way would still only get you to 68 GB/s. There's simply no way to use hUMA at any higher bandwidth.
 
But...

If the data is in a layout which is sub-optimal for either the CPU or the GPU, it can be better to use a move engine to swizzle the data, carry out other work on both to hide the latency, then complete the processing you wanted to do on the data.

This makes the Xbox harder to program for in general but gives a good amount of scope for optimization.

Moving stuff from/to DDR3 is going to be a bloodbath. You'll need to direct the swizzling (more complexity) and it will drain bandwidth (a lot of DDR3 and a bit of ESRAM).
 

Vizzeh

Banned
It's a wiki, so I'm guessing someone saw the FCC filing and got carried away.

actually:

http://www.anandtech.com/show/6976/...wering-xbox-one-playstation-4-kabini-temash/4

"While Kabini will go into more traditional notebook designs, Temash will head down into the tablet space. The Temash TDPs range from 3.9W all the way up to 9W. Of the three Temash parts launching today, two are dual-core designs with the highest end A6-1450 boasting 4 cores as well as support for turbo core. The A6-1450’s turbo core implementation also enables TDP sharing between the CPU and GPU cores (idle CPUs can be power gated and their thermal budget given to the GPU, and vice versa)."

It seems to be describing a turbo mode; that is why IGN are saying a 2.75 boost clock and a 1.6 BASE clock, so it can upclock its cores IF all cores are not running... this seems legit??
 

Busty

Banned
PS3 and Xbox 360 had substantially different architectures. It would be like comparing a diesel engine to a petrol engine.

This time we have two petrol engines, only one is a 1 liter and the other is a 1.5 liter with a turbo.

This might be one of the best explanations for the difference between these two consoles I've read on GAF.
 
Which was my point. Just comparing Tflops figures doesn't take into account a myriad of other things, just like it didn't with a 2Tflop PS3 compared to a 1Tflop 360.

Except the CPU and GPU architectures in the PS4 and Xbox One are literally the same. People need to stop saying "bbbbbbbut Xbox 360 and PS3". There, the CPUs and GPUs (RSX vs. Xenos, for crying out loud) were wildly different, not to mention the memory architectures.

Both systems this time are extremely similar; it's just that one company has a significantly larger GPU with significantly more GPGPU modifications, as well as a simpler memory architecture. They're both using Jaguar CPUs; they've both got very similar GCN GPU architectures; the PS4 just has more of everything in its GPU. Not to mention, Sony actually has the development side to put out tools, documentation and drivers that don't suck.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
It seems to be describing a turbo mode; that is why IGN are saying a 2.75 boost clock and a 1.6 BASE clock, so it can upclock its cores IF all cores are not running... this seems legit??

The FCC listed the highest frequency in the overall system, not specifically the frequency of the processor; hence, that 2.75GHz refers to the memory, WiFi, or something else along those lines.
 