Why wouldn't Sony be touting this?

Because they aren't desperate for a (minor) advantage and they've already used the "most powerful console" card. Why would they?
I love these types of threads.
In my ignorance I can never work out if every participant is an expert in their field or whether random factoids are simply pulled from the nearest passing ass.
Everybody can't be right, can they?
Because MS has been pushing their CPU as more powerful. If your competitor is lying about your product in a way that fights your marketing strategy, you set the record straight. What reason do they have for not simply tweeting, "The CPU is ### GHz?"
Since this is the only real data we have, we have to go with it. It's just confusing to me why Sony would let MS have that advantage when they've released info on so many other things.
Sony have already established the idea amongst the hardcore that the PS4 has more power. There is no need to address this snippet of information, since it is irrelevant to the bigger picture.
The PS3 cost even more at $600, and the games for it until MGS4 hit were not that appealing. Yet it still sold better than the Xbox 360 on average per year.
You complain too much to be 'impartial'. So drop the act, it's pretty transparent.
You really don't like anyone talking "ill" of the XOne, huh?

You try to play the neutral-poster guy, but your posting habits resemble someone who sees Sony "fanboys" in his sleep.

Your posts extend beyond this thread. You jump in wherever a bad word is spoken of Xbox. You don't seem to run to the defense of Sony or Nintendo.
So the PS4 CPU is clocked at 2GHz? Wow, if true. I thought 1.6 was confirmed by official Sony documents.
I'm not doubting what he said at all. I simply said that it's interesting.

I quoted you word for word - you didn't "simply" say it was interesting. There was a little more to it than that, and doubt was certainly expressed.
And yes, many people got the PS2 due to it being a cheap DVD player. DVD movie sales greatly went up after the PS2 was released.

Many people buying it as a DVD player doesn't mean *everyone* bought it for that reason, or was even aware of it. And there's certainly a question of how long that remained widely true of the PS2 past its first year, once standalone DVD players started undercutting the PS2's price.
They are the EXACT SAME CPU modules clocked at different speeds.

You seem to be clinging to the notion that PS4 is clocked at 1.6 GHz while Xbone is clocked at 1.75 GHz. You're claiming Sony would "need" to clock at 1.6 GHz to save money, but that argument would apply equally to MS, and we know they're at 1.75 GHz.

The only logical explanation for the benchmark on the PS4 outperforming the Xbox One is that the Xbox One's operating systems are greedier and reserve more of the hardware for themselves. Perhaps there is also a slight advantage to the PS4 when writing to and reading from RAM.

Again, this was covered in the OP. The test is not affected by bandwidth or other external concerns; it's purely based on the performance of the CPU itself.
…but this performance graph is for single core only; the gulf between the iOS CPUs proves that.

Can I ask again how you're determining that? Both the iPad 2 and iPhone 5 are dual-core machines, with the latter on a newer architecture and a 30% higher clock. How does a 66% performance advantage for the iPhone 5 show us the test is per-core rather than per-CPU?
More to the point, this test seems to indicate the PS4 is not clocked at 1.6 GHz; the math doesn't work out otherwise. It seems fairly clear the PS4 is clocked at 1.75 or 2 GHz — depending on how the chips are being tested — and I'm currently leaning towards the latter.
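The "math" being gestured at can be made explicit. This is a minimal sketch assuming the same-silicon, linear-clock-scaling premise; the two benchmark scores are placeholders standing in for the Substance Engine figures in the OP (swap in the actual numbers), and only the 1.75 GHz Xbox One clock is an announced figure:

```python
# Placeholder benchmark scores; only the Xbox One clock is confirmed.
XBONE_CLOCK_GHZ = 1.75   # announced by Microsoft after the up-clock
xbone_score = 12.0       # assumed Xbox One benchmark result (MB/s)
ps4_score = 14.0         # assumed PS4 benchmark result (MB/s)

# If both CPUs really are the same Jaguar silicon, single-core throughput
# should scale roughly linearly with clock speed, so the score ratio
# implies a clock for the PS4:
implied_ps4_clock = XBONE_CLOCK_GHZ * (ps4_score / xbone_score)
print(f"Implied PS4 clock: {implied_ps4_clock:.2f} GHz")
```

With these placeholder scores the implied clock comes out a little above 2 GHz, which is the sort of arithmetic behind the "1.75 or 2 GHz" guesses; a different score ratio would imply a different clock.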
Every source I can find online claims the PS4 is clocked at 1.6GHz and the Xbox One is clocked at 1.75GHz.
Wat.
How can there be a source if Sony has never given the CPU speed? Those sources of yours are giving their opinion on what they BELIEVE the CPU speed is, based on comparing it to similar chips.
Every source I can find online claims the PS4 is clocked at 1.6GHz and the Xbox One is clocked at 1.75GHz. Since the CPU modules on the APU are THE SAME SILICON as the Xbox One's CPU, they will perform better than the PS4's in a vacuum (without considering OS, memory, etc.) by a small margin. If indeed these facts are wrong (please provide a source if so), then you may be right.
As for my 'argument,' it would apply more to the Xbox One than the PS4 because the Xbox One APUs are larger and more costly to produce, especially due to the ESRAM. My point was that Microsoft and Sony both took into consideration AMD's chip yields. For example, if 80% of APUs were able to be clocked at 1.75GHz, that yield might have been good for Microsoft, but Sony wanted a 90% chip yield and opted for the lower clock speed to save costs.

You're pulling numbers out of your ass, and they mean nothing. You don't know what yield numbers either company was happy with or aiming for, you have no idea what thermal characteristics each side was looking for, or, for that matter, what clock speed Sony was aiming for. None of us do.
The 'math' does not prove anything. A benchmark is not conducted inside of a vacuum. Unless the benchmark is not running on an operating system and does not use memory (only using the CPU registers and LU Caches), then it cannot possibly be testing only the CPU performance. And from all the data I have seen, since the Xbox One's CPU module is THE SAME SILICON as the PS4's CPU module and is at a higher clock, it will outperform the PS4's CPU by a small margin.
How? As an engineer, I would like to know the exact methodology behind this test. Does it not run on an operating system? How does it not use RAM? Was the test small enough to run entirely in the LUCache, and if so, how did the operating system not interfere? I did not see that information in the OP, so I am skeptical of this assumption.
For all we know, all this benchmark is proving is that the FreeBSD operating system kernel used on the PS4 outperforms the Windows operating system kernel used on the Xbox One. Biiiiig surprise there...

What we DO know is that the PS4's CPU outperformed the Xbone's...
There is a caveat to all of this. If the Xbox One has disabled more CPU cores than the PS4 in order to improve chip yields, this could also mean the PS4's lower clock-rate CPU will outperform.
Why wouldn't Sony be touting this?
Yes, you can get more out of the PS4's CPU than you can the Xbox's.
I also like your assumption that the silicon is identical... Modular or not, the APUs are of significant enough difference that it would not shock me one bit to find out that the CPU architecture has been tweaked...

At the very most, there are only minor tweaks which will not affect performance one way or another. I designed computer circuitry in college, and no modern CPU is designed by hand anymore. The circuitry for CPUs is created using Verilog or a similar hardware description language. If there are any differences between the PS4 and Xbox One silicon, it is the equivalent of modifying a few lines of Verilog.
Was it not said by AMD that the very agreements between Sony and MS were completely different, with AMD licensing technology to MS for them to design their own silicon?...

Read above. They both have Jaguar cores. The Verilog for both processors is either identical or 99.9% the same.
In a vacuum your ASSUMPTION would hold true; however, it is you that has the burden of proof here... There is evidence in the OP that suggests the PS4 CPU runs at a higher clock speed... You refute this... You need to provide the information to support your claim...

If you had read what I said, I did not outright refute this; however, I am very skeptical. I'm going to quote myself...
"As an engineer, I would like to know the exact methodology behind this test. Does it not run on an operating system? How does it not use RAM? Was the test small enough to run entirely in the LUCache, and if so, how did the operating system not interfere?"
These are the valid questions I have which have not been answered, and will not be answered.
I'm done arguing with you. I have designed computer circuitry and you have not. The assumptions I made are safe ones for anyone who knows the industry to make.
Once again-- I just searched, and every article I can find claims the PS4's CPU is 1.6GHz, including Wikipedia. I'm not going to say that it isn't; however, I think it would be safe to assume 1.6GHz until proven otherwise.

Also, this 1.75GHz and 2GHz rumor has been started by an apples-to-oranges comparison. Due to differences in RAM and operating systems, the Xbox One benchmark results cannot be compared directly to the PS4's. If one apple benchmarks at 1.25x the rate of an orange, that does not mean I have 1.25 apples.
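The apples-to-oranges objection can be sketched with a toy model. Every number below is invented purely to illustrate the point: a benchmark score folds clock speed together with everything else (IPC, OS overhead, memory behaviour), so the ratio of two scores is not a ratio of clocks unless all the other factors are equal.

```python
def score(clock_ghz: float, ipc: float, efficiency: float) -> float:
    """Toy model: throughput = clock x instructions-per-cycle x
    a fudge factor for OS/RAM/compiler effects."""
    return clock_ghz * ipc * efficiency

# Same silicon (same IPC), but hypothetical platform efficiencies:
ps4 = score(1.6, ipc=2.0, efficiency=1.00)
xbone = score(1.75, ipc=2.0, efficiency=0.85)

# The lower-clocked machine wins this made-up "benchmark", so reading a
# clock speed straight off the score ratio would mislead here:
print(ps4 > xbone, ps4 / xbone)
```

Of course, the counter-argument elsewhere in the thread is that if the test is cache-resident and single-core, the efficiency factors largely cancel and the clock inference becomes reasonable.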
Every source I can find online claims the PS4 is clocked at 1.6GHz and the Xbox One is clocked at 1.75GHz. Since the CPU modules on the APU are THE SAME SILICON as the Xbox One's CPU, they will perform better than the PS4's in a vacuum (without considering OS, memory, etc.) by a small margin. If indeed these facts are wrong (please provide a source if so), then you may be right.

As mentioned, the 1.6GHz figure you're seeing is being repeated from the original VG Leaks report. AFAIK, Sony have never confirmed the actual clock.
As for my 'argument,' it would apply more to the Xbox One than the PS4 because the Xbox One APUs are larger and more costly to produce, especially due to the ESRAM. My point was that Microsoft and Sony both took into consideration AMD's chip yields. For example, if 80% of APUs were able to be clocked at 1.75GHz, that yield might have been good for Microsoft, but Sony wanted a 90% chip yield and opted for the lower clock speed to save costs.

If you think MS would be more motivated to downclock for yield — and I agree — then why would you assume Sony were the ones who actually did so?
The 'math' does not prove anything. A benchmark is not conducted inside of a vacuum. Unless the benchmark is not running on an operating system and does not use memory (only using the CPU registers and LU Caches), then it cannot possibly be testing only the CPU performance. And from all the data I have seen, since the Xbox One's CPU module is THE SAME SILICON as the PS4's CPU module and is at a higher clock, it will outperform the PS4's CPU by a small margin.

Well, there are certainly benchmarks designed to run entirely within a chip's cache, and since those caches are managed by the chip rather than the OS, I don't understand why the OS would have any effect on the test. I could see compiler differences affecting the test, but if Allegorithmic were getting incongruous results — say, if the lower-spec'd chip were outperforming the higher-spec'd one — I would assume they would correct the issue rather than publish flawed results.
There is a caveat to all of this. If the Xbox One has disabled more CPU cores than the PS4 in order to improve chip yields, this could also mean the PS4's lower clock-rate CPU will outperform.

Err, I don't think either disables any CPU cores at all, though rumor has it the XBone reserves two for the OS, while the PS4 reserves only one. Both systems disable two GPU cores though, leaving 12 active on XBone and 18 active on PS4. Perhaps that's what you're thinking of? But again, looking at the Tegra 4 and A6 results, this seems to be testing the performance of a single core, meaning OS reservations/interference aren't relevant here.
The software in question is some kind of algorithmic texture generation system. Since the results it outputs are measured in tens of megabytes per second, and the whole point of progressive art generation is not having large art assets to begin with, it's unlikely bandwidth plays a large role in the results.

Thanks for the explanation. Also, yay, I'm smart!!
Well, the point of this thread is to highlight that all of those reports were apocryphal to begin with. Sony has never officially confirmed the final clock speed of their CPU, and the evidence everyone points to refers to an alpha kit from February, when the Xbox One CPU also ran at 1.6GHz and the PS4 only had 4GB of RAM. Things change, and this benchmark certainly lends strong evidence to the conclusion that the PS4 CPU is clocked higher than everyone, including Microsoft, assumed.
It can't be apples to oranges, since the benchmark has both doing the same exact work.
The software in question is some kind of algorithmic texture generation system. Since the results it outputs are measured in tens of megabytes per second, and the whole point of progressive art generation is not having large art assets to begin with, it's unlikely bandwidth plays a large role in the results.
We already know Microsoft's OS reservation on the Xbox One is 2 of the 8 cores, so there should not be a concern of the OS stealing time from the other 6 in this benchmark. I suppose we could say the compiler is to blame, but given both are using AMD CPUs, you would think the compilers for both rely heavily on the same underlying AMD-provided technology.

The only other theory is that the hypervisor in the Xbox One is terribly inefficient. If we assume, as you have, that the PS4 is still clocked at 1.6GHz, with the Xbox One at 1.75GHz, that means the PS4 is nearly 30% faster clock for clock in this test. Frankly, I find it far easier to believe the PS4 clock speed was set at 2GHz for final hardware than to believe the Xbox One's virtual machine setup is so dismally inefficient.
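The "nearly 30% clock for clock" figure falls out of simple division. A sketch, using placeholder benchmark scores (the real numbers are in the OP) together with the contested 1.6/1.75 GHz clock assumption:

```python
# Placeholder scores; the PS4 clock is the contested assumption.
ps4_score, ps4_clock = 14.0, 1.6       # assumed MB/s, assumed GHz
xbone_score, xbone_clock = 12.0, 1.75  # assumed MB/s, announced GHz

ps4_per_ghz = ps4_score / ps4_clock        # throughput per GHz
xbone_per_ghz = xbone_score / xbone_clock  # throughput per GHz

# How much faster per clock the PS4 would have to be for both the
# scores and the 1.6 GHz assumption to hold simultaneously:
advantage = ps4_per_ghz / xbone_per_ghz - 1
print(f"PS4 per-clock advantage: {advantage:.0%}")  # ~28%
```

If that per-clock gap looks implausible for identical silicon, the natural alternative is to reject the 1.6 GHz assumption instead, which is exactly the argument being made here.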
Ultimately, we don't have enough information to say exactly why the Xbox One's CPU is slower, but we do now have evidence showing that's the case. It doesn't matter how this is achieved, the results are what is important. And the only reason this minor difference is even particularly notable is that Microsoft was claiming a CPU advantage on this very forum, and they've earned every bit of dirt that's been kicked in their face this year.
Why doesn't someone just ask Yoshida on Twitter what the CPU is clocked at? He will prob answer, as he is a total boss.

I was gonna do that, but I forgot.
These are all good points-- especially about the hypervisor-- and it could be that the PS4 is clocked higher than 1.6GHz; however, I would caution against relying on benchmarks to determine clock speed. Until I see an official announcement, I am going to continue to believe it is 1.6GHz.

As for compilation, the code on the PS4 was most likely compiled with GCC, while the code on the Xbox One was compiled with Microsoft's compiler; however, this is just speculation.
One point however--
If each individual read and write is slower, the CPU could pipeline the operations differently, and the Xbox One could perform the work less efficiently than the PS4, even though it has the same silicon and a higher clock speed. Moreover, the Xbox One may be compensating for less efficient RAM by relying slightly more on branch prediction and doing more work to accomplish the same goal; however, this shouldn't make a terribly large difference.
I actually believe 1.6 was only ever confirmed by MS, for the One's CPU, post-up-clock.
All part of game journalists' willingness to make stories out of half-information.
The thing I find funny is the claim that you will continue to believe the 1.6 until official numbers are released. 1.6 is not an official number but an assumed figure. So you are essentially saying that you will continue to believe an assumption over new information verified by mathematics until an official figure is released.

Yeah, all from a single rumor that came out 10 months before launch, but it's gospel until Sony say otherwise, benchmarks be damned. =/
MS played that card and it wasn't even true.
Wait, that wasn't true?

Most powerful console? No!
It's true if you preordered a PS4.
Maybe whatever the hell this is boosts the CPU up by 25% when running the Substance Engine, because it's some kind of co-processor or accelerator?
Needs more expounding. We're getting into compiler differences now?
dude, PS4 is using -funroll-loops
Xbone needs to read up on the documentation, it's -O3 the letter, not -03 the number.
Context? Is that the extra chip they have for handling video compression on the fly, downloading, etc.?
Before anyone calls me a fanboy for the Xbox One-- I'm a PS4 owner and do not have an Xbox One; however, I plan to get one when Microsoft drops the price and takes Kinect out of the box.
Wow, this discussion is still going on? I hate to burst everyone's bubbles; however, the Xbox One and PS4 CPU modules on the APU are most likely THE SAME HARDWARE clocked at different speeds. AMD would not design two different CPUs with the exact same purpose. And for those of you who will say something like, "but the PS4 has more graphics cores," or "but the Xbox One has ESRAM," know this-- the APUs are modular in their design, and just because one APU has something different does not mean the individual modules comprising the APU are any different.
Most people don't understand this, but CPUs are expensive to make. AMD and Intel only design a few different CPU designs every year, if that. All of the different clock speeds are actually the same hardware. After a CPU is manufactured, assuming it is not dead on arrival, it is tested to see what clock speed it will reliably run at. Also, since the demand for lower-end CPUs tends to be greater than the demand for higher-end CPUs, oftentimes CPUs which test as capable of performing better are underclocked.
http://www.tomshardware.com/picturestory/514-27-intel-cpu-processor-core-i7.html
"Based on the test results of class testing, processors with the same capabilities are put into the same transporting trays. This process is called "binning," a process with which many Tom's Hardware readers will be familiar. Binning determines the maximum operating frequency of a processor, and batches are divided and sold according to stable specifications."
The PS4 CPU module is probably clocked at a slightly lower speed to improve chip yields. Being able to have many different CPUs of different clock speeds is true for desktop CPUs; however, there will not be a 1.75GHz PS4 and a 1.6GHz PS4, obviously. Therefore, any APUs which are unable to reliably run at the PS4's clock speed will have to be discarded. This is a large expense, because the APU also houses the graphics processing which would explain why Sony was prudent with their CPU clock speed. I repeat: They are the EXACT SAME CPU modules clocked at different speeds.
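The binning/yield trade-off described above can be illustrated with a toy simulation. The distribution below is completely invented (not real AMD yield data); it only shows the qualitative point that a lower clock target lets a larger fraction of dies pass.

```python
import random

random.seed(0)  # reproducible toy run

# Pretend each die's maximum stable clock is normally distributed
# around 1.85 GHz with a 0.15 GHz spread (made-up numbers):
max_clocks = [random.gauss(1.85, 0.15) for _ in range(100_000)]

# Fraction of dies usable at each hypothetical clock target:
for target in (1.6, 1.75, 2.0):
    passing = sum(1 for f in max_clocks if f >= target)
    print(f"{target:.2f} GHz bin: {passing / len(max_clocks):.1%} yield")
```

The lower the target, the higher the yield and the fewer APUs get discarded, which is the cost argument being made for a conservative console clock.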
The only logical explanation for the benchmark on the PS4 outperforming the Xbox One is that the Xbox One's operating systems are greedier and reserve more of the hardware for themselves. Perhaps there is also a slight advantage to the PS4 when writing to and reading from RAM.
Needs more expounding. We're getting into compiler differences now? And also how RAM affects CPUs, etc.? Just saying, this post needs a lot more explanation (not saying you're wrong, but for meaningful discussion, more technical details need to be put forth).
So, basically, the PS4 has either a) a higher clock OR b) more cores enabled?
#Team7Cores