So MS now tries to fool its own devs with some PR? Why can't you accept this information as a given? Where is your proof that this is pure PR bullshit?
102 * 2 != 192 for starters...
Flops are flops, now they're supposed to be different??
Here's a good one:
DDR3 bandwidth =/= GDDR5 bandwidth, therefore cannot be compared either... amirite?
First of all, flops are just the number of basic floating-point operations per second, which roughly means how many numbers you can crunch each second. Computing floating-point numbers is not everything the GPU does, but if you are comparing the same operation, there is no such thing as an exchange rate, because you are comparing the same thing.
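To make the "flops are just a count of operations" point concrete, here is a minimal sketch of how a peak FLOPS rating is typically derived (cores × clock × operations per cycle). The core count and clock below are illustrative placeholders, not any real GPU's specs.

```python
def peak_gflops(shader_cores, clock_ghz, flops_per_core_per_cycle=2):
    """Peak GFLOPS = cores * clock (GHz) * ops per cycle.

    2 ops/cycle assumes a fused multiply-add counts as two operations,
    which is the usual convention in marketing FLOP numbers.
    """
    return shader_cores * clock_ghz * flops_per_core_per_cycle

# Hypothetical part: 768 cores at 0.8 GHz with FMA -> roughly 1228.8 GFLOPS peak
print(peak_gflops(768, 0.8))
```

Note this is the *theoretical* peak; how much of it real workloads reach is exactly the architecture-efficiency argument in the rest of the thread.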
Nope. Architectures are different.
That's tough to say. IMO you'd have to look at benchmarks of various comparable AMD/nVidia cards to form a ratio and get an idea. Take 680 benchmarks vs 7970 benchmarks. AMD/ATi seems to have improved the performance of their GPUs compared to comparable-class GPUs from nVidia, but they still lag behind relative to their FLOP rating.
The 7970 is rated at ~3.8 TFLOPs and the 680 at ~3.1 TFLOPs. Yet in most benchmark comparisons I've seen (including the ones in the following link), the 680 still edges out the 7970 in most tests.
http://www.tomshardware.com/reviews/geforce-gtx-680-review-benchmark,3161-7.html
And from what I've seen, nVidia's FLOP rating has been considered closer to "real world performance", or more accurate.
Then of course the GPUs in the consoles won't be on par raw power-wise with a 7970 or 680.
We're left wondering how unoptimized these demos are when running on a 680, and, depending on what the console GPUs are (PS4's seems to be the best so far), how long it will take to fully optimize for them.
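Just to put numbers on the 7970 vs 680 point above: the 7970's rated FLOPs are roughly 23% higher, yet the 680 wins most of the linked benchmarks, which is the whole argument that FLOPs don't map 1:1 to performance. A trivial check, using only the ratings quoted in the post:

```python
# Rated peak FLOPs quoted above (TFLOPs)
hd7970_tflops = 3.8
gtx680_tflops = 3.1

# Ratio of rated throughput: ~1.23, i.e. the 7970 is rated ~23% higher
print(round(hd7970_tflops / gtx680_tflops, 2))  # -> 1.23
```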
I don't think it's a meaningless question, by going with an average over a lot of games/graphics benchmarks you can at least arrive at a ballpark number.
Since the Kepler (the 600 series), I'd say the exchange ratio is about 4 : 3 or slightly better, in favor of NV. Before Kepler, it was closer to 5 : 3 in favor of NV.
As for recent developments, Kepler went broader in terms of SIMD (and thus lost some efficiency in exchange for more FLOPs per transistor count) while AMD ditched VLIW which increased their resource utilization for most workloads.
This also means that a modern non-VLIW 2.4 TFLOP AMD GPU will be more than 10x as fast in most realistic scenarios as the 240 GFLOP Xenos in 360.
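The "exchange ratio" idea in the post above can be sketched as a simple scaling: divide AMD's rated FLOPs by an assumed efficiency ratio before comparing against nVidia. The 4:3 Kepler-era figure is the poster's own estimate, not a measured constant, so treat the function as illustrative only.

```python
def nv_equivalent_tflops(amd_tflops, ratio=4/3):
    """Scale an AMD FLOP rating into a rough 'NV-equivalent' figure.

    ratio=4/3 is the poster's Kepler-era guess; pre-Kepler they
    suggest something closer to 5/3.
    """
    return amd_tflops / ratio

# A 2.4 TFLOP AMD part would be treated like a ~1.8 TFLOP Kepler part
print(round(nv_equivalent_tflops(2.4), 2))  # -> 1.8
```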
Why is this spin? I don't see any concrete evidence for that, just speculation.
Where is your proof that it isn't, though?
That's not really how the burden of proof works.
Which kind of flops would you say are the best kind of flops?
May 2013
June 2013 with this new update Xbox one has
192GB/s eSRAM
68GB/s Main RAM
30GB/s bandwidth between the CPU and GPU
=290GB/s bandwidth
So Xbox one has 290GB/s bandwidth, just following their math people.
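The sarcasm above is about adding the peak rates of three separate links and presenting the sum as one bandwidth figure. The arithmetic itself is trivial; the point is that the same bytes can't travel over all three paths at once, so the total is not a usable number:

```python
# The three separately quoted peak rates (GB/s) from the update above
esram_gbps = 192    # eSRAM, read+write combined per the article
ddr3_gbps = 68      # main DDR3 RAM
cpu_gpu_gbps = 30   # CPU<->GPU link

# Summing them reproduces the quoted "290GB/s" marketing total
print(esram_gbps + ddr3_gbps + cpu_gpu_gbps)  # -> 290
```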
Now that close-to-final silicon is available, Microsoft has revised its own figures upwards significantly, telling developers that 192GB/s is now theoretically possible.
Still finding it odd that DF are the only ones reporting this.
I'm guessing he is referring to the general response from various users saying this is PR bullshit.
Which would be ridiculous to think: that MS would bullshit their own devs about what the machine can do. I mean, do any of you realize the implications of something like that?
I bought some flops off the street the other day... good shit man. 2TBs will lay you out for an evening.
Accepted.
lol that's not math, that's a fairy tale. It's like saying I can benchpress 200 pounds, squat 300 pounds, and deadlift 500 pounds, THAT MEANS IM STRONG ENOUGH TO LIFT 1000 POUNDS.
Were they double precision? That shit is intense.
I'll let others talk for me:
Short answer is that different GPU architectures make the flop comparison not very useful. So you'd have a point if the two consoles had different GPU architectures, but they don't, so you don't.
Don't be upset.
Can people just be happy that there is a possibility that the xbone is a little bit better than before? Can't that only benefit everyone?
Reading that article it seems that the only information coming out of Microsoft is this:
They appear to have arrived at that number by combining peak theoretical read and write speed, so I think you guys are right about them downgrading the eSRAM to 96GB/s per direction; otherwise they would surely be telling developers 204GB/s. The 88% has probably come from the Eurogamer editor seeing the 102→192 increase without knowing about the rumoured downgrade.
What the fuck are we talking about? You are coming up with random math as an argument? Seriously?
No? It's explained in the next paragraph. But jumping to conclusions is easier, it seems.
I don't understand why some people make an effort to discredit the information; it seems the good news bothers them.
Well, for a start... give me the name of ONE dev who received this info. I don't see any. So we know nothing. I only see "theoretically possible" things that, by logic and math, sound like bullshit, and a reliable outlet like Digital Foundry shouldn't be on board with this until they test the thing for themselves, as they usually do.
Hm... let me think. Looking at the accurate articles from DF over the last few years... I'm inclined to trust them.
Looking at the track record of maltrain... Nothing.
Why should I swallow some random comment from a gaffer over a detailed article from a reliable website?
Nothing at all is explained in that article... I mean nothing. This entire thread is assumption-based, as is the article itself, aside from the 192GB/s number (which is second-hand to begin with). Let's not kid ourselves otherwise.
It's not like PS4 and Xbone GPUs are exactly the same either. MS deliberately decided to trade GPU processing power for more on-chip memory. That alone gives differences in architecture that could unbalance the flops-to-performance ratio.

His statement was wrong... but his intent is still true. nVidia compared to AMD is pointless in numbers like FLOPs because of architecture differences... but that's not what's happening here. This is basically comparing a 7770 to a 7850, i.e. the same basic architecture, and when comparing within, say, the Radeon family (usually in a price-to-performance comparison), the FLOPs are an easy "at-a-glance" comparison tool. Of course, most sites at that point just use FPS regardless.
And I find it odd that DF are linked to continuously for their mythbusting of MS's cloud claims, but when they report something positive they're suddenly in MS's pocket and totally lying about it. *shrug* Without any proof or even evidence, the claim is pure, unrefined FUD.
This is not "random math", but thank you for the accusation...
102GB/s was the embedded memory's peak bandwidth up until today:
800MHz × 128 bytes/cycle × 1 (read or write operation) = 102.4GB/s
800MHz × 128 bytes/cycle × 2 (1 read and 1 write operation) = 204.8GB/s
DF stating the new peak memory bandwidth as 192GB/s while maintaining the 800MHz clock speed just doesn't add up...
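The post's arithmetic can be reproduced directly. One caveat: the interface has to move 128 *bytes* per cycle (a 1024-bit-wide path) for the numbers to work out, even though the post writes "128bit"; that assumption is baked into the function below.

```python
def peak_gb_per_s(clock_mhz, bytes_per_cycle=128, ops_per_cycle=1):
    """Peak bandwidth = clock * bytes moved per cycle * ops per cycle."""
    return clock_mhz * 1e6 * bytes_per_cycle * ops_per_cycle / 1e9

print(peak_gb_per_s(800))                   # 102.4 (read OR write)
print(peak_gb_per_s(800, ops_per_cycle=2))  # 204.8 (1 read AND 1 write)
```

Which is exactly why a 192GB/s figure at an unchanged 800MHz clock looks inconsistent: neither the single-operation nor the dual-operation case lands on 192.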
Apparently, there are spare processing cycle "holes" that can be utilised for additional operations
You were the one talking about AMD flops being different from nVidia flops... No one doubted that comparing flops to deduce actual performance is a stupid thing, because it's mostly about how effective the actual architecture is.
What exactly are you suggesting then, that the esram is useless? If it can't be used for even a routine such as "moving a hand", what do you think it can be used for?
Please elaborate.
Let's go through this logically, I'll lay it out so everyone can understand.
Previously, MS engineers thought read/write was only unidirectional, and the APU-to-eSRAM bandwidth was pegged at 102GB/s; in reality it was bidirectional regardless of what they thought, which would mean it was actually 204GB/s, lining up perfectly with an 800MHz GPU clock.
Now we have information saying that MS engineers have discovered that it is bidirectional (not that it has become bidirectional, just that they found out), and the consolidated read/write bandwidth is 192GB/s, which is 96GB/s in each direction. That figure is lower than the old 102GB/s figure and implies a GPU clock of 750MHz.
So yes, 192 is higher than 102, but it is not comparable as the latter is unidirectional bandwidth and the former is bidirectional. The fact that MS engineers didn't know or realise that you could run read/write operations simultaneously is irrelevant because it was still possible, this is not a new addition, more a new discovery. Think of it like a scientific discovery, just because an apple fell on Newton it doesn't mean he invented gravity, it existed before that, he just discovered it.
So we've actually gone from 204GB/s to 192GB/s or on the old measure, 102GB/s to 96GB/s, it's not that hard to understand. Leadbetter has this one wrong and he should try to correct it.
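The poster's reasoning checks out arithmetically, assuming the same 128-bytes-per-cycle interface width used in the original 102.4GB/s math (an assumption, since the article never states the bus width):

```python
bytes_per_cycle = 128        # assumed width, matching the 800MHz * 128B = 102.4GB/s math
combined_gbps = 192          # the new combined read+write figure

# Split the combined figure per direction, then back out the implied clock
per_direction_gbps = combined_gbps / 2                            # 96.0 GB/s each way
implied_clock_mhz = per_direction_gbps * 1e9 / bytes_per_cycle / 1e6

print(per_direction_gbps, implied_clock_mhz)  # -> 96.0 750.0
```

So under that assumption, 192GB/s combined really does imply 96GB/s per direction and a 750MHz clock, i.e. a downgrade from the old unidirectional 102.4GB/s at 800MHz.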
Apparently, there are spare processing cycle "holes" that can be utilised for additional operations
They never said the ESram is completely bi-directional.
eSRAM, especially if it's only 32MB, is irrelevant these days.
It could be used for a ping-pong game running one algorithm based on a single C++ class file, but for complex games with thousands of things on screen it is basically useless.
I am sure that before responding to me you realize that whenever a thread unloads its resources or is killed by the kernel there is a cache penalty, and we are talking about a meagre 32MB which can't be used concurrently by all routines.
Umm... the two GPUs are identical architectures, absolutely. The only "not identical" parts are the number of compute units on each GPU and the included on-die eSRAM. Neither of those changes how the architecture itself performs per FLOP; it's still X total CUs running at Y clock speed under the same architecture.
Please explain to me how a "partially" 88% bidirectional bus works...