
TRUTHFACT: MS having eSRAM yield problems on Xbox One

Because he refuses to actually address that inconsistency in his posts, and he has a history of questionable claims as pointed out by M°°nblade.

I read all of his posts and he seemed to respond pretty reasonably, aside from that outburst he had. Sure, he may be a Microsoft employee just sent to do damage control. But he could also be a Microsoft employee who actually knows what's going on in this situation.
 

abic

Banned
I read all of his posts and he seemed to respond pretty reasonably, aside from that outburst he had. Sure, he may be a Microsoft employee just sent to do damage control. But he could also be a Microsoft employee who actually knows what's going on in this situation.

I read all of his posts too and I disagree.

Not once did he respond reasonably, and that outburst was just terrible.

That you're assuming he's a Microsoft employee in both of your scenarios just makes it worse.
 
Kind of have to side with SenJutsu. Sure, he may be wrong in the end, but he did take the time and the possible risks of sharing some valuable information. He didn't have to share anything, and if people don't want to believe him, that's fine, but don't keep questioning his credibility or digging through his recent posts like it's a witch hunt.

Honestly, I don't know why anyone would want to share leaked information on here if GAF's response is always: if you're wrong, you get a ban.

No one here said that; he was the one who placed himself in that position. It was like some sort of mini meltdown.
 

MogCakes

Member
I read all of his posts and he seemed to respond pretty reasonably, aside from that outburst he had. Sure, he may be a Microsoft employee just sent to do damage control. But he could also be a Microsoft employee who actually knows what's going on in this situation.

Could be. But him expecting us to take his word for it and getting angry when people don't, coupled with that contradiction and his angry reply to M°°nblade calling him out and me pressing him on it, really doesn't help his case.
 

Rafy

Member
Isn't anyone reminded of the PS3 situation with this? You'd think they'd learn after that. Making customized chips never ends well without proper testing and an acceptable price. I'm starting to think Pachter might be right about the price: if this APU is as heavily customized as MS claims, I don't really think it will be cheap. I hope I'm wrong on this; I just hope we don't have another fiasco like the RROD...
 
Just as a sidenote, next time someone says games culture is super masculine and competitive?

This is the sort of environment they mean.
 
No one here said that; he was the one who placed himself in that position. It was like some sort of mini meltdown.

Yeah, I saw that, but it just seems to happen all too often on here. I remember a recent thread where someone claimed to have a list of games for a Nintendo Direct or something, and people were already saying he might be banned for being wrong. Reiko, mentioned in this thread, was most likely also banned for some wrong info. Not really a friendly place to share leaked information when everyone is questioning you.
 

joshcryer

it's ok, you're all right now
Didn't someone say a couple of weeks back that some Xbone multiplatform games would end up running at 30FPS while running at 60FPS on PS4?

I wonder if it was related to this.

Nah, that was just basic spec comparison. You could target 60FPS on X1, but like some PS3 ports of 360-targeted games, you might get drops. So it's more logical to target 30FPS and avoid the frame drops. Basically, the known/published spec difference was seen as large enough that there'd be that discrepancy.

The whole "downclocking" thing may have played into some of that speculation but on the forums it was just a basic spec comparison.
 

AngryMoth

Member
So which system do you guys expect will be chosen more often as the lead platform by third parties? Might they go with the Xbone so they have an easier port to the PS4, as opposed to making a game for the higher-spec system and then having problems getting it to run on less powerful hardware? Will this affect how big the difference is between the two versions?
 
I'm going with the people who have had legit sources in the past, some wrong calls or not. Especially since it matches up with the earlier calls of MS being behind. Matt's chiming in didn't say which was true, but one is.

The denials of bad rumors in MS' case during this ramp-up to next-gen have all wound up with egg on their faces. I don't see it ending now with multiple sources AND Matt. YMMV.


Yes it's a rumor, but a very believable one at this juncture IMO.

As far as I'm concerned, when some of your rumours are true and others not, that makes you an educated guesser, not a legit source. Sometimes you get it right, other times not.

The line between believable and not believable seems to be drawn between those on either side, predictably. The only thing that can be said for certain is that Microsoft is experiencing issues; taking that all the way to underclocking an already gimped GPU is the very definition of dubious and only smacks of FUD at this point.

So yeah, if our sources are only sometimes right then I'm more than ready to suggest this is one of the times they aren't.
 
Can we talk worst- and best-case scenarios here? One thing that is agreed is that Microsoft have a manufacturing problem, one that could be extremely serious or merely problematic depending on who you ask.

Worst-case scenario: the console is downclocked, gimping the generation, but releases at the same time as the competition.

Plus sides: the generation won't be fought on visuals alone, and the Wii U has a greater chance of remaining relevant (this is important for the health of the industry). Also, the Xbone will be whisper quiet, and the launch hardware can comfortably last the generation.

Negatives: Urgh. Just urgh.


Best-case scenario: yields are low, but Microsoft stagger the launch over the first holiday with a view to making up ground in 2014.

Plus sides: less chance of a SimCity launch; scarcity creates demand; Microsoft better equipped for a longer generation.

Negatives: automatically lose significant mindshare to Sony; third parties make less on launch hardware; may not be the lead platform for long; gamers less forgiving.

That about right?


In the long run, I'm not sure this is the greatest issue Microsoft faces. I still think this console will be significantly more profitable than the PS4, especially at first, and I honestly believe that's all Microsoft cares about.
 
I'm talking Wii U here in case you didn't notice, and the make-up is a well-known fact: 32MB eDRAM + 2MB eDRAM + 1MB SRAM. We've seen the die, after all.

I saw no real clarity in the attempted analysis of that die. Was wondering if Nintendo actually released something solid while I've been paying attention to other things.
 
Hmmmm

Currently we know that 10% of GPU resources are dedicated to the OS on the Xbox One. Even with a GPU downclock, I can't see MS reducing the number of GPU FLOPS dedicated to the OS, so I think it's reasonable to expect that the number of FLOPS dedicated to the OS would stay fixed regardless of a downclock.

10% of 1.229TFLOPS = 0.123TFLOPS dedicated to the OS.

A 100-200MHz reduction in the GPU clock speed would bring it down to 0.922-1.075TFLOPS.

Subtract the 0.123TFLOPS from that and you've got 0.799-0.952TFLOPS left for games.

Which is piss poor, if true.
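
For what it's worth, the arithmetic checks out. A quick Python sketch (the 10% OS reservation, the 800MHz base clock, and the 100-200MHz cuts are all rumored figures, and holding the OS share fixed in absolute FLOPS is this post's assumption, not anything confirmed):

```python
# Sketch of the rumored downclock math. All inputs are rumors, not specs.
base_clock_mhz = 800
base_tflops = 1.229              # rumored peak at 800MHz
os_tflops = 0.10 * base_tflops   # assumed fixed OS reservation, ~0.123 TFLOPS

for cut_mhz in (100, 200):
    new_clock_mhz = base_clock_mhz - cut_mhz
    # peak FLOPS scale linearly with clock speed
    new_tflops = base_tflops * new_clock_mhz / base_clock_mhz
    print(f"{new_clock_mhz}MHz: {new_tflops:.3f} TFLOPS total, "
          f"{new_tflops - os_tflops:.3f} left for games")
```

700MHz works out to 1.075 total / 0.952 for games, and 600MHz to 0.922 / 0.799, matching the ranges above.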
 

Septimius

Junior Member
Consider taking your own advice.

It's really a problem that I tell a guy to shove his junior-hostile shit, in response to him telling a junior to go to hell just for being a junior? I hate it when people resort to trying to make someone look bad simply because of a title. Being a junior is completely irrelevant to the quality of your posts and your knowledge of GAF. I wouldn't, however, be upset if someone was upset with someone else for a real reason. Which is why I said what I said.
 
I saw no real clarity in the attempted analysis of that die. Was wondering if Nintendo actually released something solid while I've been paying attention to other things.

Memory stuff has been pretty well deciphered at this point. It's just the design of the GPU itself that's a cause for confusion. It looks nothing like any part it could be based on.
 

blade85

Neo Member
Xbox 360 - Roughly 360 Gflops
Xbox 1 - Roughly 1 Tflops

Conspiracy!

Anywho, let's assume everything mentioned in these rumors is true... even so, the system will still be capable of pulling off some very impressive stuff as a closed platform.
 

wsippel

Banned
Wasn't it you telling me the other week that it was 1t-SRAM?
1T-SRAM is eDRAM. It might or might not be 1T-SRAM, which is just a marketing term for pseudostatic DRAM developed by MoSys, but it's probably pseudostatic either way. It most likely has to be. Renesas has its own flavor of pseudostatic DRAM called LLDRAM. The 3DS uses pseudostatic RAM as well, except in that case it's Fujitsu FCRAM. No matter what it's called, it's all essentially the same thing: DRAM with hidden refresh cycles.


I saw no real clarity in the attempted analysis of that die. Was wondering if Nintendo actually released something solid while I've been paying attention to other things.
The memory make-up is pretty much the one thing we do understand, but it's not quite clear how wide the individual buses are, so the bandwidth is still a mystery.
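
To illustrate why the bus width is the crux, here's a hypothetical sketch (the 550MHz figure assumes the eDRAM runs at the Wii U GPU clock; the candidate widths are made up for illustration, not leaked):

```python
# Hypothetical Wii U eDRAM bandwidth for a few candidate bus widths.
# Assumes the eDRAM is clocked with the 550MHz GPU; widths are guesses.
edram_clock_hz = 550e6

for bus_width_bits in (512, 1024, 2048):
    gb_per_s = (bus_width_bits / 8) * edram_clock_hz / 1e9
    print(f"{bus_width_bits}-bit bus -> {gb_per_s:.1f} GB/s")
```

Same die, anywhere from ~35 to ~141 GB/s depending on the width, which is why the bandwidth stays a mystery until the buses are pinned down.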
 
M°°nblade;61353121 said:
Exactly. There's no point in Senjutsu hiding behind 'opinions', because I never addressed any opinion part of his comments. Maths is not an opinion, and neither is having a source.

I don't care whether he's pro xbox or not. But I do care whether it affects the truthfulness of his statements.

This again. For the last time. I didn't care to address it because we've discussed the subject to death already in up to two other threads. You think it isn't a matter of opinion with regards to how you choose to calculate the differences between the two GPUs. I, in fact, do. I think the PS4 GPU being the stronger of the two must be the first variable in any calculation when looking at the difference in raw compute power between the two, that way you ensure a greater chance to properly view the One GPU as the 1.2TFLOP GPU that it is without introducing other biases into the result. When you start from the more powerful part and then calculate down to 1.2 TFLOPS, you get 33%. I think it's important to do it this way for one simple reason: When you do it the other way around, I feel that more factors than just peak theoretical compute power come into play, such as what represents 50% of a 1.2TFLOP GPU, which I don't think is as important when looking at the differences between peak theoretical performance.

You can call it accurate either way, because the math does indeed work out both ways. I acknowledge that going from lower to higher gives you 50%, but people seem not to be interested in acknowledging the opposite as valid in any way, which gives you 33%. You just choose to accept one over the other, as I choose to accept one over the other.

I think the top-down approach makes more sense for looking at true differences in peak compute performance. This isn't just a thing I like to do for console GPUs, I do it for many other things, too. I think the bottom-up approach makes more sense for showcasing how much the weaker part would have to be improved to match the stronger part, which factors in more than just the peak theoretical performance of each part, something I think goes beyond the scope of the exercise in the first place, which is why I frown upon the practice of doing it that way. To you and some others this difference may seem completely insignificant, or you may think there's no difference in meaning at all between the two approaches, but I don't see it that way, hence that qualifies as much more than a simple argument about math. We aren't arguing math, we are arguing over a preferred methodology, hence my opinion.

Now forgive me if I choose not to engage on this tired and run down issue again in the future. I'm not changing my view, and you aren't changing yours.
 
Memory stuff has been pretty well deciphered at this point. It's just the design of the GPU itself that's a cause for confusion. It looks nothing like any part it could be based on.

Cool. I haven't looked at the thread for a long time. Honestly, it was a bit of a clusterfuck last time I was there, so I was taking everything with a huge grain of salt.
 

Proelite

Member
Can we talk worst- and best-case scenarios here? One thing that is agreed is that Microsoft have a manufacturing problem, one that could be extremely serious or merely problematic depending on who you ask.

Worst-case scenario: the console is downclocked, gimping the generation, but releases at the same time as the competition.

Plus sides: the generation won't be fought on visuals alone, and the Wii U has a greater chance of remaining relevant (this is important for the health of the industry). Also, the Xbone will be whisper quiet, and the launch hardware can comfortably last the generation.

Negatives: Urgh. Just urgh.


Best-case scenario: yields are low, but Microsoft stagger the launch over the first holiday with a view to making up ground in 2014.

Plus sides: less chance of a SimCity launch; scarcity creates demand; Microsoft better equipped for a longer generation.

Negatives: automatically lose significant mindshare to Sony; third parties make less on launch hardware; may not be the lead platform for long; gamers less forgiving.

That about right?


In the long run, I'm not sure this is the greatest issue Microsoft faces. I still think this console will be significantly more profitable than the PS4, especially at first, and I honestly believe that's all Microsoft cares about.


A slight downclock isn't going to cause drastic differences like in the scenarios you detailed.

PS4 going from 192 GB/s to 176 GB/s was as trivial as this downclock.
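
For reference, the PS4 numbers pencil out like this; a sketch assuming the widely reported 256-bit GDDR5 bus (192 GB/s corresponds to the earlier 6.0 GT/s rumor, 176 GB/s to the announced 5.5 GT/s):

```python
# PS4 memory bandwidth at the rumored vs. announced GDDR5 data rates.
# Assumes a 256-bit bus, i.e. 32 bytes per transfer.
bus_bytes = 256 // 8

for data_rate_gtps in (6.0, 5.5):
    print(f"{data_rate_gtps} GT/s -> {bus_bytes * data_rate_gtps:.0f} GB/s")
# 6.0 GT/s -> 192 GB/s (early rumor), 5.5 GT/s -> 176 GB/s (announced)
```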
 
Just as a sidenote, next time someone says games culture is super masculine and competitive?

This is the sort of environment they mean.
because said competition is inherently a masculine thing?

I don't get how you can turn a thread about eSRAM of all things into a gender discussion, but you done did it. Congrats.
 
So am I wrong in thinking there's a big chance the Xbox One will be delayed for a bit due to this problem?

Underclocking the CPU seems like a last resort choice if it turns out they won't be able to get this out near the holiday season.
 
Cool. I haven't looked at the thread for a long time. Honestly, it was a bit of a clusterfuck last time I was there, so I was taking everything with a huge grain of salt.

Also, as wsippel just said: buses and bandwidth.

There's still a modicum of guesswork as to what every transistor does, or what it all amounts to, though.
 

Rashid

Banned
Even before this new rumor, just comparing the relative GPUs available on the market showed that a doubling of frame rates was possible. Add today's "news" and you may seriously want to recant.

I was going on the teraflop counts of 1.2 for the Xbone and 1.8 for the PS4. For games at 30fps like Battlefield, the Xbone might get 30fps and the PS4 45fps. Now, I assume most developers would prefer a locked 30fps rather than an awkward 45fps; I've heard 45fps causes juddering because it's at a weird ratio to the refresh rates of most TVs. So they'd add some more bells and whistles to the PS4 version and have better graphics at the same fps rather than the same graphics at an odd fps.
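
The back-of-the-envelope version of that guess, assuming framerate scales linearly with peak TFLOPS (it rarely does in practice, so treat this as a ceiling, not a prediction):

```python
# Naive framerate scaling by peak TFLOPS; real games rarely scale linearly.
xb1_tflops, ps4_tflops = 1.23, 1.84
xb1_fps = 30.0

ps4_fps = xb1_fps * ps4_tflops / xb1_tflops
print(f"~{ps4_fps:.0f} fps")  # ~45fps, the awkward spot below a locked 60
```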
 
A slight downclock isn't going to cause drastic differences like in the scenarios you detailed.

PS4 going from 192 GB/s to 176 GB/s was as trivial as this downclock.

Yup. And that cap was probably a result of proactively addressing yield issues to ensure enough supply.

Even if this is 100% true, it doesn't make it impossible for the Xbone to have games that will impress consumers enough to get them to buy the box; it just widens the performance gap between the two machines that we already knew about.

It's not worth telling juniors to go to hell or making ban bets over. Nor is it immediately going to make PS4 games have double the resolution/framerate of their Xbone counterparts.
 
If this follows through, though, I have a new comparison between the three consoles.

WiiU = M2 (similar poly-crunching capability to already-released consoles, along with a significant boost in texturing ability and a more modern featureset)
XOne = DC (a noticeable improvement over the prior generation, but still lacking the grunt of...)
PS4 = GC (the most efficient and easy-to-handle design of its respective generation while still being powerful)

And this has been another episode of The Thundering Monkey useless comparison minute!
 

jaosobno

Member
This shit can't be real, can it?

yesjacknicholson.gif
 

kitch9

Banned
This again. For the last time. I didn't care to address it because we've discussed the subject to death already in up to two other threads. You think it isn't a matter of opinion with regards to how you choose to calculate the differences between the two GPUs. I, in fact, do. I think the PS4 GPU being the stronger of the two must be the first variable in any calculation when looking at the difference in raw compute power between the two, that way you ensure a greater chance to properly view the One GPU as the 1.2TFLOP GPU that it is without introducing other biases into the result. When you start from the more powerful part and then calculate down to 1.2 TFLOPS, you get 33%. I think it's important to do it this way for one simple reason: When you do it the other way around, I feel that more factors than just peak theoretical compute power come into play, such as what represents 50% of a 1.2TFLOP GPU, which I don't think is as important when looking at the differences between peak theoretical performance.

You can call it accurate either way, because the math does indeed work out both ways. I acknowledge that going from lower to higher gives you 50%, but people seem not to be interested in acknowledging the opposite as valid in any way, which gives you 33%. You just choose to accept one over the other, as I choose to accept one over the other.

I think the top-down approach makes more sense for looking at true differences in peak compute performance. This isn't just a thing I like to do for console GPUs, I do it for many other things, too. I think the bottom-up approach makes more sense for showcasing how much the weaker part would have to be improved to match the stronger part, which factors in more than just the peak theoretical performance of each part, something I think goes beyond the scope of the exercise in the first place, which is why I frown upon the practice of doing it that way. To you and some others this difference may seem completely insignificant, or you may think there's no difference in meaning at all between the two approaches, but I don't see it that way, hence that qualifies as much more than a simple argument about math. We aren't arguing math, we are arguing over a preferred methodology, hence my opinion.

Now forgive me if I choose not to engage on this tired and run down issue again in the future. I'm not changing my view, and you aren't changing yours.

Dude's rewriting the rules of maths now.

Impressive stuff.
 
You're missing half a sentence there somewhere, a bit of difference in what?

Sorry, I'm just saying it's crazy to think that MS had better-running games because the 360 was the base platform and they ported to the PS3, yielding less than desirable results. In the case of this gen, I firmly believe Sony will have the better version all the time, because it's much easier to strip down than to "port up".
 

Neo C.

Member
Delaying the console is the lesser evil, methinks. MS bet on the wrong horse while Sony got lucky with GDDR5 RAM.
 

joshcryer

it's ok, you're all right now
When you start from the more powerful part and then calculate down to 1.2 TFLOPS, you get 33%.

X1 has a 33% less powerful GPU than PS4.

PS4 has a 50% more powerful GPU than X1.

If we're "starting from the more powerful system" the 50% number makes more sense, so I think you've just convinced yourself in this logic, man. You can't form a sentence with "more" comparing the two systems because the math inherently relies on greater than / less than concepts.

Not trying to start a fight, here. I agree it's basically arbitrary.
 

Septimius

Junior Member
This again. For the last time. I didn't care to address it because we've discussed the subject to death already in up to two other threads. You think it isn't a matter of opinion with regards to how you choose to calculate the differences between the two GPUs. I, in fact, do. I think the PS4 GPU being the stronger of the two must be the first variable in any calculation when looking at the difference in raw compute power between the two, that way you ensure a greater chance to properly view the One GPU as the 1.2TFLOP GPU that it is without introducing other biases into the result. When you start from the more powerful part and then calculate down to 1.2 TFLOPS, you get 33%. I think it's important to do it this way for one simple reason: When you do it the other way around, I feel that more factors than just peak theoretical compute power come into play, such as what represents 50% of a 1.2TFLOP GPU, which I don't think is as important when looking at the differences between peak theoretical performance.

You can call it accurate either way, because the math does indeed work out both ways. I acknowledge that going from lower to higher gives you 50%, but people seem not to be interested in acknowledging the opposite as valid in any way, which gives you 33%. You just choose to accept one over the other, as I choose to accept one over the other.

I think the top-down approach makes more sense for looking at true differences in peak compute performance. This isn't just a thing I like to do for console GPUs, I do it for many other things, too. I think the bottom-up approach makes more sense for showcasing how much the weaker part would have to be improved to match the stronger part, which factors in more than just the peak theoretical performance of each part, something I think goes beyond the scope of the exercise in the first place, which is why I frown upon the practice of doing it that way. To you and some others this difference may seem completely insignificant, or you may think there's no difference in meaning at all between the two approaches, but I don't see it that way, hence that qualifies as much more than a simple argument about math. We aren't arguing math, we are arguing over a preferred methodology, hence my opinion.

Now forgive me if I choose not to engage on this tired and run down issue again in the future. I'm not changing my view, and you aren't changing yours.

You're arguing how to represent a number.

The PS4 is 150% of the power of the Xbone (50% faster).
The Xbone is 66% of the power of the PS4 (33% slower).

They're equal. It's representation. There's no differentiation in the math; the math is exactly equal. This is not about math, it's about favorable representation. It's PR. You're continuing your crusade to save the Xbone's image. This is why your 'inside information' is meaningless. And just for the record, representing things upwards is the mathematically simplest way to express a difference. Saying "I have 33% less than you" is an inherently confusing statement, at least more so than "I have 50% more than you".

And I think your third paragraph is just a big justification for doing what you do. We should use 66% and 150%, respectively. That simplifies things.
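
Since the whole spat is over which baseline to divide by, here are both versions side by side (using the public peak figures; the only thing that changes is the denominator):

```python
# Same gap, two baselines; only the denominator differs.
ps4_tflops, xb1_tflops = 1.84, 1.23

print(f"PS4 is {(ps4_tflops / xb1_tflops - 1) * 100:.0f}% faster than X1")  # ~50%
print(f"X1 is {(1 - xb1_tflops / ps4_tflops) * 100:.0f}% slower than PS4")  # ~33%
```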
 

Gotchaye

Member
A slight downclock isn't going to cause drastic differences like in the scenarios you detailed.

PS4 going from 192 GB/s to 176 GB/s was as trivial as this downclock.

Seems like it might matter more to the extent that the GPU, or whatever is being downclocked, is where the system's choking. Plausibly that extra 16 GB/s of bandwidth isn't going to make a difference because nothing ever needs more than 176 GB/s, but it's really easy to imagine uses for GPU flops well in excess of what either of these consoles has.

Dude's rewriting the rules of maths now.

Impressive stuff.

Holy crap.
 
SenjustsuSage was the guy arguing for the 8GB DDR3 + eSRAM combo versus unified 8GB GDDR5, saying something about latency being a big advantage for the former.

So in short - I don't have any confidence in anything he is saying.
 
Dude's rewriting the rules of maths now.

Impressive stuff.

Seems AnandTech doesn't entirely disagree with my disdain for the math. Agree to disagree; the math works both ways. :)

http://www.anandtech.com/show/6972/xbox-one-hardware-compared-to-playstation-4/2

Microsoft can’t make up the difference in clock speed alone (AMD’s GCN seems to top out around 1GHz on 28nm), and based on current leaks it looks like both MS and Sony are running their GPUs at the same 800MHz clock. The result is a 33% reduction in compute power, from 1.84 TFLOPs in the PS4 to 1.23 TFLOPs in the Xbox One. We’re still talking about over 5x the peak theoretical shader performance of the Xbox 360, likely even more given increases in efficiency thanks to AMD’s scalar GCN architecture (MS quotes up to 8x better GPU performance) - but there’s no escaping the fact that Microsoft has given the Xbox One less GPU hardware than Sony gave the PlayStation 4.

SenjustsuSage was the guy arguing for the 8GB DDR3 + eSRAM combo versus unified 8GB GDDR5, saying something about latency being a big advantage for the former.

So in short - I don't have any confidence in anything he is saying.

Incorrect and completely misleading. I acknowledged the entire time that the PS4 is stronger and has a simpler, easier-to-utilize design. I was pointing out that the architectural makeup of the Xbox One isn't as terrible and as weak as people were implying, and that it would be able to produce great games. I argued that eSRAM isn't a drastic departure from what developers were used to on the Xbox 360, and that for all the screaming and hollering over Microsoft making a complicated-to-develop-for system, the Xbox One will actually be even easier to develop for than the Xbox 360. And when you factor in the similarity in GPU and CPU architecture between the One and the PS4, that's yet another thing that should make life easier for developers. Pretty much the way people do things on here is to completely misrepresent what I've said. Nonetheless, it comes with the territory. Carry on. :)
 

Nikodemos

Member
Delaying the console is the lesser evil, methinks. MS bet on the wrong horse while Sony got lucky with GDDR5 RAM.
Not certain of this. Given the current 'long generation', I suspect that a worldwide launch delay of longer than a couple of weeks would cause terrible losses. MS would be better served to simply abandon Japan and accept defeat across Europe (including the UK), and concentrate on where its console's particular features would work best, namely the USA (therefore accepting the supply restrictions and diverting supply to where it would be most needed).
 

Mxrz

Member
So the Xbone 32X, 2014 or 2015, you think?

Yeah, I saw that, but it just seems to happen all too often on here. I remember a recent thread where someone claimed to have a list of games for a Nintendo Direct or something, and people were already saying he might be banned for being wrong. Reiko, mentioned in this thread, was most likely also banned for some wrong info. Not really a friendly place to share leaked information when everyone is questioning you.

There's having wrong info and then there's acting like a smug douche while touting your magical sources as some end-all-be-all counter to everything.
 