
DigitalFoundry: X1 memory performance improved for production console (ESRAM 192 GB/s)

I think this quote speaks for itself, for the most part. Optimizing games for the Xbone will be a much harder task than for the PS4 due to the ESRAM implementation. In other words, the PS4 will be much easier to develop for compared to the Xbone.


I am pretty sure that BF 4 won't be 1080p on Xbone. 60 FPS and 1080p is certainly not possible.


It can and will be possible on both PS4 and Xbone. The difference between PS4 and Xbone is not as large as most people expect, and an even bigger factor is that launch games are still being very conservative.
 
Well,
800 MHz x 128 bytes (the 1024-bit eSRAM bus) = 102 GB/s
According to the article, the flux capacitor found by MS allows reading and writing at the same time (Intel, hire these guys), so the theoretical max bandwidth should be 102 x 2 = 204.
But the news says 192, which takes us to 96 GB/s read and 96 GB/s write, and so to 750 MHz x 128 bytes.
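Quick sanity check of that arithmetic in Python (this assumes the 1024-bit, 128-byte-per-clock eSRAM interface from the leaked specs; none of the clock speeds below are confirmed):

```python
# Back-of-the-envelope eSRAM bandwidth check (assumed 1024-bit / 128-byte-per-clock bus).
BYTES_PER_CYCLE = 128

def peak_bandwidth_gbs(clock_mhz, dual_issue=False):
    """Peak bandwidth in GB/s; dual_issue counts a simultaneous read + write as double."""
    one_way = clock_mhz * 1e6 * BYTES_PER_CYCLE / 1e9
    return one_way * 2 if dual_issue else one_way

print(peak_bandwidth_gbs(800))                    # ~102.4 GB/s, read OR write
print(peak_bandwidth_gbs(800, dual_issue=True))   # ~204.8 GB/s theoretical ceiling
print(peak_bandwidth_gbs(750, dual_issue=True))   # ~192.0 GB/s, the reported figure
```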

Downgrade or what?


That's the speculation that's going on over at Beyond3D. Sounds like a downgrade in clock. And... err... a flux capacitor bi-directional bus and magic fairy dust.
 

-Amon-

Member
Reading the article, it looks like MS is trying to say that even they have no clear idea of what their hardware is capable of...
 
That doesn't make any sense. If cache + DDR3 were as effective as GDDR5, every GPU maker would be using it.

They could, if they were willing to sell a product with a tremendous performance decrease for already-released titles, one that required all future titles to have optimizations that would only benefit their hardware.

That, or if they developed magical drivers that seamlessly did all the required work for the developer.
 
How can you not 100% know what your hardware is capable of if you're the one building the box?

It's fairly simple: it's all about yields. You won't know what temperatures your chips can run stably at until you're actually manufacturing them. Some will run stable at higher performance levels than others. Generally with something like this, it's more about how good your manufacturing process is.

Say Microsoft hoped 90% of the chips would be stable at 176 GB/s; once they scaled up manufacturing, they discovered that 90% of the chips were actually stable at 192 GB/s.
 

strata8

Member

Remember the 256GB/s bandwidth figure from earlier? It turns out that that's not how much bandwidth is between the parent and daughter die, but rather the bandwidth available to this array of 192 floating point units on the daughter die itself. Clever use of words, no?

http://www.anandtech.com/show/1689/2

I don't know man, but I think Anand knows more about this than you do.
 

chubigans

y'all should be ashamed
That's the speculation that's going on over at Beyond3D. Sounds like a downgrade in clock. And... err... a flux capacitor bi-directional bus and magic fairy dust.

It would be hilarious if Leadbetter inadvertently confirmed the downclock rumor in an article trying to dismiss it.
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
Are we still playing that game where we think DF are in Microsoft's pocket? Because I thought everyone got bored of that years ago.

What does boredom have to do with the truth? Everyone has a bias; it is important to keep that in mind when you read a "technical" article about a company the author seems invested in.

It is like reading a gun control article written by a parent whose child was shot; it is an important fact to keep in mind.
 

Snubbers

Member
Wow, some good news for MS..

Whilst peak bandwidth is meaningless, the MS architecture is not without its own merits, and it's obvious from listening to insiders' opinions that it does help improve system efficiency. Not to the point anyone thinks it'll even match the PS4, but certainly way more than people who cling to #CUs + GDDR5 vs DDR3 as the only quantifier of performance think..

I always thought MS should have gone safe with things, but actually, with Intel now offering EDRAM as an efficiency boost (providing devs use it correctly), there does seem to be a little method in this asymmetrical madness...
 
That actually makes sense. I don't get how they'd just now figure out it can simultaneously read and write. Can the PS4's memory simultaneously read and write?

Or did they change the memory controller between the leaked docs and now?
I'm not a hardware guy, so what I'm saying could be bullshit.
 
What does boredom have to do with the truth? Everyone has a bias; it is important to keep that in mind when you read a "technical" article about a company the author seems invested in.

It is like reading a gun control article written by a parent whose child was shot; it is an important fact to keep in mind.

What 'fact' do you have to say DF is biased towards MS?

Anyways, I never thought memory bandwidth was going to be an issue on the Xbone in the first place. The architecture is one we know to be quite workable in consoles. It still isn't as good (or as simple to code for) as the PS4's, but it isn't crazy or bad. It's still likely true to say that the PS4 won't have any problems at all seeing a game ported to it that was designed around the memory architecture of the Xbone. This was not always the case when going from Xbox 360 to PS3.

Going from something designed around the PS4's memory architecture to the Xbone's will be more work, but I don't think many games will be leading on PS4 and then being ported to Xbone as an afterthought.
 
This is a terrible article that is truly grasping at straws to try to paint the XB1 in a light where it might be considered (in some bizarre parallel universe) more capable than it actually is. Truly a low point for Digital Foundry, and I must admit that their recent articles regarding next-gen consoles have caused me to lose a lot of respect for them.

Very little written makes much sense.

I mean, why would the designers of the XB1 HW not know that the eSRAM is capable of both reading and writing to and from memory in certain peculiar cases?

Why does this only apply to production HW if it is merely a facet of the "unused cycles" and not anything like a change in clock speed in production HW (thus creating more cycles in a given time period)?

Why would this somehow present developers with a significant increase in memory performance across all operations, when in actual fact this concurrent read/write ability only occurs in very specific cases?

And why would this imply anything, either negative or affirmative, about the potential yield problems that MS may or may not be having with their APU? It's actually entirely irrelevant.

I must wonder who DF's sources are for this info... I understand they have an even track record, but even if this is all true and came down from a real update given to XB1 developers, DF's interpretation of it in this article is just abysmal. Really, really bad.

Either way, this is nothing at all to be excited about. An increase in eSRAM bandwidth is the last thing that MS would need to close the gap between their console and the PS4. Had this been about a change in main memory bandwidth (which is all but impossible with 256-bit DDR3), then it would have been much more interesting.
 
Yes? Improvements are usually a good thing, right?

Yes, they are. But people in this thread are acting like this one bandwidth improvement (theoretical? final?) defines and changes everything.

DF just threw a slice of cheese into a cage full of hungry rats.
 

DoomGyver

Member
After all this talk about the downclocking rumor due to low yields, now we're being told that it is more powerful than we first believed. Man, I get the feeling MS is setting themselves up for another RROD fiasco. At least it has a big fan and plenty of case room.
 

Shayan

Banned
Wow, people are believing this!!!

The 32MB of ESRAM is basically not even adequate for large algorithms; these days even a routine like moving a hand could take more than 32MB. If there are 10 routines each wanting to access the ESRAM pool, then the only way of accessing it would be when one process has been unloaded or killed. The Xbone's RAM bandwidth for major ops would remain 68GB/s. No way of fooling people with this 192GB/s CLOUD-LIKE number.

Can't believe they are combining the ESRAM and DDR3 RAM to fool people.
 
Wow, some good news for MS..

Whilst peak bandwidth is meaningless, the MS architecture is not without its own merits, and it's obvious from listening to insiders' opinions that it does help improve system efficiency. Not to the point anyone thinks it'll even match the PS4, but certainly way more than people who cling to #CUs + GDDR5 vs DDR3 as the only quantifier of performance think..

I always thought MS should have gone safe with things, but actually, with Intel now offering EDRAM as an efficiency boost (providing devs use it correctly), there does seem to be some method in this asymmetrical madness..

Not necessarily good news; it may be the result of a downclock. Going to be interesting to see how this plays out.


To your last paragraph though, MS did go the safe route. They knew from day one that they wanted 8GB of RAM. DDR3 was the only safe way to make that happen when they were designing the unit. Sony wanted unified memory with GDDR5 and first targeted 2GB... then 4GB... and then right before the reveal found out that 8GB was feasible. There is no way that Cerny and co. could have seen this coming; affordable, mass-produced 512MB GDDR5 chips weren't supposed to be ready yet. So they shocked everyone, including Microsoft and first-party devs, when they announced 8GB. But the reason they got there was dumb fucking luck. Microsoft thought they were going to have a memory advantage from day one and got unlucky.
 
"I don't know how Microsoft managed to fit more bandwidths into a box the same size. Where did they put them?

I'm glad they have anyway because they are my favourite and now their console is more powerful than the PS4."

-This Thread
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
What 'fact' do you have to say DF is biased towards MS?

Not DF, Leadbetter. His posting history as 'grandmaster' at B3D and his frequent editorializing in his technical articles at DF. IMO it is not even hidden; it is easy to see if you read what he writes. The one time he totally gave up any illusion of objectivity was the FFXIII versus article; that was lol-worthy, he sounded like he was crying.
 

borghe

Loves the Greater Toronto Area
Doesn't 192GB/s > 176GB/s?

It also refutes the claim that MS has downclocked.

uggh..

Please stop just looking at numbers and trying to guess your way through the rest.

Sony - unified memory runs at 176GB/s.
MS - unified memory runs at 68GB/s; the on-die 32MB eSRAM cache runs at 102GB/s.

MS is now saying that when combining read and write operations in the same clock cycle, that can go up to 192GB/s. DF's info says that real-world performance looks to be closer to 133GB/s. However, we are AGAIN talking just about the 32MB eSRAM cache on die. This does not mean that the XBONE's memory architecture now magically runs at 192GB/s, 133GB/s, or even 102GB/s. It means that where it works to pre-load data through the cache, it will be read to the GPU at a faster bandwidth, but the unified system memory is still half as fast.
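To put those figures side by side, here's a rough, hedged sketch (it again assumes the leaked 128-byte-per-clock eSRAM interface; the numbers in the thread can't distinguish a lower clock with full dual-issue from an 800 MHz part that dual-issues on most cycles):

```python
# Hedged sketch relating the quoted eSRAM figures; the 68 GB/s DDR3 pool is unchanged either way.
def esram_bw_gbs(clock_mhz, dual_issue_fraction):
    """Effective eSRAM GB/s if `dual_issue_fraction` of cycles do a read AND a write."""
    one_way = clock_mhz * 1e6 * 128 / 1e9
    return one_way * (1.0 + dual_issue_fraction)

print(esram_bw_gbs(800, 0.0))    # ~102.4 GB/s: the original one-direction peak
print(esram_bw_gbs(800, 0.875))  # ~192 GB/s: dual-issue on ~7 of every 8 cycles at 800 MHz...
print(esram_bw_gbs(750, 1.0))    # ...or the same ~192 GB/s at 750 MHz with full dual-issue
print(esram_bw_gbs(800, 0.3))    # ~133 GB/s: roughly DF's claimed "real world" figure
```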

Trying to compare the two directly next to each other is pointless. We probably truly won't even START to see the real differences in the architectures until 2015 or 2016 at the earliest.
 

CorrisD

badchoiceboobies
Has MS actually released the specs of XB1 yet?

Nothing detailed past basic amounts as far as I am aware, at least not officially to the general public: 8-core CPU, 8GB RAM, 32MB eSRAM, and then the typical WiFi, HDD, USB, etc. There's no reason to think the leaked specs from beforehand aren't the real ones, seeing as everything else has lined up with them.
 

Jagerbizzle

Neo Member
Wow, people are believing this!!!

The 32MB of ESRAM is basically not even adequate for large algorithms; these days even a routine like moving a hand could take more than 32MB. If there are 10 routines each wanting to access the ESRAM pool, then the only way of accessing it would be when one process has been unloaded or killed. The Xbone's RAM bandwidth for major ops would remain 68GB/s. No way of fooling people with this 192GB/s CLOUD-LIKE number.

Can't believe they are combining the ESRAM and DDR3 RAM to fool people.

What exactly are you suggesting then, that the esram is useless? If it can't be used for even a routine such as "moving a hand", what do you think it can be used for?

Please elaborate.
 

Cidd

Member
Lol @ people thinking bandwidth will close the gap in power. Let's forget the 5GB vs 6/7GB RAM and a 50% more powerful GPU, but nah, that bandwidth...
 

Dunlop

Member
This is a terrible article that is truly grasping at straws to try to paint the XB1 in a light where it might be considered (in some bizarre parallel universe)

I think they are in cahoots with Adam Sessler also...

I swear, every freaking story that has any kind of non-condemning news for the XB1 is painted as a shill or fluff piece.

Someone needs to change the year of the PS3 jpeg to who is in MS's pocket.

I'd like to hear what actual developers say about this.
 
So it might actually be due to a downclock? I don't know much about how this stuff works, but that would be very unfortunate. Also, massive levels of fail on DF's part if they mistook a downclock for an upgrade.

But I guess this could mean a downclock, but still better ESRAM bandwidth than previously thought?

Anyway, let's hope there is no downclock.
 
It's definitely the GPU difference. Devs can and will work around the RAM differences. The GPU power difference they cannot.

They can possibly do something like use the same number of GPU cores as the Xbone for rendering and use the extra cores for advanced physics and particle effects, or just for faster rendering. It's the GPU all the way.


Ding ding ding!


Everyone is focusing on the wrong thing. GDDR5 is just a different solution to make sure the system doesn't run into any bottlenecks. If both systems are balanced in a way that neither has any bottlenecking, having GDDR7 @ 8000MHz adds nothing.

If a race car needs to get from point A to point B as fast as possible, and you have an open highway with no traffic, then it's all up to the speed of the car (CPU/GPU) to make the difference; adding more lanes for the slower car wouldn't help at all. The Xbox One's GPU has fewer shader cores, so the GPU in the PS4 has a slight edge in performance. You will be getting better numbers, but it's not that big of a difference, probably even less of a difference than between the Xbox and GameCube, the latter being compliant with an older generation of DX/OGL.

Anyone expecting a drastic difference between PS4 and X1 will be drastically disappointed :p
 

Snubbers

Member
I mean, why would the designers of the XB1 HW not know that the eSRAM is capable of both reading and writing to and from memory in certain peculiar cases?

I don't think it's as daft as it seems... if you look at the memory timings, you see masses of clock cycles just 'waiting' in the various stages of the transaction. Those are required for many reasons, one of which is stability. I can imagine they'd have set the timings at design time to conservative values, only to find when they got final silicon that noise levels were better than expected (or other factors affecting stability), and so they can shave a few clock cycles here and there with the timings.

I'm basing this on currently working with ARMs talking to many different peripherals as memory-mapped IO, where we design worst-case timings until we get the final hardware, and we often find we can tighten up the timings by a reasonable margin.
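As a purely hypothetical illustration of that last point (the cycle counts are made up and have nothing to do with the actual XB1 silicon), shaving conservative wait cycles off each transaction raises effective throughput without touching the clock:

```python
# Hypothetical memory-mapped IO timing example; all numbers are illustrative only.
CLOCK_MHZ = 800
BYTES_PER_TRANSFER = 128

def effective_gbs(transfer_cycles, wait_cycles):
    """Effective GB/s when each transfer carries `wait_cycles` of safety margin."""
    cycles = transfer_cycles + wait_cycles
    transfers_per_sec = CLOCK_MHZ * 1e6 / cycles
    return transfers_per_sec * BYTES_PER_TRANSFER / 1e9

print(effective_gbs(1, 1))  # conservative design-time timings: ~51.2 GB/s
print(effective_gbs(1, 0))  # wait cycle shaved on final silicon: ~102.4 GB/s
```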
 
I am no expert, but while the small amount of ESRAM seems like it will come in handy, I don't get why people think it can magically make up the difference between DDR3 and GDDR5.
 

Jagerbizzle

Neo Member
Lol @ people thinking bandwidth will close the gap in power. Let's forget the 5GB vs 6/7GB RAM and a 50% more powerful GPU, but nah, that bandwidth...

Dude, what do you think devs are going to be using the extra RAM for in the near term? More than 5GB being used in anything other than a completely wasteful manner will not start happening until way, way later in this generation.
 
I think fundamentally you have misunderstood or simply not read the article.

The stated reason for the "increase" in memory performance is phantom "unused cycles" and a previously unknown ability of the eSRAM to read and write simultaneously (albeit, rather inexplicably, only production HW can do this *roll eyes*), when previously it was only considered able to do one or the other.

This has nothing to do with yields.

Fair enough. However, it's very common for a hardware manufacturer to not know what their performance is going to be due to yields. I was wrong on the specifics here, but I took the question as a general one. The notion that it shows some kind of problem or incompetence on Microsoft's part is nonsense. Hopefully that's clear.

It's worth stating, as you did, that this instance has nothing to do with yields. My post was misleading, especially in specifically mentioning the RAM bandwidth.
 
Not really my fault you took offence at a neutral statement and decided to be condescending.

Condescending? I specifically said I didn't know, but it looked like you were wrong. And you pull one source to say that Anandtech knows more about it than I do?

Well, no shit Anandtech knows more than me; that's why I provided two sources for the information I knew. But because you are really into this, wearing your heart on your sleeve, you decided to be passive-aggressive.

Anyway, it is what it is.
 

Dunlop

Member
Nah, each system has differing priorities. The Xbone is primarily targeting the DVR market; Sony is interested in the games market.

It's not one or the other.

MS's first party companies do not care about the "DVR" market and will work just as hard as Sony's first party.

For now I can see it being a draw, with Sony getting the edge a few years down the line
 

Tulerian

Member
Sounds like MS are trying to hide a downclock with supposed good news, where under very specific circumstances they think they have observed some test results indicating the ability to read/write simultaneously.

Smoke and mirrors.

I don't believe anything this company says right now, until verified by third parties.
 

Cidd

Member
Dude, what do you think devs are going to be using the extra RAM for in the near term? More than 5GB being used in anything other than a completely wasteful manner will not start happening until way, way later in this generation.

Keep telling yourself that. Devs will find a way to use any option given to them.
 
Nah, each system has differing priorities. The Xbone is primarily targeting the DVR market; Sony is interested in the games market.


They said it was a gaming system first, before anything else. If you think they're lying, that's another story, and if you're judging by the reveal, that's what the reveal was for; E3 focused purely on games. The PS4 actually showed a lot of TV stuff at E3; that doesn't mean it's their main focus.
 
Oh fucking come on. You really think we should believe that, right after news started spreading that the system was inferior to the PS4 because of the HD IR camera's cost and was being downclocked due to yield problems, all of a sudden some magical performance boost was discovered? Really? This is way too convenient.
 

DieH@rd

Banned
Speculative article that is full of rumors + tech speak + GAF = hilarity.

If this news is true... then it means that the 33% slower GPU of the Xbone will not be bottlenecked when accessing the smallish 32MB of integrated ESRAM [the PS4 doesn't have these problems; devs demanded an easy architecture, and Sony provided]. Nothing much has changed on the performance side, except that the life of high-end game programmers will become a little easier when they're trying to optimize what to put in that small memory pool.
 
I don't think it's as daft as it seems... if you look at the memory timings, you see masses of clock cycles just 'waiting' in the various stages of the transaction. Those are required for many reasons, one of which is stability. I can imagine they'd have set the timings at design time to conservative values, only to find when they got final silicon that noise levels were better than expected (or other factors affecting stability), and so they can shave a few clock cycles here and there with the timings.

I'm basing this on currently working with ARMs talking to many different peripherals as memory-mapped IO, where we design worst-case timings until we get the final hardware, and we often find we can tighten up the timings by a reasonable margin.

This is actually great info.

Appreciated.
 

LiquidMetal14

hide your water-based mammals
I wish someone from MSFT would just outline their platform like we've seen Mark Cerny do.

On one hand we have all this info and big wording from MSFT.

On the other hand you have Mark Cerny, who has already made two speeches, one in February and one just a day or two ago. There's nothing but transparency in his words.
 
I wish I had the technical understanding to even participate in these conversations; all I know is that if the downclock thing turns out to be true... yikes.
 