
DigitalFoundry: X1 memory performance improved for production console (ESRAM 192 GB/s)

It's not one or the other.

MS's first party companies do not care about the "DVR" market and will work just as hard as Sony's first party.

For now I can see it being a draw, with Sony getting the edge a few years down the line


How wide would this edge be?
 

Drek

Member
http://www.anandtech.com/show/1689/2

I don't know man, but I think Anand knows more about this than you do.

Doesn't that just prove his original point though?

This 192GB/s figure comes from the exact same source as the 256GB/s EDRAM figure for X360.

Anyhow, a few quibbles I have with this:

1. They act like it can read and write simultaneously, but then do the math as though it can only read and write simultaneously 88% of the time. Why can't it do so the other 12%? That makes no mathematical sense. Unless they downclocked to 750MHz and this is fancy PR spin to make it sound like it won't be a problem (at which point the numbers add up perfectly to their 192GB/s theoretical, FYI).

2. If the theoretical is really 192GB/s and their best "believed" real-life scenario is 133GB/s during an idealized test, that is some pretty unoptimized testing (rough arithmetic below). Are they really unable to utilize more than 69% of the ESRAM's bandwidth in real-world testing? Why in the world would we expect developers to do any better?
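For what it's worth, a back-of-the-envelope check on those numbers (a sketch only; the 128-bytes-per-cycle ESRAM path width is the commonly rumored figure, not a confirmed spec):

```python
# Rough ESRAM bandwidth arithmetic (128 bytes/cycle per direction is an
# assumption from the rumor mill, not a confirmed spec).
BYTES_PER_CYCLE = 128  # one direction (read OR write)

def one_way_gbs(clock_hz):
    return BYTES_PER_CYCLE * clock_hz / 1e9

for mhz in (800, 750):
    one_way = one_way_gbs(mhz * 1e6)
    print(f"{mhz} MHz: {one_way:.1f} GB/s one-way, {2 * one_way:.1f} GB/s if reading and writing every cycle")

# 800 MHz -> 102.4 GB/s one-way, 204.8 GB/s combined peak.
# 750 MHz -> 96.0 GB/s one-way, 192.0 GB/s combined peak (matches the quoted theoretical exactly).
# Getting 192 at 800 MHz instead requires the second direction on ~87.5% of cycles:
# 102.4 + 0.875 * 102.4 = 192. And 133 / 192 is roughly 69% utilization.
```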

The article then goes out of its way to mention that "to the best of their knowledge" the GPU and CPU are both still at 800MHz and 1.6GHz, and then specifically cites that this is at parity with the PS4 (despite all the extra cores the PS4's GPU has, which is what really matters for real performance).

Reads like a piece of FUD to me that Eurogamer was more than happy to put up because it's a guaranteed hit generator.
 

borghe

Loves the Greater Toronto Area
I am no expert, but while the small amount of ESRAM seems like it will come in handy, I don't get why people think it can magically make up the difference between DDR3 and GDDR5.

Make up the difference? No. And with multi-plat titles, until the tools mature we probably won't see any benefit from it, and even then only a little. PS4 multi-plat games will probably still pull away by around 2-3 years in.

However, there is definitely some cool stuff to having the fast cache on-die. I'm guessing by the middle of the gen xbone exclusives will start putting many multi-plat xbone titles to shame.

That's kind of the problem with a complicated architecture. It's the same reason the PS3 has typically gotten the short end of the stick on almost every multi-plat game this gen, yet at the same time also has hands-down the best-looking games of the entire generation.
 

DieH@rd

Banned
I wish someone from MSFT would just outline their platform like we've seen Mark Cerny do.

On one hand we've had all this info and big wording from MSFT.

On the other hand you have Mark Cerny, who has made two speeches already, one in February and one just a day or two ago. There's nothing but transparency in his words.

They can't, when they have slower hardware. And in a sense, they did almost nothing new with the Xbone architecture. They repeated the basic design of the X360, only with x86.
 

jaypah

Member
Yes they are. But people in this thread are acting like this one bandwidth improvement (theoretical, final?) defines and changes everything.

DF just threw a slice of cheese into a cage full of hungry rats.

Right, and people from both sides went crazy for it. Your post was fine until "lol, any news is good news, right haha". The delusional people have been corrected numerous times. Whatever, the overly hostile tone in these next-gen threads is disappointing sometimes, and my coffee hasn't kicked in yet.
 

PFD

Member
Sony considered going with high bandwidth eDRAM, but then decided against it because it would make the PS4 difficult to develop for.

Here's a slide from Cerny's talk:

[slide image]
 
I wish someone from MSFT would just outline their platform like we've seen Mark Cerny do.

On one hand we've had all this info and big wording from MSFT.

On the other hand you have Mark Cerny, who has made two speeches already, one in February and one just a day or two ago. There's nothing but transparency in his words.

Not going to happen. Too many checklist items in favor of the PS4.
 

Synless

Member
In the end, does it even matter? The X1 still only has 5 GB out of the 8 that it can use for games. I can't see any real advantage this will give them when push comes to shove.
 
I wish someone from MSFT would just outline their platform like we've seen Mark Cerny do.

On one hand we've had all this info and big wording from MSFT.

On the other hand you have Mark Cerny, who has made two speeches already, one in February and one just a day or two ago. There's nothing but transparency in his words.
Given the state of Microsoft right now, you'd probably have to wait for the PR statement for it.
 
I think this quote speaks for itself, for the most part. Optimizing games for the Xbone will be a much harder task than for the PS4 due to the ESRAM implementation. In other words, the PS4 will be much easier to develop for compared to the Xbone.


I am pretty sure that BF 4 won't be 1080p on Xbone. 60 FPS and 1080p is certainly not possible.

Unlike on the 360, render targets can also reside in main RAM on the Xbone.

One possible scenario for a deferred game would be having the G-buffer in ESRAM (useful, because some screen-space effects benefit from low latency) feeding the GPU, which then renders the render targets out to main RAM.

The bandwidth figures from VGLeaks show that the memory setup was intended to be used with operations taking place in both memory pools at the same time. Another important thing to notice: in those scenarios there is always room for the DMEs to operate in parallel with the GPU too, for example.
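To put a number on why the G-buffer is the obvious candidate for the 32MB pool, here's a hypothetical sizing sketch (the layout below is made up for illustration, not any actual title's format):

```python
# Hypothetical 1080p deferred G-buffer sizing (illustrative layout, not a real game's).
WIDTH, HEIGHT = 1920, 1080
MB = 1024 * 1024

layout = [
    ("albedo (RGBA8)",        4),  # bytes per pixel
    ("normals (RGB10A2)",     4),
    ("material (RGBA8)",      4),
    ("depth/stencil (D24S8)", 4),
]

total = 0
for name, bpp in layout:
    size = WIDTH * HEIGHT * bpp
    total += size
    print(f"{name:22s} {size / MB:5.1f} MB")
print(f"{'total':22s} {total / MB:5.1f} MB of the 32 MB ESRAM")
# ~31.6 MB: a fat 1080p G-buffer roughly fills the pool, which is exactly why
# spilling some targets to main RAM, as described above, comes up.
```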

Let me just point something out. The 10 MB of eDRAM on the 360 was capable of 256GB/s.

I don't know exactly what that means in this context.

Different scenarios.

For the ROPs in the 360, they needed to be quite small so as not to make the daughter die too huge, so they had to be as simple as possible. Since they sit on top of a pool of memory on the same die, they could put in a memory controller which allowed all of the ROPs to write everything they could at once to that memory, so no compression was needed. 256GB/s is simply how much the ROPs can write, and they can only achieve that figure while writing with 4x sampling, which quadruples the amount of data written.

Xbone ROPs are more in line with current GPUs, which have more complex ROPs that use compression to minimize the required bandwidth, so they can write many more pixels, with higher-precision formats, while requiring less bandwidth.
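For reference, this is the arithmetic usually given for that 256GB/s Xenos figure (treat the per-clock numbers as the commonly cited community figures rather than official documentation):

```python
# Commonly cited derivation of the Xenos daughter-die 256 GB/s figure
# (per-clock numbers are the usual community figures, not official docs).
rops             = 8      # pixels per clock
samples          = 4      # 4x MSAA expands each pixel into 4 samples
bytes_per_sample = 8      # 32-bit color + 32-bit Z
rmw_factor       = 2      # blending / Z-test means read-modify-write
clock_hz         = 500e6  # daughter-die clock

print(rops * samples * bytes_per_sample * rmw_factor * clock_hz / 1e9, "GB/s")
# 256.0 -- and, as noted above, it's only reachable while writing with 4x sampling.
```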
 

strata8

Member
Condescending? I specifically said I didn't know, but it looked like you were wrong. And you pull out one source to say that Anandtech knows more about it than I do?

Fair enough. It sounded like sarcasm when I read it, and came off as really douchey. But I guess that's not what you intended.
 
I wish someone from MSFT would just outline their platform like we've seen Mark Cerny do.

On one hand we've had all this info and big wording from MSFT.

On the other hand you have Mark Cerny, who has made two speeches already, one in February and one just a day or two ago. There's nothing but transparency in his words.

If they did that they would acknowledge the spec advantage the PS4 has. Even if they feel like they can match or surpass PS4 performance with their design, they still wouldn't do it, because the raw numbers send a bigger message.
 

Jagerbizzle

Neo Member
Keep telling yourself that. Devs will find a way to use any option given to them.

Thanks for the FYI. I'll be sure to let all of my colleagues know that we're idiots.

I can find quite a few ways to waste that much ram right now, but realistically it's not going to be used for anything useful anytime soon. The biggest benefit I see from this is that it will allow you to be lazier and spend less time optimizing for memory, allowing more focus on features.

That will still be years down the road.
 

Pie and Beans

Look for me on the local news, I'll be the guy arrested for trying to burn down a Nintendo exec's house.
Maybe they have found a boost, which is great news for the console and multi-plats, but I think even mentioning the "nearly twice as fast as it was" and conveniently PS4-beating THEORETICAL 192GB/s figure is straight-up bullshit on DF's part.

I'm sure they did find ways to make more efficient use of idle time, but just not to that dumb extent.
 

LiquidMetal14

hide your water-based mammals
They can't, when they have slower hardware. And in a sense, they did almost nothing new with the Xbone architecture. They repeated the basic design of the X360, only with x86.

Don't get me wrong, we know the HW is better on the PS4, but I would just like to hear some technical savvy coming from MSFT instead of buzzwords.

You have Mark Cerny and then the endless quotes and articles like this from DF.

My takeaway from this is that Sony is being forthcoming enough, while MSFT hasn't given enough information, so we have these types of articles happening, which then confuse some people even more.

For better or worse, talk about some of the technical bits on Xbone.
 

IcyEyes

Member
I know, I know, this is not a tech forum, but some posts are horrific!

If this rumor is true, good for all the developers, but no, that doesn't mean the X1 is more capable than the PS4, and no, it's not on par either.
 

ChryZ

Member
Microsoft techs have found that the hardware is capable of reading and writing simultaneously.
WTF, shouldn't they know what their design is capable of? Do they throw stuff in a box and then go on a magical journey of exploration and discovery?
 
Not necessarily good news, it may be the result of a downclock. Going to be interesting to see how this plays out.


To your last paragraph though, MS did go the safe route. They knew from day one that they wanted 8GB of RAM. DDR3 was the only safe way to make that happen when they were designing the unit. Sony wanted unified memory with GDDR5 and first targeted 2GB... then 4GB... and then right before the reveal found out that 8GB was feasible. There is no way that Cerny and co. could have seen this coming - affordable, mass-produced 512MB GDDR5 chips weren't supposed to be ready yet. So they shocked everyone, including Microsoft and first-party devs, when they announced 8GB. But the reason they got there was dumb fucking luck. Microsoft thought they were going to have a memory advantage from day one and got unlucky.

I think their luck is still holding up, Mort. You may be able to confirm with your own sources, but I have a source saying that a major memory manufacturer is currently testing 8Gbit (1GB) GDDR5 chips of the exact spec required for 176GB/s of bandwidth using 8 chips. They are supposed to be ready for mass production this time next year, and would reduce the cost of Sony's RAM by a third and put PS4 hardware on an even keel globally before the end of the summer.
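Quick sanity check on that claim with standard GDDR5 arithmetic (the 8-chip, 5.5 Gbps configuration here is the hypothetical the post describes, not something confirmed):

```python
# GDDR5 sanity check for the rumored 8 x 8Gbit configuration (hypothetical).
chips          = 8     # 8Gbit (1GB) parts
bits_per_chip  = 32    # GDDR5 devices expose a 32-bit interface
data_rate_gbps = 5.5   # per-pin data rate needed to hit the quoted bandwidth

bus_width = chips * bits_per_chip            # 256-bit bus
bandwidth = bus_width / 8 * data_rate_gbps   # GB/s
capacity  = chips * 1                        # GB

print(f"{bus_width}-bit bus, {bandwidth:.0f} GB/s, {capacity} GB")
# 256-bit bus, 176 GB/s, 8 GB -- the same bandwidth and capacity as the
# launch PS4 configuration, with half as many chips.
```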

WTF, shouldn't they know what their design is capable of? Do they throw stuff in a box and then go on a magical journey of exploration and discovery?

Wait, let me get this straight. Before the reveal we were talking about 102GB/s for the ESRAM, which was read or write (because they didn't know simultaneous addressing was possible); with the simultaneous access they now describe, that should have become 204GB/s. Now we are talking about 192GB/s read/write. In the like-for-like comparison that is a downgrade, which makes much more sense than a last-minute upgrade. As anyone with an ounce of sense knows, performance from manufacturing to release only ever goes one way (down).
 
It's fairly simple: it's all about yields. You won't know what temperatures your chips can run stably at until you're actually manufacturing them. Some will run stable at higher performance levels than others. Generally with something like this, it's more about how good your manufacturing process is.

Say Microsoft hoped 90% of the chips would be stable at 176 GB/s; once they scaled up manufacturing, they discovered that 90% of the chips were actually stable at 192 GB/s.

That's not at all what they said happened.
 

strata8

Member
Doesn't that just prove his original point though?

This 192GB/s figure comes from the exact same source as the 256GB/s EDRAM figure for X360.

It's a pretty different situation. MS stated publicly at first that the bandwidth of the 360's eDRAM was 256GB/s, but that was proven to be incorrect. This time around we've got an 'official' 102GB/s (which can be confirmed by looking at the specs), but that's been revised up to 192GB/s.

Remember that according to the article this is coming from developer sources, and I don't think it would do MS any good to misrepresent the eSRAM bandwidth in that situation.
 
We've been hearing about yield issues from reliable sources for months, and suddenly MS is not only saying that's not true, but that they're getting even higher-quality yields than previously stated?

Oh yeah, this is from Microsoft.

Why would they lie to their partners, who have to produce these games in certain time frames with certain spec expectations?
 
I am no expert, but while the small amount of ESRAM seems like it will come in handy, I don't get why people think it can magically make up the difference between DDR3 and GDDR5.

It's not magic. Not everything in the pipeline requires that much bandwidth; in fact the larger portion of operations don't even need more than 40GB/s, while the super bandwidth-intensive framebuffer stuff will be cached in ESRAM, and experts believe 32MB is more than enough for caching 1080p + 4xAA.
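For context, here are the raw (uncompressed) numbers behind that claim, assuming simple RGBA8 color and a 32-bit depth buffer; the GCN-style color/depth compression mentioned elsewhere in the thread shrinks the real footprint and traffic:

```python
# Raw 1080p render target sizes (no compression assumed; illustrative only).
WIDTH, HEIGHT = 1920, 1080
MB = 1024 * 1024

def target_mb(bytes_per_pixel, samples=1):
    return WIDTH * HEIGHT * bytes_per_pixel * samples / MB

print(f"1080p, no AA:   {target_mb(4) + target_mb(4):.1f} MB")        # ~15.8 MB, fits easily
print(f"1080p, 4x MSAA: {target_mb(4, 4) + target_mb(4, 4):.1f} MB")  # ~63.3 MB raw
# Whether the 4xAA case is "more than enough" in practice comes down to the
# compression and spill-to-main-RAM options discussed elsewhere in the thread.
```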

Think of a family that's moving: you load all of your furniture in a truck, but you need to keep running back and forth between houses running errands, or maybe you pack your refrigerated stuff that has to get there right away; for that you use your car, and you have plenty of room there just for the perishables. What's more, the car that you're using is even faster than the PS4's main delivery service, which has to haul everything.

The PS4 solution is ideal because everything gets there fast and easy, but the key thing is that BOTH are getting the required job done. The differentiating factor will be the amount of work getting shipped, and that's where the difference lies: the PS4's GPU can issue more work, not a huge amount more, but an advantage nonetheless.
 

IcyEyes

Member
Sony considered going with high bandwidth eDRAM, but then decided against it because it would make the PS4 difficult to develop for.

Here's a slide from Cerny's talk:

[slide image]

This comparison is pretty nice, and I think of those numbers like an F1 car (I love F1).
It's better to have a good car that performs pretty well on every part of the track than an incredibly fast car on the straights that's slow and hard to handle in the corners.
 

Jagerbizzle

Neo Member
What studio are you currently employed at?

A big one, working on a title for one of the two platforms. I have some obvious bias and only have first-hand experience with one of the platforms, so I can't compare and contrast anything in detail.

A mod can obviously verify this information if they'd like.
 
Don't get me wrong, we know the HW is better on the PS4, but I would just like to hear some technical savvy coming from MSFT instead of buzzwords.

You have Mark Cerny and then the endless quotes and articles like this from DF.

My takeaway from this is that Sony is being forthcoming enough, while MSFT hasn't given enough information, so we have these types of articles happening, which then confuse some people even more.

For better or worse, talk about some of the technical bits on Xbone.

It probably has to do with Sony's drivers and other software tools being more mature.
They have pinned their base performance down. From what the rumors tell us, Microsoft's drivers are still a bit meh and the tools were way behind Sony's. I would bet it's engineering hell right now at the Xbox hardware/OS division.
 

borghe

Loves the Greater Toronto Area
So they shocked everyone, including Microsoft and first-party devs, when they announced 8GB. But the reason they got there was dumb fucking luck. Microsoft thought they were going to have a memory advantage from day one and got unlucky.

and this... this is where almost all greatness comes from... you can work and work and work and work... and dumb fucking luck will always still somehow pull ahead.
 

Vestal

Gold Member
In the end, does it even matter? The X1 still only has 5 GB out of the 8 that it can use for games. I can't see any real advantage this will give them when push comes to shove.

Find me a game that can even use 4GB of RAM. Right now, and for the foreseeable future (3-4 years), this is simply not possible.

Maybe, just maybe, in about 5 years a game could take up 5GB, but what are you going to fill it up with? Textures?? Hmm, you need a killer GPU to justify that amount of RAM, and crap, neither of these systems has that.
 
Would that mean the PS4 is still 50% more powerful, or is it more like 35%? Also, the PS4 does have more available RAM, which is good for graphics stuff or something.
 

benny_a

extra source of jiggaflops
To your last paragraph though, MS did go the safe route. They knew from day one that they wanted 8GB of RAM. DDR3 was the only safe way to make that happen when they were designing the unit. Sony wanted unified memory with GDDR5 and first targeted 2GB... then 4GB... and then right before the reveal found out that 8GB was feasible. There is no way that Cerny and co. could have seen this coming - affordable, mass-produced 512MB GDDR5 chips weren't supposed to be ready yet. So they shocked everyone, including Microsoft and first-party devs, when they announced 8GB. But the reason they got there was dumb fucking luck. Microsoft thought they were going to have a memory advantage from day one and got unlucky.
Is this what your sources tell you?

Because if you look at the Hynix and Samsung GDDR5 roadmaps, they project what will be available in which quarter quite a bit into the future. Now of course all the milestones need to be hit to make this a reality, but lucking into 8GB when 2GB was the target is interesting.

Would that mean the PS4 is still 50% more powerful, or is it more like 35%? Also, the PS4 does have more available RAM, which is good for graphics stuff or something.
Still 50% as that figure only talks about the GPU performance.

Find me a game that can even use 4GB of RAM. Right now, and for the foreseeable future (3-4 years), this is simply not possible.
Killzone Shadow Fall used 4.5GB of memory in the 20th February demo.
 

Synless

Member
Find me a game that can even use 4GB of RAM. Right now, and for the foreseeable future (3-4 years), this is simply not possible.

Maybe, just maybe, in about 5 years a game could take up 5GB, but what are you going to fill it up with? Textures?? Hmm, you need a killer GPU to justify that amount of RAM, and crap, neither of these systems has that.
If they have it to use as a standard, they will.
 
Not necessarily good news, it may be the result of a downclock. Going to be interesting to see how this plays out.


To your last paragraph though, MS did go the safe route. They knew from day one that they wanted 8GB of RAM. DDR3 was the only safe way to make that happen when they were designing the unit. Sony wanted unified memory with GDDR5 and first targeted 2GB... then 4GB... and then right before the reveal found out that 8GB was feasible. There is no way that Cerny and co. could have seen this coming - affordable, mass-produced 512MB GDDR5 chips weren't supposed to be ready yet. So they shocked everyone, including Microsoft and first-party devs, when they announced 8GB. But the reason they got there was dumb fucking luck. Microsoft thought they were going to have a memory advantage from day one and got unlucky.
They didn't. ESRAM was in the design from the beginning, long before 8GB was.
 
Would that mean the PS4 is still 50% more powerful, or is it more like 35%? Also, the PS4 does have more available RAM, which is good for graphics stuff or something.

Computationally speaking, yes, the PS4 will always have 50% more TFLOPS than the X1 no matter what other parts of the system they adjust. Inb4 they find 2 extra CUs like the 7790.
/Believe... :p
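The arithmetic behind that 50% figure, using the widely reported pre-launch CU counts and clocks (assumptions, since neither final clock was official at this point):

```python
# FLOPS arithmetic behind the "50% more" figure
# (CU counts and 800 MHz clocks are the widely reported pre-launch numbers).
def gpu_tflops(cus, clock_ghz, alus_per_cu=64, flops_per_alu=2):
    # flops_per_alu = 2 counts a fused multiply-add as two operations
    return cus * alus_per_cu * flops_per_alu * clock_ghz / 1000

ps4 = gpu_tflops(18, 0.8)   # ~1.84 TFLOPS
xb1 = gpu_tflops(12, 0.8)   # ~1.23 TFLOPS
print(f"PS4 {ps4:.2f} TF, X1 {xb1:.2f} TF, ratio {ps4 / xb1:.2f}x")  # 1.50x
```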
 

Espada

Member
I'll wait until some insiders chime in on this, because from the timing and the way it sounds, it stinks of nonsense. Just like the X1 downclock rumors and the PS4 2.0GHz increase, I doubt this is true.
 

tinfoilhatman

all of my posts are my avatar
So, if the numbers are accurate, this confirms the ESRAM downgrade (102GB/s -> 96GB/s) and the Thuway info?


Are people REALLY this desperate here to spin this into a negative for Microsoft?

Bravo, GAF, bravo... you've sunk to a new low I didn't think was possible.
 

borghe

Loves the Greater Toronto Area
So, if the numbers are accurate, this confirms the ESRAM downgrade (102GB/s -> 96GB/s) and the Thuway info?
This is the only thing I don't understand in the article. They say this suggests the downclock rumors are false, but by yours and everyone else's math, taking MS's 192GB/s number and their explanation that you can read/write in the same clock cycle, that is 96GB/s of actual bandwidth in each direction, which implies a 750MHz clock speed, a.k.a. a 50MHz downclock. Who knows, though...

Find me a game that can even use 4GB of RAM. Right now, and for the foreseeable future (3-4 years), this is simply not possible.
They already pegged the Killzone presentation demo at roughly 5GB. Also remember that PLENTY of PC games are already coming in around this number: games with recommended requirements of 4GB+ system RAM and 2GB+ VRAM.
 

LiquidMetal14

hide your water-based mammals
Are people REALLY this desperate here to spin this into a negative for Microsoft?

Bravo, GAF, bravo... you've sunk to a new low I didn't think was possible.

There's nothing wrong with some HW speculation. Your passive-aggressive posting isn't going to advance the conversation, though.
 

Vestal

Gold Member
Is this what your sources tell you?

Because if you look at the Hynix and Samsung GDDR5 roadmaps, they project what will be available in which quarter quite a bit into the future. Now of course all the milestones need to be hit to make this a reality, but lucking into 8GB when 2GB was the target is interesting.


Still 50% as that figure only talks about the GPU performance.


Killzone Shadow Fall used 4.5GB of memory in the 20th February demo.

Would love to see what it was wasted on... Games that look better on PC use less than 2GB.
 

Cidd

Member
Thanks for the FYI. I'll be sure to let all of my colleagues know that we're idiots.

I can find quite a few ways to waste that much ram right now, but realistically it's not going to be used for anything useful anytime soon. The biggest benefit I see from this is that it will allow you to be lazier and spend less time optimizing for memory, allowing more focus on features.

That will still be years down the road.

Isn't that a big benefit in itself?

It may not be big for you, but it can be for other, smaller devs.
 

8bits

Banned
In the end, does it even matter? The X1 still only has 5 GB out of the 8 that it can use for games. I can't see any real advantage this will give them when push comes to shove.


This post from the xboxone subreddit might interest you and many others....


I've been doing game environments for the past 13 years. In that time I have worked for 4 major game studios in the US. I don't see the 3D artists perspective voiced much during next gen discussion and I want to give mine. I also want to explain why I am supporting the XB1.

On current-gen consoles (512MB RAM) we would be lucky to have 200MB available for art at any one time. That means 200MB per load. Some engines like GTA or Red Dead constantly stream data (no load screens), but there is still only about 200MB loaded for art at any one given time. When I say art I mean all the environment meshes and textures, VFX, characters, vehicles, 1st and 3rd person weapons, light maps, custom skins, etc... Audio would get around 75MB, UI around 10MB, and the rest of the memory would be spread out through dozens of other systems. Each engine is different, but from my experience 200MB has been pretty standard for art.

Now over half my job is figuring out what compromises I need to make between what I really want to put on screen and what is technically possible to put on screen in 200MB. We take a piece of concept art and mentally chop that up into repeatable modules that become our set dressing and props. We want to make everything you see unique and high resolution, but this is not possible because we have a tight schedule and we are limited by that 200MB gate. (More on the schedule later.) To make things more complicated, we need to pre-plan for worst-case scenarios when designing environments. We need to account for times when every player is on screen, all shooting and throwing grenades, spawning blood-splatter VFX, smoke grenades, rocket trails... We can't just make a scene run at 30 fps in isolation and call it a day. We need to leave room for a cluster of intense gameplay to happen on top of the scene. This further eats into our vision.

This 200MB limit has been the gate for realtime 3D graphics for the last 8 years. PC exclusives are free from that gate, but if a game was cross-platform, that gate affected the art team even if they had a high-detail mode on the PC version.

The point of me explaining this to you is to illustrate how much more room we now have on next-gen systems and how that gate will be pushed up to around 3GB. It could be a bit more or a bit less, but that's my estimate. You may be asking why I am not saying 4, 5, 6, or 7GB. This is where the schedule comes into play. It is simply not possible for a team of 15 artists to fill that much RAM full of art data on our tight schedules, even if it is available to us. Instead of hiring more artists, studios start outsourcing to churn out more content. Some studios crunch for months filling up 200MB worth of detail. Imagine trying to fill 4, 5, 6 GB worth of data. Now factor in the state of the industry. Game staff is constantly getting laid off. Publishers have a hard time making profits on all but the biggest franchises. If we don't find ways to create a constant revenue stream (DLC, microtransactions, MMO subscriptions), or limit used games and stop piracy, studios simply can't pay a staff to utilize all this extra RAM. You need staff to stay, to become senior, and to know the tools inside out. Artists jumping from studio to studio because of layoffs creates a brain drain and ultimately hurts the quality of games. It can take a new artist 6-12 months to fully ramp up on all the tools and custom tech and become proficient. This is a huge inefficiency in the games industry and hurts the final product. This is one reason I support what Microsoft is doing. They understand this vicious cycle and are trying to improve not only games, but the industry as a whole. I'll get off that soapbox now. I hope you get it.

Back to the fun stuff. If a game like Gears of War can look good using 200MB, imagine what we can do with 3GB. That is just an unbelievable amount of resources for us to play with. I don't have to sacrifice the concept artist's vision any more. I don't have to burn the time and effort making compromises any more. It is much harder to constantly optimize assets than it is to build high resolution from the get-go. 3GB is almost enough resources to skip the compromise phase entirely. This makes my job so much more enjoyable and also gives players more immersive worlds to explore.

But... we are not there until developers stop supporting current-gen consoles. This is why console games keep looking better and better each year. As we drop the current-gen SKUs we can focus entirely on the next gen without the 200MB gate of current gen. If you think those launch titles look good, just wait to see what is coming in the next 5 years.

Edit: For clarity on why I'm supporting Microsoft.
What Microsoft is doing (limiting used-game sales, stopping piracy, allowing additional revenue streams) is as big a factor for next-gen graphics as the new tech is. If you can't pay artists to fill the gigs of data with art, all that extra RAM is just wasted. The industry is having a hard time paying 15 artists to fill up 200MB of data. How do you expect them to pay 35 artists to fill 8GB? The industry has to change to allow greater revenue, which allows more artists, which allows more detail. Microsoft is trying to do that.
 