
DigitalFoundry: X1 memory performance improved for production console (ESRAM 192 GB/s)

Status
Not open for further replies.

coldfoot

Banned
GDDR5 is good for graphics, less good for CPU operations due to the higher latency.
Please stop this bullshit. GDDR5 and DDR3 latency are comparable. It's the memory controller implementation that adds the latency: people compare Intel's best-in-business DDR3 memory controller with a GPU's GDDR5 controller that optimizes for bandwidth over latency.
 

RoboPlato

I'd be in the dick
176 is also theoretical. Just how specs are advertised.
They can get pretty close to the 176 on a more consistent basis. The original peak theoretical performance for the XBO was 170GB/s, but they only really quoted the 102 number since it was more representative.
 
Hasn't MS started saying recently that the system has more than 200GB/s of available bandwidth?

If their claims about the GPU and CPU seeing both as a combined pool with added bandwidth for some operations are true, that figure falls in line with 68 + 133 GB/s.
Not that big of a jump because before their theoretical peak bandwidth was 170GB/s

That was by adding both memories' theoretical bandwidths; if they do this now the value would obviously be higher XD (260 GB/s, to be precise)
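For what it's worth, the arithmetic behind these combined figures checks out. A quick sketch, using the GB/s values quoted in this thread (not an official MS derivation):

```python
# Sanity check of the combined-bandwidth figures quoted in this thread.
# All values are the ones posters cite, not official MS numbers.
ddr3_bw = 68.0     # GB/s, DDR3 main memory
esram_old = 102.0  # GB/s, original theoretical eSRAM figure
esram_new = 192.0  # GB/s, revised theoretical eSRAM figure
esram_real = 133.0 # GB/s, reported "real-world" eSRAM figure

print(ddr3_bw + esram_old)   # 170.0 -> the old combined peak
print(ddr3_bw + esram_new)   # 260.0 -> the new combined peak
print(ddr3_bw + esram_real)  # 201.0 -> lines up with the ">200GB/s" claim
```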
 

Kyon

Banned
GDDR5 is good for graphics, less good for CPU operations due to the higher latency.
DDR3 is good for the CPU but worse for the graphics due to lower bandwidth. ESRAM is there to make up for that, at least a little bit.

Didn't Durante shoot this bullshit claim down already? 'Cause that's exactly what it is. You tried tho
 

kortez320

Member
No devs will hit either 176 on the PS4 or 192 on the Xbone.

I've been saying this over and over: memory bandwidth is not the real advantage the PS4 has over the Xbone. The real advantage is increased GPU area due to not having a bunch of die space taken up by ESRAM.

This is something that could only be alleviated (for Microsoft) by super high unattainable clock speeds on the GPU core that simply won't happen.
 

Auto_aim1

MeisaMcCaffrey
They are not adding stuff. The eSRAM bandwidth has been boosted by 88% according to the article, which means a jump from 102GB/s to a theoretical 192GB/s. They may not hit the latter but the jump is pretty good. It probably won't allow them to do a lot since they don't have much of it--just 32MB.
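The 88% figure does line up with the quoted numbers. A back-of-the-envelope check, using the values from the article:

```python
# Back-of-the-envelope: does an ~88% boost turn 102GB/s into 192GB/s?
base_bw = 102.0            # GB/s, original theoretical eSRAM bandwidth
boosted = base_bw * 1.88   # reported ~88% uplift

print(round(boosted, 2))   # 191.76 -> rounds to the quoted 192GB/s
```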
 

derFeef

Member
Please stop this bullshit. GDDR5 and DDR3 latency are comparable. It's the memory controller implementation that adds the latency: people compare Intel's best-in-business DDR3 memory controller with a GPU's GDDR5 controller that optimizes for bandwidth over latency.

Thanks for the correction :)

It's crazy you are still spreading this bullshit. You are a better poster than this.

I apologize, that's what I read :/
 

Radec

Member
So if we update the console power to DBZ level..

PS4:
tumblr_ls02v5slfd1r1smd2o1_500.gif


XBone:
tumblr_m9e6piZyfZ1qiqegzo1_500.gif


WiiU:
tumblr_lxfr5yWNDl1r8tyjfo1_500.gif


:p
 
32MB is still too little though, but good news nonetheless.

"At 32MB the ESRAM is more than enough for frame buffer storage, indicating that Microsoft expects developers to use it to offload requests from the system memory bus. Game console makers (Microsoft included) have often used large high speed memories to get around memory bandwidth limitations, so this is no different. Although 32MB doesn’t sound like much, if it is indeed used as a cache (with the frame buffer kept in main memory) it’s actually enough to have a substantial hit rate in current workloads"

Anandtech.com
 
Two different pools of memory do not produce a constant 192GB/s, unlike one unified pool at 176GB/s with no added step (the eSRAM). It was posted earlier by another poster. I don't think I have to outline why the "theoretical" number is a bit misleading when talking about overall performance.



LOL math up in here. Looks like he edited.

If MS doesn't even know their own system (which is funny), how would you know how they came up with the math? Anyway, like others have said, they can't lie to the developers because it will do nothing but backfire. I already know the Xbox One is supposed to be the weaker system, but clearly they made great strides toward being on equal ground with the PS4.
 
How does the cloud come into play in this? As far as I know the cloud is there so it's possible to have open world games without pop-in and stuff like that, right?
 
Microsoft tells developers that the ESRAM is designed for high-bandwidth graphics elements like shadowmaps, lightmaps, depth targets and render targets. But in a world where Killzone: Shadow Fall is utilising 800MB for render targets alone, how difficult will it be for developers to work with just 32MB of fast memory for similar functions?
I think this quote speaks for itself. Optimizing games for the Xbone will be a much harder task than for the PS4 due to the ESRAM implementation. In other words, the PS4 will be much easier to develop for than the Xbone.

At that point it's possible that we may see ambitious titles operating at a lower resolution on Xbox One compared to the PlayStation 4.
I am pretty sure that BF4 won't be 1080p on Xbone. 60 FPS at 1080p is certainly not possible.
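To put the 32MB in context, some rough framebuffer math (1920x1080 at 4 bytes per pixel is an assumption; the 800MB Killzone figure is the one quoted from the article):

```python
# Rough framebuffer math: how many 1080p 32-bit render targets fit in
# 32MB of eSRAM? (1920x1080 at 4 bytes/pixel is an ASSUMED format.)
w, h, bytes_per_pixel = 1920, 1080, 4
target_mb = w * h * bytes_per_pixel / (1024 ** 2)

print(round(target_mb, 1))  # 7.9 -> ~7.9 MB per render target
print(int(32 / target_mb))  # 4  -> only ~4 such targets fit in 32MB
```

Which is why a game leaning on hundreds of MB of render targets has to keep most of them in main memory and treat the eSRAM as scratch space.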
 
GDDR5 is good for graphics, less good for CPU operations due to the higher latency.
DDR3 is good for the CPU but worse for the graphics due to lower bandwidth. ESRAM is there to make up for that, at least a little bit.

Oh GAF... Once you want to cling to hope, you REALLY cling to it...
 
They are not adding stuff. The eSRAM bandwidth has been boosted by 88% according to the article, which means a jump from 102GB/s to a theoretical 192GB/s. They may not hit the latter but the jump is pretty good. It probably won't allow them to do a lot since they don't have much of it--just 32MB.

This is exactly what the article is saying.
 

spyshagg

Should not be allowed to breed
You guys need to read the article.

They found a loophole that allows them to do some simultaneous read/write operations.

...it's not the result of design or intent. Dumb luck, really.
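If the "loophole" really is overlapping reads and writes, the arithmetic would look something like this. Illustrative only: the clock and bus width below are assumptions, since the article doesn't publish the derivation:

```python
# Illustrative: how simultaneous read/write can inflate a peak figure.
# The 800MHz clock and 128-byte-per-cycle interface are ASSUMED values.
clock_ghz = 0.8   # GHz (assumed)
bus_bytes = 128   # bytes transferred per cycle, one direction (assumed)

one_way = clock_ghz * bus_bytes  # ~102.4 GB/s, read OR write
duplex = one_way * 2             # ~204.8 GB/s if reads and writes overlap

print(one_way, duplex)
# The quoted 192GB/s would sit below the full-duplex ceiling because the
# overlap reportedly only applies to certain operations.
```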
 
So the gap is less significant than initially thought. Can only be good news for everyone.

If the services that MS plans to provide pan out the way they intend then I think that becomes much more of a selling point than a minor graphics difference which at this point seems like it will more or less be a non factor (for most).
 

Durante

Member
How does the cloud come into play in this? As far as I know the cloud is there so it's possible to have open world games without pop-in and stuff like that, right?
No, that's not right. The cloud is primarily there to sell people on the idea of always-online, and secondarily to host multiplayer servers. Which do what servers have always done.

Oh brother... Once you want to cling to hope, you REALLY cling to it GAF. lol
Well, it's true that higher bandwidth (even at higher latency) is generally better for GPU workloads, and lower latency (even at lower bandwidth) is generally better for CPU workloads.
 

Kyon

Banned
I'm still trying to wrap my head around the fact that MS themselves didn't know this (that is, if this isn't fluff).
 

ekim

Member
From the Xbone architecture panel:
Again on the RAM, we really wanted to get 8GB and make that power friendly as well, which is a challenge: to get both power friendly for acoustics and get high capacity and high bandwidth. So for our memory architecture we're actually achieving all of that, and we're getting over 200GB/s across the memory sub-system.

So this possibly explains the naive fallacy of the 200GB/s figure.
 

USC-fan

Banned
"At 32MB the ESRAM is more than enough for frame buffer storage, indicating that Microsoft expects developers to use it to offload requests from the system memory bus. Game console makers (Microsoft included) have often used large high speed memories to get around memory bandwidth limitations, so this is no different. Although 32MB doesn’t sound like much, if it is indeed used as a cache (with the frame buffer kept in main memory) it’s actually enough to have a substantial hit rate in current workloads"

Anandtech.com
That's the same size as the Wii U's eDRAM... it's small.
 

LiquidMetal14

hide your water-based mammals
If MS doesn't even know their own system (which is funny), how would you know how they came up with the math? Anyway, like others have said, they can't lie to the developers because it will do nothing but backfire. I already know the Xbox One is supposed to be the weaker system, but clearly they made great strides toward being on equal ground with the PS4.

I wouldn't even put it on equal ground, just due to the two memory pools. The theoretical performance is higher, but you're not going to hit that all the time, as opposed to the PS4's approach of a more consistent flow of data with its unified pool of GDDR5.

Let's make it clear that this makes Xbone look a little better though. This isn't bad news.
 

harSon

Banned
Good. I don't understand why people wouldn't want this to be true. The stronger the Xbox One is, the better multiplatform games are going to look considering developers typically cater to the lowest common denominator.
 

Pociask

Member
I know a lot of people knock the Wii U for having a complex setup - how does it compare in terms of complexity to the XBone (not in terms of power)? Assuming a developer was interested in getting peak performance out of both machines?
 

Auto_aim1

MeisaMcCaffrey
"At 32MB the ESRAM is more than enough for frame buffer storage, indicating that Microsoft expects developers to use it to offload requests from the system memory bus. Game console makers (Microsoft included) have often used large high speed memories to get around memory bandwidth limitations, so this is no different. Although 32MB doesn’t sound like much, if it is indeed used as a cache (with the frame buffer kept in main memory) it’s actually enough to have a substantial hit rate in current workloads"

Anandtech.com
Yes? It's not enough when you take into account the competitor's memory setup which allows them to do a lot more.
 

derFeef

Member
Oh GAF... Once you want to cling to hope, you REALLY cling to it...

I am not entirely sure I am wrong here, and please stop with the implication that everyone has an agenda. It's what I know, and if I am wrong I already apologized. Only on GAF can you say something and immediately be a fanboy clinging to hope or something, ugh...
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
Apparently, there are spare processing cycle "holes" that can be utilised for additional operations.

Smells like BS.

Edit: I see it is Leadbetter, back to his old ways. I read this whole thing as him talking to a friendly MS employee, who is telling him about a rare occasion where the eSRAM can achieve better than typical bandwidth; they want to fight the "PS4 is faster" tide, so this crap of a tech article is their brainchild. Now MS fans will run around claiming 192GB/s without qualifying it. And the stuff about SHAPE and DDR latency at the end is just the fanboy cherry on top. Leadbetter can't help himself.
 

strata8

Member
I know a lot of people knock the Wii U for having a complex setup - how does it compare in terms of complexity to the XBone (not in terms of power)? Assuming a developer was interested in getting peak performance out of both machines?

Neither of them are any more complex than the 360 was.
 
No devs will hit either 176 on the PS4 or 192 on the Xbone.

I've been saying this over and over: memory bandwidth is not the real advantage the PS4 has over the Xbone. The real advantage is increased GPU area due to not having a bunch of die space taken up by ESRAM.

This is something that could only be alleviated (for Microsoft) by super high unattainable clock speeds on the GPU core that simply won't happen.

Also, PS4 devs don't have to jump through hoops to use the 32MB of eSRAM efficiently. They just have a large pool of unified fast memory. Much preferable.
 

onQ123

Member
Hasn't MS started saying recently that the system has more than 200GB/s of available bandwidth?

If their claims about the GPU and CPU seeing both as a combined pool with added bandwidth for some operations are true, that figure falls in line with 68 + 133 GB/s.


That was by adding both memories' theoretical bandwidths; if they do this now the value would obviously be higher XD (260 GB/s, to be precise)

And what makes you think that they are not talking about both now?
 

BigTnaples

Todd Howard's Secret GAF Account
So Xbox One was designed for bigger worlds and PS4 for better looking worlds. 3rd parties will still make it the same for both, though.



Bigger? No.


The XB1 still uses more RAM for the OS than the PS4 at this point, meaning less RAM available for games.
 
Not only do I smell a bit of bs in this, but in the end you can't simply add both bandwidths together.

Good for Xbox One developers though if true, extra performance is always good.
 

BizzyBum

Member
Also, with the unlimited power of the cloud, we could easily just download more ram from microsoft for better FPS in games.

This thing is a sleeper hit already.
 
I am not entirely sure I am wrong here, and please stop with the implication that everyone has an agenda. It's what I know, and if I am wrong I already apologized. Only on GAF can you say something and immediately be a fanboy clinging to hope or something, ugh...

You made it sound like they were on equal footing because each was less good at something. Couldn't be further from the truth.
 

kitch9

Banned
GDDR5 is good for graphics, less good for CPU operations due to the higher latency.
DDR3 is good for the CPU but worse for the graphics due to lower bandwidth. ESRAM is there to make up for that, at least a little bit.

FFS where are people getting this complete and utter bollocks from?

*Edit* I see the correction has already been made.
 