
DigitalFoundry: X1 memory performance improved for production console (ESRAM 192 GB/s)


Jack cw

Member
Why do people think that memory bandwidth is going to make games leaps and bounds better? At 1080p there's no real benefit to it; maybe some games could squeeze out 2 or 5 more fps, but that's it.

It's not necessarily about frames per second but actual data like AI, assets etc. that could be shoved between the components. And after Cerny's talk it made sense to use a unified memory pool. The PS4's GPU is still faster according to the specs. But it's good news after all those downclocking rumors.
 
It's good news for everyone if this is true; I don't know why some of you want this to be false.

Questioning what it actually means ≠ wanting it to be false.



so the XBOX 1 is now more powerful than the PS4? good news for Microsoft

That's a generous reading of that article. And by that I mean you didn't read that article.


The best news in that article is that Xbone dev kits are still getting significant performance improvements with each release, and the developers think they will hit their targets without much problem.


Total agreement.



Good. I don't understand why people wouldn't want this to be true. The stronger the Xbox One is, the better multiplatform games are going to look considering developers typically cater to the lowest common denominator.

I think ports are usually targeted at the most popular platform, not the LCD, but I agree with your point and if true this is good news for everyone.
 
Good article.

Surprising that they underestimated the ESRAM that way. Although it's still only 32MB (which I assume matters). I'd love to hear a mature explanation of how all of this factors in though.


So it looks like DF is predicting the difference to present itself as a resolution difference. Is there a particular reason for that, and does it have more to do with the GPU or RAM? I'd assume that's the GPU difference.

It's definitely the GPU difference. Devs can and will work around the RAM differences. The GPU power difference they cannot.

They can possibly do something like use the same number of GPU cores as the Xbone for rendering and use the extra cores for advanced physics and particle effects, or just for faster rendering. It's the GPU all the way.
 
Good article.

Surprising that they underestimated the ESRAM that way. Although it's still only 32MB (which I assume matters). I'd love to hear a mature explanation of how all of this factors in though.


So it looks like DF is predicting the difference to present itself as a resolution difference. Is there a particular reason for that, and does it have more to do with the GPU or RAM? I'd assume that's the GPU difference.

From what they say in the next paragraphs, it sounds like DF thinks the two are indeed very similar performance-wise, or at least that it will be a while before the PS4 comes out on top. That specific quote implies that developers were in the dark regarding Xbox One performance longer than they were on PS4, and as a result they may have pared back their games more than needed, which might show up in launch titles.
 
Not really any more interesting than they were before.

I was thinking the same thing. People are getting hyped for a theoretical bandwidth boost. This doesn't put XB1 on equal footing with the PS4 at all. The PS4 still has a huge GPU advantage. But any improvement is good I guess, right? Haha.
 

Jagerbizzle

Neo Member
Forza looks great as always, but there wasn't anything that stood out about it graphically in terms of being next-gen.

Not to mention I don't know if there was even an 'approximate' XB1 machine on the floor. Weren't they using Windows 7 PCs with higher spec Nvidia cards?

Forza was confirmed as running on XB1 beta kits on the floor.
 
smh at some of the armchair hardware engineering posts in here.


that's pretty much this thread in a nutshell.

yes, everyone, the theoretical maximum bandwidth of 32MBs of RAM in the Xbone is going to be the deciding factor on whether or not it's more powerful than the PS4.

If it's 176 GB/s then the Xbone will be less powerful than the Wii U. If it's 192 GB/s then it will be the singularity and computing power will become infinite overnight.

OBVIOUSLY.
 
I view this with great scepticism, just like the rumour that Sony had increased the CPU clock to 1.8GHz; it is very unlikely that MS has increased performance this late in the game.
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
so the XBOX 1 is now more powerful than the PS4? good news for Microsoft

It has started. DF and MS have produced the result they wanted: misdirection and confusion.
 
This is the issue with any next gen platform. Even Mark Cerny mentioned things being worked on for year 3 and 4 in his talk at Gamelab.

It's basically saying that games will continue to look better when devs get more time with the HW. Look at launch vs late cycle games.

I understand the process. I was speaking to their assumption that devs were being conservative with the Xbox One specifically over PS4 for launch.
 

KOHIPEET

Member
I am not a tech guy, so maybe someone could enlighten me on the topic, but even if there's a higher-bandwidth eSRAM pool of 32 MB, the GPU, or the CPU (or both), still has to move data in and out of it (due to its tiny size), whilst on PS4 most of the data can just sit in the 7 GB of memory. This, in my interpretation, spares some cycles, giving the PS4 an advantage.

Feel free to call me stupid, as I'm just using my common sense here.
 
True, but the PS3 memory pool structure is not the same as the Xbox One's, so I expect ports to run the same on both consoles.

So because the Xbone has a bit more theoretical eSRAM bandwidth, that negates the 50% stronger GPU and still-superior unified RAM setup of the PS4? Ok dude...

Together with this weird statement that was already shot down, it seems you're just trying to spread false info:

So basically the so-called power gap is even smaller. Heck, I thought the Xbox CPU was at 1.2 but it is at 1.6. So basically multiplats will look the same and run smoothly on both systems; it will take the exclusives to show a difference, if there is one.
 
Let me just point something out. The 10 MB of eDRAM on the 360 was capable of 256GB/s.

I don't know exactly what that means in this context.
 

Crisco

Banned
I am not a tech guy, so maybe someone could enlighten me on the topic, but even if there's a higher-bandwidth eSRAM pool of 32 MB, the GPU, or the CPU (or both), still has to move data in and out of it (due to its tiny size), whilst on PS4 most of the data can just sit in the 7 GB of memory. This, in my interpretation, spares some cycles, giving the PS4 an advantage.

Feel free to call me stupid, as I'm just using my common sense here.

The One has dedicated hardware for these operations. It will be a black box to developers.
 

jaypah

Member
No. He never confirmed the downclock rumors.

He only confirmed the yield issues, and unless GameStop preorder allotments have increased 10x, that's still very much true.

I thought the downclock rumor WAS from Thuway. Who did it originally come from then?
 

Applecot

Member
Mark Cerny was just describing this last night (or whenever it was for you). 176 > 1088 or whatever the numbers were. We'll see how it affects performance once the consoles launch.

Edit - This also doesn't change the total FLOPs of either GPU, and it adds complexity; on top of that, this isn't part of the unified RAM but a tiny 32MB cache (much like your CPU has a cache) sitting alongside 8GB of DDR3 that can only move data at roughly 68GB/s.
Also note that this figure is the maximum theoretical performance, meaning it could even have been worded in a misleading way.

Historically Sony has had amazing first-party studios, and with this simpler and overall better hardware I think we're more likely to see higher graphical fidelity out of the PS4. I honestly think multiplats will be downgraded to bad-port status.
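
For what it's worth, the 176 figure Cerny quoted drops straight out of the PS4's reported memory specs. A quick back-of-the-envelope sketch in Python, assuming the widely reported 256-bit GDDR5 bus at an effective 5500 MT/s (those numbers come from public spec sheets, not this article):

# Rough check of the PS4 bandwidth figure from Cerny's talk.
# Assumption: 256-bit GDDR5 bus at an effective 5500 MT/s (reported specs).
bus_width_bytes = 256 / 8            # 32 bytes per transfer
transfers_per_sec = 5.5e9            # 5500 MT/s effective data rate

bandwidth_gbs = bus_width_bytes * transfers_per_sec / 1e9
print(f"PS4 GDDR5 peak: {bandwidth_gbs:.0f} GB/s")   # -> 176 GB/s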
 

x3sphere

Member
I wouldn't expect those awards to be for graphics.

TitanFall doesn't even really look good. In fact, I'd be willing to bet that it will look noticeably better on PS4. Especially since it's coming later.

Forza looks great as always, but there wasn't anything that stood out about it graphically in terms of being next-gen.

Not to mention I don't know if there was even an 'approximate' XB1 machine on the floor. Weren't they using Windows 7 PCs with higher spec Nvidia cards?

Blown out of proportion based on the fact that LocoCycle was running off a PC. Several other devs confirmed they were on actual hardware at the show. Forza, Fantasia, Ryse, and Project Spark were definitely running off actual units.
 
It's not necessarily about frames per second but actual data like AI, assets etc. that could be shoved between the components. And after Cerny's talk it made sense to use a unified memory pool. The PS4's GPU is still faster according to the specs. But it's good news after all those downclocking rumors.
Well,
800MHz x 128 bytes = 102.4 GB/s
According to the article, the flux capacitor found by MS allows reads and writes at the same time (Intel, hire these guys), so the theoretical max bandwidth should be 102.4 x 2 = 204.8.
But the news says 192, which takes us to 96GB/s read and 96GB/s write, and so to 750MHz x 128 bytes.

Downgrade or what?
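
The arithmetic in that post, spelled out. A minimal sketch assuming a 128-byte (1024-bit) ESRAM interface, which is what the 102 GB/s one-way figure implies; the clocks are the rumored ones, not confirmed:

# One-way ESRAM bandwidth at the rumored clocks.
# Assumption: a 128-byte (1024-bit) wide interface.
bytes_per_cycle = 128

one_way = 800e6 * bytes_per_cycle / 1e9     # 102.4 GB/s at 800MHz
full_duplex = 2 * one_way                   # 204.8 GB/s if every cycle did both
# Clock that would yield the quoted 192 GB/s with simultaneous read+write:
implied_clock_mhz = 192e9 / 2 / bytes_per_cycle / 1e6

print(one_way, full_duplex, implied_clock_mhz)  # 102.4 204.8 750.0

Which is exactly the poster's point: a genuinely full-duplex 800MHz part would top out at 204.8, not 192, so either the duplexing only happens some of the time or the clock is lower.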
 
This comment in the EG article made me chuckle :lol

Director: "What is the throughput?"
Engineer: "102GB/s"
/SLAP
Director: "Try again"
Engineer: /sniff "176GB/s?"
Director: "Nearly. Have one more go."
Engineer: "Err, 1..9..2GB/s?"
Director: "That's better. You can stay."
 

Solal

Member
Am I the only one not believing a word that MS says?

I mean: MS says its console is more powerful than we think... seriously?
 
That's true of everything though; tools and kits improve very quickly at the launch of a new console.

They improve, but sometimes not as much as devs would like :p

I hope one of those developers is DICE, because I really want a 1080p, 60fps BF4 console experience :p
 
that's pretty much this thread in a nutshell.

yes, everyone, the theoretical maximum bandwidth of 32MBs of RAM in the Xbone is going to be the deciding factor on whether or not it's more powerful than the PS4.

If it's 176 GB/s then the Xbone will be less powerful than the Wii U. If it's 192 GB/s then it will be the singularity and computing power will become infinite overnight.

OBVIOUSLY.

Like, if theoretical bandwidth were the most important thing in these systems, then why not simply go with 64MB of super-fast EDRAM/ESRAM?

Put 2GB of DDR3 in there and let the embedded memory do the rest, right? Doesn't make much sense, does it?
 

KOHIPEET

Member
that's pretty much this thread in a nutshell.

yes, everyone, the theoretical maximum bandwidth of 32MBs of RAM in the Xbone is going to be the deciding factor on whether or not it's more powerful than the PS4.

If it's 176 GB/s then the Xbone will be less powerful than the Wii U. If it's 192 GB/s then it will be the singularity and computing power will become infinite overnight.

OBVIOUSLY.

Rolling on the ground. Thank you for this post! :)
 
Very good news if true.

With less of a gap between the platforms it would be easier for developers to utilize the full power of the PS4 in multiplatform games.

Secret sauce?
 

strata8

Member
Well,
800MHz x 128 bytes = 102.4 GB/s
According to the article, the flux capacitor found by MS allows reads and writes at the same time (Intel, hire these guys), so the theoretical max bandwidth should be 102.4 x 2 = 204.8.
But the news says 192, which takes us to 96GB/s read and 96GB/s write, and so to 750MHz x 128 bytes.

Downgrade or what?

The article very clearly said that they could read and write at the same time in some cases. "Some" being the key word there.
 

enra

Neo Member
If it's used as a cache, both Intel and Anand disagree with you.

"Intel claims that it would take a 100 - 130GB/s GDDR memory interface to deliver similar effective performance to Crystalwell since the latter is a cache. Accessing the same data (e.g. texture reads) over and over again is greatly benefitted by having a large L4 cache on package."

Intel's Crystalwell uses 25.6GB/s main memory and 100GB/s eDRAM.

"If it’s used as a cache, the embedded SRAM should significantly cut down on GPU memory bandwidth requests which will give the GPU much more bandwidth than the 256-bit DDR3-2133 memory interface would otherwise imply. Depending on how the eSRAM is managed, it’s very possible that the Xbox One could have comparable effective memory bandwidth to the PlayStation 4. If the eSRAM isn’t managed as a cache however, this all gets much more complicated."

But as Anand also said, that might not be the case.


That doesn't make any sense. If cache + DDR3 were as effective as GDDR5, every GPU maker would be using it.
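
The disagreement here mostly comes down to hit rate. A toy model, with illustrative numbers only: the 68.3 GB/s is what DDR3-2133 on a 256-bit bus works out to, the hit rates are made up, and real behavior depends entirely on what devs keep in the 32MB:

# Toy "effective bandwidth" model for a small fast memory in front of DRAM.
# Assumptions: 192 GB/s ESRAM, 68.3 GB/s DDR3; hit_rate is the fraction of
# memory traffic the 32MB actually captures (illustrative, not measured).
def effective_bandwidth(hit_rate, fast_bw=192.0, slow_bw=68.3):
    return hit_rate * fast_bw + (1 - hit_rate) * slow_bw

for hr in (0.3, 0.6, 0.9):
    print(f"hit rate {hr:.0%}: ~{effective_bandwidth(hr):.0f} GB/s effective")

That's how both posts can be right: with bandwidth-heavy render targets pinned in embedded memory the hit rate can be very high, but nothing guarantees it, and the 32MB limits what fits.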
 

jaypah

Member
I was thinking the same thing. People are getting hyped for a theoretical bandwidth boost. This doesn't put XB1 on equal footing with the PS4 at all. The PS4 still has a huge GPU advantage. But any improvement is good I guess, right? Haha.

Yes? Improvements are usually a good thing, right?
 

orioto

Good Art™
Am I reading the OP right? Microsoft just pulled an "oohhh wait, we were wrong, our console is actually way more powerful!!!"? Really?
 
I wouldn't expect those awards to be for graphics.

TitanFall doesn't even really look good. In fact, I'd be willing to bet that it will look noticeably better on PS4. Especially since it's coming later.

Forza looks great as always, but there wasn't anything that stood out about it graphically in terms of being next-gen.

Not to mention I don't know if there was even an 'approximate' XB1 machine on the floor. Weren't they using Windows 7 PCs with higher spec Nvidia cards?

It did look better than any other racing game at E3, even those that ran at 30fps.
 

Jagerbizzle

Neo Member
I am not a tech guy, so maybe someone could enlighten me on the topic, but even if there's a higher-bandwidth eSRAM pool of 32 MB, the GPU, or the CPU (or both), still has to move data in and out of it (due to its tiny size), whilst on PS4 most of the data can just sit in the 7 GB of memory. This, in my interpretation, spares some cycles, giving the PS4 an advantage.

Feel free to call me stupid, as I'm just using my common sense here.

As far as memory bandwidth for the GPU is concerned, using the embedded ESRAM allows for ~170GB/s, which I believe is comparable to what you can get on the PS4.

The fact that it sits in a unified 7GB pool on the PS4 doesn't make much of a difference as far as a dev is concerned. I'd imagine you'll either have an API that exposes this to you easily on the XB1, or you'll just invest in writing the required code once for your engine and be done with it.
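
A purely hypothetical sketch of what "writing the required code once" could look like; none of these names are real XB1 API calls, it's just the budgeting problem in miniature (which render targets live in the 32MB, which spill to DDR3):

# Hypothetical ESRAM budgeting helper -- not a real XB1 API.
ESRAM_BUDGET = 32 * 1024 * 1024   # 32 MB of embedded memory

class RenderTarget:
    def __init__(self, width, height, bytes_per_pixel):
        self.size = width * height * bytes_per_pixel

def place_targets(targets):
    """Greedily place the largest targets in ESRAM, spill the rest to DDR3."""
    used, placement = 0, {}
    for name, rt in sorted(targets.items(), key=lambda t: -t[1].size):
        if used + rt.size <= ESRAM_BUDGET:
            placement[name], used = "esram", used + rt.size
        else:
            placement[name] = "ddr3"
    return placement

# A fat 1080p G-buffer doesn't all fit in 32MB, so something spills:
gbuffer = {
    "hdr":     RenderTarget(1920, 1080, 8),
    "albedo":  RenderTarget(1920, 1080, 4),
    "normals": RenderTarget(1920, 1080, 4),
    "depth":   RenderTarget(1920, 1080, 4),
}
print(place_targets(gbuffer))  # one of the 4-byte targets ends up in "ddr3"

The point being: it's a solvable, engine-level problem, which is why devs will work around the RAM split while the GPU gap stays.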
 
No it wasn't. It was capable of 32GB/s.

http://features.teamxbox.com/xbox/1145/The-Xbox-360-Dissected/p6/

Now that we have given an overview to the main parts of the Xbox 360, it is time to bring in some numbers. The Xbox 360 GPU holds the memory controller, which is connected to the three-core CPU by a 22GB/sec bus, and to the SouthBridge (designed by SiS) and I/O controller via a 2-lane PCI-Express link. As we mentioned above, the eDRAM has a 256GB/sec bandwidth and is connected to the GPU via a wide bus running at 2GHz.

http://www.extremetech.com/gaming/75254-xbox-360-gpu-details

The 10MB of EDRAM is actually on a separate die, at least initially. As future process technologies become available, it is possible that it could be on the same piece of silicon as the GPU. Still, the EDRAM resides on the same package, and has a wide bus running at 2GHz to deliver 256GB/sec of bandwidth. That’s a true 256GB/sec, not one of those fuzzy counting methods where the 256GB is “effective” bandwidth that accounts for all kinds of compression. The GPU writes the back buffer, Z buffer, and stencil buffer to the EDRAM. When it is finally able to be drawn to the screen, the EDRAM transfers the back buffer to the 512MB of GDDR3 for scan-out. The EDRAM does not store any textures—the full 10MB gets pretty much filled up with 1280×720 HD resolution, including Z, stencil, and anti-aliasing sub-pixel samples.


I don't know man, it seems like you are wrong.
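
For what it's worth, both numbers being thrown around here are probably describing different buses: the 256GB/s in those articles is internal to the eDRAM daughter die (between the eDRAM and the ROP logic sitting next to it), while the main GPU die reaches that daughter die over the ~32GB/s link the other poster is citing. A quick sanity check of the quoted figure, assuming the 2GHz clock from the articles:

# The two 360 figures plausibly describe different links.
# Assumption: 256 GB/s is eDRAM <-> ROP logic on the daughter die;
# ~32 GB/s is the GPU-to-daughter-die interface.
internal_clock_hz = 2e9
quoted_bw_gbs = 256.0

# Bus width implied by 256 GB/s at 2 GHz:
implied_width_bits = quoted_bw_gbs * 1e9 / internal_clock_hz * 8
print(f"implied internal bus width: {implied_width_bits:.0f} bits")  # 1024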
 
Well,
800MHz x 128 bytes = 102.4 GB/s
According to the article, the flux capacitor found by MS allows reads and writes at the same time (Intel, hire these guys), so the theoretical max bandwidth should be 102.4 x 2 = 204.8.
But the news says 192, which takes us to 96GB/s read and 96GB/s write, and so to 750MHz x 128 bytes.

Downgrade or what?
That actually makes sense. I don't get how they'd just now figure out it can simultaneously read and write. Can the PS4's memory simultaneously read and write?
 

chubigans

y'all should be ashamed
What a weird article. Performance of consoles can increase with more finalized dev tools? Multiplatform titles will have to be downgraded slightly on the XB1? As time progresses, games will look better on the XB1? You don't say!

I thought the downclock rumor WAS from Thuway. Who did it originally come from then?

It was from Thuway. Then CBOAT confirmed the yield issues but completely ignored the downclock rumor.
 