Isn't eSRAM faster than GDDR5 in terms of raw speed?
Nope. 176 GB/s GDDR5 vs 102 GB/s eSRAM.
Handy chart:
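For anyone wondering where those headline numbers come from, it's basically bus width times clock/transfer rate. Quick back-of-the-envelope (my own sketch using the commonly cited specs, so treat the exact bus figures as assumptions):

```python
# Peak bandwidth = bytes moved per transfer * transfers per second.

# PS4 GDDR5: 256-bit bus (32 bytes) at an effective 5500 MT/s.
gddr5_gbs = 32 * 5500e6 / 1e9        # -> 176.0 GB/s

# Xbox One eSRAM: 1024-bit internal path (128 bytes) at the original 800 MHz GPU clock.
esram_gbs = 128 * 800e6 / 1e9        # -> 102.4 GB/s

print(f"GDDR5 {gddr5_gbs:.1f} GB/s vs eSRAM {esram_gbs:.1f} GB/s")
```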
Isn't eSRAM faster than GDDR5 in terms of raw speed?
Yes, but due to the way manufacturing and die space work, it would have necessitated a further reduction in GPU compute capabilities.
There are lots of factors, but as Cerny once showed in a talk, they could have developed a memory architecture with 1000GB/s of very fast embedded memory paired with 8GB of slow memory.
What the real-life implication for graphics compute power would have been is just conjecture, but based on the Xbone it wouldn't have been good.
No need for a very fast memory setup if you don't have the necessary GPU to take advantage of that fast memory.
Snap, Media, TV, Kinect
And Apps
With Kinect + GDDR5, it would be much higher than $499.
Nope. 176 GB/s GDDR5 vs 102 GB/s eSRAM.
To be fair, those numbers are based on 800MHz which isn't true any longer.
I think that's a bit of an oversimplification. Many expected MS to have 8GB of DDR3 RAM along with the eSRAM and Sony to have 4GB of GDDR5 RAM. I don't think it's accurate to say that MS not using 8GB of GDDR5 is cheaping out in any way; Sony for a long time didn't even know if it was possible.
Now I know this isn't a thread about compute, but could someone lay out the advantages of GPU compute for me?
If it's used, how exactly does that help devs? I know it frees up CPU processes, but what exactly are the kinds of things that can be done with it?
To be fair, those numbers are based on 800MHz which isn't true any longer.
It's now 6.625% higher on the eSRAM side. ;-)
We also know that there are compression algorithms present with the eSRAM that can help fit larger frame buffers in the 32MB.
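The 6.625% figure above is just the GPU upclock (800 MHz to 853 MHz) applied to the eSRAM number; quick sanity check on the arithmetic:

```python
old_clock, new_clock = 800e6, 853e6
esram_800 = 102.4                              # GB/s one-way at 800 MHz

uplift = new_clock / old_clock - 1
print(f"uplift: {uplift * 100:.3f}%")                      # 6.625%
print(f"new peak: {esram_800 * (1 + uplift):.1f} GB/s")    # ~109.2 GB/s one-way
```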
The graphics chip on the Xbox One has to draw the picture somewhere. The main memory is slow, so it's like picking up pens and doing the drawing underwater. The eSRAM is like a tiny dry spot out of the pool where the graphics chip can work faster, but the problem is it has to find room for both the pens and the drawing in that little spot, so it's got helpers constantly swapping pens and papers in and out of the pool. It can only fit a few papers at full resolution, so if it's trying to draw a fancy picture with lots of layers it could run out of room and have to reduce the size (resolution) of the drawing. Keep in mind this is how the 360 worked, but the dry spot was faster and larger in relative terms and the swapping more automatic.
With the PS4, everything's laid out in a giant gymnasium with a waxed floor so everything happens faster than even the dry spot in the Xbox One, and there's no real need to shuffle things around in it. The main drawback is that everybody, including Sony, thought the gym was going to be half the size of the pool the Xbox was using up until the last minute of the design phase. There's also suggestions that the waxed floor might make people slip a little when changing directions (latency) compared to the pool, but the GPU doesn't change directions much, and the CPU walks slower and more carefully anyway (different memory controller).
Hope that makes sense.
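To put numbers on the "running out of room" bit: one 1080p render target at 4 bytes per pixel is already about 8 MB, so a deferred setup with several targets plus depth overflows 32 MB fast. Rough sketch (the G-buffer layout here is made up for illustration, not any particular game's):

```python
MB = 1024 * 1024

def gbuffer_mb(w, h, colour_targets=4, bytes_per_pixel=4):
    # colour targets plus one depth/stencil buffer, all 32 bits per pixel
    return (colour_targets + 1) * w * h * bytes_per_pixel / MB

print(f"1080p: {gbuffer_mb(1920, 1080):.1f} MB")   # ~39.6 MB, over the 32 MB of eSRAM
print(f" 900p: {gbuffer_mb(1600,  900):.1f} MB")   # ~27.5 MB, fits
```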
They knew they wouldn't be hitting 1080p with it; they just expected Sony would also be in a position where they would not be able to hit 1080p, since everyone was expecting Sony to put 2-4 GB of GDDR5 in the PS4 at most.
That February reveal where they announced 8 GB of GDDR5 was probably one of the worst days to be a member of the Xbox division at Microsoft.
Indeed. When the PS4 specs were leaked and revealed to have 4GB of GDDR5, I was impressed by the tech, but greatly disappointed by the amount. If they didn't get 8GB at least, they would be in trouble (and we gotta thank that dev that told them so).
When they said 8GB in February, I was completely blown away; I couldn't believe they had done it. I think someone said even most of Sony's teams didn't know about the upgrade; they found out during the reveal. Maybe it was Cerny, or someone at GG.
Now imagine the faces of the people in the XB1 team...
They could have done the same thing with GDDR5 though. I think it has more to do with cost of 8GB of GDDR5 at the time they designed the hardware.
Snap, Media, TV, Kinect
And Apps
So DDR3 is better for some tasks? Why? It's still slower from what I can tell looking at the graphs.
They could have done the same thing with GDDR5 though. I think it has more to do with cost of 8GB of GDDR5 at the time they designed the hardware.
The question was why MS couldn't settle for 2GB or 4GB of GDDR5, the way Sony was planning on doing.
The only conflicting info I see here is the bi-directional nature of the eSRAM. Are we not allowed to add bandwidth if read and write can happen simultaneously? Is 204 GB/s wrong? Why would Crytek say that, then?
If bi-directionality was unique to eSRAM this would be a great point, but GDDR5 is also bi-directional.
So much I've learned from this thread. Now I know exactly what's wrong with eSRAM. But to parrot a question other people have asked: why did Microsoft go for DDR3? Isn't that quite obsolete already? Why not 2 or 4 GB of GDDR5?
Ty, lots of info in here.
I believe I now understand; what I don't get is why Microsoft, with all their experience and everything, made that mistake.
Seems like the PS4 has a ton of advantages :O
Talk to me like I'm a total idiot that doesn't know anything about this stuff, because that's what I am. This is something at the heart of lots of discussions and articles, and all of these things are written for people that already understand the basic facts.
Thanks in advance.
Can you elaborate on this at all? How is something like Forza 5 managing to output native 1080p at 60fps? No AA and no deferred rendering? Is this a permanent bottleneck, or do you think new techniques will be developed during this gen to overcome it?
So much I've learned from this thread. Now I know exactly what's wrong with eSRAM. But to parrot a question other people have asked: why did Microsoft go for DDR3? Isn't that quite obsolete already? Why not 2 or 4 GB of GDDR5?
Microsoft's approach to asynchronous GPU compute is somewhat different to Sony's - something we'll track back on at a later date. But essentially, rather than concentrate extensively on raw compute power, their philosophy is that both CPU and GPU need lower latency access to the same memory. Goossen points to the Exemplar skeletal tracking system on Kinect on Xbox 360 as an example for why they took that direction.
http://www.eurogamer.net/articles/digitalfoundry-vs-the-xbox-one-architects
Isn't eSRAM much lower latency than the other RAM types because it's embedded? So while the bandwidth is lower, it's quicker?
Is eSRAM like CPU cache in that way? Or do both these systems have CPU caches, and is there any difference there?
The question was why MS couldn't settle for 2GB or 4GB of GDDR5, the way Sony was planning on doing.
Wasn't there some hesitation on Sony's side even? I mean, they were at 4GB pre-reveal then changed it, so I wonder if they had found in the days leading up to the reveal that they would be able to manage 8GB without a crazy price hike/shortage.
Yes, it's very low latency, but the CPU doesn't have enough bandwidth access to it to make use of it, and it's wasted on the GPU since GPUs are so parallel that they are very latency tolerant. GDDR5 has high latency but massive bandwidth, which makes it perfect for a GPU. Even Microsoft's own engineer admitted as much in a DF interview.
What makes GDDR5 have high latency in the PS4, or is that an inherent flaw of the GDDR5 technology?
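On the "GPUs are latency tolerant" bit in the quote: the usual rule of thumb is Little's law, i.e. you need roughly bandwidth x latency bytes in flight to hide the latency, and a GPU with thousands of threads can keep that many requests outstanding while a handful of CPU cores can't. Rough illustration (the latency figure is a ballpark assumption, not a measured PS4 number):

```python
bandwidth = 176e9      # bytes/s, PS4 GDDR5 peak
latency = 200e-9       # seconds round-trip to DRAM (assumed ballpark)

in_flight = bandwidth * latency            # Little's law: data needed in flight
print(f"{in_flight / 1024:.0f} KB outstanding to saturate the bus")        # ~34 KB
print(f"= {in_flight / 64:.0f} requests of one 64-byte cache line each")   # ~550 requests
```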
Forza manages 1080p by having an early last gen forward renderer and no AA. So it gets the magical native 1080p at the expense of jaggies and not being able to have dynamic lighting or other modern effects.
Thanks, and ouch at no AA.
If bi-directionality was unique to eSRAM this would be a great point, but GDDR5 is also bi-directional.
So if you want to inflate the bandwidth numbers, go ahead and use the by two multiplier, but do it for both to be mathematically consistent. This is not a recommendation by me, by the way. I think doing so would be silly and complicate all comparisons in the future.
(The additional point that CryTek are amazing for getting better bandwidth numbers in their release game than lab tests by Microsoft is also worth pointing out every time. But then again they are wizards because they release 1440p screenshots for a 900p game.)
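To make the "do it for both" point concrete, here's the same multiplier applied to both pools; the gap between them obviously doesn't move (using the 800 MHz eSRAM figure):

```python
esram, gddr5 = 102.4, 176.0    # GB/s, one-direction peaks

for name, bw in (("eSRAM", esram), ("GDDR5", gddr5)):
    print(f"{name}: {bw:.1f} GB/s one-way, {2 * bw:.1f} GB/s if you count read+write separately")

print(f"ratio either way: {gddr5 / esram:.2f}x")   # ~1.72x
```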
Entrecôte said:
So DDR3 is better for some tasks? Why? It's still slower from what I can tell looking at the graphs.
No, it's not better. It's just that what Microsoft needed to realize its vision (Snap, apps, Kinect, TV, etc.) was a lot of RAM, as opposed to very fast RAM, which is why they went with the slower DDR3. They could have 8 GB for sure. On the other hand, it appears that Sony prioritized really fast RAM over quantity of RAM (which they obviously still wanted to maximize). At the time, it was unknown if they would even be able to get 8 GB of GDDR5 in there.
I don't have any experience working with frame buffers but it does not make sense to me to compress data in a frame buffer. I cannot imagine doing math calculations on compressed data efficiently. Maybe a game dev will chime in.
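For what it's worth, frame buffer compression on GPUs is normally lossless and transparent: the hardware compresses tiles on write and decompresses on read, so the shader still does its math on plain uncompressed values; the win is mostly bandwidth, and only sometimes footprint. I don't know what the Xbox One actually does here, but as a toy illustration of why render targets compress well (big flat or slowly changing regions), here's a hypothetical delta + run-length encode of one scanline:

```python
def compress_scanline(pixels):
    """Toy lossless scheme: deltas between neighbours, with zero runs collapsed."""
    deltas = [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]
    out, zeros = [], 0
    for d in deltas:
        if d == 0:
            zeros += 1
            continue
        if zeros:
            out.append(("run", zeros))
            zeros = 0
        out.append(("delta", d))
    if zeros:
        out.append(("run", zeros))
    return out

# A scanline of sky: 1900 identical pixels, then a 20-pixel gradient.
scanline = [200] * 1900 + [200 + i for i in range(20)]
print(len(scanline), "pixels ->", len(compress_scanline(scanline)), "tokens")  # 1920 -> 21
```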
Forza manages 1080p by having an early last gen forward renderer and no AA. So it gets the magical native 1080p at the expense of jaggies and not being able to have dynamic lighting or other modern effects.
Read up on tile rendering to see how devs can work around this. They faced the same issues with the 360. Its 10MB eDRAM wasn't enough for 720p and MSAA, but devs eventually worked around this to achieve just that.
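A minimal sketch of the tiling idea (numbers and the strip-shaped tiles are arbitrary, just to show how a frame that doesn't fit gets carved into eSRAM-sized pieces and rendered in passes):

```python
width, height = 1920, 1080
bytes_per_pixel = 5 * 4            # hypothetical G-buffer: five 32-bit targets per pixel
budget = 32 * 1024 * 1024          # 32 MB of eSRAM

# Split the frame into horizontal strips that each fit in the budget,
# then run the geometry once per strip (the 360-era "predicated tiling" approach).
rows_per_tile = budget // (width * bytes_per_pixel)
tiles = -(-height // rows_per_tile)       # ceiling division

print(f"{rows_per_tile} rows per tile, {tiles} passes to cover 1080p")   # 873 rows, 2 passes
```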
isn't that 176 gbps already bi-directional? that 102 gbps isn't, and that's why microsoft uses 204 gbps. their mathematical mistake is adding both esram and ddr3 as 271 gbps or some other bullcrap.
This was my original impression too. Now I'm confused!
again, due to overheating concerns, microsoft is apparently targeting 130-140 gbps on its esram.
As Cerny said there were two ways to go with the unified memory design, though. One DDR3 pool of memory with eSRAM to help the GPU make up for the slow speed of DDR3, or one pool of GDDR5.
Cerny was talking about DDR3 (68GB/s) + eDRAM (1000GB/s), not DDR3 (68GB/s) + eSRAM (102GB/s).
again, due to overheating concerns, microsoft is apparently targeting 130-140 gbps on its esram.
Where is that overheating thing from? I'm only familiar with the technical fellow interview by Digital Foundry where 150GB/s was the maximum they could get in the lab. I don't remember anything about heat being brought up there as an explanation.
Too large a wafer size, too many transistors and too expensive. Problem is, with deferred rendering, 32MB is simply not large enough for a 1080p frame buffer. Hence why 720p/900p will be commonplace on the Xbox One. Can't really avoid it altogether because the DDR3 doesn't really have the bandwidth to compete.
Devs managed to do deferred rendering with a 720p framebuffer and up to four targets on 360, though that was admittedly pretty hard to do and therefore rare. Not all render targets need to be full resolution and color.
Non-layman post about it here.
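To illustrate the "not all render targets need to be full resolution and colour" point, here's the same kind of made-up G-buffer with some targets dropped to smaller formats or half resolution (purely hypothetical layout):

```python
MB = 1024 * 1024

def rt_bytes(w, h, bytes_per_pixel):
    return w * h * bytes_per_pixel

# Naive layout: five 32-bit targets at 1080p.
naive = 5 * rt_bytes(1920, 1080, 4)

# Trimmed layout: two 32-bit targets, one 16-bit target,
# one half-resolution 32-bit target, plus 32-bit depth.
trimmed = (2 * rt_bytes(1920, 1080, 4) + rt_bytes(1920, 1080, 2)
           + rt_bytes(960, 540, 4) + rt_bytes(1920, 1080, 4))

print(f"naive:   {naive / MB:.1f} MB")     # ~39.6 MB
print(f"trimmed: {trimmed / MB:.1f} MB")   # ~29.7 MB, now inside 32 MB
```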
What makes GDDR5 have high latency in the PS4 or is that an inherent flaw of the GDDR5 technology?
Could you cite the latency numbers that are not based around the different memory controllers in use for the different configuration in which that memory has been used in the past? (Where GDDR5 was strictly a GPU thing.)
Latency is a function of how close the RAM is to the CPU/GPU. For eSRAM it's embedded.
Yeah, I just would like to know numbers, preferably on the AMD architecture, just so I understand what the cost is to travel beyond the die.