w1gglyjones
Member
Nintendo picked 32 MB also so it must be enough... rite?
This phrase is... so full of contradictions: "...is how Microsoft arrives at 200 GB/s but in reality it's not how any of this works." O_O Practically, that's saying the Xbone arrives at its 200 GB/s because it's bullshit.
Very old article, from before MS revised its ESRAM bandwidth... the total aggregate is now over 270 GB/s IIRC.
Actual ESRAM is 204 GB/s, theoretical peak.
32MB is plenty to double-buffer 1920x1080 at 32 bits. The ESRAM was designed for the framebuffer, just like the eDRAM on the Wii U, and it can be used as a scratchpad.
Back-of-the-envelope calculation: a 1280x720 frame with no AA is ~7MB, so a 1920x1080 frame is 2.25x bigger at ~15.75MB, which means 32MB is just barely big enough to double-buffer.
Edit: based on an article about the 360 eDRAM.
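A rough sketch of that arithmetic, assuming the ~7MB/720p figure means about 8 bytes per pixel (32-bit colour plus 32-bit depth/stencil, no AA) -- that per-pixel breakdown is my assumption, not something stated in the post:

```python
# Back-of-the-envelope framebuffer sizes, assuming ~8 bytes per pixel
# (32-bit colour + 32-bit depth/stencil, no AA) -- an assumption.
BYTES_PER_PIXEL = 8

def frame_mb(width, height, bpp=BYTES_PER_PIXEL):
    return width * height * bpp / (1024 * 1024)

f720 = frame_mb(1280, 720)     # ~7.0 MB
f1080 = frame_mb(1920, 1080)   # ~15.8 MB (2.25x the pixels of 720p)

print(f"720p:  {f720:.2f} MB")
print(f"1080p: {f1080:.2f} MB, double-buffered: {2 * f1080:.1f} MB vs 32 MB of ESRAM")
```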
We all know Microsoft's strategy in choosing the memory architecture for the Xbone - split into the main DDR3 and on-die ESRAM pools. This thread isn't about that. It's not about a comparison with the PS4 APU either. I'm talking about the size Microsoft chose for the ESRAM - 32MB.
I'll start off with a quote from Anandtech's Xbone architecture analysis:
So Anand notes that if the ESRAM is used as a cache (which it is), it will be a significant performance benefit for current workloads, but he also notes that there isn't much room for growth.
Moving on to another Anand piece, on the GT3e (Iris Pro 5200)...
So on one hand we have a console APU that needs to be future-proofed for at least 5 years, and Microsoft chose 32MB; on the other hand we have a notoriously miserly Intel, very conservative with its die sizes, and they decided to future-proof theirs with 128MB.
To me this reads like a non-issue right now, but years down the road devs may have to play the optimization game more and more with the 32MB of ESRAM.
Very old article, from before MS revised its ESRAM bandwidth... the total aggregate is now over 270 GB/s IIRC.
Actual ESRAM is 204 GB/s, theoretical peak.
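For what it's worth, the "over 270" aggregate only falls out if you simply add the two pools' theoretical peaks together -- napkin math, assuming the usual 204 GB/s ESRAM and 68 GB/s DDR3 figures:

```python
# Napkin math for the "aggregate" figure -- assuming it is just the two
# theoretical peaks added together (204 GB/s ESRAM + 68 GB/s DDR3).
esram_peak_gbs = 204
ddr3_peak_gbs = 68
print(esram_peak_gbs + ddr3_peak_gbs)  # 272 -> "over 270"
```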
If we know anything at all about console development it's that
a) they are always outdated quicker than we hope
b) developers are always crafty about finding workarounds
You could have said the exact same thing about the PS3 having 256MB of main RAM and 256MB VRAM in 2005 compared with the 6-8GB main RAM and 2GB+ VRAM they have now. And it still managed this:
I honestly don't see why this is different.
Nintendo picked 32 MB also so it must be enough... rite?
I have been told it doesn't work like that....?
Memory is memory and access times are insignificant.
Depending on how the eSRAM is managed, it's very possible that the Xbox One could have comparable effective memory bandwidth to the PlayStation 4. If the eSRAM isn't managed as a cache, however, this all gets much more complicated.
Back-of-the-envelope calculation: a 1280x720 frame with no AA is ~7MB, so a 1920x1080 frame is 2.25x bigger at ~15.75MB, which means 32MB is just barely big enough to double-buffer.
Edit: based on an article about the 360 eDRAM.
No more. My head hurts. Let's just play games.
This phrase is... so full of contradictions: "...is how Microsoft arrives at 200 GB/s but in reality it's not how any of this works."
O_O
So the Xbone arrives at its 200 GB/s because... it's a lie?
Show me a BOM estimate that says that. I don't think you're necessarily wrong, but I've never seen yield mentioned in a BOM estimate, so I assume they're simply billing the wafer cost divided by dies per wafer. Still, I expect the yields of these parts to be somewhat similar since they're identical in a lot of ways.
Wasn't 130GB/s the practical peak, though?
LOL, what??? And why would anyone waste that memory in this way? Once the final buffer is calculated you store it in main RAM. Keeping it in the eDRAM would waste 16MB of that memory just to save a mere 15MB/frame of memory bandwidth.
Back-of-the-envelope calculation: a 1280x720 frame with no AA is ~7MB, so a 1920x1080 frame is 2.25x bigger at ~15.75MB, which means 32MB is just barely big enough to double-buffer.
Edit: based on an article about the 360 eDRAM.
Yes, and GDDR5's 176GB/s is a theoretical max as well, just like the 204GB/s ESRAM and 68GB/s DDR3 figures.
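Where those theoretical peaks come from, as a sketch -- assuming the commonly cited configurations (a 256-bit GDDR5 interface at 5.5 Gbps for the PS4 and 256-bit DDR3-2133 for the Xbox One), which are widely reported figures rather than something from this thread:

```python
# Theoretical peak = bus width (bytes) x transfer rate, assuming the commonly
# cited configs: 256-bit GDDR5 @ 5.5 Gbps (PS4) and 256-bit DDR3-2133 (XB1).
def peak_gbs(bus_bits, transfers_per_sec):
    return (bus_bits / 8) * transfers_per_sec / 1e9

print(peak_gbs(256, 5.5e9))    # ~176 GB/s (GDDR5)
print(peak_gbs(256, 2.133e9))  # ~68 GB/s  (DDR3)
```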
So more than likely we should see 1080p running smoothly, just with plenty of staircases?
What does it matter if it's listed or not? No one gets 100% yield. So whether it's listed on the BOM or not makes no difference, because the cost is still there.
Where do you go to learn this stuff? Computer Engineering class? Cuz from what I've seen of comp sci and IT they don't go over hardware in this fashion.
But if the BOM estimators figure cost by wafer and how many dies a wafer gives, they may not be accounting for yield. That's my point.
If we know anything at all about console development it's that
a) they are always outdated quicker than we hope
b) developers are always crafty about finding workarounds
You could have said the exact same thing about the PS3 having 256MB of main RAM and 256MB VRAM in 2005 compared with the 6-8GB main RAM and 2GB+ VRAM they have now. And it still managed this:
I honestly don't see why this is different.
Where do you go to learn this stuff? Computer Engineering class? Cuz from what I've seen of comp sci and IT they don't go over hardware in this fashion.
Well, I guess you can, since Anand said this:
...
and from what we know so far, Microsoft intended to use it as a cache. But I wouldn't be the one to ask about this tech stuff; I just thought it was interesting to see a DDR3/ESRAM vs GDDR5 comparison from someone besides Microsoft, so I posted it.
Seriously, can we ditch "XBOne"? It sounds stupid and juvenile. Can't we just all agree to use "XBO" instead? I mean, on one hand we all demand to be taken seriously by the industry, yet on the other we're still making dildo jerk off boner jokes at every passing opportunity. C'mon, sons.
On topic: I know nothing about tech specs, but I do know that clever artists and engineers will find ways to use the hardware to their advantage and still make beautiful games.
XB1 is fine with me. Though I usually demand that the industry becomes serious first before I stop my dildo jerk off boner jokes.
Seriously, can we ditch "XBOne"? It sounds stupid and juvenile. Can't we just all agree to use "XBO" instead? I mean, on one hand we all demand to be taken seriously by the industry, yet on the other we're still making dildo jerk off boner jokes at every passing opportunity. C'mon, sons.
Erm, neither, actually. I just Googled for the game and posted one of the first screenshots I saw. Does filtering and AA have much to do with RAM quantity? I was under the impression it doesn't. My point was more about texture quality and the like.
They sure as heck didn't manage "this" in terms of the final image quality of that game. The game is great, and incredibly well-executed in both gameplay and technical achievement, but please don't post a bullshot (in terms of filtering and AA) as an indication of what was achieved on the hardware, as it either shows you aren't familiar with the game, or you're hoping others aren't.
Yes, it caused issues. I was making a point about canny developers finding workarounds.
And using the split 256/256 memory pool for the PS3 did cause constant issues that had to be worked around in a way which wasted resources that would otherwise have allowed more efficient development efforts.
It's only offensive or annoying if you let it be. Seriously, it's just "Xbone" or just "The 'bone" by this point. Unless the poster is explicitly making a boner comparison or some such, there's no harm in it in and of itself.
XB1 is fine with me. Though I usually demand that the industry becomes serious first before I stop my dildo jerk off boner jokes.
If we know anything at all about console development it's that
a) they are always outdated quicker than we hope
b) developers are always crafty about finding workarounds
You could have said the exact same thing about the PS3 having 256MB of main RAM and 256MB VRAM in 2005 compared with the 6-8GB main RAM and 2GB+ VRAM they have now. And it still managed this:
I honestly don't see why this is different.
Don't say "they managed this" and then post a fucking bullshot which looks 10 times better than the actual game and doesn't factor in the bad framerate. From a technical standpoint, The Last of Us is a severely flawed game, and it's quite bad by 2013 standards. If anything you're pointing out quite the opposite of what you intended - consoles are too weak and it shows, making devs compromise the quality of their titles on both the technical AND gameplay layers.
Anandtech said: All information points to 32MB of 6T-SRAM, or roughly 1.6 billion transistors for this memory. It's not immediately clear whether or not this is a true cache or software-managed memory. I'd hope for the former but it's quite possible that it isn't.
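That 1.6 billion figure checks out from the cell type alone -- 6T-SRAM uses six transistors per bit:

```python
# Sanity check on Anand's "roughly 1.6 billion transistors":
# 6T-SRAM uses six transistors per bit cell.
bits = 32 * 1024 * 1024 * 8   # 32MB expressed in bits
print(bits * 6 / 1e9)         # ~1.61 billion transistors
```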
You're in a very small minority with that opinion.
Any reason, except redundancy, why they would make the ESRAM in blocks of 8MB?
A 1080p, 32-bit render target is almost 8MB.
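The "almost 8MB" is just the raw pixel math, assuming 4 bytes (32 bits) per pixel for the colour target:

```python
# One 1920x1080 render target at 4 bytes (32 bits) per pixel.
rt_bytes = 1920 * 1080 * 4
print(rt_bytes / (1024 * 1024))  # ~7.91 MB -- just under one 8MB ESRAM block
```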
Agreed. TLoU was a great game but anybody using it as an example of amazing graphics is just fooling themselves. The image quality is nothing like that screenshot thrown around in this thread and the framerate was very unstable, especially during larger outside areas. Heck, I even saw seams popping up in some dark areas of the environment.
I don't think it's totally unfair. I enjoyed the game in spite of a framerate and image quality that did distract from the presentation. What was achieved on the PS3 is impressive relative to its age and architecture (and it does stand on its own), but compared to what's been achieved elsewhere, I can understand some annoyance at people calling it anything close to a marvel. Flawed is a fair word for it. Severely flawed is overstating it, but as a response to an image that doesn't represent the quality of the game, I get it.
Make it X1 - one less letter to type...
2) How can they assure "109GB/s minimum bandwidth"?
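(One plausible reading of that figure, assuming the widely reported 853 MHz GPU clock and a 128-byte-per-cycle ESRAM interface -- neither of which comes from this thread: the minimum is simply one access per cycle, with the 204 GB/s peak relying on reads and writes overlapping on most cycles.)

```python
# A guess at the ESRAM numbers, assuming a 128-byte-per-cycle interface at the
# widely reported 853 MHz GPU clock (assumptions, not figures from the thread).
clock_hz = 853e6
bytes_per_cycle = 128

min_bw = clock_hz * bytes_per_cycle / 1e9  # ~109 GB/s: one access every cycle
peak_bw = min_bw * (1 + 7 / 8)             # ~205 GB/s: read + write overlap on ~7 of 8 cycles
print(f"min ~{min_bw:.0f} GB/s, peak ~{peak_bw:.0f} GB/s")
```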
The Anandtech article is old, but the information pertinent to the discussion in this thread (32MB ESRAM) is still relevant. Not sure why your jab was required, though.
Old articles are old. Good job, as usual.
I'm not saying TLOU is without compromises, though; merely that it was achieved with the not-exactly-forward-thinking 256MB of VRAM, hence supporting my point that developers are canny at working around such deficiencies.
GDDR5 is way faster, but what advantages does it yield over the X1? So far it seems that Sony simply got a better GPU, as they didn't need to waste silicon to compensate.
Can someone elaborate on 'dat GDDR5'?
Wasn't 130GB/s the practical peak, though?