At that point it's possible that we may see ambitious titles operating at a lower resolution on Xbox One compared to the PlayStation 4.
Ambitious titles?
At that point it's possible that we may see ambitious titles operating at a lower resolution on Xbox One compared to the PlayStation 4.
GDDR5 is good for graphics, less good for CPU operations due to the higher latency.
Please stop this bullshit. GDDR5 and DDR3 latency are comparable. It's the memory controller implementations that add the latency, when people compare Intel's best-in-business DDR3 memory controller with a GPU's GDDR5 controller that optimizes bandwidth over latency.
They can get pretty close to the 176 on a more consistent basis. The original peak theoretical performance for the XBO was 170GB/s but they only really said the 102 number since it was more representative.
176 is also theoretical. Just how specs are advertised.
Not that big of a jump, because their theoretical peak bandwidth was 170GB/s before.
GDDR5 is good for graphics, less good for CPU operations due to the higher latency.
DDR3 is good for the CPU but worse for graphics due to lower bandwidth. ESRAM is there to make up for that, at least a little bit.
GDDR5 is good for graphics, less good for CPU operations due to the higher latency.
Please stop this bullshit. GDDR5 and DDR3 latency are comparable. It's the memory controller implementations that add the latency, when people compare Intel's best in business DDR3 memory controller with a GPU's GDDR5 controller that optimizes bandwidth over latency.
It's crazy you are still spreading this bullshit. You are a better poster than this.
32MB is still too little though, but good news nonetheless.
Two different pools of memory do not produce a constant 192GB/s. The PS4 is one unified pool at 176GB/s without an added step (ESRAM). It was posted earlier by another poster. I don't think I have to outline why the "theoretical" number is a bit misleading when talking about overall performance.
LOL math up in here. Looks like he edited.
Microsoft tells developers that the ESRAM is designed for high-bandwidth graphics elements like shadowmaps, lightmaps, depth targets and render targets. But in a world where Killzone: Shadow Fall is utilising 800MB for render targets alone, how difficult will it be for developers to work with just 32MB of fast memory for similar functions?
I think this quote speaks for the most part. Optimizing games for the Xbone will be a much harder task than for the PS4 due to the ESRAM implementation. In other words, the PS4 will be much easier to develop for compared to the Xbone.
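For a rough sense of how far 32MB goes, here's a back-of-the-envelope sizing (my own illustration, assuming uncompressed RGBA8 targets at 1080p; real engines use varied formats and compression):

```python
# Rough sizing of 1080p render targets against the 32MB ESRAM pool.
WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 4  # RGBA8, uncompressed (an assumption for illustration)

target_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
target_mb = target_bytes / (1024 * 1024)  # ~7.91 MB per target
esram_mb = 32

print(f"One 1080p RGBA8 target: {target_mb:.2f} MB")
print(f"Targets that fit in ESRAM: {esram_mb / target_mb:.1f}")
```

So roughly four plain 1080p colour targets fill the whole pool, which is why the 800MB figure above looks so stark against it.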
At that point it's possible that we may see ambitious titles operating at a lower resolution on Xbox One compared to the PlayStation 4.
I am pretty sure that BF 4 won't be 1080p on Xbone. 60 FPS and 1080p is certainly not possible.
GDDR5 is good for graphics, less good for CPU operations due to the higher latency.
DDR3 is good for the CPU but worse for the graphics due to lower bandwith. ESRAM is there to make up for that, at least a little bit.
192GB/s to a small 32MB memory. The PS4 does 176GB/s to 8 gigabytes of memory.
so no.
They are not adding stuff. The eSRAM bandwidth has been boosted by 88% according to the article, which means a jump from 102GB/s to a theoretical 192GB/s. They may not hit the latter but the jump is pretty good. It probably won't allow them to do a lot since they don't have much of it--just 32MB.
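The 88% figure checks out as straightforward arithmetic on the numbers quoted above:

```python
# eSRAM bandwidth boost quoted in the article: 102GB/s raised by 88%.
base = 102              # GB/s, original eSRAM figure
boosted = base * 1.88   # ~191.8 GB/s, i.e. the quoted ~192
print(round(boosted, 1))
```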
How does the cloud come into place in this? As far as I know the Cloud is there so it's possible to have Open World games without pop-in and stuff like that, right?
No, that's not right. The cloud is primarily there to sell people on the idea of always-online, and secondarily to host multiplayer servers, which do what servers have always done.
Oh brother... Once you want to cling to hope, you REALLY cling to it GAF. lol
Well, it's true that higher latency and higher bandwidth are generally better for GPU workloads, and lower latency and lower bandwidth are generally better for CPU workloads.
Again on the RAM, we really wanted to get 8GB and make that power friendly as well, which is a challenge: to get both power friendly for acoustics and get high capacity and high bandwidth. So for our memory architecture we're actually achieving all of that and we're getting over 200GB/s across the memory subsystem.
"At 32MB the ESRAM is more than enough for frame buffer storage, indicating that Microsoft expects developers to use it to offload requests from the system memory bus. Game console makers (Microsoft included) have often used large high speed memories to get around memory bandwidth limitations, so this is no different. Although 32MB doesn't sound like much, if it is indeed used as a cache (with the frame buffer kept in main memory) it's actually enough to have a substantial hit rate in current workloads"
Anandtech.com
That's the same size as the Wii U's eDRAM.... it's small.
How does the cloud comes into place in this? As far as I know the Cloud is there so it's possible to have Open World games without pop in and stuff like that right?
No - PS4 has still the superior GPU specs.
If MS doesn't even know their own system (which is funny), how would you know how they came up with the math? Anyway, like others have said, they can't lie to the developers because it will do nothing but backfire. I already know the Xbox One is supposed to be the weaker system, but clearly they made great strides toward being on equal ground with the PS4.
So did the 360.
Things are getting reeeaaal interesting
Not really any more interesting than they were before.
"At 32MB the ESRAM is more than enough for frame buffer storage, indicating that Microsoft expects developers to use it to offload requests from the system memory bus. Game console makers (Microsoft included) have often used large high speed memories to get around memory bandwidth limitations, so this is no different. Although 32MB doesn't sound like much, if it is indeed used as a cache (with the frame buffer kept in main memory) it's actually enough to have a substantial hit rate in current workloads"
Anandtech.com
Yes? It's not enough when you take into account the competitor's memory setup, which allows them to do a lot more.
Oh GAF... Once you want to cling to hope, you REALLY cling to it...
It's the same bad math that's been used before. You can't just add the speeds of both. That's not how bottlenecks work.
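A toy model of why the addition is bad math (bandwidth figures are the ones quoted in this thread; the traffic scenarios are hypothetical):

```python
# Adding bus speeds only describes fully independent traffic.
DDR3 = 68     # GB/s, main memory (figure from the thread)
ESRAM = 192   # GB/s, theoretical eSRAM peak (figure from the thread)

# Moving data between the two pools is bounded by the slower bus:
copy_rate = min(DDR3, ESRAM)   # 68 GB/s

# Only unrelated requests served fully in parallel approach the sum,
# which is the marketing-style aggregate figure:
aggregate_peak = DDR3 + ESRAM  # 260 GB/s

print(copy_rate, aggregate_peak)
```

Any workload that has to stage data through the small pool lives closer to the first number than the second.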
Apparently, there are spare processing cycle "holes" that can be utilised for additional operations.
Not really any more interesting than they were before.
I know a lot of people knock the Wii U for having a complex set up - how does it compare in terms of complexity to the XBone (not in terms of power)? Assuming a developer was interested in getting peak performance out of both machines?
No devs will hit either 176 on the PS4 or 192 on the Xbone.
I've been saying this over and over: memory bandwidth is not the real advantage the PS4 has over the Xbone. The real advantage is increased GPU area due to not having a bunch of space taken up by ESRAM.
This is something that could only be alleviated (for Microsoft) by super high unattainable clock speeds on the GPU core that simply won't happen.
Hasn't MS recently started saying that the system has more than 200GB/s of available bandwidth?
If their claims about the GPU and CPU seeing both as a combined pool, with added bandwidth for some operations, are true, that figure falls in line with 68 + 133 GB/s.
That was by adding both memories' theoretical bandwidths; if they did this now, the value would obviously be higher XD (260 GB/s, to be precise).
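The arithmetic behind both claims, using the figures quoted in this thread:

```python
DDR3 = 68          # GB/s, DDR3 main memory
ESRAM_QUOTED = 133 # GB/s, eSRAM figure behind the "over 200GB/s" claim
ESRAM_PEAK = 192   # GB/s, theoretical eSRAM peak

print(DDR3 + ESRAM_QUOTED)  # 201, i.e. "over 200GB/s"
print(DDR3 + ESRAM_PEAK)    # 260, the sum of theoretical peaks
```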
So Xbox One was designed for bigger worlds and PS4 for better looking worlds. 3rd parties will still make it the same for both, though.
So did the 360.
And what did we get? Mostly better ports on the 360...
Not only do I smell a bit of bs in this, but in the end you can't simply add both bandwidths together.
Good for Xbox One developers though if true, extra performance is always good.
90% of games will look the same on both. Stop dreaming.
Yeah, this pretty much means nothing.
So if we update the console power to DBZ level..
[PS4, XBone and WiiU power-level images]
I am not entirely sure I am wrong here, and please stop with the implication that everyone is on an agenda. It's what I know, and if I am wrong, I already apologized. Only on GAF can you say something and already be a fanboy clinging to hope or something, ugh...
GDDR5 is good for graphics, less good for CPU operations due to the higher latency.
DDR3 is good for the CPU but worse for graphics due to lower bandwidth. ESRAM is there to make up for that, at least a little bit.