In Dragon Ball terms, how far behind is MS compared to Sony if the rumored specs are true?
PS4 - Android 17
Durango - Android 18
It does not matter, Microsoft's architecture is good as it is. It's using DDR3, but a very high-frequency DDR3, not the same kind the Wii U is using.
Wii U - Mr. Satan
Let's use Dragon Ball instead
PS4- Goku w/ tail
360- Krillin
GDDR Goat?
From what I heard somewhere, eSRAM has lower latency.
Microsoft had the performance crown the past two gens, so it's surprising to think they may not this time, especially if it ends up being as decisive a loss as it looks now with the rumored specs.
Why is the Xbox doing this? We don't know what they're doing. Yeah, but why are they doing it?
It's like everyone heard about gddr5 for the first time and decided that without a gddr5 memory solution a device is weak.
Are we gonna be talking about memory this whole generation?
There is better memory than gddr5 out there btw.
I think those same people forget that the PS4 only has a confirmed 1.8 TFLOPS.
I think many people forget that MS has a rumored 1.2 TF GPU. Going for GDDR5 with that GPU doesn't make any sense. Bandwidth only helps if you have the power to push things through that bandwidth.
Right now they don't need GDDR5; using it would be a waste of resources. Instead they chose a slower, bigger pool of memory, and because it is too slow for the system they patched it with eSRAM to give a boost to bandwidth.
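One way to put that balance argument in numbers, as a rough back-of-the-envelope sketch (the 176 GB/s, 68 GB/s, 1.8 TF and 1.2 TF figures are just the rumored/confirmed numbers from this thread, nothing official):

```python
def bytes_per_flop(bandwidth_gb_s, tflops):
    """Rough ratio of memory bandwidth to compute: bytes deliverable per FLOP."""
    return bandwidth_gb_s / (tflops * 1000)

ps4 = bytes_per_flop(176, 1.8)      # GDDR5 feeding a 1.8 TF GPU -> ~0.10 B/FLOP
durango = bytes_per_flop(68, 1.2)   # DDR3 alone feeding a 1.2 TF GPU -> ~0.06 B/FLOP
```

On these rumored numbers the DDR3 pool alone does leave the GPU comparatively starved, which is exactly the gap the eSRAM is supposed to plug for the hottest data.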
Can someone explain how latency will benefit a GPU that will mostly perform vector accesses from L1 and L2 cache and not from esram directly? I don't understand how it will benefit the CPU much either as cache miss are fewer and fewer nowadays. Are gaming algorithm more prone to generate cache miss?
Anyway, if someone has a good explanation for that, I'm genuinely eager to read from it because this argument is getting championed and nobody seems to be able to back it up.
So separating the RAM would cause issues with what, exactly? I thought the PS3 problem was that the CPU and GPU could each only access half? What if the Durango CPU and GPU could access both the 2 GB pool and the 6 GB pool of RAM? Would it be too difficult/expensive to create the same bandwidth channels, since it would be 4 buses instead of 2?
Has anyone tried to predict what the effective bandwidth of Durango would be?
I thought that rumor (which site was it again?) sounded like BS when the person leaking it just added the eSRAM and DDR3 bandwidth numbers together...
This is not a question of will they do this, but why didn't they do this.
So Durango is rumored to have 8 GB of DDR3 with 32 MB of eSRAM. I understand that MS cannot just swap the DDR3 for GDDR5 in any efficient way without delaying the console. It sounds like 8 GB of GDDR5 was too expensive for them, while eSRAM wasn't, even though eSRAM is pretty pricey too. But why didn't MS just use 6 GB of DDR3 with 2 GB of GDDR5? Change the balance between GDDR5 and DDR3 as much as needed to make it price-efficient. Even if they only did 512 MB of GDDR5, this sounds like it makes more sense to me, someone who doesn't know much. Here's why I think it makes more sense.
Their current setup allows a max bandwidth of 170 GB/s if they use the eSRAM as efficiently as possible, with a minimum bandwidth of about 68 GB/s. They are limited by the fact that only 32 MB of the RAM is the much faster kind.
My idea would remove the eSRAM and replace it with 512 MB - 2 GB of GDDR5, with the remaining RAM to make up 8 GB being DDR3. Wouldn't this still allow a max bandwidth of 176 GB/s and a minimum bandwidth of 68 GB/s, except now the hyper-fast RAM would have a capacity between 512 MB and 2 GB?
Thoughts?
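For reference, here's roughly where those peak numbers come from (a sketch; the transfer rates and the 256-bit bus width are my assumptions based on typical parts, not confirmed specs):

```python
def peak_bandwidth_gb_s(transfer_rate_mt_s, bus_width_bits):
    """Peak bandwidth = transfers per second * bytes moved per transfer."""
    return transfer_rate_mt_s * 1e6 * (bus_width_bits // 8) / 1e9

ddr3 = peak_bandwidth_gb_s(2133, 256)   # DDR3-2133 on a 256-bit bus -> ~68 GB/s
gddr5 = peak_bandwidth_gb_s(5500, 256)  # 5.5 Gbps GDDR5 on a 256-bit bus -> ~176 GB/s
```

So the 68 vs 176 GB/s gap is purely a transfer-rate difference on the same bus width, which is why the capacity split in the proposal doesn't change either peak figure.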
A very similar thing happened the last time around with XDR, and then when people started seeing both consoles' real-life results, it quickly died down.
I think those same people forget that the PS4 only has a confirmed 1.8 TFLOPS.
Latency isn't a big issue for GPU operations, where you're generally working on larger quantities of data at a time and that data demand can be predicted in advance. This is why high-end graphics cards use GDDR5: its high bandwidth is the big gain for graphics processing.
Latency is a big deal for general CPU operations where the CPU will demand a lot of small packets on short notice.
Imagine you ran a parts supply company with a wide array of products and had the choice between two delivery contractors.
Delivery contractor A can deliver 176 crates of various parts a day to your customers, but you need to supply those crates 2 days in advance so they can load the trucks correctly.
Delivery contractor B can deliver only 68 crates of various parts a day to your customers, but also only needs to receive the crates 1 day in advance from you.
If your customers know what they need long term and are looking to go through large quantities of your parts they'll be best serviced if you choose delivery contractor A. If your customers have a hard time predicting day to day usage and are making a wide array of demand based products they would be best serviced by contractor B, assuming contractor B can meet demand.
A GPU is the first kind of customer. It's easy to predict what resources it will demand next and therefore your biggest concern is just giving it enough to work with.
A CPU is the second kind of customer. What we as end users choose to do within the OS can change at the drop of a hat and the CPU needs to service that as much as possible. Being able to deliver 176 crates of parts in a day doesn't do much for your customer if those are the wrong parts. Being able to deliver data extremely fast to the CPU doesn't do the CPU much good if it isn't the right type of data.
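The analogy above can be turned into a toy cost model (all the latency numbers here are made up for illustration; only the 176 vs 68 GB/s split comes from the thread):

```python
def access_time_ns(payload_bytes, latency_ns, bandwidth_gb_s):
    """Toy model: each request pays a fixed latency, then streams at peak
    bandwidth. Conveniently, 1 GB/s is 1 byte per nanosecond."""
    return latency_ns + payload_bytes / bandwidth_gb_s

# GPU-style workload: one big, predictable 1 MB streaming read.
gpu_on_gddr5 = access_time_ns(1_000_000, 120, 176)  # high bandwidth, higher latency
gpu_on_ddr3 = access_time_ns(1_000_000, 60, 68)     # lower latency, lower bandwidth

# CPU-style workload: 10,000 dependent 64-byte reads, each paying full latency.
cpu_on_gddr5 = 10_000 * access_time_ns(64, 120, 176)
cpu_on_ddr3 = 10_000 * access_time_ns(64, 60, 68)
```

The big streaming read finishes sooner on the high-bandwidth pool, while the pointer-chasing workload finishes sooner on the low-latency pool, which is the whole point of the contractor analogy.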
What you are suggesting would result in either developers having to really muck around with how they address the two pools, or a lot of what amounts to double handling of data, wasting resources. More than likely a mix of the two. What I mean by double handling is that you would often see situations where the DDR3 pool would be used to pre-fetch data for the GDDR5 pool, making it a less effective solution.
Also, as Brad Grenz said in this thread, you start running into serious bus logistics that will start handicapping both pools. This is why ESRAM is beneficial.
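To make the "double handling" point concrete, here's a hypothetical sketch of what juggling a small fast pool looks like from the developer's side (the 32 MB figure is from the rumor; everything else, function names and sizes included, is invented for illustration):

```python
ESRAM_BYTES = 32 * 1024 * 1024  # rumored 32 MB fast pool

def assign_pools(render_targets):
    """Hypothetical scheduler: stage the biggest bandwidth-hungry targets
    into the fast pool until it's full; everything else stays in slow DDR3.
    Every target placed in the fast pool implies an extra DDR3 -> fast-pool
    copy each frame -- the "double handling"."""
    free = ESRAM_BYTES
    fast, slow = [], []
    for name, size in sorted(render_targets, key=lambda t: t[1], reverse=True):
        if size <= free:
            fast.append(name)
            free -= size
        else:
            slow.append(name)
    return fast, slow

# A fat 1080p G-buffer plus color and depth can already overflow 32 MB:
fast, slow = assign_pools([("gbuffer", 24 * 2**20), ("color", 8 * 2**20),
                           ("depth", 8 * 2**20)])
```

Whether the hardware handler hides most of this or developers do it by hand is exactly the unknown being argued about here.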
That depends WAY too much on MS' implementation of the two pools, both in a hardware handler (which we know it has) and how much of the access developers will have to customize flow between the two pools and to the silicon. It's an unknown at this point.
I think the 8 GB of GDDR5 is awesome, and we will eventually see if it has the desired effect. I just think it's funny how people are suddenly so interested in memory and memory bandwidth. It's like Sony knew GAF would freak out about it.
Yes, but we aren't talking 8 GB on these cards. MS might have GDDR5 on its GPU.
True, but there is a point where you need to get better RAM.
GDDR5 was already being put into 1.5 TF cards; before that point GDDR3 was used, mostly on cheaper cards that couldn't output high resolutions with good framerates.
DDR3 stacked runs circles around GDDR5.
I think it was a great decision. I think how the talk shifts is so funny, though. People are setting themselves up to believe that the next Xbox is weak as hell unless it has at least 8 GB of GDDR5. I just think it's funny.
It's the one thing developers have been clamoring about this entire fucking generation.
Sony not only delivered the goods in that department, they doubled down on it, making for happy developers that can look forward to working on a platform that isn't hideously constrained in every way possible.
If we had been given the kinds of details we currently have for the PS4, and rumored for the X720, with last generation's systems, the various bottlenecks and obstructions in the PS3 would have been spotted very quickly.
I don't think you understand how the hardware works.
It's a logistical nightmare for developers if the RAM isn't unified. In your scenario, they would have to specifically code which RAM to use for which data.
Also, correct me if I'm wrong but there would be issues connecting the different types of RAM to the same bus while maintaining a high throughput.
Why are people shitting the bed because Microsoft's console might not have exactly the same specs as Sony's?
I really don't see the point of trying to 2nd guess Microsoft before the product is even in anybody's hands to test in the first place. You don't even know how it will perform outside of the lab and you're already giving Microsoft shit about their design decisions? Are you an electrical engineering god with a crystal ball that allows you to divine how developers will fare on every platform?
Let's pretend that the PS4's memory implementation is better in every way, shape or form. Even then it is pointless to judge it in a vacuum devoid of pricing information, devoid of library considerations.
Ease down Ripley, ease down, you're just grinding the trans-axle.
Not to shoot down your post in such a lazy way, but latency becomes far less important as speed increases. The sheer speed of GDDR5 VRAM is what essentially compensates for its higher latency. It's been that way each RAM generation, actually, with older models of RAM having lower latency but newer models being faster and/or having more bandwidth.
I.e., DDR actually has lower latency (in clock cycles) than DDR2, DDR2 lower than DDR3, and DDR3 lower than DDR4. But it matters little, because the latter are much faster, and the added speed is far more useful.
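You can see this in the actual numbers: CAS latency in clock cycles climbs each generation, but in nanoseconds it stays in the same ballpark, so the bandwidth gains dominate (a sketch using typical timings I'm assuming for each generation, not exhaustive ones):

```python
def cas_latency_ns(cl_cycles, transfer_rate_mt_s):
    """Absolute CAS latency in ns: cycles divided by the memory clock,
    where the clock is half the transfer rate for double-data-rate RAM."""
    clock_mhz = transfer_rate_mt_s / 2
    return cl_cycles / clock_mhz * 1000

ddr_400 = cas_latency_ns(3, 400)      # DDR-400  CL3 -> 15.0 ns
ddr2_800 = cas_latency_ns(5, 800)     # DDR2-800 CL5 -> 12.5 ns
ddr3_1600 = cas_latency_ns(9, 1600)   # DDR3-1600 CL9 -> 11.25 ns
```

The cycle counts triple while the wall-clock latency barely moves, which is why trading a few cycles of latency for a big jump in transfer rate has been a win every generation.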