X1 DDR3 RAM vs PS4 GDDR5 RAM: “Both Are Sufficient for Realistic Lighting” (Geomerics)

I don't think so. The GPU in the Xbone is around 1.6 to 1.8B transistors (7770-7790 class). The GPU in the PS4 is around 2.8B transistors (7850 class). The Jaguar cores are pretty much the same across the two; I think 1B is a safe, low estimate for 8 cores. 32MB of 6T ESRAM is going to be about 1.6B transistors.

So we've got roughly 3.8B for the PS4 and 4.4B for the Xbone. That's not counting zlib, the audio DSP, custom memory interfaces, etc. across the two. Even at 5 billion transistors, it's impossible for the Xbone to be double the PS4's 3.8B count. At most it's around 25% more, and that likely doesn't even translate into a 25% bigger die, because ESRAM packs tightly in terms of die area.
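
For what it's worth, here is that arithmetic as a minimal sketch (Python, with the post's own rough estimates baked in as assumptions; the 6T figure is the standard six-transistor SRAM cell):

```python
# Back-of-the-envelope transistor budget, using the rough estimates
# from the post above (figures in billions; none of this is official).

ESRAM_BITS = 32 * 1024 * 1024 * 8        # 32 MB of ESRAM, in bits
ESRAM_6T = ESRAM_BITS * 6 / 1e9          # 6 transistors per SRAM cell -> ~1.6B

xbone = {
    "GPU (7770/7790 class)": 1.7,        # midpoint of the 1.6-1.8B estimate
    "8 Jaguar cores": 1.0,
    "32MB 6T ESRAM": round(ESRAM_6T, 2), # ~1.61B
}
ps4 = {
    "GPU (7850 class)": 2.8,
    "8 Jaguar cores": 1.0,
}

xb, p4 = sum(xbone.values()), sum(ps4.values())
print(f"Xbone ~{xb:.1f}B vs PS4 ~{p4:.1f}B -> ratio {xb / p4:.2f}x")
# ~4.3B vs ~3.8B: roughly 1.13x, nowhere near the 2x that a naive
# "3B vs 5B" comparison would suggest.
```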

If you calculate it like so, fair enough. I was going with the 3 billion transistor for PS4 / 5 billion transistor for Xbox One figures floating around.
 
If you calculate it like so, fair enough. I was going with the 3 billion transistor for PS4 / 5 billion transistor for Xbox One figures floating around.

That's not based in reality then. The PS4 GPU will likely be a little less than 2.8B, because that's the 7870 number and the 7850 is simply a cut-down 7870 on the same die, but there's no better GPU to extrapolate from. It has to be at least greater than the 2.08B of the 7790, which has only 896 SPs and 16 ROPs. That's 14 CUs versus 18 CUs, and 16 ROPs versus 32 ROPs. The GPU alone is likely near half the Xbone's supposed 5B count.
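
A quick sanity check on those SP/CU numbers, assuming the standard GCN layout of 64 stream processors per compute unit:

```python
# GCN: 64 stream processors (SPs) per compute unit (CU)
for name, cus, rops in [("7790 (Bonaire)", 14, 16),
                        ("PS4 GPU (rumored)", 18, 32)]:
    print(f"{name}: {cus} CUs x 64 = {cus * 64} SPs, {rops} ROPs")
# 7790: 14 x 64 = 896 SPs, matching the 896 SP figure above
```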
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
Good for him, product manager != engineer. You can't perform higher than your theoretical maximum.

So you're saying you know more about the performance of the Xbox One than the product manager at AMD. You are starting to look stupid.
 

Truespeed

Member
Not really shocking news, but interesting to see someone involved in the business acknowledging that the PS4 is more powerful - at least in terms of RAM.

You would either need to be a technical idiot or a liar to not acknowledge that the PS4 is more powerful.
 

i-Lo

Member
Pertaining to the article: I'm wondering if corroborating evidence exists for the claim of 7GB being reserved for games on PS4.
 
Yes, it does patch over the bandwidth problem, but in combination with the move engines it is specifically designed to keep those compute units fed. Anyone who has read the VGLeaks info on the move engines can see that's one of their purposes. If you don't believe they'll achieve that goal in at least some meaningful way over a plain 12 CU GPU, I don't know how to help you.

It's like adding a re-order buffer or branch predictor to a CPU. It's a well-accepted method of boosting IPC.

It's still going to vastly underperform compared to the 18 CUs in the PS4, but it won't be a strict 50% boost / 33% reduction either; it will vary depending on the type of workload.

Erm... it doesn't change the theoretical performance... it doesn't increase bandwidth, it doesn't change the power.

You should probably look into what DMEs do.
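
To make the "keep the CUs fed" claim above concrete: the pattern being described is classic double buffering, where the GPU works on one tile resident in ESRAM while a move engine streams the next tile in from DDR3. A hypothetical sketch; the objects and method names here are illustrative, not any real XDK API:

```python
# Illustrative double-buffering with a DMA/move engine (hypothetical API).
# While the GPU processes tile i in fast ESRAM, the move engine
# prefetches tile i+1 from slow DDR3, hiding the transfer behind compute.

def render_tiles(tiles, ddr3, esram, dma, gpu):
    slots = [esram.alloc(), esram.alloc()]          # two ESRAM buffers
    dma.copy_async(ddr3[tiles[0]], slots[0])        # prime the pipeline
    for i, tile in enumerate(tiles):
        cur, nxt = slots[i % 2], slots[(i + 1) % 2]
        if i + 1 < len(tiles):
            dma.copy_async(ddr3[tiles[i + 1]], nxt) # prefetch next tile
        dma.wait(cur)                               # make sure tile i landed
        gpu.process(cur)                            # compute overlaps the copy
```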
 

TheCloser

Banned
So you're saying you know more about the performance of the Xbox One than the product manager at AMD. You are starting to look stupid.

When a project manager makes a statement like that, then I will have to call BS. That statement is 100% wrong. I have said it and will continue to say it: you cannot perform higher than your theoretical maximum. You can argue all you want, but it is a fact and you can take it to the bank. With posts like this, you just continue to prove my point that you have no idea what you are talking about. You don't even know what the role of a product manager is, which is to "investigate, select, and choose products to develop for an organization, performing the activities of product management." His statement is 100% wrong.
 
Indeed. Although IIRC (and it's been a few years now) there was also 512k on the IOP, 256k more on the SPU (sound; no relation to the SPUs in Cell), and some tiny amounts in VU0 and VU1, 4k and 16k? I think the IPU worked out of main memory.

Damn that thing was fun. Fiendish, but fun.

Derp. 2MB on the IOP and 512k (or was it 1MB) on SPU2. Because of course it has to have the same memory pools as the PS1.
 

Jedeye Sniv

Banned
It's not; see, the issue here is that devs won't start to optimize until they have exhausted their memory pool. That is when you start coming up with unique and complex solutions to complex problems. I.e., witness games using 5GB of RAM, or Skyrim on PS3's complex solution to its memory issue.

This is one of the first times I've seen someone talk sense about this stuff since the RAM disparity was announced. We won't see even a fraction of what these machines can really do until 4 or 5 years in. Right now, next-gen games are probably just using the same ideas as current-gen, but with a whole load more resources. It won't be until much later, when devs are used to the systems, that they'll come up with all sorts of clever workarounds and techniques.

GTA4 to GTA5.
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
When a project manager makes a statement like that, then I will have to call BS. That statement is 100% wrong. I have said it and will continue to say it: you cannot perform higher than your theoretical maximum. You can argue all you want, but it is a fact and you can take it to the bank. With posts like this, you just continue to prove my point that you have no idea what you are talking about. You don't even know what the role of a product manager is, which is to "investigate, select, and choose products to develop for an organization, performing the activities of product management." His statement is 100% wrong.

I've worked with project managers at a technical level. Although they might not have a deep understanding of individual components, they do have a good overall technical understanding of the product.

He was also an analyst programmer in a previous job, so he does have a technical background.

And nowhere did Dave state that the Xbox One would operate higher than its theoretical maximum.
 

WolvenOne

Member
Quintessential rule of RAM: there is no such thing as sufficient RAM!

At least not until we have an HDD replacement that can operate at RAM-like speeds. Once we have TB-sized drives with 500+ MB/s transfer rates and low latency, RAM becomes somewhat less important.
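
To put "RAM-like speeds" in perspective, a quick sketch of how long moving 8 GB takes at commonly cited bandwidths (rounded figures; the DDR3/GDDR5 numbers are the widely reported 68 GB/s and 176 GB/s):

```python
# Time to move 8 GB at various (rounded, commonly cited) bandwidths
for name, gb_per_s in [("HDD (~100 MB/s)", 0.1),
                       ("fast SSD (~500 MB/s)", 0.5),
                       ("Xbone DDR3 (~68 GB/s)", 68),
                       ("PS4 GDDR5 (~176 GB/s)", 176)]:
    print(f"{name}: {8 / gb_per_s:8.3f} s")
# ~80 s and ~16 s for storage vs. ~0.1 s or less for RAM: storage is
# still two to three orders of magnitude away from substituting for memory.
```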
 

Madness

Member
512MB was enough when your competitor had 256 + 256...

But 5GB + 32MB of eSRAM isn't enough when your competitor now has 7GB of GDDR5.
 

FINALBOSS

Banned
Seriously, he's right.

http://beyond3d.com/showpost.php?p=1763939&postcount=4710

http://beyond3d.com/showpost.php?p=1763889&postcount=4704

http://beyond3d.com/showpost.php?p=1763153&postcount=4649

Seriously, it's getting sad. The mods at B3D won't ban these idiots, so the forum is basically becoming unreadable right now. Rangers/specialguy/Tyrone is in full-on meltdown mode, grasping at as many straws as possible, and juniors/newly registered posters who post "inside information" get taken seriously over there despite none of them ever turning out to be right.


Also doesn't help that we have SenjutsuSage, aka Rangers 2.0, diving into threads like this one and doing the same kind of shit. SenjutsuSage has claimed "insider information" as well.
 
this is the lightning you get with 8GB GDDR5 RAM..

[image: UuCexBB.png]


(j/k)

I can see this being annoying as hell.
 

JJD

Member
That Dave guy is product manager at AMD for GPUs. Just saying.

So he is from the business side, not the tech side?

Just because he works at AMD doesn't mean he has a deep understanding of the technology.

And color me surprised that an AMD employee is going out of his way to say good things about... an AMD GPU! Lol!

Did you really expect him to say, "Well, we're providing both consoles with our GPUs, but the PS4 solution is absolutely superior in every single way to the Xbone solution"?

Dude is just trying to make the Xbone GPU look good enough, and frankly I agree with him. It will be enough, but the PS4 is the superior console when it comes to specs.
 

bj00rn_

Banned
Well, you are taking this a bit too far, aren't you?
The point is that developers may choose what they see fit for their games, but obviously prebaked lighting vs. real-time has a different impact on engine performance and may prevent them from achieving 60 fps, for example. I think Forza tries to deliver the best visual quality using last-gen standards, and that is probably the right choice this early in the generation, as most people won't care/notice. Evolution, instead, is trying to deliver a next-gen visual experience, but that may end up being a bad choice if they cannot reach 60 fps. Obviously the second approach is what I would expect to become the norm on both PS4 and Xbone, along with native 1080p and hopefully 60 fps where it matters.

We had a pretty funny situation during the transition to DX10, where almost all PC games with DX10 support looked the same as DX9 but performed worse. But at least we could feel better knowing it was there, kinda like a placebo, right..? So yeah, to be fair, that was also the point you mentioned, and it's in line with what I'm trying to say as well. And like you, I also think it will be less baked lighting eventually, where it makes sense; everything comes with a price.
 
When a project manager makes a statement like that, then i will have to call bs. That statement is 100% wrong. I have said it and will continue to say it, you cannot perform higher than your theoretical maximum. You can argue all you want but it is a fact and you can take it to the bank. With posts like this, you just continue to prove my point that you have no idea what you are talking about. You don't even know what the role of a product manager is which is to "investigate, select, and choose products to develop for an organization, performing the activities of product management." His statement is 100% wrong.

Nobody ever suggested that Xbox One would perform above its theoretical maximum. First of all, he was talking about real-life performance in all those cases (someone of your background should be well aware of the fact that theoretical peak performance is not sustainable in real-life situations, since that's the best case scenario), and secondly, he was not talking about isolated performance of the GPU alone, but the whole graphical subsystem as a function of proper ESRAM utilization (again, as an example, you should remember that RSX alone was nothing to write home about, but assisted by SPEs, PS3's entire graphical subsystem was very capable).
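
In other words: sustained throughput is peak throughput times a utilization factor, and better memory plumbing (ESRAM, move engines) can only push that factor toward 1.0, never past it. A toy illustration with made-up utilization numbers:

```python
# Sustained = peak x utilization; utilization can approach but never
# exceed 1.0, so no design tweak beats the theoretical maximum.
def sustained_tflops(peak_tflops, utilization):
    assert 0.0 <= utilization <= 1.0, "cannot exceed theoretical maximum"
    return peak_tflops * utilization

# Purely hypothetical utilization figures, for illustration only:
print(sustained_tflops(1.23, 0.80))   # well-fed smaller GPU  -> ~0.98 TF
print(sustained_tflops(1.84, 0.70))   # poorly fed larger GPU -> ~1.29 TF
```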
 

J-Rzez

Member
For console gamers, it's huge, I can give you that I guess. But 7GB for the entire system isn't particularly impressive.

It should hopefully keep it in a good light for longer than usual, though, judging by how much memory high-end PC games currently use, both in system memory and on the GPU.
 

FINALBOSS

Banned
Nobody ever suggested that Xbox One would perform above its theoretical maximum. First of all, he was talking about real-life performance in all those cases (someone of your background should be well aware of the fact that theoretical peak performance is not sustainable in real-life situations, since that's the best case scenario), and secondly, he was not talking about isolated performance of the GPU alone, but the whole graphical subsystem as a function of proper ESRAM utilization (again, as an example, you should remember that RSX alone was nothing to write home about, but assisted by SPEs, PS3's entire graphical subsystem was very capable).

Who really cares? It's all moot anyways. If his statement carried any sort of water or contained any magical bit of surprise relevance it would have been quoted 9000 times and posted in the press all over the place.
 

Pistolero

Member
Read on B3D that MS has been thinking about going 12 GB (that, and a GPU upclock). Apparently whispered by some insiders. Don't know how realistic it is to modify specs this late, or what potential advantages the 4 additional GB would bring (the GPU is powerful, but not THAT powerful)...
 

nib95

Banned
Read on B3D that MS has been thinking about going 12 GB (that, and a GPU upclock). Apparently whispered by some insiders. Don't know how realistic it is to modify specs this late, or what potential advantages the 4 additional GB would bring (the GPU is powerful, but not THAT powerful)...

It's best to just ignore B3D these days. Unless one of the actual verified insiders or developers makes a post.
 
Read on B3D that MS has been thinking about going 12 GB (that, and a GPU upclock). Apparently whispered by some insiders. Don't know how realistic it is to modify specs this late, or what potential advantages the 4 additional GB would bring (the GPU is powerful, but not THAT powerful)...

Dev kits probably have 12 GB, but they can't increase that willy-nilly; they'd have to redesign the memory interface, motherboard, etc. Unless they doubled the chip densities and made it 16 GB.

A GPU upclock is possible, but could impact yields.
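
The "can't increase it willy-nilly" point follows from how DRAM capacity is built: capacity = chip count × chip density, and the chip count is fixed by the bus width the memory controller was designed for. A rough sketch (the 16-chip, 4 Gbit configuration is an assumption for illustration, not a confirmed board detail):

```python
# DRAM capacity = number of chips x density per chip (in Gbit) / 8
def capacity_gb(chips, gbit_per_chip):
    return chips * gbit_per_chip / 8

print(capacity_gb(16, 4))   # 16 x 4 Gbit DDR3 -> 8 GB  (assumed config)
print(capacity_gb(16, 8))   # 16 x 8 Gbit DDR3 -> 16 GB (denser chips, same bus)
# 12 GB doesn't fall out of a uniform 16-chip layout: it would need
# mixed densities or more chips, i.e. a redesigned memory interface.
```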
 
Nobody ever suggested that Xbox One would perform above its theoretical maximum. First of all, he was talking about real-life performance in all those cases (someone of your background should be well aware of the fact that theoretical peak performance is not sustainable in real-life situations, since that's the best case scenario), and secondly, he was not talking about isolated performance of the GPU alone, but the whole graphical subsystem as a function of proper ESRAM utilization (again, as an example, you should remember that RSX alone was nothing to write home about, but assisted by SPEs, PS3's entire graphical subsystem was very capable).

I'm sorry, but did you just try to draw a parallel between the Cell processor assisting the RSX in graphics rendering and eSRAM?!

WOW!

One is just a tiny cache-like memory pool, and the other is a 200+ GFLOPS CPU. It's pretty obvious the two are not the same in any way, shape, or form.

I would have thought that with your aforementioned electrical engineering background you would have known this?
 

Perkel

Banned
Yes, it does patch the bandwidth, but it, in combination with the move engines, is specifically designed to keep those compute units fed. Anyone who has read the Vgleaks info on the move engines can see that's one of their purposes. If you don't believe they won't achieve that cause in at least some meaningful way over a plain 12 CU GPU, I don't know how to help you.

It's like adding a re-order or branch prediction buffer to a CPU. It's a well accepted method of boosting IPC.

It's still going to vastly underperform compared to the 18 CUs on the PS4, but it won't be a strict 50% boost, 33% reduction and will vary depending on the type of workload.

Move engines are not exclusive to the Xbone; the difference here is that there are more of them than in standard GCN. There are 4 of them, where standard GCN 1.0 has 2 bi-directional DMA engines. Their purpose is to make the best of the available bandwidth, which will probably mean moving things out of the slow DDR3 RAM into the faster ESRAM. The 2 standard DMA engines should also be in the PS4.

More on that in this AMD GCN overview paper:

I personally think the extra DMA engines are simply another patch on their memory problem. Since they use a second pool of memory, and that memory is way faster, they need to move a lot of stuff around, so they upped the number of engines to get the best out of their shitty bandwidth. The Xbone is also supposed to be a proper media center, so the TV, Twitch app, Skype, and the many other things running at the same time require a lot of moving around in memory; those DMA engines are surely handy for such tasks.

Thing is, the PS4 has UMA; there is no second pool of memory, so once you load things into memory they don't need to be moved (because there is no second memory like VRAM in a PC, EDRAM in the PS2, or ESRAM in the Xbone).

Which means UMA, IMO, should be superior to 2 pools of memory plus DMA engines in terms of efficiency.

There are a ton of people more knowledgeable than me, so if something is wrong (in my post) please speak up.
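
The efficiency argument here boils down to copies: with UMA the GPU reads data where it already sits, while a split-pool design spends bandwidth (and move-engine time) staging the working set into the fast pool first. A schematic sketch with hypothetical objects, just to make the contrast concrete:

```python
# Schematic contrast between UMA and split memory pools (not real APIs).

def draw_uma(gpu, gddr5, asset):
    buf = gddr5.load(asset)      # load once into unified memory
    gpu.use(buf)                 # GPU reads it in place, no staging copy

def draw_split(gpu, ddr3, esram, dma, asset):
    big = ddr3.load(asset)       # load into the big, slow pool
    fast = esram.alloc(big.size) # stage into the 32 MB fast pool...
    dma.copy(big, fast)          # ...an extra transfer that costs bandwidth
    gpu.use(fast)
```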
 

Perkel

Banned
B3D needs a cleanup, Bish would do the trick.

I think they should profile themselves, or maybe, like GAF, close the doors to juniors until they prove their knowledge. A quick, simple programming question would make a good junior test.

Or make B3D two-sided: one open forum, and one closed forum for proven elite programmers.
Right now I see a ton of people registering accounts there just for the sake of the fanboy war, linking to their own posts as if they represent B3D.
 

Snubbers

Member
It's best to just ignore B3D these days. Unless one of the actual verified insiders or developers makes a post.

I still frequent B3D. I don't see the same massive bias as some people suggest; if we cherry-picked posts from here, you could make this place look like a zoo in 30 seconds. The fact is, as you say, you can ignore the interlopers and just look at what the more trusted devs/insiders say; they give far more impartial insight into things.

I think people should look inward before chastising them. The quickness here to jump on internet rumours with both feet seems very prevalent, and the number of threads that get created and bandwagons that start rolling, only for the truth to turn out pretty mundane, isn't doing this place any favours.

I like listening to ERP and Sebbbi, who are very impartial (considering one works for Sony, I believe), and they at least cast some insight into the technical side of things.
 

mrklaw

MrArseFace
Dev kits probably have 12 GB, but they just can't increase that willy nilly. They'd have to redesign the memory interface, motherboard, etc. Unless they doubled the chip densities and made it 16 GB.

GPU upclock is possible, but could impact yields.

They could release the devkit spec. Isn't that similar to what happened with the 360 and its last-minute bump in RAM? They went with the devkit motherboard or something?
 

TheCloser

Banned
Nobody ever suggested that Xbox One would perform above its theoretical maximum. First of all, he was talking about real-life performance in all those cases (someone of your background should be well aware of the fact that theoretical peak performance is not sustainable in real-life situations, since that's the best case scenario), and secondly, he was not talking about isolated performance of the GPU alone, but the whole graphical subsystem as a function of proper ESRAM utilization (again, as an example, you should remember that RSX alone was nothing to write home about, but assisted by SPEs, PS3's entire graphical subsystem was very capable).


Again, the reason I posted what I posted was because he said the whole system could outperform a GPU with a theoretical maximum of 1.79 TFLOPS, which is a whole load of rubbish. If the same code were written specifically for the 1.79 TF GPU to take advantage of its hardware, there is no way the Xbox One could outperform it. Yes, let's leave theoretical maximums alone: all things equal, if the GPU in the Xbox One were switched out for a GPU with 1.79 TF, it would easily outperform the GPU currently in the Xbox One. The whole point of the design is to allow the Xbox One to get as close to its theoretical maximum as possible. The design was well thought out, but unfortunately that design is vastly inferior to that of the PS4. The reason people refer to the PS4 as a super charged computer is not because of its brute force (not even as strong as a 680 in theory) but because of the steps that have been taken to revolutionize the way console hardware is designed. Its design is simply superior to PCs', but the hardware is not.
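
For reference, the theoretical maximum under dispute falls straight out of the shader configuration: peak FLOPS = CUs × 64 lanes × 2 FLOPs per lane per clock (one fused multiply-add) × clock speed. A quick sketch using the rumored 800 MHz clocks of the time (assumptions, not confirmed specs):

```python
# GCN theoretical peak: CUs x 64 SPs x 2 FLOPs (FMA) x clock
def peak_tflops(cus, clock_mhz):
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

print(peak_tflops(12, 800))  # 12 CUs -> ~1.23 TF (rumored Xbone config)
print(peak_tflops(18, 800))  # 18 CUs -> ~1.84 TF (rumored PS4 config)
# No amount of ESRAM or move-engine cleverness changes these ceilings;
# the design argument is about how close real code can get to them.
```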
 
Move engines are not exclusive to the Xbone; the difference here is that there are more of them than in standard GCN. There are 4 of them, where standard GCN 1.0 has 2 bi-directional DMA engines. Their purpose is to make the best of the available bandwidth, which will probably mean moving things out of the slow DDR3 RAM into the faster ESRAM. The 2 standard DMA engines should also be in the PS4.

More on that in this AMD GCN overview paper:

I personally think the extra DMA engines are simply another patch on their memory problem. Since they use a second pool of memory, and that memory is way faster, they need to move a lot of stuff around, so they upped the number of engines to get the best out of their shitty bandwidth. The Xbone is also supposed to be a proper media center, so the TV, Twitch app, Skype, and the many other things running at the same time require a lot of moving around in memory; those DMA engines are surely handy for such tasks.

Thing is, the PS4 has UMA; there is no second pool of memory, so once you load things into memory they don't need to be moved (because there is no second memory like VRAM in a PC, EDRAM in the PS2, or ESRAM in the Xbone).

Which means UMA, IMO, should be superior to 2 pools of memory plus DMA engines in terms of efficiency.

There are a ton of people more knowledgeable than me, so if something is wrong (in my post) please speak up.

Both the Xbone and PS4 have 4 ACEs rather than 2; it's a feature of GCN 2.0 they both share. The DMEs are different: they're an interface to the ESRAM that is unique to the Xbone. ACEs are dispatch units; the DMEs function more like a prefetch.
 

avaya

Member
With HMC, the bandwidth feeding mobile SoCs will far outstrip what GDDR5 delivers; 2013 is really bad timing for next-gen. We are on the cusp of a paradigm shift on the semiconductor side: performance is not going to increment but explode across the entire range of TDPs. Intel finally getting serious about mobile will aid that ramp.

I think this will have to be a 5-year cycle this time around, since mobile will start to give comparable performance within 3-4 years.
 

Snubbers

Member
Again, the reason I posted what I posted was because he said the whole system could outperform a GPU with a theoretical maximum of 1.79 TFLOPS, which is a whole load of rubbish. If the same code were written specifically for the 1.79 TF GPU to take advantage of its hardware, there is no way the Xbox One could outperform it. Yes, let's leave theoretical maximums alone: all things equal, if the GPU in the Xbox One were switched out for a GPU with 1.79 TF, it would easily outperform the GPU currently in the Xbox One. The whole point of the design is to allow the Xbox One to get as close to its theoretical maximum as possible. The design was well thought out, but unfortunately that design is vastly inferior to that of the PS4. The reason people refer to the PS4 as a super computer is not because of its brute force (not even as strong as a 680 in theory) but because of the steps that have been taken to revolutionize the way console hardware is designed. Its design is simply superior to PCs', but the hardware is not.

If anyone refers to the PS4 as a supercomputer, they clearly have questionable logic.
Cerny went to great lengths to show how they went with "time to triangle" and a more straightforward architecture, with small (but effective) tweaks here and there to improve efficiency. He even mentioned eDRAM as a more exotic approach, but one that would take developers too long to get to grips with.

I massively respect his effectively-KISS approach to the design, and I hope it bears real fruit, as the PS3 to me was the shining example of an attempt at being a supercomputer falling at the first hurdle.
 