
Microsoft Xbox Series X's AMD Architecture Deep Dive at Hot Chips 2020

What the Xbox guys said at Hot Chips is that, basically, the 10 GB are fine for the GPU; they expect the other 6 GB to be used for the OS and CPU-side work.
What's new is that they said the SSD will load data into that part of memory for the CPU to decompress and evaluate, and the GPU has direct snoop access to it.
This was offered as an explanation of how their various new three-letter-acronym technologies will work.
 
What the Xbox guys said at Hot Chips is that, basically, the 10 GB are fine for the GPU; they expect the other 6 GB to be used for the OS and CPU-side work.
What's new is that they said the SSD will load data into that part of memory for the CPU to decompress and evaluate, and the GPU has direct snoop access to it.
I thought the system could tap into the lower-clocked 6 GB for games as well if needed. So in reality it's more like 12-13 GB for games, just that some of it is slower.
 
Sure. How about this:



It's obvious you are a toddler-level PS fanboy. And you seriously don't know WTF you are talking about. Just cut bait, back away, and continue to jerk off to pictures of Saint Cerny until the PS5 is lying on the couch next to you this holiday.
I'm the fanboy, right?
 
I own all three.....
I am just responding to absolutely absurd posts.
No you aren't. You're being a douche. This is obvious from your statement about him enjoying his Xbox while you play your higher-rated, better-visuals... blah, blah, blah. You guys are so fucking transparent that it's just sad when you act like your motivations are different from what you're putting out there.
 
I spend more time playing Tetris on my PS4 (or shit......Mario Golf on my 2DS) than I do on my Xbox OR PC.

I just callz it like I seez it. And, well........¯\_(ツ)_/¯


Welcome to Sony/SsdGAF or whatever.
And I was playing the hell out of Ark on my PS4 and Gears 5 on my X1S till a freaking storm royally fucked me with an electric surge. Now I'm playing Animal Crossing with the little ones until the new consoles drop.
 
No you aren't. You're being a douche. This is obvious from your statement about him enjoying his Xbox while you play your higher-rated, better-visuals... blah, blah, blah. You guys are so fucking transparent that it's just sad when you act like your motivations are different from what you're putting out there.
I have a quarter the posts you do but almost the same reaction score, yet I'm the one with motivations.
 
Kinda ironic coming from a guy backing a system that spent 7 years giving him a whole lot of nothing.

I guess that 18% GPU advantage will make up for a dozen AAA games including a few GOTY contenders lol.
Way to stay on topic. When you guys can't make any sound argument about the matter at hand, you diverge into the irrelevant.

Lazy shitposting nonsense.
 
Way to stay on topic. When you guys can't make any sound argument about the matter at hand, you diverge into the irrelevant.

Lazy shitposting nonsense.
Nonsense, when you spent 7 years starved for exclusives? I wonder if you were barking like that when the PS4 was crushing the Xbox One by like 50% while delivering far superior games.

This fanboy hypocrisy is really amusing.
 
Nonsense, when you spent 7 years starved for exclusives? I wonder if you were barking like that when the PS4 was crushing the Xbox One by like 50% while delivering far superior games.

This fanboy hypocrisy is really amusing.
We're talking about hardware here, keep it up.
 

GODbody

Member
Apologies for not reading the entire thread first, but has the discussion about the asymmetric memory been settled?

From the slides I noticed that the word "interleaved" is used in regard to the high (10 GB) and low (6 GB) memory accesses, and I noticed the two SoC coherency fabric MC (memory controller?) blocks on the APU diagram, each with 10 × 16-bit channels, which definitely gives the 320-bit bus in an odd configuration (IMHO).

In the next-gen thread, a few months back at the XsX DF specs reveal, someone from Era was quoted on memory contention on the XsX, saying that accessing the slower memory costs memory performance. Despite it making perfect sense (to me at least), the issue was never settled AFAIK, with constant back-and-forth over whether the unified 448 GB/s was in real terms superior to the XsX setup.

In a conversation back then I made the analogy of a weird RAID setup, and argued that data would need to be copied from the CPU's 6 GB to the 10 GB before use, because the interleaving would differ between the 10 GB and 6 GB regions and the GPU in all likelihood needs packed/strided data with the same interleaving (IMHO). Until it was confirmed that the memory accesses were interleaved, I couldn't be sure, but with the slide saying "interleaved" I thought it was worth a rerun, in case someone else has a different take on the info.

I made a post about this actually.

We do know the details of the arrangement. It was posted earlier in the thread.

[Image: diagram of the Series X memory chip arrangement]


It's unified memory with asymmetrical chips. There are 10 memory chips in total: four 1 GB chips and six 2 GB chips, across twenty 16-bit channels.

Data in the GPU-optimal space can span the first 1 GB of all ten chips, giving a speed of 560 GB/s (560 ÷ 10 = 56 GB/s per chip).

Data in the standard memory space sits only in the 2 GB chips; it can span only the second GB of those six chips, giving a speed of 336 GB/s (336 ÷ 6 = 56 GB/s per chip).

[Image: memory pool bandwidth breakdown]


All of the chips have the same underlying GDDR6 speed (56 GB/s).
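To make that arithmetic concrete, here's a minimal sketch (Python, illustrative only; the chip counts and the 56 GB/s per-chip figure are taken from the post above) deriving both pool speeds from the chip layout:

```python
# Series X memory layout per the Hot Chips slides:
# ten GDDR6 chips on a 320-bit bus (twenty 16-bit channels),
# each chip contributing 56 GB/s.
PER_CHIP_GBPS = 56

chips_gb = [1] * 4 + [2] * 6  # four 1 GB chips, six 2 GB chips

# GPU-optimal pool: the first GB of every chip -> all ten chips in parallel.
gpu_optimal_bw = len(chips_gb) * PER_CHIP_GBPS  # 10 * 56 = 560 GB/s

# Standard pool: the second GB exists only on the 2 GB chips -> six chips.
standard_bw = sum(1 for c in chips_gb if c == 2) * PER_CHIP_GBPS  # 6 * 56 = 336 GB/s

print(f"GPU-optimal pool: 10 GB @ {gpu_optimal_bw} GB/s")
print(f"Standard pool:     6 GB @ {standard_bw} GB/s")
```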
 
It already exists.
Sorry. I am pretty new here. Thanks for that link tho. I’ll be using it.
 

Journey

Banned
Some Xbox fans are like flat-earthers: even when slapped with evidence, they'll deny it and scream conspiracy. Please proceed to comment, so that I can continue growing my ignore list.


Are you referring to actual science proving that a 12 TF console with basically the same RDNA 2 architecture will be more capable than one that only provides 10 TF at its peak? Especially when it has custom dedicated ray tracing hardware that the other does not?
 
Yes, and the Series X has an 18% advantage in compute performance. Big whoop lol.
But it doesn't; we already know that's a lowball figure.

First of all, bandwidth is a huge differentiator, especially when it comes to rendering performance. It's the reason the Xbox One X ran games at resolutions on average 70-80% higher than the PlayStation 4 Pro while having only a 43% more capable GPU. With the PlayStation 5 and Series X we're again met with a large divergence in the memory bandwidth afforded to the GPU.

Secondly, there's what I said about frequency not closing the gap with CUs; it's just a fact of reality that it does not. A GPU with a lower CU count, overclocked to reach the stated teraflop figure of a like-minded GPU with more CUs at a lower frequency, still loses to the obvious workload advantage of the higher-CU-count GPU. Bear in mind that's with both GPUs hitting the same teraflop target; between the Series X and the PlayStation 5 there is a 1.875-teraflop disparity, so it's plain to see how that pans out when extrapolated.

[Image: GPU benchmark comparison chart]
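For reference on where the teraflop figures behind that disparity come from, here's a quick sketch using the standard peak-FP32 formula and the publicly stated CU counts and clocks (the per-CU constants, 64 shaders per CU and 2 FLOPs per shader per clock, are standard for RDNA-family GPUs):

```python
def tflops(cus, clock_ghz):
    """Peak FP32: CUs x 64 shaders/CU x 2 FLOPs per shader per clock."""
    return cus * 64 * 2 * clock_ghz / 1000  # GFLOPS -> TFLOPS

series_x = tflops(52, 1.825)  # 52 CUs at a fixed 1825 MHz -> ~12.15 TF
ps5      = tflops(36, 2.23)   # 36 CUs at up to 2230 MHz   -> ~10.28 TF

print(f"Disparity: {series_x - ps5:.2f} TF "
      f"({(series_x / ps5 - 1) * 100:.0f}% advantage at PS5 peak clock)")
```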


Third, we come down to the architectures. The cat is out of the bag on the PlayStation 5 being a hybrid GPU implementation, meaning its GPU carries the efficiencies of the 7nm RDNA architecture. The Series X is quite clearly an RDNA 2 GPU given its specifications, and with that comes the efficiency uplift of being natively of that architecture and on 7nm+. What that uplift is has yet to be revealed, but it's not going to be 0%, which widens the gap further.

Fourth, we come to the variability factor of the PlayStation 5: its frequency is not fixed; it deviates based on the power budget of whatever it's processing/rendering. That "18%" is based on the PlayStation 5 at peak boost frequency, but we all know that won't always hold, especially in rendering-heavy games where power demands spike dramatically.

So at the end of the day that 18% is not actually 18%. What it actually averages out to remains to be seen, but it's going to be a decent, substantive margin above that.
 
I thought the system could tap into the lower-clocked 6 GB for games as well if needed. So in reality it's more like 12-13 GB for games, just that some of it is slower.
All memory (except what is reserved for the OS) is available. Microsoft explained the logic of their design; read it again.
And "games" also need memory for CPU things, AI, etc. We don't know yet how much will be reserved by the OS.
 

MarkMe2525

Member
I don't.
But practically EVERY thread is ruined by a bunch of folks (the same folks) rushing in to damage control.
Here are the facts:

Xbox Series X is going to be the better machine,
exactly like the Xbox One X was the better machine compared to the PS4 Pro,
exactly like the PS4 was the better machine compared to the Xbox One,
exactly like the 360 was the better machine compared to the PS3, etc.

And the absolute difference between them will be the biggest power gap consoles have ever seen.

That doesn't mean the PlayStation 5 sucks, just that the Xbox is the better machine.


As I said, since I already know I'll be using the Xbox as my main console,
the only thing I care about is seeing a PS5 teardown and hands-on,
because I don't want it to be a frying pan like my PS4s were (and are),
and the choices Sony made to appear competitive or "equal" are worrisome.
Dude... you're going to start a 360 vs PS3 war in here with talk like that!! Those get ugly.
 

Bo_Hazem

Banned
It already exists.

Tried to lift it up, no luck. It'll actually make most fans get along and laugh together, but they seem serious about it. :lollipop_tears_of_joy:
 
But it doesn't; we already know that's a lowball figure.
Even highballing to 33%, that's still nothing.

First of all, bandwidth is a huge differentiator, especially when it comes to rendering performance. It's the reason the Xbox One X ran games at resolutions on average 70-80% higher than the PlayStation 4 Pro while having only a 43% more capable GPU. With the PlayStation 5 and Series X we're again met with a large divergence in the memory bandwidth afforded to the GPU.
It's a huge differentiator at 4K in the case of the PS4 Pro because it chokes at 4K. The 2080 has the same memory bandwidth as the 5700 XT (448 GB/s), yet it doesn't get mollywhopped by the 2080 Ti, whose bandwidth is 37.5% higher (616 GB/s). Cuz they don't choke at 4K.

Secondly, there's what I said about frequency not closing the gap with CUs; it's just a fact of reality that it does not. A GPU with a lower CU count, overclocked to reach the stated teraflop figure of a like-minded GPU with more CUs at a lower frequency, still loses to the obvious workload advantage of the higher-CU-count GPU. Bear in mind that's with both GPUs hitting the same teraflop target; between the Series X and the PlayStation 5 there is a 1.875-teraflop disparity, so it's plain to see how that pans out when extrapolated.
The higher you push a GPU above its specified clocks, the smaller the return you get as you approach the max (parallelization and all that jazz), and since nobody knows how high RDNA 2 chips can go, let alone how that translates to consoles, this comparison is pointless.

Third, we come down to the architectures. The cat is out of the bag on the PlayStation 5 being a hybrid GPU implementation, meaning its GPU carries the efficiencies of the 7nm RDNA architecture. The Series X is quite clearly an RDNA 2 GPU given its specifications, and with that comes the efficiency uplift of being natively of that architecture and on 7nm+. What that uplift is has yet to be revealed, but it's not going to be 0%, which widens the gap further.
That's a simple case of you hearing what you want to hear. The cat isn't out of the bag. Mark Cerny and Lisa Su plainly referred to it as RDNA 2. The engineer said something along the lines of it being a mixture of both. Who is right remains to be seen. Don't act like this guy is the be-all, end-all of what we know about the PS5 and ignore what was said beforehand. That's confirmation bias at its fullest.

Fourth, we come to the variability factor of the PlayStation 5: its frequency is not fixed; it deviates based on the power budget of whatever it's processing/rendering. That "18%" is based on the PlayStation 5 at peak boost frequency, but we all know that won't always hold, especially in rendering-heavy games where power demands spike dramatically.

So at the end of the day that 18% is not actually 18%. What it actually averages out to remains to be seen, but it's going to be a decent, substantive margin above that.
So that 18% might be 20% or 25%; either way, not a difference that will put one far above the other. They'll both run at 30/60 fps, and the SX will have an advantage in resolution and perhaps slightly better graphics. You're basically swapping a 2070 for a 2080.
 
I will definitely pass that on to the rest of the team.

Maybe a group dedicated to console warring would be better? That way, if there are threads that get heated due to console warring, they can be moved to that group.

Just a suggestion, because a single thread can get a ton of different topics mixed in.

Edit: Also, another benefit to this is that the mods could ban the console warriors from the gaming discussion forum and just let them continue their battles in the console wars one.
 
Edit: Also, another benefit to this is that the mods could ban the console warriors from the gaming discussion forum and just let them continue their battles in the console wars one.
No, just ban them to that single topic for a month!
 
Sounds like a pretty good thing to keep for the PS5. Anyone have any idea if the PS5 kept this?

Possibly? If they are relying on hardware BC, I assume they will need to keep this level of functionality and ensure the PS5 can provide it. I'm wondering if they will have a software solution for letting the GPU snoop the CPU cache; the Series X allows its GPU to snoop the CPU cache through hardware support, but the inverse requires a software solution.

Then again, I'm wondering if that's in fact something the PS5 doesn't have, which could be a reason they added the cache scrubbers to the GPU.

Apologies for not reading the entire thread first, but has the discussion about the asymmetric memory been settled?

From the slides I noticed that the word "interleaved" is used in regard to the high (10 GB) and low (6 GB) memory accesses, and I noticed the two SoC coherency fabric MC (memory controller?) blocks on the APU diagram, each with 10 × 16-bit channels, which definitely gives the 320-bit bus in an odd configuration (IMHO).

In the next-gen thread, a few months back at the XsX DF specs reveal, someone from Era was quoted on memory contention on the XsX, saying that accessing the slower memory costs memory performance. Despite it making perfect sense (to me at least), the issue was never settled AFAIK, with constant back-and-forth over whether the unified 448 GB/s was in real terms superior to the XsX setup.

In a conversation back then I made the analogy of a weird RAID setup, and argued that data would need to be copied from the CPU's 6 GB to the 10 GB before use, because the interleaving would differ between the 10 GB and 6 GB regions and the GPU in all likelihood needs packed/strided data with the same interleaving (IMHO). Until it was confirmed that the memory accesses were interleaved, I couldn't be sure, but with the slide saying "interleaved" I thought it was worth a rerun, in case someone else has a different take on the info.

I know the Era user you are talking about, Lady Gaia, and I remember more or less what they were saying. I did notice the SoC coherency fabric MCs in the diagram but wasn't sure what to make of them (though it sounds like they are memory controllers, as you say). However, I could've sworn MS listed their GDDR6 as 20-channel, not 16-channel.

Aside from that, I actually don't think what Lady Gaia (or you here, for that matter) described is at play in MS's setup, because the "coherency" in SoC Coherency Fabric MC gives it away, at least IMHO. Coherency in the MCs would mean that consistency of data accesses, modifies/writes, etc. between chips handled through the two controllers is maintained by the system automatically, without requiring shadow masking/copying of data from one pool (the fast one) to the other (the slower one), or vice versa. Otherwise, absent that coherency, copying would probably be a requirement, as we see on PCs (which, FWIW, are by and large NUMA systems, rather than hUMA like the consoles will be).

There is probably still some very slight penalty in switching from the faster pool to the slower one, but I'm guessing that's only a few cycles lost. There was some stuff from a rather ridiculous blog I came across, written back in March, that tried insinuating the Series X would pull less bandwidth than the PS4 in real-world situations due to the memory interleaving; quite surely no console designer would choose a solution with THAT massive a downside for a next-gen system, so it was easy to write that idea off. Lady Gaia's take makes a bit more sense, but even there I think they overshoot the penalty cost.

Additionally, we should consider that games won't spend equal amounts of time in the faster and slower memory pools. That's the main reason I tend to write off anything trying to average out the bandwidth between the pools: it's a neat metric for theoretical discussion, but it makes no sense as a realistic use-case figure, because most games will spend the vast majority of their time on GPU-bound operations. There are probably also things MS has implemented for managing memory between the two pools, handled through API tools that haven't been disclosed (though I hope they disclose them at some point), that simplify things a lot for devs and take advantage of the fact that the system is still, by all accounts, a hUMA design.
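To illustrate the interleaving point, here's a toy model only (MS hasn't disclosed its actual address-mapping scheme; the stripe size and round-robin mapping are assumptions) of how a physical address could map to a chip in each pool:

```python
# Toy model of interleaved addressing across the asymmetric chips.
# Assumed (not confirmed): 256-byte stripes, round-robin over the chips
# that back each pool.
STRIPE = 256
GIB = 1024 ** 3

def locate(addr):
    """Map a physical address to (pool, chip index) under the toy scheme."""
    if addr < 10 * GIB:
        # GPU-optimal pool: striped over the first GB of all ten chips.
        return "gpu-optimal", (addr // STRIPE) % 10
    # Standard pool: striped over the second GB of the six 2 GB chips only.
    return "standard", ((addr - 10 * GIB) // STRIPE) % 6

print(locate(0))                  # ('gpu-optimal', 0)
print(locate(10 * GIB + STRIPE))  # ('standard', 1)
```

Under this model, a sequential burst in the fast region touches all ten chips (560 GB/s aggregate) while the same burst in the slow region touches only six (336 GB/s), with no copying between pools needed if the controllers keep accesses coherent, as the slide's wording suggests.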
 

MrFunSocks

Banned
Can someone who's concerned about the state of Xbox please post more Craig memes while pretending not to be a Sony fanboy? I feel like it's been a whole page since one of the usual suspects did that, and I'm worried for them.

This deep dive has got them shaken. I think the PS5 is now up to 3 GPUs according to them, thanks to the hidden ones in the SSD and Tempest.
 
Can someone who's concerned about the state of Xbox please post more Craig memes while pretending not to be a Sony fanboy? I feel like it's been a whole page since one of the usual suspects did that, and I'm worried for them.

This deep dive has got them shaken. I think the PS5 is now up to 3 GPUs according to them, thanks to the hidden ones in the SSD and Tempest.
Quit baiting people... it doesn't help.
 

GODbody

Member
Can someone who's concerned about the state of Xbox please post more Craig memes while pretending not to be a Sony fanboy? I feel like it's been a whole page since one of the usual suspects did that, and I'm worried for them.

This deep dive has got them shaken. I think the PS5 is now up to 3 GPUs according to them, thanks to the hidden ones in the SSD and Tempest.
Team Blue isn't shaken; there's literally nothing else for them to talk about, and that's when the Schadenfreude comes out. Sony hasn't been pumping out news at anywhere near the rate MS has. The front page is constantly littered with Xbox threads. There's really no outlet for new PS5 tech discussion unless you want to comment in one of the DualSense threads.
 