
Full details of the Scorpio engine in Xbox one X from Hotchips conference 2017

Colbert

Banned
(1) Take a look at golem.de (https://www.golem.de/news/scorpio-e...soc-der-xbox-one-x-1708-129610.html). (2) But anyway - it's not the interface that is the limit here.

(1) Thank you.

(2) 3 Gbps vs. 5 Gbps (USB3) or 6 Gbps (SATA3). I think there is a difference. Why else would MS recommend using external HDDs for faster loading? But anyway, it is what it is. Nothing crucial, but nonetheless a missed opportunity imo.

Only thing I found so far is a pretty picture from a German site. Nothing on Hot Chips that I can see.

Thank you too.
 

amardilo

Member
Is the HDMI In the same as the Xbox One S? Does that mean you can't pass-through a 4K set top box through the console?

I was planning on getting a 4K set top box (sometime next year) to replace my current set top box which goes through my Xbox One but it seems like I might not be able to do that.
 

Colbert

Banned
Is the HDMI In the same as the Xbox One S? Does that mean you can't pass-through a 4K set top box through the console?

I was planning on getting a 4K set top box (sometime next year) to replace my current set top box which goes through my Xbox One but it seems like I might not be able to do that.

As already answered earlier in this thread: the HDMI in is version 1.4, the same as on the Xbox One S. No UHD passthrough.

Microsoft-Xbox-One-X-Scorpio-Engine-Hot-Chips-29-07.png
 

c0de

Member
(2) 3 Gbps vs. 5 Gbps (USB3) or 6 Gbps (SATA3). I think there is a difference. Why else would MS recommend using external HDDs for faster loading? But anyway, it is what it is. Nothing crucial, but nonetheless a missed opportunity imo.

The HDD is limiting it anyway.
 

ethomaz

Banned
Can you swap internal HDD in Scorpio?

If not, then SATA3 is a pretty useless feature that will only drive up costs. SATA3 will only give you an advantage if you can put an SSD in the Scorpio.

This was a smart decision.
 

c0de

Member
The sata-3 didn't really seem to help ps4 pro, the bottleneck is elsewhere

The Pro has better load times because of the better CPU. SSDs are fast because of their access times, not so much their sometimes brutal bandwidth. It also largely depends on how the files are organized for a game, but even if you assume a 50 GB game and a sustained read of 200 MB/s, you end up at a bit over four minutes in total just to transfer the data, which of course does not happen in practice because loading is broken down into many far smaller reads.
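For anyone who wants to sanity-check that transfer-time math, here's a minimal back-of-envelope sketch (assuming the 50 GB / 200 MB/s figures above; seeks, decompression and CPU work are ignored):

```python
# Back-of-envelope load-time estimate: pure sequential transfer only,
# ignoring seeks, decompression and any CPU-side work.
game_size_gb = 50     # assumed install size
read_mb_s = 200       # assumed sustained sequential read speed

seconds = game_size_gb * 1024 / read_mb_s
print(f"{seconds:.0f} s (~{seconds / 60:.1f} min) just to move the bytes")
# -> 256 s (~4.3 min); real loads read only a fraction of the install,
#    scattered across many smaller files, so this is an upper bound.
```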
 
Looks like they doubled the number of ACEs compared with XB1... it is still half of PS4/Pro.
1 HWS (Polaris) = 2 old ACEs (GCN 1.1)

PS4/Pro has 8 compute processors.
XB1 has 2 compute processors.
XB1X has 4 compute processors.

That is what AMD calls an ACE.

If you add the graphic (or command processor) you will have: 8x2, 2x2 and 4x2 respectively.

It is doubled from XB1 but half PS4/Pro.
AFAIK, PS4 Pro Polaris GPU also has 4 ACEs and 2 HWS.

Hypervisor is nothing unique or bad. The PS4 uses a Hypervisor as well.
Source?

I remember PS3 having a Hypervisor (GameOS/OtherOS), but Sony has never said anything about the PS4.

ps: Having a hypervisor is probably a good thing for BC.

285 GB/s just for the GPU is really good.
285GB/s just for the GPU or for the APU as a whole?

Does that mean that the CPU has 41GB/s of RAM bandwidth?

I also wonder if there's any performance bottleneck when both CPU and GPU access RAM simultaneously (kinda like in the PS4).

We still haven't seen the PS4 Pro die, but I expect it to look the same: two GPU arrays next to each other, just like this.

img_20170821_093005_575px.jpg
The fact that both companies had to disable half the GPU for BC indicates that they have something in common due to AMD/Polaris hardware.

And yes, that die schematic looks like a "butterfly" GPU.

So Scorpio will only use a SATA2 interface for the HDD. BUT: SATA2 is completely fine - looking at load times, I expect most of the savings to come from the CPU decompressing data.
Mechanical HDDs barely max out SATA1, let alone SATA2.

(2) 3 Gbps vs. 5 Gbps (USB3) or 6 Gbps (SATA3). I think there is a difference. Why else would MS recommend using external HDDs for faster loading? But anyway, it is what it is. Nothing crucial, but nonetheless a missed opportunity imo.
Because external HDDs have more dense platters and thus higher transfer rates. It's not a matter of interface (USB vs SATA).
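To put rough numbers on that point (the interface figures are nominal line rates; the drive speeds are assumptions for illustration, not measurements of the actual console drives):

```python
# Interface ceilings vs. typical sustained HDD reads (all MB/s).
# Drive numbers are assumptions for illustration only.
interface_limit = {"SATA1": 150, "SATA2": 300, "SATA3": 600}
drive_sustained = {
    '2.5" 5400 rpm internal (assumed)': 100,
    '3.5" 7200 rpm external (assumed)': 180,
}

for drive, mb_s in drive_sustained.items():
    for iface, cap in interface_limit.items():
        limited_by = "drive" if mb_s < cap else "interface"
        print(f"{drive} over {iface}: limited by the {limited_by}")
# Only the 7200 rpm drive on SATA1 brushes the interface limit; on SATA2
# and up, the platters (not the bus) set the speed, which is why denser,
# faster external disks load quicker regardless of USB vs. SATA.
```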

Can you swap internal HDD in Scorpio?

If not, then SATA3 is a pretty useless feature that will only drive up costs. SATA3 will only give you an advantage if you can put an SSD in the Scorpio.

This was a smart decision.
No, you can't and SATA3 isn't that expensive either. Still useless for HDDs tho.
 
I know nothing about this so I'm going to ask some simple Q's.

With the reduced latency, is that from the cache?

I think there are two ways they could have improved it: making the execution pipelines shorter, or making the memory controller faster. No idea what they actually did, but I suspect it was something with the memory controllers, because back at the Xbone reveal they mentioned that one of the reasons for going with DDR3 was CPU latency... It's kind of impressive to move to another memory type, supposedly one that adds latency, and still manage to reduce the overall latency.
 

arhra

Member
(1) Thank you.

(2) 3 Gbps vs. 5 Gbps (USB3) or 6 Gbps (SATA3). I think there is a difference. Why else would MS recommend using external HDDs for faster loading?
The biggest benefit, I believe, simply comes from the reduction in IO conflicts - not having to schedule game loading around OS accesses and suchlike (and all the additional seeking such things would require).
 

c0de

Member
The biggest benefit, I believe, simply comes from the reduction in IO conflicts - not having to schedule game loading around OS accesses and suchlike (and all the additional seeking such things would require).

This is also true. It was confirmed here by Greenberg, I think, and I said this several times before that. These things are more and more like ordinary computers and thus face the same problems.
 
One X was falsely reported to feature HDMI 2.1 by Digital Foundry at first. It turned out to be support for FreeSync. Only select PC monitors support that standard.

I'm quite stunned. Never heard this. I'm on the fence of buying a new TV to play games. Decided to wait on HDMI 2.1 because of Xbox One X. But... there is no point in waiting now?
 

LCGeek

formerly sane
So the CPU gains are:

- 30% more clock (likely most of the new performance gains)
- up to 20% less latency (but no info on how that translates into real world performance)
- Code translation for the virtualized environment runs 4.5+% faster than on the regular Xbone; apparently that practically negates the virtualization cost.



That's why they are able to deliver complete compatibility, plus it's safer, and with the CPU customizations it can run virtualized code at almost native speed, so it's even better.

There is also CPU offloading, which is a first for Windows to my knowledge. If it works out, hopefully the PC space gets to use it too.
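For what it's worth, the ~30% clock figure in the quoted list lines up with the published CPU clocks (1.75 GHz on the original Xbox One vs. 2.3 GHz on the One X); a quick check:

```python
# Sanity check of the "30% more clock" figure using the published CPU clocks.
xb1_cpu_ghz = 1.75    # original Xbox One / One S
xb1x_cpu_ghz = 2.30   # Xbox One X

gain = xb1x_cpu_ghz / xb1_cpu_ghz - 1
print(f"CPU clock gain: {gain:.1%}")   # -> 31.4%
```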
 

Bsigg12

Member
I'm quite stunned. Never heard this. I'm on the fence of buying a new TV to play games. Decided to wait on HDMI 2.1 because of Xbox One X. But... there is no point in waiting now?

I don't believe testing and cert for 2.1 has been released yet but since they know the specification, they could always build it with the specs in mind.
 
I don't believe testing and cert for 2.1 has been released yet but since they know the specification, they could always build it with the specs in mind.

Okay, thank you 👍🏻 But that was what I was wondering: I don't need a new TV because the Xbox One X and PS4 don't take advantage of HDMI 2.1, right?
 
I'm quite stunned. Never heard this. I'm on the fence of buying a new TV to play games. Decided to wait on HDMI 2.1 because of Xbox One X. But... there is no point in waiting now?

I don't believe testing and cert for 2.1 has been released yet but since they know the specification, they could always build it with the specs in mind.
Xbox One X is not built to take advantage of HDMI 2.1 features

However, it's possible HDMI 2.1 TVs will support Freesync 2 next year

Okay, thank you 👍🏻 But that was what I was wondering: I don't need a new TV because the Xbox One X and PS4 don't take advantage of HDMI 2.1, right?
What kind of TV do you have currently?
 

Caayn

Member
Xbox One X is not built to take advantage of HDMI 2.1 features

However, it's possible HDMI 2.1 TVs will support Freesync 2 next year
Let's say the XB1X did have HDMI 2.1: it would mean it could output HDR content at 2160p at 60Hz without needing chroma subsampling to stay within the bandwidth constraints of HDMI 2.0. Even then it most likely would never take advantage of the >2160p and >60Hz support, though; that I agree with.
 
Let's say the XB1X did have HDMI 2.1: it would mean it could output HDR content at 2160p at 60Hz without needing chroma subsampling to stay within the bandwidth constraints of HDMI 2.0. Even then it most likely would never take advantage of the >2160p and >60Hz support, though; that I agree with.

I wonder what will happen with this; it would be pretty terrible to have a 4K HDR-capable console using chroma subsampling. The SoC has a DisplayPort 1.2a link (~21 Gbps) to the HDMI TCON, which means the SoC can transport HDR10 4:4:4 @ 60Hz (~18 Gbps) to the TCON. The question is whether that TCON can be flashed, and whether the HDMI spec will allow for a cut-down version of HDMI 2.1 that only does 60Hz HDR10 instead of 120Hz HDR12.
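For reference, here's where that ~18 Gbps figure comes from, a minimal sketch assuming the standard CTA 4K60 timing (4400 × 2250 total pixels including blanking) and ignoring link-level encoding overhead:

```python
# Approximate video signal rate for 2160p60 at 10-bit 4:4:4 (HDR10-style).
# Uses standard CTA 4K60 timing; TMDS/DP encoding overhead is ignored.
h_total, v_total, refresh_hz = 4400, 2250, 60   # totals include blanking
bits_per_pixel = 3 * 10                         # 4:4:4, 10 bits per component

gbps = h_total * v_total * refresh_hz * bits_per_pixel / 1e9
print(f"{gbps:.1f} Gbps")   # -> ~17.8 Gbps, i.e. the "~18 Gbps" figure above
```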
 

Dehnus

Member
PS4/Pro has 8 compute processors.
XB1 has 2 compute processors.
XB1X has 4 compute processors.

That is what AMD calls an ACE.

If you add the graphic (or command processor) you will have: 8x2, 2x2 and 4x2 respectively.

It is doubled from XB1 but half PS4/Pro.

Sigh, it's too late for me to explain to you what AMD's ACEs are, but they are not compute processors. They are schedulers that plan the tasks for the processing units on the card, nothing more. More in that case is not always better; you just want the units fed to their optimum. More schedulers and longer pipelines don't mean you automatically get better performance, and it will never go past the theoretical maximum: once fed, they are full, and there is nothing more to add.

Apparently MS believes the number of ACEs in the S and the X is enough, and they have the numbers from games running on the Xbox One to back this up. If it were a bottleneck, it would have been increased far more than it was; considering the number of CUs added to the X, only doubling the ACE count against more than three times the CUs is actually a relative reduction. So it isn't an issue for their design and how Direct3D handles it on the Xbox. It might just be that MS does a lot of pre-scheduling in the API before filling the pipes, or that it is simply easier to schedule in their API.

But believe me, you cannot just compare those numbers as if they were percentages or power differences. Some architectures benefit more than others from longer pipelines.
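A toy way to picture that schedulers-vs-execution-units argument (purely illustrative; none of these numbers come from the actual hardware):

```python
# Toy model: throughput is capped by the compute units, not by how many
# front-end queues (ACEs) feed them. All numbers here are made up.
def throughput(num_aces, num_cus, work_per_ace=150, work_per_cu=10):
    offered = num_aces * work_per_ace    # work the schedulers can dispatch
    capacity = num_cus * work_per_cu     # work the CUs can actually absorb
    return min(offered, capacity)

for aces in (2, 4, 8):
    print(f"{aces} ACEs -> {throughput(aces, num_cus=40)}")
# 2 ACEs -> 300 (front-end limited)
# 4 ACEs -> 400 (CUs saturated)
# 8 ACEs -> 400 (no further gain: the units are already fed)
```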
 

Dehnus

Member
1 HWS (Polaris) = 2 old ACEs (GCN 1.1)


AFAIK, PS4 Pro Polaris GPU also has 4 ACEs and 2 HWS.


Source?

I remember PS3 having a Hypervisor (GameOS/OtherOS), but Sony has never said anything about the PS4.

ps: Having a hypervisor is probably a good thing for BC.


285GB/s just for the GPU or for the APU as a whole?

Does that mean that the CPU has 41GB/s of RAM bandwidth?

I also wonder if there's any performance bottleneck when both CPU and GPU access RAM simultaneously (kinda like in the PS4).


The fact that both companies had to disable half the GPU for BC indicates that they have something in common due to AMD/Polaris hardware.

And yes, that die schematic looks like a "butterfly" GPU.


Mechanical HDDs barely max out SATA1, let alone SATA2.


Because external HDDs have more dense platters and thus higher transfer rates. It's not a matter of interface (USB vs SATA).


No, you can't and SATA3 isn't that expensive either. Still useless for HDDs tho.

BINGO! Not to mention higher RPM, extra platters (4 vs 3) and other things a 3.5-inch disk has over a 2.5-inch laptop spinner.

The problem isn't the bus; it's the 2.5-inch laptop spinner that these consoles come with.
 
What kind of TV do you have currently?

A Panny plasma GT50 (still looking great) in the living room and a Sony 40W4000. A few years old but very fast (input lag is very low).

I wanna buy a 4K set, but I can't spend a lot because I've got 2 kids going to college in a few weeks. Then again, I've got a PS4 Pro and have an Xbox One X on pre-order. I feel like I'm missing out on HDR.

What's weird, though, is that the image quality of the Xbox One and PS4 Pro bothers me more on my current TVs (I keep messing with the settings) than the IQ of my Switch on the same TVs. Thinking about it, maybe the lack of 60 fps games on PS4 and Xbox One is responsible for the way I feel. Personally I love the feel of 60 fps games.

Actually, if I have the choice I'll always prefer a higher framerate over more pixels (4K 60 would be an excellent combo of course), so maybe I should wait until I get the Xbox One X. If it is capable of 1080/60 in my favourite games, I may wait for OLED prices to come down and HDMI 2.1 implementation.
 

onQ123

Member
The fact that both companies had to disable half the GPU for BC indicates that they have something in common due to AMD/Polaris hardware.

And yes, that die schematic looks like a "butterfly" GPU.

This die looks just like I expected the PS4 Pro die to look, with 2 GPU clusters put together like the Jaguar clusters.
 

onQ123

Member
According to the SDK... PS4 Pro is 64 ROPS and 1MB L2 Cache.

I was hoping the leak would have this information. I've been trying to find the leaked docs, but Sony seems to be doing a good job of keeping them off the net, because I couldn't find them after I heard about the leak.


Edit: So PS4 Pro GPU really is 2 modified PS4 GPUs next to each other.
 
A Panny plasma GT50 (still looking great) in the living room and a Sony 40W4000. A few years old but very fast (input lag is very low).

I wanna buy a 4K set, but I can't spend a lot because I've got 2 kids going to college in a few weeks. Then again, I've got a PS4 Pro and have an Xbox One X on pre-order. I feel like I'm missing out on HDR.

What's weird, though, is that the image quality of the Xbox One and PS4 Pro bothers me more on my current TVs (I keep messing with the settings) than the IQ of my Switch on the same TVs. Thinking about it, maybe the lack of 60 fps games on PS4 and Xbox One is responsible for the way I feel. Personally I love the feel of 60 fps games.

Actually, if I have the choice I'll always prefer a higher framerate over more pixels (4K 60 would be an excellent combo of course), so maybe I should wait until I get the Xbox One X. If it is capable of 1080/60 in my favourite games, I may wait for OLED prices to come down and HDMI 2.1 implementation.
Hold on to your GT50 until HDMI 2.1 features are implemented at the earliest. That's a great TV.
 
The fact that both companies had to disable half the GPU for BC indicates that they have something in common due to AMD/Polaris hardware.

And yes, that die schematic looks like a "butterfly" GPU.
MS doesn't disable half the GPU for BC. The older SDKs have a hard-coded split, giving 20 CUs to pixel/compute and 20 to vertex shaders, but the whole GPU is exposed to the games. For games on the newer SDK, even if they don't support Scorpio, the 40 CUs are exposed in a unified way.
 
MS doesn't disable half the GPU for BC. The older SDKs have a hard-coded split, giving 20 CUs to pixel/compute and 20 to vertex shaders, but the whole GPU is exposed to the games. For games on the newer SDK, even if they don't support Scorpio, the 40 CUs are exposed in a unified way.
I don't think that's true, and it doesn't make sense either (due to unified shaders). 3 TF for vertex shaders alone sounds like overkill. Vertices don't need that much processing power.

There are 3TF available for all types of shaders. Still plenty enough for old, unpatched games.

ps: You also forgot to mention geometry shaders.
 

onQ123

Member
It seems that the PS4 Pro has the edge in a few areas like pixel fillrate & maybe geometry rendering, but I think it will be a while before this information comes out so we can compare the actual specs. Xbox One X seems to be more straightforward, while the PS4 Pro added features that will require more work from the devs, but the PS4 Pro is a lot closer to the Xbox One X than what the fp32 flops show.
 

Alexious

Member
It seems that the PS4 Pro has the edge in a few areas like pixel fillrate & maybe geometry rendering, but I think it will be a while before this information comes out so we can compare the actual specs. Xbox One X seems to be more straightforward, while the PS4 Pro added features that will require more work from the devs, but the PS4 Pro is a lot closer to the Xbox One X than what the fp32 flops show.

How do you figure that PS4 Pro has the edge in a few areas if "it's going to be a while before this information comes out"?
 

pottuvoi

Banned
It seems that the PS4 Pro has the edge in a few areas like pixel fillrate & maybe geometry rendering, but I think it will be a while before this information comes out so we can compare the actual specs. Xbox One X seems to be more straightforward, while the PS4 Pro added features that will require more work from the devs, but the PS4 Pro is a lot closer to the Xbox One X than what the fp32 flops show.
Memory bandwidth is most likely one of the big wins for the XOX, as GPUs are in general quite bandwidth bound.
It will certainly be interesting to see papers and developers describe the differences in the coming years.
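One rough way to look at that balance, using the commonly cited peak figures (326 GB/s and 6.0 TFLOPS for the One X, 218 GB/s and 4.2 TFLOPS for the Pro):

```python
# Bytes of peak bandwidth available per peak FLOP, from published figures.
systems = {
    "Xbox One X": {"bw_gb_s": 326, "tflops": 6.0},
    "PS4 Pro":    {"bw_gb_s": 218, "tflops": 4.2},
}
for name, s in systems.items():
    bytes_per_flop = s["bw_gb_s"] / (s["tflops"] * 1000)
    print(f"{name}: {bytes_per_flop:.3f} B/FLOP")
# -> ~0.054 vs ~0.052 B/FLOP: a similar balance per FLOP, but the One X
#    has roughly 50% more absolute bandwidth to spend on each frame.
```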
 
TBH, 64 ROPs sounds like overkill.

If it's true, then why doesn't Sony/Cerny advertise it to their advantage? Are they waiting for Scorpio's release or something?
 

longdi

Banned
When did the PS4 Pro get 64 ROPs? I thought it was an RX 580, which only has 32 ROPs. You can't just slap ROPs onto the compute units.
 
TBH, 64 ROPs sounds like overkill.

If it's true, then why doesn't Sony/Cerny advertise it to their advantage? Are they waiting for Scorpio's release or something?

Maybe because those who are not knowledgeable about specs like this won't know what ROPs are or what they do.

If the PS4 Pro's GPU really does have 64 ROPs, it doesn't tell the whole story about how the GPU performs, as there are other variables at play depending on the software. If something is not ROP bound, then the ROP disparity may not matter much; if something is, then this can potentially be countered by utilizing compute shaders.

The Xbox One X's GPU is stronger in almost all other areas; the PS4 Pro's GPU has less memory bandwidth to work with and may be memory bandwidth limited, and as a result it could be unable to achieve its maximum fill rate.
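To put numbers on the fill-rate point, a quick sketch using the published GPU clocks (1172 MHz for the One X, 911 MHz for the Pro); note the 64-ROP count for the Pro is the SDK claim from upthread, not an officially confirmed figure:

```python
# Peak pixel fill rate = ROPs x GPU clock (GHz) -> gigapixels per second.
configs = {
    "Xbox One X":        {"rops": 32, "clock_ghz": 1.172},
    "PS4 Pro (claimed)": {"rops": 64, "clock_ghz": 0.911},
}
for name, c in configs.items():
    print(f"{name}: {c['rops'] * c['clock_ghz']:.1f} Gpix/s peak")
# -> ~37.5 vs ~58.3 Gpix/s, but the peak only matters when a workload is
#    actually ROP-bound and memory bandwidth can keep those ROPs fed.
```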
 

Belker

Member
I wanna buy a 4K set but I can't spend a lot because I've got 2 kids going to college in a few weeks.

Are there any shops that do student discounts? Give the kids money for the TV, have them buy it, but they ship it to you. They can enjoy it when they come home for the holidays, you get to use it in the meantime.

In a few years you can pass the TV to them when you upgrade again.
 

Proelite

Member
The differences in RTR and some other recent games between the Pro and the XBX seem to be bigger than what the ~50% advantage in GPU, bandwidth, and RAM allows for. 4K checkerboard vs 4K native with improvements takes almost twice the specs. Devs probably haven't spent effort optimizing for the Pro like they're doing on the XBX. It'll be great to see what the 16-bit extensions can do for the gap.
 

onQ123

Member
TBH, 64 ROPs sounds like overkill.

If it's true, then why doesn't Sony/Cerny advertise it to their advantage? Are they waiting for Scorpio's release or something?

You have to remember that the PS4 & PS4 Pro also output to the TV when running VR games, so they have reasons for using more than 32 ROPs in the PS4 Pro.

I don't think we have heard much from Sony/Cerny about the PS4 Pro in response to the Xbox One X to begin with, & even if they did, I don't think they would advertise having more ROPs to a console market. They had 32 ROPs vs the Xbox One's 16 ROPs before, but I don't remember them using it as a talking point.


When did the PS4 Pro get 64 ROPs? I thought it was an RX 580, which only has 32 ROPs. You can't just slap ROPs onto the compute units.

The PS4 Pro GPU is not an RX 580; it's closer to a Vega GPU.
Maybe because those who are not knowledgeable about specs like this won't know what ROPs are or what they do.

If the PS4 Pro's GPU really does have 64 ROPs, it doesn't tell the whole story about how the GPU performs, as there are other variables at play depending on the software. If something is not ROP bound, then the ROP disparity may not matter much; if something is, then this can potentially be countered by utilizing compute shaders.

The Xbox One X's GPU is stronger in almost all other areas; the PS4 Pro's GPU has less memory bandwidth to work with and may be memory bandwidth limited, and as a result it could be unable to achieve its maximum fill rate.
Nope, the ROPs wouldn't tell the full story. I'm not sure why you would think I was saying that the ROPs would tell the full story, & yes, the PS4 Pro's memory will limit it.

The differences in RTR and some other recent games between the Pro and the XBX seem to be bigger than what the ~50% advantage in GPU, bandwidth, and RAM allows for. 4K checkerboard vs 4K native with improvements takes almost twice the specs. Devs probably haven't spent effort optimizing for the Pro like they're doing on the XBX. It'll be great to see what the 16-bit extensions can do for the gap.

If native 4K took almost twice the specs of 4K checkerboard rendering, any dev that used native 4K would be a fool. But as you can see from the DICE slides, the gain from using checkerboard rendering doesn't cut the work in half, even with the hardware added to the PS4 Pro. You & others seem to think that checkerboard rendering is simply half the resolution of native.
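Counting shaded samples alone shows where the confusion comes from, a minimal sketch (the reconstruction pass and the resolution-independent parts of the frame are what keep it from being a clean 2x saving):

```python
# Shaded-sample counts per frame: checkerboarding shades roughly half the
# samples, then pays a reconstruction pass on top, and the parts of the
# frame that don't scale with resolution don't shrink at all.
native_4k_px = 3840 * 2160          # ~8.29 M samples shaded
checker_px   = native_4k_px // 2    # ~4.15 M samples shaded
print(f"native 4K: {native_4k_px/1e6:.2f} M, checkerboard: {checker_px/1e6:.2f} M")
```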


 

blastprocessor

The Amiga Brotherhood
I was hoping the leak would have this information. I've been trying to find the leaked docs, but Sony seems to be doing a good job of keeping them off the net, because I couldn't find them after I heard about the leak.


Edit: So PS4 Pro GPU really is 2 modified PS4 GPUs next to each other.

Yeah, I couldn't find the leaked docs on the Pro. I know someone on Beyond forwarded a copy to the moderator.
 

dr_rus

Member
I was hoping the leak would have this information. I've been trying to find the leaked docs, but Sony seems to be doing a good job of keeping them off the net, because I couldn't find them after I heard about the leak.


Edit: So PS4 Pro GPU really is 2 modified PS4 GPUs next to each other.

Nah, it's ~52 NGCs glued together.

...There is zero reason to put two PS4 GPUs next to each other when you are building a single chip SoC system.
 
If native 4K took almost twice the specs of 4K checkerboard rendering, any dev that used native 4K would be a fool. But as you can see from the DICE slides, the gain from using checkerboard rendering doesn't cut the work in half, even with the hardware added to the PS4 Pro. You & others seem to think that checkerboard rendering is simply half the resolution of native.

That's because they are counting the full render time, which includes vertex and other work that won't scale at all with resolution.

CB will be costlier than just rendering half the pixel load, because it has to do something extra (though even a simple software upscale would), but the extent of that is currently undisclosed.
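A toy frame-time split shows why halving the shaded pixels falls well short of halving the frame, a sketch with purely illustrative numbers (not profiled from any real game):

```python
# Toy frame-time model: only the per-pixel portion scales with resolution.
fixed_ms       = 6.0    # vertex work, shadows, sim, dispatch -- resolution-independent
per_pixel_ms   = 10.0   # shading cost at native 4K in this toy example
reconstruct_ms = 1.5    # extra checkerboard resolve/reconstruction pass

native_4k  = fixed_ms + per_pixel_ms
checker_4k = fixed_ms + per_pixel_ms * 0.5 + reconstruct_ms
print(f"native 4K: {native_4k:.1f} ms, checkerboard: {checker_4k:.1f} ms")
# -> 16.0 ms vs 12.5 ms: a useful win, but nowhere near a 2x saving.
```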
 
You have to remember that the PS4 & PS4 Pro also output to the TV when running VR games, so they have reasons for using more than 32 ROPs in the PS4 Pro.

I don't think we have heard much from Sony/Cerny about the PS4 Pro in response to the Xbox One X to begin with, & even if they did, I don't think they would advertise having more ROPs to a console market. They had 32 ROPs vs the Xbox One's 16 ROPs before, but I don't remember them using it as a talking point.




The PS4 Pro GPU is not an RX 580; it's closer to a Vega GPU.

Nope, the ROPs wouldn't tell the full story. I'm not sure why you would think I was saying that the ROPs would tell the full story, & yes, the PS4 Pro's memory will limit it.



If native 4K took almost twice the specs of 4K checkerboard rendering, any dev that used native 4K would be a fool. But as you can see from the DICE slides, the gain from using checkerboard rendering doesn't cut the work in half, even with the hardware added to the PS4 Pro. You & others seem to think that checkerboard rendering is simply half the resolution of native.

The ROPs not telling the full story wasn't aimed at you. I said it doesn't tell the full story because the post I quoted mentioned advertising it as an advantage, to which I responded that "it doesn't tell the whole story about how the GPU performs as there are other variables at play depending on the software".
 