Could it be that they just don't list all of them so they actually have more and the 2x2 they display are just stand-ins?
It's 2x8, but only 2 can be active at any one time.
Fair enough.
What is your take on hUMA for Xbox One, guys?
Standard GCN is 2 ACEs with 1 queue for each ACE (2 ACEs/2 queues)
PS4 will have 8 ACEs with 8 queues for each ACE (8 ACEs/64 queues)
HD8000 or beyond (Kabini, Kyoto, for example) have 4 ACEs with 8 queues each (4 ACEs/32 queues)
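Since the totals above are just ACE count times queues per ACE, here's a trivial sketch of the arithmetic (the labels are my own shorthand, not official product names):

```python
# Total queue count is simply ACEs x queues per ACE.
# Labels are informal shorthand, not official product names.
configs = {
    "standard GCN": (2, 1),   # 2 ACEs, 1 queue each
    "Kabini/Kyoto": (4, 8),   # 4 ACEs, 8 queues each
    "PS4":          (8, 8),   # 8 ACEs, 8 queues each
}

for name, (aces, per_ace) in configs.items():
    print(f"{name}: {aces} ACEs / {aces * per_ace} queues")
```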
This is a very interesting feature, since this is not an AMD-only thing. Nvidia calls it "Hyper-Q" and claims that it greatly improves GPU utilization. Titan, for example, has 32 queues:
Yep, that's very true.
So you get low-latency graphics at the expense of having to manage the eSRAM at all costs.
How do you know? ^_^
So in conclusion, the GPGPU capabilities of the Xbox One (eSRAM not fully coherent, not as many command processors and compute queues) are lower than on PS4.
Which in turn means we won't be seeing that being pushed across the board for next-gen.
I'm quite bummed by this, as third-parties most likely won't take advantage of something only the PS4 really benefits from.
Hotchips 25 - what we've learned:
Xbox One is custom HD7000, PS4 is custom HD8000. You can see that on this pic:
But on PC it's not the same kind. I keep being unspecific when I say GPGPU: I mean the particular brand that AMD is lobbying for with hUMA, which allows quick interplay of CPU and GPU without taking the huge latency hit of the copy process. It will still get pushed on PC and PS4.
http://www.vgleaks.com/playstation-4-includes-huma-technology/
This deserves a new thread, I think. I leave it up to you.
The latency won't really matter much in GPU workflows. The eSRAM is there to provide enough bandwidth for rendering into pixel buffers, while the DDR3 bandwidth is there for texture fetching, either directly or indirectly by first copying textures into eSRAM via the DMEs. Latency doesn't really matter here.
The 8000 series is what AMD uses for their latest 28nm APUs. These GPUs have improved compute capabilities compared to the 7000 series.
GPUs are good at hiding latency in common graphical tasks, which follow predictable and linear memory access patterns. Common graphical tasks also have a predictable execution flow, for example where you read some data from a texture and multiply the read value with a bunch of normals, or something like that.
Throw in more complex shaders and GPGPU computations, which break the memory access and execution flow, and latency on the GPU becomes a real issue. That's pretty noticeable from the performance impact that even high-end PC parts take.
That's why the PS4 has so many more compute threads in flight than even the highest-end card currently available.
Edit: I'm not sure if the results apply directly to GPGPU in gaming, but if you look at academic GPGPU research papers, it's pretty common to find that optimizing memory access patterns for the GPU can yield extremely high performance gains, much more so than brute-forcing the problem by adding more processing power. So having a low-latency memory for that kind of computation might actually help achieve better performance faster.
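To make that concrete with a CPU-side toy (same principle, obviously not actual GPU code; assumes NumPy is installed): copying a big array in its native row-major order reads memory sequentially, while copying its transpose forces strided reads, and the timing gap shows how much access patterns matter even before you throw more compute at the problem.

```python
import timeit

import numpy as np

# Toy CPU-side illustration of memory access patterns: copying a
# large array in its native (row-major) order reads memory
# contiguously, while materializing its transpose forces strided reads.
a = np.random.rand(2000, 2000)

contiguous = timeit.timeit(lambda: a.copy(), number=10)                # sequential reads
strided = timeit.timeit(lambda: np.ascontiguousarray(a.T), number=10)  # strided reads

print(f"contiguous copy: {contiguous:.3f}s, strided copy: {strided:.3f}s")
```

Both produce the same data in the end; only the traversal order differs.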
That's not necessarily true. Multiple queue engines are used so you have many threads in flight and can switch between them to hide memory latency. With eSRAM, that's potentially a much smaller problem on the Xbone than it is on PC or PS4.
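The latency-hiding mechanism is easy to see in a toy round-robin scheduler simulation (all numbers made up, purely illustrative):

```python
# Minimal simulation of latency hiding: each "wavefront" alternates
# 1 cycle of ALU work with a fixed memory stall. With enough
# wavefronts in flight, the scheduler always finds one that is
# ready, so the ALU never sits idle.

MEM_LATENCY = 4  # cycles a wavefront stalls after issuing a load (made up)

def busy_fraction(num_wavefronts, total_cycles=1000):
    # ready_at[i] = first cycle at which wavefront i can issue work again
    ready_at = [0] * num_wavefronts
    busy = 0
    for cycle in range(total_cycles):
        for i in range(num_wavefronts):
            if ready_at[i] <= cycle:
                busy += 1                              # issue 1 cycle of ALU work
                ready_at[i] = cycle + 1 + MEM_LATENCY  # then stall on memory
                break
    return busy / total_cycles

print(busy_fraction(1))  # one wavefront: ALU idles during every stall
print(busy_fraction(8))  # many wavefronts: ALU stays busy
```

With one wavefront the ALU is busy only 20% of the time here; with eight in flight, some wavefront is always ready and utilization hits 100%.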
German tech site Planet3DNow (specialized in AMD hardware) comes to the conclusion:
Xbox One does not have hUMA
Mark Diana and Heise were right. The eSRAM is not coherent, and Xbox One basically looks like an APU on the Llano/Trinity level. They're not saying what this means for performance, but they do say that coding for Xbox One will take more effort than coding for a more advanced HSA stage.
You're confusing HD8000 with HD8000M. The former is what AMD uses for 28nm APUs (Kabini, Kyoto, etc.); the latter is a rebrand for discrete notebook GPUs.
As I presumed. The GPU-only eSRAM only meant complications, since anything in it would still have to be shared between the CPU and GPU. The drawbacks simply outweighed the benefits.
What does this mean for the PS4 fidelity advantage? Is this just something to make coding easier, or is it something that will markedly improve performance?
Two compute command processors (ACEs) and most likely two compute queues for Xbox One. PS4 has eight ACEs and 64 compute queues. That means that Xbox One will probably suck at GPGPU compared to PS4 and will not be able to do asynchronous fine-grained compute (GPGPU without a penalty for rendering) efficiently. This is somewhat disappointing: even a Kabini notebook APU with 2 GCN CUs has four ACEs and 32 queues. To me it looks like Microsoft never intended to do GPGPU with this console. So why would they need hUMA at all?
There are still some people here who won't stop believing it. AMD already said that, but people reacted as usual.
Well wasn't it kinda obvious from the beginning that X1 doesn't support hUMA?
We've had the ESRAM latency song and dance before, but no one has ever really mentioned plausible numbers on it. AMD cards tend to have pretty high latency costs everywhere -- according to the GCN presentation at GDC Europe latency to L1 was still on average 20x that of latency to LDS memory, and I doubt that ESRAM is closer than L1 and L2 in the memory hierarchy.
It's difficult to know hard numbers without actually testing, but it's obvious that a memory that sits physically close to the execution units, enclosed in the same package, will have much lower latency to get the data ready compared to the main memory bus.
I doubt eSRAM would have much worse latency than an L2 cache. Why would it? Aren't caches usually made of SRAM too?
Seems like that PS4 GPU will have longer legs and be best exploited by first parties as time passes for GPGPU functions.
I also noticed that the Xbox One GPU is DirectX 11.1+ and the PS4 GPU is DirectX 11.2+.
lol ...u know directx are a microsoft thing right?
It's DirectX feature levels, not the DirectX API.
EDIT: beaten.
Don't know if this has already been posted, but here's new information on hUMA on PS4:
http://www.vgleaks.com/more-exclusive-playstation-4-huma-implementation-and-memory-enhancements-details/
From the Xbox tech thread... a gaffer found this update on hUMA on the PS4:
http://www.vgleaks.com/more-exclusi...plementation-and-memory-enhancements-details/