
AMD Polaris architecture to succeed Graphics Core Next

fred

Member
If AMD have any sense they'll release their new GPUs in Q1 or Q2 this year when Oculus Rift and HTC Vive release. I'm expecting NVidia to do the same.

I'm going to be getting a new GPU in May or June this year for VR and there'll be plenty of people wanting to do the same when Oculus Rift and HTC Vive are released.

The only concern I have about getting an AMD card over an NVidia card is AMD's problems regarding drivers.
 

Kezen

Banned
That said, AMD has no one to blame for their current problems other than AMD. It's AMD that made Bulldozer with such a flawed and badly conceived microarchitecture. It's AMD that dramatically overpaid to acquire ATI and then did basically nothing with their acquisition except drive it to financial ruin. You can't blame Intel and Nvidia for that. And it's up to AMD to save themselves.

Completely agree.
Nvidia and Intel don't have to play nice with them; it's a competition, after all.

Nvidia GameWorks is tuned for Nvidia and I don't see anything wrong with that at all; it's their investment. AMD are free to do the same.
 
If AMD have any sense they'll release their new GPUs in Q1 or Q2 this year when Oculus Rift and HTC Vive release. I'm expecting NVidia to do the same.

I'm going to be getting a new GPU in May or June this year for VR and there'll be plenty of people wanting to do the same when Oculus Rift and HTC Vive are released.

The only concern I have about getting an AMD card over an NVidia card is AMD's problems regarding drivers.

They are releasing a dual Fiji card to coincide with VR in early 2016, but don't expect their new GPU, or Nvidia's, until the second half of the year.
 

fred

Member
They are releasing a dual Fiji card to coincide with VR in early 2016, but don't expect their new GPU, or Nvidia's, until the second half of the year.

NVidia have delayed the release of Pascal for a while; everyone was expecting it to release last year. Releasing in Q1 or Q2 this year would be the intelligent thing for NVidia to do.

If AMD only release a dual Fiji card this year for VR it isn't going to be enough.
 

Locuza

Member
Nvidia presented Pascal for 2016.

AMD and Nvidia can't release their new cards whenever they would like.
 

Durante

Member
If AMD have any sense they'll release their new GPUs in Q1 or Q2 this year when Oculus Rift and HTC Vive release.
NVidia have delayed the release of Pascal for a while; everyone was expecting it to release last year. Releasing in Q1 or Q2 this year would be the intelligent thing for NVidia to do.
Your posts make it sound like companies just have to decide to release something in some timeslot and then that will happen.

These are new GPU microarchitectures, not a pair of socks. There are a lot of things which can go wrong and delay a release.
 

AmyS

Member
Nvidia presented Pascal for 2016.

AMD and Nvidia can't release their new cards whenever they would like.

Exactly. Mass production of these AMD GPUs by GloFo and Samsung is reportedly set to begin in Q2, with launch expected by late summer / back-to-school time (around August). If that's what's being rumored, then I don't expect it to happen earlier.

There is also the need to have enough HBM2 for the higher-end cards; both Arctic Islands and Pascal will use it.
 
Your posts make it sound like companies just have to decide to release something in some timeslot and then that will happen.

These are new GPU microarchitectures, not a pair of socks. There are a lot of things which can go wrong and delay a release.

They also have new memory designs and are manufactured on a new process node. There's still a lot which can go wrong between now and Q2. I'm glad I have my 980 Ti to ride out any delays in Pascal which are likely to occur.
 

slapnuts

Junior Member
Huh, I thought we were getting a new iteration of GCN; this is a nice surprise.

You really think Polaris is a new build from the ground up? Dude, it's a rebrand of GCN with some new perks added on top... in other words, it's GCN 2/1.3 with a new name slapped on top... Think about it. Maybe they can fool the mainstream into thinking Polaris is some new cutting-edge technology, but those of us here should know better.
 
I'm working with a 970 right now, but I'll be upgrading at some point next year, probably around the holidays. AMD might just have my business next holiday. By the end of next year the smoke will have cleared and we'll see who's created the superior product. I'm positive AMD will be able to deliver it at a better price point, but unfortunately for AMD it looks like they will release after Nvidia, which puts them in a bad position. They need to release at or around the same time as Nvidia or they'll be unable to improve their market share (provided Nvidia doesn't screw up royally somehow). The high-end users looking at these GPUs won't care enough about the price difference to wait an additional 3-6 months.
 

Meh3D

Member
I'm generally very happy with the Hawaii-based cards. They're fantastic for GPGPU development. I need every bit of that nice 8 GB of device memory. With the way the Fury cards went, Hawaii/Grenada will be the last breed of inexpensive GPU compute development boards with no compromises.
 
Don't really care about the battle between Nvidia and AMD. I'm more interested in seeing AMD making improvements to tech that will hopefully end up in the next generation of consoles.
 

orochi91

Member
Wasn't Sony thinking about releasing a more powerful PS4?

Maybe they both are now.

The logistics just don't make sense though.

What about the tens of millions that have a PS4 right now?

A PS4.1 would fragment their own market.
 

elohel

Member
The logistics just don't make sense though.

What about the tens of millions that have a PS4 right now?

A PS4.1 would fragment their own market.


If they had everything in place then it would maybe be okay, but yeah, as it stands it's probably not doable
 

McHuj

Member
In theory they could leverage some of the micro-architectural improvements of Polaris that increase power efficiency, but don't impact performance. As long as the performance and behavior are the same to the user and programmer, it doesn't matter what's under the hood.
 

Octavia

Unconfirmed Member
Lol, exactly.

A new APU would essentially be an XB1 successor.

It's not always about brute-force calculations. Greater efficiency in the architecture allows for smaller units and less power consumption. More likely a slim is coming with the same power, less wattage draw and less heat.
 

orochi91

Member
In theory they could leverage some of the micro-architectural improvements of Polaris that increase power efficiency, but don't impact performance. As long as the performance and behavior are the same to the user and programmer, it doesn't matter what's under the hood.

It's not always about brute-force calculations. Greater efficiency in the architecture allows for smaller units and less power consumption. More likely a slim is coming with the same power, less wattage draw and less heat.

Oh, so this is probably about a die shrink.

So basically the chances are high that Sony and/or MS will unveil a slim version of their consoles sometime this year.
 

blastprocessor

The Amiga Brotherhood
A PS4.1 would fragment their own market.

One could just run games in 4K and have 4K Blu-ray. Imagine a patch for your existing PS4 games and Driveclub running in 4K o_O

I would like to see Sony and Microsoft bring updated models out every year, just like Apple with their iPads and iPhones. Dream on, but it would be nice.
 

Sarobi

Banned
[Insert line about Xbox One slim with no disc drive]

I honestly can't see them releasing an upgraded model. Developers would have to fiddle around with two versions of a game for two versions of the same console. Gen 1 Xbox One owners would get the shaft at some point (if this happened, that is).
 

PFD

Member
It would be perfect for PSVR. Also, I'm OK with choosing between a 30 FPS console and a more expensive 60 FPS console.
 

AmyS

Member
http://wccftech.com/xbox-one-may-be...d-on-amds-polaris-architecture/#ixzz3w2TldL8T

Xbox One May Be Getting A New APU Based On AMD’s Polaris Architecture

This is nonsense.

Xbox One and PS4 APUs both use Southern Islands (GCN 1.0) GPUs. No way they're changing that with future revisions of these consoles.

Using Polaris (next gen "GCN 2.0") in a new revision of Xbox One would split the userbase if Microsoft encouraged devs to take advantage of it. Same thing with PS4.

No doubt, AMD is working on 14/16nm FinFET shrinks of the Xbox One and PS4 APUs, but they'll undoubtedly have the same Southern Islands / GCN 1.0 architecture and Jaguar CPU cores. When Microsoft moved to the Xbox 360 Slim in 2010, the Xenon CPU, Xenos GPU and eDRAM were combined onto a single 45nm chip (Vejle), with no underlying architecture changes. Some of the clock speeds within the new combined chip had to be throttled to keep them equal to all other 360 consoles.

99.9% chance we are not going to see higher performance Microsoft and Sony consoles until XB4 / PS5 around ~2019.
 

Locuza

Member
I'm generally very happy with the Hawaii-based cards. They're fantastic for GPGPU development. I need every bit of that nice 8 GB of device memory. With the way the Fury cards went, Hawaii/Grenada will be the last breed of inexpensive GPU compute development boards with no compromises.
How do you know that, and what exactly do you mean by GPU compute development boards?

In theory they could leverage some of the micro-architectural improvements of Polaris that increase power efficiency, but don't impact performance. As long as the performance and behavior are the same to the user and programmer, it doesn't matter what's under the hood.
If Polaris changes a lot, it's nearly impossible to have zero impact on performance.

Xbox One and PS4 APUs both use Southern Islands (GCN 1.0) GPUs.
Small correction here, they both use Sea Islands (GCN 1.1/GCN Gen 2) GPUs.
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
Small correction here, they both use Sea Islands (GCN 1.1/GCN Gen 2) GPUs.

IIRC, Phil confirmed that.

But I thought that Sea Islands was just a refresh of the Southern Islands architecture, so it's still technically the same. So what makes it GCN 1.1 instead of GCN 1.0? Something to do with the APU improvements and such, like fine-grained compute?
 

Nikodemos

Member
IIRC, Phil confirmed that.

But I thought that Sea Islands was just a refresh of the Southern Islands architecture, so it's still technically the same. So what makes it GCN 1.1 instead of GCN 1.0? Something to do with the APU improvements and such, like fine-grained compute?
Dedicated sound processor IIRC, along with some smaller changes.
 

Locuza

Member
Somebody from Microsoft confirmed it in an interview with Eurogamer about the whole tech behind Xbox One.

The differences according to the documentation:
GCN Gen 2:
Multi queue compute
Lets multiple user-level queues of compute workloads be bound to the device and processed simultaneously. Hardware supports up to eight compute pipelines with up to eight queues bound to each pipeline.

System unified addressing
Allows GPU access to process coherent address space.

Device unified addressing
Lets a kernel view LDS and video memory as a single addressable memory.
It also adds shader instructions, which provide access to “flat” memory space.

Memory address watch
Lets a shader determine if a region of memory has been accessed.

Conditional debug
Adds the ability to execute or skip a section of code based on state bits under
control of debugger software. This feature adds two bits of state to each wavefront; these bits are initialized by the state register values set by the debugger, and they can be used in conditional branch instructions to skip or execute debug-only code in the kernel.

Support for unaligned memory accesses
Detection and reporting of violations in memory accesses

So, right, the biggest difference is found in the ACEs, which can dispatch up to 8 compute queues vs. 1 on GCN Gen 1.
That, and unified memory addressing.
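
Just to make the multi-queue part concrete, here is a rough, hypothetical host-side sketch in plain C using the generic OpenCL API (nothing AMD-specific; the queue count and the minimal error handling are my own choices). It simply binds several command queues to one GPU device, which is the software-visible side of the "multiple user-level queues of compute workloads" described above.

Code:
/* Hypothetical sketch: bind several OpenCL command queues to one GPU device.
 * Build with something like: cc multiqueue.c -lOpenCL */
#define CL_TARGET_OPENCL_VERSION 120   /* use the OpenCL 1.2 entry points */
#include <stdio.h>
#include <CL/cl.h>

#define NUM_QUEUES 8   /* the documentation above mentions up to eight queues per pipeline */

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
        fprintf(stderr, "no OpenCL GPU found\n");
        return 1;
    }

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);

    /* Each queue is an independent stream of compute work; hardware with
     * multiple compute pipelines (the ACEs mentioned above) can service
     * several of them at the same time. */
    cl_command_queue queues[NUM_QUEUES];
    for (int i = 0; i < NUM_QUEUES; ++i)
        queues[i] = clCreateCommandQueue(ctx, device, 0, &err);

    /* ... enqueue kernels and transfers on the different queues here ... */

    for (int i = 0; i < NUM_QUEUES; ++i)
        clReleaseCommandQueue(queues[i]);
    clReleaseContext(ctx);
    return 0;
}

The API lets you create the queues either way; per the ACE note above, Gen 2 hardware is simply better at actually servicing several of them at once.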

GCN Gen 3:
Data Parallel ALU operations improve “Scan” and cross-lane operations.

Scalar memory writes.
In Generation 2, a kernel could read from a scalar data cache to retrieve
constant data. In Generation 3, that cache now supports reads and writes.

Compute kernel context switching.
Compute kernels now can be context-switched on and off the GPU.

Improvements on the context-switching side, the scalar unit and cross-lane operations (a minimal kernel example of a scan follows below).
Color compression for better bandwidth utilization was also added.
They improved the front-end for better tessellation performance.
In TessMark it was up to 30% faster at factor 32:
http://techreport.com/review/28513/amd-radeon-r9-fury-x-graphics-card-reviewed/4
And I've heard that geometry shaders might have gotten faster.

http://developer.amd.com/resources/documentation-articles/developer-guides-manuals/
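
For anyone wondering what a "scan" / cross-lane operation actually looks like, here is a tiny, hypothetical OpenCL C (2.0) kernel (the names are mine, not from AMD's documentation): each work-item ends up with the running sum across its work-group, which is the sort of operation the "Data Parallel ALU operations" point above is about.

Code:
/* Hypothetical example: a work-group inclusive prefix sum ("scan").
 * work_group_scan_inclusive_add is an OpenCL 2.0 built-in; each work-item
 * receives the sum of its own value plus all lower-indexed items in its
 * work-group. */
__kernel void inclusive_scan(__global const int *in, __global int *out)
{
    size_t gid = get_global_id(0);
    out[gid] = work_group_scan_inclusive_add(in[gid]);
}

Before such built-ins you would write the same scan by hand with local memory and barriers; presumably the point of better cross-lane support is that operations like this get cheaper.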


Nothing too shocking, but AMD did improve the architecture over time.
Polaris, though, should be a really big step forward.
 

pottuvoi

Banned
Nothing too shocking, but AMD did improve the Architecture over time.
But Polaris should be a really big step forward.
Really hope this is the case, and that they will have an API in which they can open all of it up to developers.
DX11 was not very kind to GCN.
 

Locuza

Member
DX11 wasn't kind to any modern architecture.

With GPUOpen AMD wants to start a whole bunch of stuff, including extensions and APIs to get more out of the GPUs.
We will see if developers will use any of it.
 
Somebody from Microsoft confirmed it in an interview with Eurogamer about the whole tech behind Xbox One.

The differences according to the documentation:
GCN Gen 2:


So, right, the biggest difference is found in the ACEs, which can dispatch up to 8 compute queues vs. 1 on GCN Gen 1.
That, and unified memory addressing.

GCN Gen 3:

Improvements on the context-switching side, the scalar unit and cross-lane operations.
Color compression for better bandwidth utilization was also added.
They improved the front-end for better tessellation performance.
In TessMark it was up to 30% faster at factor 32:
http://techreport.com/review/28513/amd-radeon-r9-fury-x-graphics-card-reviewed/4
And I've heard that geometry shaders might have gotten faster.

http://developer.amd.com/resources/documentation-articles/developer-guides-manuals/


Nothing too shocking, but AMD did improve the architecture over time.
Polaris, though, should be a really big step forward.
The Xbox One rumor that it would use a Polaris GPU comes from the Netflix Polaris build having the same name: "Xbox One Polaris" flashed during Netflix usage on their Xbox One. There's a thread discussing Netflix using star names for their builds.

Zen and Polaris on an interposer with HBM is also designed for servers, and servers need Network on Chip (NoC), which treats the unified (HSA) memory, via the IOMMU, like a network for transferring data. This is the next logical step for the GPU and follows ARM, which has already implemented NoC in the AXI bus. NoC provides an efficiency gain if it's used by the OS.

Both Zen and Polaris should support NoC, and mixing a Jaguar CPU with a Polaris GPU should eliminate the ability to use NoC efficiently. Since the ARM AXI bus in AMD APUs, including the XB1, supports NoC, the IOMMU host-guest bridge between the ARM bus and the x86/GPU bus should be more efficient when both support NoC, which should benefit dGPUs.
 

Panajev2001a

GAF's Pleasant Genius
You really think Polaris is a new build from the ground up? Dude, it's a rebrand of GCN with some new perks added on top... in other words, it's GCN 2/1.3 with a new name slapped on top... Think about it. Maybe they can fool the mainstream into thinking Polaris is some new cutting-edge technology, but those of us here should know better.

Rebranding a new GCN architecture revision does not necessarily imply the new product is not cutting-edge technology. GCN 4 could have a copious amount of important changes; so what if they also change the architecture name, if they believe that the changes they made to GCN are big enough?
 
How do you know that, and what exactly do you mean by GPU compute development boards?

GCN 1.0 and previous had an FP64 rate of 1/4. With GCN 1.1 it went down to 1/8, but gave an 8GB frame buffer in exchange. With Fury it went down to 1/16. The guts of double-precision compute performance are being ripped out of the consumer line, compared to the S line of HPC GPUs, which are all 1/2 with 12GB frame buffers minimum.
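
To put those ratios in perspective: peak FP64 throughput is just peak FP32 throughput multiplied by the rate. A quick illustrative sketch in C (the 5.6 TFLOPS figure is only a placeholder, not a quoted spec for any card):

Code:
/* Illustration only: peak FP64 = peak FP32 * DP rate.
 * The FP32 number below is a placeholder, not a real spec. */
#include <stdio.h>

int main(void)
{
    const double fp32_tflops = 5.6;  /* example FP32 peak */
    const double rates[] = { 1.0/2, 1.0/4, 1.0/8, 1.0/16 };

    for (int i = 0; i < (int)(sizeof rates / sizeof rates[0]); ++i)
        printf("1/%-2.0f rate -> %.2f TFLOPS FP64\n",
               1.0 / rates[i], fp32_tflops * rates[i]);
    return 0;
}

Each notch down in the rate halves the DP throughput for the same FP32 peak, which is why the 1/4 -> 1/8 -> 1/16 slide matters so much for compute users.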
 

Adry9

Member
Wccftech gonna Wccftech. Nonsense.

Well, nonsense to regular thinking people. It could be real if MS is insane and wants to damage the Xbox brand even further.
Why? I don't think they'll introduce a version that runs games better, but they could use it for a faster OS or a smaller, cooler console.
 

Locuza

Member
GCN 1.0 and previous had an FP64 rate of 1/4. With GCN 1.1 it went down to 1/8, but gave an 8GB frame buffer in exchange. With Fury it went down to 1/16. The guts of double-precision compute performance are being ripped out of the consumer line, compared to the S line of HPC GPUs, which are all 1/2 with 12GB frame buffers minimum.
But Fiji's maximum rate is 1/16; it didn't get cut for the consumer products.
For Polaris the DP rate is probably going to be 1/2 for the top model.
For the consumer line we have to wait and see what AMD will do.
Maybe it's again only 1/8 like on Hawaii, or maybe they really cut it down to the minimum of 1/16.
 