
Leaked AMD roadmap schedules 14-nm bonanza for 2016

Irobot82

Member
As for AMD's near-term plans: the dual-Fiji card, codenamed Gemini (widely expected to launch as the R9 Fury X2), has had its release delayed into 2016, to be aligned with the VR headset launches.

Anandtech has a big article up on the problematic state of multi-GPU tech: SLI, CrossFire and Alternate Frame Rendering.

http://www.anandtech.com/show/9874/amd-dual-fiji-gemini-video-card-delayed-to-2016

This makes it sound like it'll be one core per "eye" in VR. Is that possible?
 

Nikodemos

Member
Yes, rumour has it that AMD switched from GloFo/TSMC to GloFo/Samsung due to both foundries using the same process. And (even more unsubstantiated rumours) AMD was the client which demanded that GloFo licence Samsung's process after the terrible fuckup which was GloFo 20 nm.
 

Irobot82

Member
Yes. Using split frame rendering.

However, for most practical purposes it also means using dx12, unless LiquidVR has some capability to do that on dx11.

At this point, what VR-only or VR-centric games are in development? I can't imagine they wouldn't be DX12 based. That sounds really cool though; I bet it really helps reduce the latency.
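As a rough sketch of the idea (all names below are made up for illustration; the real LiquidVR / DX12 multi-adapter APIs look nothing like this), the difference between per-eye split rendering and alternate frame rendering comes down to how work is handed to the two GPUs:

```python
# Toy model contrasting two multi-GPU strategies for VR.
# Purely illustrative; real APIs work very differently.

def split_frame_assignments(frame):
    """Per-eye rendering: both GPUs work on the SAME frame."""
    return [("gpu0", frame, "left_eye"), ("gpu1", frame, "right_eye")]

def alternate_frame_assignments(frame):
    """AFR: each GPU owns a whole frame, alternating."""
    gpu = "gpu0" if frame % 2 == 0 else "gpu1"
    return [(gpu, frame, "both_eyes")]

# With per-eye rendering, frame N is done as soon as both GPUs finish
# frame N -- no extra frame of queueing, which is why the approach is
# attractive for latency-sensitive VR.
```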

Yes, rumour has it that AMD switched from GloFo/TSMC to GloFo/Samsung due to both foundries using the same process. And (even more unsubstantiated rumours) AMD was the client which demanded that GloFo licence Samsung's process after the terrible fuckup which was GloFo 20 nm.

I read an article somewhere that said AMD was planning to have their 300 series on 20nm until GloFo messed it all up. I don't think AMD has recovered very well from that.
 
What's interesting is what using Samsung/GF will mean in comparison to Nvidia, since they've already stated they'll use TSMC. The Samsung node is supposed to be smaller, but there might be many other factors to consider, the biggest being time to market. Theoretically, if other things were equal, AMD might have a serious advantage in chip manufacturing tech. I think Apple is the only company using both TSMC and Samsung for their chips, but comparing those SoCs might not be relevant to high-power GPUs.
 

finalflame

Gold Member
Anyone buying a GPU made on 14nm and HBM2 is throwing away money. Wait for 5nm and HBM5.
Terrible comparison, as the jump from 28nm -> 14/16nm will be the most drastic we've seen in a long while (we've been on 28nm since the 6xx series). Not to mention the proven gains of 3D memory, plus the greater capacity of HBM2.

Buying current top-tier cards that close to a major leap is, indeed, throwing away money.
 

dr_rus

Member
Terrible comparison as the jump from 28nm -> 14/16nm will be the most drastic we've been in a long while (been on 28nm since 6xx). Not to mention the proven gains of 3D Memory with the greater capacity of HBM2.

Buying current top-tier cards that close to a major leap is, indeed, throwing away money.

Yeah, no.

The jump from 28nm to 16/14nm is the same kind of jump as any previous node switch. The only difference is that this one is overdue by a year or so.
 

AmyS

Member
Not really new beyond yesterday, given all the links I provided above, but:

http://wccftech.com/amd-greenland-14nm-production-q2-2016/

[UPDATE#1 10:32 AM ET December 22 2015] South Korea’s Electronic Times and Reuters both have picked up this story since yesterday and it no longer appears to be strictly a rumor.

------------------------------------------------------------
[UPDATE#2 10:57 AM ET December 22 2015]
We reached out to AMD and received the following comment.

As a matter of policy we don’t comment on rumor or speculation. At this time we have not disclosed the foundry partner for our forthcoming FinFET products. In early November, we announced with GLOBALFOUNDRIES we have taped out multiple products using GLOBALFOUNDRIES’ 14nm Low Power Plus (14LPP) process technology and we are currently conducting validation work on 14LPP production samples.

Seemingly there's not much reason to doubt the rumor; nothing about it fails to add up.

ExtremeTech posted the report also: http://www.extremetech.com/gaming/219859-samsung-globalfoundries-to-fab-next-gen-amd-gpus-apus
 

Nikodemos

Member
I read an article somewhere that said AMD was planning on having their 300 series on 20nm until GloFo messed it all up, I don't think AMD has recovered very well from that.
They haven't recovered at all. Their arch is extremely process-dependent (a large part of why the Construction cores are so shit is that they were designed for an aborted 22 nm SOI process; well, apart from AMD gimping the FPU).
 

finalflame

Gold Member
Yeah, no.

The jump from 28nm to 16/14 is the exact same jump as any previous node switch. The only difference is that this jump is overdue by a year or so.

I'm familiar with how die-shrink works. Overdue for a year + 3D Memory = well above-average performance gains.

A 980 Ti will, quite simply, be put to shame by Pascal, far more so than Pascal by its incremental 16nm successor.

AMD not doing well is bad for the industry.

I say this as an Intel / Nvidia guy.

Please AMD, rock everyone!
I've been hoping for AMD to release a competitive product for a while now, and was excited by the FuryX until the official specs were announced. Now I just have no more faith in AMD, as they let the market slip further and further away from their grasp. They're slowly sinking into irrelevance and it's getting exhausting trying to hope they'll take market share back when they make terribly calculated moves like the FuryX.
 

AmyS

Member
They haven't recovered at all. Their arch is extremely process-dependent (a large part of why the Construction cores are so shit is that they were designed for an aborted 22 nm SOI process; well, apart from AMD gimping the FPU).

Man, those Construction cores certainly were not Devastating to the competition.

Much higher hopes for the Zendacons :p
 

Pjsprojects

Member
Well, I've been very happy with my AMD CPU and GPU setup. All I wanted was to play the latest games at max settings and high frame rates at 1080p, and that's what it does.
Even Star Citizen plays well; walking round the hangar or other nicely detailed areas is smooth enough, so why spend more?

Sure, if you want to play at a higher res, then the extra power would warrant the higher-priced Intel stuff.
 

dr_rus

Member
I'm familiar with how die-shrink works. Overdue for a year + 3D Memory = well above-average performance gains.
You've got it wrong. No matter how often you say "3D memory", it won't change the fact that what we get with HBM2 still won't be enough to offset the rise in processing power on the new node relative to the available memory bandwidth. Thanks to HBM2 we'll have a fairly regular transition to the new node; in its absence we'd get lower-than-usual gains, because GDDR5 is already at the limit of its capabilities on 28nm products.

A 980 Ti will, quite simply, be put to shame by Pascal, far more so than Pascal by its incremental 16nm successor.
Pascal is an architecture; it's not even a chip, much less a product you'll be able to buy. When you say that, do you mean a $650 Pascal product of 2016 will put the 980 Ti to shame? And how much is "shame", exactly?
For reference: the 980 Ti is +50% over the 780 Ti on average, the original Titan is ~35% faster than the GTX 680, and the GTX 680 is ~35% faster than the GTX 580 (that last one being a node transition).
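Taking those quoted averages at face value (rough forum figures, not benchmarks), the chain compounds like this:

```python
# Compounding the per-generation averages quoted above.
# These are the post's rough figures, not measured data.
gtx680_over_gtx580 = 1.35    # ~35%, a node transition (40nm -> 28nm)
titan_over_gtx680 = 1.35     # ~35%
gtx980ti_over_780ti = 1.50   # +50%

total = gtx680_over_gtx580 * titan_over_gtx680 * gtx980ti_over_780ti
print(round(total, 2))  # roughly 2.73x across the whole chain
```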

So is AMD 14nm and Nvidia 16nm?
These are marketing node names. If you look at the image above you'll see that TSMC's 16FF+ and GloFo's 14LPE are very close in their physical specs.
 

tuxfool

Banned
So is AMD 14nm and Nvidia 16nm?

Doesn't really make much of a difference. Both technologies still use 20nm-class feature sizes for the metal layers. What matters more is the power/performance profile of each fabrication process.

e: beaten above. The size is more of a marketing description: somewhat accurate, but irrelevant in the way people are thinking.
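Back-of-the-envelope on why the marketing names mislead: to a first order, density scales with the square of the linear feature size, so with ~20nm-class metal layers the realistic area gain over 28nm is closer to the 28-to-20 figure than the "14nm" label suggests. A tiny sketch (idealized scaling only; real nodes never shrink this cleanly):

```python
# Idealized area scaling: chip area shrinks with the square of the
# linear feature size. First-order approximation only.
def area_scale(old_nm, new_nm):
    return (new_nm / old_nm) ** 2

print(round(area_scale(28, 20), 2))  # 0.51 -> about half the area
print(round(area_scale(28, 14), 2))  # 0.25 -> what the name implies
```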
 

AmyS

Member
I'm familiar with how die-shrink works. Overdue for a year + 3D Memory = well above-average performance gains.

A 980 Ti will, quite simply, be put to shame by Pascal, far more so than Pascal by its incremental 16nm successor.

As already pointed out, Pascal is neither a card nor a GPU, it's a GPU architecture that will span a wide range of GPUs and cards, just like Maxwell, Kepler and Fermi.

The successor to the Pascal architecture is Volta, and it is meant to be on 10nm according to Bill Dally, Nvidia's Chief Scientist and senior VP of NVIDIA Research.

https://youtu.be/IIzjMr4f-8U?t=16m7s
 