
Leaked AMD roadmap schedules 14-nm bonanza for 2016

[image: AMD mobile roadmap slide (amd_roadmap_mobile.jpg)]


Apparently K12 is a much 'fatter' core compared to A57, given how AMD pack 2x K12 (14nm) vs 4x A57 (20nm) in the same SDP (God, I hate this metric).

(a cross-post from the other similar thread)



I think it's the same approach as Nvidia Denver or Apple Cyclone: bigger cores, customised but based on ARMv8.
It's interesting to note that AMD gave up on x86 in the ultra low power segment.
I'm wondering if we could see Amur in the next Nintendo handheld.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
I think it's the same approach as Nvidia Denver or Apple Cyclone: bigger cores, customised but based on ARMv8.
It's interesting to note that AMD gave up on x86 in the ultra low power segment.
I'm wondering if we could see Amur in the next Nintendo handheld.
It's perfectly logical to me : )
 

thuway

Member
The bigger question is: will this architecture make AMD's offerings comparable to Intel's on a price/performance level?
 

Nachtmaer

Member
Can someone please explain to me the benefits of building CPUs at smaller nm sizes? Thanks :)

Putting it simply, smaller transistors mean you can either pack more of them into the same area or have the same number of transistors take up less space. It also leads to lower power draw and/or higher performance.

We're slowly heading towards an age where the methods to shrink nodes become increasingly more expensive and harder to pull off. This is why companies that can't afford these bleeding-edge nodes are sticking to older ones; they're a lot cheaper, even if it means your chips end up slower and larger than they would on a newer node. At these small scales quantum mechanics is starting to become a problem too: electrons are harder to contain well enough to make clean on or off states (0 or 1). Seriously, transistors are becoming so small you can almost count the individual atoms.
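To put rough numbers on the shrink, here's a back-of-envelope sketch (my own illustration, not from any article): if features shrank ideally, transistor density would scale with the square of the linear feature size. Real nodes don't scale this cleanly, and node names are partly marketing, so treat the results as upper bounds.

```python
# Back-of-envelope: ideal transistor-density gain between process nodes.
# Node names are partly marketing, so these are upper bounds, not predictions.

def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    """Density multiplier if every feature shrank linearly with the node name."""
    return (old_nm / new_nm) ** 2

print(ideal_density_gain(28, 14))            # 4.0: the same chip in a quarter of the area
print(round(ideal_density_gain(20, 14), 2))  # 2.04: roughly double the density
```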

In reality this stuff is extremely complicated and it even boggles my mind. If you, or anyone else, wants to read up a bit more about how these things get designed and produced, read this TechReport article.

As for Zen, if AMD can come close to mainstream-platform Skylake (probably Cannonlake by then) at a good price, they'll have done well in my book. My original plan was to consider upgrading at Skylake, but I might as well wait to see what Zen does before I retire my 2500K.
 
Putting it simply, smaller transistors mean you can either pack more of them into the same area or have the same number of transistors take up less space. It also leads to lower power draw and/or higher performance.

We're slowly heading towards an age where the methods to shrink nodes become increasingly more expensive and harder to pull off. This is why companies that can't afford these bleeding-edge nodes are sticking to older ones; they're a lot cheaper, even if it means your chips end up slower and larger than they would on a newer node. At these small scales quantum mechanics is starting to become a problem too: electrons are harder to contain well enough to make clean on or off states (0 or 1). Seriously, transistors are becoming so small you can almost count the individual atoms.

In reality this stuff is extremely complicated and it even boggles my mind. If you, or anyone else, wants to read up a bit more about how these things get designed and produced, read this TechReport article.

As for Zen, if AMD can come close to mainstream-platform Skylake (probably Cannonlake by then) at a good price, they'll have done well in my book. My original plan was to consider upgrading at Skylake, but I might as well wait to see what Zen does before I retire my 2500K.

Thank you very much for your explanation :)
 

QaaQer

Member
Every time I see one of these charts from either AMD or Nvidia, I just say to myself: "don't buy... anything."

It's crazy. It's like they're advertising how out of date their tech is going to be before we even have a chance to buy it in the first place.

& I ask myself "can I do everything I want with my 5 year old 875K + 4 year old 7970 and 4 year old MB-Pro?". So far the answer has been yes. Back in the day, when I'd encode videos, games pushed hardware, and tablets/smartphones weren't a thing, I'd want some new piece of PC kit every six months or so. Anand is right, enthusiast PC stuff is dull as wet paint now.
 

LordOfChaos

Member
Some games are mandating 4 cores, and since the i3 has 4 threads that's a non-issue for it. Far Cry 4 and DA:I had no issues. Only the Pentiums were affected by that.

Source? I was under the impression that it would not be faked out by hyperthreading.
 

Nachtmaer

Member
https://www.youtube.com/watch?v=JxUPJdcChzE

They needed 4 threads, not 4 physical cores. So it can be 2 core 4 threads (i3) or 4 core 4 threads (i5).

Does this have anything to do with the way console versions are designed? I remember reading a post about this somewhere a while ago. The person mentioned that some PC ports straight up don't work on anything that has less than four cores/threads because they're designed to run off of core 3 (core 0 and 1 being reserved for the OS). I don't know if that's the reason, but it would explain why recent games require four cores/threads. Obviously four cores will give you more performance if the game takes advantage of them, but that seems like such an oversight.

I don't know an awful lot about the software side of things, which is why I'm asking.
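For what it's worth, here's a hypothetical sketch of the kind of launch check being discussed (not any actual game's code): engines that "require 4 cores" typically just count logical processors, which is why a 2C/4T i3 passes while a 2C/2T Pentium doesn't.

```python
import os

# Hypothetical launch check, illustrating the discussion above: games count
# logical processors (what most OS APIs report), not physical cores.

REQUIRED_THREADS = 4  # assumed threshold, mirroring the games mentioned

def meets_thread_requirement(logical_cpus=None):
    """True if the machine reports at least REQUIRED_THREADS logical CPUs."""
    if logical_cpus is None:
        logical_cpus = os.cpu_count() or 1  # includes Hyper-Threaded "cores"
    return logical_cpus >= REQUIRED_THREADS

print(meets_thread_requirement(4))  # i3, 2 cores x 2 threads: True
print(meets_thread_requirement(2))  # Pentium, 2 cores, no HT: False
```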
 

LordOfChaos

Member
https://www.youtube.com/watch?v=JxUPJdcChzE

They needed 4 threads, not 4 physical cores. So it can be 2 core 4 threads (i3) or 4 core 4 threads (i5).

Hmm. So a dual socket P4 with HT would boot that thing, but that Pentium Anniversary wouldn't :p

Anyways, I do hope hard minimums like this don't become a trend; just trust us to know whether we want to run it or not. They could lock it to four threads, but someone could be using a very powerful dual core that beats their quad-core requirement.
 
Could just be PR control, nobody likes to have their big reveal stolen from under them.
Makes sense. I plan on replacing my 2500K next year and I would love a competitive market. I'm really hoping AMD bring their A game and have learned from the mistakes of their current CPUs.
 
On topic:
Please please please please have IPC that rivals Sandy Bridge, that is all I ask for, AMD...
It would save the CPU market for consumers...

Oh pretty please
The proper core talk and the 'high IPC gains' make me hopeful.




None of this is true if you play games.
The FX CPUs have the same IPC as the Phenom II. I have a Phenom II, a friend of mine has an FX 8350.

We both bitch to each other about the CPUs being guttertrash for certain games.

My friend with the FX 8350 gets 30-40 fps in Dirty Bomb with megastuttering, which is the same I get with my 100 euro 3-core Phenom II from 2009.
Every game that puts most of the load on one core (still quite a lot of games) and that is CPU-demanding will run like trash on these things.

Also, in the low-end market the Pentium Anniversary and the i3 beat the pants off the AMD variants in the same price bracket.

Only thing AMD beats Intel at is integrated GPU performance, and both are still shit.

*sigh* You sound hostile. And your comments don't jibe with reality when it comes to actual, real benchmarks that are available all over the web. I'm getting smooth frame-rates on my HTPC in every game I throw at it. Battlefield 4 is pegged at 60fps/1080p/Ultra. I turn down the settings to high and the fps stays pegged at 60 in online MP with up to 64 players. This is on a $65 760K CPU coupled with a GTX 960.

Sure, my main gaming rig hooked up to a 30" Dell (3007 IPS, oldie but goodie) is a different story because I run games on that at 2560x1600. That's where the extra IPC grunt really comes into play. It also costs about $180 more for the i5-4690K CPU I put in that system, plus the motherboard was about $100 more expensive than typical AMD offerings. I love both systems; they both have their place in my household.

Heck, I built my fiancée a little HTPC/light gaming rig using an A8-7600. She mostly plays Dungeon Defenders. I set the res to 1280x800/medium quality and it looks a lot better than the same game on the 360, and runs at a stable 30fps even on the busiest maps with us playing split screen. Not bad for a $90 APU. Sure, if she continues gaming on Steam, I'll probably upgrade it later by adding in a GPU, probably a single-slot/low-profile GTX 750 Ti. All in all, not bad for a system that is actually smaller than an Xbone.

People will always want to cherry pick Crysis 3 which is extremely CPU dependent or unoptimized shite like Unity which can bring any CPU/GPU combo to its knees. But all in all AMD still produces price/performance competitive parts at or under the $100 price point which will generally provide a smooth 30-60fps in most games at 1080P when paired with a good GPU (AMD is arguably application specific competitive with the 8320 at ~$130). *Most games* of the 300+ I own & have played on Steam will play smoothly. If I hook my most expensive system up to my big screen it is still limited to 1080P/60Hz (Vizio M series advertised at 240Hz, which is marketing nonsense). It just depends on the application/need.

AMD's problem is remaining a viable business given how hard it is to squeeze profit margins out of extremely low cost chips. Switching from CMT to SMT in Zen should help them be more competitive and get those margins back on track.
 

Renekton

Member
Does this have anything to do with the way console versions are designed? I remember reading a post about this somewhere a while ago. The person mentioned that some PC ports straight up don't work on anything that has less than four cores/threads because they're designed to run off of core 3 (core 0 and 1 being reserved for the OS). I don't know if that's the reason, but it would explain why recent games require four cores/threads. Obviously four cores will give you more performance if the game takes advantage of them, but that seems like such an oversight.

I don't know an awful lot about the software side of things, which is why I'm asking.
I think the devs checked the various HW surveys, saw a good enough installed base of 4-core/thread users, and decided to use it as the base spec.
 

Thanks for the link...I always find these sorts of discussions interesting.

Makes sense. I plan on replacing my 2500K next year and I would love a competitive market. I'm really hoping AMD bring their A game and learned from the mistakes on their current CPUs.

I'm in the same boat. I'll either be upgrading my 4690K to Zen or Skylake. I'm not sure what to expect from Skylake, as the rumor mill right now is just as muddied for it as it is for Zen. We've been getting the tick-tock 5% increases with each new revision, so estimates have a fairly wide range from 5% to 20% depending on how optimistic people are. If we split the difference and Skylake achieves a 10% IPC improvement and Zen matches Haswell, then I think AMD is back in the game. Then it just becomes a matter of pricing/binning. If there's only a 10% difference at the top end and that filters down to all the price points, I'd be pretty tempted to upgrade to Zen, assuming there's a tad bit of cost savings to go along with it. If there's a 20%+ IPC difference then Skylake may be in my future, assuming they aren't priced crazy.
 
So what does everyone think AMD have to deliver to become relevant to gamers and enthusiasts again?

For me it would be:

$125
4C/8T
Ivy Bridge level IPC
Unlocked multiplier and easy 4GHz+ clocks

I think that could be a real interesting choice for mainstream "bang for buck" gaming rigs and would be an easy recommendation to anyone that can't afford an i5. You'd take a ~20% hit to IPC vs. a Skylake i3 but being unlocked would make it easy to make that difference up and then you're getting an extra 2 cores and 4 threads at the same price point. If it means offering a mainstream chip without integrated graphics (like their current Athlon range) then so be it.

At the high end, an 8C/16T option for ~$200 (unlocked of course) could offer up a good alternative to an i5. DX12 will be common by that point, so an extra 12 threads could be a decent alternative to an extra 20% IPC. As long as single-threaded performance is good enough for the back catalogue of DX11 games at 60fps (and 4GHz+ with Ivy Bridge level IPC absolutely is) then you'd have a really compelling option for my money.

I just hope we get this level of competition, as Intel have been allowed to become complacent over the last few years. We really should have an unlocked ~$100 i3 by now, and i5/i7 chips should have moved to 6 cores with Haswell, but without any competition there's been no need to move things forward.


Budget CPUs have really fallen away in recent times. Back in the Core 2 days you could buy a ~$80 E2180, easily OC it to 3GHz and outperform a stock $300+ E6600. There just aren't chips that offer that anymore. The Pentium Anniversary was nice, but 4 threads has become basically a minimum spec these days, so it really needed Hyper-Threading to let it fly. Even if Zen can't beat Intel at the high end (and I don't expect it to), there's still a big gap in the market for something like a modern E2180.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
So what does everyone think AMD have to deliver to become relevant to gamers and enthusiasts again?

For me it would be:

$125
4C/8T
Ivy Bridge level IPC
Unlocked multiplier and easy 4GHz+ clocks
I think you're a tad optimistic. Intel's current similar offerings run north of $300. AMD pulling that at that price would be a small revolution, and I'd personally buy a handful of those to upgrade all my home stations. Alas.
 

LordOfChaos

Member
$125
4C/8T
Ivy Bridge level IPC
Unlocked multiplier and easy 4GHz+ clocks
.

Ivy Bridge level IPC is essentially saying get to within ~10% (5-15% depending on the workload) of Haswell IPC... and then offer that with four cores, for 125 dollars.

I dunno. I want to be optimistic, but that seems like a hard jump.

AMD is quite happy to throw cores at things, but that's with their Bulldozer "module" architecture with a shared floating point unit per two cores. Four cores with the new architecture would be more costly.
 

SapientWolf

Trucker Sexologist
What are these odd 'fps' metrics that are being thrown around? Are they rendering films or playing games?
[image: frametime chart showing time spent beyond 50 ms (c3-50ms.gif)]
So what does that mean in terms of perceived fluidity? That could all be from one frame of stutter. It is even less clear with gsync/freesync, which is designed to smooth out uneven frametimes.

If games like Ryse are representative of future game CPU performance then all you will need is 8 halfway decent cores.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Holy crap

14 nm

We are getting close to the atomic limit here guys. What's their next step afterwards?
Down to 10nm they can carry on with the existing FinFET tech. Beyond that come some exotic materials like germanium and InGaAs, offering higher electron mobility. Then comes the graphene hypothesis, Cthulhu rituals, etc.
 

Ty4on

Member
So what does that mean in terms of perceived fluidity? That could all be from one frame of stutter. It is even less clear with gsync/freesync, which is designed to smooth out uneven frametimes.

If games like Ryse are representative of future game CPU performance then all you will need is 8 halfway decent cores.

Frametime is the time it takes to render a frame. 16.67 ms is what every frame would take at a perfect 60fps, 33.33 ms at 30fps and 50 ms at 20fps. That slide shows how many milliseconds during the test run were spent on frames over 50 ms.

It is important for perceived fluidity because those bad frames hurt a lot more than fast frames help (fast frames are often faster than the screen can display anyway). With FPS everything is averaged out, so in the worst case you could have frametimes alternating between 8 and 33 ms, where the game would look and feel like 30fps while the FPS counter still shows a much higher average.
While G-Sync helps with uneven framerates, it doesn't fix this issue.
I recommend using Fraps to record frametimes (it has an option for it and will spit out a spreadsheet) and then using a program like FRAFS Bench Viewer to graph them. If you find a game with an uneven framerate, you get to experience what it feels like. Try running a game at around 45fps with vsync on and off, and play around with framelimiters. Source games like TF2 are easy to run and include a framelimiter in the console, so you can see how alternating 16-33 ms frames (you can get this at 40-45fps with vsync turned on) compare to a solid 30 or 60fps.
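Here's a small sketch of that worst case (my own numbers for illustration): with frametimes alternating between 8 ms and 33 ms, the averaged counter reads close to 49fps, yet every other frame arrives at 30fps pace, which is what you actually feel.

```python
# Why an averaged FPS counter hides stutter: alternate fast (8 ms) and
# slow (33 ms) frames, then compare the average to the worst frametime.

frametimes_ms = [8, 33] * 30  # 60 frames of alternating fast/slow

avg_ms = sum(frametimes_ms) / len(frametimes_ms)  # 20.5 ms
avg_fps = 1000 / avg_ms                           # ~48.8 "fps" on a counter
worst_ms = max(frametimes_ms)                     # 33 ms, i.e. 30fps pace

print(f"counter shows: {avg_fps:.1f} fps")
print(f"worst frame:   {worst_ms} ms")
```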

Ryse is just one game, and one that happens to be very straightforward and predictable. A lot of games are very unpredictable, and if one thread suddenly requires a lot of processing (say you blew up a lot of barrels or hit space debris), it's better to have the single-threaded performance than to lack it. A lot of old games will also never be updated and need beefy single-threaded performance for the best experience. mkenyon did a big experiment showing how seemingly easy-to-run games like Dota and Tribes Ascend really benefited from an overclocked Intel CPU at 120Hz.
 

LordOfChaos

Member
Holy crap

14 nm

We are getting close to the atomic limit here guys. What's their next step afterwards?

Intel was already there, just in case you didn't know, plus their 14nm puts everyone else's to shame.
http://www.extremetech.com/wp-content/uploads/2014/12/Cell-SizeComparison.png

We'll still have a few shrinks after 14nm on silicon before hitting those hard limits though, I think. 7nm and 5nm are being worked on for after 10nm. After that, yeah, everyone will be scrambling for materials other than silicon, or other esoteric methods.

http://www.cnet.com/news/end-of-moores-law-its-not-just-about-physics/
 
What about the darn graphics cards?

I mean, CPU-wise we've hit the land of massively diminished returns, and I doubt I'll be upgrading my CPU until they bring out some graphene-based chips... 10% gains on CPU speeds is just ridiculous, especially since the older chips overclocked so well.

I truly believed that the only solution to this no-gains problem would be processors with ever increasing numbers of cores, but the software simply doesn't take advantage of multi-core CPU abilities yet. I figured we'd be seeing everything multithreaded and CPUs with 32 and 64 cores by now, but I guess that's just a stupid idea when 4 cores can easily do any of the jobs currently needed.
 


kharma45

Member
Confirmed 40% IPC increase vs. Excavator.

Focus on FX CPUs first (transistors spent on cores, not integrated graphics, so a mainstream 8C/16T CPU looks likely) and APUs second. There's some real potential for serious competition now if AMD can comfortably hit mid-3GHz clocks at stock (and 4GHz+ overclocked).

http://www.anandtech.com/show/9231/amds-20162017-x86-roadmap-zen-is-in

That's a fucking huge jump. I would love it if AMD became a rational CPU choice again. Wonder where this would put them vs Intel. IB levels of IPC?
 

LordOfChaos

Member
That's a fucking huge jump. I would love it if AMD became a rational CPU choice again. Wonder where this would put them vs Intel. IB levels of IPC?

Could be above IB and close to Haswell. Excavator is 20% faster than Piledriver, which puts it on par with Nehalem. If Zen is 40% faster than Excavator/Nehalem, that's around Haswell-E single-core performance.

That makes me almost giddy. Haswell like IPC would put it in spitting range of Skylake.
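The compounding above works out like this (a quick sketch using the claimed multipliers from the thread, which are rumors, not measurements):

```python
# Compound the claimed per-generation IPC gains, normalized to Piledriver = 1.0.
# Both multipliers are claims from the thread, not measured results.

piledriver = 1.00
excavator = piledriver * 1.20  # "Excavator is 20% faster than Piledriver"
zen = excavator * 1.40         # "40% IPC increase vs. Excavator"

print(f"Zen vs Piledriver: {zen:.2f}x")  # 1.68x, if both claims hold
```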
 

kharma45

Member
Could be above IB and close to Haswell. Excavator is 20% faster than Piledriver, which puts it on par with Nehalem. If Zen is 40% faster than Excavator/Nehalem, that's around Haswell-E single-core performance.

That makes me almost giddy. Haswell like IPC would put it in spitting range of Skylake.

I'll try to reserve judgement though. AMD promised the world with Bulldozer and look what happened there.

The thing is, by the time this launches Intel could potentially be on Cannonlake, if they get Skylake out this year.
 

Blanquito

Member

Seems like this should provide a pretty sizable leap in perf/watt.
 

AU Tiger

Member
Is it just me, or do AMD chips always look pretty good on paper every year a thread like this rolls around, yet always end up getting their ass torn up in real-world benchmarks?

I admit that I don't really pay too much attention to CPU benchmarks. I do video and audio rendering/encoding and have always just grabbed the fastest unlocked non extreme i7 at the time and just assumed that it would obliterate anything AMD had to offer me.

Is there any indication of this changing with this new leaked stuff or is it basically just going to continue to be the i7 cleaning house and the "Fast" AMD chips trading blows with the middle of the road i5?

I just want AMD to give me a chip that doesn't have any focus on gaming but rather something like a powerful multithreading Xeon 6 or 8 core equivalent that I can run a couple of virtual servers with or do CPU intense rendering with that doesn't cost an arm and a leg like the consumer octa core Intels.
 

dr_rus

Member
Seems like this should provide a pretty sizable leap in perf/watt.

Well, they've promised 2X. That's probably down to the process switch almost exclusively, as I don't expect many changes to the GCN architecture, unfortunately.

Is it just me, or do AMD chips always look pretty good on paper every year a thread like this rolls around, yet always end up getting their ass torn up in real-world benchmarks?

I admit that I don't really pay too much attention to CPU benchmarks. I do video and audio rendering/encoding and have always just grabbed the fastest unlocked non extreme i7 at the time and just assumed that it would obliterate anything AMD had to offer me.

Is there any indication of this changing with this new leaked stuff or is it basically just going to continue to be the i7 cleaning house and the "Fast" AMD chips trading blows with the middle of the road i5?

I just want AMD to give me a chip that doesn't have any focus on gaming but rather something like a powerful multithreading Xeon 6 or 8 core equivalent that I can run a couple of virtual servers with or do CPU intense rendering with that doesn't cost an arm and a leg like the consumer octa core Intels.

Most of us hope that the Zen core, launching next year, will be able to compete with Intel's i7s.
 
I just want AMD to give me a chip that doesn't have any focus on gaming but rather something like a powerful multithreading Xeon 6 or 8 core equivalent that I can run a couple of virtual servers with or do CPU intense rendering with that doesn't cost an arm and a leg like the consumer octa core Intels.
For the most part the FX series was just this.
 

DonMigs85

Member
Exciting times indeed, but they're gonna go up against Cannonlake and Nvidia's Pascal which is truly built with DX12 in mind. GCN hasn't evolved all that much
 
Please let it be inside Nintendo NX.

NX development is probably too far along for any 16/14nm tech to be used. It's far more likely to still be based on old 28nm tech; it's cheap, reliable, and the risk of unforeseen problems cropping up when going into mass production is basically zero. And, yep, I don't think this sounds particularly exciting either... :-/
 