
[Speculation] AMD & NVIDIA may forgo 10nm GPUs entirely

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
PS5 gonna be a BEAST :hurr I don't think MS will hold to their "indefinite forwards compatibility, end of generations" talk by then, which is good news for better, more advanced games.

I know people scoff at that sort of thing, but even without graphene technology, the technological arms race can still continue for a little while
 

TAS

Member
Probably a dumb question...but what happens if/when they reach 1nm? Can they go below that or is that the limit?
 

DieH@rd

Banned
Probably a dumb question...but what happens if/when they reach 1nm? Can they go below that or is that the limit?

They won't go that small. Current transistors and pathways are already only a "few" atoms wide.

They will start stacking GPUs one on another.
 
Probably a dumb question...but what happens if/when they reach 1nm? Can they go below that or is that the limit?

The basic problem is that at some point these names stopped representing actual wire width and started representing "the wire width that would work as well as our chip does if it didn't include some of the extra new tech that we decided counts as process improvement". Ignoring that, there is some sort of hard limit, but no idea why it would be precisely 1 nanometer.
 

riflen

Member
Probably a dumb question...but what happens if/when they reach 1nm? Can they go below that or is that the limit?

No one knows at the moment. Some have said that 7nm is as far as silicon can take us. The main problem is not really that new techniques and materials can't be developed; it's more that they will be cost-prohibitive.
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
Probably a dumb question...but what happens if/when they reach 1nm? Can they go below that or is that the limit?

Without some extra technological improvement like GPU stacking, reaching the limits of nanometer miniaturization would simply bring GPU and CPU progress to a standstill.

Something that can't be allowed to happen.

Unless we find a way to extend Moore's law with a new chip material like graphene, silicon will inevitably tap out.
 

Zedox

Member
PS5 gonna be a BEAST :hurr I don't think MS will hold to their "indefinite forwards compatibility, end of generations" talk by then, which is good news for better, more advanced games.

The fuck. Ok, PS5 will be powerful, no doubt, as will the Xbox Scorpio 2... sooo ok. And Microsoft never stated anything related to indefinite forwards compatibility or the end of generations... also, it's x86 continuity that provides the compatibility.
 

Eylos

Banned
Without some extra technological improvement like GPU stacking, reaching the limits of nanometer miniaturization would simply bring GPU and CPU progress to a standstill.

Something that can't be allowed to happen.

Unless we find a way to extend Moore's law with a new chip material like graphene, silicon will inevitably tap out.

So in 10 years or more technology will hit a wall? That sounds interesting, and bad I imagine, if it takes a long time to discover something new and accessible.
 
And Microsoft never stated anything related to indefinite forwards compatibility nor end of generations

Yes they did.

... For us, we think the future is without console generations; we think that the ability to build a library, a community, to be able to iterate with the hardware -- we're making a pretty big bet on that with Project Scorpio. We're basically saying, "This isn't a new generation; everything you have continues forward and it works." We think of this as a family of devices.
 

rrs

Member
Probably a dumb question...but what happens if/when they reach 1nm? Can they go below that or is that the limit?

5nm is the limit, and even then it may be skipped in favor of cheaper 7nm silicon rather than 5nm made out of exotic materials with very expensive fabs.
 

Bsigg12

Member

That sounds just like the Pro though. It's a family of Xbox One devices so everything works across any of those devices. They could easily have a hard cutoff with the console after the Scorpio where those games don't work for older systems but you can continue to play everything you already own.
 
That sounds just like the Pro though. It's a family of Xbox One devices so everything works across any of those devices. They could easily have a hard cutoff with the console after the Scorpio where those games don't work for older systems but you can continue to play everything you already own.

"The future is without generations" is an odd statement to make specifically about... one generation of consoles. It's the whole Xbox brand from here onwards. Phil has been saying he sees the future of consoles as more like PCs for basically the entire year, and the concept of hard cutoffs and clean refreshes doesn't work here (unless you're specifically talking about a software limitation and not a hardware one).
 

AmyS

Member
Hopefully next-gen consoles will use HBM3 or Hybrid Memory Cube.

Going to need one or the other to get the bandwidth for a combination of native 4K / 2160p and a significant leap in graphics / lighting. Not to mention next-gen VR.
 

hesido

Member
I'm excited. A 7nm node ought to be enough for everybody - until they sort out that quantum computing. Then all our games will be developed for streaming on the cloud. That quantum computer should run Crysis real fast.

As for mobile, fix the batteries first. They are really lagging behind the rest of the industry and aren't even completely safe.
 

99Luffy

Banned
So in 10 years or more technology will hit a wall? That sounds interesting, and bad I imagine, if it takes a long time to discover something new and accessible.
There are new technologies on the horizon.

Within five years, Holt said, Intel will have to perfect new techniques – perhaps a technology known as "tunneling transistors," or another called "spintronics" – that will change the basic mechanics of how Intel manufactures its chips. Most of the research will take place at Ronler Acres, where the chipmaker crafts each new generation of microprocessor.

At this point, though, no one knows which new technologies will win out – or what it will take to get there.

"We're going to see major transitions," Holt told the conference. "The new technology will be fundamentally different."
https://www.technologyreview.com/s/...-to-sacrifice-speed-gains-for-energy-savings/
 

Chaostar

Member
Does this mean stuff coming in late 2017 with 10nm might be better off delaying to the next year?


;P
 
Hi guys. I work in semiconductors so let me see if I can give a simple explanation.

Power consumption in semiconductors is the sum of static and dynamic power consumption. Static power consumption is fixed in the sense that it relates to the power consumed simply by the chip being on. Not much we can do here, but it does go down proportionally because the supply voltage is scaling as well.

Dynamic power is defined as P = C * Vdd^2 * f, where C is capacitance, Vdd is the supply voltage, and f is the clock frequency. When you see a number like 7nm, this refers to the MINIMUM length of the transistor. Why does this matter? It turns out the capacitance of a transistor (more specifically the gate capacitance in a field effect transistor, since it tends to dominate) is proportional to Cox*W*L. Cox is the oxide capacitance (SiO2) and depends on the oxide thickness for the process. W and L refer to the transistor dimensions. So if L goes down with a process shrink, so does capacitance. And if we lower C, we can increase the clock frequency f without paying a dynamic power penalty.

Now the negatives of a process shrink. The supply rail voltage does not really scale well with a process shrink. Also, at smaller sizes the transistor behaves more linearly instead of quadratically. Other non-idealities such as velocity saturation and subthreshold leakage get worse, which of course can make memory cells less reliable. So that's what a shrink does. Of course, since the minimum feature length of the transistor gets smaller, you can also fit more die per wafer. But you have to be careful, because if the defect density (the number of defective die per wafer) increases as well, you really gain nothing.
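
To make that concrete, here is a quick back-of-the-envelope sketch in Python of the P = C * Vdd^2 * f relation described above; every number in it (capacitance, supply voltage, clock, the 30% capacitance reduction) is an illustrative placeholder rather than a figure for any real process.

```python
# Back-of-the-envelope sketch of the dynamic power relation described above:
# P_dynamic = C * Vdd^2 * f. All numbers below are illustrative placeholders,
# not figures for any real process node.

def dynamic_power(c_farads, vdd_volts, f_hertz):
    """Switching power of a single node: P = C * Vdd^2 * f."""
    return c_farads * vdd_volts**2 * f_hertz

# Hypothetical "old" node: 2 fF of switched capacitance, 0.9 V supply, 1.5 GHz.
p_old = dynamic_power(2e-15, 0.9, 1.5e9)

# Hypothetical shrink: capacitance scales with gate length L, so assume C drops
# by ~30% while Vdd and the clock stay the same.
p_new = dynamic_power(2e-15 * 0.7, 0.9, 1.5e9)

print(f"old node: {p_old * 1e6:.2f} uW per switching node")
print(f"new node: {p_new * 1e6:.2f} uW per switching node")

# Or spend the saving on clock speed: at equal power, f_new = f_old * C_old / C_new.
print(f"clock at equal power: {1.5e9 * (1 / 0.7) / 1e9:.2f} GHz")
```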
 

tuxfool

Banned
Hi guys. I work in semiconductors so let me see if I can give a simple explanation.

Power consumption in semiconductors is the sum of static and dynamic power consumption. Static power consumption is fixed in the sense that it relates to the power consumed simply by the chip being on. Not much we can do here, but it does go down proportionally because the supply voltage is scaling as well.

Dynamic power is defined as P = C * Vdd^2 * f, where C is capacitance, Vdd is the supply voltage, and f is the clock frequency. When you see a number like 7nm, this refers to the MINIMUM length of the transistor. Why does this matter? It turns out the capacitance of a transistor (more specifically the gate capacitance in a field effect transistor, since it tends to dominate) is proportional to Cox*W*L. Cox is the oxide capacitance (SiO2) and depends on the oxide thickness for the process. W and L refer to the transistor dimensions. So if L goes down with a process shrink, so does capacitance. And if we lower C, we can increase the clock frequency f without paying a dynamic power penalty.

Now the negatives of a process shrink. The supply rail voltage does not really scale well with a process shrink. Also, at smaller sizes the transistor behaves more linearly instead of quadratically. Other non-idealities such as velocity saturation and subthreshold leakage get worse, which of course can make memory cells less reliable. So that's what a shrink does. Of course, since the minimum feature length of the transistor gets smaller, you can also fit more die per wafer. But you have to be careful, because if the defect density (the number of defective die per wafer) increases as well, you really gain nothing.

To add to this: on FinFET, while L matters, the gate capacitance should also be a lot higher than in planar methods on account of the greater surface area provided by the various edges of W on a fin. If they decide to shrink a node and add more fins they may get a higher C. It should also be noted that a fin should reduce the amount of sub-threshold leakage, along with a change in some yield parameters from dopant variability to fin dimension variability.

IIRC Intel uses 3 fins but everybody else uses 2 fins.

One thing I'm curious about though is why do most estimates for active power just use P ~ V^2 * f, if C is so dominant?
 

AmyS

Member
Intel's gonna save us! (Just look at their r&d, it has to pay off)
http://techreport.com/news/28033/report-amd-r-d-spending-falls-to-near-10-year-low

[image: T4uE0v2.jpg (R&D spending chart)]


Whoa!
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
That sounds just like the Pro though. It's a family of Xbox One devices so everything works across any of those devices. They could easily have a hard cutoff with the console after the Scorpio where those games don't work for older systems but you can continue to play everything you already own.

The difference is that Sony are talking about PS4 and Pro in a vacuum. They are both PlayStation 4s, they both play the same titles, and the Pro will always be locked to PS4 in every circumstance.

But when push comes to shove, their next numbered unit will not be strictly mandated to work in this way, and developers will have the choice to activate forwards compatibility in a similar way to general types of cross-gen development (since IMO PS5 will be BC), and to work on completely new games fully taking advantage of the PS5 without having to worry about the PS4/Pro platforms.

House has said that they still fully support generational cycles in regards to hardware and do not think they will compromise that anytime soon. Cerny said the same during the PS meeting, and even a few days ago, when he was explaining the concept behind the Pro in more detail, they were doubling down on a traditional PS5 with a huge jump in power and hardware fidelity, as Eurogamer noted.

Phil, on the other hand, has been talking about the future and a whole range of devices being forwards compatible in a line, which doesn't work when you take into account the XB1 and the power differential they have started this process with.

What they are proposing would only even work development-wise if they were going to flood the market with marginally stronger/weaker devices that were only so different that games would run or look similar outside of performance and resolution increases.

That approach is something I don't see being feasible either, but it's the only thing I can see where it would not fuck over developers.
 

AmyS

Member
But when push comes to shove, their next numbered unit will not be strictly mandated to work in this way, and developers will have the choice to activate forwards compatibility in a similar way to general types of cross-gen development (since IMO PS5 will be BC), and to work on completely new games fully taking advantage of the PS5 without having to worry about the PS4/Pro platforms.

House has said that they still fully support generational cycles in regards to hardware and do not think they will compromise that anytime soon. Cerny said the same during the PS meeting, and even a few days ago, when he was explaining the concept behind the Pro in more detail, they were doubling down on a traditional PS5 with a huge jump in power and hardware fidelity, as Eurogamer noted.

I need to read that article again -- I must have skimmed over the PS5 part
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
I need to read that article again -- I must have skimmed over the PS5 part

They didn't say PS5 explicitly, but Cerny said that traditional generations for them are still a huge priority and something they will continue with, so I would assume that they will separate their numbered units (PS4, PS5, etc.) from their "Pro" line of marginal iterative versions, if they choose to continue with the "Pro" branding after PS4 Pro.
 

AmyS

Member
They didn't say PS5 explicitly, but Cerny said that traditional generations for them are still a huge priority and something they will continue with, so I would assume that they will separate their numbered units (PS4, PS5, etc.) from their "Pro" line of marginal iterative versions, if they choose to continue with the "Pro" branding after PS4 Pro.

Ah, I see.

Yes you're right, and now I remember reading what Cerny said about the importance of traditional generations.
 

Cyanity

Banned
Reading through this thread has been fascinating. It makes you wonder where we will go after 7nm nodes are old hat. Is 5nm feasible? Will we have to switch to graphene afterwards? It's exciting for sure.
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
Reading through this thread has been fascinating. It makes you wonder where we will go after 7nm nodes are old hat. Is 5nm feasible? Will we have to switch to graphene afterwards? It's exciting for sure.

Graphene is the most important next-generation shift for this kind of technology outside of quantum computing. We've been running on relative fumes with the traditional silicon structure for a while now.
 

AmyS

Member
5nm is feasible and will happen according to the big foundries; it's what comes after that that's in question. New tech will likely be needed.

But as far as what next gen consoles will use, if that was the question, I think 7nm is a safe bet, given that any reasonable expectation about launch windows is 2020 at the soonest.
 

AmyS

Member
They won't go that small. Current transistors and pathways are already only a "few" atoms wide.

They will start stacking GPUs one on another.

Can't wait for that buzzword.

There already is one: 'Skyscraper' / 'Highrise' chips.

http://news.stanford.edu/2015/12/09/n3xt-computing-structure-120915/

Stanford-led skyscraper-style chip design boosts electronic performance by factor of a thousand
In modern computer systems, processor and memory chips are laid out like single-story structures in a suburb. But suburban layouts waste time and energy. A new skyscraper-like design, based on materials more advanced than silicon, provides the next computing platform.


For decades, engineers have designed computer systems with processors and memory chips laid out like single-story structures in a suburb. Wires connect these chips like streets, carrying digital traffic between the processors that compute data and the memory chips that store it.

But suburban-style layouts create long commutes and regular traffic jams in electronic circuits, wasting time and energy.

That is why researchers from three other universities are working with Stanford engineers, including Associate Professor Subhasish Mitra and Professor H.-S. Philip Wong, to create a revolutionary new high-rise architecture for computing.

In Rebooting Computing, a special issue of the IEEE Computer journal, the team describes its new approach as Nano-Engineered Computing Systems Technology, or N3XT.

N3XT will break data bottlenecks by integrating processors and memory like floors in a skyscraper and by connecting these components with millions of “vias,” which play the role of tiny electronic elevators. The N3XT high-rise approach will move more data, much faster, using far less energy, than would be possible using low-rise circuits.

“We have assembled a group of top thinkers and advanced technologies to create a platform that can meet the computing demands of the future,” Mitra said.

Shifting electronics from a low-rise to a high-rise architecture will demand huge investments from industry – and the promise of big payoffs for making the switch.

“When you combine higher speed with lower energy use, N3XT systems outperform conventional approaches by a factor of a thousand,” Wong said.

To enable these advances, the N3XT team uses new nano-materials that allow its designs to do what can’t be done with silicon – build high-rise computer circuits.

“With N3XT the whole is indeed greater than the sum of its parts,” said co-author and Stanford electrical engineering Professor Kunle Olukotun, who is helping optimize how software and hardware interact.

New transistor and memory materials

Engineers have previously tried to stack silicon chips but with limited success, said Mohamed M. Sabry Aly, a postdoctoral research fellow at Stanford and first author of the paper.

Fabricating a silicon chip requires temperatures close to 1,800 degrees Fahrenheit, making it extremely challenging to build a silicon chip atop another without damaging the first layer. The current approach to what are called 3-D, or stacked, chips is to construct two silicon chips separately, then stack them and connect them with a few thousand wires.

But conventional, 3-D silicon chips are still prone to traffic jams and it takes a lot of energy to push data through what are a relatively few connecting wires.

The N3XT team is taking a radically different approach: building layers of processors and memory directly atop one another, connected by millions of electronic elevators that can move more data over shorter distances than traditional wire, using less energy. The N3XT approach is to immerse computation and memory storage into an electronic super-device.

The key is the use of non-silicon materials that can be fabricated at much lower temperatures than silicon, so that processors can be built on top of memory without the new layer damaging the layer below.

N3XT high-rise chips are based on carbon nanotube transistors (CNTs). Transistors are fundamental units of a computer processor, the tiny on-off switches that create digital zeroes and ones. CNTs are faster and more energy-efficient than silicon processors. Moreover, in the N3XT architecture, they can be fabricated and placed over and below other layers of memory.

Among the N3XT scholars working at this nexus of computation and memory are Christos Kozyrakis and Eric Pop of Stanford, Jeffrey Bokor and Jan Rabaey of the University of California, Berkeley, Igor Markov of the University of Michigan, and Franz Franchetti and Larry Pileggi of Carnegie Mellon University.

Team members also envision using data storage technologies that rely on materials other than silicon, which can be manufactured on top of CNTs, using low-temperature fabrication processes.

One such data storage technology is called resistive random-access memory, or RRAM. Resistance slows down electrons, creating a zero, while conductivity allows electrons to flow, creating a one. Tiny jolts of electricity switch RRAM memory cells between these two digital states. N3XT team members are also experimenting with a variety of nano-scale magnetic materials to store digital ones and zeroes.

Just as skyscrapers have ventilation systems, N3XT high-rise chip designs incorporate thermal cooling layers. This work, led by Stanford mechanical engineers Kenneth Goodson and Mehdi Asheghi, ensures that the heat rising from the stacked layers of electronics does not degrade overall system performance.

http://www.popularmechanics.com/technology/a18493/stanford-3d-computer-chip-improves-performance/

'Skyscraper' Chips Could Make Computers 1,000 Times Faster
Two-dimensional chips have long been the norm, but researchers are finally clearing barriers to stack 'em on top of each other.

For years, computer systems have been made of silicon processors and memory chips arranged so they sit next to each other on a single layer. Intricate wiring connects the components so data can be computed on the processors and then stored on the memory chips.

The problem is that this configuration sends digital signals on a longer route than is ideal, and there are common problems with bottlenecking—too much data trying to travel the same circuits simultaneously. Both of these problems can be mitigated by stacking processors and memory chips on top of each other. Stacking chips is how Samsung managed to produce a 16-terabyte hard drive.

It's hard though. To fabricate a silicon chip, you need to heat it up to 1,800 degrees Fahrenheit, a process that torches the chip below if you attempt it directly on a 3D configuration. To avoid this, computer manufacturers have had to construct silicon chips separately and then stack them and connect the thousands of required wires.

But researchers from Stanford and three other universities have developed a new method of stacking chips into 3D configurations called Nano-Engineered Computing Systems Technology, or N3XT. Thanks to a new material for chip fabrication and improved electrical pathways, N3XT high-rise chip designs are 1,000 times more efficient than conventional chip configurations.

The research team has solved stacking problems by using newly developed nano-materials that can be fabricated at lower temperatures so they won't run the risk of frying lower layers. To further improve the performance of their 3D chip structure, the team developed electric "ladders" or "elevators" that can move more data over a short distance than traditional wiring, all while using less energy. To keep the structure from overheating, the team incorporated a cooling system for the chip that is analogous to air conditioning in a real skyscraper.

"There are huge volumes of data that sit within our reach and are relevant to some of society's most pressing problems from health care to climate change, but we lack the computational horsepower to bring this data to light and use it," says Stanford computer scientist and N3XT researcher Chris Ré. "As we all hope in the N3XT project, we may have to boost horsepower to solve some of these pressing challenges." As with all intense microchip tech, it will take years before this makes it to your laptop if it ever even does. But unless we figure out how to get quantum computers into homes first, this could come in handy even a decade from now.

Mark Cerny & Ken Kutaragi will have to work together on PlayStation 6, for 2028.

I suspect PS5 and PS5 Pro won't be quite that ambitious ;)
 

hesido

Member
There already is one: 'Skyscraper' / 'Highrise' chips.

http://news.stanford.edu/2015/12/09/n3xt-computing-structure-120915/

http://www.popularmechanics.com/technology/a18493/stanford-3d-computer-chip-improves-performance/

Mark Cerny & Ken Kutaragi will have to work together on PlayStation 6, for 2028.

I suspect PS5 and PS5 Pro won't be quite that ambitious ;)

The good thing about what you've linked is that this is not just a brute-force approach to stacking, but one that seems to reduce energy loss and make the chips more efficient. Cool!
 
To add to this: on FinFET, while L matters, the gate capacitance should also be a lot higher than in planar methods on account of the greater surface area provided by the various edges of W on a fin. If they decide to shrink a node and add more fins they may get a higher C. It should also be noted that a fin should reduce the amount of sub-threshold leakage, along with a change in some yield parameters from dopant variability to fin dimension variability.

IIRC Intel uses 3 fins but everybody else uses 2 fins.

One thing I'm curious about though is why do most estimates for active power just use P ~ V^2 * f, if C is so dominant?

C is on the order of fF (10^-15 F), whereas supply voltage and frequency are MANY orders of magnitude larger. That's why it gets dropped, I guess? We never drop C in my courses because you can't assume the transistor is always minimum size. For example, if you want an inverter with an equal propagation delay going high or low, you have to size the PMOS larger due to the mobility difference.
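
For what it's worth, here is a tiny sketch of that inverter-sizing point; the 2.5x electron/hole mobility ratio and the 100 nm NMOS width are assumed purely for illustration, since the real values are process-dependent.

```python
# Sketch of the inverter-sizing point above: for roughly equal rise and fall
# propagation delays, the PMOS is made wider than the NMOS to compensate for
# lower hole mobility. The mobility ratio (2.5) and the 100 nm NMOS width are
# illustrative assumptions; real values are process-dependent.

def pmos_width_for_equal_delay(nmos_width_m, mobility_ratio=2.5):
    """Widen the PMOS by the electron/hole mobility ratio so the pull-up
    drive strength roughly matches the pull-down."""
    return nmos_width_m * mobility_ratio

w_n = 100e-9
w_p = pmos_width_for_equal_delay(w_n)
print(f"NMOS width: {w_n * 1e9:.0f} nm, PMOS width: {w_p * 1e9:.0f} nm")

# The wider PMOS also presents a larger gate capacitance to the previous stage,
# which is one reason C can't just be assumed to be the minimum value.
```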
 
Man, it shows you how much money Apple is making on their phones when TSMC is confident they can build a profitable process just for one device, given how much it costs to set up.
 
Not this shit again :/ It'll take another 3 or so years until we see something better than 10nm, I guess?

TSMC has 7nm on tapeout this year. Might see stuff from them in 2018.

But of course all these process names are marketing. One company's 10nm might be better than another's 7nm.
 

E-Cat

Member
TSMC has 7nm on tapeout this year. Might see stuff from them in 2018.

But of course all these process names are marketing. One company's 10nm might be better than another's 7nm.
These are the known metal/poly pitches for each node for the four major foundries (7nm projected for all except TSMC, 5nm obviously iffy). Note that in order to get true density scaling, CPP x MMP is not enough; you also need at least the cell track count. I.e., going from 7.5-track cells @ 10nm to 6-track cells @ 7nm gives an additional 1.25x density gain that's not deducible from CPP x MMP.

[image: Rj53woN.png (metal/poly pitch comparison by foundry and node)]


In any case, if Intel is gonna be at "10nm" in 2018 and that is essentially tied with TSMC's "7nm" density, then they'll no longer have the process lead. And soon after, GF/Samsung will blow right past them. And don't forget that going forward, all of these companies have more aggressive cadences than Intel. I expect they'll be debuting their "5nm" long before Intel does "7nm".
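
Here is a rough sketch of that bookkeeping, treating standard-cell area as roughly CPP x (tracks x MMP); the pitch values below are placeholders rather than any foundry's real figures, and only the 7.5-track to 6-track change is taken from the post above.

```python
# Sketch of the density bookkeeping described above. Standard-cell height is
# roughly (track count) * MMP and cell width scales with CPP, so relative
# density goes as 1 / (CPP * MMP * tracks). The pitch values are placeholder
# numbers; only the 7.5-track to 6-track change comes from the post above.

def cell_area(cpp_nm, mmp_nm, tracks):
    """Approximate standard-cell footprint: width ~ CPP, height ~ tracks * MMP."""
    return cpp_nm * (tracks * mmp_nm)

area_10nm       = cell_area(cpp_nm=54, mmp_nm=36, tracks=7.5)  # hypothetical "10nm"
area_7nm_pitch  = cell_area(cpp_nm=40, mmp_nm=36, tracks=7.5)  # pitch scaling only
area_7nm_6track = cell_area(cpp_nm=40, mmp_nm=36, tracks=6)    # plus 7.5T -> 6T cells

print(f"gain from CPP x MMP alone:      {area_10nm / area_7nm_pitch:.2f}x")
print(f"gain including track reduction: {area_10nm / area_7nm_6track:.2f}x")
print(f"extra factor from tracks alone: {area_7nm_pitch / area_7nm_6track:.2f}x")  # 7.5/6 = 1.25x
```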
 

AmyS

Member
These are the known metal/poly pitches for each node for the four major foundries (7nm projected for all except TSMC, 5nm obviously iffy). Note that in order to get true density scaling, CPP x MMP is not enough; you also need at least the cell track count. I.e., going from 7.5-track cells @ 10nm to 6-track cells @ 7nm gives an additional 1.25x density gain that's not deducible from CPP x MMP.

[image: Rj53woN.png (metal/poly pitch comparison by foundry and node)]


In any case, if Intel is gonna be at "10nm" in 2018 and that is essentially tied with TSMC's "7nm" density, then they'll no longer have the process lead. And soon after, GF/Samsung will blow right past them. And don't forget that going forward, all of these companies have more aggressive cadences than Intel. I expect they'll be debuting their "5nm" long before Intel does "7nm".

Good points. "7nm" FinFET should be good for Zen 2 / Zen 3 and Navi GPU products.

[image: DMlqJbM.png]


Edit: Also this -

https://www.overclock3d.net/news/cp...a_exascale_mega_apu_in_a_new_academic_paper/1

One of the largest issues comes when manufacturing large CPU/GPU dies, with yields decreasing and costs rising as you create larger products. Imagine a silicon wafer, and imagine that a single wafer has a certain number of defects; each wafer creates a certain number of chips, which means that only a small number of chips will be affected in the whole batch. When creating products with large die sizes, the number of chips per silicon wafer decreases, which means that defects can destroy a larger proportion of the products on a single silicon wafer.

According to this paper, AMD wants to get around this "large die issue" by making their Exascale APUs using a large number of smaller dies, which are connected via a silicon interposer. This is similar to how AMD GPUs connect to HBM memory and can, in theory, be used to connect two or more GPUs, or in this case CPU and GPU dies, to create what is effectively a larger final chip using several smaller parts.

In the image below you can see that this APU uses eight different CPU dies/chiplets and eight different GPU dies/chiplets to create an exascale APU that can effectively act like a single unit. If these CPU chiplets use AMD's Ryzen CPU architecture they will have a minimum of 4 CPU cores, giving this hypothetical APU a total of 32 CPU cores and 64 threads.

This new APU type will also use onboard memory, using a next-generation memory type that can be stacked directly onto a GPU die, rather than be stacked beside a GPU like HBM. Combine this with an external bank of memory (perhaps DDR4) and AMD's new GPU memory architecture and you will have a single APU that can work with a seemingly endless amount of memory and easily compute using both CPU and GPU resources using HSA (Heterogeneous System Architecture).

In this chip both the CPU and GPU portions can use the package's onboard memory as well as external memory, opening up a lot of interesting possibilities for the HPC market, possibilities that neither Intel nor Nvidia can provide themselves.

[image: shgfGqK.png (exascale APU diagram with CPU and GPU chiplets on an interposer)]


Right now this new "Mega APU" is currently in early design stages, with no planned release date. It is clear that this design uses a new GPU design that is beyond Vega, using a next-generation memory standard which offers advantages over both GDDR and HBM.

Building a large chip using several smaller CPU and GPU dies is a smart move from AMD, allowing them to create separate components on manufacturing processes that are optimised and best suited to each separate component and allows each constituent piece to be used in several different CPU, GPU or APU products.

For example, CPUs could be built on a performance optimised node, while the GPU clusters can be optimised for enhanced silicon density, with interposers being created using a cheaper process due to their simplistic functions that do not require cutting edge process technology.

This design method could be the future of how AMD creates all of their products, with both high-end and low-end GPUs being made from different numbers of the same chiplets and future consoles, desktop APUs and server products using many of the same CPU or GPU chiplets/components.

Most likely these will become the building blocks of PS5 / XBNext and next-next generation consumer APUs beyond Raven Ridge (2017) and Gray Hawk (2019).
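
To illustrate the yield argument in the quoted article, here is a rough sketch using a simple Poisson defect model (yield roughly exp(-D0 * A)); the defect density and die areas are made-up illustrative numbers, not real foundry data.

```python
import math

# Rough sketch of the yield argument in the quoted article, using a simple
# Poisson defect model: yield ~ exp(-D0 * A). The defect density and die
# areas below are made-up illustrative numbers, not real foundry data.

def poisson_yield(die_area_cm2, defects_per_cm2):
    """Fraction of dies with zero defects under a Poisson model."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

D0 = 0.2                 # hypothetical defects per cm^2
big_die = 6.0            # hypothetical monolithic die, cm^2
chiplet = big_die / 8.0  # same silicon split into 8 chiplets

y_big = poisson_yield(big_die, D0)
y_chiplet = poisson_yield(chiplet, D0)

print(f"monolithic die yield: {y_big:.1%}")
print(f"single chiplet yield: {y_chiplet:.1%}")

# Under this model, 8 untested chiplets all being good is no better than the
# big die; the win is that chiplets are tested before assembly, so a defect
# scraps one small die instead of a whole large one.
print(f"wafer area burned per good monolithic die: {1 / y_big:.2f}x")
print(f"wafer area burned per good chiplet:        {1 / y_chiplet:.2f}x")
```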
 

E-Cat

Member
Here's a new graph with an updated Intel 36nm MMP @ 10nm and the cell tracks added. Without factoring in transistor performance or power, we can calculate from [MMP x CPP x Tracks] that TSMC's 16FF+ is 4 percent denser than GloFo's 14nm (which is licensed from Samsung).

Little info is available on GloFo's 7nm process at the moment, but we know that it differs from Samsung where they're thinking of using EUV. Assuming a relaxed MMP to avoid quadruple patterning due to staying on optical litho, TSMC's 7nm would be around 2 percent denser than GloFo's according to the present numbers.

[image: mHn4fv1.png (updated pitch and cell-track comparison)]
 
Forgive my ignorance, but what are the benefits of smaller nodes?

This kinda reads like a jeff_rigby thread...

More transistors per area, which usually means more performance per watt. They can also shrink current processors to consume less power for the same performance. For GPUs it could mean the same core count but at much higher clock rates compared to a larger process node. They can also put in more transistors than the previous generation of processors, just like the GTX 1080 compared to the GTX 980.


The problem is that the shrinking process will hit a wall soon (in the next 10-15 years). They are going to reach a point where transistors won't be able to function in their current form, because physics tends to be chaotic at the atomic scale.
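
As a toy illustration of the "more transistors per area" point, here is what ideal geometric scaling would give; real nodes stopped tracking this ideal long ago and the marketing names no longer map to a physical dimension, so treat it as an upper bound.

```python
# Toy illustration of the "more transistors per area" point: under ideal
# geometric scaling, shrinking linear dimensions by a factor s shrinks area
# by s^2, so density rises by 1/s^2. Real nodes stopped tracking this ideal
# long ago and the names are partly marketing, so treat it as an upper bound.

def ideal_density_gain(old_node_nm, new_node_nm):
    """Ideal transistor-density improvement from a linear shrink."""
    s = new_node_nm / old_node_nm
    return 1.0 / (s * s)

# e.g. a nominal "16nm" -> "7nm" shrink, taking the labels at face value
print(f"ideal density gain, 16nm -> 7nm: {ideal_density_gain(16, 7):.1f}x")
```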
 
Here's a new graph with an updated Intel 36nm MMP @ 10nm and the cell tracks added. Without factoring in transistor performance or power, we can calculate from [MMP x CPP x Tracks] that TSMC's 16FF+ is 4 percent denser than GloFo's 14nm (which is licensed from Samsung).

Little info is available on GloFo's 7nm process at the moment, but we know that it differs from Samsung where they're thinking of using EUV. Assuming a relaxed MMP to avoid quadruple patterning due to staying on optical litho, TSMC's 7nm would be around 2 percent denser than GloFo's according to the present numbers.

[image: mHn4fv1.png (updated pitch and cell-track comparison)]

Any chance of someone explaining this chart to me like the idiot I am?

Thanks :)
 

Datschge

Member
The problem is that the shrinking process will hit a wall soon(next 10-15 years).
It's already hitting the ROI wall, which is why smaller nodes are primarily driven by the smaller mass-market mobile chips, where every reduction in power consumption still counts, while bigger chips skip nodes, as yield disproportionately worsens with size and porting the chip design itself is very costly.
 