
Nintendo NX rumored to use Nvidia's Pascal GPU architecture


Luigiv

Member
What is this "Wii U is 176 GFLOPS" stuff? That's a fair bit lower than the 360's and PS3's GPU ratings.

This has been known for a while. It was figured out when we first got the photo of the decapped die (though it did take a fair bit of cross-checking and debating to settle on that number).

Anyway, even though it's lower than the 360's and PS3's theoretical peak numbers, the Wii U's architectural advantages more than made up for the difference.
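For anyone wondering where 176 comes from, it's just the standard peak-FLOPS arithmetic applied to the commonly cited Latte figures (160 shader ALUs at 550 MHz; the ALU count is the part the die-shot debate settled on). A quick sketch, with the usual 360 figure for comparison:

# Peak single-precision FLOPS = ALUs * 2 ops per clock (FMA) * clock in GHz
def peak_gflops(alus, clock_ghz):
    return alus * 2 * clock_ghz

print(peak_gflops(160, 0.550))   # Wii U "Latte" estimate -> 176.0 GFLOPS
print(peak_gflops(240, 0.500))   # Xbox 360 Xenos (48 units x 5 ALUs) -> 240.0 GFLOPS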
 

Mpl90

Two copies sold? That's not a bomb guys, stop trolling!!!
Since this is the most visited tech-related thread on NX at the moment, a question: Nvidia flops aren't equal to AMD flops, but what's the (possible) exchange rate between them? A few days ago, I was hearing it was 1 Nvidia flop = ~ 1.15 AMD flops, while GhostTrick stated it was "more like 1.3". I'm pretty curious to hear the opinion of other posters with good tech knowledge on the matter.
 

MuchoMalo

Banned
That too; a K1 is still 364 GFLOPS, or more than twice the Wii U. If the console had to be considerably worse than an X1 (which is 512), there's no reason for the devkits to use an X1 instead of a K1, which is cheaper and closer to the actual performance.

Actually, there is a reason, in that the X1's architecture is much closer; but even so, they'd have underclocked the chip, not overclocked it or kept it running at stock.
 

G.ZZZ

Member
Actually, there is a reason, in that the X1's architecture is much closer; but even so, they'd have underclocked the chip, not overclocked it or kept it running at stock.

Is Kepler actually that different from Maxwell? I imagined it was just another die shrink "evolution".
 

MuchoMalo

Banned
Since this is the most visited tech-related thread on NX at the moment, a question: Nvidia flops aren't equal to AMD flops, but what's the (possible) exchange rate between them? A few days ago, I was hearing it was 1 Nvidia flop = ~ 1.15 AMD flops, while GhostTrick stated it was "more like 1.3". I'm pretty curious to hear the opinion of other posters with good tech knowledge on the matter.

I've done calculations (using actual numbers after factoring in GPU Boost and not just Nvidia's rated numbers), and I'd say that 1.25 is a good estimate for GCN4 (Polaris) vs. Pascal. 1.3 is good as well, but there's no exact number and it varies by game, so it's difficult to say.
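As a rough illustration of how a ratio like that falls out (the desktop card specs below are my own approximations, and treating the two cards as roughly equal in games is a generalization, not a measurement):

# "Exchange rate" estimate: AMD FLOPS per Nvidia FLOP for similar game performance,
# assuming a GTX 1060 and an RX 480 trade blows in typical titles.
gtx_1060_tflops = 1280 * 2 * 1.71 / 1000    # cores * 2 * boost clock (GHz) ~= 4.4 TFLOPS
rx_480_tflops   = 2304 * 2 * 1.266 / 1000   # ~= 5.8 TFLOPS

print(rx_480_tflops / gtx_1060_tflops)      # ~1.33, in the same ballpark as the 1.25-1.3 guesses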

Is Kepler actually that different from Maxwell? I imagined it was just another die shrink "evolution".

Kepler and Maxwell both use the same node, yet the jump in power efficiency from Kepler to Maxwell was as big as or even bigger than the jump from Maxwell to Pascal. I think that should say a lot. It's still an evolution more than a completely new ground-up architecture (I'd say Fermi was the last time that happened), but it's still quite different.
 

wildfire

Banned
dev kits usually have more memory but are not usually more powerful
I guess the consensus was that they are aiming for more power than the Tegra X1... that is why they chose Pascal.

Yeah, I was part of this consensus as well, but after watching MuchoMalo's link on Pascal flops being the same as Maxwell flops, I can believe now that Nintendo is getting the X2 simply to get power efficiency for handheld mode.

The handheld should perform as well as a Wii U, but the console mode will fall short of the Xbox One.

SemiAccurate's article about Nvidia having contractual obligations to make n amount of X1 chips with their foundry partner still works out with the X2 if it's simply a die shrink of Maxwell with barely any tweaks.
 

Ck1

Banned
It seems that we just don't have enough info yet on what state the NX is primarily being developed for, spec-wise. Is the main focus of the system to include fewer cores at higher clocks in preference for mobile mode, or will the NX have more cores (say 3-4) downclocked to lower frequencies for mobile, but at standard clock rates in docked mode?

It seems there are two direct roads Nintendo could take to achieve the optimal performance and power-draw envelope, but which one would be the most feasible way for them to pull off this unified platform?
 

MuchoMalo

Banned
Yeah, I was part of this consensus as well, but after watching MuchoMalo's link on Pascal flops being the same as Maxwell flops, I can believe now that Nintendo is getting the X2 simply to get power efficiency for handheld mode.

The handheld should perform as well as a Wii U, but the console mode will fall short of the Xbox One.

SemiAccurate's article about Nvidia having contractual obligations to make n amount of X1 chips with their foundry partner still works out with the X2 if it's simply a die shrink of Maxwell with barely any tweaks.

Keep in mind that SemiAccurate is an Nvidia hate parade, so that part was most likely just made up, also to make the article longer to help justify paying thousands of dollars for it. Charlie Demerjian is kind of a jackass and a terrible writer.

And the handheld can still outperform Wii U by a significant amount.
 

ethomaz

Banned
Since this is the most visited tech-related thread on NX at the moment, a question: Nvidia flops aren't equal to AMD flops, but what's the (possible) exchange rate between them? A few days ago, I was hearing it was 1 Nvidia flop = ~ 1.15 AMD flops, while GhostTrick stated it was "more like 1.3". I'm pretty curious to hear the opinion of other posters with good tech knowledge on the matter.
FLOPS are equal no matter what... if somebody said the opposite, they are dead wrong.

What happens is that the way nVidia uses those FLOPS for graphics is more efficient than AMD, due to hardware architecture differences and drivers.
 

ethomaz

Banned
Yeah, I was part of this consensus as well, but after watching MuchoMalo's link on Pascal flops being the same as Maxwell flops, I can believe now that Nintendo is getting the X2 simply to get power efficiency for handheld mode.

The handheld should perform as well as a Wii U, but the console mode will fall short of the Xbox One.

SemiAccurate's article about Nvidia having contractual obligations to make n amount of X1 chips with their foundry partner still works out with the X2 if it's simply a die shrink of Maxwell with barely any tweaks.
They should not use an overclocked X1 if they wish to use Pascal at the same raw power as the X1.

They are aiming higher than the X1 with Pascal.

BTW, Charlie was never a trusted source, going back to when he worked at The Inquirer before SemiAccurate (the name says it all).
 

Ganondolf

Member
FLOPS are equal no matter what... if somebody said the opposite, they are dead wrong.

What happens is that the way nVidia uses those FLOPS for graphics is more efficient than AMD, due to hardware architecture differences and drivers.

The important part is not why Nvidia has better performance, but that once the GFLOPS figure is known, we know what the performance is compared to the HD twins.
 

Ganondolf

Member
They should not use an overclocked X1 if they wish to use Pascal at the same raw power as the X1.

They are aiming higher than the X1 with Pascal.

BTW, Charlie was never a trusted source, going back to when he worked at The Inquirer before SemiAccurate (the name says it all).

It's worth noting that nowhere does it say the X1 is overclocked. People are confusing this because it is actively cooled, which it needs to run at full speed in mobile form.
 

ethomaz

Banned
The important part is not why Nvidia has better performance, but that once the GFLOPS figure is known, we know what the performance is compared to the HD twins.
Yep, but it's better to explain why, so as not to make the claim that "nVidia meters are different or better than AMD meters" :)

About performance, we will have a better idea after the Hot Chips "Parker" presentation/announcement in three weeks.

It's worth noting that nowhere does it say the X1 is overclocked. People are confusing this because it is actively cooled, which it needs to run at full speed in mobile form.
Eurogamer's rumor says overclocked X1.
 
External GPU in the docking station to help in console mode, I guess.
Yeah. The relevant Nintendo patent says "supplemental computing devices", but the technology that already exists is usually called "external GPU". It's also somewhat silly they were granted that patent, but what else is new with patents...

Wouldn't shock me if we get multiple SKUs out of the gate, one sold without an eGPU dock (or without a dock entirely?), and one with. I am, of course, 100% speculating here.
 
They should not use an overclocked X1 if they wish to use Pascal at the same raw power as the X1.

They are aiming higher than the X1 with Pascal.

I think the question is: are the devkits using an overclocked X1 in portable mode or docked mode (if those are even different things)?

In portable mode, Pascal may be primarily used to achieve sub-X1 power at a much lower power draw, and the overclocked X1 in the devkit may be simulating a fully clocked Pascal chip in docked mode. We really don't know if there's any change at this point.

All this assumes that active cooling that is fairly loud indicates overclocking, which isn't 100% certain, as others have pointed out.
 
Yep, but it's better to explain why, so as not to make the claim that "nVidia meters are different or better than AMD meters" :)

About performance, we will have a better idea after the Hot Chips "Parker" presentation/announcement in three weeks.


Eurogamer's rumor says overclocked X1.
I'm not sure what "overclocked" actually means when talking about an SoC setup. CPU? GPU? Both? Also, we can't rule out them disabling some of the cores in the dev kit (even after OCing) to approximate a final configuration that has fewer CPU or GPU cores for power efficiency reasons.

I guess what I'm saying is, an overclocked X1 seems like the realistic upper bound, but there are lots of ways we could end up with a less beefy chip. It might make sense to look at the Shield's TDP and what Nintendo has asked for from their past portables to guess how low they'll want to go for the NX.
 

ethomaz

Banned
I think the question is: are the devkits using an overclocked X1 in portable mode or docked mode (if those are even different things)?

In portable mode, Pascal may be primarily used to achieve sub-X1 power at a much lower power draw, and the overclocked X1 in the devkit may be simulating a fully clocked Pascal chip in docked mode. We really don't know if there's any change at this point.

All this assumes that active cooling that is fairly loud indicates overclocking, which isn't 100% certain, as others have pointed out.
What is the actual Tegra X1's mobile power draw? What clock does it run at?

Based on the big GPUs, Pascal will have less power draw at a higher clock.
 

Ganondolf

Member
Eurogamer's rumor says overclocked X1.

This is what the Eurogamer article says:

"There's an additional wrinkle to the story too, albeit one we should treat with caution as it is single-source in nature with a lot of additional speculation on our part. This relates to the idea that the Tegra X1 in the NX development hardware is apparently actively cooled, with audible fan noise. With that in mind, we can't help but wonder whether X1 is the final hardware we'll see in the NX. Could it actually be a placeholder for Tegra X2? It's a new mobile processor Nvidia has in its arsenal and what's surprising about it is how little we actually know about it."
 
Yeah, I was part of this consensus as well, but after watching MuchoMalo's link on Pascal flops being the same as Maxwell flops, I can believe now that Nintendo is getting the X2 simply to get power efficiency for handheld mode.

The handheld should perform as well as a Wii U, but the console mode will fall short of the Xbox One.

SemiAccurate's article about Nvidia having contractual obligations to make n amount of X1 chips with their foundry partner still works out with the X2 if it's simply a die shrink of Maxwell with barely any tweaks.

The S/A article actually didn't mention anything about contractual obligations for X1 chips. That was conjecture by folks on this board, I believe. But what Malo said about AMD being the favorite over there does seem to have some grounding in truth. I'd take anything from SemiAccurate that seems overly negative re: Nvidia with a massive grain of salt.
 

MDave

Member
What effect does using FP16 have on visual quality compared to FP32?

According to Nvidia, 1 TFLOPS would be possible in that case, on the 10-watt X1. If it's, say, underclocked so it's using 5 watts, that's still about 500 GFLOPS. And this is the Maxwell X1; I wonder what the watts would be for Pascal (basically the X2, right?).

Is FP16 useful in real world scenarios?
 

MuchoMalo

Banned
What effect does using FP16 have on visual quality compared to FP32?

According to Nvidia, 1 TFLOPS would be possible in that case, on the 10-watt X1. If it's, say, underclocked so it's using 5 watts, that's still about 500 GFLOPS. And this is the Maxwell X1; I wonder what the watts would be for Pascal (basically the X2, right?).

Is FP16 useful in real world scenarios?

It doesn't make even a slight difference. You can't directly translate FLOPS to gaming performance, and if you could magically double performance just by using FP16, everyone would do it. To make things easier, just read it as it is: 1 FP32 FLOP = 2 FP16 FLOPs. Also, clocks don't scale perfectly with power usage.
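For reference, here's roughly where the "1 TFLOPS" X1 marketing figure comes from, assuming the usual 256 CUDA cores at about 1 GHz (the double-rate FP16 is the Tegra-specific part):

# Tegra X1 peak throughput sketch: 256 CUDA cores, ~1 GHz, FMA = 2 ops/clock.
cores, clock_ghz = 256, 1.0
fp32_gflops = cores * 2 * clock_ghz   # ~512 GFLOPS FP32
fp16_gflops = fp32_gflops * 2         # double-rate FP16 -> ~1024 GFLOPS, i.e. the "1 TFLOPS" claim
print(fp32_gflops, fp16_gflops)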
 
Oh yeah, if it's using Pascal architecture that would mean X2, right? Or can X1 support it as well?



No, the X1 doesn't "support" Pascal. It's not about support; it's a hardware thing. The X1 uses Maxwell and that's all. The X1 is a product, so it has defined specs. The NX using Pascal, though, doesn't say much on its own.
 

ethomaz

Banned
It doesn't make even a slight difference. You can't directly translate FLOPS to gaming performance, and if you could magically double performance just by using FP16, everyone would do it. To make things easier, just read it as it is: 1 FP32 FLOP = 2 FP16 FLOPs. Also, clocks don't scale perfectly with power usage.
There is a reason why in game development devs choose FP32 over FP16... FP16 is slower.

Tegra is the first case I've seen of a GPU being faster at FP16 than FP32.

Desktop Pascal, for example, runs FP16 at 1/64 the speed of FP32... in FLOPS terms, a 1080 has 8.7 TFLOPS in FP32 and 136 GFLOPS in FP16.
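For what it's worth, those two figures are consistent with the 1/64 rate:

# GTX 1080 FP16 throughput at the 1/64 rate, using the FP32 figure quoted above.
fp32_tflops = 8.7
fp16_gflops = fp32_tflops * 1000 / 64
print(fp16_gflops)   # ~136 GFLOPS, matching the number cited for desktop Pascal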
 

dr_rus

Member
Kepler and Maxwell both use the same node, yet the jump in power efficiency from Kepler to Maxwell was as big as or even bigger than the jump from Maxwell to Pascal. I think that should say a lot. It's still an evolution more than a completely new ground-up architecture (I'd say Fermi was the last time that happened), but it's still quite different.
Fermi is an evolution of Tesla / G80.
 

sfried

Member
Part of me still doesn't believe the Nvidia deal. Otherwise, we should have heard a partnership announcement by now, or back at E3.
 

Durante

Member
Since this is the most visited tech-related thread on NX at the moment, a question: Nvidia flops aren't equal to AMD flops, but what's the (possible) exchange rate between them? A few days ago, I was hearing it was 1 Nvidia flop = ~ 1.15 AMD flops, while GhostTrick stated it was "more like 1.3". I'm pretty curious to hear the opinion of other posters with good tech knowledge on the matter.
Obviously the very concept of "exchange rates" between FLOPs is a bit silly, but I'll still give answering the question a go.

First of all, the basic idea behind all of this is getting from FLOPs to some estimate of relative in-game performance. As such, it's really not about converting FLOPs -- if you run a well-optimized version of a pure FLOPs benchmark like DGEMM on these GPUs, they'll get very close to their theoretical rates, without a significant difference in utilization.
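(To make "close to their theoretical rates" concrete: for DGEMM the FLOP count is known exactly, roughly 2·n³ for n×n matrices, so utilization is just measured FLOPs per second divided by the peak rate. A minimal sketch; the peak figure here is a placeholder, not any particular GPU:)

import time
import numpy as np

n = 2048
a, b = np.random.rand(n, n), np.random.rand(n, n)
t0 = time.perf_counter()
c = a @ b                                  # DGEMM: ~2*n^3 floating-point operations
elapsed = time.perf_counter() - t0

achieved_gflops = 2 * n**3 / elapsed / 1e9
peak_gflops = 500.0                        # placeholder peak for whatever device is being measured
print(achieved_gflops, achieved_gflops / peak_gflops)   # measured rate and utilization vs. peak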

When it comes to games, the situation is more complex. The reason rates like "1.15" or "1.30" come about is that people compare the performance of a set of PC games on GPUs by NV and AMD respectively. Now, there are two main issues when converting such an observation (which is valid for the PC side of things) to a situation such as comparing a custom console APU with a custom handheld SoC:
  1. A GPU is not just its shader units. These differences might well come about partially due to a difference in the relative number of e.g. TMUs and ROPs, which might not manifest in the same way when comparing these custom parts.
  2. The driver and optimization situations may be quite different. On PC, one reason that you might get factors as large as 1.3 and as low as 1.15 is that AMD's DX11 (and worse, OpenGL) drivers are comparatively bad at actually extracting the real GPU performance from a given card. On dedicated consoles (or handhelds) you'd hope that at least for high-end games better hardware utilization is achieved.


Is FP16 useful in real world scenarios?
The short answer is yes. The long answer is that programmers would need to take care to consider, for each shader calculation, whether it can use FP16 or needs FP32, which is a very platform-specific optimization which currently would only make sense for Tegra out of all the mainstream gaming platforms, so it's probably not something which will find wide adoption.
 

ethomaz

Banned
The short answer is yes. The long answer is that programmers would need to take care to consider, for each shader calculation, whether it can use FP16 or needs FP32, which is a very platform-specific optimization which currently would only make sense for Tegra out of all the mainstream gaming platforms, so it's probably not something which will find wide adoption.
Yep... most desktop GPUs have slower FP16 performance than FP32.

Tegra is the opposite... so I can see devs optimizing their code for FP16 when working on a game exclusive to Tegra.

If the same is true for the NX, then you will see a clear difference between first-party (exclusive) and multiplatform games... the latter will choose to use FP32 for compatibility with all platforms.
 

Plinko

Wildcard berths that can't beat teams without a winning record should have homefield advantage
Oh yes there is.

The initial 3DS speculation contained such rumors, too. Guess which chip Nintendo ended up sticking with?

From verified leakers who have good track records like Nate, OsirisBlack and LCGeek? Because I don't remember that.
 
I'm not sure the X1s in the dev kits are overclocked; they need active cooling to hit full clock speed (or near it) in mobile form. With Pascal I reckon Nintendo will try to get as close to 500 GFLOPS as possible in mobile form, but even if they only get half that (250 GFLOPS) it's well above Wii U (176 GFLOPS). I reckon it will be 350 GFLOPS at the lowest and 500 GFLOPS at best in mobile mode.

I thought Wii U is around 350 gflops?

https://www.techpowerup.com/gpudb/1903/wii-u-gpu
 
Oh yes there is.

The initial 3DS speculation contained such rumors, too. Guess which chip Nintendo ended up sticking with?

Even if they switch chips, all they would do is switch to another ARM chip. That's the big takeaway from the Nvidia rumors if anything. They are not using some AMD x64 Polaris PS4-level APU and there's no reason to think otherwise.

NX is using ARM. Deal with it.
 

MDave

Member
It doesn't make even a slight difference. You can't directly translate FLOPS to gaming performance, and if you could magically double performance just by using FP16, everyone would do it. To make things easier, just read it as it is: 1 FP32 FLOP = 2 FP16 FLOPs. Also, clocks don't scale perfectly with power usage.

I'm just going off what Nvidia were saying about it in their X1 unveiling. Guess in the real world it doesn't live up to what they were saying?

Durante said:
The short answer is yes. The long answer is that programmers would need to take care to consider, for each shader calculation, whether it can use FP16 or needs FP32, which is a very platform-specific optimization which currently would only make sense for Tegra out of all the mainstream gaming platforms, so it's probably not something which will find wide adoption.

Ah, so it can make a difference; we just haven't really seen it yet? No doubt it's something devs will want to do for the NX if they want better performance.

As far as I understand, the difference between FP16 and FP32 is the precision of floating point numbers used for calculations. I guess that depending on the shader, it can lead to less precision in things like pixel and vertex operations? What sort of shaders absolutely need FP32?
 

dr_rus

Member
The short answer is yes. The long answer is that programmers would need to take care to consider, for each shader calculation, whether it can use FP16 or needs FP32, which is a very platform-specific optimization which currently would only make sense for Tegra out of all the mainstream gaming platforms, so it's probably not something which will find wide adoption.
Recent Radeons (starting with GCN3, I think) support FP16 at single rate just fine, and you may get some benefits from less register pressure and smaller data movement in general. NV's decision to cut double-speed FP16 from gaming Pascals is mostly monetization-based - they want to sell those fast halves to the deep learning crowd for Tesla money - and if they see a benefit to it in gaming, they'll bring it to GeForces as well. And I believe most modern mobile GPUs support FP16 as well. So the question of support isn't that simple.
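The data-movement half of that is easy to see in isolation (trivial illustration, nothing GPU-specific):

import numpy as np

# Halving precision halves the storage footprint, and with it the bandwidth needed to move the data.
buf32 = np.zeros(1_000_000, dtype=np.float32)
buf16 = np.zeros(1_000_000, dtype=np.float16)
print(buf32.nbytes, buf16.nbytes)   # 4000000 vs 2000000 bytes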
 

ethomaz

Banned
As far as I understand, the difference between FP16 and FP32 is the precision of floating point numbers used for calculations. I guess that depending on the shader, it can lead to less precision in things like pixel and vertex operations? What sort of shaders absolutely need FP32?
I believe none.

Fact is, most desktop GPUs have faster FP32 than FP16, and that makes everybody use FP32.

You can read here how much slower FP16 is on desktop Pascal (1/64 the rate of FP32): http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/5
 

MDave

Member
I believe none.

Fact is, most desktop GPUs have faster FP32 than FP16, and that makes everybody use FP32.

You can read here how much slower FP16 is on desktop Pascal (1/64 the rate of FP32): http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/5

Thanks, that was a good read. It seems the reason FP16 is slower on desktop cards is monetization, not technical limitations. The Titan X (Maxwell) was very good at doing FP16, but it cannibalized sales of their Tesla cards, so they reduced the FP16 cores drastically for their desktop Pascal cards. They artificially made it worse :p

In any case, it will be interesting to see if they have done the same for Tegra Pascal, even though they made a bullet point of the importance of FP16 in the X1.
 

Theonik

Member
There is a reason why in game development devs choose FP32 over FP16... FP16 is slower.

Tegra is the first case I've seen of a GPU being faster at FP16 than FP32.

Desktop Pascal, for example, runs FP16 at 1/64 the speed of FP32... in FLOPS terms, a 1080 has 8.7 TFLOPS in FP32 and 136 GFLOPS in FP16.
Pascal on the desktop is designed with FP16 performance intentionally gimped to protect the market for the compute cards. Deep learning in particular benefits greatly from it.

For calculations where the extra precision isn't important, it's a useful optimisation. Though it's a very specific optimisation that many devs won't bother with, as they need to be sure the lower precision doesn't cause adverse effects.
 

Durante

Member
I believe none.
That's wrong. FP16 is really not a whole lot of precision, and even in some graphics calculations you'll end up with a result that is visibly off (or suffers from artifacts) if you were to just compute everything in FP16.
Determining the correct precision to use for operations is something that can really only be done with care and by taking the requirements of individual shaders (and, possibly, the ones which will continue to work with the resultant values in the pipeline) into account.

There's a reason GPUs settled on FP32 as the standard. People here might not remember, but there was a time when NV GPUs supported both half precision (FP16) and FP32, and AMD (that is, ATI) GPUs were doing almost everything in FP24 (!).
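If you want a quick feel for how little headroom FP16 has, numpy's float16 works as a stand-in (purely illustrative; this is not shader code):

import numpy as np

# FP16 has about 3 decimal digits of precision; near 1000 the spacing between values is 0.5.
x32 = np.float32(1000.0) + np.float32(0.25)
x16 = np.float16(1000.0) + np.float16(0.25)
print(x32, x16)          # 1000.25 vs 1000.0 -- the small term is lost entirely in half precision

# Accumulating many small contributions (think lighting terms) stalls quickly in FP16.
acc16 = np.float16(0.0)
for _ in range(10000):
    acc16 += np.float16(0.1)
print(acc16)             # stops far short of the expected ~1000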
 

wachie

Member
My wild guess

2x Denver + 2x A57
384/512 Core Pascal GPU @ 1000/800MHz
32 Texture Units
16 ROPs
128-bit 4GB LPDDR4 @ 1400MHz

That would result in 700-800 GFLOPS of peak FP32, with memory bandwidth of 36 GB/s. I don't see the dock as an SCD, just that the clock frequencies would be higher than in tablet mode, so 720p in tablet mode could be 900p in docked mode.
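For what it's worth, plugging those hypothetical core counts and clocks into the usual peak-FLOPS formula:

# Peak FP32 from the guessed configurations: cores * 2 (FMA) * clock in GHz.
def peak_gflops(cores, clock_ghz):
    return cores * 2 * clock_ghz

print(peak_gflops(384, 1.0))   # 768 GFLOPS
print(peak_gflops(512, 0.8))   # ~819 GFLOPS -- both land around the quoted 700-800 range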
 

dogen

Member
Look, I've argued my points several times but nobody seems to get it. Every modern consumer electronic with a screen has a resolution of 720p+, and every modern chipset is made to push those resolutions, even in gaming. The Nvidia Shield from three years ago had a 720p display and it ran games pretty darn well. It's also severely ridiculous to think that reducing the pixel count is actually going to give you this magic boost in everything. It's not. If the NX comes out with a low-resolution display it will be criticized. No 540p display looks good today, not even the Playstation Vita's, which came out during a time when HD mobile displays weren't that practical, and the trade-off isn't going to be visuals on par with GTA V with extra-long battery life. The difference isn't that much, and the sharper image from holding a screen 6-12 inches from your face makes it well worthwhile. But sure, go ahead and say that resolution is meaningless on mobile devices.

When you scale the resolution up almost 2x, you're increasing the rendering load by nearly 2x.

There's no fancy "modern chipset magic" that reduces the hit of higher resolutions. They're just more powerful in general. That doesn't mean that 80% more pixels isn't 80% more pixels anymore.

Try it on your PC or laptop or whatever. Load up some game and see how it performs at your normal resolution. Then lower it by half (by pixel count, to make it reasonably accurate). In most cases your performance should go up by nearly 50%, as long as you were GPU-limited in the first place. Then lock the fps to 60 or 30 and check how GPU utilization changes between resolutions. Pretty huge difference, right? That will translate directly into battery savings. It should also be obvious that you can now increase detail settings, AA, etc. at the lower resolution while still holding similar performance to the higher res.

I'm not even arguing that they shouldn't use a 720p screen. I'm just saying that there are real benefits for choosing a lower resolution one.
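To put numbers on the 720p vs. 540p comparison (simple pixel-count arithmetic):

# Pixel counts for the two candidate handheld resolutions.
px_720p = 1280 * 720   # 921,600
px_540p = 960 * 540    # 518,400

print(px_720p / px_540p)   # ~1.78 -- roughly 80% more pixels to shade at 720p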
 
It's worth noting that nowhere does it say the X1 is overclocked. People are confusing this because it is actively cooled, which it needs to run at full speed in mobile form.

People believe the X1 in the NX is overclocked because:

1) Eurogamer's source stated that the dev kit is actively cooled and noisy. The Shield does have a fan, but it is quiet. It is possible that they are being more paranoid about the chips overheating (the Wii U's dev kit was infamously known for that), but it is also likely that the X1 is being overclocked.

2) LCGeek's benchmarks showed that the CPU outperformed the ones in the XB1 and PS4 by a decent margin. I believe that is a bit beyond the performance of a stock-clocked X1*.

* That is assuming that those benchmarks used most of the XB1/PS4 CPU cores, which I believe is 6 or 7 available to devs. Blu's benchmarks showed that the X1 already beats the Jaguars in the XB1/PS4 core for core.
 

MDave

Member
That's wrong. FP16 is really not a whole lot of precision, and even in some graphics calculations you'll end up with a result that is visibly off (or suffers from artifacts) if you were to just compute everything in FP16.
Determining the correct precision to use for operations is something that can really only be done with care and by taking the requirements of individual shaders (and, possibly, the ones which will continue to work with the resultant values in the pipeline) into account.

There's a reason GPUs settled on FP32 as the standard. People here might not remember, but there was a time when NV GPUs supported both half precision (FP16) and FP32, and AMD (that is, ATI) GPUs were doing almost everything in FP24 (!).

Oh man, flashbacks of 2004 with Half-Life 2 GeForce FX and ATI Radeon benchmarks back then. Nvidia's FP32 took a huge hit in performance compared to the FP16 they had back then. FP32 is the standard indeed; I've done some digging and it looks like PowerVR have stuck with FP16 in their mobile GPUs? Quite interesting...
 

joesiv

Member
The fan being noisy is what leads to the assumption of overclocking. The Shield TV does indeed have a fan, but even under max usage it's basically silent.
Yeah, but it's an assumption.

Development hardware is often built more robustly than consumer goods. Not to mention that consumer goods like the Shield TV are made with custom enclosures and purpose-built cooling, while development hardware often uses off-the-shelf parts for such things.

For example:
[image: X1 development board with a large off-the-shelf heatsink and fan]

Note the beefy heatsink and fan; it wouldn't fit in the Shield TV, but it is common for development hardware.
 

Vena

Member
Yeah, but it's an assumption.

Development hardware is often built more robustly than consumer goods. Not to mention that consumer goods like the Shield TV are made with custom enclosures and purpose-built cooling, while development hardware often uses off-the-shelf parts for such things.

For example, do a google image search for the x1 development board, off the shelf beefy HSF.

Generally just RAM.
 