
DF: Nintendo NX Powered By Nvidia Tegra! Initial Spec Analysis


KingBroly

Banned
I think it's very possible people may have been hearing things about a traditional home console, and Nintendo has shelved those plans in favor of their portable, and is waiting until PS5 and Scorpio to release a home console again. It's actually not a bad plan. Releasing something mid-cycle during the reign of the PS4 doesn't make a ton of sense.

If they were shelving plans for a home console they'd have to disclose it if they were working on one to begin with.
 

LeleSocho

Banned
WiiU doesn't use an R700 GPU (nor does such a GPU exist); it uses a GPU built on AMD's VLIW5 architecture, which was certainly old by the time WiiU launched. But it should be noted that AMD's newer GCN architecture wasn't ready to be used in WiiU within its launch frame, so it's not like Nintendo had a lot of options with WiiU's h/w (going with NV's Fermi would have made even less sense overall).

Sticking to TX1, on the other hand, makes little sense, as Parker is essentially the same as TX1 but better - built on a better process (16FF vs 20nm planar, which means more performance and/or better power consumption), using an upgraded Pascal GPU (almost the same as TX1's Maxwell but better in general) and an upgraded Denver2 CPU (if it's using that and not ARM's A57). So I don't see any reason why NX would stick with TX1 specifically instead of going with Parker.

R700 GPUs exist: they are the HD 4000 series by ATI, and VLIW5 is the architecture used for them.
The first GCN card was released before the WiiU, so one can easily assume the final specifications for the architecture were ready well before the console went into production and could've been adopted... as could a smaller 28nm node (already available in late 2011/very early 2012) instead of the ancient 45/40nm they used.

As you can see, Nintendo really doesn't give a damn about being behind technologically, so if they found some reason (cheaper license and fabs, I assume) to think the X1 at 20nm is good enough, they certainly won't wait for the more modern Pascal SoC.
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
People hoping Nintendo are trying to make a handheld competitive with the XB1 with state-of-the-art Nvidia mobile tech that hasn't even been revealed... I think you're being silly, personally.

The EG leak explicitly states that they don't care about power and are actively sacrificing it to go with this form factor and reach a different market entirely. Reggie's, Miyamoto's and Ubisoft's statements all corroborate the fact that a semi-customized X1 (perhaps downclocked) variant is in play here, so it doesn't have to be actively cooled.

They have no need to try and get a contract with the new X2 that hasn't even been revealed, and has no actual release date as of yet outside of the speculation of this thread.

Saying Nvidia are 'desperate' for console contracts even though they are blowing anyone who could remotely be a competitor away is also strange logic to me. Their Tegra line may not be widely used, but they aren't going to just give away their newest mobile tech, which they haven't even finished working on, for pennies so Nintendo can sell it at a bargain, casual-friendly entry price.

They could just ask Tencent for the Monster Hunter Online assets.

Why would they do that when they are Capcom? They still have to make their own assets to work in a single-player game, after all. And they aren't going to bother to collaborate on that level when it's much easier to pump out low-effort, low-resolution Monster Hunter games on 3DS hardware.

They don't even bother to go the Toukiden route, which is far more technically accomplished on Vita than any Monster Hunter game.
 
For people who are saying that Nintendo is backing out of home consoles for good, maybe they're not wrong, but I keep thinking about Iwata's comment that there could be multiple form factors in a family that are "like brothers", and how he compared the concept to Apple and iOS.

Maybe this is just the first in a line of new products all within the NX family.
Nintendo was on board with the iterative console thing before MS and Sony. Traditional console gens are dead and we'll definitely see everybody go the Apple route. NX2 in 2020 seems like a good idea.
 

BriGuy

Member
It was a long shot, but I was hoping the NX would be powerful enough to pull off VR. I know everyone and their brother is going that route, but Nintendo's worlds are the ones I'd most like to visit.
 

ZOONAMI

Junior Member
If they were shelving plans for a home console they'd have to disclose it if they were working on one to begin with.

By shelving I mean delaying. It was never clear what NX was going to be, so I don't see why they have to disclose anything immediately. Iwata and Kimishima have already talked about multiple form factors and NX as a platform, so that's confirmation already? At some point they will have to talk about whether there is a home console coming, to satisfy their shareholders. Basically, confirm the Wii U was a failure and that they are still determining the appropriate timeline for a future home console release. Please understand.
 

Durante

Member
Even if it uses a custom, high-end Pascal-based SoC on 16nm (a very un-Nintendo thing to do, IMHO), I find it very hard to believe that it would exceed 500 GFLOPs in portable mode.

I don't think 1:1 ports of high-end AAA console games were a significant design goal.
 
People hoping Nintendo are trying to make a handheld competitive with the XB1 with state-of-the-art Nvidia mobile tech that hasn't even been revealed... I think you're being silly, personally.

The EG leak explicitly states that they don't care about power and are actively sacrificing it to go with this form factor and reach a different market entirely. Reggie's, Miyamoto's and Ubisoft's statements all corroborate the fact that a semi-customized X1 (perhaps downclocked) variant is in play here, so it doesn't have to be actively cooled.

They have no need to try and get a contract with the new X2 that hasn't even been revealed, and has no actual release date as of yet outside of the speculation of this thread.



Why would they do that when they are Capcom? They still have to make their own assets to work in a single-player game, after all. And they aren't going to bother to collaborate on that level when it's much easier to pump out low-effort, low-resolution Monster Hunter games on 3DS hardware.

They don't even bother to go the Toukiden route, which is far more technically accomplished on Vita than any Monster Hunter game.
Hasn't the "Capcom doesn't make new asset thing" myth been dispelled? Didn't they make a bunch for tri? And if it's one series that will justify the investment, it's monster hunter. If they are going to reuse asset's, I don't see why they wouldn't use MHO or frontier, both look better than MH4.
 

ZOONAMI

Junior Member
Even if it uses a custom, high-end Pascal-based SoC on 16nm (a very un-Nintendo thing to do, IMHO), I find it very hard to believe that it would exceed 500 GFLOPs in portable mode.

I don't think 1:1 ports of high-end AAA console games were a significant design goal.

At 540p, things would look pretty amazing on a sub 6" screen.

If the dock provides any additional cooling and power envelope, then optimistically, if this is a next-gen Tegra chip, we could be looking at over 1 TFLOP while docked, which should compare favorably to the Xbox One and PS4 given this is Nvidia vs AMD.

I sort of doubt it, but one can hope.
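
For reference on the "compare favorably" part, here is a rough back-of-the-envelope comparison against the published FP32 figures for the current consoles. The docked NX number below is purely the speculative "over 1 TFLOP" scenario from the post above, not a known spec:

```python
# Rough FP32 comparison; the NX figure is the poster's speculation, not a spec.
known_fp32_tflops = {
    "Xbox One": 1.31,  # 768 shaders x 2 FLOPs x 0.853 GHz
    "PS4": 1.84,       # 1152 shaders x 2 FLOPs x 0.8 GHz
}
speculative_nx_docked_tflops = 1.0

for name, tflops in known_fp32_tflops.items():
    ratio = speculative_nx_docked_tflops / tflops
    print(f"{name}: {tflops:.2f} TFLOPs -> hypothetical NX would be {ratio:.0%} of it")
```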
 
"Turbo powered last gen console."

So basically another Nintendo system that'll mostly only be worth it for the first party games.



It's a handheld. Sheeeeeeeeeesh.


Even if it uses a custom, high-end Pascal-based SoC on 16nm (a very un-Nintendo thing to do, IMHO), I find it very hard to believe that it would exceed 500 GFLOPs in portable mode.

I don't think 1:1 ports of high-end AAA console games were a significant design goal.




Best case handheld scenario is 300 GFLOPs max imo. That is with Pascal.
 
Even if it uses a custom, high-end Pascal-based SoC on 16nm (a very un-Nintendo thing to do, IMHO), I find it very hard to believe that it would exceed 500 GFLOPs in portable mode.

I don't think 1:1 ports of high-end AAA console games were a significant design goal.

Assuming a 540p screen though, 500 GFLOPs should be pretty adequate at displaying AAA ports, as it's pushing a quarter of the pixels (of 1080p games). And when docked it can run the clocks at full speed, displaying the games at 1080p, unless I'm mistaken.

Also, recall Nvidia flops "perform" better than AMD flops. I don't think power will be the reason that a Parker chip NX doesn't get AAA ports (install base and audience will be).
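
A quick sanity check on the "quarter of the pixels" point, just multiplying out the resolutions:

```python
# 960x540 is exactly one quarter of 1920x1080 in pixel count, which is
# why the same GPU budget stretches much further on the handheld screen.
pixels_1080p = 1920 * 1080   # 2,073,600
pixels_540p = 960 * 540      # 518,400
print(pixels_540p / pixels_1080p)  # 0.25
```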
 

dr_rus

Member
R700 GPUs exist: they are the HD 4000 series by ATI, and VLIW5 is the architecture used for them.
The first GCN card was released before the WiiU, so one can easily assume the final specifications for the architecture were ready well before the console went into production and could've been adopted... as could a smaller 28nm node (already available in late 2011/very early 2012) instead of the ancient 45/40nm they used.

As you can see, Nintendo really doesn't give a damn about being behind technologically, so if they found some reason (cheaper license and fabs, I assume) to think the X1 at 20nm is good enough, they certainly won't wait for the more modern Pascal SoC.

There are no "R700 GPUs", what you're talking about is Evergreen and Northern Islands families based on AMD's TeraScale architecture (which consist of VLIW5 and VLIW4 subfamilies).

WiiU was released in November 2012 while 7800 series GPUs which might have been used in it were released in March 2012. That's half a year apart and obviously not enough to build a custom SoC or even MXM with IBM Power CPU on it. Add 28nm shortages and higher cost to the mix and you'll see that basically WiiU was out of luck with GCN and had to resort to older TeraScale GPU. This has nothing to do with what Nintendo prefer to use and everything to do with what is actually available to be used.

A smaller node isn't always beneficial compared to an older, thicker one. This is a valid reason to prefer TX1 @ 20nm instead of TX2 on 16nm, yes. But then again, if you consider that 16nm has been used for mobile SoC production since last year and that NX should launch in 1Q17, giving the 16FF node almost two years of maturing time, you'll see that it's actually quite possible they'll find the newer 16nm node to be a better fit for them here. So still, I think that Parker makes a lot of sense considering the time frames and what they want to achieve.

Even if it uses a custom, high-end Pascal-based SoC on 16nm (a very un-Nintendo thing to do, IMHO), I find it very hard to believe that it would exceed 500 GFLOPs in portable mode.

I don't think 1:1 ports of high-end AAA console games were a significant design goal.

Tegra X1 is 500 GFLOPs of FP32 or 1 TFLOP of FP16 in "mobile mode" (~10W power consumption). A Tegra Parker chip on 16nm should be able to go way higher than that even in "mobile mode", and in stationary mode with active cooling it may in fact outrun Xbox One's h/w.
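
For anyone wanting to check those figures, the shader math behind them is straightforward: GFLOPs = CUDA cores x 2 FLOPs per clock (one fused multiply-add) x clock in GHz, with the X1's Maxwell GPU able to double that in packed FP16. A minimal sketch, assuming the X1's 256 CUDA cores and purely illustrative clocks:

```python
# GFLOPs = CUDA cores x 2 FLOPs/clock (fused multiply-add) x clock in GHz;
# Maxwell/Pascal Tegra parts can double throughput with packed FP16.
def tegra_gflops(cuda_cores, clock_ghz, fp16=False):
    return cuda_cores * 2 * clock_ghz * (2 if fp16 else 1)

print(tegra_gflops(256, 1.0))             # ~512 GFLOPs FP32 at 1 GHz
print(tegra_gflops(256, 1.0, fp16=True))  # ~1024 GFLOPs (1 TFLOP) FP16
print(tegra_gflops(256, 0.5))             # ~256 GFLOPs FP32 at 500 MHz
```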

Would have been nuts if the Wii U had a 7850 equivalent, huh? If they'd called it Wii HD, I think the whole gen would have been dramatically different.

Yeah, they'd probably want to use something much slower like Cape Verde or something. Timings are kinda the same for these though.
 

ZOONAMI

Junior Member
There are no "R700 GPUs", what you're talking about is Evergreen and Northern Islands families based on AMD's TeraScale architecture (which consist of VLIW5 and VLIW4 subfamilies).

WiiU was released in November 2012 while 7800 series GPUs which might have been used in it were released in March 2012. That's half a year apart and obviously not enough to build a custom SoC or even MXM with IBM Power CPU on it. Add 28nm shortages and higher cost to the mix and you'll see that basically WiiU was out of luck with GCN and had to resort to older TeraScale GPU. This has nothing to do with what Nintendo prefer to use and everything to do with what is actually available to be used.

Smaller node isn't always beneficial compared to an older thicker one. This is a valid reason to prefer TX1 @20nm instead of TX2 on 16nm, yes, but then again if you consider that 16nm was used for mobile SoC production since last year and that NX should launch in 1Q17 giving 16FF node almost two years of maturing time you'll see that it's actually quite possible that they'll find the newer 16nm node to be a better fit for them here. So still, I think that Parker makes a lot of sense considering the time frames and what they want to achieve.

Would have been nuts if the Wii U had a 7850 equivalent, huh? If they'd called it Wii HD, I think the whole gen would have been dramatically different.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
There are no "R700 GPUs", what you're talking about is Evergreen and Northern Islands families based on AMD's TeraScale architecture (which consist of VLIW5 and VLIW4 subfamilies).

WiiU was released in November 2012 while 7800 series GPUs which might have been used in it were released in March 2012. That's half a year apart and obviously not enough to build a custom SoC or even MXM with IBM Power CPU on it. Add 28nm shortages and higher cost to the mix and you'll see that basically WiiU was out of luck with GCN and had to resort to older TeraScale GPU. This has nothing to do with what Nintendo prefer to use and everything to do with what is actually available to be used.
I've been repeating that for years on these boards. Zero effect - people can be impervious to facts if they so choose.

Huh, if this is the case then NX could probably fully emulate PS2 o,O I'm impressed with Nintendo's offerings this generation. I was considering buying a GPD Win for this purpose alone, but now I think I will just wait for NX to get hacked and enjoy those sweet homebrews on it. Too bad that Tegra is not an x86 CPU, though.
I haven't been following the PS2 emu scene, so I cannot comment. Something tells me, though, that if enough effort were put into PS2 emulation on ARMv8, it could go even further than whatever its current state on x86 might be, simply because MIPS and ARM are much closer to each other than to x86.
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
Hasn't the "Capcom doesn't make new asset thing" myth been dispelled? Didn't they make a bunch for tri? And if it's one series that will justify the investment, it's monster hunter. If they are going to reuse asset's, I don't see why they wouldn't use MHO or frontier, both look better than MH4.

Yeah, they make new assets, but they're 3DS-level assets. I'm saying they aren't going to collaborate with Tencent on making high-end assets for Monster Hunter.
 

Thraktor

Member
Even if it uses a custom, high-end Pascal-based SoC on 16nm (a very un-Nintendo thing to do, IMHO), I find it very hard to believe that it would exceed 500 GFLOPs in portable mode.

I don't think 1:1 ports of high-end AAA console games were a significant design goal.

I agree, but even using a low-clocked X1 is a very un-Nintendo thing to do in terms of handheld hardware design. The X1 should be able to clock the GPU to around 500MHz in a handheld environment, and at 256 Gflops that would put it in the theoretical range of top end mobile chips, while using a full desktop class architecture. The 3DS, by comparison, was using a GPU which was an order of magnitude less computationally powerful than contemporary mobile chips, and on a less capable architecture to boot.
 

tuxfool

Banned
At 540p, things would look pretty amazing on a sub 6" screen.

If the dock provides any additional cooling and power envelope, then optimistically, if this is a next-gen Tegra chip, we could be looking at over 1 TFLOP while docked, which should compare favorably to the Xbox One and PS4 given this is Nvidia vs AMD.

I sort of doubt it, but one can hope.

At 540p most games designed for 1080p screens would look pretty ugly. The resolution wouldn't justify the detail in more modern games.

The argument against that is the 3DS, where the shit-tier screens completely wrecked the IQ of the games; one need only look at Citra screenshots.
 

Durante

Member
Best case handheld scenario is 300 GFLOPs max imo. That is with Pascal.
I wouldn't say that this is the absolute best case possible, that's probably a bit higher, but it's what I'd call decently realistic if Nintendo decides to go for very up-to-date technology. Which in itself would be surprising, considering their past decade of hardware designs.

Tegra X1 is 500 GFLOPs of FP32 or 1 TFLOP of FP16 in "mobile mode" (~10W power consumption).
I thought it's 500 GFLOPs at 1 GHz, which it only runs at in a stationary device. AFAIK it's ~800 MHz in a relatively large tablet with a huge battery.

In a gaming portable (on which software will run that actually makes full use of the hardware), and where there are different constraints on battery life and size, I don't think it's realistic to expect a sustained 10W budget for the SoC. And since the GPU will likely be utilized more fully, whatever power budget it ends up with might actually translate to lower clocks than it would in a phone/tablet.

I agree, but even using a low-clocked X1 is a very un-Nintendo thing to do in terms of handheld hardware design. The X1 should be able to clock the GPU to around 500MHz in a handheld environment, and at 256 Gflops that would put it in the theoretical range of top end mobile chips, while using a full desktop class architecture. The 3DS, by comparison, was using a GPU which was an order of magnitude less computationally powerful than contemporary mobile chips, and on a less capable architecture to boot.
Exactly.
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
At 540p, things would look pretty amazing on a sub 6" screen.

If the dock provides any additional cooling and power envelope, then optimistically, if this is a next-gen Tegra chip, we could be looking at over 1 TFLOP while docked, which should compare favorably to the Xbox One and PS4 given this is Nvidia vs AMD.

I sort of doubt it, but one can hope.

You guys are setting yourself up for disappointment.

For someone like myself, who has been expecting a 300 GFLOP handheld output and a 600 GFLOP docked output (and thinking I was being a bit too optimistic with the docked metrics), having people think Nintendo are going for a stronger-than-XB1 handheld device while selling it at a bargain price casuals will jump on seems extremely grasping, personally speaking.
 

orioto

Good Art™
You guys are setting yourself up for disappointment.

For someone like myself, who has been expecting a 300 GFLOP handheld output and a 600 GFLOP docked output (and thinking I was being a bit too optimistic with the docked metrics), having people think Nintendo are going for a stronger-than-XB1 handheld device while selling it at a bargain price casuals will jump on seems extremely grasping, personally speaking.

Even like that, Vita is 4 times less powerful than a PS3 and could reach visual equality in some cases.

Between 300 and 500 gflops at 540p should give us pretty solid visuals. And next gen games have some margin for downgrades.

I mean, let's imagine you port FFXV to NX, based on the XBO version. I'm pretty sure you can downgrade those rocks and character models quite a bit before people notice it, especially on a 5-inch 540p screen. Best thing is, the result will look sharper at that size.
 

Thraktor

Member
I've been repeating that for years on these boards. Zero effect - people can be impervious to facts if they so choose.

People also have a very strong tendency to assume that consoles and handhelds are always just slapped together out of off the shelf hardware. Such as in this thread, where everyone is debating whether they will use TX1 or TX2, despite the fact that Nintendo have literally never used an off the shelf CPU or GPU in any of their hardware (even NES, SNES, N64, GB, etc used custom variants of existing designs, not off the shelf chips).
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
Even like that, Vita is 4 times less powerful than a PS3 and could reach visual equality in some cases.

Between 300 and 500 gflops at 540p should give us pretty solid visuals. And next gen games have some margin for downgrades.

I mean, let's imagine you port FFXV to NX, based on the XBO version. I'm pretty sure you can downgrade those rocks and character models quite a bit before people notice it, especially on a 5-inch 540p screen. Best thing is, the result will look sharper at that size.

Er... where did you get those Vita/PS3 comparisons? Vita's GPU is only ~27 GFLOPs. Not even counting the Cell, the PS3's actual GPU is significantly more powerful by default, and the unit's bandwidth is far higher than what the Vita offers too. There is literally no comparison between a Vita-level device and a console like the PS3.

Also, FF15 is not tenable. The entire design of the world on PS4 and XB1 imposes far more bandwidth requirements alone than what this NX device will allow in that form factor, considering they are limited to mobile DDR4 variants.
 
Even like that, Vita is 4 times less powerful than a PS3 and could reach visual equality in some cases.

Between 300 and 500 gflops at 540p should give us pretty solid visuals. And next gen games have some margin for downgrades.

I mean, let's imagine you port FFXV to NX, based on the XBO version. I'm pretty sure you can downgrade those rocks and character models quite a bit before people notice it, especially on a 5-inch 540p screen. Best thing is, the result will look sharper at that size.



Vita is more like 6 to 8 times slower than PS3.
 

LeleSocho

Banned
There are no "R700 GPUs", what you're talking about is Evergreen and Northern Islands families based on AMD's TeraScale architecture (which consist of VLIW5 and VLIW4 subfamilies).

WiiU was released in November 2012 while 7800 series GPUs which might have been used in it were released in March 2012. That's half a year apart and obviously not enough to build a custom SoC or even MXM with IBM Power CPU on it. Add 28nm shortages and higher cost to the mix and you'll see that basically WiiU was out of luck with GCN and had to resort to older TeraScale GPU. This has nothing to do with what Nintendo prefer to use and everything to do with what is actually available to be used.

Smaller node isn't always beneficial compared to an older thicker one. This is a valid reason to prefer TX1 @20nm instead of TX2 on 16nm, yes, but then again if you consider that 16nm was used for mobile SoC production since last year and that NX should launch in 1Q17 giving 16FF node almost two years of maturing time you'll see that it's actually quite possible that they'll find the newer 16nm node to be a better fit for them here. So still, I think that Parker makes a lot of sense considering the time frames and what they want to achieve.

I don't know how you can still deny that R700 GPUs exist; it's the codename for all HD 4000 cards. I even know from memory the name of my old one (RV770 LE, aka 4830). The only right thing you are saying is that they also have other more or less specific names; heck, you even mention Evergreen, which is the codename of the HD 5000 cards and not the ones I'm talking about.

And again, GCN debuted in January 2012 with the HD 7970 (google it); this is a whopping 11 months before WiiU was released. Add to that the fact that the design of a GPU isn't finished the day before it's released, and you can say that if Nintendo had cared enough to have the latest tech possible and worked more closely with AMD, we could have had a GCN WiiU.
Your argument is that 28nm had shortages? Still doesn't matter that much, since 32nm was a thing in October 2011.

The only valid reason to prefer planar 20nm over 16/14nm FinFET is that the fabs have free capacity and are hella cheaper; this is not something the end consumer cares about at all, and it only benefits Nintendo.

Nintendo has shown that it doesn't care about being on the latest tech, period.
 

Discomurf

Member
...we can't help but wonder whether X1 is the final hardware we'll see in the NX. Could it actually be a placeholder for Tegra X2? It's a new mobile processor Nvidia has in its arsenal and what's surprising about it is how little we actually know about it.

[Image: "So you're telling me there's a chance" reaction meme]
 

Thraktor

Member
Er... where did you get those Vita/PS3 comparisons? Vita's GPU is only ~27 GFLOPs. Not even counting the Cell, the PS3's actual GPU is significantly more powerful by default, and the unit's bandwidth is far higher than what the Vita offers too. There is literally no comparison between a Vita-level device and a console like the PS3.

Also, FF15 is not tenable. The entire design of the world on PS4 and XB1 imposes far more bandwidth requirements alone than what this NX device will allow in that form factor, considering they are limited to mobile DDR4 variants.

Not that I expect FF15 on NX, but the "design of the world" shouldn't have any effect on RAM bandwidth requirements, which are almost purely a function of resolution and rendering techniques (stuff like texture fetches have a very high cache hit rate on modern GPUs, so don't actually impact BW requirements all that much). And for a ~300 Gflop GPU rendering to a 540p screen, the 30 GB/s you can get from a single LPDDR4 chip would probably do the job, although if you really need to you can drop two chips in there for 60GB/s. Open world games have a much larger dependence on storage read speeds, where the eMMC/UFS you'd see in NX would actually have a substantial speed advantage over the HDDs used in PS4/XBO, while presumably pulling much lower res assets.
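
The bandwidth figures quoted here follow from the usual peak-bandwidth formula (bus width in bytes times transfer rate); a small sketch, with LPDDR4-3200 used as an assumed, purely illustrative speed grade:

```python
# Peak bandwidth (GB/s) = bus width in bytes x transfer rate in GT/s.
# LPDDR4-3200 is an assumed speed grade here, for illustration only.
def peak_bandwidth_gbs(bus_width_bits, transfer_rate_mtps):
    return (bus_width_bits / 8) * (transfer_rate_mtps / 1000)

print(peak_bandwidth_gbs(64, 3200))   # 25.6 GB/s - one 64-bit LPDDR4 channel
print(peak_bandwidth_gbs(128, 3200))  # 51.2 GB/s - two channels
```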
 

Rodin

Member
For people who are saying that Nintendo is backing out of home consoles for good, maybe they're not wrong, but I keep thinking about Iwata's comment that there could be multiple form factors in a family that are "like brothers", and how he compared the concept to Apple and iOS.

Maybe this is just the first in a line of new products all within the NX family.
That's what I'm thinking as well, especially considering Iwata talked about introducing new form factors (which this literally is). Nintendo also said multiple times that NX wasn't a successor to Wii U and 3DS, but a new concept.

I'm also convinced that this will be tablet sized (iirc, Tegra isn't feasible in a 5" or less device, not unless you cut its clock speed to the point that it isn't a Tegra X1 anymore), and the market for a dedicated handheld is different from that not only because of portability, but because of cost too: one of the main strengths of Nintendo handhelds is the sub-$200 price point, and there's no way this new device will be less than $250.

Now, they may or may not release a new home console that beats this in performance, but I absolutely don't see Nintendo walking out of the cheap dedicated handheld market. They will retire the 3DS and have a new 4.5"-5", 540p, sub-$200 portable that plays any (or most) NX games at lower details. This would fall in line with what they anticipated, and with what we discussed many times.

The only weird thing in this scenario is that Eurogamer only heard about this device, but if it's a controlled rumor (like some people implied) then it would make sense.

It was a long shot, but I was hoping the NX would be powerful enough to pull off VR. I know everyone and their brother is going that route, but Nintendo's worlds are the ones I'd most like to visit.
Nintendo thinks VR isn't quite there yet, especially because it currently gives motion sickness to many people. When you think about their main public, it's easy to see why they can't have that yet.

People also have a very strong tendency to assume that consoles and handhelds are always just slapped together out of off the shelf hardware. Such as in this thread, where everyone is debating whether they will use TX1 or TX2, despite the fact that Nintendo have literally never used an off the shelf CPU or GPU in any of their hardware (even NES, SNES, N64, GB, etc used custom variants of existing designs, not off the shelf chips).
Personally, I'm just wondering if the SoC they'll use will be based on the X1 or the X2. I just said "X1 or X2", but that's what I meant.
 

Lonely1

Unconfirmed Member
I thought it's 500 GFLOPs at 1 GHz, which it only runs at in a stationary device. AFAIK it's ~800 MHz in a relatively large tablet with a huge battery.

Can I dream of a handheld with a huge 7000+ mAh battery and active cooling, like the original Shield!? It was $199!
 
For some of the tech savvy people here, am I correct in assuming that the clocks can run at full speed (or even overclocked) when docked, assuming the dock includes a power supply and some sort of cooling solution? The downclocking of the portable is mainly an issue of preserving battery life, right?

In that case, it might be prudent to separate discussion about performance into "portable mode" and "console mode" to clarify what the X1 (and potentially X2/semi-custom design) would be capable of in both of those modes. Keeping in mind that the portable screen will likely not be 1080p or even 720p, would it be possible to get ~standard AAA port performance on both the "portable mode" and "console mode" based on what we know?

I ask because of Osirisblack's comment about the NX being quite capable of playing XB1/PS4 games, and how that information reconciles with this new rumor.
 
Not bad for a handheld I suppose. Hope it's not too expensive though.

Perhaps the "dock" doesn't add more power, but maybe with it, the Handheld can be over clocked safely.
 

LeleSocho

Banned
For some of the tech savvy people here, am I correct in assuming that the clocks can run at full speed (or even overclocked) when docked, assuming the dock includes a power supply and some sort of cooling solution? The downclocking of the portable is mainly an issue of preserving battery life, right?

In that case, it might be prudent to separate discussion about performance into "portable mode" and "console mode" to clarify what the X1 (and potentially X2/semi-custom design) would be capable of in both of those modes. Keeping in mind that the portable screen will likely not be 1080p or even 720p, would it be possible to get ~standard AAA port performance on both the "portable mode" and "console mode" based on what we know?

I ask because of Osirisblack's comment about the NX being quite capable of playing XB1/PS4 games, and how that information reconciles with this new rumor.

If as rumored it's actively cooled then you can at least expect the full nominal clock.
 

G.ZZZ

Member
The main issues are power draw and cooling, so yeah, I don't think an X1-like chip should have any problems being clocked at, say, a fourth/fifth of the clock on mobile (at around 2.5-3W) and then going to the full 10W or more on the stand. I think the portable unit has to be reasonably big to allow this to work, however. A custom external cooling system could work too, but who knows.

This'd mean something around WiiU performance in mobile mode, I guess, with a newer architecture and all that jazz.

Keep in mind the WiiU was the greenest console last gen at around 30W. This would consume half of that, if even. I don't think Nintendo would skimp on the wattage of this, at least in console mode.
 
If as rumored it's actively cooled then you can at least expect the full nominal clock.

The main issues are power draw and cooling, so yeah, I don't think an X1-like chip should have any problems being clocked at, say, a fourth/fifth of the clock on mobile (at around 2.5-3W) and then going to the full 10W or more on the stand. I think the portable unit has to be reasonably big to allow this to work, however. A custom external cooling system could work too, but who knows.

This'd mean something around WiiU performance in mobile mode, I guess, with a newer architecture and all that jazz.

Keep in mind the WiiU was the greenest console last gen at around 30W. This would consume half of that, if even. I don't think Nintendo would skimp on the wattage of this, at least in console mode.

So a Wii U level portable pushing 1/4 of the pixels would be able to create visuals quite a bit more impressive than the Wii U, in addition to the newer architecture and MUCH improved CPU, and a slightly below XB1 level home console seems possible (again, with improved feature set and CPU). I wouldn't be surprised if this did get some PS4/XB1 ports fairly easily, as the draw of saying, "you can play the full CoD experience on this portable or plug it in at home for an even greater experience" seems apparent.

I'll remain in the cautious optimism camp for now, since I'm quite convinced we haven't seen anything close to the full picture. But from what we have here it seems pretty neat.
 

Thraktor

Member
I'm also convinced that this will be tablet sized (iirc, Tegra isn't feasible in a 5" or less device, not unless you cut its clock speed to the point that it isn't a Tegra X1 anymore), and the market for a dedicated handheld is different from that not only because of portability, but because of cost too: one of the main strengths of Nintendo handhelds is the sub-$200 price point, and there's no way this device will be less than $250.

It absolutely is feasible in a 5" or less device. With the GPU clocked at about 500MHz it draws about 1.5W, which means ~2-2.5W for the full SoC, which is exactly in the range you'd expect for a thin 5" device. And as I mentioned a few points up, this would still put it in the raw performance range of high-end mobile SoCs, while using a desktop-class architecture.

Personally, I'm just wondering if the SoC they'll use will be based on the X1 or the X2. I just said "X1 or X2", but that's what I meant.

Even thinking that it's "based on" TX1 or TX2 is reductionist, though. Nintendo get to lay out this chip from scratch, they're unlikely to just take an existing chip and alter it. If you want to evaluate what they might use, then consider the different options they have when designing the chip:


  1. What manufacturing process will they use?
    This seems to be boiling down to either 20nm or 16nm. Using 16nm is possibly a little more expensive, but allows them to use higher clock speeds within the same thermal limit, and allows them to use the Pascal rather than Maxwell architecture (although on a performance level I'm not sure the difference is all that big).
  2. What CPU cores will they use, and how many of them?
    The options here are basically A35, A53, A57, A72 and Denver, and they could use up to 8, potentially with a mix of cores (although that's less useful in a gaming environment than it is in a phone environment). More powerful cores (everything but A35/A53) will cost more and have to be clocked lower to fit the thermal limit.
  3. How many SMs (streaming multiprocessors) will the GPU use?
    There are 128 CUDA "cores" per SM (true for both Maxwell and Pascal). More SMs means more performance (even at a given thermal limit), but obviously adds to the cost. There obviously can't be less than 1, and I'd say 4 is the absolute practical limit.
  4. How wide a memory interface will be needed?
    Assuming that they're using LPDDR4, you're looking at an interface width that's a multiple of 64 bits, and that provides 25-30 GB/s per 64 bits of bus width. Using a wider interface means using more RAM chips, increasing costs, and an increase to the cost of the SoC itself, as the memory interface takes up die space. A bus width of 64 bits is most likely here, possibly 128. Higher than that only if they're clocking the SoC really high in docked mode. They'll also need to consider the on-chip memory sub-system (i.e. caches) as part of this.
  5. What clock speeds will everything run at?
    This is a function of all the decisions made above, at least in handheld mode where they're going to have to constrain themselves to a 2-3W TDP. In theory it could clock higher in docked mode, although this depends on what if any cooling solution they have in place.

For some of the tech savvy people here, am I correct in assuming that the clocks can run at full speed (or even overclocked) when docked, assuming the dock includes a power supply and some sort of cooling solution? The downclocking of the portable is mainly an issue of preserving battery life, right?

In that case, it might be prudent to separate discussion about performance into "portable mode" and "console mode" to clarify what the X1 (and potentially X2/semi-custom design) would be capable of in both of those modes. Keeping in mind that the portable screen will likely not be 1080p or even 720p, would it be possible to get ~standard AAA port performance on both the "portable mode" and "console mode" based on what we know?

I ask because of Osirisblack's comment about the NX being quite capable of playing XB1/PS4 games, and how that information reconciles with this new rumor.

In theory yes. In fact, in theory there's no reason a 16nm Pascal chip couldn't run as high as 1.6GHz provided suitable power delivery and cooling. The problem is how you engineer that cooling. To take the simplest example, if Nintendo use an aluminium back panel on the handheld which has direct heat transfer from the SoC, and then you just have a fan pointed at it while docked, you may just about sufficiently cool a ~25W chip, but you'll also end up with something which is very hot to the touch after prolonged use. I can't think of a reasonable way to dissipate the heat while docked while still allowing people to pick it up from the dock immediately after use without it being uncomfortably hot.
 
I still need to find out by September whether or not the whole thing is developer-friendly. That's the main way it'll have third-party support of any kind.
 
In theory yes. In fact, in theory there's no reason a 16nm Pascal chip couldn't run as high as 1.6GHz provided suitable power delivery and cooling. The problem is how you engineer that cooling. To take the simplest example, if Nintendo use an aluminium back panel on the handheld which has direct heat transfer from the SoC, and then you just have a fan pointed at it while docked, you may just about sufficiently cool a ~25W chip, but you'll also end up with something which is very hot to the touch after prolonged use. I can't think of a reasonable way to dissipate the heat while docked while still allowing people to pick it up from the dock immediately after use without it being uncomfortably hot.

Yeah that's something I was thinking about too. I can think of a potential way to have heat sink contacts in the docking connection connect to an actively cooled heat sink, but that doesn't seem like something which could dissipate heat fast enough for a "pick up and play" type system like we are told this is.

Will be very interesting to see the tech behind the whole dock idea.
 

Rodin

Member
It absolutely is feasible in a 5" or less device. With the GPU clocked at about 500MHz it draws about 1.5W, which means ~2-2.5W for the full SoC, which is exactly in the range you'd expect for a thin 5" device. And as I mentioned a few points up, this would still put it in the raw performance range of high-end mobile SoCs, while using a desktop-class architecture.
Thanks for the clarification, that's pretty cool. I still think that this hybrid system will end up being too expensive to "replace" the 3DS though, so it can still make sense for them to ship this as a tablet-sized console (when standalone) and still have a smaller handheld with another (lower-clocked) Nvidia chip down the line. A "full portable" that still plays the same games but at lower resolution/details (e.g. if the hybrid is a 720p tablet that plays games at mid settings, and at 1080p/high settings when docked, the portable can play the same titles at 540p/low details) and that will cost ~$149-179, in line with DS and 3DS.

This would also match what Iwata and other Nintendo execs hinted at multiple times. It wasn't about having only one system that plays all of their games, but a multitude of systems (with new form factors too, like this one) that share the same architecture/OS and are able to play the same games.

Even thinking that it's "based on" TX1 or TX2 is reductionist, though. Nintendo get to lay out this chip from scratch, they're unlikely to just take an existing chip and alter it.

Oh I know, but we're still discussing a Digital Foundry article that literally mentions Tegra X1 and Tegra X2.

If you want to evaluate what they might use, then consider the different options they have when designing the chip:


  1. What manufacturing process will they use?
    This seems to be boiling down to either 20nm or 16nm. Using 16nm is possibly a little more expensive, but allows them to use higher clock speeds within the same thermal limit, and allows them to use the Pascal rather than Maxwell architecture (although on a performance level I'm not sure the difference is all that big).
  2. What CPU cores will they use, and how many of them?
    The options here are basically A35, A53, A57, A72 and Denver, and they could use up to 8, potentially with a mix of cores (although that's less useful in a gaming environment than it is in a phone environment). More powerful cores (everything but A35/A53) will cost more and have to be clocked lower to fit the thermal limit.
  3. How many SMs (streaming multiprocessors) will the GPU use?
    There are 128 CUDA "cores" per SM (true for both Maxwell and Pascal). More SMs means more performance (even at a given thermal limit), but obviously adds to the cost. There obviously can't be less than 1, and I'd say 4 is the absolute practical limit.
  4. How wide a memory interface will be needed?
    Assuming that they're using LPDDR4, you're looking at an interface width that's a multiple of 64 bits, and that provides 25-30 GB/s per 64 bits of bus width. Using a wider interface means using more RAM chips, increasing costs, and an increase to the cost of the SoC itself, as the memory interface takes up die space. A bus width of 64 bits is most likely here, possibly 128. Higher than that only if they're clocking the SoC really high in docked mode. They'll also need to consider the on-chip memory sub-system (i.e. caches) as part of this.
  5. What clock speeds will everything run at?
    This is a function of all the decisions made above, at least in handheld mode where they're going to have to constrain themselves to a 2-3W TDP. In theory it could clock higher in docked mode, although this depends on what if any cooling solution they have in place.
What do you think is the best balance possible in terms of CPU cores/clock and GPU SMs/clock (when docked and standalone), memory configuration, screen size, screen resolution and battery, while keeping an eye on cost? I think this sounds like a very delicate thing, and it could be difficult to balance it out in the best possible way. Of course I'm not an engineer, but I was thinking about how the public's reception can vary based on how they do it.
 
I really, seriously, hope they use an X2 or a boosted version of the X1, fabbed at 14/16nm.

Almost everyone outside of Apple kinda shat out products at the 20nm level; Qualcomm had a bad year in 2015 thanks to 20nm and all the bad publicity the Snapdragon 810 got. The GPU part of the X1 is strong, but the CPU part is nothing much to write home about: plenty of cores, but largely off-the-shelf standard parts. Watching the X1 gaming videos also underwhelmed me; it struggling to run RE5 of all things has put a damper on the power expectations.

To be honest as well, with the rapid advancement of mobile tech, and with the Apple A9 and the Samsung S7 processor out on the market with successors soon to follow, the X1 would be outdated quickly by a whole slew of mobiles and tablets releasing in the coming months and years ahead. 14/16nm is also a big boon for battery life and overall efficiency of the mobile chipset; that alone should push them towards using the process due to the mobile nature of the device.

I also hope they do NOT cheap out on RAM; a 4-6 GB device will essentially kill any chance of any other 3rd party games coming out on the system due to an insufficient memory allowance.
 

Thraktor

Member
Yeah that's something I was thinking about too. I can think of a potential way to have heat sink contacts in the docking connection connect to an actively cooled heat sink, but that doesn't seem like something which could dissipate heat fast enough for a "pick up and play" type system like we are told this is.

Will be very interesting to see the tech behind the whole dock idea.

Yeah, if they do use higher clocks while docked I suspect that it will be a fairly modest boost, maybe bringing the power draw up to 4-5W, where it can still be passively cooled and shouldn't get too hot (although it may get a little warm).

Edit: Keep in mind that "the tech behind the whole dock idea" may simply be no more than the Wii U gamepad's charging dock but with a HDMI out. In fact that's what I'd probably put my money on at the moment.

Oh I know, but we're still discussing a Digital Foundry article that literally mentions Tegra X1 and Tegra X2.

That's fair enough, and Eurogamer/DF really shouldn't be perpetuating the notion that it's just going to be one off the shelf chip or the other. Granted they're using the TX1 in dev kits, but it's entirely normal to use off the shelf parts that deliver approximate performance in dev kits, and final hardware often doesn't arrive with third parties until a few months before launch.

What do you think is the best balance possible in terms of cpu cores/clock and gpu SM/clock (when docked and standalone), memory configuration, screen size, screen resolution and battery, while keeping an eye on cost? I think this sounds like a very delicate thing, and it could be difficult to balance it out in the best possible way. Then of course i'm not an engineer, but i was thinking about how the reception of the public can vary based on how they do it.

Off the top of my head, absolute best case scenario would be a 16nm chip, running 8x A72 at about 750MHz, with 3x SMs running at about 500MHz and a 128 bit LPDDR4 interface for about 50-60 GB/s BW. Then let's say 6GB RAM (with 2GB for the OS). That'll end up quite a bit more expensive than the 3DS hardware (but much cheaper than a home console SoC+RAM), but by using a cheap <5" 540p screen they could probably squeeze it into $199 while just about breaking even.
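
Plugging that hypothetical spec into the same GFLOPs formula as earlier (3 Pascal SMs of 128 CUDA cores each; all clocks here are the poster's guesses, not leaked figures) gives:

```python
# Sanity-checking the hypothetical "best case" spec above: 3 Pascal SMs of
# 128 CUDA cores each. Both clocks are guesses from the discussion.
def gflops(sm_count, clock_ghz, cores_per_sm=128):
    return sm_count * cores_per_sm * 2 * clock_ghz  # 2 FLOPs/clock via FMA

print(gflops(3, 0.5))  # 384 GFLOPs in portable mode at ~500 MHz
print(gflops(3, 1.0))  # 768 GFLOPs if a dock allowed ~1 GHz
```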
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
I don't know how you can still deny that R700 GPUs exist; it's the codename for all HD 4000 cards. I even know from memory the name of my old one (RV770 LE, aka 4830). The only right thing you are saying is that they also have other more or less specific names; heck, you even mention Evergreen, which is the codename of the HD 5000 cards and not the ones I'm talking about.
R600/R700/R800 (Evergreen) are all iterations of the TeraScale architecture with only incremental performance advancements. That said..

And again, GCN debuted in January 2012 with the HD 7970 (google it); this is a whopping 11 months before WiiU was released. Add to that the fact that the design of a GPU isn't finished the day before it's released, and you can say that if Nintendo had cared enough to have the latest tech possible and worked more closely with AMD, we could have had a GCN WiiU.
A stand-alone GPU a "whopping 11 months before WiiU was released" means zilch in the context of WiiU - an MCM design much closer to an APU. AMD themselves continued launching APUs with TeraScale as late as mid-2013, while their first GCN APU (Kabini) came out in Q2'13 - AMD were not capable of launching a GCN APU any earlier. And they never released a GCN APU on a node larger than 28nm. And there was no 28nm when WiiU was entering production.

Your argument is that 28nm had shortages? Still doesn't matter that much since 32nm was a thing in October 2011.
How many 32nm GCN APUs has AMD released? That's right - a "whopping" 0.

The only valid reason to prefer planar 20nm over 16/14nm FinFET is that the fabs have free capacity and are hella cheaper; this is not something the end consumer cares about at all, and it only benefits Nintendo.
20nm was never "cheaper" - that's the reason practically everybody skipped it. I can count all the 20nm mass-production SoCs on the fingers of one hand.

Nintendo has shown that it doesn't care about being on the latest tech, period.
Perhaps they don't, but your reasoning has not proven that in the least.
 
After this announcement I wonder how many of those PS4/Xbox One/NX titles are still coming to NX, if they will have to be significantly watered down to run on Nintendo's underpowered platform now.
 

Rodin

Member
Off the top of my head, absolute best case scenario would be a 16nm chip, running 8x A72 at about 750MHz, with 3x SMs running at about 500MHz and a 128 bit LPDDR4 interface for about 50-60 GB/s BW. Then let's say 6GB RAM (with 2GB for the OS). That'll end up quite a bit more expensive than the 3DS hardware (but much cheaper than a home console SoC+RAM), but by using a cheap <5" 540p screen they could probably squeeze it into $199 while just about breaking even.

That's 384 GFLOPs when used standalone, right? That would be an absolutely insane jump from the 3DS (which should be around 6.4 GFLOPs on a much older architecture), possibly one of the largest ever seen in terms of raw numbers (we're talking about 60x more powerful without even considering the huge architectural advantages). I wonder if 4 SMs * 400MHz is feasible though (that's 409.6 GFLOPs).

At the same time, assuming that the dock allows the GPU to reach 1GHz (and a higher clock for those A72 cores), we would be looking at 768 GFLOPs, which isn't too hot for a home console, but I'd take it at the right price and considering the handheld side of the picture. 1.3GHz would give us 1 TFLOP, but it doesn't sound realistic at all.

About the RAM, 4GB for games should be enough, but hopefully they can deal with 1-1.5GB for the OS, so that they can have 4.5-5GB to use for games.
 
After this announcement I wonder how many of those PS4/Xbox One/NX titles are still coming to NX, if they will have to be significantly watered down to run on Nintendo's underpowered platform now.
Would they really have to be that watered down? This seems like it could be much closer than the gap between Wii and PS360. Even if games have to run at 720p the port could be quite decent.
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
Would they really have to be that watered down? This seems like it could be much closer than the gap between Wii and PS360. Even if games have to run at 720p the port could be quite decent.

Some games already run at 720p on XB1. Anything less than a TFLOP and you're probably going to have to start significantly losing graphical fidelity if you don't want to go sub-HD.

If it's ~500 GFLOPs like some are speculating, that is going to be a decent culling in certain aspects.
 