
A Nintendo Switch has been taken apart

KingSnake

The Birthday Skeleton
I don't think a teardown qualifies as a preview. Plus, does DF usually do teardowns? I don't remember seeing one from them.
 

Hermii

Member
I don't think a teardown qualifies as a preview. Plus, does DF usually do teardowns? I don't remember seeing one from them.

Pretty sure they did one for the Wii U. There's little point in doing a teardown for any other company's console, as they will tell you the exact specs.
 
And the last sentence you wrote tells us that it can't be Eurogamer's clocks on 16nm, because that draws under 1.5W for the SoC and couldn't have a heat profile like that with the active cooling. Foxconn's clocks are also pretty much impossible on 20nm because the battery would drain in just over an hour.

Aren't the Foxconn clocks for docked? 🤔
I thought the Shield TV could run as fast or faster than those figures.

I may be getting my wires crossed, were there any specific docked and undocked performance claims from the Foxconn guy?
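As a rough sanity check on the battery-drain point quoted above, here's a back-of-envelope sketch in Python. The battery capacity and the power draws are assumptions picked for illustration (a ~16Wh pack and guessed SoC/system wattages), not confirmed Switch figures.

# All inputs are assumptions for illustration, not confirmed specs.
battery_wh = 16.0                  # assumed ~16Wh pack (roughly 4310mAh at 3.7V)

def runtime_hours(system_watts):
    # Hours of portable runtime at a constant total system draw.
    return battery_wh / system_watts

print(runtime_hours(1.5 + 3.0))    # ~3.6h: low SoC draw plus ~3W for screen and the rest
print(runtime_hours(11.0 + 3.0))   # ~1.1h: high SoC draw, i.e. "just over an hour"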
 
Aren't the Foxconn clocks for docked? 🤔
I thought the Shield TV could run as fast or faster than those figures.

I may be getting my wires crossed, were there any specific docked and undocked performance claims from the Foxconn guy?

MDave did tests on his Shield TV and found out it throttles either the GPU or CPU (or both to a lesser extent) to levels below the Foxconn clocks after a few minutes at max clocks.

The Foxconn leaker made no mention of docked or undocked clocks. In fact, we still have no official word on whether or not clocks change when docked, although Eurogamer's report says they can.
 

KingSnake

The Birthday Skeleton
Pretty sure they did one for the Wii U. There's little point in doing a teardown for any other company's console, as they will tell you the exact specs.

Are you sure? We wouldn't have needed the high-res pictures of the chips right here on GAF and the hundreds of posts debating those if DF had done that.
 

sits

Member
If not a literal tear-down with high-res photos, maybe someone comes out with the complete hardware profile in the next 24 hours, no longer needing to hide behind anonymity?
 

Hermii

Member
Are you sure? We wouldn't have needed the high-res pictures of the chips right here on GAF and the hundreds of posts debating those if DF had done that.

A teardown does not equal Chipworks die shots. And I wasn't sure; I tried to Google it and didn't find it, so probably not. They made an article about the Iwata Asks that contained pictures of the MCM, though.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
That was in fact, the exact point I was replying to from the poster: "100% of pixel work can be done in fp16". Sweeping generalities in these enthusiast discussions are painful to witness. I'm not sure anyone in this thread has even attempted to understand long pole on performance due to memory latency in the shader microcode, launch rate, etc., none of which will be helped by fp16. But even that's a generalization, because we have that situation already, which is why we sometimes fold passes because we have idle ALU cycles, etc.
Ok, I didn't pay close attention to that post by z0m3le and missed that generalisation, which was indeed way too broad, but for some categories (read: BRDF model complexity and target dynamic ranges) of pixel shaders the entire shader can get away with fp16. For instance, every single pixel shader in this video, as last-gen as they are, runs at fp16. And don't worry about wasting your points in this thread - while I've seen my share of ISA-level shader optimisations, I'm not currently on the GPU side of the building in the organisation I work at but I still do low-level optimisations on inherently visual stuff where loss of precision is not taken lightly. Suffice to say I wish I could get rid of all the fp64 in our codebase but I can't ; )

Sure, but they are the bread and butter of a console GPU programmer.
True. And yet, that's somewhat orthogonal to the precision topic.
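For anyone following the precision tangent, here's a tiny NumPy sketch of how coarse IEEE half precision actually is; a generic fp16 illustration, nothing measured on any Switch hardware.

import numpy as np

# IEEE fp16: 10 mantissa bits, roughly 3 decimal digits of precision.
print(np.finfo(np.float16).eps)    # ~0.000977, vs ~1.19e-07 for fp32

# Above 2048, consecutive integers stop being representable,
# so small contributions silently vanish when accumulated:
x = np.float16(2048.0)
print(x + np.float16(1.0) == x)    # True: 2048 + 1 rounds back to 2048

Which is why a low-dynamic-range BRDF term can live happily in fp16 while accumulation buffers, big world-space coordinates or HDR intermediates usually can't.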
 
MDave did tests on his Shield TV and found out it throttles either the GPU or CPU (or both to a lesser extent) to levels below the Foxconn clocks after a few minutes at max clocks.

The Foxconn leaker made no mention of docked or undocked clocks. In fact, we still have no official word on whether or not clocks change when docked, although Eurogamer's report says they can.

Thanks, I thought so. I am curious whether the performance/throttling is identical between the 2017 Shield TV and the previous one; I think I read that the chips don't look identical?
 

Mr Swine

Banned
I do hope that with the fan inside the Switch, Nintendo will increase the CPU/GPU clock speeds for docked and undocked down the road. Maybe not much, but 10-15% would be a sizeable boost?
 

AmyS

Member
I haven't been keeping up with this thread for the last day or so.

Questions:

1. Have we learned anything new about Switch's final specs?

2. What are the latest reasonable, plausible rumors?
 

Zedark

Member
I haven't been keeping up with this thread for the last day or so.

Questions:

1. Have we learned anything new about Switch's final specs?

2. What are the latest reasonable, plausible rumors?
Nothing new, really. Basically it is still a battle between Eurogamer and Foxconn, and we will most likely only know the answer when someone scans the SoC.
 
Suffice to say I wish I could get rid of all the fp64 in our codebase but I can't ; )

Are you watching the 8-bit ALU developments in the DNN-on-GPU space? I can't tell if that will thrill or horrify you. I'm just morbidly curious I suppose...

True. And yet, that's somewhat orthogonal to the precision topic.

Sure, but the context here was whether half-precision ALU is some sort of fountain of performance equalization. For effective, measurable improvements in throughput, the devil is in the architectural details, and there are no easy answers in real-world (e.g. non-contrived) workloads. If you had a dollar for every time I said "ah, improving this factor of hardware utilization will finally reduce our net wall time", you could buy a nice sandwich and I would still be full of sad/interesting stories. (If you will be at GDC I will buy you said sandwich.)
 
It's really weird that Nintendo didn't make an OS Direct, instead leaving it up to the press to show it off. I assume that goes under "preview". I would also assume a teardown goes under "preview", but I have no idea.
It's like they're rushing everything, which explains the lackluster lineup and info on online content.
 
I haven't been keeping up with this thread for the last day or so.

Questions:

1. Have we learned anything new about Switch's final specs?

2. What are the latest reasonable, plausible rumors?

Based on the pictures in OP we have reasonably identified (like 90%) the RAM as two 2GB modules for a total of 4GB with 25.6GB/s bandwidth.

Beyond that we aren't really any closer to figuring out final specs. I think we determined the SoC in the OP is almost exactly the same size as a Tegra X1, though that doesn't really tell us anything beyond making more than 256 CUDA cores theoretically impossible.

This gives us no indication of 20nm vs 16nm and certainly doesn't tell us anything about CPU cores or clock speeds.

It's also possible that the unit in the OP is not a final retail unit. That's about it I think.

The latest specs are: we have no idea. Could be anywhere from 4x A57s with 3 usable at 1GHz, 256CC at 768MHz docked to 4x A72s at 1.78GHz and 256CC at 921MHz docked. Or anywhere in between. Or the launch clocks are the former but could be raised after launch, potentially up to the latter.
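For what it's worth, the 25.6GB/s figure is just the standard LPDDR4 arithmetic, assuming a 64-bit bus at 3200MT/s (the usual rating for the modules people think they've spotted; a slower bin would scale it down):

# Peak theoretical bandwidth = bus width in bytes x transfer rate.
bus_width_bits = 64        # two 32-bit LPDDR4 channels (assumed from the photos)
transfer_rate = 3200e6     # LPDDR4-3200, transfers per second (assumption)

print(bus_width_bits / 8 * transfer_rate / 1e9)    # 25.6 (GB/s)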
 

Zedark

Member
Based on the pictures in OP we have reasonably identified (like 90%) the RAM as two 2GB modules for a total of 4GB with 25.6GB/s bandwidth.

Beyond that we aren't really any closer to figuring out final specs. I think we determined the SoC in the OP is almost exactly the same size as a Tegra X1, though that doesn't really tell us anything beyond making more than 256 CUDA cores theoretically impossible.

This gives us no indication of 20nm vs 16nm and certainly doesn't tell us anything about CPU cores or clock speeds.

It's also possible that the unit in the OP is not a final retail unit. That's about it I think.

The latest specs are: we have no idea. Could be anywhere from 4x A57s with 3 usable at 1GHz, 256CC at 768MHz docked to 4x A72s at 1.78GHz and 256CC at 921MHz docked. Or anywhere in between. Or the launch clocks are the former but could be raised after launch, potentially up to the latter.
I think the final bandwidth is still up in the air, even with that find, right? They could have added modifications to the board that increase the bandwidth, like the WiiU had.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Are you watching the 8-bit ALU developments in the DNN-on-GPU space? I can't tell if that will thrill or horrify you. I'm just morbidly curious I suppose...
To quote a colleague from the GPU department: 'Guys, we need to figure out something to do in 8-bit, lest we soon end up using only a minority of the GPU die.'

Sure, but the context here was whether half-precision ALU is some sort of fountain of performance equalization. For effective, measurable improvements in throughput, the devil is in the architectural details, and there are no easy answers in real-world (e.g. non-contrived) workloads.
Oh, definitely. No silver bullets in any of the computational domains I've ever touched upon.

If you had a dollar for every time I said "ah, improving this factor of hardware utilization will finally reduce our net wall time", you could buy a nice sandwich and I would still be full of sad/interesting stories. (If you will be at GDC I will buy you said sandwich.)
No clue if anyone from the team will go to GDC this year, or who, but I'd love to get to SIGGRAPH (again, no idea who from the team will attend, but chances are not on my side (again), unless I manage to come up with a paper, which is utterly improbable given my lowly coder status vis-à-vis the math-wiz kids I'm surrounded by).
 

Shahadan

Member
I'm still not sure why docked specs matter that much. I mean most if not all of the difference will be used to bump up the resolution.
 
I'm still not sure why docked specs matter that much. I mean most if not all of the difference will be used to bump up the resolution.

You say that like that isn't better graphics. The thing is, games should be made like that: graphics that work at a good resolution and with a good frame rate. To me that is good graphics. Resolution and frame rate shouldn't be sacrificed for "better graphics". At least not frame rate.
 

antonz

Member
so tomorrow ends the hardware embargo... can we expect any news on the specs?

Not likely, as that's not info that would be in press material. Pretty much have to wait for a full-scale teardown where they scan the chips and so on, or for a 2017 devkit guide with all the details to leak.
 

mario_O

Member
I'm still not sure why docked specs matter that much. I mean most if not all of the difference will be used to bump up the resolution.

It matters because it will look better on a bigger screen, and maybe devs will have room to add a few more things besides resolution. The better question is why do we need that expensive piece of plastic to connect the Switch to a TV.
 

jts

...hate me...
It matters because it will look better on a bigger screen, and maybe devs will have room to add a few more things besides resolution. The better question is why do we need that expensive piece of plastic to connect the Switch to a TV.

Because your TV doesn't have a USB-C input and the Switch doesn't have an HDMI port, so you need a dock or a hub to expand the connectivity options of the console and enable that.

As well as enable the usage of common USB-A accessories at home, like ethernet adapters.
 

mario_O

Member
Because your TV doesn't have a USB-C input and the Switch doesn't have an HDMI port, so you need a dock or a hub to expand the connectivity options of the console and enable that.

As well as enable the usage of common USB-A accessories at home, like ethernet adapters.

You can buy a USB-C to HDMI cable for 4 euros.
 

Shahadan

Member
You say that like that isn't better graphics. The thing is, games should be made like that: graphics that work at a good resolution and with a good frame rate. To me that is good graphics. Resolution and frame rate shouldn't be sacrificed for "better graphics". At least not frame rate.

I don't mean bumping the resolution doesn't matter, I mean the "real" power of the switch will always lie in its handheld mode as it is the required mode around which everything was built.

It matters because it will look better on a bigger screen, and maybe devs will have room to add a few more things besides resolution. The better question is why do we need that expensive piece of plastic to connect the Switch to a TV.

The one you get with the switch doesn't cost as much as one bought separately anyway, so expensive is relative. Also it's way more convenient and useful than just a cable or something.
 

mario_O

Member
The one you get with the switch doesn't cost as much as one bought separately anyway, so expensive is relative. Also it's way more convenient and useful than just a cable or something.

Nintendo is charging 90 euros for the dock. Not sure how much it adds to the cost of the console.
 

Shahadan

Member
Nintendo is charging 90 euros for the dock. Not sure how much it adds to the cost of the console.

Far less; packaging a big piece like that and shipping it separately in smaller quantities costs more. Also, I imagine the retailer cut is likely to be a good part of the price.
Not that Nintendo is above inflating prices, of course.

(Also my personal theory is that they're planning a 4k dock somewhere down the line and want the price difference to reflect positively, but that remains to be seen)
 
I think the final bandwidth is still up in the air, even with that find, right? They could have added modifications to the board that increase the bandwidth, like the WiiU had.

I think the effective bandwidth is up in the air because we don't know what's on the die, but the maximum bandwidth for the RAM modules we can see is 25.6GB/s as far as I know. I could be wrong there.
 

MDave

Member
MDave did tests on his Shield TV and found out it throttles either the GPU or CPU (or both to a lesser extent) to levels below the Foxconn clocks after a few minutes at max clocks.

The Foxconn leaker made no mention of docked or undocked clocks. In fact, we still have no official word on whether or not clocks change when docked, although Eurogamer's report says they can.

The results I got from the Shield TV were that even when locked to a maximum of 1GHz, if the CPU were under load and the GPU was getting pushed hard too, the GPU would fluctuate between 614MHz and 1GHz, averaging about 768MHz most of the time. If the CPU was allowed to go at its maximum, the GPU would get throttled more.

On the topic of 25.6GB/s bandwidth on a 64-bit bus:

I thought I would see if my Shield TV could handle a 4K 60fps HDR 10-bit video (on my shiny new LG C6 TV), and it handles it perfectly. Would that mean anything useful about how well the X1 is optimised for high bandwidth usage?
 

LordOfChaos

Member
I thought I would see if my Shield TV could handle a 4K 60fps HDR 10-bit video (on my shiny new LG C6 TV), and it handles it perfectly. Would that mean anything useful about how well the X1 is optimised for high bandwidth usage?

Not really, though I appreciate the effort. If you think about the 4K Blu-ray specification being 50GB single-layer, 66GB dual-layer and 100GB triple-layer, each with 82Mbit/s, 108Mbit/s and 128Mbit/s data read speeds respectively, then even the modest 25GB/s memory bandwidth is orders of magnitude away from that scale. Playing it is more hardware-decoder and/or CPU bound. It doesn't tell us much about gaming or the TX1's texture compression or anything of that nature, as you could transfer a whole Blu-ray in two seconds at internal memory bandwidth ;)

25GB/s, eh? So we're about standard for mobile. I was rooting for a 128-bit bus to alleviate that long-standing mobile bottleneck (the 12" iPad Pro does this). Well, now to see if there are any fancy on-die SRAM blocks by the GPU.
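Putting rough numbers on that comparison, using the disc figures quoted above (simple unit conversion, nothing Switch-specific):

mem_bw = 25.6e9              # ~25.6GB/s memory bandwidth, in bytes/s
disc_read = 128e6 / 8        # 128Mbit/s top UHD Blu-ray read speed = 16MB/s

print(mem_bw / disc_read)    # ~1600x headroom over the disc bitrate
print(50e9 / mem_bw)         # ~2s to move a 50GB single-layer disc's worth of data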
 
I don't mean bumping the resolution doesn't matter, I mean the "real" power of the switch will always lie in its handheld mode as it is the required mode around which everything was built.

I don't think games will be made for handheld mode particularly. Handheld and TV go hand in hand because of the power difference, so developers don't actually have to target one spec and then port to the other; it's just a resolution difference. Not saying it's that easy, but you get what I mean. And I still don't really get what you are saying, because to me resolution is as big a part of graphics as anything else. It's as much of an improvement as better lighting or more polygons would be. And developers actually have that choice if they want to. So TV mode isn't gimped by handheld mode. You can see it both ways: a handheld mode that has been boosted for TV, or a TV mode that has been downgraded for handheld. But in the end the automatic power difference just serves both modes, so you don't have to sacrifice one for the other. What I'm trying to say is that you would probably get pretty much the same results without the handheld mode being an option.
 

AlStrong

Member
I don't think games will be made for handheld mode particularly. Handheld and TV go hand in hand because of the power difference, so developers don't actually have to target one spec and then port to the other; it's just a resolution difference.

Clock speed doesn't just affect pixel-related throughput. There's still the front end of the GPU to consider, i.e. geometry/triangle setup.
 
The results I got from the Shield TV were that even when locked to a maximum of 1GHz, if the CPU were under load and the GPU was getting pushed hard too, the GPU would fluctuate between 614MHz and 1GHz, averaging about 768MHz most of the time. If the CPU was allowed to go at its maximum, the GPU would get throttled more.

Thanks. Presumably this is with 4 cores active too? I'm wondering if Foxconn was doing some stress test where the possibility of throttling was disabled; I guess they give retail chips at least a bit of leeway to avoid melting them too often in the real world.
Curious if these could have been made to hit a higher performance bar (Nvidia making the Shield TV 2017 from binned Switch chips?). But I don't really know what I'm talking about 🐵
 
I'm still not sure why docked specs matter that much. I mean most if not all of the difference will be used to bump up the resolution.

When it comes to Foxconn's vs Eurogamer's numbers, the GPU clockspeed difference is only 20%. It will not be that noticeable to the end user. The difference in CPU clockspeed and chipset, however, is 75% or higher if we are discussing A57s vs A72s.
The CPU speed doesn't change between docked and undocked mode, so that is one reason why this remains a hot topic.
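For reference, those percentages fall out of the clocks quoted earlier in the thread: Eurogamer's 768MHz GPU / 1GHz CPU docked versus the unverified Foxconn figures of 921MHz / ~1.78GHz.

# GPU: Foxconn-leaked 921MHz vs Eurogamer's 768MHz docked clock.
print(921.0 / 768.0 - 1)     # ~0.20, the "only 20%" GPU gap

# CPU: Foxconn-leaked ~1.78GHz (A72s) vs Eurogamer's 1GHz (A57s).
print(1780.0 / 1000.0 - 1)   # 0.78, consistent with "75% or higher"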

Nintendo is charging 90 euros for the dock. Not sure how much it adds to the cost of the console.
I believe USB-C docking stations are relatively new technology, so they are generally not that cheap to get at this time.

Clock speed doesn't just affect pixel-related throughput. There's still the front end of the GPU to consider, i.e. geometry/triangle setup.
Right. So, it is possible that there will be upgraded/downgraded polygon models and/or viewable geometry between modes, correct? It is possible that we may be seeing something like that with Zelda, but we need to look at the difference between the final versions.
 
This is one of the things I'm wondering is still true: was this only confirmed via Eurogamer?

Eurogamer is actually still the only source who has said there are different clock speeds between docked and undocked to begin with. I still think there's an (admittedly small) chance that there is no clock speed change at all, which would explain why titles like Zelda and Splatoon 2 are not 1080p when docked.

But yeah EG is the only source saying the CPU clock will not change when docked/undocked.
 
This is one of the things I'm wondering is still true: was this only confirmed via Eurogamer?
Actually, that is true. We don't have official confirmation of that. Changing the clockspeed between modes could lead to issues with game logic, physics, etc., so it would make sense for it not to change.
 

mario_O

Member
Eurogamer is actually still the only source who has said there are different clock speeds between docked and undocked to begin with. I still think there's an (admittedly small) chance that there is no clock speed change at all, which would explain why titles like Zelda and Splatoon 2 are not 1080p when docked.

But yeah EG is the only source saying the CPU clock will not change when docked/undocked.

wut? you're saying both things at the same time.

Eurogamer said CPU clocks won't change, always at 1GHz. And GPU clocks will change from 307.2MHz undocked to 768MHz docked.
 
wut? you're saying both things at the same time.

Eurogamer said CPU clocks won't change, always at 1GHz. And GPU clocks will change from 307.2MHz undocked to 768MHz docked.

Sorry, I didn't specify: Eurogamer is still the only source we have that actually says the GPU clocks will change between docked mode and undocked mode, meaning they are the only source which says anything at all changes when docking the Switch.

Which makes me still a bit wary of the possibility that this isn't going to be the case for the final hardware, even though we all treat it as though it is.
 
Eurogamer is actually still the only source who has said there are different clock speeds between docked and undocked to begin with. I still think there's an (admittedly small) chance that there is no clock speed change at all, which would explain why titles like Zelda and Splatoon 2 are not 1080p when docked.

But yeah EG is the only source saying the CPU clock will not change when docked/undocked.
Yeah, I seriously doubt that. The patent for the Switch mentions that the system fan is faster in docked mode, and that wouldn't seem necessary if the specs are the same between modes. One of Eurogamer's sources also stated that the difference between docked and undocked was like making a game with two different specs. If this isn't true, Eurogamer's info is very wrong, and I don't think anything we know about the system so far discredits their info yet.
 
Actually, that is true. We don't have official confirmation of that. Changing the clockspeed between modes could lead to issues with game logic, physics, etc., so it would make sense for it not to change.
Yes, but it could at least be an optional mode for more experienced teams who might need it; boost mode on the PS4 Pro (which IIRC Digital Foundry says boosts the CPU clock too) sounds pretty successful without devs having had foreknowledge of it.
 

Oregano

Member
I'm skeptical that Nintendo would put out games like MK8 at 1080p but not make the screen 1080p if that was the case.

In the long run sourcing 1080p screens would be cheaper than 720p ones.
 