The original Chinese leak was wiped from the Internet, which is curious in itself. I'm not sure we could get a better translation than what already exists, but if we do, it will be because someone saved the original leak that pretty much everyone thought was fake at the time.
That's disappointing to hear. I feel like a better translation would have clarified quite a few things (particularly whether the 200mm² chip is in addition to, or a replacement for, the 100mm² one).
Thraktor, I agree with the CPU power consumption you've assumed, but did you take into account the power savings on the GPU? Since the clock increase is smaller than what the move to 16nm allows, there should be additional savings there, making the power consumption difference smaller.
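To put rough numbers on that idea, here's a back-of-envelope sketch. Dynamic power scales roughly with C·V²·f, and TSMC marketed 16FF+ as cutting power by roughly 40% versus 20nm at the same speed. The clock ratio, voltage ratio, and node factor below are illustrative assumptions, not figures from either leak:

```python
# Back-of-envelope dynamic power estimate: P ~ C * V^2 * f.
# All numbers below are illustrative assumptions, not leaked specs.

def relative_dynamic_power(freq_ratio, voltage_ratio, node_factor=1.0):
    """Dynamic power relative to a baseline chip: scales with f * V^2,
    times a process-node factor (~0.6 for 16nm vs 20nm, per TSMC's
    advertised ~40% power reduction at the same speed)."""
    return node_factor * freq_ratio * voltage_ratio**2

# GPU at the same clock on 16nm vs 20nm: node savings alone.
same_clock = relative_dynamic_power(1.0, 1.0, node_factor=0.6)

# GPU clocked below what 16nm allows, with a slightly reduced voltage.
lower_clock = relative_dynamic_power(768 / 921, 0.95, node_factor=0.6)

print(f"16nm, same clock:  {same_clock:.2f}x baseline power")
print(f"16nm, lower clock: {lower_clock:.2f}x baseline power")
```

The point being that running a 16nm part below its achievable clock compounds the node savings, so the two power estimates end up closer together than a node comparison alone would suggest.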
My point, though, was that if Switch is actually using a 16nm chip, but back in October dev kits were still using the 20nm TX1, Nintendo still would have had engineering samples of the 16nm chips at the point they were telling Eurogamer's source the "final" clocks. So both CPU and GPU clocks at the time would have been based on Nintendo's own internal testing of the final chip, even if third parties didn't actually have dev kits using them yet. Now, it's possible that between the engineering samples in Sept/Oct and the final chips in Nov/Dec there was enough of an improvement that Nintendo decided they could afford to bring clocks up a bit, but that decision would have been a comparison of 16nm (engineering sample) to 16nm (final product), not a comparison from 20nm to 16nm. There's certainly scope for clock increases from ES to final chips, if manufacturing goes well, but I would expect something on the order of 10-20%, not 75%. The GPU clock of 921MHz, for example, is something I wouldn't rule out, but unless they dropped the 1:1 link in CPU clock speed between handheld and docked mode, I can't imagine things improving enough to clock the CPU up that high while running games.
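For reference, the percentages being thrown around here can be checked against the two sets of reported clocks. Taking Eurogamer's reported docked figures (1020MHz CPU, 768MHz GPU) against the Foxconn leak's stress-test figures (1785MHz, 921MHz):

```python
# Quick sanity check of the clock jumps between the two reports.
# Eurogamer's reported docked clocks vs. the Foxconn leak's clocks.
eurogamer = {"cpu_mhz": 1020, "gpu_mhz": 768}
foxconn = {"cpu_mhz": 1785, "gpu_mhz": 921}

for part in ("cpu_mhz", "gpu_mhz"):
    increase = (foxconn[part] / eurogamer[part] - 1) * 100
    print(f"{part}: +{increase:.0f}%")
# prints: cpu_mhz: +75%  /  gpu_mhz: +20%
```

So the asymmetry is real: the Foxconn GPU clock is a plausible 20% over the Eurogamer number, while the CPU clock would be a 75% jump.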
I could give you that on the CPU, but at the same time I have NEVER heard of anyone stress testing a chip (especially one in a finished product) at an almost 100% overclock while only overclocking the GPU 20%. Plus, that extra 20% on the GPU isn't going to take development between the two tiers from super easy to OMG so hard.
All I'm saying is that the reasons you guys are suggesting for running the system at these speeds just aren't things that are done.
Again, I have no idea if these are the final clocks or not. All I'm saying is that there are zero reasons to test a system for eight days at these speeds and then not ship at them. No one does that.
I absolutely agree that stress-testing a CPU at 75% higher clock speeds than you plan to ship at is crazy, but I think that telling developers that "final clocks" are 1GHz only a couple of months before launch and then suddenly jumping to 1.78GHz is equally crazy.
The best explanation I could give would be if the CPU does clock up that high, just not during (normal) games. They could use a dynamic clock while browsing the OS, eShop, etc., similar to smartphones, jumping between 600MHz (or similar) and 1.78GHz depending on usage, to ensure a smooth user experience without too much power draw. The other possibility is that they clock just one or two cores up that high for GameCube or Wii emulation, with the GPU clocked down to accommodate.
A less likely, but still possible, explanation would be that they've given developers an extra in-game performance profile to run a smaller number of cores at a higher frequency. So they could use 4 cores at 1GHz, 2 at 1.4GHz or 1 at 1.8GHz, or something like that. Some developers may prefer to work in a more lightly parallelised environment like that.
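As an illustration of what such profiles might look like (entirely hypothetical numbers; nothing in either leak confirms this mechanism exists):

```python
# Hypothetical per-game CPU performance profiles: trading core count
# for per-core clock under a roughly fixed power/thermal budget.
# These figures are illustrative only, loosely based on the clocks
# discussed in this thread.
profiles = [
    {"cores": 4, "mhz": 1000},
    {"cores": 2, "mhz": 1400},
    {"cores": 1, "mhz": 1785},
]

for p in profiles:
    aggregate = p["cores"] * p["mhz"]
    print(f"{p['cores']} core(s) @ {p['mhz']} MHz -> {aggregate} MHz aggregate")
```

Note how aggregate throughput falls as per-core clock rises, which is the usual shape of this trade-off: you'd only pick the 1- or 2-core profile if your engine is bottlenecked on a single thread.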
As some of you are probably aware by now, several Japanese developers were asked for their thoughts on the Nintendo Switch presentation, and what they found interesting about the device.
(If you haven't read it by now, hop over to
Nintendo Everything and give it a go.)
What caught my attention, though, was how some devs thought the console was cheap. Not just that, but on more than one occasion it came in the context of them being especially surprised because they were aware of what the hardware brings to the table. This could be purely from a feature standpoint, or it could be processor related, or both. Either way, it's very interesting that some expressed their thoughts in this manner. For example:
Ganbarion
Bandai Namco
PlatinumGames
These comments do seem to lean in favor of speculation based on the Foxconn leak. Or, to put it another way: I find it hard to believe that these developers were so impressed by four A57s and an underclocked X1, even to the point where the $299.99 price tag came as a complete surprise/shock. Am I wrong to believe that this wouldn't make much sense?
It's true that none of us know what's really under Switch's hood, but all the circumstantial evidence makes it hard - for me at least - to lean in favor of speculation built heavily on Eurogamer's report. Thankfully, it's not too long before we'll be able to get some teardowns. In the meantime, how do you interpret those comments (especially with the two main opposing leaks in the current discussion)?
Higher clocks don't actually (directly) cost any more money, though, and A72 cores are actually smaller (i.e. cheaper) than A57 cores. The Foxconn leak also mentions that the SoC in Switch is about the size of the TX1, which would rule out something like a larger GPU (i.e. more SMs).
My guess is they're talking about the HD rumble. Although we don't have a full breakdown of the technology, it seems to be using linear actuators similar to what's used in the iPhone (hardly a cheap piece of hardware), but Nintendo are using at least two, possibly more (if there's also one in the main body for touch feedback), and may well be using more advanced models than Apple are using (given the more diverse applications for it in Switch). This is something we'll get more info on once we have a teardown, but I'd expect it to be pretty high up the list of component costs in the system.