Well, it really depends on the size of the floorplan and the points they're monitoring. Approaching 10C of delta is achievable on some dies, if T-junction is monitored at sufficiently remote locations. Now, whether they could load _all_ CPU cores, along with the GPU (and thus saturate the thermal map), and still get large differences really comes down to the peculiarities of the specific floorplan. For instance, IIRC, I can get deltas of ~7C across certain cores on my Xeon without much issue, but that's under largely asymmetrical loads, and a Xeon at 22nm is not a small die.
A fair point, and I can imagine for a large Xeon die (particularly with only one core active) there may be a larger difference in temps, but it seems a bit of a stretch for a ~120mm² die like this, especially when you would expect a large portion of the silicon to be in use. I'd also expect the largest deltas under load, whereas here we're supposedly seeing a 14 degree difference while idle.
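For anyone curious what kind of core-to-core spread their own chip shows, here's a minimal sketch of how you could measure it (Linux-specific, assuming the coretemp driver exposes per-core sensors through the standard hwmon sysfs files; label names and paths vary by kernel and CPU, so treat it as illustrative only):

```python
#!/usr/bin/env python3
# Rough sketch: compute the spread between per-core temperature sensors on Linux,
# assuming the coretemp driver exposes them via the standard hwmon sysfs interface.
import glob

temps = {}
for label_path in glob.glob("/sys/class/hwmon/hwmon*/temp*_label"):
    with open(label_path) as f:
        label = f.read().strip()
    if not label.startswith("Core"):
        continue  # skip package/other sensors, keep only per-core readings
    input_path = label_path.replace("_label", "_input")
    with open(input_path) as f:
        temps[label] = int(f.read()) / 1000.0  # values are reported in millidegrees C

if temps:
    delta = max(temps.values()) - min(temps.values())
    for core, t in sorted(temps.items()):
        print(f"{core}: {t:.1f} C")
    print(f"Max core-to-core delta: {delta:.1f} C")
```

Running something like that under an asymmetrical load (one or two cores pinned, the rest idle) is where you'd expect the biggest spread, which is exactly why the reported 14-degree delta at idle looks odd.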
Right, clusterization can be done in pairs as well, but it's traditionally done for binning purposes - when you have a 2-core SKU and a 4-core SKU and you want to separate the smaller bin from the larger. I can't see such a use case for Nintendo.
Yeah, it doesn't make much sense for Nintendo. I can't really think of other situations where it makes sense, though. If this is a benchmark of something (as opposed to someone just plucking numbers off the top of their head), then what chip would have that kind of odd clocking arrangement? Is it possible that the clock speed switching is sufficiently rapid that the clocks actually changed between readouts for each core?
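If anyone wants to test the "clocks changed between readouts" idea on real hardware, a quick sketch like this will show whether back-to-back samples of the same core disagree (Linux-specific, assuming cpufreq exposes scaling_cur_freq; how fast the reported value updates depends on the driver and governor):

```python
#!/usr/bin/env python3
# Quick sketch: sample each core's current frequency several times in a row via
# cpufreq sysfs. If DVFS transitions happen between samples, the same core will
# report different clocks from one readout to the next.
import glob
import time

paths = sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"))

for sample in range(5):
    readings = []
    for path in paths:
        with open(path) as f:
            readings.append(int(f.read()) / 1000)  # kHz -> MHz
    print(f"sample {sample}: " + ", ".join(f"{mhz:.0f} MHz" for mhz in readings))
    time.sleep(0.1)  # even a short gap can straddle a DVFS transition
```

If consecutive samples of the same core jump around, then a tool that reads the cores one at a time could easily report mismatched per-core clocks even when the cores are nominally linked.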
Thor is the Norse god of thunder. Odin is the god of wisdom. So it's a wise choice by Nintendo.
Ah, thanks for the clarification. I'm clearly going to have to study up on Norse mythology if I'm to decipher the secret of the Switch.
There is no way in hell the Switch will have UHS-II speeds even if it can accept the cards; the pins are not the same. They would not shell out the money for it, at least not in the first edition. Maybe, and that's a big maybe, they'll add it in a later revision or a Nintendo NEW(xN) Switch.
There are extra pins, but it's compatible with regular microSD (i.e. you can use a regular microSD in a UHS-II slot, and a UHS-II microSD in a regular microSD slot). As I've said though, I think it's extremely unlikely, but the photos we've got don't give us any confirmation one way or the other (removing the metal shield would allow us to see if the UHS-II pins are there).
Yeah, I'm with Durante on this one. I think we're most likely looking at a lightly modified X1. They may have tinkered with the caches or (if you're willing to be pie in the sky) shrunk the design down to 16nm for power efficiency, but the dies are too close in size to expect any radical departures.
Final clock speeds, and hopefully some microscope analysis at some point, will be fun to peek at.
The other possibility is that the new Shield TV uses Switch's SoC, rather than the other way around.
The 2017 Shield TV does have a revised SoC, with visibly different chips and a change from model number TM670D-A1 to TM670M-A2, and Nvidia have in the past often used the same Tegra chip name for multiple variants (most notably the K1 with A15 cores and the other K1 with Denver cores). The commonality for Tegra chips of a given name seems to be their GPU configuration, with the two K1s both having 192-core Kepler GPUs but vastly different CPUs. On Nvidia's page for the new Shield TV you'll see a much simpler description of the Tegra X1 than they gave before:
NVIDIA® Tegra® X1 processor with a 256-core GPU and 3 GB RAM
There's no mention of the CPU at all here. As with the variants of the K1, there may be changes outside the GPU config which they don't feel warrant a new code. Unlike the K1, though, these changes may not have been driven by Nvidia's internal requirements, but rather by Nintendo's. Nintendo may have requested customisations such as a change in CPU config, a different manufacturing process, or perhaps just some low-level tweaks here and there, and Nvidia may have decided it was simpler to just brand the new chip as Tegra X1 for use in their own device, rather than continue manufacturing two separate chips. This would be particularly true if they were moving to 16nm, as I'm sure Nvidia would be eager to leave 20nm behind.
The one issue I have with such a theory, though, is that it would mean Switch's SoC still has the full 4K h.265 encode/decode functionality, which is pretty much the first thing I would have imagined Nintendo getting rid of were they to customise the TX1.
Edit: Although, I suppose it's possible that they actually dropped the 4K h.265 encoding, as the Shield TV doesn't make use of it, afaik. Decode likely takes up less die space.