A Nintendo Switch has been taken apart

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
I agree that, in theory, a pool of fast embedded memory plus a larger, slower main memory pool seems well suited to a game console, but I'm becoming less convinced that it's a smart choice in practice, when faced with a limited budget. Of course it's near impossible to say whether a, say, 128-bit GDDR5 pool would have been cheaper for Nintendo than the eDRAM + DDR3, but we do have a near-perfect case study of the two approaches in the PS4 and XBO.

Both had access to identical CPU/GPU architectures and a very similar BoM (judging by sales price once MS dropped Kinect). Pretty much the only meaningful high-level distinction between the two designs was MS's team's decision to combine embedded memory with DDR3, and Sony's decision to go with a single GDDR5 pool. The results are quite obvious; the single pool was the better decision. That's not to say that MS's embedded memory approach didn't have its advantages, but they were obviously heavily outweighed by the extent to which they had to cut back on GPU logic in order to accommodate the memory pool on-die.

Granted, XBO's 32MB may be less than ideal for its target resolutions, and they used SRAM rather than Wii U's (presumably) cheaper eDRAM, but in the absence of other evidence I'd err on the side of a single fast memory pool being the better approach, either to maximise performance at a given cost, or minimise cost at a given level of performance.

This, of course, excludes the possibility of a tile-based GPU like Nintendo has moved to with Switch, which would obviously have a different set of trade-offs.
Let me ask you this -- would you still think the same if:
a) Sony did not get lucky with GDDR5 at the last moment, and
b) MS did not use SRAM for their embedded pool, by means of some miraculous 28nm eDRAM tech?
 

Hermii

Member
That's just not an appropriate reading of the Switch. The system is full of impressive tech, and isn't really "cheap" in any way.

Could it have been more powerful? Yes. But that's true of every system ever released.

Yes or no question: do you know exactly how this is a custom Tegra?
 

Durante

Member
Let me ask you this -- would you still think the same if:
a) Sony did not get lucky with GDDR5 at the last moment, and
b) MS did not use SRAM for their embedded pool, by means of some miraculous 28nm eDRAM tech?
I'm not Thraktor, but personally, I'm of the opinion that in a modern console design, convenience and sustainability (in terms of die shrinks and forward/backward compatibility) trump technical hardware superiority, at least to some extent. It would be different if you had transparent automatic tiled rasterization in hardware, but anything which requires developers to manually juggle a small memory pool is a disadvantage.
 

Luigiv

Member
I think part of the decision was that they believed they could get a memory capacity advantage over Sony with 8GB of DDR3 vs 4GB of GDDR5. Sony bumped up to match them at 8GB quite late on, and presumably at that point countering with 16GB of DDR3 would have been either too expensive or simply not feasible given available parts.
They probably didn't bother going with 16GB of DDR3 because they figured that would be total overkill (and they didn't need to drive up the costs any further). 8GB is already on the generous side for a system of the Xbone's capability.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
I'm not Thraktor, but personally, I'm of the opinion that in a modern console design, convenience and sustainability (in terms of die shrinks and forward/backward compatibility) trump technical hardware superiority, at least to some extent. It would be different if you had transparent automatic tiled rasterization in hardware, but anything which requires developers to manually juggle a small memory pool is a disadvantage.
But developers already have to manually juggle small memory pools on GPGPUs, and write their code in a data-driven manner (and thus potentially mind caches et al.) if they want optimal performance. Generally, the times when devs could get performance with no memory/data-flow optimisation effort are long gone. Of course, you could argue that not everybody needs optimal performance, but on consoles performance is always scarce.
 

Durante

Member
But developers already have to manually juggle small memory pools on GPGPUs, and write their code in a data-driven manner (and thus potentially mind caches et al.) if they want optimal performance. Generally, the times when devs could get performance with no memory/data-flow optimisation effort are long gone.
While this is true, I'd argue that even GPU computing is moving towards more convenience, and not just in terms of software interfaces but also hardware design.
E.g. it used to be the case that not doing perfectly coalesced main memory accesses would reduce your effective bandwidth by a factor of 16 -- now the hardware will probably give you decent bandwidth unless you really mess up horribly. Similarly, manual scratchpad memory usage was completely essential for anything which was data-intensive and had re-use. Now you have automatic caching on GPU memory accesses, and rather than orders of magnitude, manual usage of on-chip memory might "only" give you a factor of 2 performance improvement.
Of course, these are still hugely significant from the perspective of someone who spends their life doing performance tuning, and probably also for the large technical teams at massive first party "AAA" studios, but maybe not for most developers.
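
(To make the re-use point concrete, here's a rough back-of-the-envelope sketch, not real GPU code, with made-up matrix and tile sizes: staging data in on-chip memory cuts off-chip traffic by roughly the tile factor, and a cache that captures even part of that re-use automatically closes most of the gap.)

```python
# Back-of-the-envelope DRAM traffic model for an N x N matrix multiply,
# illustrating why manually staging data in on-chip (shared/scratchpad)
# memory mattered so much on earlier GPUs. Purely illustrative numbers.

def dram_loads_naive(n: int) -> int:
    # Each of the n*n outputs reads a full row of A and column of B from
    # DRAM, with no on-chip re-use captured: 2*n loads per output.
    return 2 * n * n * n

def dram_loads_tiled(n: int, tile: int) -> int:
    # Classic shared-memory tiling: each tile of A and B is staged on-chip
    # once per tile-pair, cutting off-chip traffic by roughly the tile factor.
    return (2 * n * n * n) // tile

if __name__ == "__main__":
    n, tile = 4096, 32  # hypothetical problem and tile sizes
    naive = dram_loads_naive(n)
    tiled = dram_loads_tiled(n, tile)
    print(f"naive : {naive:,} element loads from DRAM")
    print(f"tiled : {tiled:,} element loads from DRAM")
    print(f"traffic reduction from manual staging: ~{naive // tiled}x")
```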

Of course, you could argue that not everybody needs optimal performance, but on consoles performance is always scarce.
I'd even go so far as to argue that the vast majority of devs have more important things to do than getting optimal performance. As such, an architecture which allows them to get good enough performance with low effort is preferable to one which gets them great performance with considerable effort.
 

Donnie

Member
So one thing I noticed from the Switch previews so far is that with everything maxed in Zelda, battery life is two and a half hours, which means the system is using about 6.4W in handheld mode.

Let's say that everything outside of the Switch SoC is using 2W (which seems generous, considering the entire 3DS XL system uses 2W at max settings and has pretty big screens), leaving 4.4W for the Switch SoC.

Can anyone think of a way to make that work with a Tegra Maxwell GPU and 4x A57 CPU cores at Eurogamer's clock speeds, even at 20nm? Or even get close? I can see the SoC being about 2.5W at 20nm with Eurogamer's clocks.
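
(As a quick sketch of the arithmetic above; the ~16Wh battery figure is the one quoted later in the thread, and the 2W non-SoC draw is the assumption just made:)

```python
# Rough handheld power budget implied by the reported Zelda battery life.
# The ~16 Wh battery figure is the one quoted elsewhere in the thread;
# the 2 W for everything outside the SoC is the assumption above.

battery_wh = 16.0   # battery capacity, Wh
runtime_h = 2.5     # reported Zelda runtime, handheld, everything maxed
non_soc_w = 2.0     # assumed draw of screen, RAM, wifi, speakers, etc.

total_w = battery_wh / runtime_h   # ~6.4 W for the whole system
soc_w = total_w - non_soc_w        # ~4.4 W left for the SoC

print(f"total system draw: {total_w:.1f} W")
print(f"left for the SoC : {soc_w:.1f} W")
```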
 

Hermii

Member
So one thing I noticed from the Switch previews so far is that with everything maxed in Zelda, battery life is two and a half hours, which means the system is using about 6.4W in handheld mode.

Let's say that everything outside of the Switch SoC is using 2W (which seems generous, considering the entire 3DS XL system uses 2W at max settings and has pretty big screens), leaving 4.4W for the Switch SoC.

Can anyone think of a way to make that work with a Tegra Maxwell GPU and 4x A57 CPU cores at Eurogamer's clock speeds, even at 20nm? Or even get close? I can see the SoC being about 2.5W at 20nm with Eurogamer's clocks.

I don't know, but I would think the Joy-Cons are relative power hogs with all that tech packed into them, especially HD Rumble. Also, the screen is not comparable to the 3DS screen.
 
Of course, these are still hugely significant from the perspective of someone who spends their life doing performance tuning, and probably also for the large technical teams at massive first party "AAA" studios, but maybe not for most developers.

There is nothing per se difficult about organizing around, e.g., ESRAM. It's more that it implies a specific architectural notion/ceiling (cf. the 10MB on Xenon vs. early MSAA resolution goals, and the consequent predicated-tiling mess, or the development of wide g-buffers), which reduces flexibility in terms of experimenting with and improving on renderer design.

I'd even go so far as to argue that the vast majority of devs have more important things to do than getting optimal performance. As such, an architecture which allows them to get good enough performance with low effort is preferable to one which gets them great performance with considerable effort.

Do "some" people prefer cheap Android phones with non-responsive user interfaces to low-latency iPhone interfaces? Sure, at "some" price. You could make a more specific hardware point, say, that devs today on Jaguar find attempts to outwit the cache prefetcher more difficult due to sophisticated strided access logic, or similar.
 

Donnie

Member
I don't know, but I would think the Joy-Cons are relative power hogs with all that tech packed into them, especially HD Rumble. Also, the screen is not comparable to the 3DS screen.

Joy-Cons have their own batteries; even if they can potentially charge from the Switch, they won't do so often. In fact, it's been reported they can actually assist in keeping the Switch going by allowing it to draw on the Joy-Con batteries.

Also, while the main 3DS XL screen isn't comparable to the Switch's, it has two screens. Surely both 3DS XL screens combined don't use a similar amount of power to the Switch's single screen? Having said that, I already allowed for the Switch to use extra, assuming it uses 2W for everything outside of the SoC, while the 3DS XL uses 2W for everything including the SoC.
 
I don't know, but I would think the Joy-Cons are relative power hogs with all that tech packed into them, especially HD Rumble. Also, the screen is not comparable to the 3DS screen.

I don't think the Joycons are power hogs whatsoever. They are tiny yet have 20 hours of battery life, and also don't drain power from the main unit in most circumstances.

The screen likely will draw a good deal of power though, especially with max brightness.

Also, while the main 3DS XL screen isn't comparable to the Switch's, it has two screens. Surely both 3DS XL screens combined don't use a similar amount of power to the Switch's single screen? Having said that, I already allowed for the Switch to use extra, assuming it uses 2W for everything outside of the SoC, while the 3DS XL uses 2W for everything including the SoC.

You may have a point here, since the top 3D screen should also theoretically be drawing double the power of a normal 240p screen. It still might not combine to reach the levels of a 720p screen at max brightness, especially since the Switch screen appears to go brighter than the 3DS XL's does.
 
I actually think Sony got really lucky with the PS4, they wanted GDDR5 but at the time it wasn't available in the densities required....
Wasn't Sony really really lucky...?
Let me ask you this -- would you still think the same if:
a) Sony did not get lucky with GDDR5 at the last moment...?
I'm pretty sure Sony wasn't "lucky". I remember reports after the fact that said Sony had been working closely with Hynix all along to try and ensure higher densities would be available on time (and they'd be the first and preferential customer).
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
I'm pretty sure Sony wasn't "lucky". I remember reports after the fact that said Sony had been working closely with Hynix all along to try and ensure higher densities would be available on time (and they'd be the first and preferential customer).
'Lucky' as in being able to get viable yields of a brand new IC technology. And surely they worked closely with the mem vendor - they had some volumes to secure.
 

LordOfChaos

Member
How would a 4 gig PS4 perform compared to an Xbox One?



Like a system with half the RAM and more bandwidth ¯\_(ツ)_/¯

The first run of games was actually planned around 4GB, I believe, as most developers only learned of the 8GB upgrade at the same time as consumers. The PS4 would still likely be able to hit 1080p more of the time, but probably with pulled-in draw distances in larger games. With 4GB and 1GB gone to the OS, the 3GB left to split between game memory and VRAM wouldn't be much to play with.

For a point of comparison, 2GB video cards at 1080p are now starting to hit that limit more and more. More games would likely have to dip below that as textures and effects got larger.
 

Mameshiba

Neo Member
The Joy-Cons have their own battery though, so they shouldn't draw any power from the main unit battery, unless the tests were done with depleted Joy-Con batteries.

I even read somewhere (maybe Reddit?) that once the main console runs low on battery, the console starts sipping power from the Joy-Con batteries, if they have enough charge left. As the Joy-Cons supposedly last ~20 hours, that should be the norm if both are fully charged, therefore further increasing the available energy for the main console from 16Wh to ~19Wh.

In general I would be really interested in trying to calculate the power consumption of all of the parts based on the Zelda runtime, so we can have a better guess at the actual hardware and clock rates of the SoC.

For the display, the closest one with power measurements I could find is the Mate 8 display: http://www.anandtech.com/show/9878/the-huawei-mate-8-review/6
It's only 6" instead of 6.2", but also 1080p instead of 720p, so power consumption should be fairly close. I haven't seen a Switch in person yet, but from reading a lot of first impressions, I would guess that the Switch is brighter than the 3DS and Wii U displays, but doesn't come close to modern smartphones. So somewhere between 200 and 300 nits sounds reasonable, which would correlate to between 0.8 and 1 watt.
(For comparison, the 3DS got around 150 nits, while the New 3DS managed 161 nits, according to https://www.welt.de/wirtschaft/webw...eue-Nintendo-3DS-hat-den-Zocker-im-Blick.html)

The SoC calculations were already done earlier in this thread (assuming 4 cores); iirc that resulted in:

~2.5W: Eurogamer clocks, A57 @ 20nm (1.83 + ~0.7)
~1.5W: Eurogamer clocks, A57 @ 16nm (1.16 + ~0.4)
~6.5W: Foxconn clocks, A57 @ 20nm (5.46 + ~1)
~4.3W: Foxconn clocks, A72 @ 20nm (~3.3 + ~1)
~4.2W: Foxconn clocks, A57 @ 16nm (3.65 + ~0.6)
~2.8W: Foxconn clocks, A72 @ 16nm (2.2 + ~0.6)

All the values with a ~ are estimations, mostly based on http://www.anandtech.com/show/8811/nvidia-tegra-x1-preview/3
The other values come from:
http://www.anandtech.com/show/9330/exynos-7420-deep-dive/5
http://www.anandtech.com/show/8718/the-samsung-galaxy-note-4-exynos-review/6
http://www.anandtech.com/show/9878/the-huawei-mate-8-review/3

I think those numbers might be accurate to within ±20% of the actual values.

WLAN shouldn't be active while playing Zelda, or at least its power draw should be negligible.
Joy-Cons have their own battery.
Do we have any information about the speakers?
No idea about the RAM; that might be one of the bigger power consumers while playing.
NAND and microSD should also be negligible.

Did I forget anything?
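
(As a rough sanity check, here's a sketch that lines the SoC estimates above up against the ~4.4W budget derived earlier in the thread; the 2W non-SoC figure is Donnie's assumption, and everything carries the ±20% caveat:)

```python
# Cross-check of the SoC scenarios above against the handheld power budget
# discussed earlier in the thread (16 Wh battery, ~2.5 h Zelda runtime,
# ~2 W assumed for everything outside the SoC). All values are estimates.

soc_estimates_w = {
    "Eurogamer clocks, A57 @ 20nm": 2.5,
    "Eurogamer clocks, A57 @ 16nm": 1.5,
    "Foxconn clocks, A57 @ 20nm": 6.5,
    "Foxconn clocks, A72 @ 20nm": 4.3,
    "Foxconn clocks, A57 @ 16nm": 4.2,
    "Foxconn clocks, A72 @ 16nm": 2.8,
}

soc_budget_w = 16.0 / 2.5 - 2.0   # ~4.4 W

print(f"implied SoC budget: ~{soc_budget_w:.1f} W")
for name, watts in sorted(soc_estimates_w.items(),
                          key=lambda kv: abs(kv[1] - soc_budget_w)):
    print(f"  {name}: {watts:.1f} W ({watts - soc_budget_w:+.1f} W vs budget)")
```

Sorted by distance from the budget, the ~4.2-4.3W Foxconn-clock scenarios come out closest, which is where the discussion below ends up as well.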
 

LordOfChaos

Member
It would seem completely pointless to have the A53s on the die completely dormant lol... Isn't that terrible from a design standpoint?

Nah. Even on 20nm they're 0.7mm². For that little space, the cost of reworking the die configuration to remove them may not be worth it over just disabling them.



Even if they don't update the interconnect of the TX1 to allow using both clusters at the same time (to free an A57 core from the OS), leaving them in could at least allow a future update that uses those cores for, say, a web browser, or for puttering about doing tablety things.
 
The Joy-Cons have their own battery though, so they shouldn't draw any power from the main unit battery, unless the tests were done with depleted Joy-Con batteries.

I even read somewhere (maybe Reddit?) that once the main console runs low on battery, the console starts sipping power from the Joy-Con batteries, if they have enough charge left. As the Joy-Cons supposedly last ~20 hours, that should be the norm if both are fully charged, therefore further increasing the available energy for the main console from 16Wh to ~19Wh.

In general I would be really interested in trying to calculate the power consumption of all of the parts based on the Zelda runtime, so we can have a better guess at the actual hardware and clock rates of the SoC.

For the display, the closest one with power measurements I could find is the Mate 8 display: http://www.anandtech.com/show/9878/the-huawei-mate-8-review/6
It's only 6" instead of 6.2", but also 1080p instead of 720p, so power consumption should be fairly close. I haven't seen a Switch in person yet, but from reading a lot of first impressions, I would guess that the Switch is brighter than the 3DS and Wii U displays, but doesn't come close to modern smartphones. So somewhere between 200 and 300 nits sounds reasonable, which would correlate to between 0.8 and 1 watt.
(For comparison, the 3DS got around 150 nits, while the New 3DS managed 161 nits, according to https://www.welt.de/wirtschaft/webw...eue-Nintendo-3DS-hat-den-Zocker-im-Blick.html)

The SoC calculations were already done earlier in this thread (assuming 4 cores); iirc that resulted in:

~2.5W: Eurogamer clocks, A57 @ 20nm (1.83 + ~0.7)
~1.5W: Eurogamer clocks, A57 @ 16nm (1.16 + ~0.4)
~6.5W: Foxconn clocks, A57 @ 20nm (5.46 + ~1)
~4.3W: Foxconn clocks, A72 @ 20nm (~3.3 + ~1)
~4.2W: Foxconn clocks, A57 @ 16nm (3.65 + ~0.6)
~2.8W: Foxconn clocks, A72 @ 16nm (2.2 + ~0.6)

All the values with a ~ are estimations, mostly based on http://www.anandtech.com/show/8811/nvidia-tegra-x1-preview/3
The other values come from:
http://www.anandtech.com/show/9330/exynos-7420-deep-dive/5
http://www.anandtech.com/show/8718/the-samsung-galaxy-note-4-exynos-review/6
http://www.anandtech.com/show/9878/the-huawei-mate-8-review/3

I think those numbers might be accurate to within ±20% of the actual values.

WLAN shouldn't be active while playing Zelda, or at least its power draw should be negligible.
Joy-Cons have their own battery.
Do we have any information about the speakers?
No idea about the RAM; that might be one of the bigger power consumers while playing.
NAND and microSD should also be negligible.

Did I forget anything?

Very interesting post, thanks! RAM should be LPDDR4 based on the OP and I believe Eurogamer leaked the clock speed of that, so we should be able to determine a reasonable power draw.

I would think the wifi would be active even when playing Zelda though, specifically for OS functions. Also, the BT connection from the Switch to the Joy-Cons should draw a bit of power, though likely quite negligible.
 

Donnie

Member
You may have a point here, since the top 3D screen should also theoretically be drawing double the power of a normal 240p screen. It still might not combine to reach the levels of a 720p screen at max brightness, especially since the Switch screen appears to go brighter than the 3DS XL's does.

Yeah, the Switch screen may consume more power than both, but the difference will be at most a few hundred mW.
 

z0m3le

Banned
Yeah, not to try and confirm anything, but it seems like if the chip moved to 16nm, A57s would work fine with the battery performance we are seeing.

Interesting that our estimates line up so closely.

The July devkits had a version of the Switch with the correct form factor, right? Maybe they ran those clocks because they didn't want to give developers a false impression of the concept of the device. A one-hour handheld would be unreasonable as a concept, so it could have just been battery constraints.
 

Polygonal_Sprite

Gold Member
Have you guys seen the Zelda handheld vs. docked frame rate comparisons yet?

https://youtu.be/Z9t2uY91kyg

There also seems to be dynamic lighting missing from the shrines compared to the Wii U build from last E3.

Unless there are big improvements from a day-one patch, I think it points to the Eurogamer clocks being final, does it not?
 

KingSnake

The Birthday Skeleton
Have you guys seen the Zelda handheld vs. docked frame rate comparisons yet?

https://youtu.be/Z9t2uY91kyg

There also seems to be dynamic lighting missing from the shrines compared to the Wii U build from last E3.

Unless there are big improvements from a day-one patch, I think it points to the Eurogamer clocks being final, does it not?

The difference between the Foxconn clocks and the Eurogamer clocks for the GPU is not that big anyhow, but whatever clocks it's running at in docked mode, I'm not even sure Zelda is taking full advantage of the GPU, given the aliasing, lighting and low-res textures.
 
Have you guys seen the Zelda handheld vs. docked frame rate comparisons yet?

https://youtu.be/Z9t2uY91kyg

There also seems to be dynamic lighting missing from the shrines compared to the Wii U build from last E3.

Unless there are big improvements from a day-one patch, I think it points to the Eurogamer clocks being final, does it not?

The Wii U version may also be lacking that indoor lighting. It could be a design change.

And like KingSnake said, the bump in the clocks wouldn't really account for any of this. They began porting the Switch version less than a year ago so I would imagine they just weren't able to fully optimize it for that platform.

I wonder if they might continue to work on optimization and send out a patch within the next year.
 

Polygonal_Sprite

Gold Member
The difference between the Foxconn clocks and the Eurogamer clocks for the GPU is not that big anyhow, but whatever clocks it's running at in docked mode, I'm not even sure Zelda is taking full advantage of the GPU, given the aliasing, lighting and low-res textures.

The Wii U version may also be lacking that indoor lighting. It could be a design change.

And like KingSnake said, the bump in the clocks wouldn't really account for any of this. They began porting the Switch version less than a year ago so I would imagine they just weren't able to fully optimize it for that platform.

I wonder if they might continue to work on optimization and send out a patch within the next year.

Sounds about right. Did they actually say it's only been in development on Switch for a year?
 
There have been some Switch ads and parts of the trailer where the game runs at a super clean 1080p. Maybe those come from a newer version that will presumably arrive in the form of a patch?
 

Hermii

Member
The Joy-Cons have their own battery though, so they shouldn't draw any power from the main unit battery, unless the tests were done with depleted Joy-Con batteries.

I even read somewhere (maybe Reddit?) that once the main console runs low on battery, the console starts sipping power from the Joy-Con batteries, if they have enough charge left. As the Joy-Cons supposedly last ~20 hours, that should be the norm if both are fully charged, therefore further increasing the available energy for the main console from 16Wh to ~19Wh.

In general I would be really interested in trying to calculate the power consumption of all of the parts based on the Zelda runtime, so we can have a better guess at the actual hardware and clock rates of the SoC.

For the display, the closest one with power measurements I could find is the Mate 8 display: http://www.anandtech.com/show/9878/the-huawei-mate-8-review/6
It's only 6" instead of 6.2", but also 1080p instead of 720p, so power consumption should be fairly close. I haven't seen a Switch in person yet, but from reading a lot of first impressions, I would guess that the Switch is brighter than the 3DS and Wii U displays, but doesn't come close to modern smartphones. So somewhere between 200 and 300 nits sounds reasonable, which would correlate to between 0.8 and 1 watt.
(For comparison, the 3DS got around 150 nits, while the New 3DS managed 161 nits, according to https://www.welt.de/wirtschaft/webw...eue-Nintendo-3DS-hat-den-Zocker-im-Blick.html)

The SoC calculations were already done earlier in this thread (assuming 4 cores); iirc that resulted in:

~2.5W: Eurogamer clocks, A57 @ 20nm (1.83 + ~0.7)
~1.5W: Eurogamer clocks, A57 @ 16nm (1.16 + ~0.4)
~6.5W: Foxconn clocks, A57 @ 20nm (5.46 + ~1)
~4.3W: Foxconn clocks, A72 @ 20nm (~3.3 + ~1)
~4.2W: Foxconn clocks, A57 @ 16nm (3.65 + ~0.6)
~2.8W: Foxconn clocks, A72 @ 16nm (2.2 + ~0.6)

All the values with a ~ are estimations, mostly based on http://www.anandtech.com/show/8811/nvidia-tegra-x1-preview/3
The other values come from:
http://www.anandtech.com/show/9330/exynos-7420-deep-dive/5
http://www.anandtech.com/show/8718/the-samsung-galaxy-note-4-exynos-review/6
http://www.anandtech.com/show/9878/the-huawei-mate-8-review/3

I think those numbers might be accurate to within ±20% of the actual values.

WLAN shouldn't be active while playing Zelda, or at least its power draw should be negligible.
Joy-Cons have their own battery.
Do we have any information about the speakers?
No idea about the RAM; that might be one of the bigger power consumers while playing.
NAND and microSD should also be negligible.

Did I forget anything?
Hmm, so given these figures and the battery life, Foxconn clocks at 20nm seem likely. That's interesting.
 

z0m3le

Banned
Hmm, so given these figures and the battery life, Foxconn clocks at 20nm seem likely. That's interesting.

With A72, yes, but 16nm with A57 at Foxconn clocks also works. Eurogamer's clocks don't seem to fit the battery life we are seeing with any combination.
 

Hermii

Member
With A72, yes, but 16nm with A57 at Foxconn clocks also works. Eurogamer's clocks don't seem to fit the battery life we are seeing with any combination.
I meant with Donnie's estimate of 6.5W for the SoC.

Edit: forget what I said, his estimate was 4.4W.
 

Donnie

Member
I meant with Donnie's estimate of 6.5W for the SoC.

Edit: forget what I said, his estimate was 4.4W.

Yeah, so really only two estimates match up well: either 16nm A57 at Foxconn clocks or 20nm A72 at Foxconn clocks, with an outside chance of 16nm A72 at Foxconn clocks (it looks a bit low, but is maybe possible). Everything else is either way too high or way too low.

I suppose if the Joy-Cons do power the Switch when it gets low, that would also play a part and mean that power usage is actually a bit higher than we think.
 

KingSnake

The Birthday Skeleton
We're talking about the power consumption of these cores at a given frequency vs. the overall power consumption of the Switch. What does throttling have to do with it?

That the power draw is not the only restriction. But fine, if you want to play around with just that, then yeah.
 

Hermii

Member
I'm starting to lean towards A57 at 16nm with the Foxconn clocks.

But then there is the Nintendo-characteristic embedded memory. How would that affect power consumption?
 

z0m3le

Banned
I'm starting to lean towards A57 at 16nm with the Foxconn clocks.

But then there is the Nintendo-characteristic embedded memory. How would that affect power consumption?

I am too. And the power impact of embedded memory would be very small, especially if it's only 4MB. I'm not sure, but doesn't the X1 already have 2MB of L2 cache? Maybe they extended it to 6MB or 8MB so they can still handle other L2 traffic and send a TBR (tile-based rendering) pass through it.
 

KingSnake

The Birthday Skeleton
I don't see the LPDDR4 memory included in the power draw calculation, or did I miss it?

Edit: I found this table, but it's from 2014, so I don't know if things have evolved in the meantime:

[Image: Synopsys table on DRAM power, 2014]
 
That the power draw is not the only restriction. But fine, if you want to play around with just that, then yeah.

I don't think that's what this conversation is about though. We're working under the assumption that the unit consumes 6.4 watts in handheld mode with the brightness at max, and then working backwards from there to see if we can deduce the power consumption of each part, including the SoC. Heat doesn't matter for this particular thought experiment.

I just think we don't know enough about the power consumption for the non-SoC components to make a reasonable conclusion.
 

z0m3le

Banned
I don't think that's what this conversation is about though. We're working under the assumption that the unit consumes 6.4 watts in handheld mode with the brightness at max, and then working backwards from there to see if we can deduce the power consumption of each part, including the SoC. Heat doesn't matter for this particular thought experiment.

I just think we don't know enough about the power consumption for the non-SoC components to make a reasonable conclusion.

Well... Eurogamer's clocks would mean nearly 4 watts for the rest of the entire device, and that seems extremely unlikely imo.
 

Donnie

Member
That the power draw is not the only restriction. But fine, if you want to play around with just that, then yeah.

I get that, but I don't think it's being ignored. The X1 SoC will use around 10W at full load; that's why it throttles. What we're looking at here is around 4W, which seems reasonable.
 
I wouldn't be surprised if Nintendo didn't increase GPU clock speeds for the Switch at all in docked mode for BotW. It could explain the framerate dropping down to the low 20s in docked mode at times at 900p, while handheld mode is a stable 30fps at 720p.

I mean, it's possible that it's a bug and we will indeed have a patch on day one. But knowing how Nintendo treated Twilight Princess on Wii, which was exactly like the GameCube version minus the controls, it doesn't seem like they took advantage of the Switch's hardware at all; they were rushing it for launch, and maybe didn't want to alienate the gamers who were looking forward to the game on the original console (the Wii U). Certainly not in docked mode.
 

LordOfChaos

Member
I'm starting to lean towards A57 at 16nm with the Foxconn clocks.

If it was on 16nm, I'd be pretty flabbergasted at it being an A57. The A72 is taped out and productionized on 16nm already and is superior in literally every way; it's not a matter of the A57 being smaller and cheaper, the A72 beats it in every aspect.

The big reason we were thinking A57 rather than A72 was that the A72 had never been made for the 20nm fab, and it would make little sense to put all that effort into porting it to an older fab that's on its way out and was always pretty leaky.

Imo, A57 and 20nm would almost surely go hand in hand, and likewise 16nm and A72/A73.
 

Durante

Member
There is nothing per se difficult about organizing around, e.g., ESRAM. It's more that it implies a specific architectural notion/ceiling (cf. the 10MB on Xenon vs. early MSAA resolution goals, and the consequent predicated-tiling mess, or the development of wide g-buffers), which reduces flexibility in terms of experimenting with and improving on renderer design.
Yes, that's one of the general problems with low-level architecture-dependent optimizations encoded in user-level code, and doesn't apply only to games or ESRAM: it (a) obfuscates the actual algorithm, making it harder to understand and maintain, (b) greatly reduces performance portability, if not actual portability, and (c) reduces malleability, which makes it harder to experiment with and test algorithmic improvements that might be more meaningful than low-level architecture-specific tweaking.

Given all these factors, for general applications an architecture which doesn't impose hardware-specific user-level optimization concerns to achieve "good enough" performance seems to have a market advantage, even if it is less optimal from a pure hardware perspective.
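
(To make the "ceiling" point concrete with a toy example: once the working set of render targets has to live in a fixed fast pool, such as XBO's 32MB of ESRAM mentioned earlier, the pool size directly caps resolution and g-buffer width. The layout below is hypothetical, purely for illustration.)

```python
# Toy illustration of a fixed fast memory pool acting as an architectural
# ceiling on renderer design. The g-buffer layout is hypothetical; the
# 32 MB figure is the size of XBO's ESRAM mentioned earlier in the thread.

FAST_POOL_BYTES = 32 * 1024 * 1024  # 32 MB

def gbuffer_bytes(width, height, bytes_per_pixel_per_target):
    """Total size of a set of full-screen render targets."""
    return width * height * sum(bytes_per_pixel_per_target)

# Hypothetical layout: four 32-bit colour targets plus a 32-bit depth buffer.
layout = [4, 4, 4, 4, 4]

for w, h in [(1920, 1080), (1600, 900), (1280, 720)]:
    size = gbuffer_bytes(w, h, layout)
    verdict = "fits" if size <= FAST_POOL_BYTES else "does NOT fit"
    print(f"{w}x{h}: {size / 2**20:.1f} MB -> {verdict} in the 32 MB pool")
```

Widening the g-buffer or bumping the resolution flips the verdict, which is exactly the kind of constraint that discourages experimenting with the renderer.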
 

z0m3le

Banned
If it was on 16nm, I'd be pretty flabbergasted at it being an A57. The A72 is taped out and productionized on 16nm already and is superior in literally every way; it's not a matter of the A57 being smaller and cheaper, the A72 beats it in every aspect.

The big reason we were thinking A57 rather than A72 was that the A72 had never been made for the 20nm fab, and it would make little sense to put all that effort into porting it to an older fab that's on its way out and was always pretty leaky.

Imo, A57 and 20nm would almost surely go hand in hand, and likewise 16nm and A72/A73.

16nm A57 is in Tegra Pascal, so it shouldn't surprise anyone. Truth is, it matches the power envelope here, just as A72 on 20nm comes very close to the battery life we're seeing. Thing is, 20nm doesn't make much sense for such a high-bandwidth product as the Switch. Nothing about the Eurogamer clocks at 20nm makes sense from an engineering standpoint, and battery life really doesn't work with those clocks either.
 

Hermii

Member
Forgive my ignorance, but would this be good news, or not-as-good news if true?
Great news compared to Eurogamer, but not as good as A72s.

16nm A57 is in Tegra Pascal, so it shouldn't surprise anyone. Truth is, it matches the power envelope here, just as A72 on 20nm comes very close to the battery life we're seeing. Thing is, 20nm doesn't make much sense for such a high-bandwidth product as the Switch. Nothing about the Eurogamer clocks at 20nm makes sense from an engineering standpoint, and battery life really doesn't work with those clocks either.
Tegra Pascal? You mean Parker?
 