
Nintendo Switch Dev Kit Stats Leaked? Cortex A57, 4GB RAM, 32GB Storage, Multi-Touch.


KingSnake

The Birthday Skeleton
Cool, can you point them out so I can remove them?

Xbone's GPU - 1.31 TFLOPS, Switch - 393 GFLOPS docked. That's exactly 30%.
Xbone's CPU - 8 (but only 6 for games) Jaguars clocked at 1.75 GHz vs. 4 A57 clocked at 1 GHz.
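(Sanity-checking that raw ratio with the numbers quoted above; a minimal Python sketch, nothing console-specific assumed:)

# Raw docked FP32 ratio from the figures quoted above, before any architecture adjustment.
xb1_gpu_gflops = 1310        # 1.31 TFLOPS
switch_docked_gflops = 393

print(f"{switch_docked_gflops / xb1_gpu_gflops:.0%}")  # -> 30%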

How do you get from this to 45%?

My point was that XB1 struggles to reach 1080p, so Switch will struggle to bring that to 720p, which will drag the handheld res down even further.

Yes, to 540p most likely.
 

z0m3le

Banned
Yeah... More like between 40 and 45.
That is, if XB1 targets 1080p, which it does for maybe 1 game in 10?
That is... for Switch docked to target 720p (which makes you wonder what handheld would be targeting).
Let's be fair, with the current hardware available, it could've been a better machine for these ports. Not that it matters for me. But it's clear Switch wasn't made with that in mind. It's a handheld which ups its clocks when docked to up the resolution on TV. Nothing more, nothing less.

As for the CPU cores, no, there are no hidden CPU cores or anything like that exposed to devs. Nintendo usually documents that kind of stuff. They did for the 3DS when 1 core out of 2 was available, describing one as the syscore and the other as the appcore.

I agree with you, but if we're splitting hairs over the %, it should be 45% unless you are talking about the slightly faster XB1S. It would only need to be 40% to do 720p versions of 1080p games though, much like the difference between Switch portable and Switch docked.

As for the CPU cores, we are speculating until we hear about a split, which we haven't. If and when we do, we can look at those numbers; currently what we know is that developers have 4 A57 cores, which is what I was saying.



480p would be the correct ratio, so similar to what you get on the Wii U GamePad but without all the extra blurriness from streaming (think how reds looked vs. every other color). It isn't ideal, and I'm not trying to say this is what I would have made, but it isn't a lost cause either.
 
So it's not impossible but crazy to suggest? lol

Maxwell originated on a 28nm node. 2nd-gen Maxwell may have moved to a 20nm node, but the fact that it's the same architecture suggests it was more of a die shrink than a new design that required more transistors.

If the Switch SoC is 16nmFF then it really shouldn't need fans for the clocks it is running at. If anything it should be clocked higher while still requiring fans and yet that isn't the case.

Maybe they have stuck with 20nm and 3 SMs, but it is looking more and more unlikely when a 28nm node suggests that they had to clock it very low to get it to around 1.5W when running the GPU.



Considering that neither the insiders nor Eurogamer debunked the leaked specs, it is unlikely. They never even said, "Hey, the GPU clock is wrong. It's supposed to be around 1.2GHz, not 1GHz."

They even said the Switch final specs would be similar and yet we have a severely underclocked system.

Fans would be required for the Switch to not get throttled while docked and played in a hot room
 

EatChildren

Currently polling second in Australia's federal election (first in the Gold Coast), this feral may one day be your Bogan King.
CPU looks to be Switch's strongest point, competitively.

I'll drink to the day Nintendo releases any hardware where the CPU isn't by far the most absurdly weakest link. It's borderline historic at this point how routinely Nintendo fuck this up.
 

Rodin

Member
I don't think you guys have any idea of how ridiculously awful sub res games can look on a portable screen.

There's this thing called lowering details, and with today's temporal filtering techniques, releasing a game that is straight up 540p would mean not having a fucking clue about what they're doing.

I'll drink to the day Nintendo releases any hardware where the CPU isn't by far the most absurdly weakest link. It's borderline historic at this point how routinely Nintendo fuck this up.
The funniest thing here is thinking back to those early WSJ articles that talked about Nintendo using industry-leading chips in response to the criticism they got for the Wii U hardware not being able to keep up with the competition. Reading that quote after seeing the CPU clock speed is both hilarious and infuriating.
 

Zedark

Member
45% of XBO? Which stage of grief is this now? Delusion? Must be an extra stage for Nintendo diehards. lolz

Accounting for Nvidia's architectural advantages over AMD. The ratio is about 4:3 in Nvidia's favour, so Switch would have 393*4/3 = 524 GFLOPS AMD-equivalent. 524/1300*100% = 40%, so we're most of the way there already. Don't know why 45 instead of 40, though.
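Spelling that arithmetic out (the 4:3 factor is the assumption being made here, not a measured constant), a quick Python sketch:

# Back-of-the-envelope conversion: scale Switch's Maxwell GFLOPS by an assumed
# 4:3 Nvidia-vs-AMD efficiency ratio, then compare against XB1's ~1300 GFLOPS.
switch_docked_gflops = 393
nvidia_vs_amd = 4 / 3            # assumed architectural advantage
xb1_gflops = 1300

amd_equivalent = switch_docked_gflops * nvidia_vs_amd
print(round(amd_equivalent))                   # ~524
print(f"{amd_equivalent / xb1_gflops:.0%}")    # ~40%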
 

Rolf NB

Member
I'll drink to the day Nintendo releases any hardware where the CPU isn't by far the most absurdly weakest link. It's borderline historic at this point how routinely Nintendo fuck this up.
People have been complaining about "tablet CPUs" in the PS4 and Xbone, so it may not be that Nintendo went out of their way; they just had a really low target.
You'll have your day of drinking.
 

TLZ

Banned
The funniest thing here is thinking back to those early WSJ articles that talked about Nintendo using industry-leading chips in response to the criticism they got for the Wii U hardware not being able to keep up with the competition. Reading that quote after seeing the CPU clock speed is both hilarious and infuriating.

Yes. Very :(
 

Oregano

Member
I don't think you guys have any idea of how ridiculously awful sub res games can look on a portable screen.

There's this thing called lowering details, and with today's temporal filtering techniques, releasing a game that is straight up 540p would mean not having a fucking clue about what they're doing.


The funniest thing here is thinking back to those early WSJ articles that talked about Nintendo using industry-leading chips in response to the criticism they got for the Wii U hardware not being able to keep up with the competition. Reading that quote after seeing the CPU clock speed is both hilarious and infuriating.

Industry Leading chips*

*downclocked to oblivion
 

Zedark

Member
Isn't that valid only on PC, though?

Hmm, I couldn't say. I know it was used multiple times before in these discussions to compare with XB1 and PS4, but maybe I am wrong? Could use clarification from someone who is sure about whether this is true for consoles as well, please.
 
Accounting for Nvidia's architectural advantages over AMD. The ratio is about 4:3 in Nvidia's favour, so Switch would have 393*4/3 = 524 GFLOPS AMD-equivalent. 524/1300*100% = 40%, so we're most of the way there already. Don't know why 45 instead of 40, though.

Right. And we can disregard the RAM bandwidth, RAM amount, CPU core count etc.?
A system's ability ain't just the floating point operations numbers, yeah?
 

ggx2ac

Member
Fans would be required for the Switch to not get throttled while docked and played in a hot room

Yes, I pointed that out back on page 98.

http://m.neogaf.com/showpost.php?p=226673825

I was arguing about fab nodes to make the case that it isn't running on a 16nm node, considering the low clock speeds combined with it having a fan.

I should have been more specific and said that it shouldn't need a fan while it is portable at those clock speeds, considering its wattage should be under 5W. (With reference to one of the recent patents, which states that the fan is active even while portable.)

Speaking of which, for anyone else reading:

http://www.anandtech.com/show/9289/the-nvidia-shield-android-tv-review/9

Here is a picture that shows the thermals of the Shield TV. We know it uses a TX1 at its stock clock speed, it draws around 20W while running games, and it has a fan.

They showed it at 34°C, which is only around a 10°C increase over room temperature. That means it's warm but not in any way hot compared to, say, the PS4, which can get up to 56°C: noticeably hot when you feel the air coming from the vents, but obviously nowhere near overheating.

http://wccftech.com/playstation-4-heat-regulation-test-thermal-camera/
 

manuel

Neo Member
Switch might have a RAM bandwidth advantage.
So yes? I hope it is a bit more, because this chip is really not suited for gaming. I know you can't compare a PC to a dedicated machine, but... I do not see it being able to run AAA games reasonably, at any settings or resolutions.
 

Mokujin

Member
I think that once the device is out and we see what's inside, we'll have a good laugh at how Nintendo managed to screw up another hardware design.

Some baffling decisions here and there. Not having a big.LITTLE setup sounds stupid in this day and age.

I think most people don't understand what big.LITTLE means. Most devices, and specifically Tegra, use cluster migration, meaning it either uses the 4 A57s or the 4 A53s in low-power mode; but in a device that is going to run at high power most of the time, having those A53s means almost nothing aside from saving a bit of battery.
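To make the distinction concrete, here's a purely illustrative Python sketch of cluster migration vs. an HMP setup (not actual scheduler code, just the idea described above):

# Illustrative only: which cores a game could run on under each scheme.
A57_CLUSTER = ["A57"] * 4   # high-performance cores
A53_CLUSTER = ["A53"] * 4   # low-power cores

def cluster_migration(high_load: bool):
    """Tegra X1 style: only one cluster is active at any moment."""
    return A57_CLUSTER if high_load else A53_CLUSTER

def hmp(high_load: bool):
    """Heterogeneous multi-processing: both clusters can run at once."""
    return A57_CLUSTER + A53_CLUSTER if high_load else A53_CLUSTER

print(len(cluster_migration(True)))  # 4 cores available under load
print(len(hmp(True)))                # 8 cores available under load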

On the other hand, this is the missing piece in the Switch puzzle. In my eyes, if there has been some kind of customization over the Tegra X1, it is in the CPU setup. My Espresso theory may be too wild, but if it's not true I would expect a couple of extra A53s working in an HMP setup along with the A57s (Nvidia has already made an HMP setup for Tegra Parker, so they know how to do them).

But I would advise against expecting higher clocks or extra SMs; that seems like wishful thinking and setting yourselves up for extra disappointment.
 

KingSnake

The Birthday Skeleton
Accompanied by what OS overhead ;P

Yeah, using 1-2 CPU cores and 2-3 GB RAM is like no overhead at all.

The point is that it's not that easy to provide a ratio outside PC where you can use similar specs and just switch the GPU.

If you add the memory and memory bandwidth on top and compensate with tile-based rendering, it becomes even blurrier.

Still don't see how all these could account for an increase to 45% of Xbone. Not at these clocks. Maybe if you use a bit of FP16, but then it becomes even blurrier.
 

KingSnake

The Birthday Skeleton
Everything lines up if these have been the CPU and GPU clocks in the devkits since the beginning, and the fan has a different purpose than cooling an overclocked or even a stock TX1.
 

z0m3le

Banned
45% of XBO? Which stage of grief is this now? Delusion? Must be an extra stage for Nintendo diehards. lolz

Read on.

Xbone's GPU - 1.31 TFLOPS, Switch - 393 GFLOPS docked. That's exactly 30%.
Xbone's CPU - 8 (but only 6 for games) Jaguars clocked at 1.75 GHz vs. 4 A57 clocked at 1 GHz.

How do you get from this to 45%?
XB360 is 240 GFLOPS with the ancient AMD architecture from 2005. Wii U is 176 GFLOPS with the ancient AMD architecture from 2008. Yet: "Some of the developers we spoke to indicated to us that the console will have 50% more processing power compared to the PlayStation 3 or Xbox 360. This is yet to be confirmed by Nintendo."
XB1 is 1330 GFLOPS with the ancient GCN architecture from 2011. Wii U's 176 GFLOPS is only worth about 128 GFLOPS of GCN, putting XB1 at about 10x the Wii U's performance. Now for the next part: Maxwell and Pascal have identical per-flop performance, and this is also true of GCN and Polaris (GCN 4.0), which is lucky for us because we can then match up the GTX 1060 and the RX 480 to find that, flop for flop, the 4.3 TFLOPS GTX 1060 is 40% faster than the 5.8 TFLOPS RX 480 on average.

393 + 40% = 550+, and this is just to match the architecture differences. There are always going to be outliers, but this holds across the largest samples. XB1 at launch was 1228 GFLOPS; -55% is 552 GFLOPS, putting Switch's performance over 45%. XB1 currently is 1330 GFLOPS; -58.5% is 552 GFLOPS, putting Switch's performance over 41.5% of XB1. XB1S is 1400 GFLOPS; -61% = 546 GFLOPS, putting Switch's performance over 39% of XB1S.
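Here's that comparison laid out as a rough Python sketch (the 1.40 factor is his GTX 1060 vs. RX 480 estimate applied to Switch, not anything measured on the actual hardware):

# Apply the claimed ~40% per-FLOP Maxwell/Pascal advantage to Switch's docked GPU,
# then compare the result against each XB1 figure cited above.
switch_docked = 393
arch_advantage = 1.40            # assumption from the GTX 1060 vs RX 480 comparison

effective = switch_docked * arch_advantage   # ~550 "GCN-equivalent" GFLOPS

for name, gflops in [("XB1 launch", 1228), ("XB1 current", 1330), ("XB1S", 1400)]:
    print(f"{name}: {effective / gflops:.1%}")
# XB1 launch: 44.8%, XB1 current: 41.4%, XB1S: 39.3%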

Again without any new features or mixed precision.

We've been talking about Jaguar cores vs. A57 cores. I'd compare that to PS4 rather than XB1, but it isn't far off the mark for XB1 either, as its cores are only clocked ~9% higher.

Right. And we can disregard the RAM bandwidth, RAM amount, CPU core count etc.?
A system's ability ain't just the floating point operations numbers, yeah?

All about context, I'm only talking about the graphical capabilities.
 

KingSnake

The Birthday Skeleton
So practically you're using a comparison made between 2 PC GPUs and extrapolating it to Switch vs. Xbone. I don't think that's correct because of the big difference in driver quality, but it's your opinion.
 

Hermii

Member
I'll drink to the day Nintendo releases any hardware where the CPU isn't by far the most absurdly weakest link. It's borderline historic at this point how routinely Nintendo fuck this up.

To be fair, all three went with a weak CPU this gen.

Actually, I don't think the Wii U CPU is absurdly weak compared to the rest of the hardware. It's absurdly weak overall.
 

z0m3le

Banned
So practically you're using a comparison made between 2 PC GPUs and extrapolating it to Switch vs. Xbone. I don't think that's correct because of the big difference in driver quality, but it's your opinion.

I'm comparing architectures. Do you think AMD makes worse drivers on PC than for Xbox or PS4? They would be well suited. We've also seen Maxwell on PC outperform PS4 with far less GPU performance. Do you just not believe in architectures yielding different results? I've even shown you VLIW5 vs GCN, PC to PC; there are literally 100+ benchmarks comparing GCN/Polaris to Maxwell/Pascal, and they point to a 40% advantage on average. Going across to consoles, you have developers optimizing for consoles, so you see a smaller advantage, but it is still ~30%. I don't see why you have such a problem with the idea that Switch's performance can be compared to XB1 through benchmarks of these architectures on PC, where they would have the same optimizations.
 

z0m3le

Banned
There is so much stretching and reaching in that post, you really need to start a yoga studio.

I'm doing what I can to add to this thread. We are comparing benchmarks of A57 to Jaguar, but somehow Maxwell/Pascal vs Polaris/GCN isn't valid?

The whole previous architecture stuff was to prove that architectures do provide increased performance. It isn't some fairytale
 
Do you believe there is an advantage of GCN vs NVIDIA Cuda cores, but not to the percentage he is calculating?

I'm doing what I can to add to this thread. We are comparing benchmarks of A57 to Jaguar, but somehow Maxwell/Pascal vs Polaris/GCN isn't valid?

The whole previous architecture stuff was to prove that architectures do provide increased performance. It isn't some fairytale


I will repeat as many times as you guys need to understand it: A system ain't just its FLOPS number, and the system's graphics pipeline isn't comprised of just a bunch of shader units.
 

z0m3le

Banned
I will repeat as many times as you guys need to understand it: A system ain't just its FLOPS number, and the system's graphics pipeline isn't comprised of just a bunch of shader units.

We are discussing what we know; we can't discuss what we don't know. But I'm comparing the entire graphics pipeline of Pascal to Polaris and finding 40% better performance on average, and that is with Pascal having less memory bandwidth too, so I think you are just missing the concept of what we are doing here: discussing specs that we do have.
 

KingSnake

The Birthday Skeleton
I'm comparing architectures. Do you think AMD makes worse drivers on PC than for Xbox or PS4? They would be well suited. We've also seen Maxwell on PC outperform PS4 with far less GPU performance. Do you just not believe in architectures yielding different results? I've even shown you VLIW5 vs GCN, PC to PC; there are literally 100+ benchmarks comparing GCN/Polaris to Maxwell/Pascal, and they point to a 40% advantage on average. Going across to consoles, you have developers optimizing for consoles, so you see a smaller advantage, but it is still ~30%. I don't see why you have such a problem with the idea that Switch's performance can be compared to XB1 through benchmarks of these architectures on PC, where they would have the same optimizations.

I believe in architectures yielding different results, but I believe that you can't just take the ratio from PC and apply it to Switch and Xbone. And going for the max ratio and then even rounding it up to 45%. Especially since the memory bandwidth plays a big role in all of this.
 
We are discussing what we know; we can't discuss made-up numbers. But I'm comparing the entire graphics pipeline of Pascal to Polaris and finding 40% better performance on average, and that is with Pascal having less memory bandwidth too, so I think you are just missing the concept of what we are doing here: discussing specs that we do have.

Pascal? Polaris? What the hell are you doing man? As you said, we don't have all the information needed, but you are happily cranking out precise numbers based on Pascal and Polaris which has nothing to do with anything. Lordy... You need to take a break, man. You are just masturbating mathematically at this point.
 

z0m3le

Banned
I believe in architectures yielding different results, but I believe that you can't just take the ratio from PC and apply it to Switch and Xbone. And going for the max ratio and then even rounding it up to 45%. Especially since the memory bandwidth plays a big role in all of this.

The memory bandwidth of the GTX 1060 vs. the RX 480 is much lower, actually. The 45% was a rough estimation done in my head; I'm at a computer right now, so I did the extra work and came away with better than 41.5%, but it really only needs to be at ~40% to do 720p versions of 1080p XB1 games. In fact, the entire clock speed difference puts the handheld at 40% of the docked speed and does exactly this.
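For reference, the pixel-count side of that "~40%" claim as a quick Python sketch (strictly by pixel count, 720p works out to roughly 44% of 1080p):

# Quick check of the resolution scaling in the post above.
pixels_1080p = 1920 * 1080
pixels_720p  = 1280 * 720

print(f"720p / 1080p pixel ratio: {pixels_720p / pixels_1080p:.1%}")   # 44.4%

handheld_vs_docked = 0.40   # the clock ratio cited above
print(f"handheld / docked clock ratio: {handheld_vs_docked:.0%}")      # 40%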

I'm doing my best to give us references we can use to determine Switch's performance since it is more or less known what the base performance is. I've said over and over that these are estimations of baseline performance.
 

czk

Typical COD gamer
Pascal? Polaris? What the hell are you doing man? As you said, we don't have all the information needed, but you are happily cranking out precise numbers based on Pascal and Polaris which has nothing to do with anything. Lordy... You need to take a break, man. You are just masturbating mathematically at this point.

Didn't he get perma-juniored during the WUST threads?
 

z0m3le

Banned
Pascal? Polaris? What the hell are you doing man? As you said, we don't have all the information needed, but you are happily cranking out precise numbers based on Pascal and Polaris which has nothing to do with anything. Lordy... You need to take a break, man. You are just masturbating mathematically at this point.

I'm just at work killing time in a speculation thread, speculating. You are browbeating and not reading. Pascal and Polaris have no performance advantage in the metric I'm measuring (flops) over their "previous" architectures. Hell, Pascal didn't even exist 2 years ago; it was added to the roadmap because Volta was delayed and Nvidia decided to shrink Maxwell again.

Here, Parker is Maxwell-based, it is just called Pascal:
[Image: Tegra roadmap]

Didn't he get perma-juniored during the WUST threads?

Nope. It was a thread I created at 4am, posting a Fudzilla article that was anti-Apple in tone. The mod that juniored me is unknown, and when I asked another mod about it, they didn't know anything but didn't want to step on toes. I never asked again because I don't really need to post threads. Also, I'm right here, man, you can just ask me about it.
 
I didn't follow all the drama. Thanks for the heads up. I will move on. ;)
Let's change the topic: what do you think is up with the dev kit? They had barely-edited TX1 docs, but now we're getting these clock frequency numbers, and it doesn't seem like it was downgraded over the course of development.
 