I don't think a teardown qualifies as a preview. Plus, is DF doing teardowns usually, I don't remember seeing one from them.
And the last sentence you wrote tells us that it can't be Eurogamer's clocks on 16nm, because at those clocks the SoC would draw under 1.5W and couldn't have a heat profile like that with active cooling. Foxconn's clocks are also pretty much impossible on 20nm, because the battery would drain in just over an hour.
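For what it's worth, the battery-life side of that argument is simple arithmetic. Here's a rough sketch, assuming the Switch's roughly 16Wh pack (4310mAh at 3.7V) and purely hypothetical draw figures for illustration:

```python
# Back-of-envelope battery life estimate. The battery capacity is the
# reported Switch pack; the draw figures below are hypothetical.
BATTERY_WH = 16.0  # ~4310 mAh at 3.7 V

def battery_life_hours(system_draw_watts: float) -> float:
    """Hours of runtime at a constant total system draw."""
    return BATTERY_WH / system_draw_watts

# A hypothetical ~10 W total draw (high clocks on 20nm) empties the
# pack in about an hour and a half; ~5 W lasts twice as long.
print(round(battery_life_hours(10.0), 1))  # 1.6
print(round(battery_life_hours(5.0), 1))   # 3.2
```

Obviously the real draw depends on the process node, clocks, screen, and load, so this only frames the argument rather than settling it.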
Aren't the Foxconn clocks for docked?
I thought the Shield TV could run as fast or faster than those figures.
I may be getting my wires crossed, were there any specific docked and undocked performance claims from the Foxconn guy?
Pretty sure they did one for the Wii U. There's little point in doing a teardown for any other company, since they'll tell you the exact specs.
Are you sure? We wouldn't have needed the high-res pictures of the chips right here on GAF, and the hundreds of posts debating them, if DF had done that.
Ok, I didn't pay close attention to that post by z0m3le and missed that generalisation, which was indeed way too broad, but for some categories (read: BRDF model complexity and target dynamic ranges) of pixel shaders the entire shader can get away with fp16. For instance, every single pixel shader in this video, as last-gen as they are, runs at fp16. And don't worry about wasting your points in this thread - while I've seen my share of ISA-level shader optimisations, I'm not currently on the GPU side of the building in the organisation I work at, but I still do low-level optimisations on inherently visual stuff where loss of precision is not taken lightly. Suffice to say I wish I could get rid of all the fp64 in our codebase but I can't ; )
That was in fact the exact point I was replying to from the poster: "100% of pixel work can be done in fp16". Sweeping generalities in these enthusiast discussions are painful to witness. I'm not sure anyone in this thread has even attempted to understand the long pole on performance due to memory latency in the shader microcode, launch rate, etc., none of which will be helped by fp16. But even that's a generalization, because we already hit that situation, which is why we sometimes fold passes when we have idle ALU cycles, etc.
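To make the precision point concrete for anyone following along: fp16 is perfectly adequate for values with limited dynamic range (like LDR color), but falls apart outside that range. A quick illustrative sketch, using numpy's float16 as a stand-in for shader half-precision:

```python
import numpy as np

# fp16 has a 10-bit mantissa: plenty for [0,1] color work, but it loses
# integer precision above 2048 and overflows past 65504.
color = np.float16(0.731)           # a typical [0,1] shading value
print(float(color))                 # ~0.7310, error well under 1/1024

big = np.float16(4097.0)            # fp16 spacing here is 4 units
print(float(big))                   # 4096.0 -- 4097 is not representable

print(float(np.float16(70000.0)))   # inf -- exceeds the fp16 max (65504)
```

Which is exactly why "100% of pixel work can be done in fp16" is too broad: it depends on the quantities a given shader actually manipulates.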
True. And yet, that's somewhat orthogonal to the precision topic.
Sure, but they are the bread and butter of a console GPU programmer.
MDave did tests on his Shield TV and found out it throttles either the GPU or CPU (or both to a lesser extent) to levels below the Foxconn clocks after a few minutes at max clocks.
The Foxconn leaker made no mention of docked or undocked clocks. In fact, we still have no official word on whether or not clocks change when docked, although Eurogamer's report says they can.
Hillary Clinton's hopes and dreams.
Thanks, I thought so. I'm curious whether the performance/throttling is identical between the 2017 Shield TV and the previous one; I think I read that the chips don't look identical?
Nothing new, really. Basically it is still a battle between Eurogamer and Foxconn, and we will most likely only know the answer when someone scans the SoC.
I haven't been keeping up with this thread for the last day or so.
Questions:
1. Have we learned anything new about Switch's final specs?
2. What are the latest reasonable, plausible rumors?
Suffice to say I wish I could get rid of all the fp64 in our codebase but I can't ; )
True. And yet, that's somewhat orthogonal to the precision topic.
Nothing new, really. Basically it is still a battle between Eurogamer and Foxconn, and we will most likely only know the answer when someone scans the SOC.
It's like they're rushing everything, which explains the lackluster lineup and info on online content.
It's really weird Nintendo didn't make an OS Direct and left it up to the press to show it off. I assume that goes under "preview". I would also assume a teardown goes under "preview" but I have no idea.
I haven't been keeping up with this thread for the last day or so.
Questions:
1. Have we learned anything new about Switch's final specs?
2. What are the latest reasonable, plausible rumors?
I think the final bandwidth is still up in the air, even with that find, right? They could have added modifications to the board that increase the bandwidth, like the Wii U had.
Based on the pictures in the OP we have reasonably identified (like 90%) the RAM as two 2GB modules, for a total of 4GB with 25.6GB/s of bandwidth.
Beyond that we aren't really any closer to figuring out final specs. I think we determined the SoC in the OP is almost exactly the same size as a Tegra X1, though that doesn't tell us much beyond making any more than 256 CUDA cores theoretically impossible.
This gives us no indication of 20nm vs 16nm and certainly doesn't tell us anything about CPU cores or clock speeds.
It's also possible that the unit in the OP is not a final retail unit. That's about it I think.
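For reference, the 25.6GB/s figure is just the standard LPDDR4 arithmetic, assuming the identified modules are x32 parts running at 3200MT/s (which is the inference from the markings, not a confirmed spec):

```python
# Where 25.6 GB/s comes from, assuming two 32-bit LPDDR4 channels
# at 3200 MT/s. Both figures are inferred, not confirmed.
bus_width_bits = 2 * 32        # two x32 modules = 64-bit bus
transfer_rate_mts = 3200       # mega-transfers per second
bandwidth_gbs = bus_width_bits / 8 * transfer_rate_mts / 1000
print(bandwidth_gbs)  # 25.6
```

If the modules turned out to be slower bins (e.g. 1600MT/s), the same math would halve that number, which is why the bandwidth question stays open until someone confirms the parts.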
The latest specs are: we have no idea. Could be anywhere from 4x A57s with 3 usable at 1GHz, 256CC at 768MHz docked to 4x A72s at 1.78GHz and 256CC at 921MHz docked. Or anywhere in between. Or the launch clocks are the former but could be raised after launch, potentially up to the latter.
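Those extremes translate to roughly the following fp32 numbers, assuming a Maxwell-style 2 flops per CUDA core per cycle (illustrative only; the actual GPU architecture is unconfirmed):

```python
# Rough fp32 throughput for the rumored extremes.
# Assumes 2 flops/cycle per CUDA core (fused multiply-add).
def gflops(cuda_cores: int, clock_mhz: float) -> float:
    return cuda_cores * 2 * clock_mhz / 1000

print(gflops(256, 768))  # 393.216 -- low end, Eurogamer docked clock
print(gflops(256, 921))  # 471.552 -- high end, Foxconn docked clock
```

So even the extremes are within about 20% of each other on paper; the CPU core type and count is arguably the bigger unknown.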
To quote a colleague from the GPU department: 'Guys, we need to figure out something to do in 8-bit, lest we'll soon be using the minority part of the GPU die.'
Are you watching the 8-bit ALU developments in the DNN-on-GPU space? I can't tell if that will thrill or horrify you. I'm just morbidly curious I suppose...
Oh, definitely. No silver bullets in any of the computational domains I've ever touched upon.
Sure, but the context here was whether half-precision ALU is some sort of fountain of performance equalization. For effective, measurable improvements in throughput, the devil is in the architectural details, and there are no easy answers in real-world (i.e. non-contrived) workloads.
No clue if and who on the team will go to GDC this year, but I'd love to get to SIGGRAPH (again, no idea who from the team will attend, but chances are not on my side (again), unless I manage to come up with a paper, which is utterly improbable given my lowly coder status vis-a-vis the math-whiz kids I'm surrounded by).
If you had a dollar for every time I said "ah, improving this factor of hardware utilization will finally reduce our net wall time", you could buy a nice sandwich and I would still be full of sad/interesting stories. (If you will be at GDC I will buy you said sandwich.)
I'm still not sure why docked specs matter that much. I mean most if not all of the difference will be used to bump up the resolution.
so tomorrow ends the hardware embargo... can we expect any news on the specs?
I'm still not sure why docked specs matter that much. I mean most if not all of the difference will be used to bump up the resolution.
It matters because it will look better on a bigger screen, and maybe devs will have room to add a few more things besides resolution. The better question is why do we need that expensive piece of plastic to connect the Switch to a TV.
Because your TV doesn't have a USB-C input and the Switch doesn't have an HDMI port, so you need a dock or a hub to expand the connectivity options of the console and enable that.
As well as enable the usage of common USB-A accessories at home, like ethernet adapters.
You say that like it isn't better graphics. The thing is, games should be made like that: graphics that work at a good resolution and with a good frame rate. To me that is good graphics. Resolution and frame rate shouldn't be sacrificed for "better graphics". At least not frame rate.
It matters because it will look better on a bigger screen, and maybe devs will have room to add a few more things besides resolution. The better question is why do we need that expensive piece of plastic to connect the Switch to a TV.
And the power, USBs? Also the form factor... C'mon.
You can buy a USB-C to HDMI cable for 4 euros.
The one you get with the Switch doesn't cost as much as one bought separately anyway, so "expensive" is relative. Also it's way more convenient and useful than just a cable.
And the power, USBs? Also the form factor... C'mon.
Nintendo is charging 90 euros for the dock. Not sure how much it adds to the cost of the console.
Power is also a cable... I get the form factor thing, but for 90 euros?
I think the final bandwidth is still up in the air, even with that find, right? They could have added modifications to the board that increase the bandwidth, like the Wii U had.
MDave did tests on his Shield TV and found out it throttles either the GPU or CPU (or both to a lesser extent) to levels below the Foxconn clocks after a few minutes at max clocks.
The Foxconn leaker made no mention of docked or undocked clocks. In fact, we still have no official word on whether or not clocks change when docked, although Eurogamer's report says they can.
I thought I would see if my Shield TV could handle a 4K 60fps HDR 10-bit video (on my shiny new LG C6 TV), and it handles it perfectly. Would that mean anything useful about how well the X1 is optimised for high bandwidth usage?
I don't mean that bumping the resolution doesn't matter; I mean the "real" power of the Switch will always lie in its handheld mode, as it is the required mode around which everything was built.
I don't think games will be made for handheld mode particularly. Handheld and TV go hand in hand because of the power difference, so developers don't actually have to target one spec and then port to the other. It's just a resolution difference.
The results I got from the Shield TV were that even when locked to a maximum of 1GHz, if the CPU were under load and the GPU was getting pushed hard too, the GPU would fluctuate between 614MHz and 1GHz, averaging about 768MHz most of the time. If the CPU was allowed to go at its maximum, the GPU would get throttled more.
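In other words, that ~768MHz figure is a duty-cycle average rather than a steady clock. A toy sketch of how such an average falls out of sampling (the sample values below are made up, just fluctuating between the observed 614MHz floor and the 1GHz ceiling):

```python
# Hypothetical GPU clock samples (MHz) taken while both CPU and GPU
# are loaded; the real data would come from polling sysfs counters.
samples = [1000, 614, 768, 768, 614, 1000, 768, 614, 768, 768]
avg = sum(samples) / len(samples)
print(round(avg))  # 768
```

The point being that a sustained-load average like this says more about thermals than any single peak-clock reading does.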
I'm still not sure why docked specs matter that much. I mean most if not all of the difference will be used to bump up the resolution.
I believe USB-C docking stations are relatively new technology, so they are generally not that cheap to get at this time.
Nintendo is charging 90 euros for the dock. Not sure how much it adds to the cost of the console.
Right. So it is possible that there will be upgraded/downgraded polygon models and/or viewable geometry between modes, correct? We may be seeing something like that with Zelda, but we need to look at the difference between the final versions.
Clock speed doesn't just affect pixel-related throughput. There's still the front end of the GPU to consider, i.e. geometry/triangle setup.
The CPU speed doesn't change between docked and undocked mode, so that is one reason why this remains a hot topic.
This is one of the things I'm wondering is still true, was this only confirmed via Eurogamer?
Actually, that is true. We don't have official confirmation of that. Changing the clock speed between modes could lead to issues with game logic, physics, etc., so it would make sense for it not to change.
This is one of the things I'm wondering is still true; was this only confirmed via Eurogamer?
Eurogamer is actually still the only source who has said there are different clock speeds between docked and undocked to begin with. I still think there's an (admittedly small) chance that there is no clock speed change at all, which would explain why titles like Zelda and Splatoon 2 are not 1080p when docked.
But yeah EG is the only source saying the CPU clock will not change when docked/undocked.
wut? you're saying both things at the same time.
Eurogamer said CPU clocks won't change (always at 1GHz), and GPU clocks will change from 307.2MHz portable to 768MHz docked.
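Taking Eurogamer's numbers at face value, the docked GPU bump lines up almost exactly with the 720p-to-1080p pixel increase:

```python
# Docked GPU clock bump vs. the resolution bump, per Eurogamer's figures.
clock_ratio = 768 / 307.2                   # docked / portable GPU clock
pixel_ratio = (1920 * 1080) / (1280 * 720)  # 1080p vs 720p pixel count
print(round(clock_ratio, 2))  # 2.5
print(round(pixel_ratio, 2))  # 2.25
```

2.5x the clock for 2.25x the pixels: the docked bump slightly outpaces the raw resolution increase, leaving a little headroom but not much beyond resolution.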
Yeah, I seriously doubt that. The patent for the Switch mentions that the system fan is faster in docked mode, which wouldn't seem necessary if the specs were the same between modes. One of Eurogamer's sources also stated that the difference between docked and undocked was like making a game for two different specs. If this isn't true, Eurogamer's info is very wrong, and I don't think anything we know about the system so far discredits their info yet.
Eurogamer is actually still the only source who has said there are different clock speeds between docked and undocked to begin with. I still think there's an (admittedly small) chance that there is no clock speed change at all, which would explain why titles like Zelda and Splatoon 2 are not 1080p when docked.
But yeah EG is the only source saying the CPU clock will not change when docked/undocked.
Yes, but it could at least be an optional mode for more experienced teams who might need it; boost mode on the PS4 Pro (which, IIRC, Digital Foundry says boosts the CPU clock too) sounds pretty successful without devs having had foreknowledge of it.
Actually, that is true. We don't have official confirmation of that. Changing the clock speed between modes could lead to issues with game logic, physics, etc., so it would make sense for it not to change.