Great news compared to Eurogamer but not as good as a72s.
Tegra pascal? You mean Parker?
Parker is called that because of the P in Pascal. I did mean the Pascal version of Tegra.
I thought Parker was the only version of Pascal Tegra.
Yes, that's one of the general problems with low-level architecture-dependent optimizations encoded in user-level code, and it doesn't apply only to games or ESRAM: it (a) obfuscates the actual algorithm, making it harder to understand and maintain, (b) greatly reduces performance portability, if not actual portability, and (c) reduces malleability, which makes it harder to experiment with and test algorithmic improvements that might be more meaningful than low-level architecture-specific tweaking.
Given all these factors, for general applications an architecture which doesn't impose hardware-specific user-level optimization concerns to achieve "good enough" performance seems to have a market advantage, even if it is less optimal from a pure hardware perspective.
I think we're veering off from the subject. It's not that everybody needs to optimise their apps up the wazoo. It's about what one does need to do to hit a performance target in a scenario where a naive port, whatever that entails, misses that. And lo and behold, consoles do tend to be underpowered compared to workstations. So unless you're doing a Tetris* you'd likely have to think about optimisations, whether that obfuscates the algorithm or not. We live in a reality where people buy (and generally use) products based on how they perform, not based on how pure their algorithmic implementation is.
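As a toy illustration of the obfuscation point (a) above (entirely hypothetical, not taken from any real console codebase), here is the same trivial filter written plainly and then restructured around a fixed-size fast scratch buffer in the spirit of something like ESRAM; the second version bakes a hardware detail straight into the algorithm:

```python
# Toy example: the same 1D blur written plainly, then tiled around a
# fixed-size fast buffer. The 32 KB "scratch" size is an invented stand-in
# for something like ESRAM, not a real figure.

def blur_plain(pixels):
    # The algorithm is obvious: average each pixel with its neighbours.
    n = len(pixels)
    return [(pixels[max(i - 1, 0)] + pixels[i] + pixels[min(i + 1, n - 1)]) / 3
            for i in range(n)]

SCRATCH_BYTES = 32 * 1024      # hypothetical fast-memory budget, baked into the code
TILE = SCRATCH_BYTES // 4      # assume 4 bytes per pixel

def blur_tiled(pixels):
    # Same maths, but sliced into tiles (with one-pixel halos) so each chunk
    # fits the scratch buffer; the algorithm is now tangled up with the tiling.
    out = []
    n = len(pixels)
    for start in range(0, n, TILE):
        lo, hi = max(start - 1, 0), min(start + TILE + 1, n)
        tile = pixels[lo:hi]                      # "copy into fast memory"
        blurred = blur_plain(tile)                # work on the resident chunk
        count = min(TILE, n - start)
        out.extend(blurred[start - lo:start - lo + count])
    return out

# Both produce identical results; only the structure differs.
assert blur_tiled(list(range(100000))) == blur_plain(list(range(100000)))
```

Both return the same result, but the tiled one has to change whenever the scratch size, pixel format, or halo width changes, which is roughly what points (b) and (c) are getting at.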
Exactly. And reducing that amount of effort to get acceptable performance for most games seems like an important hardware design goal even for consoles these days -- much more so than it used to be.
Haven't Nintendo had this goal since they fucked up with the N64? GameCube was maybe the most developer friendly system of that gen, Wii was basically a GameCube, Wii U was fairly developer friendly once the tools were finished, and with Switch they chose to work with Nvidia to create a developer environment that may be above and beyond any other console.
Just look at the platform development from PS2 and PS3 to PS4.
Sure, you can see that as another example of the same overall development. I chose PS3 to PS4 since the contrast is particularly stark and the transition is recent.
Yeah, those proprietary mini discs with less space than standard DVDs were real dev friendly!
None of them have been porting friendly. GC lacked the disc space, and the other ones, probably including Switch, lack the power.
I don't see the LPDDR4 memory included in the power draw calculation, or did I miss it?
Edit: I found this table, but it's from 2014, so I don't know if things have evolved since then:
I wouldn't be surprised if Nintendo didn't increase GPU clock speeds at all for docked mode for BotW. It could explain the framerate dropping down to the low 20s in docked mode at times at 900p, while handheld mode is a stable 30 at 720p.
I'm sure Peter Brown at GameSpot said he dropped the res to 720p while docked and still had the framerate drops in the same places.
I mean, it's possible that it could be a bug, and we will indeed have a patch on day one. But knowing how Nintendo treated TP on Wii (it was exactly like the GameCube version minus the controls), it doesn't seem like they took advantage of Switch's hardware at all, because they were rushing it for launch and maybe didn't want to alienate the gamers who were looking forward to the game on the original console (the Wii U). Certainly not in docked mode.
Huh? How would he do that? I doubt Switch has a setting for that (or did I miss something?).
It's probably output resolution. It's still rendering at 900p, I think. You won't get improved framerate on PS4 (or any console) if you change output to 720p.
It's not in the calculation, because I don't know how much power LPDDR4 draws.
The table is certainly an interesting find, but I am not sure how to read it.
K4F6E304HB-MGCH would put us in the "two-channel LPDDR4 at 3200 Mbits/s" column, but how do we know how much energy a bit needs?
I hope it's not the bottom row :>
Was there any mention of the Vdd & Vddq voltages for the table?
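For what it's worth, if the table's numbers are energy per bit (these DRAM power tables are usually in picojoules per bit), turning them into watts is just energy per bit times the sustained bit rate. A rough sketch of that arithmetic, where the 4 pJ/bit figure is purely a placeholder assumption and not a value read off the table:

```python
# Back-of-the-envelope LPDDR4 power from an assumed energy-per-bit figure.
# The 4 pJ/bit value is a placeholder assumption, NOT a number from the table.

ENERGY_PER_BIT_J = 4e-12        # assumed 4 pJ per bit transferred
BUS_WIDTH_BITS = 64             # two 32-bit LPDDR4 channels
TRANSFER_RATE_PER_PIN = 3200e6  # 3200 MT/s at the full memory clock

bit_rate = BUS_WIDTH_BITS * TRANSFER_RATE_PER_PIN   # 204.8 Gbit/s peak
bandwidth_gb_s = bit_rate / 8 / 1e9                 # 25.6 GB/s peak

# Power if the bus were saturated 100% of the time; real utilisation is lower,
# and this ignores background/refresh power entirely.
transfer_power_w = ENERGY_PER_BIT_J * bit_rate
print(f"{bandwidth_gb_s:.1f} GB/s fully utilised -> ~{transfer_power_w:.2f} W")
```

If a figure in that ballpark is right, it would put the memory interface well under a watt even when fully saturated, though the real answer obviously depends on the actual per-bit energy in the table.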
Low-power enhancements and a narrower address bus will reduce the energy required per bit (see the table). Transferring more bits, though, means that the overall power consumption may be higher than an LPDDR3 implementation at the highest speeds of operation. Because LPDDR4 will find a place in the highest-performance, most complex systems, power management becomes even more critical.
To avoid thermal issues, system designers can use strategies such as monitoring the SDRAM's internal die temperature, increasing the refresh rate, and throttling back the SDRAM clock when it is detected that the die is about to overheat. This may only be necessary when the device is used during extremely compute-intensive tasks such as real-time gaming, which draws more power and increases the heat dissipated.
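Purely to make that throttling strategy concrete, here's a minimal sketch of the kind of control loop the quote describes; the temperature thresholds and the lower clock steps are invented for illustration and aren't from the article, the Switch, or any real SDRAM controller:

```python
import time

# Illustrative SDRAM thermal management, loosely following the quoted strategy:
# watch the die temperature, refresh more often when hot, and step the memory
# clock down before the die overheats. Thresholds and steps are illustrative;
# only the top two clocks mirror figures mentioned in this thread.

REFRESH_NORMAL_MS = 64                          # standard refresh interval
REFRESH_HOT_MS = 32                             # refresh twice as often when hot
CLOCK_STEPS_MHZ = [1600, 1331.2, 1065.6, 800]   # progressively slower memory clocks

def manage_sdram(read_die_temp_c, set_refresh_ms, set_clock_mhz):
    step = 0
    while True:
        temp = read_die_temp_c()
        # Hot DRAM cells leak charge faster, so refresh more aggressively.
        set_refresh_ms(REFRESH_HOT_MS if temp > 85 else REFRESH_NORMAL_MS)
        # Throttle the clock as the die nears its limit, recover once it cools.
        if temp > 95 and step < len(CLOCK_STEPS_MHZ) - 1:
            step += 1
        elif temp < 75 and step > 0:
            step -= 1
        set_clock_mhz(CLOCK_STEPS_MHZ[step])
        time.sleep(0.1)                          # poll at 10 Hz
```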
Definitely not, otherwise you'd never get any decent battery life out of Switch even with settings turned right down in less demanding games, as the RAM would constantly be using a third of Switch's total battery power. Plus you've got phones out there that only use 2.5W total under full load with 4GB of LPDDR4. I'm talking about high spec phones with 5-6 inch screens and top end SoCs as well. So I'd think that the RAM is only in the mW power draw range.
It does indeed have a setting to manually adjust resolution.
That said, given the very nature of the Switch's hybrid function, I doubt simply changing the resolution output is the same as the system thinking 'welp, handheld time'. Other settings may still be up that wouldn't be in handheld.
No
Zelda does not have a resolution option. The Switch has a resolution output setting, but that will probably have no effect on the game's internal resolution, which is probably still 900p.
Smartphones might not need the same kind of performance from the memory in their normal use as a gaming device would.
Still, in handheld mode the memory is probably running in a power-saving mode, but that mode might still be more power-hungry than a smartphone's normal use.
It was only an example to put the idea of 4GB of LPDDR4 possibly using 2W into perspective.
My example, BTW (2.5W total system power draw), was a top end phone with 4GB of LPDDR4 running the Manhattan benchmark constantly; while the SoC may throttle, I doubt the RAM will. Like I say, it's just an example to put things into perspective, not any kind of definitive info for comparison. I still believe the RAM in Switch will be in the mW range though.
The hardware was already quite advanced when we started working on it. It made a very solid impression right from the start. When we got the final devkits (which are very close to the retail version) it felt simply perfect to play our game on them. From that day we knew the Switch would win the masses. It was simply so satisfying to use all the control options and to switch seamlessly between console and mobile mode.
It looks to me like the framerate drops when Link nears the fire, then again around environmental mist and a final time when you get a look out at the vast draw distance.
Aren't the first two tied to the GPU and called alpha effects which are pretty expensive?
In a console, isn't 20% more GPU power pretty significant? If BotW is GPU-limited, then I don't think we should discount the possibility that the frame drops are down to the EG clocks being the final clock speeds.
Hopefully it's improved in a patch but have Nintendo ever had a day one patch that improved framerate before?
I dunno if everyone saw this; it was from a Shin'en interview from yesterday on Nintendo Life:
http://www.nintendolife.com/news/2017/02/feature_fast_rmx_-_the_price_modes_and_performance_of_switchs_futuristic_racer
I dunno if this information adds anything to the Foxconn leak or the Eurogamer frequencies, but I found it interesting.
Question, guys. Using BotW as an example, what percentage of the Switch's total graphics potential would you say it's using? Is BotW pretty much maxing out the Switch hardware?
The key addition is a new mode seemingly designed to beef up handheld performance. Developers can opt for a 384MHz GPU clock - a straight 25 per cent uplift in compute power compared to the default 307.2MHz option. Both frequencies are available to developers in what it calls 'normal mode' operation - and to be clear, users will not be able to choose between them. Additionally, adjustments have been made to available memory bandwidth. In our prior story, we revealed that in undocked mode, developers could choose between running the LPDDR4 memory at either 1600MHz or 1331.2MHz. The 1600MHz option is now only available in 'boost mode' - when Switch is docked - while 1600MHz support in mobile mode is deprecated. As before, developers can opt to run handheld modes while in the dock too, and to be clear, the documentation has no new modes for docked performance. On top of that, we should stress that not all games will use the 384MHz GPU mobile mode - game-makers will choose the best fit for their projects, and 307.2MHz remains the default option.
http://www.eurogamer.net/articles/digitalfoundry-2017-new-performance-mode-boosts-handheld-switch-clocks-by-25-per-cent
However, as much as we want the Foxconn clocks to be real, the weight of evidence is stacking up against this aspect of the leak. To maintain meaningful battery life with those clocks, we'd need to be looking at a 16nm FinFET chip and maybe even a new revision of the ARM CPU cores, and the Chinese teardown of the processor confirms that the physical size of the chip is seemingly unchanged from existing 20nm Tegra X1 SoC.
The difference between 16nm and 20nm isn't actually about transistor size, but more about the 3D 'FinFET' transistors on the lower node. A 16nm SoC would be approximately the same size as the existing 20nm Tegra X1, but the difference here is that the teardown reveals a processor with seemingly identical dimensions. Also interesting is that the processor is surrounded by the same surface-mounted arrangement of what are likely to be decoupling capacitors, there to reduce noise on the power lines. The initial conclusion we have is that we are looking at a more lightly modified X1, still on the 20nm process, which ties in more closely with the clocks we reported - and indeed every non-Foxconn spec leak seen to date.
Upclock in portable mode (new SDK) - 384 MHz GPU and only one mode in memory clocks - 1331.2 MHz, by Digital Foundry.
So new portable mode: 384 x 2 x 256 = 196.6 GFLOPS.
So officially >Wii U in pure flops.
That's pretty big news (even if still a rumour). Should we make a thread for this?
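For anyone wanting to sanity-check those figures, the arithmetic is just shader cores times two FLOPs per fused multiply-add times clock speed. A quick sketch; the 768MHz docked clock is the figure Eurogamer previously reported, and the ~176 GFLOPS Wii U number mentioned afterwards is only the commonly cited estimate, not something confirmed here:

```python
# FP32 throughput = shader cores x 2 FLOPs per FMA x clock speed.
CORES = 256                          # CUDA cores in the Tegra X1 GPU

def gflops(clock_mhz, cores=CORES):
    return cores * 2 * clock_mhz / 1000.0

default_portable = gflops(307.2)     # ~157.3 GFLOPS, default handheld clock
boosted_portable = gflops(384.0)     # ~196.6 GFLOPS, new optional handheld clock
docked           = gflops(768.0)     # ~393.2 GFLOPS, previously reported docked clock

uplift = boosted_portable / default_portable - 1    # the "straight 25 per cent"
print(f"307.2 MHz -> {default_portable:.1f} GFLOPS")
print(f"384.0 MHz -> {boosted_portable:.1f} GFLOPS (+{uplift:.0%})")
print(f"768.0 MHz -> {docked:.1f} GFLOPS")
```

On those numbers, only the boosted handheld mode edges past the ~176 GFLOPS usually quoted for the Wii U GPU, which is presumably what ">Wii U in pure flops" refers to.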
I like their theory that Nintendo decided Zelda essentially needed this boost, that seems likely.
But now the docked mode seems to lag behind for the straight 720p to 1080p conversion. I would have expected this increase to be combined with an increase in docked speed as well.
Which could explain why Zelda is unstable at 900p.
Especially if they have upped the landscape effects/draw distance over portable mode (which makes sense to me, as it's a smaller screen).
Might be, the big screen definitely needs higher settings for those effects. I wonder if it is enough to explain the instability of the frame rate, though, since the change isn't that drastic, and they aren't pushing it at 1080p like Mario Kart, but rather at 900p.
Also, what about a title like Mario Kart? Will they just underutilise the handheld mode to keep the 1080p docked mode stable? The new clock speed is a suggested spec, so that would be a reasonable assumption, right?
Hmm. I still think Nintendo has got a customized 16nm Tegra in the final retail unit. Otherwise it really doesn't make much sense and isn't in line with what Nintendo wants to achieve with this device.
This device combines handheld and console. A processor based on new 16nm technology allows for higher clocks, and therefore more performance, than 20nm at the same power consumption.
For the future of the device and its concept it's absolutely necessary to have a Tegra based on the 16nm process, because this is the technology that further reduces the gap between home console and mobile device. This applies to a 128-bit memory bus as well; a 64-bit bus and 25 GB/s of memory bandwidth wouldn't be enough for a home console these days (remember, Nintendo is marketing it as a home console on the go), even with the magic sauce some people here are dreaming of.
If it really is still based on the old 20nm process, the concept of the Switch is not utilized to its full potential, and I can't help but feel a little bit bad about paying 320 for older hardware.
This does not mean that the Switch isn't an amazing handheld or that it's a waste of money, but Nintendo and Nvidia could have done more to fulfill the concept of building a handheld console hybrid if the Tegra really is based on the 20nm process.
So Eurogamer was basically explaining the teardown from last week, plus they got some info from a reliable source that there is a 25% performance boost for the handheld GPU mode.
So, err... we have an actual teardown of a retail Switch, right?