Keep in mind that what Thraktor posted was the result of a single test, namely Manhattan offscreen 1080p, where the test was set up so that the TX1's performance would match the A8X's, hence the supposed halving of the GPU clock (emphasis on supposed; the clock was never quoted in the article). We don't know whether that is the maximum TDP of the GPU at that supposed clock, unless we assume Manhattan 1080p to be the ultimate GPU load. More generally, the goal of the test was to show Maxwell's efficiency superiority over IMG's GX6850, which is a somewhat odd claim to make: unless Apple published the exact pinout of the A8X, NV's engineers could only guess which rails they were measuring on the iPad board (notice the funky fluctuations in the supposed GX6850 measurements, compared to the flattish band for NV's own part on the exact same workload?).

In contrast, what NV could have demonstrated beyond any real doubt is the doubling of GPU efficiency from Kepler to Maxwell - something they merely claimed on a slide, and something a rudimentary bulk-SoC check does not confirm for fp32:
Code:
# Presumably: peak fp32 GFLOPS / bulk-SoC watts
# Tegra K1 (Kepler): ~365 GFLOPS at ~11 W
$ echo "scale=4; (365 / 11)" | bc
33.1818
# Tegra X1 (Maxwell): ~512 GFLOPS at ~15 W
$ echo "scale=4; (512 / 15)" | bc
34.1333
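For what it's worth, those peak fp32 figures fall straight out of the shader counts and nominal clocks (CUDA cores × 2 FLOPs/cycle for FMA × GHz). A quick sanity check, assuming the commonly quoted ~950 MHz for TK1 and ~1 GHz for TX1:

Code:
# Peak fp32 GFLOPS = CUDA cores * 2 FLOPs/cycle (FMA) * clock in GHz
$ echo "scale=4; 192 * 2 * 0.950" | bc   # TK1: 192 Kepler cores @ ~950 MHz
364.800
$ echo "scale=4; 256 * 2 * 1.000" | bc   # TX1: 256 Maxwell cores @ ~1000 MHz
512.000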