
DF: Nintendo NX Powered By Nvidia Tegra! Initial Spec Analysis


Schnozberry

Member
Right, but if Nintendo is using 16nmFF+, shouldn't they have told devs by now that the final chip will be Pascal-based? (There's no point in putting Maxwell on 16FF+, since that's already what Pascal basically is.) I want to believe as well, but with Nintendo as of late, dev kits usually get weaker as launch nears, not more powerful. Also, shouldn't Nvidia be able to send out Parker chips by now? It's probably just gonna be the X1 with a wider bus or some eSRAM.

We have no idea what Nintendo has or hasn't said to developers about performance targets or final hardware. Eurogamer didn't have any of that information, and what they know about the final design seems to have been inferred by their source rather than confirmed.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
But can it be supported by Tegra X1's IP? If by some miracle Nintendo is using a Parker-based chip they might be able to work some magic, but I feel that Nintendo would have told devs that they were if that were the plan.
You keep thinking in the same confines. TX1 is just a readily-available chip with desirable performance characteristics for the NX. No TX1 shortcomings need to automatically translate to TN1. Mainly because ARM Ltd are in the business of providing modular IP for exactly this kind of scenario you're concerned about: http://www.arm.com/products/system-ip/interconnect/corelink-cci-family/index.php

Also, I don't believe that rumor from that forum post at all. Who ever heard of predicting leaks months in advance? And leaking price information? And being a developer who knows about a chip that no other dev seems to know about yet, when Eurogamer/DF have heard that the X1 is all devs have been told about? Seriously, I don't understand why anyone is taking it seriously. It's such an obvious hoax that it's not even funny.
I'm using TN1 just as a shortcut for "that Tegra NV made for Nintendo".
 

Durante

Member
A quad Cortex-A9 @ ~444MHz and a quad SGX54x was "as fast as it conceivably could have been" in December '11? Let's see...
[...]
Seems to me Sony were just using a top-bracket, industry-standard design.
Shall we make a bet which one is closer to top-end tablet performance (in portable mode of course) at the respective time of its release? I believe that Nintendo is still, to some extent, Nintendo.

Historically speaking, NVIDIA's FLOPS have been more efficient than AMD's FLOPS (the Wii U's GPU is an AMD chip).
Primarily because AMD's DX9/11/OpenGL drivers suck.
 

dr_rus

Member
The issue is that people are comparing half-precision to single-precision and falling for a marketing trick. And none of it really matters in gaming either way!

FP16 is perfectly usable for gaming, though not universally. It also provides a lot of power savings compared to FP32, which is good for mobile platforms.

So it's worth mentioning that the TX1 has a 2x FP16 rate, as the chances that it will be widely used on the NX are pretty high.
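
To make the "usable but not universal" point concrete, here's a tiny sketch (using numpy's float16 as a stand-in; nothing NX-specific): FP16's 10-bit mantissa is plenty for colour/lighting math but too coarse for things like large world-space coordinates.

```python
import numpy as np

# Shading-style math: small values, FP16 handles this exactly.
color = np.float16(0.5) * np.float16(0.25)
print(color)  # 0.125

# Large-magnitude math: above 2048, FP16 can only step in increments
# of 2, so the +1 is silently lost. This is why FP16 isn't universal.
pos = np.float16(2048.0) + np.float16(1.0)
print(pos)    # 2048.0
```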
 

Schnozberry

Member
To my understanding, this is a hard limitation of the chip's hardware. Nintendo will most likely use a custom chip in the end, but I'm not sure if they can change that specific limitation without heavy, costly modifications. I'm also not sure that they have any actual incentive to do so.

The incentive would be thread flexibility, so your low-power cores could handle OS tasks while your high-power cores handle gaming tasks. As is, with the TX1 you have a scheduler that decides which cluster a task should run on, but it has to disable the high-power cluster to use the low-power one and vice versa. Some of the more complex deca-core designs use three clusters to get around this limitation.
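
A toy model of the difference, if it helps (illustrative only; the names and the 0.5 threshold are made up, not actual TX1 governor behaviour):

```python
# TX1-style cluster migration: only one cluster is powered at a time,
# so OS work and game work share whichever cluster the governor picked.
def cluster_migration(load):
    active = "A57" if load > 0.5 else "A53"
    return {active: ["game", "os"]}

# The flexible alternative (HMP / global task scheduling): both clusters
# are live at once, so OS tasks can be pinned to the low-power cores.
def hmp(load):
    return {"A57": ["game"], "A53": ["os"]}

print(cluster_migration(0.9))  # {'A57': ['game', 'os']}
print(hmp(0.9))                # {'A57': ['game'], 'A53': ['os']}
```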
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Shall we make a bet which one is closer to top-end tablet performance (in portable mode of course) at the respective time of its release? I believe that Nintendo is still, to some extent, Nintendo.
How about a bet which vendor lied more blatantly about their handheld's performance levels? : )
 

Rodin

Member
Shall we make a bet which one is closer to top-end tablet performance (in portable mode of course) at the respective time of its release? I believe that Nintendo is still, to some extent, Nintendo.
Unless battery life is a major issue, I'd say Nintendo this time. The mobile hardware market is massively different compared to when the PS Vita was released, and it's in Nvidia's best interest to give Nintendo a good deal, considering the NX is their best bet to push the Tegra line.

Primarily because AMD's DX9/11/OpenGL drivers suck.
Isn't that due to the architecture being extremely different as well? If drivers were the "only" issue, I think AMD would've figured it out at this point. Not that it's a small one, but still.
 

z0m3le

Banned
Sorry if this has been answered already, but how do the flops in Tegra compare to the flops in the Wii U GPU? As we know, the Wii U has fewer than the 360 but makes them work better by being more modern, so are Tegra's flops better still, or about the same?

PS4/XB1's GCN architecture is about 30% faster per flop than the R700 architecture used in Wii U, and Pascal is ~40% faster than Polaris, which is more or less the same as GCN flop for flop... ROUGH estimate, but Pascal should be about 90% faster than Wii U, flop for flop. (X1 is Maxwell-based, and again, there isn't much difference in performance per flop versus Pascal.)

For illustration purposes, X1 is 512 GFLOPs; factoring in that per-flop advantage, this is over 5 times faster than Wii U's 176 GFLOPs (nearly 3 times faster than the 352 GFLOPs it sometimes gets confused to have) and somewhere around 60% of XB1's performance.

If NX is using Pascal (I don't see Nvidia pushing out another Maxwell chip, tbh), they can hit much higher clocks when docked, so even in the same configuration as X1 (256 CUDA cores), if it reaches 1.5GHz or 1.6GHz you'll have 768 to 819 GFLOPs, which with that +40% gives you 1 to 1.15 effective TFLOPs, slightly under XB1. If the Pascal chip is 3 SMs, it would be completely possible to hit 1.228 TFLOPs, which with the +40% gives you ~1.7 TFLOPs, or just under PS4's 1.843 TFLOPs.

In order to reach those clocks, NX will have to have a fan in the body of the device, but that fan can be kept off while on the go, with the GPU downclocked to 1/4th its docked clock, which is perfect for 540p; it's also possible to burn battery and go with 1/2 its docked clock so that it can have terrible battery life and hit 720p. The dock can also offer a blower to help move air through the device, as long as the vents allow it and the passive cooler's fins are designed to be cooled from the side. An active cooler inside a device like this would be very interesting, as it could bridge the gap between devices.
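
For anyone who wants to check the arithmetic, here it is using my own assumptions (CUDA cores x 2 FLOPs/clock x clock, with the x1.4 factor being the estimated per-flop advantage over GCN from above, not a measured fact):

```python
def gflops(cuda_cores, clock_ghz):
    # Each CUDA core does one FMA (2 FLOPs) per clock.
    return cuda_cores * 2 * clock_ghz

print(gflops(256, 1.0))        # 512.0   -- stock X1, 2 SMs
print(gflops(256, 1.5))        # 768.0   -- 2 SMs docked @ 1.5GHz
print(gflops(256, 1.6))        # 819.2   -- 2 SMs docked @ 1.6GHz
print(gflops(384, 1.6))        # ~1228.8 -- hypothetical 3 SMs @ 1.6GHz
print(gflops(384, 1.6) * 1.4)  # ~1720   -- "GCN-equivalent", vs PS4's 1843
```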
 

LordOfChaos

Member

I like how many people I've encountered who outright won't believe that an ARM chip can beat the current consoles because they're x86. You kind of have to look at *which* x86 core architecture they're using; Jaguar was an Atom competitor some generations of Atom back... Some ARM designs like Apple's A9X can even make Core M look over its shoulder. The ISA isn't the barrier to scaling either.



My biggest question at this point is whether the docking unit itself adds any power to the mix, rather than just allowing the mobile SoC to clock higher or enable the large cluster, as their SCD patents would imply. The CPU side would already be competitive as is, but I wonder if Tegra has the ability to tap into a dedicated GPU in the base unit.


Oh god, do I now sound like the "hidden dGPU" XBO folks or the "Wii U overclock is coming" folks?
 
Shall we make a bet which one is closer to top-end tablet performance (in portable mode of course) at the respective time of its release? I believe that Nintendo is still, to some extent, Nintendo.

Primarily because AMD's DX9/11/OpenGL drivers suck.



Then again, unless it's underclocked below 400MHz, the X1 should remain competitive with, say, the iPad Air 2.

Although, I still believe we may end up with an SoC with fewer cores.
 

MuchoMalo

Banned
Whatever the case is, on the off chance that it is Pascal-based, I wonder what the chances are that Nvidia would hint at it when showing off their new Tegra next month.
 
Whatever the case is, on the off chance that it is Pascal-based, I wonder what the chances are that Nvidia would hint at it when showing off their new Tegra next month.



To be fair, Pascal-based doesn't matter; the X1 would be faster than a 1-SM Pascal-based chip at the same clock.
What matters is knowing the core count and the clock speed.
 
Primarily because AMD's DX9/11/OpenGL drivers suck.

That doesn't magically make Nintendo's API for the Wii U not suck by comparison. The difference in efficiency has more to do with NVIDIA's proactive optimizations and AMD's lack thereof, which is not something the Wii U's GPU automatically inherited just because it's a console GPU.

How about a bet which vendor lied more blatantly about their handheld's performance levels? : )

SAVAGE


Isn't that due to the architecture being extremely different as well? If drivers were the "only" issue, I think AMD would've figured it out at this point. Not that it's a small one, but still.

It's a mixture of both, but it ultimately boils down to NVIDIA being better at optimizing for their own hardware than AMD is at optimizing for theirs.
 

MuchoMalo

Banned
To be fair, Pascal-based doesn't matter; the X1 would be faster than a 1-SM Pascal-based chip at the same clock.
What matters is knowing the core count and the clock speed.

But it would use more power, which means that the performance potential would be drastically reduced. I also don't understand why you believe that using Pascal would limit them to 1 SM. That makes no sense whatsoever.
 

z0m3le

Banned
To be fair, Pascal-based doesn't matter; the X1 would be faster than a 1-SM Pascal-based chip at the same clock.
What matters is knowing the core count and the clock speed.

Well, the X1's GPU is ~1.5 watts @ 500MHz IIRC, with 256 CUDA cores across 2 SMs (so others can follow along). Moving to Pascal and 16nm should bring a ~20% reduction in power consumption, so ~1.2 watts. I assume they'll go with 400MHz so that they can clock 4 times higher when docked, giving you ~1 watt for 2 SMs, leaving room for a 3-SM configuration.
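
Spelled out (all of these inputs are my guesses, and the linear clock scaling is naive, since real dynamic power scales with voltage squared times frequency):

```python
x1_gpu_watts = 1.5                       # rough 2-SM Maxwell GPU power @ 500MHz
pascal_16nm  = x1_gpu_watts * 0.8        # assumed ~20% saving from 16nm Pascal
portable_400 = pascal_16nm * (400 / 500) # naive linear scaling down to 400MHz

print(pascal_16nm)   # ~1.2 W for 2 SMs @ 500MHz
print(portable_400)  # ~0.96 W -- the "~1 watt" figure above
```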
 
PS4/XB1's GCN architecture is about 30% faster per flop than the R700 architecture used in Wii U, and Pascal is ~40% faster than Polaris, which is more or less the same as GCN flop for flop... ROUGH estimate, but Pascal should be about 90% faster than Wii U, flop for flop. (X1 is Maxwell-based, and again, there isn't much difference in performance per flop versus Pascal.)

For illustration purposes, X1 is 512 GFLOPs; factoring in that per-flop advantage, this is over 5 times faster than Wii U's 176 GFLOPs (nearly 3 times faster than the 352 GFLOPs it sometimes gets confused to have) and somewhere around 60% of XB1's performance.

If NX is using Pascal (I don't see Nvidia pushing out another Maxwell chip, tbh), they can hit much higher clocks when docked, so even in the same configuration as X1 (256 CUDA cores), if it reaches 1.5GHz or 1.6GHz you'll have 768 to 819 GFLOPs, which with that +40% gives you 1 to 1.15 effective TFLOPs, slightly under XB1. If the Pascal chip is 3 SMs, it would be completely possible to hit 1.228 TFLOPs, which with the +40% gives you ~1.7 TFLOPs, or just under PS4's 1.843 TFLOPs.

In order to reach those clocks, NX will have to have a fan in the body of the device, but that fan can be kept off while on the go, with the GPU downclocked to 1/4th its docked clock, which is perfect for 540p; it's also possible to burn battery and go with 1/2 its docked clock so that it can have terrible battery life and hit 720p. The dock can also offer a blower to help move air through the device, as long as the vents allow it and the passive cooler's fins are designed to be cooled from the side. An active cooler inside a device like this would be very interesting, as it could bridge the gap between devices.

Wow, so never mind a potential underclock; they'd pretty much have to not even switch it on for it not to outperform the Wii U.
 

Lonely1

Unconfirmed Member
Shall we make a bet which one is closer to top-end tablet performance (in portable mode of course) at the respective time of its release? I believe that Nintendo is still, to some extent, Nintendo.
Not fair, PowerVR > anyone else in the mobile space. The NX should compare very well with contemporaneous Android hardware, though.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
If you're going to dedicate the die space to A72s you might as well leave them enabled in handheld mode, even if they're running at 500-600MHz or so. I would also be surprised if there's a big difference in CPU performance between handheld and docked mode (if any). Games will have to work in both modes, and I can't imagine developers wanting to add a load of CPU-intensive functionality for docked mode only to have to rip it out while in handheld mode (without significantly altering how the game plays). It would seem a lot simpler to me to leave CPU performance the same (or at least close) in both modes, and use any increase in TDP in docked mode purely for bumped GPU clocks.
Well, you have to draw the line of usefulness somewhere. Surely you can keep those A72s at some 500MHz undocked, particularly if you're conservative with the A53s as well, but why not firmly put those A53s into 2GHz+ territory instead? I know you're really into HH/console CPU parity, but as nice as that sounds, I'm still not seeing it at our current levels of tech.

Not fair, PowerVR > anyone else in the mobile space. The NX should compare very well with contemporaneous Android hardware, though.
I'll have to disagree with your ImgTec assessment as I personally think all four big players in the mobile space have their unique strengths and weaknesses.

ImgTec make very well-balanced mobile IP, mainly because they've been doing it for the longest.
Qcomm are their very close competitors, thanks largely to ATI know-how and engineering talent, who might be the second-oldest mobile design-house in the industry.
ARM have the benefit of 'combo discount' - you can always get their Malis as a side order to anything ARM.
NV are the most power-envelope-pushing of the entire bunch ('moar power, screw TDP'), and that has been biting them in the ass for generations, until now. It's really a small miracle their philosophy actually found such a good match in the face of current Nintendo.

Interesting times.
 

z0m3le

Banned
Wow, so never mind a potential underclock; they'd pretty much have to not even switch it on for it not to outperform the Wii U.

If Nintendo wanted to, they could offer an alternative dock (AKA a console) that uses the same chip and, when docked, use an SLI bridge to get very close to NEO. The people not taking this rumor seriously for performance reasons just don't understand how much better Pascal is than anything AMD has to offer.

Well, you have to draw the line of usefulness somewhere. Surely you can keep those A72s at some 500MHz undocked, particularly if you're conservative with the A53s as well, but why not firmly put those A53s into 2GHz+ territory instead? I know you're really into HH/console CPU parity, but as nice as that sounds, I'm still not seeing it at our current levels of tech.

You'd want the CPU to be the same; there's no real reason the controller batteries couldn't help give the device the extra power it needs when on the go.
 

Ganondolf

Member
I think it will be interesting to see how thick the tablet part is. If it's like the Amazon Fire HD 6, they could have a fan which only activates when docked. If they went with a slim design (I don't think they will), they would need skilled designers to make a slimline fan (like the Surface Pro 3/4) or go without a fan, which would limit the clock speed.
 

Schnozberry

Member
Well, you have to draw the line of usefulness somewhere. Surely you can keep those A72s at some 500MHz undocked, particularly if you're conservative with the A53s as well, but why not firmly put those A53s into 2GHz+ territory instead? I know you're really into HH/console CPU parity, but as nice as that sounds, I'm still not seeing it at our current levels of tech.

A53 cores are much smaller as well, so you could fit more of them on the same SoC. Multiple clusters of A53 cores at different clock speeds could be useful for achieving decent performance while meeting power requirements.
 
But it would use more power, which means that the performance potential would be drastically reduced. I also don't understand why you believe that using Pascal would limit them to 1 SM. That makes no sense whatsoever.

It's not because of Pascal but because of Nintendo. And yes, that is a big enough reason. People should be wary with speculation when they could end up using the same GPU cores but not the same core count.
Let's remember how the Wii U's GPU started at 800 ALUs, then 480, then 320... to finally be 160.
 

dr_rus

Member
That doesn't magically make Nintendo's API for Wii U not suck by comparison. The difference in efficiency has more to do with NVIDIA'S proactive optimizations and AMD's lack thereof, which is not something that the Wii U's GPU just automatically inherited just because it's a console GPU.

~75% of the flops efficiency difference between AMD and NV should be attributed to AMD's crap DX11/OGL drivers.
~25% of the flops efficiency difference between AMD and NV should be attributed to GCN's architectural choices.
 
Fair questions right now, at least for me; excuse my ignorance on hardware stuff:

What advantage does the dock bring, and what is inside? This relates to the SCD thing.
How many SKUs? Several handheld SKUs that could connect to the same dock? I remember Iwata or someone else mentioning several SKUs.
Could the dock support more than one handheld?
Could the dock have several SKUs?
 

MuchoMalo

Banned
It's not because of Pascal but because of Nintendo. And yes, that is a big enough reason. People should be wary with speculation when they could end up using the same GPU cores but not the same core count.
Let's remember how the Wii U's GPU started at 800 ALUs, then 480, then 320... to finally be 160.

If Nintendo is doing that, they'd do it with or without Pascal so I don't know what your point is here. Pascal would still be the better choice. Also, 800 was based on false information, 480 was based on nothing whatsoever (I don't even know where you got that from), and 320 was a potentially* incorrect understanding of the die after launch. Also, there's no way that the dev kit is over 2x as fast as the target spec this close to launch.


*We still don't know for sure if it was 320 or 160, though obviously the more pessimistic one will be taken as fact. The Darksiders remaster definitely gives pause though.
 

LordOfChaos

Member
The X1 running at its standard clock rate is an enormous step up from the Wii U; just read some of the comparison posts in this thread. Also, the X1's CPU is a huge step up from even the Xbox One's (correct me if I'm wrong there), which will be the baseline for all AAA multiplats this gen.

Per core, but the X1 only has 4 high-power A57 cores, paired with significantly lower-performance and lower-power-draw A53 cores.

It remains to be seen, if it's indeed an X1, how the core usage will go. A53s for portable, A57s when plugged in? Both all the time? Generally a tablet's governor doesn't use the big and little clusters at the same time, though newer designs are capable of it.

A console running on a stock Tegra X1 will be very comparable to XB1/PS4 power when considering Nvidia vs AMD flops, the better CPU and the more modern feature sets. How that is used in the NX "portable" and "docked" modes remains to be seen, but the X1 is nothing to sneeze at.


I wouldn't go that far, at least if we're talking GPU as well. CPU, maybe, if it uses both 4-core clusters together all the time. AMD's architectures do need more flops than Nvidia's for the same performance, yes, but not to the point of making the 500 GFLOP (SP) X1 comparable to the 1.3 TFLOP XBO. Let alone bandwidth, etc. The Maxwell X1 GPU's features aren't going to make it punch so massively above the GCN architectures in the PS4/XBO that it punches at over double its weight.

I'd call it three Wii Us duct taped together.
 
~75% of the flops efficiency difference between AMD and NV should be attributed to AMD's crap DX11/OGL drivers.
~25% of the flops efficiency difference between AMD and NV should be attributed to GCN's architectural choices.

OK? That doesn't refute anything that I said in that post.

Thanks for the arbitrary percentage estimates, btw.
 

z0m3le

Banned
~75% of the flops efficiency difference between AMD and NV should be attributed to AMD's crap DX11/OGL drivers.
~25% of the flops efficiency difference between AMD and NV should be attributed to GCN's architectural choices.

What matters is the end result: a GTX 1060 with 4.3 TFLOPs beats out an RX 480 with 5.8 TFLOPs.
 
It's not because of Pascal but because of Nintendo. And yes, that is a big enough reason. People should be wary with speculation when they could end up using the same GPU cores but not the same core count.
Let's remember how the Wii U's GPU started at 800 ALUs, then 480, then 320... to finally be 160.
Wii U's GPU "downgrades" were due to people's speculation instead of reality.

On the flip side, it does seem that even Nintendo wasn't happy with the Wii U's CPU issues, and the N3DS got an unexpectedly massive boost of CPU power. I think Nintendo's approach to hardware has shifted a bit, but we'll see.
 

00ich

Member
Per core, but the X1 only has 4 high-power A57 cores, paired with significantly lower-performance and lower-power-draw A53 cores.

It remains to be seen, if it's indeed an X1, how the core usage will go. A53s for portable, A57s when plugged in? Both all the time? Generally a tablet's governor doesn't use the big and little clusters at the same time, though newer designs are capable of it.
A53s for background system tasks and standby, leaving the four A57 cores for games?
The comparison would be 6 (+1/2?) Jaguar cores vs. four A57s, then.
 
I wouldn't go that far, at least if we're talking GPU as well. CPU, maybe, if it uses both 4-core clusters together all the time. AMD's architectures do need more flops than Nvidia's for the same performance, yes, but not to the point of making the 500 GFLOP (SP) X1 comparable to the 1.3 TFLOP XBO. Let alone bandwidth, etc. The Maxwell X1 GPU's features aren't going to make it punch so massively above the GCN architectures in the PS4/XBO that it punches at over double its weight.

I'd call it three Wii Us duct taped together.

I meant "comparable" as in overall, not just GPU wise. Also not including RAM, as we really have no info on that and I was really just responding to a poster who assumed a TX1 means barely above Wii U in power. Also when I say comparable I don't mean it's the same level, just at a somewhat similar overall level of performance, MUCH moreso than Wii U to XBone.

Again, it obviously depends on how it's used in the NX, and how it is customized (Pascal/Maxwell, cores, etc.) if we're comparing NX to other consoles, but I really only was discussing how the TX1 is a very modern and capable chip on its own.
 

dr_rus

Member
OK? That doesn't refute anything that I said in that post.

Thanks for the arbitrary percentage estimates, btw.

I should've added that ~0% of the flops efficiency difference should be attributed to "NV's proactive optimizations", it seems, which is what you were talking about.

It's also highly unlikely that the NX API will suck worse than AMD's DX11/OGL drivers - that's pretty much impossible to achieve on a fixed platform with NV's h/w.

The percentages are based on whatever benchmarks we have, including those where AMD's driver is not an issue.

What matters is the end result: a GTX 1060 with 4.3 TFLOPs beats out an RX 480 with 5.8 TFLOPs.

If you look at a pure h/w comparison, taking out AMD's s/w issues, then it doesn't beat the 480 - it's approximately on the same level - meaning that it's ~25% more efficient per flop. This actually links pretty well with the complexity difference between the GP106 and Polaris 10 chips (which is ~30%), meaning that a Pascal GPU with the same complexity and peak rated flops as P10 would probably outperform it by some 25-30% solely because of higher architectural efficiency.
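
The raw numbers behind that, for reference (card TFLOPs as cited above; the split between driver and architecture is my estimate, not something this arithmetic can prove):

```python
gtx_1060 = 4.3  # peak TFLOPs
rx_480   = 5.8  # peak TFLOPs

# If the two cards deliver similar results, the 480 needs ~35% more
# paper flops; netting out the driver portion of that leaves the
# ~25-30% architectural gap claimed above.
print(rx_480 / gtx_1060)  # ~1.35
```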
 

MuchoMalo

Banned
I really wish Nvidia hadn't pulled that misleading "Tegra X1 is 1 TFLOPS" bullshit, so that the same thing wouldn't have to be explained a million times.
 
Wii U's GPU "downgrades" were due to people's speculation instead of reality.

On the flip side, it does seem that even Nintendo wasn't happy with the Wii U's CPU issues, and the N3DS got an unexpectedly massive boost of CPU power. I think Nintendo's approach to hardware has shifted a bit, but we'll see.



The Wii U's GPU downgrade started the same way as the NX: the Wii U GPU being based around RV770 (HD 4870), 1 TFLOP talk, a better GPU than PS360, a midstep between that generation and the next.

In the end, we all know how it turned out.
 

LordOfChaos

Member
I meant "comparable" as in overall, not just GPU wise. Also not including RAM, as we really have no info on that and I was really just responding to a poster who assumed a TX1 means barely above Wii U in power. Also when I say comparable I don't mean it's the same level, just at a somewhat similar overall level of performance, MUCH moreso than Wii U to XBone.

Again, it obviously depends on how it's used in the NX, and how it is customized (Pascal/Maxwell, cores, etc.) if we're comparing NX to other consoles, but I really only was discussing how the TX1 is a very modern and capable chip on its own.

That's true; funnily enough, the mobile chip would be closer to its contemporary consoles than anything since the GameCube for Nintendo.
 

MuchoMalo

Banned
The Wii U's GPU downgrade started the same way as the NX: the Wii U GPU being based around RV770 (HD 4870), 1 TFLOP talk, a better GPU than PS360, a midstep between that generation and the next.

In the end, we all know how it turned out.

The RV770 rumor was a hoax. It was also well over a year and a half before launch. I understand how you feel burned and want to keep your expectations at the absolute minimum possible, but the dev kit being 2-5x faster than the final machine this close to launch is crazy, especially if they bothered overclocking.
 
I should've added that ~0% of the flops efficiency difference should be attributed to "NV's proactive optimizations", it seems, which is what you were talking about.

It's also highly unlikely that the NX API will suck worse than AMD's DX11/OGL drivers - that's pretty much impossible to achieve on a fixed platform with NV's h/w.

The percentages are based on whatever benchmarks we have, including those where AMD's driver is not an issue.

What a ridiculous assertion. Had it not been for NVIDIA's superior performance, we would have no frame of reference for AMD's drivers being 'crap' by comparison. If NVIDIA were no better at optimizing for their own hardware, NVIDIA's drivers would show similar levels of 'crappiness'.

And I was talking about the Wii U's API, not the NX's. And I wasn't saying it was much worse than AMD's drivers, just that it isn't automatically better just because it's on fixed hardware.
 

Lonely1

Unconfirmed Member
ImgTec make very well-balanced mobile IP, mainly because they've been doing it for the longest.
Qcomm are their very close competitors, thanks largely to ATI know-how and engineering talent, who might be the second-oldest mobile design-house in the industry.
ARM have the benefit of 'combo discount' - you can always get their Malis as a side order to anything ARM.
NV are the most power-envelope pushing of the entire bunch ('moar power, screw TDP'), and that has been biting them in the ass for generations, until now. It's really a small miracle their philosophy actually found such a good match in the face of current nintendo.

Interesting times.

Well, I'm basing my assessment on the fact that the Tegra K1 was the top-performing Android SoC by a fair margin until the Adreno 530 (Snapdragon 820), while being challenged by contemporary Apple phones and tablets. But maybe that's an Apple thing more than a PowerVR one? I expect the NX to be a step or two below the iPhone 7.
 
Well, I'm basing my assessment on the fact that the Tegra K1 was the top-performing Android SoC by a fair margin until the Adreno 530 (Snapdragon 820), while being challenged by contemporary Apple phones and tablets. But maybe that's an Apple thing more than a PowerVR one? I expect the NX to be a step or two below the iPhone 7.

What are the projected specs of the iPhone 7? Specifically, how many FLOPS can it push?
 
The RV770 rumor was a hoax. It was also well over a year and a half before launch. I understand how you feel burned and want to keep your expectations at the absolute minimum possible, but the dev kit being 2-5x faster than the final machine this close to launch is crazy, especially if they bothered overclocking.



In any case, we have yet to be sure that the X1 isn't just Eurogamer speculation.
 

DESTROYA

Member
Fair questions right now, at least for me; excuse my ignorance on hardware stuff:

What advantage does the dock bring, and what is inside? This relates to the SCD thing.
How many SKUs? Several handheld SKUs that could connect to the same dock? I remember Iwata or someone else mentioning several SKUs.
Could the dock support more than one handheld?
Could the dock have several SKUs?
No one knows anything for now; people are just in fantasy mode, guessing/wishing at what specs it has.
 
But not entirely. If it were, then we wouldn't see the 1.39 TFLOPs 750 Ti perform so favourably against the 1.84 TFLOPs PS4 (which should be mostly divorced from AMD's driver issues).

Agreed.

Also, I'd like to see games coded specifically for the 750 Ti. I'd wager it would perform even more favorably against the PS4's GPU.
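
For what it's worth, plugging dr_rus's ~25-30% per-flop estimate into these two cards lines up neatly (a rough cross-check, not a benchmark):

```python
gtx_750ti = 1.39  # peak TFLOPs
ps4       = 1.84  # peak TFLOPs

# Scale the 750 Ti's paper flops by the claimed ~30% per-flop edge:
print(gtx_750ti * 1.3)  # ~1.81 -- right around the PS4's 1.84 TFLOPs
```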
 