> 32GB version might be saved for the prosumer Titan, but 16GB is totally realistic for the upper midrange/lower high-end cards (x70/x80 equivalents). HBM2 will make this entirely feasible.

Doubt these will be for the GTX cards, btw. Maybe the next Titan. HPC and professional cards at first, most likely. I mean, just thinking about the 980's Pascal equivalent shipping with 16/32GB is insane lol
17 billion GPU transistors, or 17 billion including the RAM (since everything is essentially on the same package now)?
17 billion GPU transistors seems like a big jump, although with the process node shrink the overall die area could be similar to, or even smaller than, a Titan X's?
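For what it's worth, a rough sanity check on that die-area question. GM200's published figures (~8 billion transistors, 601 mm² on 28nm) are real, but the ~2x density gain assumed for the 28nm → 16nm FinFET jump is just a ballpark guess, not a confirmed number:

```python
# Back-of-envelope die-area estimate for a rumored 17B-transistor Pascal chip.
# GM200 (Titan X) figures are from public specs; the 2x density gain for
# 28nm -> 16nm FinFET is an ASSUMPTION, not a confirmed foundry figure.

titan_x_transistors = 8.0e9   # GM200: ~8 billion transistors
titan_x_area_mm2 = 601.0      # GM200 die size on TSMC 28nm

pascal_transistors = 17.0e9   # rumored figure from the article
density_gain = 2.0            # assumed 28nm -> 16nm FinFET density improvement

# Transistors per mm^2 on 28nm, scaled up by the assumed density gain
density_28nm = titan_x_transistors / titan_x_area_mm2
estimated_area = pascal_transistors / (density_28nm * density_gain)

print(f"Estimated Pascal die area: {estimated_area:.0f} mm^2")
# -> ~639 mm^2: roughly Titan X-sized, so "similar area" is plausible
```

So under that assumed 2x density gain, a 17B-transistor die lands in the same general size class as GM200, which is what makes the rumored figure at least believable.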
> No, that is not what is *always said*.

That is what is always said, and yet each time it's about a 20–30% incremental increase.
32GB version might be saved for the prosumer Titan, but 16GB is totally realistic for the upper midrange/lower high-end cards (x70/x80 equivalents). HBM2 will make this entirely feasible.
This is beautiful. Next-gen consoles shouldn't be gimped or half-stepped like the current ones are. At the very least, they should be able to do 4K @ 60fps by 2019.
Are these the ones that require new motherboards due to the new architecture, making my recent Z97 + i5-4690K purchase feel a bit silly?
Yes. NVLink requires a completely different physical connector, and as such I'm not certain how it will comply with ATX specs. Not to mention it might only be usable for GPU-GPU communication.
2016? I remember when we were waiting for Maxwell back in 2013. Get ready for one or even two more iterations of Maxwell before Pascal hits.
Pascal is meant to be NVIDIA's next high-performance, compute-focused graphics architecture, which will be found in all market segments, including GeForce, Quadro and even Tesla.
That's probably for the next Titan. I expect 8 and 16 GB configurations for the 1070/1080.
Hm, I assume these figures are for a Titan-level card? The jump sounds pretty insane.
Building a mid-level PC this year to replace my current 6-year-old machine; might have to build it a bit cheaper and look to replace it in a couple of years instead of holding out for 5+ years again.
From the article:
The 32GB configuration described in the article may be aimed at HPC compute. I'd certainly expect it to be very expensive at first.
> Might still depend on cost, though. Just because you can stack a shit ton of memory doesn't mean it will be cheap to do so.

It's just soooooo crazy. Going from 4/6 to 16.
> Also a very realistic scenario.

Assuming for the moment that 32GB cards aren't out of the question for the GeForce line, I'd guess 32GB for the Titan Whatever, 16GB for its slightly inferior brother, and 8GB for the 970/980 equivalents.
What incentive do Nvidia have to *not* drip feed Pascal out in incremental small jumps?
Is Nvidia releasing any more GPUs this year?
None
> yes it is

No, that is not what is *always said*.
Sounds neat! Depending on how big a jump it is in terms of game performance, I might consider upgrading. Otherwise I'm waiting for Volta.
Yeah, that sounds reasonable to me too. At 8GB of HBM2, an x70 card would be fine for several years of 1080p/1440p at 60FPS+, even if texture bloat at ultra-high settings continues. 16GB would carry a price premium and be for 4K gaming at the very least.

Since pricing will still be an important marketing lever, I think an 8GB x70 and a 16GB x80 would make a lot of sense. Unlike the 970 and 980, which differed little in performance yet disproportionately in price, they could get away with it if there were a huge vRAM gap. It may be completely irrelevant to gaming performance at that point, but 'DOUBLE THE VRAM' would still be a huge selling point to justify a much higher price tag.
So the Quadro line?
Wait... so will one of these bad boys be able to do 4K 60fps?
> What incentive do Nvidia have to *not* drip feed Pascal out in incremental small jumps?

In-development improvements in architecture, HBM, and possibly even the next die shrink.
> yes it is

Oh well, since you said so, what room do I have to disagree? Great point. Exciting discussion.
Damn, 2018 is a long time to wait.