Please provide a quote from "PC GAF" stating that. Don't worry, I'll wait. (By the way, why is generalizing "PC GAF" not treated in the same way as "Sony GAF"?)Who does this guy think he is? PC GAF says they're exactly the same, and will always yield the same results.
I am skeptical of the 2x performance increase they think can be gained, based pretty much on what developers have gotten so far. If developers are still getting used to the console APIs and over time get closer to utilising them fully, wouldn't that just mean the console's graphics processing power will be fully utilised? I'm not understanding why they (and Carmack) think it's going to be a 2x performance increase.
Please provide a quote from "PC GAF" stating that. Don't worry, I'll wait. (By the way, why is generalizing "PC GAF" not treated in the same way as "Sony GAF"?)
I still think people should qualify such a statement a lot more before they present it to the masses. Reduce your CPU overhead twofold? Sure, at least in some scenarios (and you better, considering the anemic CPUs in consoles). But you won't get much better performance shading pixels or transforming vertices on consoles than on equivalent PC hardware -- and many high-end multiplatform titles confirm this.
I doubt that in particular had anything to do with the Diablo situation - the "DX12-style" API is probably essentially a prototype of DX12 itself (with whatever GCN-specific custom tweaks they've added that won't generalize over to Nvidia/Intel hardware), and taking advantage of that would likely take a near-complete rewrite of the renderer.
During the DX12 announcement presentation, they mentioned that the Forza DX12 port took them 4 months (with a team of 4 engineers working on it), so depending on when that API became available in the XDKs, I wouldn't expect anything to ship using it until next year (unless some first-party teams have had early access - Forza being ported early this year does make me wonder whether Horizon 2 might have had the time to benefit from that work), as nobody would want to embark on anything of that magnitude just before shipping.
Any improvements for games shipping in the near future will be down to whatever they can wring out from the Kinect reserve being lifted, and to improvements in the DX11.x runtime that fast-path more calls and eliminate as much of the overhead as they can without completely re-architecting the API.
Please provide a quote from "PC GAF" stating that. Don't worry, I'll wait.
Pretty much echoes what people on GAF have been saying for some time: PS4 is more powerful and the ESRAM is a major issue that will hamper them well into the gen.
To be fair many PC gamers here on GAF have been reasonable about it. A lot, however, start frothing at the mouth when the word comes up but then talk about the benefits of Mantle/DX12.

it comes up all the time. !!
- the coding to the metal "myth."
- let's not dredge up that cliffyb tweet.
- there have been zero examples of a multiplatform game benefitting from any metal advantage.
- pc has assembler too.
- any performance increase over the years was down to algorithms, not optimization.
it is pc gaf lore that metal is not a thing that matters, and pc gaf lore that the same number of teraflops means the same game.
that's just my impression! But what do I know, I have only been lurking daily for a year.
I don't know about ESRAM itself being expensive, but it would have made the APU a lot more expensive if they didn't want to compromise the GPU any further.
It directly increases the size of your main chip by a significant amount. So, yes.
I guess this greatly depends on how you define an "insane amount of money".
It's integrated on the same die with the APU (hence the E, from embedded) which is already huge as it is. That makes it expensive at the moment, until they shrink the whole chip.
I know this has been answered, but for the scope of such an increase, here's what the Xbox One APU looks like with only 32MB of ESRAM:
It is on the same die as the GPU and the CPU. 32MB is already the largest on-die SRAM pool ever made, so making it even larger would have:
a) increased power consumption,
b) decreased APU yield,
c) increased APU cost due to b,
d) increased power supply cost due to a,
e) increased cooling costs due to a.
Of course it is more powerful. According to him, "PS4 is just a bit more powerful."
But ESRAM a major issue? I guess you opened a different link.
Can't believe they made a 60fps-locked game in four months while living in a war-torn country.
According to him "PS4 is just a bit more powerful."
PS4 has twice the number of GPU components but it's only a bit more powerful?
What would you expect him to say? "PS4 has a decent performance lead and it took us a while to get the X1 version up to scratch"?
Why not? I think given they had even less time on Xbox One their work on this version is amazing.
That's a pretty arbitrary number there. Define what 1x better means, ya know?

I don't know if I still want 60 fps.
Seriously 2.5 times better, that's a lot.
Digital Foundry: To what extent will DX12 prove useful on Xbox One? Isn't there already a low CPU overhead there in addressing the GPU?
Oles Shishkovstov: No, it's important. All the dependency tracking takes a huge slice of CPU power. And if we are talking about the multi-threaded command buffer chunks generation - the DX11 model was essentially a 'flop', while DX12 should be the right one.
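For those who haven't looked at the new API: what he's describing maps roughly onto D3D12's model, where command lists are recorded on worker threads and then handed to the queue in one cheap call, instead of funnelling everything through a single immediate context the way DX11 effectively did. Here's a minimal, hypothetical C++ sketch of that model (assuming a device and command queue already exist; the thread count, the Chunk struct and the function names are made up for illustration, and error handling is omitted):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

// One worker's slice of the frame: its own allocator (allocators are not
// thread-safe) and its own command list, closed when recording is done.
struct Chunk {
    ComPtr<ID3D12CommandAllocator>    allocator; // must outlive GPU execution
    ComPtr<ID3D12GraphicsCommandList> list;
};

void RecordChunk(ID3D12Device* device, Chunk& out)
{
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(&out.allocator));
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              out.allocator.Get(), nullptr,
                              IID_PPV_ARGS(&out.list));

    // ... record draws / state changes for this slice of the frame here ...

    out.list->Close(); // ready for submission
}

void BuildFrame(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    const int kThreads = 4; // arbitrary for the sketch
    std::vector<Chunk> chunks(kThreads);
    std::vector<std::thread> workers;

    // Recording happens in parallel; this is the work DX11 could not
    // meaningfully spread across cores.
    for (int i = 0; i < kThreads; ++i)
        workers.emplace_back(RecordChunk, device, std::ref(chunks[i]));
    for (auto& w : workers)
        w.join();

    // One small, serial submission point.
    std::vector<ID3D12CommandList*> lists;
    for (auto& c : chunks)
        lists.push_back(c.list.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(lists.size()), lists.data());

    // Real code would signal a fence and wait on it before the allocators
    // are reset or released.
}
```

The point of the quote is the shape of the model rather than this exact code: the expensive part (recording and dependency tracking) scales across cores, and the serial part (the actual submit) stays tiny.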
I think what most people say is that more is made of the 'coding to the metal' thing than is really there. Perhaps, on paper, in a theoretical capacity, it might be true. But in practical terms, I don't think there's a whole lot of evidence to support the difference being quite that big. And I also don't think you'll find anybody who denies that there is some advantage to using a low-level console API.
Pretty much echoes what people on GAF have been saying for some time: PS4 is more powerful and the ESRAM is a major issue that will hamper them well into the gen.
But my favourite quote - just for out of context potential...
PS4 one million times faster than Xbox One confirmed.
In the grand scheme of computing power, he isn't wrong.
PS4 has twice the number of GPU components but it's only a bit more powerful?
In the grand scheme, no. The GPU in the PS4 is only a bit more powerful than the Xbone in comparison with a GTX Titan. But in a direct comparison between just the PS4 and Xbone, I don't think that's true.
What the hell is "PC GAF lore" and where can I see actual quotes of it?
But there is not a single practical example of that.
Oles Shishkovstov: The problem with unified memory is memory coherence. Even on consoles, where we see highly integrated SoCs (system on chips), we have the option to map the memory addresses ranges basically 'for CPU', 'for GPU' and 'fully coherent'. And being fully coherent is really not that useful as it wastes performance. As for the traditional PC? Going through some kind of external bus just to snoop the caches - it will be really slow.
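For what it's worth, the closest thing a PC programmer sees to those 'for CPU' / 'for GPU' ranges is the heap type chosen per resource in D3D12. This is only an illustrative sketch of that split, not anything from 4A's code; the buffer size, names and the function are all made up:

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Plain buffer description shared by both allocations below.
static D3D12_RESOURCE_DESC BufferDesc(UINT64 size)
{
    D3D12_RESOURCE_DESC d = {};
    d.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    d.Width            = size;
    d.Height           = 1;
    d.DepthOrArraySize = 1;
    d.MipLevels        = 1;
    d.Format           = DXGI_FORMAT_UNKNOWN;
    d.SampleDesc       = {1, 0};
    d.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
    return d;
}

void AllocateExamples(ID3D12Device* device)
{
    const UINT64 kSize = 64 * 1024; // hypothetical 64 KB

    // "For GPU": device-local memory the CPU never maps.
    D3D12_HEAP_PROPERTIES gpuOnly = {};
    gpuOnly.Type = D3D12_HEAP_TYPE_DEFAULT;

    // "For CPU": CPU-writable upload memory that the GPU reads over the bus.
    D3D12_HEAP_PROPERTIES cpuVisible = {};
    cpuVisible.Type = D3D12_HEAP_TYPE_UPLOAD;

    D3D12_RESOURCE_DESC desc = BufferDesc(kSize);

    ComPtr<ID3D12Resource> gpuBuffer, uploadBuffer;
    device->CreateCommittedResource(&gpuOnly, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_COMMON, nullptr,
                                    IID_PPV_ARGS(&gpuBuffer));
    device->CreateCommittedResource(&cpuVisible, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_GENERIC_READ, nullptr,
                                    IID_PPV_ARGS(&uploadBuffer));

    // There is no cheap "fully coherent" option on a discrete PC GPU: keeping
    // both sets of caches coherent would mean snooping across the bus, which
    // is exactly the slow path being described above.
}
```

On the consoles' unified memory the split is about cache behaviour rather than physical location, which is why a 'fully coherent' mapping exists there at all.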
PC games are horribly inefficient. Late-gen console games will be achieving much closer to peak throughput in all areas of their hardware than a PC will ever dream of.
Is ESRAM expensive? Is there a reason Microsoft couldn't have gone with 64 or 128MB?
Are you sure of that? Because the 8800 GT, released in November of 2006 for 200 bucks, is faster than a PS3 in recent multiplats. This whole optimization thing is fishy.
It's also over twice as powerful as the GPU in the PS3. Please find me anything a 7950 GT, or whatever card is spec-matched to the RSX, can run at playable fps that looks anywhere near as good as Uncharted 3, God of War, TLOU, Killzone 3, etc.
I was about to ask for an actual example of a game running on consoles at 2x performance compared to an equivalent PC. Maybe there are, but I can't find any.
In the grand scheme, no. The GPU in the PS4 is only a bit more powerful than the Xbone in comparison with a GTX Titan. But in a direct comparison between just the PS4 and Xbone, I don't think that's true.
He is primarily a PC developer so I imagine he is looking at this in terms of the bigger picture. To him, the difference is not a huge one.
It's Leadbetter. What did you expect? Professionalism?
I was about to ask for an actual example of a game running on consoles at 2x performance compared to an equivalent PC. Maybe there are, but I can't find any.

There aren't any; if there were, you can be sure you would see them plastered all over NeoGAF threads. I believe that no developer would seriously consider spending many months or even years on optimization just to eke out a bit better performance. Carmack, whose "2x" tweet is often brought up as evidence, since there isn't any actual evidence out there, had this to say on the topic of his greatest regrets:
"That's the biggest thing when we look back on this generation," said the co-founder of id Software, when asked about the length of time it's taken to develop Rage in an interview with CVG.
"We had, at one point or another, seven years of work that went into it.
"It's a foregone conclusion that we're never going to throw out that much of the code base ever again. If we have eight times the resources on a next-gen console, we can't spend a decade writing the optimal technology for that. We need to be able to be at a point where we rewrite the sections that matter and have an impact, and reuse as much as we can."
He went on to say that his own development outlook has changed. He wants id's content creators to be able to do their job without the never-ending drive for better technology getting in the way.
"A lot of the things we do nowadays, with the AI and the animation, is really good enough," he said.
"Every aspect of Rage could be made better if we wanted to spend another two years on it, but it's not the optimal use of [our] resources. I'd be happier if we produced two games in that time. And that's actually my personal marching orders for technology development going forward."
Well you are directly contradicting Shishkovstov and Carmack; they are not mincing words and leaving it open to interpretation. Any counterclaim has to start with "Sorry Oles and John, but you guys are wrong."

It's such a ridiculous statement to think that PCs are running at 50% or less efficiency that I don't know who with any brain would claim that.
Oh so you believe that optimisation DOES exist after all. And you're quoting that n00b Carmack!
Carmack in the end understood what the vast majority of multiplatform devs already know: that endless optimization is a gigantic waste of time for little actual benefit.
The defensive attitude only proves that it's to do with vested interests. Some people see bringing up the idea as somehow making more powerful PC hardware less impressive.
More like, "Sorry Oles and John, but I don't see the evidence that supports this."Well you are directly contradicting Shishkovstov and Carmack, they are not mincing words and leaving it open to interpretation. Any counterclaim has to start with "Sorry Oles and John, but you guys are wrong".