
Digital Foundry: GTA V PS4 and Xbox One compared in new frame-rate stress test.

Panajev2001a

GAF's Pleasant Genius
Indeed. Outside of maybe two scenes in the DF video, there are blips on both consoles that could be pretty much anything, or nothing of particular consequence.

And in the two scenes where things are a bit more suspect, it's impossible for us to really say for sure what's going on.

I'm just poking at the narrative that this is another 'CPU-bound open world game' that does better on XBO. Even if one wanted to assign CPU performance as the factor where the PS4 is doing worse in the DF video, I think you'd have to assume people haven't watched the video to push that idea.

Also, we have no clue how much the particular OS is taxing each console. It may not be as simple as two cores being reserved and the others going full throttle in each implementation. How heavy is the effect of virtualizing CPU and GPU accesses through the Hypervisor and Game OS? On that note, IMHO that is a good strategy that is paying off right now. Separating the Game OS and the essentially-Windows-8 partition must introduce some headaches, but it is allowing them to change what consumers perceive as the console OS (Start screen, apps for friends, YouTube, Netflix, video sharing, LIVE Marketplace, Windows Snapping, etc...) rapidly without affecting games much, if at all.

From what we know, Sony is not running a hypervisor like on the PS3; I wonder if that is true or just a big collective assumption.
 
I wish Fafalada and Panajev would open their own DF, then we would not have to deal with some of DF's amateur conclusions anymore. (They have some good contributors though)
 

c0de

Member
From what we know, Sony is not running a hypervisor like on the PS3; I wonder if that is true or just a big collective assumption.

Well, as I see it, Sony took an off-the-shelf FreeBSD and customized it. But we don't know how extensive those customizations were or what Sony changed in the code.

But we have this: http://en.wikipedia.org/wiki/ULE_scheduler
"ULE is the default scheduler for the FreeBSD operating system (versions 7.1 and forward) for the i386 and AMD64 architectures.[3] It was introduced in FreeBSD version 5,[4] but it was disabled by default for a time in favor of the traditional BSD scheduler until it reached maturity. The BSD scheduler does not make full use of SMP or SMT,[5] which is important in modern computing environments. The primary goal of the ULE project is to make better use of SMP and SMT environments. ULE should improve performance in both uniprocessor and multiprocessor environments,[6] as well as interactive response under heavy load.[7] The user may switch between the BSD scheduler and ULE using a kernel compile-time tunable.[8]"
 

Curufinwe

Member
The entire article is written in a way that implies both console versions are on equal footing when they are in fact not. Their own video shows that one has more graphical detail via higher-resolution shadows, increased environmental detail and a larger draw distance in vast open areas. The same version offers a consistent 2-4fps advantage across almost every performance scenario except a single instance in which the other version is slightly higher. That is not "about equal" like the written article would have people believe.

The PS4 version not only has more graphical flourish, it also performs better 99% of the time, offering a slight (2-4 frames per second) advantage in situations where both versions falter and a solid 30fps in many scenarios where the XBO is dropping to 24-26fps. There is only a single instance where that performance advantage shifts to the other version, and yet the article makes it seem that this is more important than every other example they themselves provide. It's absurd.

As far as actual quotation from the article goes:



This is in direct contradiction of their own performance analysis videos, wherein the XBO actually produces 24fps in high-speed chases and the PS4 produces 26fps. But they would have you believe that this completely bullshit "advantage" on the XBO is good enough to offset the frequent and stark performance gap shown in shootouts (a solid 30 vs a sustained 24fps). How is that anything but misleading? They even go so far as to insinuate that the minimal difference in performance in this extremely limited scenario is due to the small 150MHz speed bump on the XBO CPU, when in reality it's far more likely to be due to porting a last-gen engine to current-gen hardware.

So yes, to put it bluntly, it IS biased bullshit.

Then, if you're like me, you might want to ask "why bother?" But then you read stuff like this in this thread and it becomes fairly obvious why they would minimize the advantages of one console in order to make them both appear as though they are on more equal footing.

Good summary of Leadbetter's dishonest narrative.
 

goonergaz

Member
Open-world game engines have been built to be very CPU-centric for some time. In games that don't stress the CPU the way open-world games like AC: U and GTA V do, the PS4's GPU and RAM win out. In CPU-heavy games we're seeing the XBO take the lead.

?? PS4 version is better! And let's see what happens with FC4.

1 (bad) example does not a fact make!
 

nampad

Member
The entire article is written in a way that implies both console versions are on equal footing when they are in fact not. Their own video shows that one has more graphical detail via higher-resolution shadows, increased environmental detail and a larger draw distance in vast open areas. The same version offers a consistent 2-4fps advantage across almost every performance scenario except a single instance in which the other version is slightly higher. That is not "about equal" like the written article would have people believe.

The PS4 version not only has more graphical flourish, it also performs better 99% of the time, offering a slight (2-4 frames per second) advantage in situations where both versions falter and a solid 30fps in many scenarios where the XBO is dropping to 24-26fps. There is only a single instance where that performance advantage shifts to the other version, and yet the article makes it seem that this is more important than every other example they themselves provide. It's absurd.

As far as actual quotation from the article goes:



This is in direct contradiction of their own performance analysis videos, wherein the XBO actually produces 24fps in high-speed chases and the PS4 produces 26fps. But they would have you believe that this completely bullshit "advantage" on the XBO is good enough to offset the frequent and stark performance gap shown in shootouts (a solid 30 vs a sustained 24fps). How is that anything but misleading? They even go so far as to insinuate that the minimal difference in performance in this extremely limited scenario is due to the small 150MHz speed bump on the XBO CPU, when in reality it's far more likely to be due to porting a last-gen engine to current-gen hardware.

So yes, to put it bluntly, it IS biased bullshit.

Then, if you're like me, you might want to ask "why bother?" But then you read stuff like this in this thread and it becomes fairly obvious why they would minimize the advantages of one console in order to make them both appear as though they are on more equal footing.



All of a sudden the dialogue is shifting to the idea that the XBO can overcome the performance gap that has existed thus far due to actual hardware disadvantages. When you have the leading publication for performance analysis minimizing graphical and performance advantages, something is wrong. The only explanation is bias, with an intent to muddy the water around the existing hardware and the assured performance gap between the two consoles going forward.

Yeah, I feel the same about Digital Foundry. They seem quite biased.
Also, the Xbox One's higher CPU clock seems to be the new Blast Processing: the one thing the hardware is better at gets played up.
 
This thread makes me pretty happy that I have an i7.

Yeah, I feel the same about Digital Foundry. They seem quite biased.
Also, the Xbox One's higher CPU clock seems to be the new Blast Processing: the one thing the hardware is better at gets played up.

A higher CPU clock was always what Blast Processing was.
 

Tobor

Member
Even through Leadbetter's attempts to downplay the difference, it's clear the PS4 version is superior. What's all the drama about?
 

Poeton

Member
I think this whole stress test business has more to do with Leadbetter being salty about the internet doing his job for him with regard to the denser foliage on the PS4 that he missed. It's about his ego more than anything else.
 
Even through Leadbetter's attempts to downplay the difference, it's clear the PS4 version is superior. What's all the drama about?

Well, I guess they just don't seem to want to admit that the PS4 version is superior.

Oh well, GTA V is not the right game to fight about; it's a multiplatform game.

If it had been a true next-gen title and it had been this close, then maybe it would be worth debating over. Assassin's Creed does not count, for obvious reasons.
 
No shit the FPU deals with floating point operations, just like the shaders in a GPU deal with floating point operations, and it has nothing to do with the TMUs, ROPs, etc. It is just a single metric to show the scale difference between the CPU and the GPU.

[image: GFLOPS comparison chart]




Funny that the third-fastest supercomputer (Sequoia) is using a PowerPC-based CPU, which uses a RISC-type instruction set.

RISC vs CISC are different CPU architecture paradigms, just like the object-oriented paradigm vs the functional paradigm. Going from one to the other is possible, but it takes more work than going between different implementations of the same paradigm.

Learn something about microarchitecture before coming here to spout nonsense, genius.
 
After watching the video, both versions seem very close frame-rate-wise; if anything the PS4 is the winner, judging from that gun battle footage where the XB1 holds 26fps for a while. The "CPU showing its advantage" comment seems baseless without any real proof.
 
All I see in the video is the PS4 holding on to 30fps most of the time while the XB1 can't, and one instance where BOTH dip but the XB1 has a higher framerate. All of this while also having some graphical differences. The CPU advantage hypothesis is getting ridiculous.

Are engines taking advantage of GPGPU? If that happens, will the difference be even wider?
 

c0de

Member
Yeah, I missed that. Still, if you have to use a CISC ISA versus a RISC ISA, it introduces differences in how you program for it, making it more challenging.

Thankfully we have high-level languages for most cases and compilers that do most of the work for us (which doesn't exclude intrinsics, of course).
 

Poeton

Member
Thankfully we have high-level languages for most cases and compilers that do most of the work for us (which doesn't exclude intrinsics, of course).

Sometimes, late at night I miss programming in assembly.
Not really
 
If we count CPU only it is. From your post:

Xbox 1 = 1750 * 6 * 8 = 84 GFlops.
PS4 = 1600 * 6 * 8 = 76.8 GFlops.

IF this is a purely CPU bottleneck, it won't matter if their GPU or memory architecture is better or if the raw number-crunching figures are better. The CPU will always 'chug' along for the ride (it's not that wide a difference, hence the quotes).

I'm not an actual developer, so I could be wrong. But in my uneducated opinion, there are already two separate instances (AC: U and GTA V) where the supposedly inferior machine is outperforming the other. Both cases have been linked to the CPU difference.

So what if Sony frees an additional CPU core:

Xbox 1 = 1750 * 6 * 8 = 84 GFlops.
PS4 = 1600 * 7 * 8 = 89.6 GFlops.

This would give developers more power to play with.
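
As an aside, the peak-FLOPS arithmetic above is easy to sanity-check. A minimal sketch, assuming the figures used in this thread (clock in MHz x active cores x 8 FLOPs per cycle per Jaguar core) rather than any official spec:

    #include <stdio.h>

    /* Peak CPU throughput as used in the post above:
       clock (MHz) * cores * FLOPs per cycle, converted to GFLOPS. */
    static double peak_gflops(double clock_mhz, int cores, int flops_per_cycle)
    {
        return clock_mhz * cores * flops_per_cycle / 1000.0; /* MFLOPS -> GFLOPS */
    }

    int main(void)
    {
        printf("XB1, 6 cores: %.1f GFLOPS\n", peak_gflops(1750.0, 6, 8)); /* 84.0 */
        printf("PS4, 6 cores: %.1f GFLOPS\n", peak_gflops(1600.0, 6, 8)); /* 76.8 */
        printf("PS4, 7 cores: %.1f GFLOPS\n", peak_gflops(1600.0, 7, 8)); /* 89.6 */
        return 0;
    }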
 

Marlenus

Member
So what if Sony frees an additional CPU core:

Xbox 1 = 1750 * 6 * 8 = 84 GFlops.
PS4 = 1600 * 7 * 8 = 89.6 GFlops.

This would give developers more power to play with.

Once either console maker does that, there is no going back; the whole reason for the RAM and CPU allocation is to make sure they do not lose out on some killer feature. If nothing comes down the line in two or three years, I can see them opening it up for devs in the last years of the consoles' lives, but not before.
 

Respawn

Banned
Unlikely; they will have been binned and tested based on the current clock speed, at the current voltage settings, with the current cooler. While 90% of consoles may be able to support the increased clock speed without upgrading the cooling system or increasing the voltage, there will be a few that will not, because their particular chip only just passed the tests at the desired speed bin.

I still find it funny that they are attributing a >9.4% FPS advantage to the CPU alone. It is likely part of the reason, but when you are comparing 24 FPS to 28 FPS you are talking about a 20% gap, so there must be something else going on too.
Not sure what I just read here. Not that I don't understand mind you.
 

Panajev2001a

GAF's Pleasant Genius
Thankfully we have high-level languages for most cases and compilers that do most of the work for us (which doesn't exclude intrinsics, of course).

True, but then you have cases like the Xbox 360 and PS3 where the CPU is a mix of both. PowerPC is generally understood to be a RISC architecture, but there are several instructions which are micro-coded and which you will want to avoid (making sure the compiler, or you through intrinsics/ASM, does not introduce them) as they will slow you down. Also, copying data between registers and other perfectly valid cases will make this your friend:

Load Hit Store

BTW, on x86 cores, yes, even the Jaguar cores people love to hate, the hardware can sidestep the issue in most scenarios and still proceed full tilt (essentially forwarding data from the store units to the load units without going through external memory): Speculative Execution, Load Hit Store on x86

Anyway, the more complex you make the front end (faster and faster instruction decoding, macro-to-micro-op translation, macro- and micro-op fusion, etc...), the more you increase the cost of going massively multi-core versus a CPU dealing with a simpler instruction set. ISA still matters. You just have 30+ years of R&D put into compilers, libraries, state-of-the-art semiconductor processes constantly pushing the edge, and CPU microarchitecture from two behemoths like Intel and AMD that offset the picture ;).
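
For anyone unfamiliar with the term, here is a minimal illustration of the load-hit-store pattern described above; it is my own example, not code from any shipping engine. On the in-order PowerPC cores of the 360/PS3 (Xenon and the Cell PPU) the reload below stalls for tens of cycles waiting on the earlier store, whereas most modern x86 cores, Jaguar included, forward the value from the store buffer:

    #include <stdint.h>

    /* Classic load-hit-store: store a value to memory and immediately
       load the same bytes back through a different type. */
    int32_t float_bits(float f)
    {
        union { float f; int32_t i; } u;
        u.f = f;      /* store the float to a stack slot...                  */
        return u.i;   /* ...then reload it right behind the store: on Xenon/
                         Cell PPU this round-trips through the store queue and
                         stalls; x86 store-to-load forwarding usually hides it. */
    }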
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
Looks like both versions are fine. Dipping a few frames here and there isn't that big a deal.

This is the bottom line. It is really hard to make an open-world game not dip below 30. In effect, they would have to target 45+fps just to handle dips going into the 30s and then cap at 30 (see the frame-time sketch below). This is what Second Son does, but that is a lot of headroom to throw away. We don't know the unlocked frame rate in GTA V, but I'm guessing it only goes into the mid-30s, hence the dips into the 20s during stressful parts.

I'd like to see the PS4/XB1 frame rates compared to PS3/360. I bet we are still much better off now.
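
To put that headroom argument into frame-time terms, here is a quick sketch (the 45fps internal target is the poster's guess above, not a measured figure):

    #include <stdio.h>

    /* Convert frame-rate targets into per-frame time budgets. */
    int main(void)
    {
        const double fps[] = { 30.0, 45.0, 24.0 };
        for (int i = 0; i < 3; i++)
            printf("%2.0f fps -> %.1f ms per frame\n", fps[i], 1000.0 / fps[i]);
        /* 30 fps -> 33.3 ms, 45 fps -> 22.2 ms, 24 fps -> 41.7 ms:
           rendering internally at ~45 fps leaves roughly 11 ms of slack per
           frame before a 30 fps cap starts visibly dropping frames. */
        return 0;
    }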
 

MDSLKTR

Member
I'm happy with my PS4 purchase, but the framerate drops while driving at full speed in a city are disappointing to see. This is what I wanted to be fully fixed compared to the previous gen. Oh well, the definitive experience will be when I build a new PC next year, I guess.
 

c0de

Member
True, but then you have cases like the Xbox 360 and PS3 where the CPU is a mix of both. PowerPC is generally understood to be a RISC architecture, but there are several instructions which are micro-coded and which you will want to avoid (making sure the compiler, or you through intrinsics/ASM, does not introduce them) as they will slow you down. Also, copying data between registers and other perfectly valid cases will make this your friend:

Load Hit Store

BTW, on x86 cores, yes, even the Jaguar cores people love to hate, the hardware can sidestep the issue in most scenarios and still proceed full tilt (essentially forwarding data from the store units to the load units without going through external memory): Speculative Execution, Load Hit Store on x86

Anyway, the more complex you make the front end (faster and faster instruction decoding, macro-to-micro-op translation, macro- and micro-op fusion, etc...), the more you increase the cost of going massively multi-core versus a CPU dealing with a simpler instruction set. ISA still matters. You just have 30+ years of R&D put into compilers, libraries, state-of-the-art semiconductor processes constantly pushing the edge, and CPU microarchitecture from two behemoths like Intel and AMD that offset the picture ;).

Of course the ISA is still important. But the compilers are certainly customized by MS and Sony to deal with these particular cores. The architecture and the delays between load, compute and store are fixed on consoles, and pipeline stalls happen in a (mostly) deterministic way. I remember a Sony slide giving exact timings for RAM access and cache access. Therefore you can adjust your tools for many cases.
I guess they have a massive load of profiling data and specs for their Jaguar cores, and I expect them to have profiling tools similar to PIX that help devs find wait states in their code.
 

thelastword

Banned
The only benchmarks I could find were the browser-based ones like SunSpider, Kraken, Octane etc., and Anandtech has this warning about them:



So comparing Android vs iOS is not really valid due to the different ways they work, which means comparing iOS to Windows is also invalid for the same reason. Then there is the fact that the A8 and the A8X are ARM-based yet Jaguar is x86-based, and we have no clue how efficient these benchmarks are on the different architectures.

Kabini Benchmarks

This review is really good in that if you click the All Results button below each graph, it gives you desktop CPU performance up to the top Intel chips. In this way we can compare a 4-core 1.6GHz Jaguar CPU to the top-end Intel i7s.

Looking at the i5 4690 and comparing it to the 5150, you see performance across the board is around 4x faster, with some instances of it being 5x faster. Considering the huge clock speed advantage the i5 has over the 5150, it just shows that IPC is not totally terrible on the Jaguar chips when you consider the die size.

This also gives us a chance to look at what a 28.1% CPU clock speed advantage gives when the GPU is the same, as they are both APUs with the same GPU running at the same speed. In this case you can see that at no point does the faster-clocked CPU even get a 1 FPS advantage; at best it gets a 0.8 FPS advantage, which is well within the margin of error for these tests. Now some of that will be down to the fact that it is GPU-limited, but even in games like Sleeping Dogs, which are open world, you are not seeing a difference.

Another interesting one to look at is the Unity draw calls test on the IGP synthetic test page. That is a scenario you would expect to be CPU-limited, but a 28.1% clock speed advantage only gives an 11.7% higher score. Yet we are being led to believe by these so-called 'experts' that a 9.4% advantage yields a 20% fps boost. It is total bullshit, and anybody who is pushing it is totally incapable of doing sound analysis. Mr Leadbetter really needs to up his game if he does not want to come across as an incompetent buffoon.
Thank you for this article, Marlenus. I've long stopped looking for information to dispel the nonsense I'm reading in DF articles. They have the framerate analysis, yet they go against it in many instances to put the Microsoft console in a good light. It's like they're calling the people who watch their framerate videos dumbasses: "you did not see what you saw, we will tell you what you saw." It's ridiculous.

Look how they try to downplay the extra foliage in GTA V ("a tree here and there"), and look how they try to downplay resolution differences in many games ("like for like, no discernible difference"). DF tries to downplay a 40-50% GPU advantage all the time, yet at the same time tries to play up a 150MHz upclock, and still people defend them. Even if I were the biggest MS fan, I could never defend this.
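
As a footnote to the percentages being argued over, the clock-ratio arithmetic is easy to reproduce. A quick sketch; note that the 2.05GHz figure is my assumption for the faster Kabini part (the Athlon 5350), which the quoted post doesn't actually name:

    #include <stdio.h>

    /* Relative clock-speed advantage, as a percentage of the slower part. */
    static double pct_faster(double fast, double slow)
    {
        return (fast - slow) / slow * 100.0;
    }

    int main(void)
    {
        printf("XB1 1750MHz vs PS4 1600MHz: %.1f%%\n", pct_faster(1750.0, 1600.0)); /*  9.4 */
        printf("Kabini 2050MHz vs 1600MHz:  %.1f%%\n", pct_faster(2050.0, 1600.0)); /* 28.1 */
        return 0;
    }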
 
Even through Leadbetter's attempts to downplay the difference, it's clear the PS4 version is superior. What's all the drama about?

I think some people are upset that Leadbetter didn't specifically come right out and say the PS4 version was superior.

He praises both versions for having their own strengths and weaknesses.
 

Amused

Member
No difference in lighting or other effects between PS4 and X1? Same AA solution and overall IQ?

I guess my question is, will there be another DF article comparing more than foliage and framerate?
 

onanie

Member
The entire article is written in a way that implies both console versions are on equal footing when they are in fact not. Their own video shows that one has more graphical detail via higher-resolution shadows, increased environmental detail and a larger draw distance in vast open areas. The same version offers a consistent 2-4fps advantage across almost every performance scenario except a single instance in which the other version is slightly higher. That is not "about equal" like the written article would have people believe.

The PS4 version not only has more graphical flourish, it also performs better 99% of the time, offering a slight (2-4 frames per second) advantage in situations where both versions falter and a solid 30fps in many scenarios where the XBO is dropping to 24-26fps. There is only a single instance where that performance advantage shifts to the other version, and yet the article makes it seem that this is more important than every other example they themselves provide. It's absurd.

As far as actual quotation from the article goes:



This is in direct contradiction of their own performance analysis videos, wherein the XBO actually produces 24fps in high-speed chases and the PS4 produces 26fps. But they would have you believe that this completely bullshit "advantage" on the XBO is good enough to offset the frequent and stark performance gap shown in shootouts (a solid 30 vs a sustained 24fps). How is that anything but misleading? They even go so far as to insinuate that the minimal difference in performance in this extremely limited scenario is due to the small 150MHz speed bump on the XBO CPU, when in reality it's far more likely to be due to porting a last-gen engine to current-gen hardware.

So yes, to put it bluntly, it IS biased bullshit.

Then, if you're like me, you might want to ask "why bother?" But then you read stuff like this in this thread and it becomes fairly obvious why they would minimize the advantages of one console in order to make them both appear as though they are on more equal footing.



All of a sudden the dialogue is shifting to the idea that the XBO can overcome the performance gap that has existed thus far due to actual hardware disadvantages. When you have the leading publication for performance analysis minimizing graphical and performance advantages, something is wrong. The only explanation is bias, with an intent to muddy the water around the existing hardware and the assured performance gap between the two consoles going forward.

I think Leadbetter was hoping that no one would actually look at the videos. Or maybe he himself doesn't look at the videos when he writes his drivel.
 

thelastword

Banned
I think Leadbetter was hoping that no one would actually look at the videos. Or maybe he himself doesn't look at the videos when he writes his drivel.
Apparently he's not the only one; with the way some of the articles have been worded so far this gen, it seems that his co-workers are toeing the line.
 
Calm down, mate; it's better to explain why he's wrong than to just spit venom in his general direction.

Current x86 CPUs (since the Pentium Pro) are equipped with different types of x86 decoders. They don't just decipher the incoming instructions, they also translate them into simpler micro-ops to be executed.

They also have the capability of fusing two x86 instructions into a single internal operation; this is called macro-op fusion.

As I keep repeating, there isn't a different technology between RISC and CISC, just a different CPU design paradigm. RISC aims for speedier CPUs with fewer, simpler instructions, whereas CISC works around complex instructions that can each take several cycles on their own. Note that CISC is a somewhat pejorative way of referring to anything that isn't RISC, used by the people who promote RISC.

With current CPUs, that distinction makes no sense. And some might argue it never actually did.
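
To make the decode step concrete, here is a small illustration of my own (the generated instructions are typical examples, not Jaguar-specific output): a memory-operand add is a single x86 instruction that the decoder cracks into a load micro-op plus an ALU micro-op, while a load/store RISC ISA such as PowerPC spells the two steps out as separate instructions.

    /* The statement below compiles to one x86 instruction with a memory
     * operand, e.g. (Intel syntax):
     *     add   eax, DWORD PTR [rdi]   ; cracked into load + add micro-ops
     * while a load/store RISC ISA like PowerPC uses two instructions:
     *     lwz   r4, 0(r3)              ; explicit load
     *     add   r5, r5, r4             ; register-register add
     */
    int accumulate(const int *p, int total)
    {
        total += *p;   /* memory operand folded into the add on x86 */
        return total;
    }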
 

KageMaru

Member
I wish Fafalada and Panajev would open their own DF, then we would not have to deal with some of DF's amateur conclusions anymore. (They have some good contributors though)

Problem is unless you had access to or worked on code for the game, you're always going to be guessing.
 

iceatcs

Junior Member
Shame about the Xbox's lacking performance in those FPS videos.

I think they should patch in a downgrade to 900p, the standard resolution for Xbox - its fans are so used to it.
 

KageMaru

Member
I think some people are upset that Leadbetter didn't specifically come right out and say the PS4 version was superior.

He praises both versions for having their own strengths and weaknesses.

From what I can see this isn't the final face-off; this one is just about the frame rate. It's in the face-off where they declare a winner.

People are just looking for reasons to bitch about him.

No difference in lighting or other effects between PS4 and X1? Same AA solution and overall IQ?

I guess my question is, will there be another DF article comparing more than foliage and framerate?

Yeah I'm thinking there should be another face off article. With how large this game is, it's no surprise they are breaking it down and taking their time with the face off. Assuming they plan to do a final face off.
 

Marlenus

Member
Current x86 CPUs (since the Pentium Pro) are equipped with different types of x86 decoders. They don't just decipher the incoming instructions, they also translate them into simpler micro-ops to be executed.

They also have the capability of fusing two x86 instructions into a single internal operation; this is called macro-op fusion.

As I keep repeating, there isn't a different technology between RISC and CISC, just a different CPU design paradigm. RISC aims for speedier CPUs with fewer, simpler instructions, whereas CISC works around complex instructions that can each take several cycles on their own. Note that CISC is a somewhat pejorative way of referring to anything that isn't RISC, used by the people who promote RISC.

With current CPUs, that distinction makes no sense. And some might argue it never actually did.

With current x86 CPUs that is true for the hardware only. The ISA is still CISC or RISC, and there are plenty of architectures that are almost exclusively RISC.

I said the same thing about RISC vs CISC being a design paradigm, so how can you then tell me to 'learn something about microarchitecture' when you have just said the same as I did?
 