
Digital Foundry The PS5 GPU in PC Form? Radeon RX 6700 In-Depth - Console Equivalent PC Performance?

Gaiff

SBI’s Resident Gaslighter
Another interesting part is that they were using a 2080 Ti, which is only 15-17% faster than an RTX 2080, a card some people compare to the PS5 GPU. And not very distant from the RX 6700, that DF
The 2080 Ti is more like 25% faster than the 2080, not just 15-17%. It's actually faster than even the 6750 XT, so it's a bit difficult to scale it against the 6700 non-XT.
 
Last edited:

Gaiff

SBI’s Resident Gaslighter
Techpowerup says it's exactly 20% faster.
Those averages are often a few percent off from the real world, and TechPowerUp tends to fuck up quite a bit on their benches (such as when they paired the 4090 with a 5800X). They're not far off, but it's 25%. TechPowerUp includes CPU-bound benchmarks in those averages, which are also taken at 1080p and include old benches where the 2080 Ti was CPU-bound.

Based on TPU review data: "Performance Summary" at 1920x1080, 4K for 2080 Ti and faster.

In real life scenarios, the 2080 Ti will tend to beat the PS5 by over 25% and often more. Thanks to its massive bandwidth, in some extreme cases, it can lead by up to 50% at 4K.

It doesn't completely change what you said, but the results would come out a bit higher on the 2080 Ti, around 10%.




This is a benchmarking video from 10 months ago and the 2080 Ti always leads by around 20-30%.
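As a rough illustration of the averaging point above, here is a minimal sketch with invented FPS numbers (not TechPowerUp's actual data) showing how including CPU-capped results compresses a relative-performance summary:

```python
from math import prod

# Hypothetical per-game FPS for a faster and a slower GPU (invented numbers,
# not TechPowerUp data). In the last two "games" both cards hit the same
# CPU/engine cap, so the ratio there is 1.0.
gpu_fast = [125, 100, 140, 144, 144]
gpu_slow = [100,  80, 112, 144, 144]

ratios = [f / s for f, s in zip(gpu_fast, gpu_slow)]

# Geometric mean of per-game ratios, the usual way relative-performance
# summaries are built.
geo_all = prod(ratios) ** (1 / len(ratios))
geo_gpu_bound = prod(ratios[:3]) ** (1 / 3)

print(f"With CPU-capped games included: +{(geo_all - 1) * 100:.0f}%")        # ~+14%
print(f"GPU-bound games only:           +{(geo_gpu_bound - 1) * 100:.0f}%")  # +25%
```

The numbers are made up; the point is only that flat, CPU-capped entries pull the averaged ratio toward 1, which is one way a ~25% GPU gap can show up as ~20% in a summary chart.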
 
Last edited:

winjer

Member
Those averages are often a few percent off from the real world, and TechPowerUp tends to fuck up quite a bit on their benches (such as when they paired the 4090 with a 5800X). They're not far off, but it's 25%. TechPowerUp includes CPU-bound benchmarks in those averages, which are also taken at 1080p and include old benches where the 2080 Ti was CPU-bound.

Based on TPU review data: "Performance Summary" at 1920x1080, 4K for 2080 Ti and faster.

In real life scenarios, the 2080 Ti will tend to beat the PS5 by over 25% and often more.

It doesn't completely change what you said, but the results would come out a bit higher on the 2080 Ti, around 10%.




This is a benchmarking video from 10 months ago and the 2080 Ti always leads by around 20-30%.


Regardless of whether it's 20% or 25%, it's closer than most tests posted here that used much more powerful GPUs.
And the 4650G is closer to the PS5's CPU than most other options. So that test from Anandtech gives a rough approximation of how much the CPU matters.
 

Kataploom

Gold Member
No, if you put a Ryzen 3600X in place of the 13900K, you see a big change in Rich's results.
Most games won't even notice that; they are designed with console CPUs in mind. Put any current-gen i3 or Ryzen 3 with 16GB of RAM and you can run most games equal to or slightly better than consoles, because their CPUs are so weak you really have to dig into discontinued CPUs to get something that more or less matches lol.

But for the sake of proof, here are some videos with a 3600 and a 3600X running Hitman 3 well above 60 fps with a similarly powerful card (can't find one with a 6700 + 3600X, btw). Note how the CPU never gets pushed while the GPU takes most of the load:





 
Death Stranding would be about the same; the PS5 version performs slightly better than a 2080, and so does the 6700 compared to the 2080.

Not sure why you think Alan Wake 2 performs poorly on the PS5 compared to PC; it's around the 2080 level, which is better than most.
Death Stranding performs like a 3070, but I suspect the pc
how is it close to 3x when it is barely 2.3x faster than a 3.6 ghz zen+ cpu in gaming scenarios?

if anything, it is closer to 2x than 3x

PS5's cpu is a zen 2 architecture at 3.5 ghz with 8 mb of cache, but it probably has access to more bandwidth than a ddr4-based zen+ or zen 2 cpu. typical ddr4 bandwidth for a desktop zen+ or zen 2 cpu will be between 40-60 gb/s (2666 mhz to 3600 mhz), meanwhile the console has access to a total of 448 gb/s. even if we assume the GPU uses 350 gb/s, that would still give the cpu a massive ~100 gb/s of bandwidth to work with. and considering i've been monitoring GPU bandwidth usage in a lot of 2023/2024 games on my 3070, trust me, games are not that hungry for memory bandwidth (i can provide you some numbers later on with some of the heaviest games).
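As a back-of-the-envelope restatement of that bandwidth argument, here is a minimal sketch; the 350 GB/s GPU share is the post's assumption, not a measured split:

```python
# Back-of-the-envelope bandwidth budget from the post above.
total_unified_bandwidth = 448          # GB/s, PS5 GDDR6 pool
assumed_gpu_share       = 350          # GB/s, an assumption, not a measured split
cpu_leftover = total_unified_bandwidth - assumed_gpu_share   # ~98 GB/s

# Typical desktop dual-channel DDR4 for comparison (MT/s * 8 bytes * 2 channels).
ddr4_2666 = 2666e6 * 8 * 2 / 1e9       # ~42.7 GB/s
ddr4_3600 = 3600e6 * 8 * 2 / 1e9       # ~57.6 GB/s

print(f"Console CPU leftover (assumed): ~{cpu_leftover} GB/s")
print(f"Desktop DDR4 range:             ~{ddr4_2666:.0f}-{ddr4_3600:.0f} GB/s")
```

In practice the CPU and GPU contend for the same pool, so the leftover figure is an upper bound rather than a guaranteed allocation.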

ps5's cpu-bound performance is super inconsistent. some people will downplay it to gain an argument advantage here and go as far as saying it is like a ryzen 11700. this is what a ryzen 2600 gets you in spiderman with ray tracing:



as you can see, it is super cpu-bottlenecked and drops to the 50s. and the ps5 is known to hit 70+ frame rates in its ray tracing mode. (the gpu is clearly underutilized in this video. it is a super heavy bottleneck that is occurring at a 50 fps average, a bottleneck that does not happen on PS5, which is able to push 70+ fps in ray tracing mode across the city in both games.)

care to explain this? in spiderman, the ps5 CPU clearly outperforms the ryzen 2600, a cpu that has 45% of the performance of a 13900k.

simple questions:

1) do you think the ps5 cpu is faster than a ryzen 2600?
2) do you acknowledge that in gaming scenarios, the 13900k is only about 2.3x faster than a ryzen 2600 on average at a 720p cpu-bound resolution?
3) if your answers to the questions above are both yes, do you understand this means the 13900k is realistically only about 2.2x faster than the PS5 CPU and nowhere near 3x faster?
4) and if you do indeed acknowledge the 13900k being barely 2.3x faster than a ps5-equivalent CPU, would you also admit that the 1/3-cheaper i5 12400f is also close to 2x faster than the ryzen 2600, the one we should have settled on as the PS5-or-worse equivalent CPU?

what will it be?

i think the ps5 cpu is worse than the 2600
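For reference, the ratio arithmetic behind questions 1-3 above, as a minimal sketch; the 2.3x figure and the assumed PS5-vs-2600 relationship are premises from the posts, not measurements:

```python
# Premises taken from the posts above (not independently verified here).
speedup_13900k_vs_2600 = 2.3    # 720p CPU-bound gaming average claimed in the post
ps5_cpu_vs_2600        = 1.05   # assumption: PS5 CPU ~5% faster than a Ryzen 2600

print(f"13900K vs PS5 CPU: ~{speedup_13900k_vs_2600 / ps5_cpu_vs_2600:.1f}x")  # ~2.2x

# Flip the assumption (PS5 CPU ~10% slower than a 2600, as the reply suggests)
# and the gap widens, but still lands nowhere near 3x under these premises:
print(f"If PS5 is 10% slower than a 2600: ~{speedup_13900k_vs_2600 / 0.9:.1f}x")  # ~2.6x
```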
 

peish

Member
Most games won't even notice that; they are designed with console CPUs in mind. Put any current-gen i3 or Ryzen 3 with 16GB of RAM and you can run most games equal to or slightly better than consoles, because their CPUs are so weak you really have to dig into discontinued CPUs to get something that more or less matches lol.

But for the sake of proof, here are some videos with a 3600 and a 3600X running Hitman 3 well above 60 fps with a similarly powerful card (can't find one with a 6700 + 3600X, btw). Note how the CPU never gets pushed while the GPU takes most of the load:







No, that is not how things work.

Going from a 3600X to a 13900K will give a big uplift in fps even with a 6700. This is the way; you do not have to see 99% usage on the 3600X to get a CPU uplift.
 

Gaiff

SBI’s Resident Gaslighter
No, that is not how things work.

Going from a 3600X to a 13900K will give a big uplift in fps even with a 6700. This is the way; you do not have to see 99% usage on the 3600X to get a CPU uplift.
How many times do we have to explain this? The CPU won’t magically start doing the job of the GPU. The GPU is maxed out. Putting a faster CPU won’t make it go faster because it’s already going as fast as it can.

What’s so unclear about this very simple concept?

A faster CPU doesn't give a GPU a "performance boost"; this is nonsense. What it will do is allow the GPU to run closer to its real maximum performance by unbinding it, because it's the CPU that sends frames to the GPU and not the other way around. If the GPU is already working as fast as it can, then upgrading the CPU will do nothing.
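A minimal sketch of that mental model, with invented frame times (it ignores pipelining and sync overhead): the delivered frame rate is set by whichever of the CPU or GPU takes longer per frame, so a faster CPU changes nothing while the GPU is the longer of the two.

```python
def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Toy model: whichever stage takes longer per frame sets the frame rate."""
    return 1000.0 / max(cpu_ms, gpu_ms)

gpu_ms = 16.0                                # GPU fully busy, ~62 fps ceiling
print(fps(cpu_ms=10.0, gpu_ms=gpu_ms))       # ~62 fps
print(fps(cpu_ms=5.0,  gpu_ms=gpu_ms))       # still ~62 fps with a CPU twice as fast

# The faster CPU only shows up once the GPU stops being the long pole:
print(fps(cpu_ms=10.0, gpu_ms=7.0))          # 100 fps, CPU-bound
print(fps(cpu_ms=5.0,  gpu_ms=7.0))          # ~143 fps, GPU-bound again
```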
 
Last edited:

yamaci17

Member
[reaction GIF]
 

SlimySnake

Flashless at the Golden Globes
Techpowerup says it's exactly 20% faster.
I bought a 2080 over a 2080 Ti back in the day, and the 2080 Ti was roughly 35% faster in virtually all the benchmarks we saw back then. Not sure what's going on over at TechPowerUp. It is definitely not just 20% more powerful. I remember because I was very envious of the performance it was getting. I'm sure you can find actual benchmarks on YouTube.

But yes, the CPUs of course matter. We have seen this time and time again over the last few years, especially as CPU-heavy next-gen games have begun to arrive. I can't believe everyone has forgotten games like Gotham Knights, which the devs themselves stated had a CPU bottleneck on consoles while we on PC ran it just fine at 60 fps. Starfield is another such example: after some patches, people were able to run the game at 60 fps, but no such patch on the XSX. I wonder why. Maybe its CPU is even worse than the 3600 and 2700X and in line with the 1800X.

There are dozens of benchmarks on PC that show just how atrocious Zen 1-era CPUs like the 1800X and 2700X really are when compared to a 7800X3D or a 13900K using a 3070, let alone a 4090. You can see those 30-series GPUs get held back by those CPUs. You get 2-3x more performance from the same exact GPU. That's literally the definition of a CPU bottleneck.

Granted, a 3070 is roughly 30% more powerful than the 6700, but while the disparity won't be 2-3x when there is less GPU power to go around, you will still be bottlenecked by the CPU. How much is anyone's guess, but I don't see that 100-150% advantage completely disappearing when using a slightly weaker GPU.
 

SlimySnake

Flashless at the Golden Globes
How many times do we have to explain this? The CPU won’t magically start doing the job of the GPU. The GPU is maxed out.
We can't say this with confidence without profiling both the CPU and GPU on consoles. At this point it's just conjecture.

A faster CPU doesn't give a GPU a "performance boost"; this is nonsense. What it will do is allow the GPU to run closer to its real maximum performance by unbinding i
Isn't that the same thing? If you take a 3070 and pair it with an 1800X, then pair it with a 7800X3D and see the framerates jump by 2-3x, wouldn't you say that the faster CPU gave the GPU a performance boost? Let's say we are arguing semantics and use your second description, where the faster CPU allows the GPU to run closer to its real maximum performance: wouldn't you say that pairing the PS5 or the 6700 with a faster CPU makes it run closer to its real maximum performance? I honestly don't know why you did a 180 on this so fast. You were with me calling Rich a moron just two days ago, and now you are acting like we are saying the most insane things for simply stating what you were saying just 48 hours ago.
 
Last edited:

yamaci17

Member
4k fsr performance (50% scaling), low settings

i5 13600k @ 5.5 GHz

heavily GPU bound at 63-67 FPS:


[screenshot]


ryzen 2700x @ 4 GHz

heavily GPU bound at 63-67 FPS (limited by the GPU, as confirmed with the PresentMon profiling tool)

[screenshot]



exact same settings, exact same internal resolution, exact same GPU usage, exact same FPS in the exact same scene
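For anyone curious how a PresentMon-style capture backs up the "limited by GPU" call, here is a hedged sketch that classifies frames by comparing per-frame GPU-busy time against total frame time. The column names below are placeholders for whatever the capture tool actually writes, not guaranteed PresentMon CSV headers:

```python
import csv

def classify(csv_path: str, gpu_busy_col: str, frame_time_col: str) -> None:
    """Count frames where the GPU was busy for nearly the whole frame interval.

    Column names are passed in because they vary by tool/version; the ones in
    the example call below are assumptions, not guaranteed headers.
    """
    gpu_bound = other = 0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            gpu_ms = float(row[gpu_busy_col])
            frame_ms = float(row[frame_time_col])
            if gpu_ms >= 0.95 * frame_ms:    # GPU busy ~all frame -> GPU set the pace
                gpu_bound += 1
            else:
                other += 1
    total = gpu_bound + other
    print(f"GPU-bound frames: {gpu_bound}/{total} ({100 * gpu_bound / total:.0f}%)")

# classify("capture.csv", "msGPUBusy", "msBetweenPresents")  # assumed column names
```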


looking for that mythical 2x-3x performance increase the 13600k @ 5.5 ghz is supposed to have in this scene:

[reaction GIF]


This thread has become funny. People now deny heavy GPU bound scenarios just because.

imagine, this person would call 63 FPS at 1080p low on a 3070 a heavy CPU bottleneck and blame my CPU for it. yet now he won't even be able to answer, because the GPU still gets the same framerate with a 5.5 GHz 13600k :) i'm really looking forward to the creative ways some people will spin this now. Bojji Bojji :)

I've finally found the ultimate way to shut this discussion down. Let's see how they will spin it, though. I can already guess the most probable answers:

"muh game is broken, muh game is unoptimized" (it wasn't unoptimized or broken 24 hours ago in a different context, though)
"muh it's a 13600k, it's a petty i5 CPU. a 13900k and 7800x3d would double the framerates magically, you just can't see it, find me a 7800x3d-specific benchmark instead" (despite acknowledging the 13600k @ 5.5 ghz has 90% of the performance of a 13900k)
oh, and of course the most extreme one would be:
"muh zwormz is a bad tester, I don't believe their video. oh, I don't believe your screenshots either. I will believe just what I want to believe, not what is proven scientifically"

or something creative that I really can't think of right now. nooo, it just can't be the 3070 that is the limitation. it is a mythically fast GPU that should deliver 100+ FPS at 4K FSR performance, low settings, in starfield. something must be wrong (???)

oh, I found it. he will find some different benchmark in a different part of the town where they get 80+ FPS or so at actual 1080p low (not at 4k fsr performance, because he refuses to accept that 4k fsr performance is much heavier than 1080p itself; for him, 4k fsr performance is the same thing as running the game at 1080p, just because) and claim there's something wrong with zwormz's benchmark, and that my benchmark is wrong. he will practically change the conditions of the test, but deny that the conditions have changed (4k fsr performance is maybe 30-50% heavier to render than native 1080p at any given setting, but to prove his point that the 3070 should get more FPS, he will go out of his way to find actual 1080p benchmarks). deep down he has now realized that actual 1080p costs the GPU less and that upscaling has a heavy cost on the GPU, but of course he won't be able to admit that, and instead he will keep talking about the mythical 0.9m pixels and 2m pixels to save face.
 
Last edited:

Bojji

Member
4k fsr performance (50% scaling), low settings

i5 13600k @ 5.5 GHz

heavily GPU bound at 63-67 FPS:

[screenshot]

ryzen 2700x @ 4 GHz

heavily GPU bound at 63-67 FPS (limited by the GPU, as confirmed with the PresentMon profiling tool)

[screenshot]

exact same settings, exact same internal resolution, exact same GPU usage, exact same FPS in the exact same scene

looking for that mythical 2x-3x performance increase the 13600k @ 5.5 ghz is supposed to have in this scene:

[reaction GIF]

This thread has become funny. People now deny heavy GPU bound scenarios just because.

imagine, this person would call 63 FPS at 1080p low on a 3070 a heavy CPU bottleneck and blame my CPU for it. yet now he won't even be able to answer, because the GPU still gets the same framerate with a 5.5 GHz 13600k :) i'm really looking forward to the creative ways some people will spin this now. Bojji Bojji :)

I've finally found the ultimate way to shut this discussion down. Let's see how they will spin it, though. I can already guess the most probable answers:

"muh game is broken, muh game is unoptimized" (it wasn't unoptimized or broken 24 hours ago in a different context, though)
"muh it's a 13600k, it's a petty i5 CPU. a 13900k and 7800x3d would double the framerates magically, you just can't see it, find me a 7800x3d-specific benchmark instead" (despite acknowledging the 13600k @ 5.5 ghz has 90% of the performance of a 13900k)
oh, and of course the most extreme one would be:
"muh zwormz is a bad tester, I don't believe their video. oh, I don't believe your screenshots either. I will believe just what I want to believe, not what is proven scientifically"

or something creative that I really can't think of right now. nooo, it just can't be the 3070 that is the limitation. it is a mythically fast GPU that should deliver 100+ FPS at 4K FSR performance, low settings, in starfield. something must be wrong (???)

I have seen many PC gamers crying before that they changed cpus and saw no performance uplift, hahaha.

Of course there are games that are super CPU-limited, and changing a 3600 to a 13600K will give you a huge difference, but most of the time the GPU is the limiting factor at 1440p and 4K, and most of the differences show up in a 1080p HFR scenario.

Many people in this thread don't understand it or don't want to understand it. Changing 3600 to 15900KS 7GHz edition won't change performance when GPU is 100% utilized.

So we are wasting our time here, because they will see that 13900K and say "lol DF hate Sony, they should have used a Pentium dual core for parity!".
 
Last edited:

yamaci17

Member
Many people in this thread don't understand it or don't want to understand it. Changing 3600 to 15900KS 7GHz edition won't change performance when GPU is 100% utilized.

that's the crucial part. they look at CPU-bound benchmarks and claim, as if it were a general rule, that you will get a 2x perf increase with a 2x faster CPU. what the hell does that mean for a GPU-bound scenario?

I run into GPU bound scenarios on my 3070 all the time where even the 7900x 3d wouldn't do a damn thing. some people in this thread legit question my own sanity or their sanity. I just can't believe what I read sometimes.

look at this heavy gpu bound scenario i have here in alan wake 2:



will I get 70+ FPS magically with a better CPU here? how does that even make sense? the 3070 is just weak for path tracing. this is literally what it is capable of; even if you pair it with a low-, mid- or high-end CPU, the result won't change. this is an apparent GPU-bound scenario where the only fix would be to get a better GPU.



if this is caused by CPU, why the hell is my GPU squirming at 230 watts at 70 degrees lmao

I literally have to use dlss performance/ultra performance to barely hit 30+ fps averages GPU bound in cyberpunk with path tracing. if that is not the definition of an extremely GPU bound situation, I don't know what is.




this is just what this GPU is capable of.

sure, there will be CPU bottlenecks here and there, but this is not it. Starfield is a super GPU-heavy game while also being a super CPU-heavy game at the same time. both can be true at once, which is probably why Starfield confuses him the most; he refuses to believe the game is that heavy on the GPU on something like a 3070 and tries to make sense of it from a CPU perspective.

starfield is still clearly GPU bound at 4k fsr performance, low settings, on a 3070 at around 60 fps with a high-end CPU. which means the game is extremely heavy on the GPU even at low settings and low internal resolutions. what else is there to say???

isn't the whole thread/discussion about whether Avatar is GPU bound on PS5 at 720p and 60 FPS or not?

If Starfield is GPU bound at around 60 FPS on a 13600k and a 3070 at LOW settings and 1080p internal resolution, doesn't that mean that at medium/high settings you would need a lower-than-1080p internal resolution to get similar 60 fps averages?
 
Last edited:

SlimySnake

Flashless at the Golden Globes

Of course there are games that are super CPU-limited, and changing a 3600 to a 13600K will give you a huge difference, but most of the time the GPU is the limiting factor at 1440p and 4K, and most of the differences show up in a 1080p HFR scenario.
Which is what most of these console games are running their performance modes at. Actually, most are dropping to 720p internal resolutions and still struggling to hit 60 consistently: Immortals, Avatar, FF16, Star Wars, Alan Wake, Helldivers, Skull and Bones. Starfield didn't even get a 720p 60 fps mode. Meanwhile, my friend is using my old RTX 2080 with a brand-new 7800X3D and had no problem running it at 1080p 60 fps on medium settings at launch. That GPU is what MS themselves said is equivalent to the XSX GPU.

So yes, good to see you finally agree that console performance modes are bottlenecked by the cpu.
 
Last edited:

yamaci17

Member
"and claim there's something wrong with zwormz benchmark, my benchmark is wrong. he will practically change the conditions of the test, but deny that condition has changed (4k fsr performance is maybe %30-50 heavier to render than native 1080p at any given setting. but to prove his point that 3070 should get more FPS, he will go out of his way to find actual 1080p benchmarks. "

told ya he would use this one :) he has now shifted to a "benchmark of his friend's" at actual 1080p (which is lighter to render compared to 4k fsr performance / fsr ultra performance / 1440p fsr performance / dlss performance). he changed the conditions to prove his "own point". he still keeps on with the pixel-count talk. this will forever keep him from understanding that upscaled games can be extremely and heavily GPU bound due to upscaling and native buffer costs. consoles most of the time never use 1080p buffers and often stay at a 4k buffer and upscale to 4k, which is why it is so costly to begin with (as exemplified in the starfield example above: a 3070 that is able to push 30-36 FPS at native 4k barely pushes 63 fps at 4k fsr performance, despite the 4x pixel reduction on paper, with a 13600k @ 5.5 ghz by the way).

i also call bs on a 2080 being able to lock to 60 at native 1080p in starfield. there are foliage-heavy places in new atlantis that will make a 3070 dip below 60 fps at LOW settings at actual native 1080p too.
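The pixel-count arithmetic behind the upscaling-cost point, as a minimal sketch; the ~30-50% overhead figure is the post's rough estimate, not a measurement:

```python
def megapixels(w: int, h: int) -> float:
    return w * h / 1e6

internal_1080p = megapixels(1920, 1080)   # FSR "Performance" at a 4K output target
output_4k      = megapixels(3840, 2160)

print(f"Internal render: {internal_1080p:.2f} MP (same pixel count as native 1080p)")
print(f"Output target:   {output_4k:.2f} MP")

# The internal pixel count matches native 1080p, but the upscale pass and any
# post-processing that runs at output resolution are paid at ~8.3 MP, so
# "4K FSR Performance" costs noticeably more than a plain 1080p render.
assumed_overhead = 1.4   # the post's rough 30-50% estimate, taken near the middle
print(f"Rough cost vs native 1080p: ~{assumed_overhead:.1f}x (assumption)")
```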

 
Last edited:

Bojji

Member
Which is what most of these console games are running their performance modes at. Actually, most are dropping to 720p internal resolutions and still struggling to hit 60 consistently: Immortals, Avatar, FF16, Star Wars, Alan Wake, Helldivers, Skull and Bones. Starfield didn't even get a 720p 60 fps mode. Meanwhile, my friend is using my old RTX 2080 with a brand-new 7800X3D and had no problem running it at 1080p 60 fps on medium settings at launch. That GPU is what MS themselves said is equivalent to the XSX GPU.

So yes, good to see you finally agree that console performance modes are bottlenecked by the cpu.

Resolution drop won't change your fps when you are CPU bound, how many times this has to be said?

Developers are dropping to 720p because they are GPU bound, this wouldn't change shit in fps if CPU was the problem.

When we have clearly CPU bound games like starfield and Gotham knights developers are locking them to 30fps so you don't see that limit. Jedi was running 45 fps because it was CPU bound in some sections and they removed that problem by turning off CPU heavy ray tracing.

So NO, most games on consoles in performance modes are purely GPU limited. That's why I said that many people just don't get how this works.
 

yamaci17

Member
Resolution drop won't change your fps when you are CPU bound, how many times this has to be said?

Developers are dropping to 720p because they are GPU bound, this wouldn't change shit in fps if CPU was the problem.

When we have clearly CPU bound games like starfield and Gotham knights developers are locking them to 30fps so you don't see that limit. Jedi was running 45 fps because it was CPU bound in some sections and they removed that problem by turning off CPU heavy ray tracing.

So NO, most games on consoles in performance modes are purely GPU limited. That's why I said that many people just don't get how this works.
yeah, sometimes I wish they unlocked the lower bounds and let it drop below 720p. 720p is probably some kind of limit they want to maintain to save face (i mean, 720p itself is pretty shameful, but you can always do worse). I'm sure most games would reach a locked 60 at 500-600p on series x and ps5.

i also think they now somehow believe lower resolution helps poor CPUs too (or something along those lines)
 
Last edited:

yamaci17

Member
here's a funny test I made with these profiling tools that shows clear GPU-bound performance scaling between 1080p dlss quality (0.9m pixels) and 1080p dlss ultra performance (0.23m pixels). you can still see it is heavily GPU bound and goes from a 32 FPS avg. to a 50 FPS avg. it is plain old GPU limitation.



(also, does it actually look like 360p to you? go look at what actual 360p games looked like, and then understand why it is still heavy to run)

again, thinking that games drop to 720p because of cpu limitations is admitting that 90% of these modes are dumb (because they unnecessarily let GPU resources go to waste and unnecessarily make you play at a degraded resolution that brings no benefit to performance). so why is the engine dropping to 720p? if you believe it is not GPU bound, then dropping resolution can't help with CPU-boundness either, so why bother? by extension their developers are dumb, and by extension sony is dumb for letting them do this and bring shame upon the console. and i'd say if you insist on this theory that the PS5 is not GPU bound at 720p with dynamic resolution scaling, you're a part of this circus too: by buying the ps5 and playing those games, at quality or performance, you're, at least per your own beliefs, playing a game whose developer somehow thinks driving the game to a 720p lower bound will help its CPU performance. that is the ultimate moronic thing you can do as a developer. and apparently there's a bunch of them. i'd say ditch the console and keep on with the PC, then.

I tend to disagree that they're dumb, however. it is plain old GPU limitation there too. no developer will needlessly ruin their art just for the fun of it. it is just what 10 tflops of a mix of rdna1/rdna2 GPU can deliver with software-based ray tracing (avatar and alan wake 2).

see above, exact same story. the 3070 is too weak; it barely gets 30 fps at 1080p dlss quality with path tracing, and to get close to 60 fps, you need to go below 360p or so. the profiling tool clearly says it is GPU limited, but again, you're free to believe whatever you want. I've provided enough solid data.

ps5 trying to run software ray tracing in avatar is similar to a 3070 trying to run path tracing in Cyberpunk. neither is really meant to be, and it's only done at the expense of pushing things forward to get improved real-time lighting as opposed to baked lighting. in the end, you of all people will agree that HFW looks better with higher pixel counts than Avatar. so maybe in the end it didn't matter what they did. cyberpunk at 1080p dlss quality looks poor; even with path tracing, it just looks poor with gorgeous lighting lol

however, we're also reaching the limits of what is possible with rasterization, so I can sort of understand why devs try to push ray tracing stuff anyway.
 
Last edited:

Bojji

Member
yeah, sometimes I wish they unlocked the lower bounds and let it drop below 720p. 720p is probably some kind of limit they want to maintain to save face (i mean, 720p itself is pretty shameful, but you can always do worse). I'm sure most games would reach a locked 60 at 500-600p on series x and ps5.

i also think they now somehow believe lower resolution helps poor CPUs too (or something along those lines)

Exactly. Developers should have their IQ and performance targets in mind at the start and then build games around them. We had RR7 on PS3 that was 1080p and 60fps when most games were struggling with 30fps targets at 720p. What's the point of high-res textures or amazingly detailed models when players won't be able to see them behind the soup on screen lol.

Something like this happened with Immortals; the game is super tech-heavy, but it was 720p at launch and no wonder: a GPU stronger than the PS5's (6700 XT) barely gets more than 30fps at 1080p...

[screenshot]
 
Last edited:

yamaci17

Member
Exactly. Developers should have their IQ and performance targets in mind at the start and then build games around them. We had RR7 on PS3 that was 1080p and 60fps when most games were struggling with 30fps targets at 720p. What's the point of high-res textures or amazingly detailed models when players won't be able to see them behind the soup on screen lol.

Something like this happened with Immortals; the game is super tech-heavy, but it was 720p at launch and no wonder: a GPU stronger than the PS5's (6700 XT) barely gets more than 30fps at 1080p...

[screenshot]
yeah, this game is a funny one. in this one, for example, a ryzen 3600 cannot guarantee a 60 fps lock over the ps5, despite what some folks in this thread would lead you to believe;



even at low settings, both have similar drops, and if they're indeed CPU bound, it means aveum is another game where the ps5 can match or come close to the 3600's CPU-bound performance (like in spiderman and ratchet). sure, maybe there's an outlier here too, like spiderman using extra cpu resources for streaming and such. but it is what it is.

we were led to believe in this thread that the PS5's CPU performs like a ryzen 1800x, and plainly worse than a ryzen 2600, etc. so if a ryzen 1800x-like CPU drops to the 50s in that area, why does the 3600 have similar drops too? surely it should've locked to 60 considering PS5's performance profile... but it doesn't. or maybe it is the 2080 that is the limitation, who knows.

the game is so heavy on the gpu that he has to drop from 4k dlss quality to dlss performance to lock to 60 with a 13700k (3:54) and an rtx 4080.

imagine believing that this game won't be GPU bound at 720p on PS5, lol.
 
Last edited:

yamaci17

Member
On PS5? Certainly not; the PS5 is about 5% faster than a 2080, while a 3070 is another 30% on top of that.
they somehow have no trouble believing that console-specific optimizations can allow the ps5 gpu to punch above its weight

but when it comes to the CPU, it must somehow be worse than a desktop ryzen 2600 (a 3.6 ghz zen+ cpu), despite being a 3.5 ghz zen 2 CPU.

these optimizations must only be relevant for its GPU I guess.
 

damiank

Member
If you are talking about the second-hand market, PC has that as well, you know? A second-hand 3070 can be found for under $200 if you shop around. Combine that with a refurbished business PC and you have a gaming PC faster than the PS5 for around $350. I wouldn't personally recommend it, but if you are that strapped for cash, it can be done.
You mean those old Haswell farts? Good luck with them.
 
Last edited:

Bojji

Member
yeah, this game is a funny one. in this one, for example, a ryzen 3600 cannot guarantee a 60 fps lock over the ps5, despite what some folks in this thread would lead you to believe;



even at low settings, both have similar drops, and if they're indeed CPU bound, it means aveum is another game where the ps5 can match or come close to the 3600's CPU-bound performance (like in spiderman and ratchet). sure, maybe there's an outlier here too, like spiderman using extra cpu resources for streaming and such. but it is what it is.

we were led to believe in this thread that the PS5's CPU performs like a ryzen 1800x, and plainly worse than a ryzen 2600, etc. so if a ryzen 1800x-like CPU drops to the 50s in that area, why does the 3600 have similar drops too? surely it should've locked to 60 considering PS5's performance profile... but it doesn't. or maybe it is the 2080 that is the limitation, who knows.

the game is so heavy on the gpu that he has to drop from 4k dlss quality to dlss performance to lock to 60 with a 13700k (3:54) and an rtx 4080.

imagine believing that this game won't be GPU bound at 720p on PS5, lol.


they somehow have no trouble believing that console-specific optimizations can allow the ps5 gpu to punch above its weight

but when it comes to the CPU, it must somehow be worse than a desktop ryzen 2600 (a 3.6 ghz zen+ cpu), despite being a 3.5 ghz zen 2 CPU.

these optimizations must only be relevant for its GPU I guess.

This is especially funny when you consider that all the "special hardware" the PS5 has is mostly there to offload the CPU, so we have hardware decompression and an audio chip; PCs don't have things like that and have to calculate that stuff on the CPU. So the PS5 CPU should perform better than what DF were able to test with those console APUs in a PC environment. Plus, consoles can in theory have lower-level APIs to improve things even further.
 

Gaiff

SBI’s Resident Gaslighter
We can't say this with confidence without profiling both the CPU and GPU on consoles. At this point it's just conjecture.
I'd argue we can, considering we see CPUs in the same class being able to provide double the number of frames to a faster GPU. I don't think it's a CPU bottleneck, i.e., it's not because the CPU is insufficiently strong, but there is a bottleneck somewhere else for sure.

As Rich speculated, it could be a myriad of things: latency, drivers, poor coding, maybe the PS5's API scales poorly past a certain level. What we do know is that the CPU is physically powerful enough to deliver many more frames. If the programmers were idiots and somehow screwed up by constraining the CPU, then it's not a CPU bottleneck, it's incompetence.

And you're correct that we don't have profiling tools, so we cannot know for sure, but we can at least determine that, physically, the CPU can do more.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
Resolution drop won't change your fps when you are CPU bound, how many times this has to be said?

Developers are dropping to 720p because they are GPU bound, this wouldn't change shit in fps if CPU was the problem.

When we have clearly CPU bound games like starfield and Gotham knights developers are locking them to 30fps so you don't see that limit. Jedi was running 45 fps because it was CPU bound in some sections and they removed that problem by turning off CPU heavy ray tracing.

So NO, most games on consoles in performance modes are purely GPU limited. That's why I said that many people just don't get how this works.
Unless you have CPU and GPU profiling results on consoles, you can't say that for certain. A native 4K 30 fps game like Guardians of the Galaxy doesn't all of a sudden become GPU bound at 1080p with much lower settings. Same goes for Skull and Bones. It was co-op support in Gotham Knights, RT support in Star Wars, physics and object persistence in Starfield. Every engine is different. Every game is different. Go look up the 6600 XT results for Guardians of the Galaxy at 1080p ultra settings and explain to me why the PS5 couldn't run it at 1080p 60 fps even at drastically pared-down settings.

Even you said that Star Wars's RT was CPU bound in some sections, but even after I turned off RT, I was still CPU bound, just not as much.

What we do know is that the CPU is physically powerful enough to deliver many more frames. I
By that logic, 60 fps PS4-era games like CoD, Doom, Halo 5: Guardians, and Titanfall 2 prove that the PS4 CPU was physically powerful enough to deliver 60 fps in every game, and Rockstar, ND, Kojima, and every other developer who settled for 30 fps games last gen was incompetent. Every game is different. Hogwarts was fine in the open world and a complete mess in Hogsmeade. You just never know how devs have their games set up on the CPU. The Avatar dev said it best: most games are single-threaded. So when you go into a combat section, or have different physics going on with NPCs, or enemies, or weather effects, or explosions, wouldn't you say the game at that point becomes CPU bound, just like Star Wars in Koboh, Starfield in New Atlantis, and Hogwarts in Hogsmeade? At that point, you can bet that a 6 GHz CPU would do better than the 3.5 GHz CPU found in consoles.

Again, go look up any Zen 1 to Zen 4 comparison and you will see insane performance increases. If the CPU didn't make a difference, no one would be upgrading to the 7000 series.
 

Gaiff

SBI’s Resident Gaslighter
By that logic, 60 fps PS4-era games like CoD, Doom, Halo 5: Guardians, and Titanfall 2 prove that the PS4 CPU was physically powerful enough to deliver 60 fps in every game, and Rockstar, ND, Kojima, and every other developer who settled for 30 fps games last gen was incompetent.
Did we test those games on comparable CPUs and get 60fps? Don't think so. No one tested those mobile CPUs in AAA games, and no mainstream CPU as slow as them and comparable in terms of feature set was available. The 3600 is quite close to the consoles' CPU, so we have a reference.
Again, go look up any Zen 1 to Zen 4 comparison and you will see insane performance increases. If the CPU didn't make a difference, no one would be upgrading to the 7000 series.
No one said the CPU doesn't make a difference. We're saying it doesn't make a difference at such low frame rates, as evidenced by what Bojji Bojji and yamaci posted. If there were RT in Alan Wake 2's unlocked 60fps mode, then of course it'd make a difference. Look at how BG3 utterly hammers the 3600 on PC and causes it to drop below 30fps even with a fast GPU.

You said it yourself, we don't have profiling tools, so we cannot know for sure what is going on, so why do you insist on saying it's the CPU (and you have been doing so for several pages) when it could just as easily be something else? Especially when the CPU is one of the few elements we can actually isolate on PC to compare. Performance isn't just CPU or GPU; it's a million things.
 
Last edited:

yamaci17

Member
Did we test those games on comparable CPUs and get 60fps? Don't think so. No one tested those mobile CPUs in AAA games, and no mainstream CPU as slow as them and comparable in terms of feature set was available. The 3600 is quite close to the consoles' CPU, so we have a reference.

No one said the CPU doesn't make a difference. We're saying it doesn't make a difference at such low frame rates, as evidenced by what Bojji Bojji and yamaci posted. If there were RT in Alan Wake 2's unlocked 60fps mode, then of course it'd make a difference. Look at how BG3 utterly hammers the 3600 on PC and causes it to drop below 30fps even with a fast GPU.

You said it yourself, we don't have profiling tools, so we cannot know for sure what is going on, so why do you insist on saying it's the CPU (and you have been doing so for several pages) when it could just as easily be something else? Especially when the CPU is one of the few elements we can actually isolate on PC to compare. Performance isn't just CPU or GPU; it's a million things.
yeah, i've literally posted a starfield benchmark at 1080p-internal low settings (4k fsr 50% performance) where the 3070 is fully GPU bound at 63 fps in the exact same way on a 13600k and a 2700x
this literally means that even if the game is CPU bound in general in new atlantis (and as I've noted earlier, in different parts of new atlantis my CPU drops to the 55s), it doesn't change the fact that I would also need to play at 1080p-internal low settings to ensure my GPU is capable of pushing 60 FPS, even if I had a much better CPU. it is now scientifically proven that even with a 13600k at 5.5 ghz, the 3070 still drops to around 60 FPS at low settings at 1080p internal resolution. this GPU is not that "fast" for Starfield, nor will it ever be. Starfield is a heavy game on the GPU, period.

Fact of the matter is, medium settings are costlier than low settings, and the xbox series x is weaker than a 3070. All of this points to the fact that even if the xbox series x had a Zen 4 CPU in it, its GPU would still require sub-800p resolutions to hit 60 FPS in Starfield. I simply believe and genuinely think the same applies to Avatar: Frontiers of Pandora. The game is GPU bound between 1296p and 1800p in its 30 FPS mode, from what I recall. the logic is simple: if it needs to drop to 1296p at 30 fps then, based on what I've shown above, to get 2x the framerate it would need to drop to 648p or so to hit 60 FPS. the fact that it stops at 720p and starts hovering around 50-56 FPS there is proof that just a bit more of a push down to 648p would turn those 50 FPS dips into 60 FPS (granted the CPU is capable; we can't know that for certain unless I get the game and do tests myself, at least in my case).
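A quick pixel-budget sketch of that 1296p-to-648p reasoning, under the simplifying assumption that GPU cost scales roughly with internal pixel count:

```python
def pixels_16_9(height: int) -> int:
    """Pixel count of a 16:9 frame at the given height."""
    return round(height * 16 / 9) * height

for h in (1296, 917, 648):
    print(f"{h}p: {pixels_16_9(h) / 1e6:.2f} MP")
# 1296p: ~2.99 MP, 917p: ~1.49 MP, 648p: ~0.75 MP

# Halving the height (1296p -> 648p) quarters the pixel budget. If GPU cost fell
# perfectly in step with pixels, ~917p would already double the frame rate; in
# practice it does not fall that cleanly, which is why the post reaches for
# 648p "or so" as the comfortable 60 fps target.
```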

I've also shown, in another review with a 13600k, a 3060 Ti that was able to render 27 FPS at native 4k low but barely got to 58 FPS with DLSS performance, still being heavily GPU bound... and of course, if zWormz had done those tests at medium/high settings, even 4k dlss performance wouldn't come anywhere close to 60 FPS.

We've somehow settled on Avatar being GPU-heavy enough that it can make the resolution drop to 1296p at 30 FPS. Surely that's not going to be any kind of CPU limitation. What do people expect? A GPU that drops to 1296p at 30 FPS will have to make SEVERE sacrifices to both raster settings and resolution to double that performance ON THE GPU.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
You said it yourself, we don't have profiling tools, so we cannot know for sure what is going on, so why do you insist on saying it's the CPU (and you have been doing so for several pages) when it could just as easily be something else?
because of the single-threaded nature of most modern engines. A 3.5 GHz CPU vs a 6.0 GHz CPU will always give an advantage. Because we did this with Guardians of the Galaxy and saw that a same-tflops GPU on PC was running the game at much higher framerates at very high quality.

I remember the Guardians issue vividly because we were able to rule out several things:
- memory bandwidth: the 6600 XT's VRAM bandwidth is half of the PS5's, so it couldn't have been that.
- tflops: the 6600 XT was virtually identical to the PS5 in tflops.
- PS5 vs DirectX APIs: the performance was terrible on both consoles.

So what's left? The CPU.

I'm just not buying it. I ran the Skull and Bones beta on my 3080 with no issues at DLSS Quality, maxed out with RT on and room to spare. The PS5 runs at native 4k 30 fps. It should be able to do 1440p 60 fps, or at least 1080p 60 fps like Helldivers. Instead it's dropping to 720p. It's the CPU.
 
Last edited:

yamaci17

Member
here's one more funny video

an rtx 3080 can't hit a locked 60 fps at 720p (0.9 million pixels) with a 5600x :)



heavily GPU bound. (you can't dispute that it is caused by the 5600x, as the console's "1800x" CPU gets 50 FPS in this game. so with the 5600x being 2x faster than that, surely it can't be a CPU limitation.)

so why would the 3080 get heavily GPU bound at 720p at 56 FPS, though? it is pushing native 4k at 32 fps at 18:55 ;) how can it render 27-32 fps at 8.2 million pixels and barely get into the 56-64 FPS range at 0.9 million pixels? COULD it be that... upscaling is costly?

[reaction GIF]


damn this has been a funny ride. and this seals the deal. this has been so satisfying lmao.

[screenshots]


[reaction GIF]


cue the "game is not optimized on PC, DLSS/FSR is broken, 5600x is bad, 13900k testing required, this proves nothing"

i want to cry. the finding above destroys everything they've been standing for, everything they've claimed so far. and the sweet part is that it is literally their GPU.


[reaction GIF]
 
Last edited:

Gaiff

SBI’s Resident Gaslighter
because of the single-threaded nature of most modern engines. A 3.5 GHz CPU vs a 6.0 GHz CPU will always give an advantage. Because we did this with Guardians of the Galaxy and saw that a same-tflops GPU on PC was running the game at much higher framerates at very high quality.

I remember the Guardians issue vividly because we were able to rule out several things:
- memory bandwidth: the 6600 XT's VRAM bandwidth is half of the PS5's, so it couldn't have been that.
- tflops: the 6600 XT was virtually identical to the PS5 in tflops.
- PS5 vs DirectX APIs: the performance was terrible on both consoles.

So what's left? The CPU.
Come on, games are a heck of a lot more complex than that. It could be a million different things fucking with the performance.
I'm just not buying it. I ran the Skull and Bones beta on my 3080 with no issues at DLSS Quality, maxed out with RT on and room to spare. The PS5 runs at native 4k 30 fps. It should be able to do 1440p 60 fps, or at least 1080p 60 fps like Helldivers. Instead it's dropping to 720p. It's the CPU.
Not sure I understand your logic. It almost locks to 60fps even on the paltry Series S at 540p. My first thought would be to blame the GPU since that's what gets majorly impacted by resolution. Why would you say it's the CPU?
 
Even if the chip were one-for-one exactly the same as the PS5's chip, the PC GPU would lose most comparisons. Games on the consoles are developed natively for the consoles; there's way more overhead and there are compatibility issues on PC. Why are these comparisons even being made???

On the low end, consoles are usually going to be more optimized than a PC with the exact same specs, and on the mid-high to high end, PCs are going to do things that consoles can only dream of. This is the reality of how it has been for at least the last 20 years. The only player this is not true for is Nintendo, since the Wii.
 

yamaci17

Member
Not sure I understand your logic. It almost locks to 60fps even on the paltry Series S at 540p.
nice point, by the way. that never occurred to me lol. we've seen many games reach a more robust 60 fps lock on series s due to developers having more freedom to be "relaxed" about reducing resolution. i remember immortals of aveum was almost a fairly solid 60 fps lock on the S, albeit with a 436p resolution cost (which is too heavy in my opinion).

and if the argument is that series s uses lower settings that help its CPU, then are developers dumb for not enabling that?

and I'm still shocked as to why 720p would help the CPU. :pie_thinking:
 

Gaiff

SBI’s Resident Gaslighter
and I'm still shocked as to why 720p would help the CPU. :pie_thinking:
Yeah, I really don't get that part but Snake probably has an explanation for reaching that conclusion. If you need to drop all the way down to 720p from 4K to get 60fps, then you're easing the load on the GPU. Resolution has next to no impact on the CPU.
 

yamaci17

Member
Yeah, I really don't get that part but Snake probably has an explanation for reaching that conclusion. If you need to drop all the way down to 720p from 4K to get 60fps, then you're easing the load on the GPU. Resolution has next to no impact on the CPU.
i personally refuse to accept that logic, honestly.

here's cyberpunk being cpu bound at native 1080p / low settings on the 3070 at 98 FPS (this is a true CPU bottleneck, as shown with the help of PresentMon; performance here is indeed limited by the CPU):

[screenshot]




going to 540p (1080p dlss performance) literally does not gain you a single frame. I wish it did, actually lol. this is the first time i'm hearing of 720p/540p helping a low-end CPU. i wish this mythical thing affected me.

[screenshot]


this is what a true CPU bottleneck looks like, but it still happens at 100 FPS or so in Cyberpunk, even with my low-end CPU :messenger_tears_of_joy:

even 1080p ultra quickly makes it super GPU bound in an instant:

[screenshot]
 
Last edited:

Gaiff

SBI’s Resident Gaslighter
i personally refuse to accept that logic, honestly.

here's cyberpunk being cpu bound at native 1080p / low settings on the 3070 at 98 FPS (this is a true CPU bottleneck, as shown with the help of PresentMon; performance here is indeed limited by the CPU):



going to 540p (1080p dlss performance) literally does not gain you a single frame. I wish it did, actually lol. this is the first time i'm hearing of 720p/540p helping a low-end CPU. i wish this mythical thing affected me.



this is what a true CPU bottleneck looks like, but it still happens at 100 FPS or so in Cyberpunk, even with my low-end CPU :messenger_tears_of_joy:

even 1080p ultra quickly makes it super GPU bound in an instant:
Yeah, pretty much what I expected. Dropping the resolution will do fuck-all in CPU-limited scenarios.
 

SlimySnake

Flashless at the Golden Globes
Come on, games are a heck of a lot more complex than that. It could be a million different things fucking with the performance.

Not sure I understand your logic. It almost locks to 60fps even on the paltry Series S at 540p. My first thought would be to blame the GPU since that's what gets majorly impacted by resolution. Why would you say it's the CPU?
Why would the GPU have no issues rendering the game at native 4K in Guardians and Skull and Bones, then all of a sudden have a million different things fucking with the performance when the GPU load has been cut from rendering 8.2 million pixels all the way down to sub-1-million pixels?

The DRS in these games kicks in when the engine realizes it is not going to be able to hit 60 fps. So they drop the resolution. Then they drop it some more, and they keep dropping it until it hits the lowest bound. And then they call it a day instead of fixing any bottlenecks. This exact scenario happened with Star Wars. They were like, fuck it, let's just bring the DRS resolution down to 640p instead of fixing the root cause, which they sort of did by removing RT altogether to free up some of the CPU. But I can promise you the game is still CPU bound even after the RT is removed.

It is no different from the modders we discussed earlier in the thread dropping the resolution of Uncharted 4 to get a locked 60 fps, because simply halving it didn't work due to CPU, NOT GPU, bottlenecks. We are seeing the same thing here. Just because the CPUs are more powerful than the Jaguar CPUs doesn't mean they can't still hold back the GPU. We have dozens of examples of this from both third- and first-party studios over the last 3 years. It's literally history repeating itself.
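What that "keep dropping resolution until the lower bound" loop looks like as code: a generic sketch of a dynamic-resolution controller, not any particular engine's implementation.

```python
def drs_step(height: int, gpu_frame_ms: float, target_ms: float = 16.7,
             min_height: int = 720, max_height: int = 2160) -> int:
    """One dynamic-resolution step: scale the internal height by how far the GPU
    frame time is from the target, then clamp to the allowed range."""
    ratio = target_ms / gpu_frame_ms          # < 1 means the GPU is too slow
    new_height = int(height * ratio ** 0.5)   # spread the change across both axes
    return max(min_height, min(max_height, new_height))

# GPU stuck at ~21 ms per frame: the controller walks the resolution down and
# then just sits at the 720p floor, still short of 60 fps, i.e. "call it a day".
h = 1440
for _ in range(8):
    h = drs_step(h, gpu_frame_ms=21.0)
    print(h)    # 1284, 1145, 1021, 910, 811, 723, 720, 720
```

Whether the stall at the floor is ultimately a GPU or CPU limit is exactly what the two sides here disagree on; the controller itself only reacts to the measured frame time.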
 

Gaiff

SBI’s Resident Gaslighter
Why would the GPU have no issues rendering the game at native 4K in Guardians and Skull and Bones, then all of a sudden have a million different things fucking with the performance when the GPU load has been cut from rendering 8.2 million pixels all the way down to sub-1-million pixels?

The DRS in these games kicks in when the engine realizes it is not going to be able to hit 60 fps. So they drop the resolution. Then they drop it some more, and they keep dropping it until it hits the lowest bound. And then they call it a day instead of fixing any bottlenecks. This exact scenario happened with Star Wars. They were like, fuck it, let's just bring the DRS resolution down to 640p instead of fixing the root cause, which they sort of did by removing RT altogether to free up some of the CPU. But I can promise you the game is still CPU bound even after the RT is removed.

It is no different from the modders we discussed earlier in the thread dropping the resolution of Uncharted 4 to get a locked 60 fps, because simply halving it didn't work due to CPU, NOT GPU, bottlenecks. We are seeing the same thing here. Just because the CPUs are more powerful than the Jaguar CPUs doesn't mean they can't still hold back the GPU. We have dozens of examples of this from both third- and first-party studios over the last 3 years. It's literally history repeating itself.
I'm really not sure what you're arguing here. Are you saying that dropping the resolution somehow frees up the CPU or lessens the load on it?
 