
Mark Cerny: When making consoles, we're not trying to build low-cost PCs

MAX PAYMENT

Member
Does your PC use GPU/CPU cycles to decompress assets or not? (Rhetorical question).

And please share specs of the midrange PC you built 2 years prior to PS5 release that trumps the console.
I had a 2080 at the time. Now my 4080 absolutely waxes the PS5. I played FF7 Remake at native 4K with a high refresh rate. Then tried its sequel on PS5 and was literally shocked at the drop in image quality.
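On the decompression point above, here's a rough, hypothetical Python sketch of why it matters. The numbers are machine-dependent and the "asset" is fake; the point is only that inflating data on PC occupies CPU cores unless it's offloaded (e.g. via DirectStorage GDeflate), whereas the PS5 routes it through a dedicated hardware decompressor:

```python
# Hypothetical illustration only: time spent inflating an "asset" on the CPU.
# On PC this work occupies CPU cores (unless offloaded, e.g. via DirectStorage
# GDeflate); the PS5 routes it through a dedicated hardware decompressor.
import os
import time
import zlib

asset = os.urandom(8 * 1024 * 1024) + b"\x00" * (56 * 1024 * 1024)  # ~64 MB, partly compressible
blob = zlib.compress(asset, 6)

t0 = time.perf_counter()
restored = zlib.decompress(blob)
dt = time.perf_counter() - t0

assert restored == asset
print(f"{len(asset) >> 20} MB inflated in {dt:.3f} s "
      f"(~{len(asset) / (1 << 30) / dt:.2f} GB/s on one core)")
```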
 

MaulerX

Member
Ok sure. Build a PS5-specced PC in 2020 for $399 if that's the case. Or hell, build a PS5-specced PC for $399 today and let's see what you end up with.


That doesn't change the point in the slightest. In fact, it supports it.

A similarly specced PC costs more because its static components are not mass produced the way a console's are. And more importantly, they are not subsidized by services like PSN/Xbox Live/Game Pass Core etc...

An x86 console is in fact a low-cost, lower-specced PC. A cost made even lower by the previous point.
 
That doesn't change the point in the slightest. In fact, it supports it.

A similarly specced PC costs more because its static components are not mass produced the way a console's are. And more importantly, they are not subsidized by services like PSN/Xbox Live/Game Pass Core etc...

An x86 console is in fact a low-cost, lower-specced PC. A cost made even lower by the previous point.
PC parts are not mass produced? You make it sound like nVidia GPUs or Intel CPUs are as rare as SGI hardware.

Consoles have a single motherboard where all components (CPU, GPU, DRAM) are soldered in (even the SSD on PS5).

CPU + GPU are unified (APU), DRAM is unified, while on PC you have discrete components with different RAM types (DDR vs GDDR). Hell, some people hate consoles due to DRAM latency.

You want PCs to have console-style economies of scale, and at the same time I see PCMR folks dissing consoles for not having upgradeable parts due to lack of a CPU socket, DIMM slots, PCIe slots etc. HDD/SSD is the only exception.

Not to mention PC gamers don't like having to pay for online MP (it's still shitty, I agree, but console devs always have the option to make their MP game F2P to maximize their potential audience and recoup the costs via cosmetic MTX).

You cannot have your cake and eat it too. Every ecosystem has trade-offs.
 

Mr.Phoenix

Member
That doesn't change the point in the slightest. In fact, it supports it.

A similarly specced PC costs more because its static components are not mass produced the way a console's are. And more importantly, they are not subsidized by services like PSN/Xbox Live/Game Pass Core etc...

An x86 console is in fact a low-cost, lower-specced PC. A cost made even lower by the previous point.
You're mistaken... but I am too lazy to correct you. So I'll leave it at: let's agree to disagree.
 

Mr.Phoenix

Member
Going by the rumoured GPU specs, the PS5 Pro GPU is roughly equivalent to the RX 7700XT.
The 7700XT, according to ChatGPT-4o, has 54 TOPS.

If ChatGPT is correct, and the rumour of Sony using RDNA3 is correct, then the only way Sony is hitting 300 TOPS is via a very beefy AI co-processor.

Personally I think it's more likely Sony are using a modified RDNA 3 with RDNA 4 enhancements, or they are just using RDNA4.
The question is, what are those RDNA4 enhancements? RT cores from RDNA4? AI units from RDNA4? We don't know. Furthermore, the AI stuff could be some custom Sony stuff, just like how the I/O complex was custom Sony stuff on the PS5. The point is that Sony has shown that they are more likely to do very deep customizations in whatever they are doing with AMD than to just copy and paste components.

Hell, DF suggested as much during their analysis of the potential PS5 Pro. And they also chimed in on that 300 TOPS number. These are all rumors, but I'm just pointing out that 300 TOPS has been mentioned. The difference between 300 and 54 is so vast that it can't be a mistake.
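For anyone who wants to sanity-check that gap, here's the back-of-envelope maths. Every figure below is a rumour or a commonly quoted number, not a confirmed spec:

```python
# Back-of-envelope TOPS maths. All figures are rumours/commonly quoted
# numbers, not confirmed specs.
def int8_tops(cus, int8_ops_per_cu_per_clk, clock_ghz):
    # peak ops/s = CUs * ops per CU per clock * clock (Hz); 1e12 ops/s = 1 TOPS
    return cus * int8_ops_per_cu_per_clk * clock_ghz * 1e9 / 1e12

# RX 7700 XT-ish: 54 CUs at ~2.5 GHz, ~512 INT8 ops/CU/clk via RDNA3 WMMA
print(int8_tops(54, 512, 2.5))    # ~69 TOPS, same ballpark as the quoted 54

# To hit the rumoured 300 TOPS on a ~60 CU GPU at ~2.2 GHz you would need:
print(300e12 / (60 * 2.2e9))      # ~2273 INT8 ops/CU/clk, several times RDNA3's rate
```

Which is why, if both rumours are true, something beyond stock RDNA3 CUs (a co-processor or heavily customized AI units) has to be doing the work.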
 
All I know is we need a Pro badly by now... the PS5 can't keep up with the latest games, and once the next line of Nvidia GPUs comes out these consoles will look like a joke.

Games like Black Myth: Wukong and Star Wars Outlaws are not going to look anywhere near the PC versions. Hell, we're already in a situation where the consoles will never have a great version of certain games because the PS5/SX couldn't brute force their way to good settings - Cyberpunk 2077 and Alan Wake 2 are the prime examples. GTA6 will be a good one too.
 

IDWhite

Member
That's a far cry from you claiming they aren't using the PS5 streaming tech. The devs are having trouble with their engine keeping up with the hardware. This is a good thing as it is typically the other way around.

I'm not saying they don't use the hardware capabilities. What I have tried to say is that they are not taking advantage of everything the hardware is capable of.

You can understand this however you want. For me it is a missed opportunity.
 

MaulerX

Member
PC parts are not mass produced? You make it sound like nVidia GPUs or Intel CPUs are as rare as SGI hardware.

Consoles have a single motherboard where all components (CPU, GPU, DRAM) are soldered in (even the SSD on PS5).

CPU + GPU are unified (APU), DRAM is unified, while on PC you have discrete components with different RAM types (DDR vs GDDR). Hell, some people hate consoles due to DRAM latency.

You want PCs to have console-style economies of scale, and at the same time I see PCMR folks dissing consoles for not having upgradeable parts due to lack of a CPU socket, DIMM slots, PCIe slots etc. HDD/SSD is the only exception.

Not to mention PC gamers don't like having to pay for online MP (it's still shitty, I agree, but console devs always have the option to make their MP game F2P to maximize their potential audience and recoup the costs via cosmetic MTX).

You cannot have your cake and eat it too. Every ecosystem has trade-offs.



I think you misunderstood what I meant, but to argue that x86 consoles are not low-cost PCs is laughable.
 
I think you misunderstood what I meant, but to argue that x86 consoles are not low-cost PCs is laughable.
You're arguing against Cerny basically.

Btw, I've been saying these things since 2013 at least... it's good to see an official acknowledgement from the console architect himself.

And trust me, if we'd had mainstream internet access/Digital Foundry 35 years ago, many people would have argued that the Sega Genesis/Mega Drive was a low-end Amiga.

Good thing we only had magazines back then. ;)
 

mitch1971

Gold Member
Now, you've done it.

humor situation GIF by BestTech
 

Mr.Phoenix

Member
All I know is we need a Pro badly by now... the PS5 can't keep up with the latest games, and once the next line of Nvidia GPUs comes out these consoles will look like a joke.
I wonder if you know what you are talking about.
Games like Black Myth: Wukong and Star Wars Outlaws are not going to look anywhere near the PC versions. Hell, we're already in a situation where the consoles will never have a great version of certain games because the PS5/SX couldn't brute force their way to good settings - Cyberpunk 2077 and Alan Wake 2 are the prime examples. GTA6 will be a good one too.
I hope for the industry's sake, devs aren't settling on brute forcing their way through performance. Which, unfortunately, is what has been happening for a while. But that is not a good thing. The last thing any consumer should want is a world where the only way to get great-looking and performing games is to just throw more power at the problem.

This is something I have discussed before with someone here, but the issue we have with these current-gen consoles, and the industry at large in all honesty... is that these things are actually too powerful. Or, a better way to put it: powerful enough to be abused.

Think about it: GoT, TLOU2, GOW, The Order: 1886, etc. were all made to run, and look the way they did, on a 1.8TF console. I'll say that again... 1.8TF. And yet we have a good number of games on the current gen that are not even pushing past 1440p@30fps, but do not look as good as any of the games I just mentioned.

And yet people like you... think the issue is that 10TF+ consoles with all-around better architecture... are not powerful enough.
Would the PS5 have been better if Sony actually utilized the SSD the way they said they would instead of going multiplat and not being able to now?
What does this even mean? I read stuff like this and it's why I think to myself that people who say stuff like this have no idea what they are talking about.

Using the PS5 SSD the way it was designed to be utilized is exactly why the PS5 can get away with having only 12.5GB of usable RAM. Even with an SSD, a desktop PC would need around 16GB of CPU RAM and a GPU with upwards of 12GB of VRAM to match that. Or basically, you overcompensate in some other areas.

The point is, on a technical level, the SSD is doing exactly what it was intended to do. What you are probably thinking about are the gameplay experiences we get that are direct benefits of having that kinda SSD, but that has nothing to do with Sony. It's down to the devs. Sony has given them the tools; it's left to the devs to decide how they wanna use them.
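To put rough numbers on that RAM point (toy figures, in the spirit of Cerny's Road to PS5 argument about only needing the next second or so of gameplay resident):

```python
# Toy maths (assumed figures): how much upcoming content must sit in RAM
# "just in case" depends on how fast the drive can swap it back in.
def seconds_of_content_resident(drive_gb_s, chunk_gb=1.0):
    # Time to swap a 1 GB chunk of world data into RAM; you must keep at
    # least that many seconds of potential gameplay resident ahead of time.
    return chunk_gb / drive_gb_s

print(seconds_of_content_resident(0.05))  # HDD ~50 MB/s -> keep ~20 s of content in RAM
print(seconds_of_content_resident(5.5))   # PS5 SSD raw  -> keep ~0.2 s of content in RAM
```

That's the sense in which 12.5GB of unified RAM behind a fast SSD can stand in for a bigger, slower-refilled pool on PC.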
 

ChiefDada

Gold Member
Would the PS5 have been better if Sony actually utilized the SSD the way they said they would instead of going multiplat and not being able to now?

That's the problem with PC development...too many PC configurations so they can now only do so much with the i/o

PS5 gamers getting screwed this gen

In SM2, Insomniac is literally streaming animations off the SSD on a frame-by-frame basis, based on whatever maneuvers the player carries out as they swing and glide through the world. They are not compromising the PS5's I/O capabilities. The PC platform is the one that compromises (or will be compromised when the port eventually releases) in the form of sacrificed VRAM; high quality textures are first to go, followed by RT.
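To make that concrete, here's a hypothetical sketch of per-frame streaming (not Insomniac's actual code, just the shape of the technique — the clip names and the predictor are made up):

```python
# Hypothetical sketch of frame-by-frame streaming (NOT Insomniac's code):
# each frame, prefetch the clips the player's input makes likely on a
# background thread, and never block the frame on I/O.
from concurrent.futures import ThreadPoolExecutor

io_pool = ThreadPoolExecutor(max_workers=2)
resident = {}   # clip name -> bytes already in memory
pending = {}    # clip name -> in-flight read

def read_clip(name):
    return b"\x00" * 65536  # stand-in for an SSD read of the clip data

def predict_clips(player_input):
    # Stub: map input to animations that may play within a few frames.
    return {"swing_left", "swing_release"} if player_input == "swing" else {"glide_loop"}

def frame(player_input):
    for clip in predict_clips(player_input):
        if clip not in resident and clip not in pending:
            pending[clip] = io_pool.submit(read_clip, clip)
    for clip, fut in list(pending.items()):  # harvest whatever finished
        if fut.done():
            resident[clip] = fut.result()
            del pending[clip]
```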
 

simpatico

Member
In SM2, Insomniac is literally streaming animations off the SSD on a frame-by-frame basis, based on whatever maneuvers the player carries out as they swing and glide through the world. They are not compromising the PS5's I/O capabilities. The PC platform is the one that compromises (or will be compromised when the port eventually releases) in the form of sacrificed VRAM; high quality textures are first to go, followed by RT.
How much fidelity loss are you expecting on the PC version of SM2?
 

64bitmodels

Reverse groomer.
Compared to a Pentium 4? It's pretty much completely different. The PS5 has a regular Zen 2 which is the exact same CPU you can find in a PC.
Different, sure, but it achieves the same purpose and doesn't do anything particularly better. Even the CELL had its advantages.
 

ChiefDada

Gold Member
How much fidelity loss are you expecting on the PC version of SM2?

Depends on the PC build, of course. But the better question imo is what HW (CPU/GPU/VRAM/SSD) is necessary to match/exceed the PS5. Until GPUs get dedicated I/O hardware, significant CPU and GPU cycles are being used to brute force it, inherently allowing the PS5 to "punch above its weight" in the eyes of people who only think in terms of traditional CPU and GPU power.
 
The last PlayStation that wasn't a cheap PC in design was the PS3, and we all know how that went.
By following your logic it was half a PC (GeForce 7/RSX), half exotic (Cell).

You can argue PS2 was the last 100% exotic console by Sony...

Compared to a Pentium 4? It's pretty much completely different. The PS5 has a regular Zen 2 which is the exact same CPU you can find in a PC.
It depends on what you wanna do. Different doesn't necessarily mean "better", there are pros and cons in each approach.

Both EE and Cell were crappy general purpose CPUs (even compared to the Pentium 3 in the OG Xbox), but they were floating point beasts for their time. Different philosophy, different transistor budget allocation.

In-order CPU cores are not particularly fast in general purpose workloads... that's why Linux didn't run particularly fast on PS2/PS3 (let alone the small RAM). You can also see this in cheap octa-core mobile ARM SoCs where there's not a single OoO core and the Android experience is quite laggy.

The way I see it, hardware commoditization allows laymen to make easy comparisons (x86 vs x86). They treat PowerPC ISA as if it's something "superior", just because it's exotic compared to x86. Why? Because they cannot do a 1:1 comparison!

Zen CPUs don't have to be floating point beasts (AVX units on PS5 are a bit downgraded compared to PC, so not really the exact same CPU), because guess what, you have a beast GPU to perform GPGPU workloads much faster and a unified heterogeneous memory (hUMA) to share datasets among the CPU/GPU.

You think Cerny doesn't know these trade-offs and we know better than him?
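One concrete way to see the hUMA point (toy, assumed figures): on a discrete-GPU PC, handing the GPU a dataset means a copy across PCIe first; on a unified-memory console it doesn't.

```python
# Toy numbers for the hUMA point: on a discrete-GPU PC, handing a dataset
# from CPU to GPU means a copy across PCIe; on a unified-memory console the
# GPU reads the same physical RAM. Figures below are assumptions.
def pcie_copy_ms(dataset_gb, pcie_gb_s=16.0):  # ~PCIe 4.0 x16 practical rate
    return dataset_gb / pcie_gb_s * 1000

print(pcie_copy_ms(0.5))  # ~31 ms to move 512 MB, i.e. two whole frames at 60 fps
# On the PS5's unified GDDR6 pool, the "transfer" is just passing a pointer.
```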
 

Zathalus

Member
It depends on what you wanna do. Different doesn't necessarily mean "better", there are pros and cons in each approach.

Both EE and Cell were crappy general purpose CPUs (even compared to the Pentium 3 in the OG Xbox), but they were floating point beasts for their time. Different philosophy, different transistor budget allocation.

In-order CPU cores are not particularly fast in general purpose workloads... that's why Linux didn't run particularly fast on PS2/PS3 (let alone the small RAM). You can also see this in cheap octa-core mobile ARM SoCs where there's not a single OoO core and the Android experience is quite laggy.

The way I see it, hardware commoditization allows laymen to make easy comparisons (x86 vs x86). They treat PowerPC ISA as if it's something "superior", just because it's exotic compared to x86. Why? Because they cannot do a 1:1 comparison!

Zen CPUs don't have to be floating point beasts (AVX units on PS5 are a bit downgraded compared to PC, so not really the exact same CPU), because guess what, you have a beast GPU to perform GPGPU workloads much faster and a unified heterogeneous memory (hUMA) to share datasets among the CPU/GPU.

You think Cerny doesn't know these trade-offs and we know better than him?
I didn't claim any approach was better, simply pointing out that the PS2 was quite a bit more different than PCs of its time, compared to the PS5 and PCs of today.
 

RobRSG

Member
By following your logic it was half a PC (GeForce 7/RSX), half exotic (Cell).

You can argue PS2 was the last 100% exotic console by Sony...
The selection of XDR memory was very exotic at the time as well, along with the BD drive, which we take for granted today but was nowhere near a cheap component.
 
The selection of XDR memory was very exotic at the time as well, along with the BD drive, which we take for granted today but was nowhere near a cheap component.
Yes, but GDDR3 for RSX was not that exotic... even Wii had it.

If I had to play devil's advocate, I'd say PCs had Rambus long before the PS3:



There's a reason XDR was abandoned in favor of higher-clocked SDRAM.

Regarding optical media, we could have had a BD-ROM successor:


It's a shame tech progress stagnated on purpose (due to the rise of digital stores and streaming).
 

RobRSG

Member
Yes, but GDDR3 for RSX was not that exotic... even Wii had it.
IIRC Crazy Ken wanted the PS3 to rely 100% on CELL, even for the graphics, and the RSX was added very late in the R&D.

I’d pay to see what the hell would happen with Cell as CPU and GPU, or even a second Cell to act as a GPU.

In the end, the PS3 was the last creation of the mad man!
 

Audiophile

Member
For the PS3, they actually originally planned on a "GPU" chip simply called "RS". It would be fed vertices by the Cell BE, and its job was all pixels; much more akin to the PS2, plus there'd be another element which kept everything in synchronisation. But it didn't work out, I think due to die size or cost or something... Plan B was two Cells, which also didn't happen. The last minute and reluctant Plan C was the Nvidia GPU, which was then dubbed "RSX" as we know.

Only got a moment now and can't find it, but I'll look for the quote by the Insomniac employee (who was also working on the PS3 early on) which detailed it more and add it in a bit...

EDIT:

Scratch that, just had a brainwave and found it:

Rob Wyatt:
"One thing to remember when it comes to the SPUs is they were made for a completely different purpose to what they were used for. When I started working on the PS3, I was in Japan, it was probably 2002 and I was splitting time between the PS3 and working on Ratchet on PS2. For the PS3 the RSX wasn't in the picture for maybe another year or so.

The original PS3 design had a Sony designed GPU called the RS but it only did pixels, it was also kind of complicated as you had to schedule all the threads yourself. The SPUs were intended to feed the RS with transformed vertices, in a similar manner to how the PS2 worked, and if you look at how the SPU DMA works then processing vertices is an almost perfect use case. The intended design was you'd be able to do fantastically complex vertex processing, with programmable nodes for skeleton joints, because the SPUs were not just stream processors (like vertex processors still are). There was a device called the LDPCU and to this day I'm not 100% sure how it worked, it had 1500 pages of documentation in Japanese and only Mark Cerny could read it. It was basically a gate keeper and synchronization system that would allow the SPUs to process and complete vertex batches out of order but still have the RS/GPU render them in order. We never really used it because we didn't know how to get it to work from what Mark told us, and by the nature of how simple our tests were - I'm pretty sure it would have been a total nightmare...

So what happened was the RS was too big, in silicon terms, to make and it wasn't really possible to optimize down to a reasonable size without significantly gutting it, and if they gutted it then it wouldn't have competed with the Xbox. At this point Sony were stuck between a rock and a hard place, they looked at putting multiple Cells in the console and software rendering (I actually wrote a prototype software renderer, in 100% hand-paired asm, that would run across multiple SPUs - ultimately it was a proof of concept of what not to do), they looked at stacking a bunch of PS2 style GPUs together to make a pseudo programmable blend stack. Ken Kutaragi did not want to give up and go to Nvidia or AMD/ATI but in the end he had no choice, it's a good job he did because how terrible would the PS3 have been if the SPUs were used for graphics and games had just the single PowerPC core??

Once the RSX showed up and it could do vertex processing the SPUs had no job. This is when the ICE team started looking at using the SPUs for other tasks, it was a massive exercise in data design. If you started from scratch you could design a system for physics, audio, AI, particles - whatever - and it would be very fast because you could factor in the constraints of the SPU memory. However, if you started with existing code or cross platform code, then it was next to impossible to get the SPUs to do anything useful. Initially this resulted in huge variance in quality between first party and third party games. This was also the time frame when fewer and fewer studios were willing to write an engine from scratch and things like Unreal Engine were getting very popular, it was UE3 at the time, and it ran like crap on the PS3 but ran awesome on the Xbox and PC. Ultimately, the negative developer feedback cut through the arrogance that was present at the time within Sony (and Ken himself) and the PS4 was intentionally designed to be PC-like (and was done by Mark)
."

Kinda wish they managed to pull off the original plan, could've been a whole lot more interesting.
 
For the PS3, they actually originally planned on a "GPU" chip simply called "RS". It would be fed vertices by the Cell BE, and its job was all pixels; much more akin to the PS2, plus there'd be another element which kept everything in synchronisation. But it didn't work out, I think due to die size or cost or something... Plan B was two Cells, which also didn't happen. The last minute and reluctant Plan C was the Nvidia GPU, which was then dubbed "RSX" as we know.

Only got a moment now and can't find it, but I'll look for the quote by the Insomniac employee (who was also working on the PS3 early on) which detailed it more and add it in a bit...

EDIT:

Scratch that, just had a brainwave and found it:



Kinda wish they managed to pull off the original plan, could've been a whole lot more interesting.
Very interesting read, thanks!

I'm curious to know if the custom RS GPU had the equivalent of pixel shader pipelines or if it had fixed-function pipelines (like PS2 GS, 3Dfx Voodoo, Riva TNT).

I'm asking because without pixel shader support, ND games wouldn't have that distinct cinematic look. They would feel more cartoony (like Jak & Daxter).

Also, I don't believe RSX was capable of processing vertices alone with only 8 vertex pipelines, so it totally made sense to use Cell SPUs for vertex assistance.

There's also no harm in using it for physics or audio processing, considering the fact PCs had PhysX and EAX back then.

And let's not forget that 48 MPEG2 video decompression demo:


Cell was made from the get-go to handle various computing tasks (except pixel shaders and a few other things).

The Cell-only PS3 doesn't make much sense; it sounds too much like Larrabee in a way, and it would have killed PlayStation. GPUs excel at graphics workloads because they have dedicated circuitry (ROPs, TMUs etc).

Even Intel changed their approach with Xe GPUs, so that tells me a lot.
 

Zathalus

Member
He was merely switching from one class build save file - that was complete - to another, and it took that long. After he died it took 15-20 seconds on the first respawn, and 10 or so on respawns after that, so it could be that the save file cache was old and needed regenerating, or the drive section was untrimmed, maybe even shaders recompiling - even though he hadn't updated drivers recently - but either way, the PS5 won't ever have a 3 min load in Elden Ring, for something that should take 10 secs, because of the I/O complex and single-SKU precompiled shaders.
Elden Ring on PC doesn't precompile shaders though. Not on first launch or any point of the game. I couldn't replicate any of that on PC, even switching between multiple different characters in different locations. Respawning is quick as well, and the longest load I saw was just over 6 seconds. That was on both a laptop Zen 3 and my desktop 13900k.

Actually I can't think of any single recent game that takes over 30 seconds to load. Maybe late game Civ 6 or something.
 

PaintTinJr

Member
Elden Ring on PC doesn't precompile shaders though. Not on first launch or any point of the game. I couldn't replicate any of that on PC, even switching between multiple different characters in different locations. Respawning is quick as well, and the longest load I saw was just over 6 seconds. That was on both a laptop Zen 3 and my desktop 13900k.

Actually I can't think of any single recent game that takes over 30 seconds to load. Maybe late game Civ 6 or something.
I'm guessing it was the SSD trim issue and relocating data, or overprovisioning being full from a different game he'd played more recently than Elden Ring.

I see the trim issue with SSDs on my kids' PCs frequently, even with daily auto trim set (and they're all Pro OS SKUs too). For whatever reason the drives frequently report 20-40 days since last trimmed when I manually check - usually after noticing a lag in the PC's responsiveness for a basic desktop action. That's certainly not an issue consoles have, because auto drive maintenance is completely invisible to the end user, and repeatable performance is standard on a working console.
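For anyone who wants to check their own drives, a small hypothetical helper (Windows-only; both commands are stock Windows tools, run elevated):

```python
# Hypothetical helper (Windows-only, run as administrator): check whether
# TRIM/delete notification is enabled, then request a manual retrim of C:.
import subprocess

# Prints DisableDeleteNotify = 0 when TRIM is enabled, 1 when disabled.
subprocess.run(["fsutil", "behavior", "query", "DisableDeleteNotify"], check=True)

# /L performs a retrim - the same operation the scheduled "Optimize Drives"
# task runs on SSDs.
subprocess.run(["defrag", "C:", "/L"], check=True)
```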
 

Zathalus

Member
I'm guessing it was the SSD trim issue and relocating data, or overprovisioning being full from a different game he'd played more recently than Elden Ring.

I see the trim issue with SSDs on my kids' PCs frequently, even with daily auto trim set (and they're all Pro OS SKUs too). For whatever reason the drives frequently report 20-40 days since last trimmed when I manually check - usually after noticing a lag in the PC's responsiveness for a basic desktop action. That's certainly not an issue consoles have, because auto drive maintenance is completely invisible to the end user, and repeatable performance is standard on a working console.
Huh, can't say I've had any OS issues with Trim. I don't actively check, but looking now, Trim was done in the past day for all three of my drives. That's on Windows 11 Pro; can't say how any specific Linux distro handles it, or macOS.
 

Togh

Member
Next up is the Nintendo PR saying that Nintendo consoles are not low cost smartphone hardware lmao
 

Melfice7

Member
This used to be true until the GameCube and PS3; after that they became exactly that, low-cost PCs with a couple of custom things. Oh, and Xboxes were always straight-up PCs, ever since the original.
 

MacReady13

Member
I hope for the industry's sake, devs aren't settling on brute forcing their way through performance. Which, unfortunately, is what has been happening for a while. But that is not a good thing. The last thing any consumer should want is a world where the only way to get great-looking and performing games is to just throw more power at the problem.

This is something I have discussed before with someone here, but the issue we have with these current-gen consoles, and the industry at large in all honesty... is that these things are actually too powerful. Or, a better way to put it: powerful enough to be abused.

Think about it: GoT, TLOU2, GOW, The Order: 1886, etc. were all made to run, and look the way they did, on a 1.8TF console. I'll say that again... 1.8TF. And yet we have a good number of games on the current gen that are not even pushing past 1440p@30fps, but do not look as good as any of the games I just mentioned.

And yet people like you... think the issue is that 10TF+ consoles with all-around better architecture... are not powerful enough.
Absolutely 10000000% correct. This is why (one of the main reasons) our once great industry is in the shit state it is in now.
 
Next up is the Nintendo PR saying that Nintendo consoles are not low cost smartphone hardware lmao
Tegra X1 was not designed to run Zelda BotW/TotK or Splatoon 2-3, and yet here we are. nVidia didn't design Tegra X1 to be a portable Wii U; they didn't have Nintendo in mind (this could change with Switch 2, we'll see).

Just like Cell wasn't designed to be paired with RSX.

Does it really matter though? Is it a matter of technological "purism"?

Some of the best games I can remember are on Switch and the PS3 (Uncharted 1-2-3, TLOU1).
 

sachos

Member
The fact that he called out Linus' clickbait video is so amusing to me; I can imagine Cerny resisting the urge to comment in that video's comment section lol. He also seems to read the forums - that means when the PS6 goes full RT+AI, it's because he read our comments here, right? Right?

"I think as long as we continue to create that very nice package, the future of consoles is pretty bright." Yeah, it seems consoles are not dead.

He also seems to be surprised by the focus on 60fps games and the use of RT in games; maybe he's preparing the talking points for the PS5 Pro reveal?
 

Seider

Member
All I know is we need a Pro badly by now... the PS5 can't keep up with the latest games, and once the next line of Nvidia GPUs comes out these consoles will look like a joke.

Games like Black Myth: Wukong and Star Wars Outlaws are not going to look anywhere near the PC versions. Hell, we're already in a situation where the consoles will never have a great version of certain games because the PS5/SX couldn't brute force their way to good settings - Cyberpunk 2077 and Alan Wake 2 are the prime examples. GTA6 will be a good one too.
PS5 is doing a lot better than PS4 did in terms of performance. There are a lot of 60 fps games released on PS5. Many more than on PS4.

So I totally disagree with your opinion.
 