
PlayStation 4 hUMA implementation and memory enhancements details - Vgleaks

So with a smaller percentage of resources used for rendering, say, water waves, etc., I just don't see a case where multiplatform games won't use this extra juice, since they already do this with certain PC ports. Does this mean that extra oomph on one version may become the norm for consoles, or do they not have the extra manpower/time to do so?
 

MoneyHats

Banned
PS4 is more powerful but you'll only see a difference in first party games probably.

Just like between the PS2 and the GC/Xbox multip- oh wait...


Haha, cute, but the difference between PS2 and Xbox was friggin' enormous, and understandable since they were practically two years apart in technology.

PS2 vs Xbox

FLOPS: 6.2 GFLOPS vs 21.6 GFLOPS (~250% difference)

Not to mention double the RAM: that's right, 2x the memory pool and 2x the bandwidth. It's almost ridiculous. The PS2's architecture was comparable to a DX6-class GPU, with no hardware AA or bump mapping of any kind, while the Xbox had the first programmable-shader GPU, compliant with DX8.1, capable of normal mapping, Phong shading, real-time lighting and cube mapping.

Even the difference between GameCube and Xbox is larger than Xbox One vs PS4. Let's take a look.

GameCube vs Xbox

FLOPS: 10.5 GFLOPS vs 21.6 GFLOPS (~100% difference)

Also, the Xbox had considerably more RAM: 64MB vs 40MB for the GameCube. The GameCube GPU was quite a bit more advanced than the PS2's, but still closer to the DX7 family of GPUs, with hardware AA and bump mapping but no programmable shaders, a feature still used on modern GPUs today, which the Xbox had thanks to its GeForce 3-derived GPU.

Conclusion:

Difference between PS4 and Xbox One < difference between Xbox and GameCube.
 
 

TheExodu5

Banned
I wonder what the performance implications are of having data invalidated by the GPU while the CPU is working on it.

I wonder if you could ever get into a situation where the CPU is entirely prevented from working with a piece of data because the GPU keeps invalidating it. (or vice-versa)

Or maybe there's a queuing system of sorts.
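
For what it's worth, the usual answer on shared-memory systems is some kind of hand-off like that: the two sides agree on who owns a buffer at any given moment, so neither one keeps invalidating work the other is in the middle of. A toy sketch of the idea, using two CPU threads as stand-ins for the CPU and GPU (purely illustrative, nothing to do with the actual PS4 API):

#include <array>
#include <atomic>
#include <cstdio>
#include <thread>

// One shared buffer plus an atomic ownership flag. Whoever owns the buffer
// may touch it; the other side waits. Ownership is handed back and forth,
// so neither side can invalidate data the other is still working on.
enum Owner { CPU, GPU };

std::array<float, 4> waveData{};        // shared data, never copied
std::atomic<Owner>   owner{CPU};

void cpuSide() {                        // stand-in for the simulation step
    for (int frame = 0; frame < 3; ++frame) {
        while (owner.load(std::memory_order_acquire) != CPU) {}  // wait for ownership
        waveData.fill(static_cast<float>(frame));                // update in place
        owner.store(GPU, std::memory_order_release);             // hand off
    }
}

void gpuSide() {                        // stand-in for the rendering step
    for (int frame = 0; frame < 3; ++frame) {
        while (owner.load(std::memory_order_acquire) != GPU) {}  // wait for ownership
        std::printf("frame %d: drawing wave height %.1f\n", frame, waveData[0]);
        owner.store(CPU, std::memory_order_release);             // hand back
    }
}

int main() {
    std::thread sim(cpuSide), render(gpuSide);
    sim.join();
    render.join();
}

In practice you'd double-buffer or queue work instead of spinning, but the point stands: with an agreed hand-off, the "keeps invalidating it forever" scenario can't happen.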
 

pompidu

Member
It doesn't help the CPU power in the traditional sense (like making it faster, for instance), but it helps with redundancies, and this, combined with GPU compute, will allow the CPU to have more cycles to do what it does best: work on the more intense processes (like complex AI, physics, etc.).


An incredibly simple example that will make Durante yell at me (which he probably will anyway) is their water example above. Let's say we're making a jet ski game. Typically the water has to be handled in a straight line, copying the data back and forth between the CPU and GPU: the CPU says "the jet ski is making a wake" and the GPU draws the waves coming off of it. This system allows both to be done simultaneously, without swapping back and forth, so it's happening in parallel instead of in a straight line.

So what does that mean? Well, if our jet ski on a non-hUMA product takes 20% of the available resources to render the water and the real-time waves created by the jet skis, it may now (and I'm pulling a number out of my ass here, but the % isn't the point) take 10%. So that's 10% "extra" that they have. Once you add everything up... the AI, the lighting, all of the animations, the graphical effects, particles, etc... you have more overhead to add more, because of the cycles you saved on the water rendering using hUMA. So the physics may be even more in-depth... or the water particle effects may be EXTRA crazy.

So, yeah, it doesn't make the processor more powerful, but it makes the entire system more efficient, which in turn makes games better. To the end user it will feel like more power, but it's not literally more power. Upclocking would *literally* be more power, but hUMA has nothing to do with that.

Pretty much this. But they're both (CPU and GPU) looking at the same data in memory, without having to copy the data back and forth.
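
Roughly, in code terms, the difference looks something like this. Everything below (WakeState, uploadToGpuMemory, and so on) is made up purely to illustrate the idea; it isn't any real PS4 API, just the copy-every-frame pattern versus both sides touching one allocation:

#include <cstring>

// Toy model of the jet ski example. WakeState and uploadToGpuMemory() are
// invented names for illustration only.
struct WakeState { float height[1024]; };

// Split-memory flow: simulate, then copy the result into GPU-visible memory.
void uploadToGpuMemory(const WakeState& src, WakeState& gpuCopy) {
    std::memcpy(&gpuCopy, &src, sizeof(WakeState));    // the per-frame copy that costs cycles
}

void frameWithCopy(WakeState& cpuSide, WakeState& gpuSide) {
    for (float& h : cpuSide.height) h += 0.1f;         // CPU: "the jet ski is making a wake"
    uploadToGpuMemory(cpuSide, gpuSide);               // serial hand-off to the GPU's copy
    // ...GPU then draws from gpuSide
}

// Unified (hUMA-style) flow: one allocation, no per-frame copy, only sync.
void frameShared(WakeState& shared) {
    for (float& h : shared.height) h += 0.1f;          // CPU writes in place
    // ...GPU reads the very same memory; the copy step simply disappears
}

int main() {
    WakeState cpu{}, gpu{}, shared{};
    frameWithCopy(cpu, gpu);    // old model
    frameShared(shared);        // hUMA-style model
}

The work itself doesn't change; what disappears is the copy and the waiting around it, which is where that "extra" percentage in the jet ski example comes from.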
 

Raonak

Banned
So with a smaller percentage of resources used for rendering, say, water waves, etc., I just don't see a case where multiplatform games won't use this extra juice, since they already do this with certain PC ports. Does this mean that extra oomph on one version may become the norm for consoles, or do they not have the extra manpower/time to do so?

The thing that a lot of multiplatform devs are doing is building for PC, then downporting, so there should be no extra cost to having better effects on PS4 than xbone.
 
I am just going to come out and say it:

I don't have the foggiest clue what hUMA is.

Can someone please explain it to me like you would to a little child? Thank you!
 

badb0y

Member
Haha, cute, but the difference between PS2 and Xbox was friggin' enormous, and understandable since they were practically two years apart in technology.

PS2 vs Xbox FLOPS: 6.2 GFLOPS vs 21.6 GFLOPS (~250% difference)
GameCube vs Xbox FLOPS: 10.5 GFLOPS vs 21.6 GFLOPS (~100% difference)

Difference between PS4 and Xbox One < difference between Xbox and GameCube.

Nice try, but none of the consoles you mentioned were similar in architecture, so taking their GFLOPS numbers and drawing a conclusion from them is wrong.

PS4 and Xbox One, on the other hand, have the same architecture, implemented in different ways.
 

TheExodu5

Banned
I am just going to come out and say it:

I don't have the foggiest clue what hUMA is.

Normally, memory is divided so that the CPU and GPU each have access to their own specific portion of the memory. When a piece of data belongs to the GPU and the CPU wants to work on it, it needs to be copied to the other portion of memory, and vice versa. Only one component has access to a given piece of data, so there are no conflicts. The downside is that it takes time and management to copy data from one portion of memory to the other.

In the case of hUMA, both the CPU and GPU can access all of the memory; there is no subdivision. The caveat is that if, for example, the GPU changes a piece of data while the CPU is working on it, the CPU's view of that data is out of sync and becomes invalid. At that point all the work the CPU did on that piece of data needs to be flushed, and the CPU needs to grab the up-to-date data and start again. The same goes for the inverse (if the CPU changes data while the GPU is working on it).
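
To put a rough number on how much the copying in that first model can cost, here's a back-of-the-envelope sketch. The 32MB buffer size is just an assumption I picked, the 176GB/s figure is the PS4's quoted GDDR5 bandwidth, and it ignores latency and contention entirely, so treat it as illustration only:

#include <cstdio>

// Back-of-the-envelope cost of the copy described above, using assumed
// numbers: a 32 MB shared buffer and 176 GB/s of memory bandwidth.
// Copying generates traffic in both directions (read + write).
int main() {
    const double bufferBytes    = 32.0 * 1024 * 1024;  // 32 MB of shared data (assumption)
    const double bandwidth      = 176.0e9;             // bytes per second
    const double frameBudget    = 1.0 / 60.0;          // seconds per frame at 60 fps

    const double copySeconds    = 2.0 * bufferBytes / bandwidth;   // read + write
    const double percentOfFrame = 100.0 * copySeconds / frameBudget;

    std::printf("one 32 MB copy: %.3f ms (%.1f%% of a 60 fps frame)\n",
                copySeconds * 1000.0, percentOfFrame);
}

Do that for a handful of buffers every frame, in both directions, and the saved time adds up; that's where the "freed-up cycles" everyone is talking about come from.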
 

Violater

Member
No worries

I just find that, like in that other recent thread with DBZ power levels, two out of three posts are about that, and it tends to get out of hand.

I do so love DBZ, but it doesn't add much to the conversation.

Personally, I find all the numbers meaningless until we see how the tech is being used.
It was a long while before the PS3 finally hit its stride; I hope it doesn't take as long for this upcoming generation of console games.
 
I think if you analyze the gigaflops versus the mega jewels you'll begin to understand the significance of GDDR 8 as opposed to L1 and L2. Since the APU and the CPU will not have to talk to the GPU, the Xbox one cloud is rendered null and void. This is very significant, as we will see in 3rd party games after year 2 and 3.

The one mistake Microsoft made with the APU is the esRAM and how that will interact with the ladjfa;lsdjfa;sldkjfaos;dfijsdflaskdjfal;sfjka;sldfkj......
 
So with a smaller percentage of resources used for rendering, say, water waves, etc., I just don't see a case where multiplatform games won't use this extra juice, since they already do this with certain PC ports. Does this mean that extra oomph on one version may become the norm for consoles, or do they not have the extra manpower/time to do so?




Well, the problem is that PC ports usually use the extra oomph in the form of brute force. I'll try to explain (though the last time I did, Durante yelled at me... is it obvious that I'm afraid of Durante?).


When designing a game for the PC you aren't building for a specific system; you're building for an insanely wide array of systems. If you're building an Xbox 360 game you know exactly what to expect: each DVD drive is the same speed, each hard drive is the same speed, each GPU and CPU is the same, each has the same amount of RAM, etc. You get it. On the PC, I personally have a fairly modest 3.6GHz i7, a 650 Ti and 8GB of RAM. Super PC Gamer X has a better processor, a $1000 GPU, 16GB of RAM, etc. And then Casual PC Gamer Y has a laptop with a 2.2GHz i5 (a year-old mobile processor) and 4GB of RAM. A developer wants the game to run on all of these systems.

The issue is that since a PC developer needs to worry about Gamer Y's laptop, Gamer X's supercomputer isn't programmed for specifically. Things are generally just scaled up. If you have the extra processor speed/better RAM/better HDD you'll load much faster; if you have the better GPU you can turn on more effects. The games obviously look much, much better on Gamer X's supercomputer... but they don't look nearly as good as they would if the developer said "fuck Gamer Y, fuck Mortimer, I'm making a game specifically for Gamer X's system" and worked to the strengths of the system as a whole instead of the brute force of it just being faster.


That's why the consoles, modest in terms of PC specs, will make great-looking games. The developers will learn these systems and work specifically with what they do well. Meanwhile, there may be amazing features in my 650 Ti that never, ever get used, because there just isn't a reason for a PC developer to hone in on that one card.


It may sound like I'm shitting on PC gaming, but I'm not. PCs will always produce the best-looking games because of their ability to scale upwards. Even though your graphics card won't have every trick inside it exploited the way a console's will, it will still produce incredible graphics while it's relevant. By the time the PS4 launches my current PC will be two years old, and it will still run multiplats like BF4 better than the PS4 will.


This is why Crysis on the PC was such an amazing-looking game, though. It actually targeted high-end systems at the time. For years afterwards, when you got a PC the question was "OK, but how well does it run Crysis?" They programmed to the strengths of high-end PCs at the time, and it stayed in the category of amazing-looking games for years because of it.


But back to your question... this isn't anything you can brute force. This doesn't make the CPU or GPU more powerful. It will take time and expertise to exploit it. Most people, including Sony PR/Cerny, think it will be a couple of years at least. But once these tools get worked into the SDK, I think you will see them used to some extent in most games. But we are years away from that.
 
Well, the problem is that PC ports usually use the extra oomph in the form of brute force. I'll try to explain (though the last time I did, Durante yelled at me... is it obvious that I'm afraid of Durante?).

But back to your question... this isn't anything you can brute force. This doesn't make the CPU or GPU more powerful. It will take time and expertise to exploit it. Most people, including Sony PR/Cerny, think it will be a couple of years at least. But once these tools get worked into the SDK, I think you will see them used to some extent in most games. But we are years away from that.
I see... so we have to wait a while.
 

MoneyHats

Banned
Nice try, but none of the consoles you mentioned were similar in architecture, so taking their GFLOPS numbers and drawing a conclusion from them is wrong.

PS4 and Xbox One, on the other hand, have the same architecture, implemented in different ways.

You got it backwards, buddy: the GameCube architecture was older and more archaic and had no programmable shaders, which would make the difference in performance even LARGER than the flop numbers would have you believe, since it's missing functionality at the architecture level. It's even worse for the PS2, which was missing even more key architectural features, like hardware bump mapping. In contrast, both the PS4 and the One are DX11-class hardware.
 

Aureon

Please do not let me serve on a jury. I am actually a crazy person.
This sounds cool as fuck.
I want to develop for a PS4 now.
 

Prelude.

Member
Haha, cute, but the difference between PS2 and Xbox was friggin' enormous, and understandable since they were practically two years apart in technology.

PS2 vs Xbox FLOPS: 6.2 GFLOPS vs 21.6 GFLOPS (~250% difference)
GameCube vs Xbox FLOPS: 10.5 GFLOPS vs 21.6 GFLOPS (~100% difference)

Difference between PS4 and Xbox One < difference between Xbox and GameCube.
This is even cuter but has nothing to do with the point.

Did the devs take advantage of the substantial extra power of the Xbox? Yes. That's it.
 

Prelude.

Member
Have you played Splinter Cell: Chaos Theory on Xbox, for instance? :)
No, I don't have an Xbox and I don't know about the game.
But even if it's a crappy port it doesn't mean that they didn't take advantage of the better specs in several other multiplats.

What I'm saying is that, given the similarities between the PS4 and the Xbox One, developers would have to deliberately gimp the PS4 version in order to achieve the exact same results on both consoles (launch games excluded, obviously), and that's not going to happen.
Of course I'm not expecting Naughty Dog levels of mastery of the console from Ubisoft or EA, but you get the point.
 

Kimppis

Member
No, I don't have an Xbox and I don't know about the game.
But even if it's a crappy port it doesn't mean that they didn't take advantage of the better specs in several other multiplats.

I can "see" that you don't have an Xbox. No offence.

Check this out: http://www.youtube.com/watch?v=HkHoDEh0FWc

And I think the PS2 version wasn't a crappy port, as I have both versions. It was a solid-looking PS2 game; the Xbox was just clearly the more powerful machine. You can't deny that.

But yeah, you surely have a point, especially since the power difference is not that big this time around. But the Splinter Cell games were not the only (even third-party) titles that pushed the Xbox hardware. The fact that the PS4 is more powerful than the X1 probably means that we will see some differences in quite a few games, at least by the end of the generation. The problem is that it's not clear what the actual difference is, as both systems have a number of tweaks in their hardware.

EDIT: LOL, I just noticed that I misread/messed up the quotes: "Did the devs take advantage of the substantial extra power of the Xbox? Yes." Yeah, exactly... Anyway, I have to go to sleep and probably take some more English classes. You quoted a post that you pretty much agreed with, so I thought you disagreed that the Xbox was more powerful. Apologies. :p
 

Prelude.

Member
I can "see" that you don't have an Xbox. No offence.

Check this out: http://www.youtube.com/watch?v=HkHoDEh0FWc

And I think the PS2 version wasn't a crappy port, as I have both versions. It was a solid-looking PS2 game; the Xbox was just clearly the more powerful machine. You can't deny that.
But I was talking about the Xbox version when I said "crappy port".

Since I said that when devs have a significantly more powerful platform they take advantage of it, and you replied with Splinter Cell: CT, I assumed that the Xbox version was a half-assed port of the PS2 version that didn't show the difference in specs. I honestly don't know shit about the series.

EDIT: LOL, I just noticed that I misread/messed up the quotes: "Did the devs take advantage of the substantial extra power of the Xbox? Yes." Yeah, exactly... Anyway, I have to go to sleep and probably take some more English classes. You quoted a post that you pretty much agreed with, so I thought you disagreed that the Xbox was more powerful. Apologies. :p
lol, don't worry.
 
You got it backwards, buddy: the GameCube architecture was older and more archaic and had no programmable shaders, which would make the difference in performance even LARGER than the flop numbers would have you believe, since it's missing functionality at the architecture level. It's even worse for the PS2, which was missing even more key architectural features, like hardware bump mapping. In contrast, both the PS4 and the One are DX11-class hardware.

lol, it's like you didn't even read a single word of his post. Let me take a stab at it.

FLOPS + TWO DIFFERENT ARCHITECTURES = A BIG NO-NO

It's just like comparing an AMD GPU to an Nvidia GPU using flops. It doesn't work. Neither does the PS2 vs Xbox flop comparison.
 

IN&OUT

Banned
Haha, cute, but the difference between PS2 and Xbox was friggin' enormous, and understandable since they were practically two years apart in technology.

PS2 vs Xbox FLOPS: 6.2 GFLOPS vs 21.6 GFLOPS (~250% difference)
GameCube vs Xbox FLOPS: 10.5 GFLOPS vs 21.6 GFLOPS (~100% difference)

Difference between PS4 and Xbox One < difference between Xbox and GameCube.

Can you link where you found those numbers? I can't seem to find them anywhere.


EDIT: OK, your point is that a 50% power difference between the PS4 and X1 is negligible. But what about the difference between the GTX 670 and the GTX 770? There's a 50% difference there, which everybody seems pleased with.
 

Biker19

Banned
PS4 is more powerful but you'll only see a difference in first party games probably.

3rd party publishers/developers should have absolutely no excuse as to why they can't get the most out of PS4's hardware in terms of graphics, resolutions, & framerates.

It's not exactly like the PS3 where it's hard to get the most out of the console due to the cell architecture.
 
3rd party publishers/developers should have absolutely no excuse as to why they can't get the most out of PS4's hardware in terms of graphics, resolutions, & framerates.

It's not exactly like the PS3 where it's hard to get the most out of the console due to the cell architecture.

Market share of the product also plays a big role in whether the multiplat version on the more powerful hardware will get extra eye candy... for the mainstream top games, that is.
 

Kleegamefan

K. LEE GAIDEN
Well, the problem is that PC ports usually use the extra oomph in the form of brute force. I'll try to explain (though the last time I did, Durante yelled at me... is it obvious that I'm afraid of Durante?).

But back to your question... this isn't anything you can brute force. This doesn't make the CPU or GPU more powerful. It will take time and expertise to exploit it. Most people, including Sony PR/Cerny, think it will be a couple of years at least. But once these tools get worked into the SDK, I think you will see them used to some extent in most games. But we are years away from that.



You are pretty damn good at this :)
 
Most games will be built on PC and then downscaled to the PS4 and Xbone. They will just have to be downscaled even further on the Xbone because of the weaker hardware. How much or how little difference it will make, I'm not qualified to say. But the difference will be there, and it won't require any extra effort from devs.

For people who keep bringing up PS3/360 multiplats as a counter, saying "the PS3 had more power and it didn't matter!", it actually works as more of a support if anything. The 360 had a superior GPU, more RAM to play with, and in some ways the more appropriate CPU. And almost all multiplatform games reflected those advantages.
 

Elios83

Member
Haha, cute, but the difference between PS2 and Xbox was friggin' enormous, and understandable since they were practically two years apart in technology.

PS2 vs Xbox FLOPS: 6.2 GFLOPS vs 21.6 GFLOPS (~250% difference)
GameCube vs Xbox FLOPS: 10.5 GFLOPS vs 21.6 GFLOPS (~100% difference)

Difference between PS4 and Xbox One < difference between Xbox and GameCube.

This comparison is kinda misleading: you're basing everything on the NV flops figure, which is not a fair thing to do. First of all, Nvidia also included non-programmable operations in that figure. But beyond that, the PS2 still managed to have a few advantages over the Xbox hardware, despite being hardware whose development was completed in 1999: it had a much higher fill rate and much higher bandwidth (almost incredible for the time) for data cached in the embedded RAM, which allowed for better performance with particle effects, multi-pass effects (motion blur, depth of field) and transparencies, and made it easier to hit 60fps.
The Xbox's key advantages were double the RAM and a GPU with a modern feature set, which led to higher resolution, higher texture resolution, anti-aliasing, advanced texture effects like bump mapping, anisotropic filtering, and some rudimentary shaders.
The situation between Xbox One and PS4 is kinda different and unlike anything in the past. The architecture is almost the same, but one has a much stronger GPU with 40% extra flops and double the ROPs, plus an easier-to-use memory system with more bandwidth. In the past even the globally inferior hardware could do something better; this time, no. There's a clear winner.
 

Yoday

Member
3rd party publishers/developers should have absolutely no excuse as to why they can't get the most out of PS4's hardware in terms of graphics, resolutions, & framerates.

It's not exactly like the PS3 where it's hard to get the most out of the console due to the cell architecture.
I agree completely. I think people are going to see a much wider divide in third-party games this generation. People are used to only first-party developers getting the most out of the hardware, but these boxes being far less specialized should make it significantly easier for developers to get the most out of each platform. Not to mention development tools and engines have come a very long way, and things scale much more now than they ever did. People are kidding themselves if they think developers are just going to ignore 40% more GPU power and higher memory bandwidth.

We have already heard of one developer (NFS Rivals) saying one version of the game looks better than the other, and they wouldn't even think to bring it up if the difference wasn't noticeable.

Whether or not third-party developers really dig into the PS4 to make use of the more specialized features of the hardware is certainly up for debate, but those specialized features are only going to widen the gap between the two consoles further, not create the gap to begin with. Conversely, it seems to me that developers making use of any specialized features of the XBone can really only work to diminish the gap between the two systems.
 

velociraptor

Junior Member
Haha, cute, but the difference between PS2 and Xbox was friggin' enormous, and understandable since they were practically two years apart in technology.

PS2 vs Xbox FLOPS: 6.2 GFLOPS vs 21.6 GFLOPS (~250% difference)
GameCube vs Xbox FLOPS: 10.5 GFLOPS vs 21.6 GFLOPS (~100% difference)

Difference between PS4 and Xbox One < difference between Xbox and GameCube.
The comparison is not valid for one reason: different architectures yield different flops that aren't completely indicative of performance. You're comparing apples to oranges.

The PS4 and Xbox One have the same architecture. They are both based on AMD GPUs. The comparison is more direct.

Imagine you have two PCs. Same CPU. Same RAM. One has a stronger GPU, one has a weaker GPU. That's the PS4 and Xbox One in a nutshell. A highly simplistic comparison, but there you go.
 
It doesn't help the CPU power in the traditional sense (like making it faster, for instance), but it helps with redundancies, and this, combined with GPU compute, will allow the CPU to have more cycles to do what it does best: work on the more intense processes (like complex AI, physics, etc.).

So, yeah, it doesn't make the processor more powerful, but it makes the entire system more efficient, which in turn makes games better. To the end user it will feel like more power, but it's not literally more power. Upclocking would *literally* be more power, but hUMA has nothing to do with that.

Now this is a good explanation! Everything everyone was saying was flying over my head, but using examples from games and how it would work makes it more understandable. Thanks for this.
 
The comparison is not valid for one reason: different architectures yield different flops that aren't completely indicative of performance. You're comparing apples to oranges.

The PS4 and Xbox One have the same architecture. They are both based on AMD GPUs. The comparison is more direct.

Imagine you have two PCs. Same CPU. Same RAM. One has a stronger GPU, one has a weaker GPU. That's the PS4 and Xbox One in a nutshell. A highly simplistic comparison, but there you go.

Except they don't have the same RAM. :p
 

velociraptor

Junior Member
Except they don't have the same RAM. :p
Yes, as I said, a highly simplistic comparison :p

RAM advantage aside, we haven't even touched the actual specifications of the GPUs, never mind the 'teraflops'. The PS4 GPU has an advantage in pretty much every way possible.

Xbox One:
1.31 TFLOPS
40.9 GTex/s - Texture Fill Rate
13.6 GPix/s - Pixel Fill Rate
68GB/s DDR3
109GB/s eSRAM
16 ROPs
12 CUs (768 ALUs) - Compute Units

PS4:
1.84 TFLOPS (+40%)
57.6 GTex/s (+40%) - Texture Fill Rate
25.6 GPix/s (+90%) - Pixel Fill Rate
176GB/s GDDR5
32 ROPs (+100%)
18 CUs (1152 ALUs) (+50%) - Compute Units
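
If anyone is wondering where headline figures like these come from, they fall straight out of the unit counts. A quick sketch that reconstructs them; note the GPU clocks (roughly 800MHz for the PS4 and 853MHz for the Xbox One) are my assumption from commonly cited figures, not something in the list above:

#include <cstdio>

// Rebuilding the headline figures from the unit counts listed above.
// The clock speeds are assumptions (commonly cited ~800 MHz PS4,
// ~853 MHz Xbox One), not part of the spec list itself.
int main() {
    struct Gpu { const char* name; int cus; int alus; int rops; double clockGHz; };
    const Gpu gpus[] = {
        {"Xbox One", 12,  768, 16, 0.853},
        {"PS4",      18, 1152, 32, 0.800},
    };

    for (const Gpu& g : gpus) {
        double tflops = g.alus * 2.0 * g.clockGHz / 1000.0;  // one FMA = 2 FLOPs per ALU per clock
        double gtex   = g.cus  * 4.0 * g.clockGHz;           // 4 texture units per CU on GCN
        double gpix   = g.rops * 1.0 * g.clockGHz;           // 1 pixel per ROP per clock
        std::printf("%-8s  %.2f TFLOPS, %.1f GTex/s, %.1f GPix/s\n",
                    g.name, tflops, gtex, gpix);
    }
}

That's exactly where the +40% and +90% in the list come from: the only things that change between the two are the unit counts and the clock.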
 
Most games will be built on PC and then downscaled to the PS4 and Xbone. They will just have to be downscaled even further on the Xbone because of the weaker hardware. How much or how little difference it will make, I'm not qualified to say. But the difference will be there, and it won't require any extra effort from devs.

For people who keep bringing up PS3/360 multiplats as a counter, saying "the PS3 had more power and it didn't matter!", it actually works as more of a support if anything. The 360 had a superior GPU, more RAM to play with, and in some ways the more appropriate CPU. And almost all multiplatform games reflected those advantages.


Yep. The PS3 was more powerful but much harder to program for than the 360. The PS4 is more powerful and easier to program for than the Xbox One. That's a pretty nice combo for Sony this time.
 

velociraptor

Junior Member
Yep. The PS3 was more powerful but much harder to program for than the 360. The PS4 is more powerful and easier to program for than the Xbox One. That's a pretty nice combo for Sony this time.
I thought the PS3 and 360 were roughly evenly matched?
PS3: Better CPU, hard to program for. Weaker GPU. Split memory.
360: Better GPU. Easier to program. Unified memory
 
bunch of crap

This reminds me of the time when Microsoft fans accused Sony fans of using Nvidia's numbers against the 360 and claimed they were bullshit. See what I'm trying to say?

You have no clue what you're talking about. You're literally comparing apples to oranges. The hardware inside both consoles is pretty much the same; the PS4 just has more power and more "features". It's just reality, stop making bullshit arguments.
 
I thought the PS3 and 360 were roughly evenly matched?
PS3: Better CPU, hard to program for. Weaker GPU. Split memory.
360: Better GPU. Easier to program. Unified memory


The GPU was tweaked quite a bit, so it wasn't all that much weaker (but yeah, the 360's is better, just with fewer tweaks). The problem was that coding for the PS3 was all custom stuff that was a waste of time. This is why Gaben bitched about it back in the day... you had to learn specifically how to make games take advantage of the PS3 hardware, and none of that work translated over to the 360 or PC. The Cell was a brilliant chip that was a complete waste of time to learn... unless you were first party.
 

Biker19

Banned
This reminds me of the time when Microsoft fans accused Sony fans of using Nvidia's numbers against the 360 and claimed they were bullshit. See what I'm trying to say?

You have no clue what you're talking about. You're literally comparing apples to oranges. The hardware inside both consoles is pretty much the same; the PS4 just has more power and more "features". It's just reality, stop making bullshit arguments.

Xbox fanboys are pretty much in denial that the PS4 is more powerful than the Xbox One is. I'm sure they were bragging about the Original Xbox being more powerful than the PS2 & how the Xbox 360 received a ton of better multiplats than the PS3 because of the PS3's cell architecture.

My, my, how the tables have turned.
 

Yoday

Member
The GPU was tweaked quite a bit, so it wasn't all that much weaker (but yeah, the 360's is better, just with fewer tweaks). The problem was that coding for the PS3 was all custom stuff that was a waste of time. This is why Gaben bitched about it back in the day... you had to learn specifically how to make games take advantage of the PS3 hardware, and none of that work translated over to the 360 or PC. The Cell was a brilliant chip that was a complete waste of time to learn... unless you were first party.
I hear what you're saying, but all I see is a Sackboy without a hat.
 
Haha, cute, but the difference between PS2 and Xbox was friggin' enormous, and understandable since they were practically two years apart in technology.

PS2 vs Xbox FLOPS: 6.2 GFLOPS vs 21.6 GFLOPS (~250% difference)
GameCube vs Xbox FLOPS: 10.5 GFLOPS vs 21.6 GFLOPS (~100% difference)

Difference between PS4 and Xbox One < difference between Xbox and GameCube.


In flop difference:

PS2 vs Xbox: 15.4 GFLOPS
Xbox vs GC: 11.1 GFLOPS
Xbox One vs PS4: 530 GFLOPS

I can't see how such a big number couldn't make a difference.
 