
Xbox One hardware breakdown by ExtremeTech.com after HotChips reveal

Ushae

Banned
I don't think the Xbone is weak compared to the PS4. It runs Ryse at high to very high settings @ 1080p with AA @ 30 FPS, and that is great IMO. At the end of the day, only a select few Sony devs will use the extra power the PS4 has

Exactly, really we're getting some stunning games on both sides of the fence! :))
 

RayMaker

Banned
I really hope not. There's already a significant difference between the consoles and PCs, and the consoles aren't even out yet.

I would not worry too much; even if the PS4/X1 had Titans in them, I doubt it would change developers' ability to create the games they want to create.

If it weren't for games like Far Cry 3, with jaggies galore and an unstable framerate, this gen could have kept going for a few years. GTA5 has proven to me that graphics aren't everything.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
Most devs have been saying the PS4 has the lead but that there isn't a huge gap between the consoles so it's funny to see so many people on GAF claiming the PS4 will smash the XBone.

The problem with all of these statements is that they are not quantified. As long as nobody quantifies what he means by "smash" or "much better", discussions won't make any progress. (Not that they would anyway :) The important thing in all those tech discussions (aside from the fun…) is that no console has any sort of game-changing special sauce, but instead the same general architecture, so comparing them in terms of GPU ALU and memory bandwidth(s) and projecting that difference onto benchmarks on their respective PC counterparts is a warranted approximation(!) for people who care. It's certainly a valid aspect to consider when you are only buying one console.

That said, I think Ryse looks good and I will hopefully have fun playing it. :p
 
The sentence "Game X will look 40% better" doesn't have meaningful semantics. People are referring to the fact that the PS4's GPU has 50% more ALU (running at 800mhz compared to 853mhz) leading to 41% more programmable raw computation power. Having the same architecture does not mean that those additional 384 ALUs (or "shader cores" or whatever we want to call them) are just there for shits and giggles. It just means that both GPUs are built from the same building blocks, only the PS4 has 50% more of them. That is not controversial, that is just an objective fact. And if those differences are meaningless then I wonder why so many people spend additional money for graphics card from the same vendor and the same family but with more processing power. It's really the same situation. Everybody can take from that what he wants.

He's def not saying it's meaningless, and I don't think anybody else really is, either. It's simply being stated that the 40% or 50% difference that we see on paper, may not manifest in the ways people are expecting it to. Simply having a much better AA solution for PS4 games can wipe out close to 30 or 40% of that power.

When more devs, mostly PS4 first parties, start killing it with all those extra ALU resources for cool-looking GPU compute effects, that's when we'll start seeing some real and clear separation between Xbox One and PS4 titles. Nobody questions that the PS4 is stronger. It is. How the heck could it not be? But there are a lot of unrealistic expectations that you don't need to be an expert to see are unrealistic.

The Xbox One is definitely noticeably weaker than the PS4 on paper, but far from being a weak system.

Many people understand the first part of that, but I don't think a lot of people are grasping the second half of that sentence.

Exactly, really we're getting some stunning games on both sides of the fence! :))

That we are. It's an exciting time to be a gamer :D
 
I'm not denying the specs at all, I'm just laughing at the people that expect 40% more performance because it says so on paper.
Most devs have been saying the PS4 has the lead but that there isn't a huge gap between the consoles so it's funny to see so many people on GAF claiming the PS4 will smash the XBone.

As others have said, the proof is in the pudding and we'll see for ourselves how much better the multiplatforms run on the PS4.

Most devs are never going to say one console is much more powerful than the other. This is not news. Developers are in the business of selling their games, so they're never going to proclaim one version of their game notably inferior to the other versions. That's how this stuff works. People who want to rely on developers for information on which console is more powerful only do so because they like hearing developers tell them they're very comparable.

We'll see differences when those multiplatform games are compared at sites like The Digital Foundry.
 

Chobel

Member
                         PS4     XBO     Difference
Vertex throughput (bn)   1.6     1.7     -5.88%

Source for this?
Ops/cycle                1152    768     +50.00%

Not operation per cycle, it's shader units (ALU).

Now here's the one that you need to get your head around, and where the famous "50% more power" tends to get bandied about. The first thing you have to understand about a graphics pipeline is that it's entirely programmable. The length and complexity of those calculations is entirely up to the programmer, for both vertex and pixel shaders (and others these days).

I won't get too much into it, but let's put it this way: if a dev chooses a rather lengthy, complex calculation to run on a pixel shader, this will favor the PS4. If not, it won't favor either, except for the fact that the XBO is pushing more cycles per second.
The best way to think of it is like lines of code. This is by no means a correct analogy, but it's the best I can come up with...

What the hell are you talking about? Shader units represent how many shader operations can run simultaneously; the count has nothing to do with the complexity of the shaders at all.
 
Source for this?


Not operation per cycle, it's shader units



What the hell are you talking about? Shader units represent how many shader operations can run simultaneously; the count has nothing to do with the complexity of the shaders at all.

He's getting all this stuff from someone else on a different forum without saying so, unless of course he and that person are one and the same. He's getting it from here.

Grabbing link..

http://www.psu.com/forums/showthrea...con-(Update)?p=6190908&viewfull=1#post6190908

This is where he got it from. I don't know who the guy is, but he comes off as pretty damn knowledgeable, but then it's hard to tell him apart from your typical forum poster, since 80% of GAF are engineers and chip designers :p

And somehow I find the bolded part regarding complexity of shaders hard to believe... If the complexity of shaders isn't somewhat important to the entire equation, then what's the point when developers or engineers constantly talk about being able to run much more complex shaders on more powerful hardware? Are they just talking out of their you-know-what?
 
If the programmer runs shader code that's less than 768 "instructions", it favors the XBO due to its higher clock rate.
If the code is larger than 768 "instructions", it favors the PS4.

.......

The only conclusion I draw from this is that for most tasks, shadow mapping etc., it's pretty much a wash outside of clock rate.
The PS4 allows for more complex code in terms of allowing more textures per model or mesh, and more complex and lengthier code "per model".

The end result will likely be a wash in terms of most assets used. Models, the number of models, the "things going on" will be identical. The actual texture resolution, though, favors the PS4, whether that be through the number of textures applied to a model or simply higher-res textures.

.....

Conclusion: Things aren't going to be "faster"; you're not going to see "more things" in the PS4's graphics. What you may see is higher-res textures or better texture effects on the PS4, while I expect the frame rate to be steady and the same on both. My gut feeling is that the extra "bump" in the snake is there for compute calculations.

Now, what I've left out of all of this is the extra work that either the CPU or GPU has to do on the PS4. There is no real way of conclusively discussing that, as Sony hasn't been terribly forthcoming on any additional hardware specs.
But as things stand, there's an awful lot the PS4 has to deal with outside of simply drawing a polygon that will need either CPU or GPU resources, and it could eat into that extra compute overhead the PS4 has.

Very interesting analysis. We'll see what happens. My gut feeling is enthusiasts will make a big deal about differences, but mainstream consumers will not notice or care. Both systems will be able to impress visually with exclusives as we're already seeing with Ryse, Forza, Infamous. If I'm being honest I think the X1 is a bit ahead right now in real world results, but there could be numerous explanations for that.
 

ethomaz

Banned
I am interested in the sources for some of these numbers. Can you post a link or reference? Thanks.
Simple maths using the GPU specs released by MS/Sony.

PS4's GPU: 800MHz; 18 CUs; 1152 SPs; 72 TMUs; 32 ROPs
Xbone's GPU: 853MHz; 12 CUs; 768 SPs; 48 TMUs; 16 ROPs

So using the PS4's GPU for example...

Texture reads (GT/s): 72 TMUs * 800MHz = 57.6 GT/s
Vertex throughput (bn): 800MHz * 2 = 1.6 billion triangles per second... triangles, not vertices, bla bla
Output (GP/s): 32 ROPs * 800MHz = 25.6 GP/s
Ops/cycle: 1152 SPs
TF: 1152 SPs * 800MHz * 2 = 1.84 TFLOPS
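
(If you want to reproduce every figure in one go, this little Python sketch derives them the same way from the released specs; the only assumptions are the standard GCN ones, 2 FLOPs per SP per cycle and 2 triangles per clock:)

# GPU specs as released: (clock MHz, SPs, TMUs, ROPs)
specs = {
    "PS4":   (800, 1152, 72, 32),
    "Xbone": (853,  768, 48, 16),
}

for name, (clock, sps, tmus, rops) in specs.items():
    ghz = clock / 1000.0
    print(name,
          "tex reads: %.1f GT/s" % (tmus * ghz),             # texture fetch rate
          "tris: %.2f bn/s" % (ghz * 2),                     # 2 triangles/clock on GCN
          "fill: %.1f GP/s" % (rops * ghz),                  # ROP output rate
          "compute: %.2f TFLOPS" % (sps * ghz * 2 / 1000.0)) # 2 FLOPs/SP/cycle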
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
And somehow I find the bolded part regarding complexity of shaders hard to believe... If the complexity of shaders isn't somewhat important to the entire equation, then what's the point when developers or engineers constantly talk about being able to run much more complex shaders on more powerful hardware? Are they just talking out of their you-know-what?

What that guy is saying about how shaders are executed on GPUs is just wrong. I am tired of writing another long text today, so you'll have to take my word for it. /edit: Or maybe he is just using bad analogies; I don't really get what some of the metaphors are supposed to mean.
 
                         PS4     XBO     Difference
Texture reads (GT/s)     56      41      +36.59%
Vertex throughput (bn)   1.6     1.7     -5.88%
Output (GP/s)            25.6    13.6    +88.24%
Ops/cycle                1152    768     +50.00%
TF                       1.84    1.308   +40.67%


So what does it all mean?
Texture reads are the number of textures you can "fetch" per second from a given source. It's exactly what it sounds like: the PS4 can grab 36% more textures per second.

Vertex throughput - Conversely, because both are tied to their clock speed, the end result is that the XBO has an advantage in just how many things it can put on screen per second (note: I'm not saying how pretty those things are).

Output - This one is a little squishy. While the raw numbers favor Sony, the real trick here is to output only what is required; the more useless information you can "kill" and not output at all, the better.
Couple that with the ability to store data in a compressed format and a rather large ROP cache that allows for depth testing before the pixel shader work, and you make significant gains in output. It's really, really hard to come up with any meaningful conclusion on this one, other than to say they both have more than enough grunt to do 1080p 60fps with well over 10x overdraw easily.

Ops per cycle - Now here's the one that you need to get your head around, and where the famous "50% more power" tends to get bandied about. The first thing you have to understand about a graphics pipeline is that it's entirely programmable. The length and complexity of those calculations is entirely up to the programmer, for both vertex and pixel shaders (and others these days).

I won't get too much into it, but let's put it this way: if a dev chooses a rather lengthy, complex calculation to run on a pixel shader, this will favor the PS4. If not, it won't favor either, except for the fact that the XBO is pushing more cycles per second.

The best way to think of it is like lines of code. This is by no means a correct analogy, but it's the best I can come up with...

If the programmer runs shader code that's less than 768 "instructions", it favors the XBO due to its higher clock rate.
If the code is larger than 768 "instructions", it favors the PS4.

Here's the kicker though: because this code runs "per vertex" or "per texel", i.e. per model or mesh, you're still tied to your vertex throughput or texture fetch rate.
One of these does play to the PS4's capacity for more "lines of code" by allowing more texture fetches, but on a vertex-by-vertex basis? You're still limited by clock speed.

The only conclusion I draw from this is that for most tasks, shadow mapping etc., it's pretty much a wash outside of clock rate.
The PS4 allows for more complex code in terms of allowing more textures per model or mesh, and more complex and lengthier code "per model".

The end result will likely be a wash in terms of most assets used. Models, the number of models, the "things going on" will be identical. The actual texture resolution, though, favors the PS4, whether that be through the number of textures applied to a model or simply higher-res textures.

I can't help but think of the PS4's render pipeline as a snake that's eaten a rather large meal: it has this big belly in the middle, while its head and tail are sleek.
The issue, of course, is whether you use that overhead "space" or not.

Conclusion: Things aren't going to be "faster"; you're not going to see "more things" in the PS4's graphics. What you may see is higher-res textures or better texture effects on the PS4, while I expect the frame rate to be steady and the same on both. My gut feeling is that the extra "bump" in the snake is there for compute calculations.

Now, what I've left out of all of this is the extra work that either the CPU or GPU has to do on the PS4. There is no real way of conclusively discussing that, as Sony hasn't been terribly forthcoming on any additional hardware specs.
But as things stand, there's an awful lot the PS4 has to deal with outside of simply drawing a polygon that will need either CPU or GPU resources, and it could eat into that extra compute overhead the PS4 has.

Not cool to just copy paste without sourcing it.
 

cripterion

Member
We'll see differences when those multiplatform games are compared at sites like The Digital Foundry.

Can't disagree with that.

I was just pointing out that some people seem to have an agenda somehow. Wish people would just enjoy their consoles for what they are.

I intend to purchase both down the line (just like I did this current gen), but went with the Xbone first because I can see myself playing Kinect titles with my daughter and also because the launch lineup is more interesting to me.

I love eye candy, and so far the only things that blew me away were Deep Down and that Sorcerer demo. No doubt the PS4 will be a solid system, but I think the Xbone can be one too, even if it's underpowered compared to the PS4.
 

Chobel

Member
He's getting all this stuff from someone else on a different forum without saying so, unless of course he and that person are one and the same. He's getting it from here.

Grabbing link..

http://www.psu.com/forums/showthrea...con-(Update)?p=6190908&viewfull=1#post6190908

This is where he got it from. I don't know who the guy is, but he comes off as pretty damn knowledgeable, but then it's hard to tell him apart from your typical forum poster, since 80% of GAF are engineers and chip designers :p

Thanks for the link, I'm going to read that thread right now.
And somehow I find the bolded part regarding complexity of shaders hard to believe... If the complexity of shaders isn't somewhat important to the entire equation, then what's the point when developers or engineers constantly talk about being able to run much more complex shaders on more powerful hardware? Are they just talking out of their you-know-what?

Context. I said that the number of shader units has nothing to do with the complexity of shaders. So yes, the complexity of shaders has an impact (good or bad), but that doesn't mean that fewer shader units will somehow diminish those effects.
Clock * 2... you don't need a source for that ;) It's the number of triangles per second... GCN processes 2 triangles per clock cycle... the name is just wrong (vertex throughput, lol).

thanks for the clarification.
 
Not cool to just copy paste without sourcing it.

Yea, should definitely give credit to the original poster, especially seeing as how he obviously took the time to type all of that out. I don't know if he's right or wrong, but he does seem pretty knowledgeable.

I might be mistaken, but I think that guy's some indie dev or something. I remember someone telling me who he is, but I can't for the life of me remember.
 

astraycat

Member
Output - This one is a little squishy. While the raw numbers favor Sony, the real trick here is to output only what is required; the more useless information you can "kill" and not output at all, the better.
Couple that with the ability to store data in a compressed format and a rather large ROP cache that allows for depth testing before the pixel shader work, and you make significant gains in output. It's really, really hard to come up with any meaningful conclusion on this one, other than to say they both have more than enough grunt to do 1080p 60fps with well over 10x overdraw easily.

While this may be true for opaque geometry, it becomes really tricky when you start accounting for multisampling and transparency.
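
(As a rough sanity check on that "1080p 60fps with well over 10x overdraw" claim, in Python; this counts opaque fill only, ignoring exactly the blending/MSAA traffic being raised here:)

pixels_per_frame = 1920 * 1080
fps, overdraw = 60, 10

demand_gp_s = pixels_per_frame * fps * overdraw / 1e9
print(demand_gp_s)  # ~1.24 GP/s, vs. 13.6 GP/s (XBO) and 25.6 GP/s (PS4) peak fill

Transparency, MSAA, and multiple render targets multiply that demand, which is why the raw peak numbers alone don't settle it.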

Ops per cycle - Now here's the one that you need to get your head around, and where the famous "50% more power" tends to get bandied about. The first thing you have to understand about a graphics pipeline is that it's entirely programmable. The length and complexity of those calculations is entirely up to the programmer, for both vertex and pixel shaders (and others these days).

I won't get too much into it, but let's put it this way: if a dev chooses a rather lengthy, complex calculation to run on a pixel shader, this will favor the PS4. If not, it won't favor either, except for the fact that the XBO is pushing more cycles per second.

The best way to think of it is like lines of code. This is by no means a correct analogy, but it's the best I can come up with...

If the programmer runs shader code that's less than 768 "instructions", it favors the XBO due to its higher clock rate.
If the code is larger than 768 "instructions", it favors the PS4.

This isn't true at all. The 768 OPs vs. 1152 OPs are the number of parallel instructions that can be done in a cycle. So long as you have more than 768 threads (12 wavefronts) in flight, it will favor the PS4. More than 12 wavefronts isn't very hard to imagine -- a full screen quad will require 32,400 wavefronts alone at 1080p. Shader length doesn't really come into the picture here.
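
(The 32,400 figure checks out: GCN wavefronts are 64 threads wide, so a full-screen pass at 1080p, assuming one pixel-shader thread per pixel, generates:)

width, height = 1920, 1080
threads_per_wavefront = 64  # GCN wavefront width

pixels = width * height                       # 2,073,600 shader invocations
wavefronts = pixels // threads_per_wavefront
print(wavefronts)  # 32400 -- vastly more than the 12 wavefronts needed
                   # to occupy 768 ALUs for one cycle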

Here's the kicker though: because this code runs "per vertex" or "per texel", i.e. per model or mesh, you're still tied to your vertex throughput or texture fetch rate.
One of these does play to the PS4's capacity for more "lines of code" by allowing more texture fetches, but on a vertex-by-vertex basis? You're still limited by clock speed.

The only conclusion I draw from this is that for most tasks, shadow mapping etc., it's pretty much a wash outside of clock rate.
The PS4 allows for more complex code in terms of allowing more textures per model or mesh, and more complex and lengthier code "per model".

The end result will likely be a wash in terms of most assets used. Models, the number of models, the "things going on" will be identical. The actual texture resolution, though, favors the PS4, whether that be through the number of textures applied to a model or simply higher-res textures.

I can't help but think of the PS4's render pipeline as a snake that's eaten a rather large meal: it has this big belly in the middle, while its head and tail are sleek.
The issue, of course, is whether you use that overhead "space" or not.

Conclusion: Things aren't going to be "faster"; you're not going to see "more things" in the PS4's graphics. What you may see is higher-res textures or better texture effects on the PS4, while I expect the frame rate to be steady and the same on both. My gut feeling is that the extra "bump" in the snake is there for compute calculations.

Now, what I've left out of all of this is the extra work that either the CPU or GPU has to do on the PS4. There is no real way of conclusively discussing that, as Sony hasn't been terribly forthcoming on any additional hardware specs.
But as things stand, there's an awful lot the PS4 has to deal with outside of simply drawing a polygon that will need either CPU or GPU resources, and it could eat into that extra compute overhead the PS4 has.

This is pretty bunk. Games, for the grand majority of cases, are fill limited rather than vertex limited. This means that more than enough pixel wavefronts are generated to saturate the CUs, which in turn means that the PS4 will actually finish processing faster. Will it be 40% faster? Probably not. But it's still going to be there.
 

jaypah

Member
The problem with all of these statements is that they are not quantified. As long as nobody quantifies what he means by "smash" or "much better", discussions won't make any progress. (Not that they would anyway :) The important thing in all those tech discussions (aside from the fun…) is that no console has any sort of game-changing special sauce, but instead the same general architecture, so comparing them in terms of GPU ALU and memory bandwidth(s) and projecting that difference onto benchmarks on their respective PC counterparts is a warranted approximation(!) for people who care. It's certainly a valid aspect to consider when you are only buying one console.

That said, I think Ryse looks good and I will hopefully have fun playing it. :p

I agree, and it's a valid aspect for those (like me) who buy all platforms as well. Aside from it just being plain interesting, it can help to know the differences in graphics so you can weigh them against other things like controller preference, community, exclusive content, etc.
 

jaypah

Member
I just got lost. Damn my ignorance on this stuff. I just wasted 30 minutes trying to understand.

Some of it is hit or miss for me but I like trying to figure it all out. I've literally learned at least one new thing from Tech-GAF every day since the console specs leaked. I have a long way to go so I never get involved but it's fun and good reading. It's certainly the most interesting stuff on GAF for me and often leads me to go off on my own and research stuff.
 

gofreak

GAF's Bob Woodward
While this may be true for opaque geometry, it becomes really tricky when you start accounting for multisampling and transparency.

And MRTs and multiple passes of various flavours... I'm sure few modern games compose a frame with just one write target and x levels of overdraw.
 

dude819

Member
It also makes no sense to force a 24-hour check-in on your customers, demand they have a Kinect plugged in to even use the console, and force the Kinect into every single box even though a lot of people will never use it, all of which resulted in the console costing $100 more than the competition.

You would think they'd never do any of that, and yet they did. So yeah, they did design a console that is notably less powerful than the PS4. That doesn't mean the XB1 is weak or not going to produce impressive games. It will, but there are going to be notable differences, especially between multiplatform games where you have a directly comparable example.

Features are very different from a tactical disadvantage.

By your logic, you would never think a company would remove all the buttons from a phone, make it a solid brick, lock away the battery, make users sign up for data plans, and make them pay for added applications, but I bet you love your smartphone.
 
Thanks for the link, I'm going to read that thread right now.


Context. I said that the number of shader units has nothing to do with the complexity of shaders. So yes, the complexity of shaders has an impact (good or bad), but that doesn't mean that fewer shader units will somehow diminish those effects.


thanks for the clarification.

My general takeaway is that the guy seems surprised or impressed by the amount of special-purpose hardware MS has offloading work from either the GPU or the CPU. He thinks a lot of very common, essential tasks that are used frequently on GPUs are things Microsoft specifically created special-purpose hardware for, to help cut down on the amount the GPU has to do itself.

Also, doing some random reading on the DirectX 11.2 feature set really does make it look like the most significant additions to the Xbox One GPU were all made with DirectX 11.2 features in mind. Reading through the feature set you'll instantly see things that point right back to specific customizations inside the Xbox One, such as the display planes, for example. I always suspected the Xbox One seemed ideal for a system that would dynamically resize its framebuffer, without the gamer knowing, in order to maintain strong graphics performance, and it's an actual DirectX 11.2 feature. It seems a perfect fit for the display planes as they were described for the Xbox One, and quite a bit more. And that isn't even the only one.

http://msdn.microsoft.com/en-us/library/windows/apps/bg182880.aspx#two

GPU overlay support

[Get the DirectX Foreground Swap Chains now.]

GPU multi-plane overlay support keeps your gorgeous 2D art and interfaces looking their best in native resolution, while you draw your 3D scenes to a smaller, scaled frame buffer. Use the new IDXGIOutput2::SupportsOverlays method to determine if the platform supports multiple draw planes. Create a new overlay swap chain by specifying the DXGI_SWAP_CHAIN_FLAG_FOREGROUND_LAYER flag (see DXGI_SWAP_CHAIN_FLAG).

This feature allows the scaling and composition of two swap chains to happen automatically on the fixed-function overlays hardware, without using any GPU resources at all. This leaves more GPU resources for the app rendering code.
Apps can already use code to render at multiple resolutions, but this just involves extra copies and composition passes that aren't ideal on low-power platforms.

That he doesn't understand basic principles of GPU parallelism... makes him come off as a total fucking idiot.

Honestly, I don't know what to think about most of this, since I'm no expert on the subject, but the one thing I can trust is that both Sony and Microsoft have clearly built some pretty damn capable hardware, and we're going to be benefitting from that in the years to come. :D

Another from the DirectX 11.2 feature set.

Frame buffer scaling

New GPU scaling lets you dynamically resize your frame buffer to keep your 3D graphics smooth. The new IDXGISwapChain2 interface defines a set of new methods to get and set the source swap chain size, and define the scaling matrix for your frame buffer.

If stuff like this is happening in games this gen, it'll be even more difficult to spot differences outside of DF doing some pixel counting and such. The typical gamer probably won't even be able to tell when, if at all, the resolution was quickly throttled to something lower to maintain performance, and then back up again once a heavy scene has passed.
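
(Purely to illustrate the idea, not how the DirectX API or any actual engine does it: dynamic resolution is basically a feedback loop on frame time. A minimal Python sketch with made-up thresholds:)

# Hypothetical dynamic-resolution controller: shrink the 3D render target
# when frame time blows the budget, creep back up when there's headroom.
# The UI would stay at native res on the fixed-function overlay plane.
BUDGET_MS = 33.3   # 30 fps target
scale = 1.0        # fraction of native 1920x1080

def adjust_resolution(frame_ms):
    global scale
    if frame_ms > BUDGET_MS and scale > 0.7:
        scale -= 0.05  # heavy scene: drop render resolution a notch
    elif frame_ms < BUDGET_MS * 0.8 and scale < 1.0:
        scale += 0.05  # scene lightened up: restore resolution
    return int(1920 * scale), int(1080 * scale)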
 

HORRORSHØW

Member
So, in the final analysis, will games for both systems have (more or less) visual parity? If so, good because it forces developers to differentiate their games with unique ways to play and experience games rather than using graphical fidelity as their primary bullet point.
 
Grimløck;78809873 said:
So, in the final analysis, will games for both systems have (more or less) visual parity? If so, good because it forces developers to differentiate their games with unique ways to play and experience games rather than using graphical fidelity as their primary bullet point.

Pretty much assured, with some expected edges to the PS4; the biggest, I think, will be superior resolution, and certainly some better textures in places. Honestly, I think the 32 vs 16 ROPs is the single most significant difference between the two platforms.
 
                         PS4     XBO     Difference
Texture reads (GT/s)     56      41      +36.59%
Vertex throughput (bn)   1.6     1.7     -5.88%
Output (GP/s)            25.6    13.6    +88.24%
Ops/cycle                1152    768     +50.00%
TF                       1.84    1.308   +40.67%

The number of vertices that can be passed along to the CUs depends on memory bandwidth. The number of vertices that can be transformed in a given time depends on the clock and the number of CUs. It's one of the first things you learn in an introductory computer graphics course.

Conclusion:
PS4's vertex throughput is 40% faster.

Fact:
In an ideal situation where memory bandwidth is infinite and latency is non-existent, the PS4's GPU can compute 40% faster than the XB1's GPU in best-case scenarios. Pixel fill rate is an exception, since that depends on the number of ROPs, of which the PS4 has twice as many.

This does not mean the PS4 will have 40% faster framerates in games. That will never happen.
 
The number of vertices that can be passed along to the CUs depends on memory bandwidth. The number of vertices that can be transformed in a given time depends on the clock and the number of CUs. It's one of the first things you learn in an introductory computer graphics course.

Conclusion:
PS4's vertex throughput is 40% faster.

Fact:
In an ideal situation where memory bandwidth is infinite and latency is non-existent, the PS4's GPU can compute 40% faster than the XB1's GPU in best-case scenarios. Pixel fill rate is an exception, since that depends on the number of ROPs, of which the PS4 has twice as many.

This does not mean the PS4 will have 40% faster framerates in games. That will never happen.

I think it's definitely possible, but, like you, I obviously think it won't happen, because devs will simply lower the resolution on the Xbox One version of their game if they have to, which will be almost unnoticeable to the large majority of gamers.
 

MoneyHats

Banned
Personally I think we will end up like this for next gen:

PS4 - Medium settings
XB One- Medium settings
PC - High settings

I think if your hope is the PS4 will be 40% more powerful outside of paper specs, you will be disappointed.


Yep, we've already seen comparisons between hardware with a 50% difference in power and how that translates to game performance. The difference was somewhere between 12% and 20% in framerate. Digital Foundry and other sites should update those comparisons now for a 40% difference on paper; I suspect the gap will drop to somewhere between 9% and 17% in framerate for the Xbox One. A 60fps game on PS4 will likely run at 50 to 54fps on Xbox One: not a major difference, and one that can be addressed by minor trade-offs such as tweaking the res slightly or decreasing AA or minor LODs, which will be virtually undetectable unless literally placed side by side, kind of like the way things are now with PS3 and 360.
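
(For what that projection is worth, the arithmetic behind the 50-to-54fps figure is just the poster's extrapolated 9-17% deficit applied to 60fps:)

ps4_fps = 60.0
for deficit in (0.09, 0.17):        # the extrapolated framerate-gap range above
    print(ps4_fps * (1 - deficit))  # 54.6 and 49.8 -> "50 to 54fps"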
 
Can't disagree with that.

I was just pointing out that some people seem to have an agenda somehow. Wish people would just enjoy their consoles for what they are.

I intend to purchase both down the line (just like I did this current gen), but went with the Xbone first because I can see myself playing Kinect titles with my daughter and also because the launch lineup is more interesting to me.

I love eye candy, and so far the only things that blew me away were Deep Down and that Sorcerer demo. No doubt the PS4 will be a solid system, but I think the Xbone can be one too, even if it's underpowered compared to the PS4.

Doing simple math constitutes having an agenda? If you don't like the numbers, fine, but this is a tech thread. Nobody is saying who can or can't enjoy which console for what it is. Whether you want to play Kinect with your daughter, or whether you find the X1 launch lineup more appealing, has no relevance.

Yep, we've already seen comparisons between hardware with a 50% difference in power and how that translates to game performance. The difference was somewhere between 12% and 20% in framerate. Digital Foundry and other sites should update those comparisons now for a 40% difference on paper; I suspect the gap will drop to somewhere between 9% and 17% in framerate for the Xbox One. A 60fps game on PS4 will likely run at 50 to 54fps on Xbox One: not a major difference, and one that can be addressed by minor trade-offs such as tweaking the res slightly or decreasing AA or minor LODs, which will be virtually undetectable unless literally placed side by side, kind of like the way things are now with PS3 and 360.

These are not PCs. There are no high/medium settings that devs can switch on and off, so those DF comparisons are meaningless. FPS targets are design choices locked in by developers. The Xbox, GC, and PS2 had a wide range of differences in specs, yet most games were locked to 30fps.
 

RayMaker

Banned
Yep, we've already seen comparisons between hardware with a 50% difference in power and how that translates to game performance. The difference was somewhere between 12% and 20% in framerate. Digital Foundry and other sites should update those comparisons now for a 40% difference on paper; I suspect the gap will drop to somewhere between 9% and 17% in framerate for the Xbox One. A 60fps game on PS4 will likely run at 50 to 54fps on Xbox One: not a major difference, and one that can be addressed by minor trade-offs such as tweaking the res slightly or decreasing AA or minor LODs, which will be virtually undetectable unless literally placed side by side, kind of like the way things are now with PS3 and 360.

What makes you think devs won't just lower texture resolution and effects quality to maintain parity in FPS and resolution?

I'm very confident that in 3rd-party titles the difference in visuals will be less than it was on PS3/360.
 

mocoworm

Member
I have no idea how you made that connection. So you think this article relates to the fact that the next Need For Speed will look better on Xbox One than the PS4?

What? No. Not at all. Read the quote from the NFS dev in that thread. I even quoted it in the post of mine that you quoted. Are you being deliberately obtuse?
 
Doing simple math constitutes having an agenda? If you don't like the numbers, fine, but this is a tech thread. Nobody is saying who can or can't enjoy which console for what it is. Whether you want to play Kinect with your daughter, or whether you find the X1 launch lineup more appealing, has no relevance.

I wouldn't say people pointing out numbers have an agenda, but there comes a point when those numbers are already well known and, as such, old news. After a while it starts to seem that, in place of the consoles proving the numbers out for themselves, all we're left with are the numbers, which will in time get pushed aside by real games that tell a much better story than boring (but not exactly unimportant) numbers ever could. Even Kinect Sports Rivals has incredible graphics, lol.

For example, I don't imagine 2 or 3 years in people will find much satisfaction in using specific numbers, on either side, to make a point if god forbid the games aren't showing what we thought they should based on the raw numbers.
 

RayMaker

Banned
Doing simple math constitutes having an agenda? If you don't like the numbers, fine, but this is a tech thread. Nobody is saying who can or can't enjoy which console for what it is. Whether you want to play Kinect with your daughter, or whether you find the X1 launch lineup more appealing, has no relevance.



These are not PCs. There are no high/medium settings that devs can switch on and off, so those DF comparisons are meaningless. FPS targets are design choices locked in by developers. The Xbox, GC, and PS2 had a wide range of differences in specs, yet most games were locked to 30fps.

So next-gen engines are not scalable? I don't believe devs can't tone down certain aspects of the X1 version to achieve the game's desired framerate + resolution;

they did it on the PS3, so what you're saying seems untrue.
 

cripterion

Member
Doing simple math constitutes having an agenda? If you don't like the numbers fine, but this is a tech thread. Nobody is saying who can or can't enjoy which console for what they are. Whether you want to play Kinect with your daughter, or whether you find the x1 launch lineup more appealing has no relevance.

kinda feel bad for the Xbone HW dudes :/
Slaving away at that box for years, only for Sony to roll in a 400lb gorilla that smashes their underpowered machine to bits with 40% more power.

Quote from the hUMA thread. A lot of people have literally been shitting on MS for months just because of the whole DRM debacle.
I'll say it again, I don't mind the numbers.
 

USC-fan

Banned
Grimløck;78809873 said:
So, in the final analysis, will games for both systems have (more or less) visual parity? If so, good because it forces developers to differentiate their games with unique ways to play and experience games rather than using graphical fidelity as their primary bullet point.
The PS4 version will look or run better for every game. It's TBA, and it will vary game to game how much that difference in specs shows.

At best the Xbone version will be close.
 
My general takeaway is that the guy seems surprised or impressed by the amount of special-purpose hardware MS has offloading work from either the GPU or the CPU. He thinks a lot of very common, essential tasks that are used frequently on GPUs are things Microsoft specifically created special-purpose hardware for, to help cut down on the amount the GPU has to do itself.

I don't think anyone can deny that the PS4 will win any raw power comparison, but I do wonder how much Xbox One's special-purpose hardware can help mitigate some of its weaknesses (by taking some pressure off the GPU). It's not something I've seen a whole lot of discussion about, and I would like to hear more about it (especially unbiased discussion).

I do wonder if these specific customizations in the Xbox One complicate the comparison a bit and make the "40%" difference not so cut-and-dry.

While I hope to get both (well, all three!) next gen systems, I'm not expecting this 40% difference in raw power to translate into "40% better looking games" (whatever that really means anyways...40% better frame rate? 40% more "stuff on the screen"?). I'm expecting the differences to be fairly minute, and not terribly noticeable to the average gamer (which I am not, but most of my friends are)...although, I know that won't stop the "consolllee warzzzz" DF comparisons from happening.
 
hUMA is just the short form of this [image], as opposed to this [image].

If the XB1 can do the former without being constrained in one way or another due to its architecture, then great.

If this is the case, then, to some large degree, the XB1 seems to have a good deal of this covered. And there are some other interesting ways that it seems to do so. Microsoft pointed out to devs in a Durango developer summit presentation from early 2012 that one of the benefits of ESRAM would be that there would be no need to move data into system RAM in order to read it.

I'm sure this isn't the case 100% of the time with just ESRAM, given all the various data that may be required at any given time. But when you look at this fact alongside the ability to take advantage of virtual memory addresses (pages in memory), and the ability to pass pointers between the CPU and GPU, which Microsoft confirms in that very same document to be architecturally possible thanks to the GPU's memory management unit (which, I guess we now know from Hot Chips, is referred to as the Guest-Host GPU MMU), then having that added luxury of being able to read data without copying it into the slower DDR3 RAM starts to look really nice.
 

Busty

Banned
What? The hUMA/shared memory stuff is in the PS4, and there are conflicting articles on whether the X1 has it. From a purely hardware standpoint, I don't see how they are about the same at all.

Xbox One:
1.31 TFLOPS
40.9 GTex/s
13.6 GPix/s
68GB/s DDR3
109GB/s eSRAM

PS4:
1.84 TFLOPS (+40%)
57.6 GTex/s (+40%)
25.6 GPix/s (+90%)
176GB/s GDDR5

This right here is why I stay out of 'tech discussions'. I don't understand all this involved tech back-and-forth at all, but even the layman in me understands enough to realise that the PS4 (on paper, I grant you) has more 'under the hood' than the Xbone.

The proof, as they say, will ultimately be in the pudding. And by pudding I do of course mean 'first party exclusives'.
 

badb0y

Member
Yep, we've already seen comparisons between hardware with a 50% difference in power and how that translates to game performance. The difference was somewhere between 12% and 20% in framerate. Digital Foundry and other sites should update those comparisons now for a 40% difference on paper; I suspect the gap will drop to somewhere between 9% and 17% in framerate for the Xbox One. A 60fps game on PS4 will likely run at 50 to 54fps on Xbox One: not a major difference, and one that can be addressed by minor trade-offs such as tweaking the res slightly or decreasing AA or minor LODs, which will be virtually undetectable unless literally placed side by side, kind of like the way things are now with PS3 and 360.
Wrong.

The performance difference will actually be greater than 40%.
 
The performance difference will actually be greater than 40%.
You do realize that, as a matter of course, developers will be using each platform's strengths and adjusting between them to match as well as possible. That means you won't really see that much of a jump for multiplats, while exclusives will, or should, be built to showcase the host platform. Maybe in the latter half of the generation there will be this widespread difference, once the hardware is so old and trying to keep up with the newest PC tech. Publishers will especially try to maintain parity on big games, just to make every version a good one to buy.
 

avaya

Member
This narrative that the Bone is a very capable machine is straight-up laughable. The PS4 is a mid-level GPU, a pathetic CPU, and 8GB of GDDR5. It is nothing to write home about in terms of specs. The Bone is even more pathetic compared to this low standard.

Both of these machines are underpowered relative to what we could have gotten; both console makers have been far too conservative.
 
This narrative that the Bone is a very capable machine is straight-up laughable. The PS4 is a mid-level GPU, a pathetic CPU, and 8GB of GDDR5. It is nothing to write home about in terms of specs. The Bone is even more pathetic compared to this low standard.

Both of these machines are underpowered relative to what we could have gotten; both console makers have been far too conservative.

Eh, for exclusives, they'll both be plenty capable of making awesome on their own. Maybe those games built with PCs in mind might suffer to fit and match performance targets, but these are not crappy PCs; they're powerful new consoles. Besides, it will be nice to see fixed-spec performance over time, using MS' and Sony's machines to their limits at gen's end. That's something that never happens on PC with its moving-target nature.
 

Reg

Banned
This narrative that the Bone is a very capable machine is straight-up laughable. The PS4 is a mid-level GPU, a pathetic CPU, and 8GB of GDDR5. It is nothing to write home about in terms of specs. The Bone is even more pathetic compared to this low standard.

Both of these machines are underpowered relative to what we could have gotten; both console makers have been far too conservative.

Not many people want to pay 600 bucks for a console.
 