
Xbox One hardware breakdown by ExtremeTech.com after HotChips reveal

RayMaker

Banned
Xbox One is powerful enough to have great looking games.

In relation to the PS4, lol.

People might as well stop the bickering and place their bets and eat crow later.

I'll go first

In the launch window all 3rd party games will be practically identical, with the PS4 versions having slightly punchier colours, higher-res textures barely noticeable to the naked eye, and a tiny bit less aliasing. In 2016 and beyond we might see these things stretched a little bit more in a few games.
 

Flatline

Banned
bloody hell.. Are you ok? Sounds like you're having a really shitty day and need to talk to someone about it!

To me it's custom silicon and MS have just given it their own PR names, nothing more. The 204 number has to come from somewhere though; I just find it laughable that it can't be real!!!! They made it up!!! Quite a thing to do that and plaster it all over the internet... Or is it just more MS bait to get the fanboys rabid lol


What a terrible post. I'm repeating myself here but since certain fans refuse to face reality I'll say it again. We're supposed to believe that Microsoft invented some magical technique that almost doubles the ESRAM's bandwidth, reaching the performance of a dual-bus ESRAM - a technique not even the ESRAM manufacturer knows about.

They're full of shit. They probably discovered a technique that achieves somewhat simultaneous reads/writes so they decided to double the bandwidth out of nowhere in their PR to make the numbers look better. Btw they did the same crap in June but that time they added all the different RAM bandwidths together, now they're just adding reads and writes which is even more preposterous if you think about it.
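For what it's worth, here's my own napkin math (purely an assumption-based sketch, using the Hot Chips figures of a 1024-bit ESRAM interface at 853 MHz) showing how a ~204 GB/s number can fall out of a single ~109 GB/s path once reads and writes get counted together:

# hypothetical back-of-the-envelope in Python, not official numbers
bus_bytes = 1024 // 8              # 1024-bit ESRAM interface = 128 bytes per cycle
clock_ghz = 0.853                  # Xbox One GPU/ESRAM clock

one_way = bus_bytes * clock_ghz    # ~109 GB/s for reads OR writes
doubled = one_way * 2              # ~218 GB/s if reads and writes are simply added
claimed = one_way * (1 + 7 / 8)    # ~205 GB/s if the second direction is only free 7 cycles out of 8

print(one_way, doubled, claimed)   # 109.184  218.368  204.72

Either way, those numbers are only reached when a workload happens to read and write the ESRAM at the same time; a pure read or pure write stream is still capped at ~109 GB/s.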
 

ekim

Member
Hey guys, have any of you read this? (Caution: English is not his 1st Language):



(source: http://misterxmedia.livejournal.com/132131.html)

So a good number of MS fanboys are going crazy about the above-posted material lol, and of course the same guy wrote an "interesting" article last summer regarding Sony:

(http://misterxmedia.livejournal.com/98352.html)

Surely his name should be "Suzy" now and he should therefore be quiet, but I digress lol... Anyway, what do you guys think?

Haha... I regularly check this site - these guys are nuts. When the Xbox One launches (of course without a secret dGPU) they will claim that MS will release a firmware update that enables another hidden APU.
 

B_Boss

Member
why, oh why would you post anything from those idiots?

Things that they predicted:

3 different socs/apus
3 different gpus
ray tracing chip
hybrid memory cube
ddr4 ram
10tflop console

etc etc.

think logically for a second. If Microsoft was holding such a card in their favor, would they have held on to it for the last few months after all the negative press just to get some insane positive light for themselves?

You know dude, I've never heard of the guy lol. I came across the article and user because of Reddit, did a search on the user and found the posted article; otherwise I don't think I would've posted it. I figured it was MS tech praising and a good place to analyze it is here @NGaf.
 

toff74

Member
why, oh why would you post anything from those idiots?

Things that they predicted:

3 different socs/apus
3 different gpus
ray tracing chip
hybrid memory cube
ddr4 ram
10tflop console

etc etc.

think logically for a second. If Microsoft was holding such a card in their favor, would they have held on to it for the last few months after all the negative press just to get some insane positive light for themselves?

Because right now it's a flop war which they know they can't win.

All these custom chips won't add to these numbers.

Seems like MS is going to let the games do the talking.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
Haha... I regularly check this site - these guys are nuts. When the Xbox One launches (of course without a secret dGPU) they will claim that MS will release a firmware update that enables another hidden APU.

[image: jizzaflopshwz5o.png]
 

PTG

Member
Haha... I regularly check this site - these guys are nuts. When the Xbox One launches (of course without a secret dGPU) they will claim that MS will release a firmware update that enables another hidden APU.

lol
 
Because right now it's a flop war which they know they can't win.

All these custom chips won't add to these numbers.

Seems like MS is going to let the games do the talking.

Special sauce is not going to eliminate the hugeass GPU deficiency.

MS is going to let the games do the talking because they literally can't do anything else. Their console is inferior in terms of specs and they obviously know it. If it wasn't, they wouldn't have to constantly muddy the water like they're currently doing (i.e. adding DDR3 + embedded memory bandwidths together).

At this point, their only hope is that all of Sony's non-launch-window first party games suffer the same fate as The Last Guardian, because the performance difference should be downright obvious once Naughty Dog releases something. You've got the exact same architecture as the competitor, except with more parts that can be used for GPGPU.
 
Seriously, people might be holding onto the hope for embedded memory special sauce but I cannot see how it can eliminate the sheer fact that the PS4's GPU is better than the Xbox One's GPU in literally every single way.
 
Let me go into maximum armchair mode for this:

The IBM Cell in PS3 had about 180 GFLOPS. So, the GPU in PS4 is like three Cells more powerful than Xbox One when it comes to raw power.

No, seriously: It's not only more FLOPS. PS4 also has 50% more texture mapping units, 100% more raster operation processors and four times as many compute pipelines.

Texture mapping is done with shaders, so it should be included in the FLOPS figure. Also, "four times as many compute pipelines" is a bit misleading, since it's more like 4 times as many entrances to each pipeline, from what I understand.
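Setting the TMU/pipeline debate aside, the "three Cells" part of the quote does check out on paper. A quick sketch, assuming the commonly cited peaks of ~1.84 TFLOPS for the PS4 GPU, ~1.31 TFLOPS for the Xbox One GPU and ~0.18 TFLOPS for the game-usable part of Cell:

# napkin math, assumed figures in TFLOPS
ps4_gpu  = 1.84
xb1_gpu  = 1.31
ps3_cell = 0.18

gap = ps4_gpu - xb1_gpu            # ~0.53 TFLOPS
print(gap / ps3_cell)              # ~2.9, i.e. roughly "three Cells" of raw throughput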
 

Klocker

Member
Until more is revealed by game makers about the true ability of the known hardware and how it functions *NO, NOT SECRET SAUCE DERAILING BULLSHIT* but the actual engineered system, we can't determine exactly how it compares to the PS4.

ExtremeTech's own quote here describes how even they can't quantify it yet:

The big picture takeaway from this is that the Xbox One probably is HSA capable

... Even with this new information, the use and capabilities of the ESRAM remain mysterious. It’s not clear what Microsoft expects it to be used for — if it’s for caching GPU data, why break it into 8MB chunks, and why does the CPU have a connection to it?
 

toff74

Member
Special sauce is not going to eliminate the hugeass GPU deficiency.

MS is going to let the games do the talking because they literally can't do anything else. Their console is inferior in terms of specs and they obviously know it. If it wasn't, they wouldn't have to constantly muddy the water like they're currently doing (i.e. adding DDR3 + embedded memory bandwidths together).

At this point, their only hope is that all of Sony's non-launch-window first party games suffer the same fate as The Last Guardian, because the performance difference should be downright obvious once Naughty Dog releases something. You've got the exact same architecture as the competitor, except with more parts that can be used for GPGPU.


For them it's better to keep people guessing and keep the debate open about voodoo and secret sauce than coming out and flatly admitting they have a weaker machine... That would be like hitting a self-destruct button. It just wouldn't make sense for them to do so.

So for now we are still talking about potentially made-up numbers and secret sauces...

If after launch the games do actually look a magnitude worse than the PS4 version then we can say 'yup, we were right all along', but they will have already got a good number of people to drink from the XBONE fountain. Job done!
 

Finalizer

Member
Can somebody explain the significance of the 0.5 TFLOPS difference between the two consoles? Is it really that much of a gap?

For a pure graphical dick-waving comparison, the GPUs in the PS4 & Xbone are roughly equivalent to a 7850 & 7770 respectively, so this handy-dandy comparison table gives an extremely rough idea of the performance difference.

Beyond that... It's hard to say what's going to show up in games. Between first party titles, there will certainly be a difference, no doubt about that. Getting into third party stuff, it's a little harder to say - fact of the matter is, I don't think there's ever been a console generation where two major players had such similar architectures in their systems and that same architecture happens to be x86, where there's decades of experience from PC development to build upon, so it's hard to really draw examples from previous generations as definitive proof of what will happen. Will devs target the Xbone as lead platform, then toss some glitter toward the PS4 and leave it at that? Will they target the PS4, and do serious hack jobs on the visuals to get them to run on the Xbone? Will they just target PC because "lol it's all x86," and simply cut down as necessary to hit acceptable performance on each console? How do you quantify what counts as a significant difference between the consoles anyway? Will it be immediately obvious just looking at it, or will devs cut away on AA/resolution for the Xbone version and try to leave basic visuals as untouched as possible? Actual graphics engineers would have more insight into those topics than I, but suffice to say it's gonna be an interesting generation to watch unfold.

And let it be said - there will be meltdowns come November. Grab some popcorn, sit back, and enjoy the show.
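If anyone wants the napkin math behind that 0.5 TFLOPS figure, this is how the usual numbers are derived - a rough sketch, assuming the widely reported 18 CUs @ 800 MHz for PS4 vs 12 CUs @ 853 MHz for Xbox One, with 64 ALUs per CU each doing one fused multiply-add (2 FLOPs) per clock:

# peak single-precision throughput in GFLOPS = CUs * 64 ALUs * 2 FLOPs * clock in GHz
ps4 = 18 * 64 * 2 * 0.800          # 1843.2 GFLOPS
xb1 = 12 * 64 * 2 * 0.853          # 1310.2 GFLOPS

print(ps4 - xb1)                   # ~533 GFLOPS, the "0.5 TFLOPS" gap
print(ps4 / xb1)                   # ~1.41, i.e. ~41% more raw shader throughput

Whether that 41% shows up on screen is the open question above, but that's where the number itself comes from.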
 
PS4 can create wavefronts for eight different compute kernels in parallel while doing graphics rendering at the same time. Xbox One can only create wavefronts for two compute kernels. PS4 can queue 64 compute kernels at the same time. Microsoft can only store somewhere between 2 and 16 (they didn't mention it on the slides).

Which is why the "entrances" comparison. For each pipeline, PS4 can have a lot more stuff ready to go in any order to maximize the use of each pipeline compared to XB1. But the peak performance of each pipeline in ideal situations remains the same. In the end, it'd be easier to get closer to peak performance on PS4 compared to XB1, as long as there are enough different programs to run on the CUs.
 

brosephimjoseph

Neo Member
For a pure graphical dick-waving comparison, the GPUs in the PS4 & Xbone are roughly equivalent to a 7850 & 7770 respectively, so this handy-dandy comparison table gives an extremely rough idea of the performance difference.

Beyond that... It's hard to say what's going to show up in games. Between first party titles, there will certainly be a difference, no doubt about that. Getting into third party stuff, it's a little harder to say - fact of the matter is, I don't think there's ever been a console generation where two major players had such similar architectures in their systems and that same architecture happens to be x86, where there's decades of experience from PC development to build upon, so it's hard to really draw examples from previous generations as definitive proof of what will happen. Will devs target the Xbone as lead platform, then toss some glitter toward the PS4 and leave it at that? Will they target the PS4, and do serious hack jobs on the visuals to get them to run on the Xbone? Will they just target PC because "lol it's all x86," and simply cut down as necessary to hit acceptable performance on each console? How do you quantify what counts as a significant difference between the consoles anyway? Will it be immediately obvious just looking at it, or will devs cut away on AA/resolution for the Xbone version and try to leave basic visuals as untouched as possible? Actual graphics engineers would have more insight into those topics than I, but suffice to say it's gonna be an interesting generation to watch unfold.

And let it be said - there will be meltdowns come November. Grab some popcorn, sit back, and enjoy the show.

Thanks buddy! I'm definitely going to be waiting on picking up a console at launch; observing others' meltdowns will almost be just as good as having a PS4/X1 haha
 

Melchiah

Member
The sentence "Game X will look 40% better" doesn't have meaningful semantics. People are referring to the fact that the PS4's GPU has 50% more ALU (running at 800mhz compared to 853mhz) leading to 41% more programmable raw computation power. Having the same architecture does not mean that those additional 384 ALUs (or "shader cores" or whatever we want to call them) are just there for shits and giggles. It just means that both GPUs are built from the same building blocks, only the PS4 has 50% more of them. That is not controversial, that is just an objective fact. And if those differences are meaningless then I wonder why so many people spend additional money for graphics card from the same vendor and the same family but with more processing power. It's really the same situation. Everybody can take from that what he wants.

The fact that the GPUs are similar leads me to believe it wouldn't be a problem for the devs to use better AA, AF, textures, shadowing, and/or effects on the more powerful one. I find it odd that some are arguing the difference won't show in 3rd party games because it would demand extra work, when it appears to be exactly the opposite. Makes me wonder whether the tone would be different if the roles were reversed?
 

artist

Banned
Nope.

Let me go into maximum armchair mode for this:

The IBM Cell in PS3 had about 180 GFLOPS. So, the GPU in PS4 is like three Cells more powerful than Xbox One when it comes to raw power.

No, seriously: It's not only more FLOPS. PS4 also has 50% more texture mapping units, 100% more raster operation processors and four times as many compute pipelines.
And this time the FLOPS metric is usable to properly quantify the grunt in the hardware (compared to the PS3/360 era, where FLOPS was not a meaningful comparative metric).

I'll just leave an Anandtech quote here;
Anandtech said:
there’s no escaping the fact that Microsoft has given the Xbox One less GPU hardware than Sony gave the PlayStation 4. Note that unlike the Xbox 360 vs. PS3 era, Sony's hardware advantage here won't need any clever developer work to extract - the architectures are near identical, Sony just has more resources available to use.
http://www.anandtech.com/show/6972/xbox-one-hardware-compared-to-playstation-4/2
 

antic604

Banned
They claimed that the cloud will increase the performance of Xbox One by a factor of 3.

In all fairness, this is not accurate - what they (Marc Whitten, I believe) explicitly said is that "for every physical Xbox One we build, we're provisioning the CPU and storage equivalent of three Xbox Ones on the cloud".

Source: http://www.oxm.co.uk/54748/xbox-one...e-equivalent-of-three-xbox-ones-in-the-cloud/

If that's the case, then it is nowhere near 3x Xbone performance, as the CPU itself is very weak in terms of TFLOPS, I think. And even if that 3x CPU power were the case, I guess PS4's 0.5 TFLOPS GPU advantage married with a more flexible GPGPU architecture will easily outperform the benefits the cloud can provide - after all, the things that can supposedly be offloaded to the cloud (e.g. physics, lighting, AI) are perfect for the GPGPU.
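To put some very rough, assumed numbers on that: an 8-core Jaguar at 1.75 GHz doing 8 single-precision FLOPs per core per clock works out to roughly 112 GFLOPS, so even taking the "3x CPU in the cloud" line at face value it's small next to the local GPU gap:

# napkin math, assumed figures; cloud latency and bandwidth ignored entirely
cpu_gflops   = 8 * 1.75 * 8        # ~112 GFLOPS for the whole Xbox One CPU
cloud_gflops = 3 * cpu_gflops      # ~336 GFLOPS of "three Xbox Ones" of cloud CPU
gpu_gap      = 1843 - 1310         # ~533 GFLOPS local GPU advantage for PS4

print(cpu_gflops, cloud_gflops, gpu_gap)

And that's before remembering the cloud compute sits on the other end of an internet connection, so it's only plausible for latency-tolerant work anyway.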
 

Finalizer

Member

My understanding is, they're both roughly between those respective cards, so a hypothetical 7780 vs 7860 would be the best bet. That said, I specifically called it an extremely rough comparison for a reason - there's more to the differences between the systems as a whole than just one GPU being beefier than the other. On top of that, as I understand it there's also the aspect of developers "coding to the metal" on consoles, where they can get more direct access to the system's capabilities compared to a PC where a lot of it is abstracted by DirectX, hence you get the whole "PC brute-forces it" thing. (don't quote me on that though - well out of my knowledge base here, just going on my understanding from what I've gleaned)

So, like I said, extremely rough comparison. Look at how some bars are bigger than the other. Mmm, bigger is better. Woooh~
 

TheD

The Detective
My understanding is, they're both roughly between those respective cards, so a hypothetical 7780 vs 7860 would be the best bet. That said, I specifically called it an extremely rough comparison for a reason - there's more to the differences between the systems as a whole than just one GPU being beefier than the other. On top of that, as I understand it there's also the aspect of developers "coding to the metal" on consoles, where they can get more direct access to the system's capabilities compared to a PC where a lot of it is abstracted by DirectX, hence you get the whole "PC brute-forces it" thing. (don't quote me on that though - well out of my knowledge base here, just going on my understanding from what I've gleaned)

So, like I said, extremely rough comparison. Look at how some bars are bigger than the other. Mmm, bigger is better. Woooh~

No, the XB1 GPU is just about dead on the HD 7770 in regards to processing power, pixel and texel fillrate.
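Rough numbers on that, assuming the commonly listed specs (HD 7770: 10 CUs, 40 TMUs, 16 ROPs at 1000 MHz; Xbox One GPU: 12 CUs, 48 TMUs, 16 ROPs at 853 MHz):

# napkin math; 64 ALUs per CU, 2 FLOPs per ALU per clock
def gpu_peaks(cus, tmus, rops, clock_ghz):
    return {
        "gflops":  cus * 64 * 2 * clock_ghz,   # peak single-precision compute
        "gtexels": tmus * clock_ghz,           # peak texel fillrate (GT/s)
        "gpixels": rops * clock_ghz,           # peak pixel fillrate (GP/s)
    }

print(gpu_peaks(10, 40, 16, 1.000))   # HD 7770:  1280 GFLOPS, 40.0 GT/s, 16.0 GP/s
print(gpu_peaks(12, 48, 16, 0.853))   # Xbox One: ~1310 GFLOPS, ~40.9 GT/s, ~13.6 GP/s

So compute and texturing are within a couple of percent of each other, and the 7770 actually edges ahead on pixel fillrate thanks to its higher clock.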
 

twobear

sputum-flecked apoplexy
Seems weird that they spent all that time and effort designing such an enormous chip with crazy customisations out the wazoo when a more standard part would give them better performance. What's the advantage of this? Sony are probably paying less for a more powerful APU, so what gives? Just over-engineering?
 

Finalizer

Member
No, the XB1 GPU is just about dead on the HD 7770 in regards to processing power, pixel and texel fillrate.

Ah, I remembered the Xbone had more CUs, but didn't know that the 7770 was 1000Mhz... So yeah, I guess they're pretty much neck-and-neck. Heh.

EDIT:

Seems weird that they spent all that time and effort designing such an enormous chip with crazy customisations out the wazoo when a more standard part would give them better performance. What's the advantage of this? Sony are probably paying less for a more powerful APU, so what gives? Just over-engineering?

They needed to guarantee 8GB RAM from the outset for their system design -> They stuck with 8GB DDR3 and needed a small on-die cache to help avoid bottlenecking -> Chuck 32MB ESRAM on the chip, devise some other stuff to help the system along (Move engines or whatever) -> "bloated" design with lesser performance overall
 
Whenever I see these technical threads where the XBone is just getting bashed with all this techno-mumbo-jumbo, I think, "well, what does this mean for actual games?"

Then I'm reminded...
[image: xbox-one-games.png]

Are you telling me you're gonna pay 100 more dollars for hardware that's inferior to the competition's because it has Titanfall and timed-exclusive DLC?

Good for you, buddy. In the meanwhile I'll get a PS4 and play Titanfall on my PC... 'Cause you know what, its system requirements are pretty low.
http://www.game-debate.com/games/index.php?g_id=8045&game=Titanfall
 

Klocker

Member
Seems weird that they spent all that time and effort designing such an enormous chip with crazy customisations out the wazoo when a more standard part would give them better performance. What's the advantage of this? Sony are probably paying less for a more powerful APU, so what gives? Just over-engineering?
That remains to be seen, if it is so. They spent an enormous amount of time, money and energy building huge models and testing to design a system that in many real-world gaming situations could be just as, if not more, efficient than a typical PC brute-force solution.

Only time in devs' hands will prove whether it paid off. We really cannot know that by looking at the numbers.
 

gofreak

GAF's Bob Woodward
Seems weird that they spent all that time and effort designing such an enormous chip with crazy customisations out the wazoo when a more standard part would give them better performance. What's the advantage of this? Sony are probably paying less for a more powerful APU, so what gives? Just over-engineering?

The die size is mostly due to the eSRAM.

The eSRAM is due to DDR3's relatively low bandwidth.

The DDR3 is due to Microsoft's need to lock in 8GB of RAM at an early stage of the console's design. DDR3 was the best choice for guaranteeing 8GB. It was too uncertain if chip density of other memory types would economically accommodate 8GB in the launch timeframe.

Sony was more flexible about the amount of memory they were going to put into PS4 (at one time it was only 2GB!). That allowed a different, higher bandwidth choice of memory (GDDR5), which allowed them to spend more APU silicon on other things. As it later turned out their choice of memory afforded them 8GB anyway.
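Putting rough numbers on that trade-off (a sketch assuming the commonly reported setups: a 256-bit DDR3-2133 interface on Xbox One, 256-bit GDDR5 at 5.5 Gbps per pin on PS4, and the 1024-bit on-die ESRAM at 853 MHz):

# peak bandwidth in GB/s = bus width in bytes * effective transfer rate in GT/s
ddr3_xb1  = (256 / 8) * 2.133      # ~68 GB/s   Xbox One main memory
gddr5_ps4 = (256 / 8) * 5.5        # ~176 GB/s  PS4 main memory
esram_xb1 = (1024 / 8) * 0.853     # ~109 GB/s  per direction, but only 32MB of it

print(ddr3_xb1, gddr5_ps4, esram_xb1)

Which is basically the whole story: DDR3 on its own would starve the GPU, so the ESRAM is there to make up the difference for whatever fits in 32MB.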
 

antic604

Banned
Thanks for clarification. Mea culpa.

It wasn't my intention to point you were wrong, so no need to feel sorry :)

I tried to make this point (3x CPU vs. 3x Xbone) a couple of times already in several threads, but it was never picked up by anyone - all people were arguing about was whether or not the cloud can deliver 3x Xbone performance, while MS never claimed that! For me, this puts the whole 'cloud' discussion into a completely different perspective, with its impact probably being much smaller than most Gaffers make it out to be...

But I digress, sorry to derail the thread :)
 

Finalizer

Member
They spent an enormous amount of time money and energy building huge models and testing to design a system

Bear in mind that that doesn't automatically mean a better system for it. Not for nothing, but MS isn't exactly known for their efficiency... Just look at how they've been late to the market with underperforming hardware in the past. Kin says hi.

Not trying to knock on their Xbone engineering effort; there's a logic behind it for sure. Just don't immediately assume it was the ideal logic to work with.
 

twobear

sputum-flecked apoplexy
They needed to guarantee 8GB RAM from the outset for their system design -> They stuck with 8GB DDR3 and needed a small on-die cache to help avoid bottlenecking -> Chuck 32MB ESRAM on the chip, devise some other stuff to help the system along (Move engines or whatever) -> "bloated" design with lesser performance overall

It's not just ESRAM, they've got like 15 custom processors in the SOC, didn't they say? Seems kind of ridiculous.
 

Klocker

Member
Bear in mind that that doesn't automatically mean a better system for it. Not for nothing, but MS isn't exactly known for their efficiency... Just look at how they've been late to the market with underperforming hardware in the past. Kin says hi.

Not trying to knock on their Xbone engineering effort; there's a logic behind it for sure. Just don't immediately assume it was the ideal logic to work with.


I said... Clearly... It remains to be seen. :)

none of us know for sure yet. We are guessing based on numbers.
 
Seriously, people might be holding onto the hope for embedded memory special sauce but I cannot see how it can eliminate the sheer fact that the PS4's GPU is better than the Xbox One's GPU in literally every single way.

EmptySpace thinks that the Xbone has at least compensated for the slower memory and weaker GPU through the ultra-fast ESRAM, but the hardware setup of the PS4 is just better from every angle.

CPU/GPU on-die and unified memory alone have eliminated most of the 'problems' with conventional PC setups - moving data back and forth, reading and writing and translating.

But they also made the RAM GDDR5, which means much higher bandwidth and faster speed, plus added a third bus to eliminate constant flushing and waiting.

Plus 4 extra CUs for compute, which is huuuuuge, and that's no exaggeration at all. To think that Infamous may not (?) even be using those, and it has those particle effects...

Overall just the better g-a-m-i-n-g hardware, no doubt.

EmptySpace is not excited about 3rd party, because they will not encourage parity.
 

antic604

Banned
The die size is mostly due to the eSRAM.

The eSRAM is due to DDR3's relatively low bandwidth.

The DDR3 is due to Microsoft's need to lock in 8GB of RAM at an early stage of the console's design. DDR3 was the best choice for guaranteeing 8GB. It was too uncertain if chip density of other memory types would economically accommodate 8GB in the launch timeframe.

Sony was more flexible about the amount of memory they were going to put into PS4 (at one time it was only 2GB!). That allowed a different, higher bandwidth choice of memory (GDDR5), which allowed them to spend more APU silicon on other things. As it later turned out their choice of memory afforded them 8GB anyway.

Pretty good summary.

I can only imagine how Mark Cerny's and his MS's equivalent's meetings with devs influenced the designs. My theory is this:

- for PS4 - devs wanted more & unified memory, more bandwidth for alpha stuff that was choking the PS3, with beefy GPU that doesn't need help like RSX did from Cell - as a result, they designed a console with (at the time) 4GB of fast, unified GDDR5 and a GPU with adequate buffer for future GPGPU enhancements that - in reverse to PS3 situation - can offload the relatively weak CPU,

- for Xbox - devs were already very pleased with the ease of developing for the Xbox 360, so they asked for more of the same - apart from some tweaks to how the memory works, the overall Xbone architecture is really 'just' an improved Xbox 360.
 

Finalizer

Member
I said... Clearly... It remains to be seen. :)

none of us know for sure yet. We are guessing based on numbers.

I would just make a clear distinction on what exactly remains to be seen. How much 3rd party devs will utilize the PS4's advantage is certainly up for debate, but the PS4's raw performance advantage is indisputable.
 

gofreak

GAF's Bob Woodward
It's not just ESRAM, they've got like 15 custom processors in the SOC, didn't they say? Seems kind of ridiculous.

The eSRAM is by far the bigger culprit when it comes to die size.

In terms of the task-specific processors, who says performance in those tasks would be the same or better in a case where they weren't there? There is a technical case for putting some tasks into custom hardware. I'm not sure that's over-engineering if you're picking your tasks smartly. The silicon cost is relatively small.

And I'm not a hardware engineer, but it may have been the case that inclusion of eSRAM gave some die space to spare that they 'might as well' fill in with low-heat-producing silicon that does something useful. Depending on the shape of the die, and the shape of various groups of transistors for different components, I'm guessing it's possible that you sometimes have patches of empty area... so die size without custom processors may not have been very much smaller at all.
 

twobear

sputum-flecked apoplexy
The eSRAM is by far the bigger culprit when it comes to die size.

In terms of the task-specific processors, who says performance in those tasks would be the same or better in a case where they weren't there? There is a technical case for putting some tasks into custom hardware. I'm not sure that's over-engineering if you're picking your tasks smartly. The silicon cost is relatively small.

If this were true, wouldn't every GPU do it?
 

Finalizer

Member
Titan Fall will be a system seller for Xbox one, that I'm sure of.

Frankly, I'm dubious of the notion. I feel like the 360 release will cannibalize Xbone sales. The problem with Titanfall is that it sells on gameplay, not graphics - unless the 360 port is a turd, I can see a big chunk of the audience just sticking around on the 360, especially since IIRC there's no 360/Xbone crossplay, so players will just wanna stick where the audience is at. Not to say it won't move systems at all, but I feel it'll have far less of an impact than if the game were totally Xbone exclusive, so it won't have the impact that Gears of War had for 360 sales, for example.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
It's not just ESRAM, they've got like 15 custom processors in the SOC, didn't they say? Seems kind of ridiculous.

It's not really ridiculous. It includes stuff like video/audio decode/encode, general audio processing, those move engines to help with ESRAM/DDR3 synchronization and related tasks, composition of display outputs (to blend stuff like TV and overlays), etc. People just immediately think of some crazy rendering-related special sauce even though every device nowadays incorporates special-purpose hardware.
 

EvB

Member
Are you telling me you're gonna pay 100 more dollars for hardware that's inferior to the competition's because it has Titanfall and timed-exclusive DLC?

Good for you, buddy. In the meanwhile I'll get a PS4 and play Titanfall on my PC... 'Cause you know what, its system requirements are pretty low.
http://www.game-debate.com/games/index.php?g_id=8045&game=Titanfall

And whilst you are doing that, I'll buy them all and not be petty and piss on other people's preferences.

-------

You like broccoli?

FUCK YOU IDIOT! ALL THE COOL KIDS ARE EATING ASPARAGUS!
 
Pretty good summary.

I can only imagine how Mark Cerny's and his MS's equivalent's meetings with devs influenced the designs. My theory is this:

- for PS4 - devs wanted more & unified memory, more bandwidth for alpha stuff that was choking the PS3, with beefy GPU that doesn't need help like RSX did from Cell - as a result, they designed a console with (at the time) 4GB of fast, unified GDDR5 and a GPU with adequate buffer for future GPGPU enhancements that - in reverse to PS3 situation - can offload the relatively weak CPU,
.

antic604 can just imagine what kind of monster the PS4 would have been if it had kept the Cell instead of the Jaguar. We could have gotten a 7-year-old CPU that is still better than an i7 and can do GPU tasks, plus a GPU that can do compute.

antic604 and EmptySpace can only dream.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
We could have gotten a 7-year-old CPU that is still better than an i7 and can do GPU tasks

That's not true. Cell had great SIMD-performance but the PPU was, frankly, shit and easy to stall. In addition, the SPUs were restricted to their local memory with no direct, care-free access to main memory. Theoretical peak performance doesn't mean much when you are talking about very different architectures.
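For reference, the "~180 GFLOPS" figure quoted earlier in the thread is usually derived like this (a sketch assuming the 6 SPEs available to games plus the PPU's VMX unit, each doing 4-wide single-precision FMAs at 3.2 GHz):

# napkin math; theoretical single-precision peak only
clock_ghz = 3.2
spe_flops = 6 * 4 * 2 * clock_ghz   # 6 game-usable SPEs * 4 lanes * FMA = ~153.6 GFLOPS
ppu_flops = 1 * 4 * 2 * clock_ghz   # PPU's VMX unit                    = ~25.6 GFLOPS

print(spe_flops + ppu_flops)        # ~179 GFLOPS, hence the "about 180" figure

Hitting anything close to that meant hand-feeding the SPEs through their 256KB local stores, which is exactly why a theoretical peak like this says little about how the chip behaves on general code.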
 

Yoday

Member
I was listening to the videogamer.com podcast tonight, and during it one of the guys mentioned talking to the Need for Speed Rivals folks at Gamescom. During the interview the guy said flat out that one of the next gen versions of the game was going to look better than the other. Now, this is obvious to us, but it is interesting to hear of a developer confirming it. If they are going to mention that there is a difference, then it would have to be a fairly significant difference.

Skip to 44:00

http://www.youtube.com/watch?v=Itrb69FmqGQ&feature=c4-overview&list=UUR1wCXM9e7NshYfonwa5BAg
 

ekim

Member
I was listening to the videogamer.com podcast tonight, and during it one of the guys mentioned talking to the Need for Speed Rivals folks at Gamescom. During the interview the guy said flat out that one of the next gen versions of the game was going to look better than the other. Now, this is obvious to us, but it is interesting to hear of a developer confirming it. If they are going to mention that there is a difference, then it would have to be a fairly significant difference.

Skip to 44:00

http://www.youtube.com/watch?v=Itrb69FmqGQ&feature=c4-overview&list=UUR1wCXM9e7NshYfonwa5BAg

Yeah. We discussed that in several threads :)
 