
Xbox One hardware breakdown by ExtremeTech.com after HotChips reveal

timlot

Banned
First of all, it wasn't I who went ahead and posted any of that guy's stuff to somehow prove a point. Someone else did, seeming to pass it off as their own, without sourcing where it came from, and I simply pointed people to where I know that's coming from, because I remember reading it.

Separately, if by fluff posts you mean some of the stuff I've posted from people on beyond3D, then you really do have no idea what you're talking about, because some of the stuff I've posted has either come from someone that actually worked on the Xbox One audio chip personally, or I've posted stuff from a Sony First Party developer, or confirmed game developers that have actually coded for and shipped Xbox 360 or PS3 titles. I don't see where the 'fluff' is in that. And if by fluff 'piece', you mean articles from websites such as Eurogamer or Anandtech with proven track records, especially when some of that stuff has been confirmed to be accurate in one form or another, then all I can do is laugh at the fact that this bothers you.

Outside of that, I hardly ever post anything from some random corner of the internet, so I don't know who you think you're confusing me with.

And, whether you like it or not, know this: when you go around making your console warrior accusations while pretending you yourself are nothing of the sort, or somehow more knowledgeable or qualified to speak about the things I've posted from people who are either directly hands-on with the actual hardware in question, or at least have confirmed contact with people who are, it's not me who has the mentality of a warrior; it's you. Exactly as others have said, there's an amazing degree of chutzpah in people pretending they somehow know more and can speak about what's going on in these machines with greater understanding than people who are actually qualified to do so (not saying I'm one such individual, because I'm not; some won't even admit that much). These folks aren't disregarding the views of armchair, wannabe engineers, nope. In many cases, actual engineers somehow don't know what they're talking about if they aren't saying what you wish to hear. Not armchair programmers, but in many cases actual game programmers with years of experience making videogames, and with recent and current experience developing games, whose opinions you classify as 'fluff.' Come on, buddy, give me a break.

Only someone such as yourself can look at some of these people, which I'm quite sure you're not entirely ignorant of, and just shrug them off as somehow not knowing what they're talking about. Hell, I've even posted stuff from Dave Baumann, of all people, that folks on here have outright suggested is meaningless or makes no sense, like the man wouldn't know what he's talking about regarding, of all things, AMD Graphics hardware. Next time look before you jump off that bridge.
let-the-church-say-amen.jpg
 
First of all, it wasn't I who went ahead and posted any of that guy's stuff to somehow prove a point. [...] Next time look before you jump off that bridge.

Does this mean we have seen the last of nib95?
 
Virtual addresses are not seamless. They are mapped on top of pages, which at the minimum are 4KiB in size on GCN. However, textures (and render targets) have special tiling modes, and may not have a supported mode that actually resolves to 4KiB tiles. The PS4 and desktop GCN cards have texture tiles of 64KiB. Even if DMEs can magically untile memory, it's not going to solve the issue that texture pages are likely to be 64KiB in size.

On top of that DMEs aren't going to be able to intercept pixels from the ROPs, which is really what you would need to do to scatter blended pixels. ROPs are going to operate on top of a render target, and render targets are going to follow texturing rules.


I think you're really overstating the system's flexibility. Virtual memory is flexible sure, but it still has a lot of rules.

I don't see the big limitation you are implying. So the virtual addresses are not granular enough to hold only a single pixel? Why wouldn't you still be able to scatter 64KiB worth of pixels across both memories? That would still leave you with plenty of room to divide the framebuffer between both pools.
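For a sense of the page-granularity math the two posts above are arguing about, here is a minimal back-of-envelope sketch (my own illustration, not anything from the leaked docs): it counts how many 64 KiB tiles a 1080p, 32-bit render target occupies and splits them between two hypothetical pools. The 3/4 split ratio is an arbitrary assumption.

```c
#include <stdio.h>

/* Back-of-envelope: how coarse is 64 KiB page/tile granularity for a
 * 1920x1080, 4-bytes-per-pixel render target, and how might those pages
 * be divided between two memory pools? All sizes are assumptions used
 * purely for illustration. */
int main(void) {
    const unsigned width = 1920, height = 1080, bytes_per_pixel = 4;
    const unsigned tile_size = 64 * 1024;              /* 64 KiB tile/page */

    unsigned long long target_bytes =
        (unsigned long long)width * height * bytes_per_pixel;
    unsigned long long tiles = (target_bytes + tile_size - 1) / tile_size;

    /* Hypothetical split: keep 3/4 of the tiles in the fast on-chip pool
     * and spill the rest to main RAM. */
    unsigned long long fast_tiles = tiles * 3 / 4;
    unsigned long long slow_tiles = tiles - fast_tiles;

    printf("render target: %.2f MiB, %llu tiles of 64 KiB\n",
           target_bytes / (1024.0 * 1024.0), tiles);
    printf("split: %llu tiles in pool A, %llu tiles in pool B\n",
           fast_tiles, slow_tiles);
    return 0;
}
```

Even at 64 KiB granularity a 1080p target is still on the order of 127 pages, so splitting a buffer between pools is coarse, but far from impossible, which is roughly the point being argued here.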



L1 latency is likely in the 100s of cycles (GDC GCN slides mention that L1 latency is on average 20x that of LDS, which is probably much closer to the 10s-of-cycles range as it's actually attached to the CU before the caches). I would have guessed that the ESRAM was going to have really good latency too, until I really looked into the latency of AMD cards. Sony employees are just as susceptible to speculation as anyone else. Latency numbers for things like caches aren't exactly bandied about in the SDK notes. You'd actually have to go looking for them to find them.
Yeah, AMD GPUs have terrible cache latencies compared to Nvidia's. That's actually one of the reasons why Nvidia GPUs can outmatch them even though they theoretically have fewer compute resources.

That's exactly why I think MS spent so much die area on this chip by adding more memory instead of just adding more processing power... At this point I can only speculate, but the VGLeaks info does point out, multiple times, that the advantages of the eSRAM are low latency and being free of contention from other clients. Their design doesn't even make sense if the eSRAM latency is actually bad.
 
I don't see how you could possibly determine the memory location of individual pixels dynamically based on their properties. Maybe you have tile-based deferred rendering in mind? That works differently.

That part of preemptively splitting the buffer is just me tripping a little while thinking of the possibilities XD

My actual point was that the buffer might be split into both memory pools.
 

KidBeta

Junior Member
Yeah, AMD GPUs have terrible cache latencies compared to Nvidia's. That's actually one of the reasons why Nvidia GPUs can outmatch them even though they theoretically have fewer compute resources.

That's exactly why I think MS spent so much die area on this chip by adding more memory instead of just adding more processing power... At this point I can only speculate, but the VGLeaks info does point out, multiple times, that the advantages of the eSRAM are low latency and being free of contention from other clients. Their design doesn't even make sense if the eSRAM latency is actually bad.

But if you have to check the caches for each access (and it seems that you do), then I guess it only adds a little bit on top, but it's still going to be high (hundreds of clock cycles + eSRAM access latency).

The irony here is that being able to bypass the GPU's caches, using something like the Onion+ bus in the PS4, would provide a huge benefit to a low-latency setup like the eSRAM.
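To make the "hundreds of cycles plus eSRAM" point concrete, here is a tiny back-of-envelope sketch. Both cycle counts are illustrative assumptions echoing the estimates above, not measured figures:

```c
#include <stdio.h>

/* Rough effective-latency model for a GPU read that has to walk the cache
 * hierarchy before it ever touches the eSRAM. Both numbers below are
 * assumptions for illustration only. */
int main(void) {
    const int cache_check_cycles = 300; /* assumed: L1/L2 checks, "hundreds of cycles" */
    const int esram_cycles       = 50;  /* assumed: even if the eSRAM itself is quick */

    printf("effective read latency: ~%d cycles\n", cache_check_cycles + esram_cycles);
    printf("of which eSRAM itself : ~%d cycles\n", esram_cycles);
    return 0;
}
```

The takeaway is that even if the eSRAM array itself is fast, the cache checks in front of it dominate the round trip, which is exactly why a cache-bypass path would be so valuable.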
 

Klocker

Member
But if you have to check the caches for each access (and it seems that you do), then I guess it only adds a little bit on top, but it's still going to be high (hundreds of clock cycles + eSRAM access latency).

The irony here is that being able to bypass the GPU's caches, using something like the Onion+ bus in the PS4, would provide a huge benefit to a low-latency setup like the eSRAM.

Waits for day when Ms announces that they have an onion+ feature in Xbone ;)
 

badb0y

Member
Not sure if serious.... o_O

It's actually a fact that on-paper performance does not translate 1:1 into actual games.

It's OK though, games like Watch Dogs and BF4 will give us a clear idea of what the difference really means in real-world situations. This time there will be no common-denominator effect if the PC version is the lead and is in fact the superior version, and with PS4 having the easiest architecture, with no BS eSRAM and complicated move engines needed to reach theoretical bandwidth, there will be no excuse if PS4 games aren't showing a 40% advantage. But if they do, and PS4 versions are running 60fps vs 30 for the Xbone, or show a significant difference in visuals, then I'll be eating crow, but I'm pretty confident that I'll be feeding it instead lol.

I don't take things at face value, I actually did my own analysis on this a few weeks ago.

http://www.neogaf.com/forum/showpost.php?p=74541511&postcount=621

The basic conclusion I came to was that the difference in power (teraflops) between the HD 7770 and HD 7850 is 37.5%, but the HD 7850 had a performance advantage of 55% over the HD 7770 across 8 games.
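For anyone who wants to check that arithmetic, here is a small sketch. The shader counts and clocks are the standard published figures for the HD 7770 GHz Edition and HD 7850; the 55% gaming advantage is simply the average from the linked post, taken as given:

```c
#include <stdio.h>

/* Peak single-precision throughput = shader count * 2 ops (FMA) * clock.
 * Card specs are the commonly published ones; the 55% gaming advantage is
 * the average reported in the linked post, not something computed here. */
int main(void) {
    double tflops_7770 = 640  * 2 * 1.000e9 / 1e12;  /* ~1.28 TFLOPS */
    double tflops_7850 = 1024 * 2 * 0.860e9 / 1e12;  /* ~1.76 TFLOPS */

    double paper_gap  = (tflops_7850 / tflops_7770 - 1.0) * 100.0;  /* ~37.6% */
    double gaming_gap = 55.0;   /* measured average across 8 games (per the post) */

    printf("on-paper compute gap : %.1f%%\n", paper_gap);
    printf("measured gaming gap  : %.1f%%\n", gaming_gap);
    return 0;
}
```

The measured gap exceeding the flops delta is consistent with the 7850 also having twice the ROPs and roughly double the memory bandwidth, so the comparison is about more than just teraflops.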

Waits for day when Ms announces that they have an onion+ feature in Xbone ;)

If they did, wouldn't we know about it already? I mean, the chip design and memory system are pretty much public knowledge at this point.
 

Codeblew

Member
Yes, PS4 has 32 ROPs and 8GB hUMA GDDR5, Xbox One only has 16 ROPs and the DDR3+eSRAM combo.

ROPs are a pretty good indicator to make a rough estimate of a GPU's weight class. Microsoft wouldn't have decided to use 16 ROPs if the GPU in Xbox One was as powerful as the GPU in PS4.

Yeah, I understand hUMA because that fits in with what I do at work on occasion, I just haven't been keeping up with all the GPU specs between the consoles. It will definitely be interesting from a technical standpoint to see the difference between both 3rd party and 1st party titles in the next 1-3 years.
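As a rough illustration of why ROP count gets used as a weight-class proxy in the quoted post, here is the peak pixel fill-rate arithmetic using the commonly reported clocks (800 MHz for PS4, 853 MHz for Xbox One). Treat it as a sketch; peak fill rate is rarely the whole story.

```c
#include <stdio.h>

/* Peak pixel fill rate = ROPs * GPU clock. Clocks are the widely reported
 * figures; real throughput also depends on bandwidth, blending, formats, etc. */
int main(void) {
    double ps4_gpix = 32 * 0.800;   /* 32 ROPs * 800 MHz = 25.6 Gpixels/s */
    double xb1_gpix = 16 * 0.853;   /* 16 ROPs * 853 MHz = 13.6 Gpixels/s */

    printf("PS4 peak fill rate : %.1f Gpixels/s\n", ps4_gpix);
    printf("XB1 peak fill rate : %.1f Gpixels/s\n", xb1_gpix);
    printf("ratio              : %.2fx\n", ps4_gpix / xb1_gpix);
    return 0;
}
```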
 
First of all, it wasn't I who went ahead and posted any of that guy's stuff to somehow prove a point. [...]

Only someone such as yourself can look at some of these people, which I'm quite sure you're not entirely ignorant of, and just shrug them off as somehow not knowing what they're talking about. Hell, I've even posted stuff from Dave Baumann, of all people, that folks on here have outright suggested is meaningless or makes no sense, like the man wouldn't know what he's talking about regarding, of all things, AMD Graphics hardware. Next time look before you jump off that bridge.

No one would care about you quoting people like Dave Baumann, except it's usually in the context of you promoting your willful misinterpretation of whatever they said to support your pet theories or platform agenda.
 

ToyBroker

Banned
I clearly must not have read as many of his posts as you have, but the guy seems pretty damn neutral. Got any examples?

You know what's hilarious about you?

The way you talk tech like you know what you're talking about, and then assume this Mynd guy is worthy of discussing here when he posts silly things like that 768 instructions will favor the Xbone--right then and there you shoulda known...since you're the tech-wiz. Or are you only the tech-wiz when it's serving your agenda?

You literally have no idea what you're talking about most of the time, do you?

I'm not trying to be a dick, but I'm also not even CLOSE to the first person to point this out to you.
 

RayMaker

Banned
Xbox One is powerful enough to have great looking games.

In relation to the PS4, lol.

People might as well stop the bickering, place their bets, and eat crow later.

I'll go first.

In the launch window, all 3rd party games will be practically identical, with the PS4 versions having slightly punchier colours, higher-res textures that are barely noticeable to the naked eye, and a tiny bit less aliasing. In 2016 and beyond we might see these things stretched a little bit more in a few games.
 

Metfanant

Member
and then assume this Mynd guy is worthy of discussing here when he posts silly things like that 768 instructions will favor the Xbone--right then and there you shoulda known...since you're the tech-wiz. Or are you only the tech-wiz when it's serving your agenda?

Not for nothing... but Mynd has ALWAYS seen things through green-tinted glasses. He is intelligent, for the most part knows what he is talking about, and is actually a developer from what I remember...

but he CERTAINLY has had a pro-MS slant since the beginning of the previous generation
 
You know what's hilarious about you?

The way you talk tech like you know what you're talking about, and then assume this Mynd guy is worthy of discussing here when he posts silly things like that 768 instructions will favor the Xbone--right then and there you shoulda known...since you're the tech-wiz. Or are you only the tech-wiz when it's serving your agenda?

You literally have no idea what you're talking about most of the time, do you?

I'm not trying to be a dick, but I'm also not even CLOSE to the first person to point this out to you.

Bubut... Move Engines... Audio... Efficiency!
 

ToyBroker

Banned
Bubut... Move Engines... Audio... Efficiency!

You forgot low latency, I think it's his favorite one.

And that Dave Buchanan (sp?) guy from AMD/B3D. He's posted that quote of his more than anyone on the internet combined. I've seen people mention it before, but if what Dave said was anything worthy of discussion in terms of the power of PS4/Xbone in relation to each other, it would have been posted on every single major news outlet.

You know who reported on it? No one--well except SenjutsuSage who posted it over 10+ times during that time.
 

jaypah

Member
Oww, the swarm is on! :p But was there ever anything new from this article? Earlier in the thread it seemed that it was all stuff that was already stated. Then I had to go and do real life stuff so I couldn't follow along.
 

Erasus

Member
Anyone who really thought the Xbox One was going to launch $100 more and significantly weaker was just being dumb or a crazy fanboy. It may, in fact, be weaker but it would never be so much so that third party games would look different.

The GPU still has fewer CUs, brah. If they keep each frame in the eSRAM budget it will be fine. If devs can't, they need to do the Halo 3 "trick" and lower resolution or effects.
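On the "keep each frame in the eSRAM budget" point, here is a quick sketch of how far 32 MB goes at 1080p. The buffer formats (a flat 4 bytes per pixel) are illustrative assumptions; real engines mix formats and can page targets in and out of eSRAM.

```c
#include <stdio.h>

/* How many full-resolution render targets fit in 32 MB of eSRAM?
 * The 4-bytes-per-pixel format is an assumption for illustration. */
int main(void) {
    const double esram_mb = 32.0;
    const double mb_1080p = 1920.0 * 1080.0 * 4 / (1024 * 1024); /* ~7.9 MB */
    const double mb_720p  = 1280.0 * 720.0  * 4 / (1024 * 1024); /* ~3.5 MB */

    printf("1080p 32bpp target: %.1f MB -> %.1f of them fit in eSRAM\n",
           mb_1080p, esram_mb / mb_1080p);
    printf("720p  32bpp target: %.1f MB -> %.1f of them fit in eSRAM\n",
           mb_720p, esram_mb / mb_720p);
    return 0;
}
```

A deferred G-buffer with several fat targets plus depth can blow past 32 MB at 1080p, which is where the "lower the resolution or effects" option comes in.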
 
You know what's hilarious about you?

The way you talk tech like you know what you're talking about, and then assume this Mynd guy is worthy of discussing here when he posts silly things like that 768 instructions will favor the Xbone--right then and there you shoulda known...since you're the tech-wiz. Or are you only the tech-wiz when it's serving your agenda?

You literally have no idea what you're talking about most of the time, do you?

I'm not trying to be a dick, but I'm also not even CLOSE to the first person to point this out to you.

Interesting way to look at it. Well then I suppose I should be happy this puts me in pretty good company with a lot of other posters trying to talk about the exact same things. :p

original.gif
 

B_Boss

Member
Hey guys, have any of you read this? (Caution: English is not his 1st Language):

Is it where Microsoft hides second GPU inside Xbox One?
misterxmedia
August 28th, 19:22

Mistercteam: Analysis showing the Xbox One is actually bigger than 363 mm^2, and why MS said 363 mm^2 is only for the main SoC

Some backup data:
- MS claims the X1 main SoC = 363 mm^2 = ~5 billion transistors
- The 7970 is 365 mm^2
- Both the 7970 and the X1 use industry-standard chip packaging, which is why you can perfectly match the rectangular area (the packaging dimensions are the same, but the die area is not)

original source of image
-
15-big-amd-radeon-hd7970-asus-dc2.jpg

-
133573-34.jpg

-
01.jpg


First, the reprocessed image sources, adjusted to the same orientation with the rectangular area marked for overlay purposes:
*) to be more open, I provide the sources

7970
n8fx5cd.jpg


XB1
wpjZHEX.jpg


7970 rotated 45 degrees
88Ho4ei.jpg


7970 + X1 overlaid on each other, plus the X1 mark repositioned onto the 7970 die
KRQTuQK.jpg


Now, using the ruler tool:
the 7970 --> 363-365 mm^2
the XB1 is surprisingly 522-525 mm^2
Basically, there is ~175 mm^2 of space for something

All the assumptions and further analysis below are based on the remaining transistors being for a dGPU, with the dGPU's SRAM already moved to the main SoC -> remember, 47MB (more than 32MB, as 11-12 MB are probably for the dGPU).
So, with the assumption that all the remaining transistor budget goes to a dGPU, or custom DX 11.2 / VI tech, let's see. (Remember that 30-40% of the 7970, for example, is SRAM budget, so 1792-2048 ALUs without L2 SRAM take less transistor budget than the 7970's 4 billion.)

So, analysis of the 175 mm^2:

1st assumption: the 175 mm^2 is 28nm too (TSMC CoWoS can intermix node processes)
-> 175 mm^2 without SRAM means 2-3 billion transistors, enough for 1792-2048 ALUs

2nd assumption: the 175 mm^2 is 20nm (die made on GloFo 20nm)
-> 175 mm^2 --> ~4 billion, enough for 2304-2560 ALUs

With SRAM & a small CPU:
--> 28nm => 1152-1280 ALUs
--> 20nm -> 1792-2048 ALUs

Imagine with 3D-stacked W2W.

The good thing: it is 100% sure that the 5 billion is only for the main SoC, and MS still hasn't disclosed the rest of the transistor budget.

My own speculation: I believe it is probably only 2.5D CoWoS and not full 3D, but with the above explanation, who can complain; even with the lower estimate the dGPU still packs > 3 TF, and if it is W2W, well, damn.

This also explains why the China rumor said 384-bit --> Radeon 8880 --> 1792/2048 ALU future GPU, and why the pastebin listed 20-26 CUs, 2.5-3 TF.

Interesting

(source: http://misterxmedia.livejournal.com/132131.html)

So a good amount of MS fanboys are going crazy about the above posted material lol, and of course the same guy wrote an "interesting" article last summer regarding Sony:

(http://misterxmedia.livejournal.com/98352.html)

Surely his name should be "Suzy" now and should therefore be quiet but I digress lol...Anyway what do you guys think?
 
Anyone who really thought the Xbox One was going to launch $100 more and significantly weaker was just being dumb or a crazy fanboy. It may, in fact, be weaker but it would never be so much so that third party games would look different.

So third party games won't look different?

I'll take the traditional wait and see approach to that question.
 

Smash

Banned
Interesting way to look at it. Well then I suppose I should be happy this puts me in pretty good company with a lot of other posters trying to talk about the exact same things. :p



It puts you in the same company as Reiko. You have a few talking points that you don't even understand, since you copy them from other forums or some MS PR, and you keep hammering them here, many times incorrectly, until someone calls you out.
 

B_Boss

Member
It's a custom AMD APU. It's not just 8 Jaguar cores + a 7790.

Well, I know that, but I was wondering what was going on with that article lol. I mean, it almost seems like an "uh-oh guys, XB1 may be more powerful than PS4", especially considering MS' stance that "numbers don't matter" lol.....
 

Chobel

Member
Hey guys, have any of you read this? (Caution: English is not his 1st Language):



(source: http://misterxmedia.livejournal.com/132131.html)

So a good amount of MS fanboys are going crazy about the above posted material lol, and of course the same guy wrote an "interesting" article last summer regarding Sony:

(http://misterxmedia.livejournal.com/98352.html)

Surely his name should be "Suzy" now and should therefore be quiet but I digress lol...Anyway what do you guys think?

A8pNprD.gif


But why stop there, MS can still fit 2 Titans.
 

itsgreen

Member
If it was in there MS would have said something by now. They would have said something when they unveiled it. And they would have outright said: "We are much more powerful than our competitor."

Shadow games after reveal is BS.
 

Piggus

Member
Well, I know that, but I was wondering what was going on with that article lol. I mean, it almost seems like an "uh-oh guys, XB1 may be more powerful than PS4", especially considering MS' stance that "numbers don't matter" lol.....

More like that guy has no fucking idea what he's talking about lol.
 

Buggzy18

Banned
Yup, I bought a GPU last year and it packs 3.5 to 4 TFLOPS of performance. Both consoles are very weak; let's hope this gen lasts for 5 years.

The 40% isn't BS, it's a fact. Deal with it.

Wow! You must be fending the women off with all that power.

Some gamers don't care that much about power and just want to play great games.
 

Chobel

Member
On a serious note, is there an explanation for this "204GB/s"? And is it really possible to have a 109GB/s minimum bandwidth? 109GB/s is the peak bandwidth when doing reads or writes only.
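For what it's worth, here is the back-of-envelope reading of those two figures that has been floating around since the upclock was reported. The 128-byte-per-cycle interface width and the "read and write can overlap on roughly 7 of 8 cycles" behaviour are assumptions based on press reports, not an official derivation:

```c
#include <stdio.h>

/* One popular reading of the eSRAM numbers (assumptions, not official):
 *   - 853 MHz clock, 128 bytes transferred per cycle in one direction
 *   - on roughly 7 of every 8 cycles a read and a write can overlap    */
int main(void) {
    const double clock_hz  = 853e6;
    const double bytes_cyc = 128.0;

    double one_way  = clock_hz * bytes_cyc / 1e9;    /* ~109.2 GB/s */
    double combined = one_way * (1.0 + 7.0 / 8.0);   /* ~204.7 GB/s */

    printf("read-only or write-only : ~%.1f GB/s\n", one_way);
    printf("read+write overlap      : ~%.1f GB/s\n", combined);
    return 0;
}
```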
 

B_Boss

Member
If it was in there MS would have said something by now. They would have said something when they unveiled it. And they would have outright said: "We are much more powerful than our competitor."

Shadow games after reveal is BS.

That is exactly my feeling....while "numbers don't matter" I am certain they would've paraded this shit around till world's end if there were more to it.
 

B_Boss

Member
Yup, I bought a GPU last year and it packs 3.5 to 4 TFLOPS of performance. Both consoles are very weak; let's hope this gen lasts for 5 years.

The 40% isn't BS, it's a fact. Deal with it.

You know......."weak" isn't always objective... You mention your tflops etc.... well OK, name Sonic 1 or Tetris or any well-designed old-school game and tell me: why are they still awesome? Tflops? Polygon count? No. Stats contribute to a game's digital makeup, not the enjoyment and actual design integrity... In any event, Buggzy18 nailed it.
 

Spongebob

Banned
Hey guys, have any of you read this? (Caution: English is not his 1st Language):



(source: http://misterxmedia.livejournal.com/132131.html)

So a good amount of MS fanboys are going crazy about the above posted material lol, and of course the same guy wrote an "interesting" article last summer regarding Sony:

(http://misterxmedia.livejournal.com/98352.html)

Surely his name should be "Suzy" now and should therefore be quiet but I digress lol...Anyway what do you guys think?
980.gif
 
Hey guys, have any of you read this? (Caution: English is not his 1st Language):



(source: http://misterxmedia.livejournal.com/132131.html)

So a good amount of MS fanboys are going crazy about the above posted material lol, and of course the same guy wrote an "interesting" article last summer regarding Sony:

(http://misterxmedia.livejournal.com/98352.html)

Surely his name should be "Suzy" now and should therefore be quiet but I digress lol...Anyway what do you guys think?
I don't believe this at all. Why wouldn't Microsoft be talking about it? Especially since the biggest complaint is higher price, lower specs..
But I really want it to come true just so I can see the internet explode.
 

toff74

Member
No, they never explained how they do the math for the 204GB/s. They never said whether they have hUMA or not. They never showed a single tech demo. They tried to downplay the relevance of spec sheets despite the fact that both consoles use the same architecture. They tried to sugarcoat standard features of AMD processors as "specialized XY". They claimed that the cloud will increase the performance of Xbox One by a factor of 3.

bloody hell.. Are you ok? Sounds like you're having a really shitty day and need to talk to someone about it!

To me, it's custom silicon and MS have just given it their own PR names, nothing more. The 204 number has to come from somewhere though; I just find it laughable that it can't be real!!!! They made it up!!! Quite a thing to do that and plaster it all over the internet.. Or is it just more MS bait to get the fanboys rabid lol
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
Just to add something productive to the warranted question of whom to trust (and not to trust) when it comes to technical statements on anonymous internet forums: you can always google specific terms and look for other explanations from respectable sources. For instance, googling the term "gpu architecture" led to this quite nice set of slides

http://s09.idav.ucdavis.edu/talks/02_kayvonf_gpuArchTalk09.pdf

The slides themselves use other terms, but anything you google will eventually lead you to Wikipedia pages. When it comes to general computer science topics, Wikipedia's reliability is above average, so at least you don't have to buy and read "scholarly" and expensive books.

So, with no intention of name-calling or shaming, if we take the statement that ALUs in a general GPU architecture are not saturated by programs comprised of fewer "instructions" than the ALU count, we can look at slides 17 and 20 (and their combination in slide 24) to see what actually happens: a modern GPU is decomposed into several units, each capable of managing and executing an independent stream of instructions (= a compiled part of an overall program) in parallel, and each unit is again decomposed into several ALUs (= units that perform basic arithmetic operations) which execute the same instruction on many data items in parallel.

gpusimdiqsyc.jpg

(From those slides)

The benefit is that you can "reuse" the logic for managing instruction streams (which is expensive) across many individual data items in cases where you want to perform the same instruction on all of them anyway. You take one instruction stream and apply it to many data items. This general concept is what people are referring to when they say that a CPU is "more intelligent" but a GPU "can do more dumb things per unit of time", although in practice things are a little bit more complicated and less distinct. A CPU dedicates a lot of silicon to managing instruction streams to improve "single thread" performance, while a GPU employs (within each subunit with its own instruction-management logic) mainly data parallelism to reuse that management logic across multiple number-crunching units. The level to which you can saturate a GPU is thus highly dependent on the size of the dataset you want to process, since data items are ultimately what a GPU scales on, and not on the number of instructions that you want to apply to that dataset. (In addition, those units with instruction-management logic can execute independent tasks in parallel.)

The limitation of that concept is grounded in the fact that you need large, homogeneous sets of data on which you want to perform the same instructions in the first place. Luckily, this is true for graphics rendering, where the number of available data items is much bigger than the available ALU count.
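If it helps, here is a tiny CPU-side sketch of the same idea: one short instruction stream (the loop body) applied to many data items, which is essentially what each GPU sub-unit does across its ALUs in lockstep. It is a plain C illustration of the concept, not GPU code:

```c
#include <stdio.h>

#define N 1024   /* many data items, few "instructions" */

/* The loop body is one short instruction stream; on a GPU, each ALU in a
 * sub-unit executes these same instructions on a different data item per
 * step (SIMD/SIMT). Saturation therefore depends on having enough data
 * items, not on the length of the program. */
int main(void) {
    float in[N], out[N];
    for (int i = 0; i < N; ++i) in[i] = (float)i;

    for (int i = 0; i < N; ++i)          /* on a GPU: i maps onto ALU lanes */
        out[i] = in[i] * 2.0f + 1.0f;    /* same instructions, many items */

    printf("out[0]=%.1f out[%d]=%.1f\n", out[0], N - 1, out[N - 1]);
    return 0;
}
```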

So we don't have to call somebody a fanboy to assess such statements. Google is enough.
 
Hey guys, have any of you read this? (Caution: English is not his 1st Language):



(source: http://misterxmedia.livejournal.com/132131.html)

So a good amount of MS fanboys are going crazy about the above posted material lol, and of course the same guy wrote an "interesting" article last summer regarding Sony:

(http://misterxmedia.livejournal.com/98352.html)

Surely his name should be "Suzy" now and should therefore be quiet but I digress lol...Anyway what do you guys think?

why, oh why would you post anything from those idiots?

Things that they predicted:

3 different SoCs/APUs
3 different GPUs
a ray-tracing chip
Hybrid Memory Cube
DDR4 RAM
a 10 TFLOP console

etc etc.

Think logically for a second. If Microsoft were holding such a card in their favor, would they have held onto it for the last few months, after all the negative press, just to get some insane positive light for themselves?
 