If Ballmer had 8GB of GDDR5, would he still have his job in a year's time?
Scared of RROD, overconfident in Kinect, and the misfortune of designing everything around 8GB of DDR3 first.
It's not so much a question of wanting them; it's paying a premium for them, and then a yearly fee on top of that for continued access, that's unacceptable to many, myself included.
"It's not just ESRAM; they've got like 15 custom processors in the SOC, didn't they say? Seems kind of ridiculous."

Nope, they didn't say. It was a misunderstanding by VentureBeat.
"You two are not alone. It may be heresy to say it, but a lot of people do want those multimedia features."

Please show me your research that shows many people are willing to buy $500 hardware and pay $60/year to control their TVs with voice, which historically has never been reliable enough to use for anything. Apple and Google couldn't do it with devices where the microphone is next to your mouth; what makes you think MS will be able to do it flawlessly with a microphone 10 feet away?
Ditto
"The GDDR5 RAM Sony is using is cutting edge and thus will have a cutting-edge price and availability."

There is nothing cutting edge about GDDR5-5500 RAM. In fact, it's one of the slowest speeds of GDDR5 available.

Also, I would like to see where you obtained the standalone pricing for GDDR5, which is based on DDR3. Mandatory Kinect is a far bigger detriment to cost reduction than GDDR5. I guess it could be made optional, though that goes against MS's vision as of today.
"It's stated on this slide (upper left)."

Oh my, it gets worse.
"What is getting worse?"

I should have used the word "desperate".
"There's nothing cutting edge about the speed, but there is in capacity. I believe it will be the first consumer product to use 4Gbit chips."

Depends on new GPU releases before Nov 15.
I thought MS didn't claim it; it was a misunderstanding by VentureBeat. smh.
"In all fairness, this is not accurate - what they (Marc Whitten, I believe) explicitly said is that 'for every physical Xbox One we build, we're provisioning the CPU and storage equivalent of three Xbox Ones on the cloud'."

This is what people are referring to when talking about 3x the power of the Xbox One with the cloud.
"There is nothing cutting edge about GDDR5-5500 RAM. In fact, it's one of the slowest speeds of GDDR5 available. Also, I would like to see where you obtained the standalone pricing for GDDR5, which is based on DDR3. Mandatory Kinect is a far bigger detriment to cost reduction than GDDR5."

Clock speed has nothing to do with how cutting edge something is. The PS4 is the first device to use 4Gbit chips, and the RAM is understood to be the most expensive singular component to manufacture. You should read zomgwtfbbq's breakdown; there's nothing wrong with what eastmen said...
No, the XB1 GPU is just about dead-on the HD 7770 in terms of processing power and pixel and texel fillrate.
"Wait... you want to imply that the X1 does not have 15 special purpose processors? How do you know that?"

My motherboard has 10+ processors in it even without the CPU/GPU: northbridge, southbridge, audio processor, gigabit controller, wireless controller, PCIe, Lucid, PWM...
"Clock speed has nothing to do with how cutting edge something is. The PS4 is the first device to use 4Gbit chips, and the RAM is understood to be the most expensive singular component to manufacture."

Dividing the $80-90 estimated cost of GDDR5 in the PS4 by 16 gives a little over $5 per chip. I doubt that's the most expensive SINGULAR component to manufacture in the PS4.
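The arithmetic is easy to check; here's a quick sketch, taking the thread's $80-90 estimate and the 16 x 4Gbit chip configuration as given rather than as confirmed figures:

```python
# Quick sanity check on the per-chip arithmetic above. The $80-90 total is
# the thread's estimate for the PS4's GDDR5, not a confirmed BOM figure.
chips = 16                       # 16 chips of 4 Gbit (512 MB) each
cost_low, cost_high = 80, 90     # estimated total GDDR5 cost in USD

print(f"Capacity: {chips * 512 // 1024} GB")  # 16 * 512 MB = 8 GB
print(f"Per chip: ${cost_low / chips:.2f} to ${cost_high / chips:.2f}")
# -> Capacity: 8 GB
# -> Per chip: $5.00 to $5.62
```

Whether those 16 chips count as one "component" is, of course, the argument that follows.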
Seems weird that they spent all that time and effort designing such an enormous chip with crazy customisations out the wazoo when a more standard part would give them better performance. What's the advantage of this? Sony are probably paying less for a more powerful APU, so what gives? Just over-engineering?
"They are not counting the stuff that's usually in there. Just look at that stuff on the left side."

Yes I am. Maybe you should look at what they're counting there.
"Dividing the $80-90 estimated cost of GDDR5 in the PS4 by 16 gives a little over $5 per chip. I doubt that's the most expensive SINGULAR component to manufacture in the PS4."

I can't tell if serious.
The difference wasn't nearly as much or as obvious, considering the 360 and PS3 had such different architectures. I don't think there has ever been a difference this great between competitors, aside from the Wii/Wii U, but that's a bit different.
"I can't tell if serious."

Perhaps you should use better words to explain yourself. In what world is 8GB of GDDR5, made up of 16 chips of 512MB each, considered a SINGULAR component?
Xbox vs PS2 difference was LEAGUES bigger.

                PS2           Xbox           Difference
Bandwidth       3.2 GB/s      6.4 GB/s       200%
RAM total       32 MB         64 MB          200%
FLOPS           6.2 GFLOPS    21.6 GFLOPS    350%
Fillrate        1.2 GT/s      8 GT/s         670%

You know, I won't even go into the details of how vastly superior the GeForce 3 architecture is to the Graphics Synthesizer. The GF3 was the first programmable-shader GPU, the core of what you still see in modern GPUs today.

40% being "the largest we've seen"? El oh el.
"Perhaps you should use better words to explain yourself. In what world is 8GB of GDDR5, made up of 16 chips of 512MB each, considered a SINGULAR component?"

In the world of computing? Are you new or something? Or are you just playing cute? Yes, memory is a singular component in a computing system, which is why it's always reported as a single number...
"Xbox vs PS2 difference was LEAGUES bigger. Bandwidth: 3.2 GB/s vs 6.4 GB/s... 200%..."

Although the flaws in your reasoning will be picked up by everybody else here, you could at least get the numbers right. The same applies to all of the figures you list, but to take one: the percentage difference between 3.2 GB/s and 6.4 GB/s is 100%, not 200%.
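For what it's worth, the percentage math is trivial to verify; a minimal sketch using the table's own figures:

```python
def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new: a 2x ratio is +100%, not 200%."""
    return (new - old) / old * 100

# The rows from the PS2-vs-Xbox table above:
print(pct_increase(3.2, 6.4))   # bandwidth: 100.0  (not 200%)
print(pct_increase(32, 64))     # RAM total: 100.0  (not 200%)
print(pct_increase(6.2, 21.6))  # FLOPS:     ~248.4 (not 350%)
print(pct_increase(1.2, 8.0))   # fillrate:  ~566.7 (not 670%)
```

The table's figures appear to be ratios expressed as percentages rather than percentage increases.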
"I did. What's your point?"

That you are wrong; what you said is wrong.
They needed the ESRAM cache because they went with DDR3, unlike Sony with GDDR5. I'm guessing MS didn't know Sony would go for GDDR5, or maybe they did. They used an embedded DRAM cache with the Xbox 360 and chose this approach because of that.
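For rough context on why that cache matters, here's the peak-bandwidth arithmetic; the bus widths and transfer rates below are the commonly cited figures, so treat this as a sketch rather than official numbers:

```python
# Peak bandwidth = bus width in bytes * transfer rate. The 256-bit buses,
# DDR3-2133, and GDDR5-5500 figures are the commonly cited specs, not
# official confirmations.
def peak_bw_gbps(bus_bits: int, mega_transfers: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and MT/s rate."""
    return (bus_bits / 8) * mega_transfers / 1000

print(f"XB1 DDR3-2133, 256-bit:  {peak_bw_gbps(256, 2133):.1f} GB/s")  # ~68.3
print(f"PS4 GDDR5-5500, 256-bit: {peak_bw_gbps(256, 5500):.1f} GB/s")  # 176.0
# The XB1's 32MB ESRAM pool is what's supposed to close that gap, with
# quoted figures ranging from ~102 GB/s to ~204 GB/s depending on who's counting.
```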
I think the 7790 is the most apt comparison. The GPU is pretty much a Bonaire chip.
I don't care what the on-paper specs say; devs will get performance far closer to a 7790 than a 7770 on the Xbox, and 7870 performance from the on-paper 7850 specs of the PS4.
"Let's assume I am; what was your initial point? (aka MS is desperate)"

Nope, my initial point was (giving MS the benefit of the doubt) suspecting VentureBeat of misinterpreting. I was wrong there.
"They are simply saying they have 15 special purpose processors - what is so desperate about it?"

Edit: Since you added this, I'll respond... nobody does that. Like I said earlier, my motherboard would have 10+ "processors" by the same logic.
"Ehm, well... no! Actually it's pretty complex compared to the PS4. ^_^ (and 'complex' is not a good thing in this case)"

I'm talking about cooling.
No, the HD7790 is a fair amount faster than what is in the XB1.
We are talking raw hardware specs.
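To put rough numbers on "raw hardware specs": theoretical shader throughput is shader count x 2 ops x clock. The shader counts and clocks below are the commonly reported configurations, so this is a back-of-the-envelope sketch, not gospel:

```python
# Theoretical single-precision throughput: FLOPS = shaders * 2 * clock.
# Shader counts and clocks are the commonly reported figures for each part.
def tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz * 1e6 / 1e12

for name, shaders, mhz in [
    ("HD 7770 (Cape Verde)", 640, 1000),
    ("XB1 GPU (12 CUs)",     768,  853),
    ("HD 7790 (Bonaire)",    896, 1000),
    ("PS4 GPU (18 CUs)",    1152,  800),
]:
    print(f"{name:22s} {tflops(shaders, mhz):.2f} TFLOPS")
# -> 1.28, 1.31, 1.79, 1.84: on paper the XB1 sits next to the 7770, the 7790
#    is ~37% ahead of it, and the PS4's ~40% gap matches the figure upthread.
```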
http://forum.beyond3d.com/showthread.php?p=1762942#post1762942
Dave Baumann's view; take it as you wish. If the ESRAM is used effectively, he'd wager the Xbox One will far outstrip a 7770 and a 7790. As I said, take his views as you wish.
Anyone who really thought the Xbox One was going to launch at $100 more and significantly weaker was just being dumb or a crazy fanboy. It may, in fact, be weaker, but it would never be so much so that third-party games would look different.
Alright, I forget... who's the sonofabitch who posted a link to that misterxmedia LiveJournal?
I can't stop reading it now. I swear this dude might be mentally ill.
And reading the comments, it ALMOST seems like it's just him posting as different people with slightly better or worse English. And the inside interview he just posted up? Holy fuck, it's gold.
Semi-Accurate weighs in:

"If you think about the sheer volume of coherency data that needs to go between the two CPU blocks, Microsoft probably had to beef up the L2 to NB links to almost match that of the L1 to L2 links. While specifics were not given out, SemiAccurate was told it was significantly wider, along with beefed-up buffers and deeper queues. Don't discount this as a minor change; it is both critical to system performance and a very complex thing to do. It will be interesting to see how Sony did their variant if they ever give a talk on the PS4 architecture."
"That means the Xbox One's 8 Jaguar cores are clocked at ~1.9GHz, something that wasn't announced at Hot Chips. Now you know."
"The CPU NB also has coherent links to the GPU MMU and I/O MMU, something you would expect on any system that takes GPU compute work seriously. AMD has their HSA/HUMA architecture coming with Kaveri in short order, but the XBO is based on a design ~1+ generations older, so no advanced AMD CPU/GPU coherency here. Luckily Microsoft is on the ball here and put in their own mechanism, which they unfortunately would not go into detail on. What SemiAccurate has heard about it says they did a pretty impressive job, but until it is fully disclosed we can't comment with authority. Let's just leave things at: from what we can tell, it looks good."

More at the link: http://semiaccurate.com/2013/08/29/a-deep-dive-into-microsofts-xbox-ones-architecture/

Why would Microsoft not announce that their CPU was 1.9GHz? If it were true, wouldn't they be making sure everyone knew?
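Presumably it was inferred rather than announced. One plausible back-calculation (an assumption on my part, not something the article spells out): the Hot Chips material reportedly quoted roughly 30 GB/s of coherent bandwidth, and if that link moves 16 bytes per cycle, the clock falls out directly:

```python
# A hypothetical back-calculation of the ~1.9GHz claim (assumption, not from
# the article): if the CPU's coherent link moves 16 bytes per cycle, a
# ~30 GB/s coherent-bandwidth figure implies the clock below.
bytes_per_cycle = 16            # assumed link width
coherent_bw_bytes = 30.4e9      # ~30 GB/s figure (assumed exact value)
clock_ghz = coherent_bw_bytes / bytes_per_cycle / 1e9
print(f"Implied CPU clock: {clock_ghz:.2f} GHz")  # -> 1.90 GHz
```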