Ps4 fanboys butthurt itt
Come on son, don't get your ass banned like this.
How about you read this http://www.reddit.com/r/Games/comments/1h2qxn/xbox_one_vs_ps4_memory_subsystems_compared/caqjldw
it proves that GDDR latency is almost the same as, or maybe even better than, DDR3
I'll try to copy it here.
Here's a write-up I did a while ago that I'll just paste here:
My background is in VLSI design, but I know nothing about graphics programming. Here goes my analysis of the chip- and system-level design.
1) There are low to mid-range video cards that have both a DDR3 and GDDR5 version. This provides a direct comparison point. In the very low end, I found benchmarks showing that the GDDR5 version is only 10-20% faster. However, in mid-high range cards, the GDDR5 version can be almost 50% faster. http://ht4u.net/reviews/2012/msi_rad...ZfRJ3KdXukbpQQ
This makes sense. In low end cards, the GPU does not have enough processing power to be significantly bottlenecked by memory bandwidth, but in faster cards it definitely can.
So bandwidth is critical for GPU tasks. That's why high-end video cards use 384-bit wide interfaces to memory while CPU memory interfaces are only 64 bits wide (per channel)! It certainly is not cheap to dedicate that many IO pins to memory, so they do it for a good reason.
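To put rough numbers on that, here's a quick Python sketch; the 6.0 Gbps GDDR5 and DDR3-1600 speeds are just typical figures I've picked for illustration, not numbers from the post:

# peak bytes/sec = (bus width in bits / 8) * per-pin data rate
def peak_bandwidth_gb_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(peak_bandwidth_gb_s(384, 6.0))  # 384-bit GDDR5 card at 6.0 Gbps/pin -> 288.0 GB/s
print(peak_bandwidth_gb_s(64, 1.6))   # one 64-bit channel of DDR3-1600   -> 12.8 GB/s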
Memory bandwidth and latency are not too critical for most CPU tasks though. For the PC builders out there, you can find benchmarks comparing different memory timings and speeds, and in most cases you'd be better off buying a faster video card instead of spending money on better RAM.
2) GDDR5 having much higher latency than DDR3 is a myth that's been constantly perpetuated with no source to back it up. Go look up datasheets of the actual chips and you'll see that the absolute latency has always been the same, at around 10ns. It has been around that since DDR1. Since the data rates have been increasing, the latency in clock cycles has increased but the absolute latency has always been the same. Anyone who wants to argue with me should dig through datasheets to back their claims up.
From Wikipedia: DDR3 PC3-12800 @ IO frequency 800MHz has typical CAS latency 8. This means the absolute latency is 10ns. DDR2 PC2-6400 runs at IO frequency 400MHz, with CAS latency 4. This is also 10 ns.
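You can check that arithmetic yourself; a tiny Python sketch using the numbers above:

# absolute latency = CAS cycles / IO clock
for name, cl, io_mhz in [("DDR3 PC3-12800", 8, 800), ("DDR2 PC2-6400", 4, 400)]:
    print(name, cl / (io_mhz * 1e6) * 1e9, "ns")  # both come out to 10.0 ns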
Here's a typical GDDR5 chip datasheet: http://www.hynix.com/datasheet/pdf/g...FR(Rev1.0).pdf
Here is the table showing CAS latency vs frequency (page 43):
The data rate is a factor of 4x the memory clock. So at a typical 5.0 Gbps output data rate, the memory runs at 1.25 GHz (source: page 6) and supports a CAS latency (CL) of 15. This is 15/(1.25 GHz) = 12 ns.
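Same arithmetic in Python, using the datasheet numbers quoted above:

# GDDR5: data rate is 4x the memory clock
data_rate_gbps = 5.0
clock_ghz = data_rate_gbps / 4   # 1.25 GHz
cl = 15                          # CAS latency from the datasheet table
print(cl / clock_ghz, "ns")      # 12.0 ns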
3) The Xbox One has additional SRAM cache to improve its bandwidth. However, they needed to dedicate additional silicon area and power budget to the cache and the cache controller. This is definitely a big cost-adder in terms of yield and power budget, but probably not as much as using GDDR5 chips. Chips these days are limited only by the amount of power they can dissipate, and everything is a trade-off: by adding complexity in one area, the designer must remove it from another. So Microsoft spent some of their power budget on implementing a cache, while Sony could use it to actually increase the number of GPU cores. And it shows.
Who knows how well the Xbox One's cache system will work to catch up to PS4's bandwidth advantage. But it is certainly not going to be _faster_ or simpler. When you're streaming in the huge textures needed for next-gen 720p to 1080p graphics, a 32MB cache is not big enough to constantly provide enough bandwidth.
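Some back-of-the-envelope math on why 32MB is tight (buffer formats and counts vary per engine, so this is just illustrative):

# size of one render target in MB
def target_mb(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / (1024 * 1024)

print(target_mb(1920, 1080, 4))  # one 1080p RGBA8 target: ~7.9 MB
# a 1080p deferred renderer with a few G-buffer targets plus a 32-bit
# depth buffer already pushes past 32 MB before you even touch textures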
Also, since the PS4 has more GPU power, it will definitely need all the bandwidth it can get.
GTX 780 and Titan are both based on GK110. PS4 and Xbox One's GPUs are not based on the same chip.
What? The 360 will probably be the biggest selling console ever in the UK.
Has anything of significance happened since this thread was started? Don't wanna read through 58 pages...
Both companies will have great games. This is not a competitive advantage.
"*shrugs* Games look fine to me."
Sure, but I want them lookin' fiiiiiine. I say a couple of years down the line is when we'll see that true console next-gen shine. Can't wait.
Thank you!
Welcome!
"The drivers will probably never be finished as both companies will continue to optimize their consoles and tools for the developers. As it stands now the PS4 has better tools from what we have heard from developers; I forgot the exact name of the studio but I am sure someone will pull up the quote (I think it was Avalanche, but not too sure). The hardware not being final is something that Microsoft keeps repeating, so it would be better to ask them about that."
This may well be true, but it may boil down to the personal preference of the developer. For example, I know both companies are using VS 2013 as an IDE now, so for a developer it's just the same application with a different tool set in there. Regarding the software stacks on the consoles themselves, I would say MS has the more feature-rich environment, purely because of the full DirectX stack.
"Again, this isn't some random conjecture we created in our minds; this type of stuff comes from the developers, who have said the PS4 is easier to code for and has more mature development tools. We don't know how good the drivers are on either side, but I think we can confidently say both will be very optimized."
Yeah, no doubt about that. Both sides have very lightweight stacks which tie heavily into their hardware. If we bring business into it, unfortunately a lot of modern engines are specifically designed with DirectX in mind; Frostbite is a prime example. With that in mind, a lot of that code feels more natural sitting on the X1. This also raises the question of BF4 on the X1, but that's a totally different subject.
"The problem is 6 more CUs is not the only thing that the PS4 has over the Xbox One. It also has 32 ROPs over the 16 in the Xbox One, and it has optimizations specifically for GPGPU. The main problem here is that these 2 GPUs come from the same product family but from a different product stack. The GPU in the Xbox One is more akin to a mainstream product (7700 series) while the GPU in the PS4 is a mid-range enthusiast card (7800 series)."
I didn't actually know that; output streams can't really be matched. That will be a clear pro for the PS4. I'd argue against the comparable PC counterparts in the GPU, though: the numbers will be the only similarities, whereas the silicon will be modified heavily by both parties.
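For reference, the raw throughput numbers being compared work out like this (a quick sketch using the commonly reported CU counts and clocks, not official spec sheets):

# theoretical GCN throughput: CUs * 64 lanes * 2 ops/cycle (FMA) * clock
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

print(tflops(18, 0.800))  # PS4:      ~1.84 TFLOPS
print(tflops(12, 0.853))  # Xbox One: ~1.31 TFLOPS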
"This is wrong, GDDR5 doesn't inherently have any more latency than DDR3. In fact both are pretty much the same memory, just that one is optimized for latency while the other is optimized for bandwidth. The reason why PCs use DDR3 memory for RAM is because there are so many applications that require the CPU at the same time, and so the low latency helps in that regard, but on a closed hardware console what is going to be calling on the CPU while you are gaming? Nothing. Another thing is Jaguar is an OoO CPU, so it doesn't have to wait around and do nothing while it's waiting for data to be retrieved."
I've just had to read up on this, because it's a common belief but just seems so inaccurate. GDDR has higher timings because of its architectural layout; it was designed with bandwidth in mind. Any Mark Cerny quote I've seen always refers to the "GPU of course", because you can't avoid the fact that the CPU throws clock cycles out of the window. Unfortunately, GDDR does have high latency by design. The CPU is used for an awful lot while gaming: physics, AI, networking, audio processing. A lot of these tasks will just have to be offloaded to the GPU. Now it gets technical: OoO in a CPU is governed by the input streams of data, rather than the order of execution in processes. Unfortunately, GDDR is an input stream, so the problem isn't resolved by that.
"Audio will not be off-loaded to the GPU. Mark Cerny was talking about the future, when developers harness the power of GPGPU and will be able to offload some audio tasks to the GPU, like audio raycasting. As far as I remember the Xbox One doesn't have a PPU, so it's also going to be running physics on the GPU, and that's not really a PlayStation-specific problem. On the CPU being "crippled", I have already addressed this above."
It depends on the developer's intention. The audio chip in the PS4 is just the encoder, so a chip somewhere has to handle the processing of the audio voices. The CPU would have a heck of a hard time doing this with GDDR, hence his opinion about offloading it to the GPU.
I'm not saying the Xbox will dominate, far from it. I'm saying it will be very close. The Xbox brand is very powerful in the US and UK, I don't see that changing. Microsoft's mistakes will cost them the lead they would otherwise have after such a successful generation but that is all. People who expect a PS4 domination are just letting their feelings cloud their judgment.
No, it is a hardware thing: whether tiled resources need to be implemented via software or via hardware depends on the GPU. There was a thread on Beyond3D where an AMD guy came in and spoke about Tier 1 and Tier 2 hardware. I am attempting to look it up now. We don't know which tier either of the consoles falls into, though.
Edit: Found it on this thread: http://forum.beyond3d.com/showthread.php?t=64206&page=9
A relevant post is here:
http://forum.beyond3d.com/showthread.php?t=64206&page=6
AMD guy in response to the post above:
In reference to people trying to figure out his vague posts.
That's an interesting read. A few things are wrong (as the author admits he's not a graphics guy), but the latency information seems solid, and it's the first time I've seen it laid out like that.
Unfortunately it's not that linear. I hate reverting back to PC because they're quite incomparable, but for this instance it's not too bad.
Look at the Titan compared to the GTX 780. The Titan has a whole 0.5 TFLOPS more power, which is a 12.5% theoretical performance gain. Unfortunately, they're both very similar in benchmark results. For example:
http://www.videocardbenchmark.net/high_end_gpus.html
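The 12.5% figure checks out if you take the usual single-precision numbers, roughly 4.5 TFLOPS for the Titan vs 4.0 for the GTX 780 (approximate figures, not from the post):

titan, gtx_780 = 4.5, 4.0
print((titan - gtx_780) / gtx_780 * 100)  # 12.5 (%)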
This is referring to GDDR v DDR latencies inside the GPU, which, as I've said, doesn't matter inside the GPU. The CPU doesn't have the bus to be able to handle GDDR, hence why it skips clock cycles.
PS4 games look like Toy Story; Xbone like Pong. But balance makes Xbone more capable than PS4's wildest dreams.
News at 11.
Thanks for the link... Oh wait more assertions with no evidence.
Wait, did you say the Xbone has balance?? I'm a Libra, I like balance! Guess it's time to cancel my PS4 preorder...
According to some, the PS4 has a magic memory controller that offsets the latency issues of GDDR5.
I've been looking through some technical papers and Hynix data sheets. The popular position is that DDR3 has half the latency of GDDR5, but the data sheets don't seem to support that. The latency seems quite similar.
Either I'm missing a piece of the puzzle or there are a lot of folks out there who are wrong.
My head now hurts, I'm off to play some games.
I was under the impression that at the same clock speed the latency of DDR3 was less than GDDR5 but GDDR5's significantly higher clock rates generally balance that out.
Did you read it? Really?
Where does it say it doesn't work with CPU? How do you know it doesn't work with CPU?
How about some links, sources to prove your claim...?
You would think AMD and Intel would spring into that market to enable it on the mobo?
So what is keeping the CPU and mobo manufacturers from using GDDR5? Is it a cost thing?
You're correct.
Yup, had to wrap my head around it. It's mentioned when he's referring to busses: "So bandwidth is critical for GPU tasks. That's why high-end video cards use 384-bit wide interfaces to memory while CPU memory interfaces are only 64 bits wide (per channel)!"
Then he makes the point: "Memory bandwidth and latency are not too critical for most CPU tasks though."
Whereas he's right in saying it's not critical for most tasks, they don't care. Although, from the CPU's point of view, it's literally throwing its power out of the window. The best example affected by latency is audio, because of its sync with the frame. If you're dropping clock cycles, you're seeing big stutters.
Now I'm really off. Been insightful guys
That doesn't say the latency for the CPU is not the same as the latency for the GPU.
Wow. That was a freebie.
The size of the bus directly affects the latency of the memory call. He mentions the bus being narrower for CPUs because of its high cost (he's right about that), then specifies they can do that because the effect is small. I can see what he means, but it has a greater effect than he mentioned. Like he said himself, he doesn't architect software.
In consoles, it seems brand power is a pretty fickle thing. You'd recall the PS2, the Wii, and the SNES sold enormously well and were well regarded, and their follow-ups disappointed. For one reason or another, gamers seem more fickle, and brand power isn't enough.
The missing piece of the puzzle is that the GPUs themselves are probably where the latencies come in.
Price jumps are huge. That's why everyone was saying Sony got lucky with the GDDR5 panning out the way it did.
Saying MS is not a powerful brand in the UK when the current MS console is going to be the biggest selling ever is plainly nonsense.
Let me put it another way: the majority of the UK has no brand loyalty to anyone.
Both MS and Sony are going to have their fans, but what allowed MS to overtake the UK was not brand.
It was price and release date, and they don't have either of them this gen.
The same can be said for the rest of the world, to a certain degree.
Purely due to market growth, nothing to do with brand power. Keep believing what you want.
People attacking the Edge article for lack of sources and/or lack of technical explanation are still not factoring in all the developers/people in the know who have either publicly or off the record spoken about the power difference.
The Xbox 360 has sold 8.4 million; the PS2 is at 10 million. I agree it's a powerful brand, but once the next generation begins I'm sure 360 sales will slow completely. Pre-orders + hype suggest they will be giving up their console dominance this gen.