Not sure what you mean. Of course it has to access the RAM.
That it can address DDR3 directly without ESRAM.
This is how AMD explains hUMA.
I have no clue, so in my simple mind I don't see what changed from the unified memory perspective. Isn't this how it was always meant to be for both consoles?
Edit: Ok, I didn't read the added description..
Since they're all being rushed, though, I'm assuming that the detriment to each is the same. I'm not assuming that they're all indicative of what visuals will look like in three years, only that if the gap were as big as Xbox and DC (even PS2), we'd already see it. Unless you think Forza 5 is as good as Xbone will ever look and Driveclub is only utilising ~20% of the PS4's power, of course. Which is possible but seems unlikely to me.

We're not going to know from these launch titles, because most of them are being rushed. Also, weird example, because at E3 DriveClub < Forza 5 = GT6.
Interesting.
Good. GOOD.
Also here is Yung Humma!
It's better to start at the top and scale down than to start at the bottom and scale up, because once you have your lead platform -- the platform that has the most to offer -- it's simply a matter of reducing the likes of internal rendering resolution, texture resolution, particle effects, etc. to accommodate the weaker hardware. Doing things the other way around cripples the higher-end platform/s needlessly with no practical benefit to the developer.
Well it matches up with CBOAT saying that PS4 games are ready to master while Xbox is stuck in development hell with the latest SDK dropping the performance by 20%. I'm sure once it is optimised over the next couple of months the performance gap will shrink. I'll get him to ask the third parties what they think will happen.
Whatever, we will see when the machines launch and we actually get our hands on the games. If there are any major differences (like this article suggests) then I will be very surprised.
GDDR5 only makes a small difference in latency; I doubt it'll affect gameplay that much. Maybe right out of the gate, but in the long run the systems will have pretty much the same performance in games. Not gonna argue for either one here, though. Remember all those graphics comparisons between the 360 and the PS3, and how everybody made a huge deal of them, but when it came down to it they didn't really make a difference? I mean, TLOU looks amazing, as does Halo 4.
The way I've heard it, devs will have an easier time getting better performance due to the ease of development for the PS4. While not exactly free, there's less overhead to deal with, so if the power is there it will be put to use.

But wouldn't it make sense for developers to develop games for the weaker hardware? If the architectures are identical it should be easy to port games, right? So I would imagine they would develop for the Xbone and port. For third-party games at least, I don't imagine seeing significant improvements in the PS4 version. With development costs the way they are, I can't see developers spending significantly more time on the PS4 version to improve visuals.
Ok, this article is strange, for an example of hUMA they posted PRT ...
AMD demonstrates the technology's functionality with the following example: without hUMA, the CPU must first explicitly copy data to GPU memory; the GPU completes the computation; and then the CPU must explicitly copy the result back to CPU memory in order to read it. With hUMA, the CPU can simply pass a pointer to the GPU, which completes the computation and produces a result the CPU can read directly, with no copying required.
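In rough Python terms (purely illustrative: the "GPU" here is just a function, and every name is made up), AMD's example looks something like this:

```python
def gpu_compute(buf):
    # stand-in for a GPU kernel: double every element in place
    for i in range(len(buf)):
        buf[i] *= 2

# Without hUMA: the CPU copies data into separate "GPU memory",
# then copies the result back before it can read it.
def roundtrip_without_huma(data):
    gpu_mem = list(data)      # explicit copy, CPU -> GPU
    gpu_compute(gpu_mem)
    return list(gpu_mem)      # explicit copy, GPU -> CPU

# With hUMA: the CPU just hands the GPU a "pointer" to the same buffer,
# and reads the result directly with no copies.
def roundtrip_with_huma(data):
    gpu_compute(data)
    return data

print(roundtrip_without_huma([1, 2, 3]))  # [2, 4, 6]
print(roundtrip_with_huma([1, 2, 3]))     # [2, 4, 6]
```

Same result either way; the difference is purely the two copies the first version has to pay for.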
Yeah, also ask him when the PS4 is launching and what MS's indie plans are!
Since they're all being rushed, though, I'm assuming that the detriment to each is the same. I'm not assuming that they're all indicative of what visuals will look like in three years, only that if the gap were as big as Xbox and DC (even PS2), we'd already see it. Unless you think Forza 5 is as good as Xbone will ever look and Driveclub is only utilising ~20% of the PS4's power, of course. Which is possible but seems unlikely to me.
[edit] I picked DC and F5 because they're both racers. I think the point remains valid if you pick, say, Fable Heroes (Legends?) and Knack, though. The visuals in Knack are noticeably better but they're not Dreamcast to Xbox level.
At this point I think some here would probably commit suicide if they couldn't go another day without dick waving. But the games on both machines tell a different story. There's no huge jump from one to the other. Both machines look like they could easily pull off the same visuals should their exclusives ever trade places.
On a classical system you have a RAM pool and a VRAM pool that are physically separated. Copying data from one pool to the other creates latency. The GPU is very good at hiding latency; what it needs most is high bandwidth. The CPU, on the other hand, is extremely sensitive to latency and needs extremely low latency to work efficiently. Copying data from the RAM (CPU) to the VRAM (GPU) creates latency, but that's okay for the GPU. Copying data from RAM (CPU) to VRAM (GPU) and back to the RAM (CPU) creates even more latency. It's too much for the CPU. The copying alone takes longer than the computation, which makes this roundtrip highly ineffective.

Xbox 360 and older APUs have unified RAM. This means that the RAM is no longer physically separated, but even though it's the same RAM chips, the system still distinguishes between memory partitions for the different processors. You still need to copy the data between the CPU partition and the GPU partition, but this is much more efficient than copying it between physically separated pools. It's still too much latency for a CPU-GPU-CPU roundtrip, though.

The PS4 will have hUMA, which means that you no longer need a distinction between a CPU partition and a GPU partition. Both processors can use the same pieces of data at the same time. You don't need to copy stuff, and this allows for completely new algorithms that utilize the CPU and GPU at the same time. This is interesting since a GPU is very strong but extremely dumb, while a CPU is extremely smart but very weak. Since you can utilize both processors at the same time for a single task, you have a system that is extremely smart and extremely strong at the same time.
It will allow for an extreme boost for many, many algorithms and parts of algorithms. On top of that it will allow for completely new classes of algorithms. This is a game changer.
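As a rough sketch of what "utilize CPU and GPU at the same time" could look like, here's a toy Python illustration where a brute-force "GPU" pass and a branchy "CPU" pass work on one shared buffer with no copies. The work split and all names are invented; this only mimics the idea, not real hardware:

```python
def gpu_pass(shared):
    # "dumb but strong": brute-force transform of every element, in place
    for i in range(len(shared)):
        shared[i] = shared[i] * shared[i]

def cpu_pass(shared):
    # "smart but weak": branchy selection logic a GPU handles poorly
    return [x for x in shared if x % 2 == 0]

shared = [1, 2, 3, 4]
gpu_pass(shared)          # the GPU squares the buffer in place
evens = cpu_pass(shared)  # the CPU reads the same buffer; no copy-back step
print(evens)              # [4, 16]
```

The point is that neither pass ever copies the buffer; both just touch the same memory in turn.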
Welp, I learned something new today. I just assumed it was easier the other way around since that's how it was done with 360 and PS3. But given that they had different internal architectures, I suppose that makes sense.
And Soul Destroyer said the PC will be the lead platform this gen, so it sounds like the PC version will look best, followed by the PS4 and Xbone. Hopefully that means no gimped PC versions this time around (I'm looking at you, Dark Souls).
I don't believe that; it sounds like press-release talk about the new technology.
There's nothing in the hUMA outlines I've seen that indicates any sort of 3D performance increase. It's all to do with making GPGPU calculations easier.
Really? This is a tech-based statement from the company supplying the tech for both consoles. Making this statement publicly is pretty surprising and worth a thread and discussion.

yay! another thread about how the ps4 is so superior to the xbox one and we should all buy ps4
/sarcasm
DerZuhälter said: "Someone explain to me what this means in Dragonball Z terms!"
Fusion? Kaioken? Babidi mind control?
Absolute bollocks.
Great counter argument there. Totally invalidates what AMD said. Time to lock up the thread I guess.
...and Driveclub looks better as a result. Not sure what your point is here? You think that DC is 4.5x the computational workload of F5? Okay...

But those games have really different approaches, which illustrates most first-party games for both consoles. Forza plays the safe card without using overly taxing tech while still having nice eye candy, while DC tries to be a next-gen showcase of every costly thing imaginable.
I guess the Xbox will support that via its Data Move Engines.
AMD even spoke, at one point, about the idea of using an embedded eDRAM chip as a cache for GPU memory, essentially speaking to the Xbox Durango's expected memory structure. The following quote comes from AMD's HSA briefing/seminar:
Game developers and other 3D rendering programs have wanted to use extremely large textures for a number of years, and they've had to go through a lot of tricks to pack pieces of textures into smaller textures, or split the textures into smaller textures, because of problems with the legacy memory model. Today, a whole texture has to be locked down in physical memory before the GPU is allowed to touch any part of it. If the GPU is only going to touch a small part of it, you'd like to only bring those pages into physical memory and therefore be able to accommodate other large textures.
With a hUMA approach to 3D rendering, applications will be able to code much more naturally with large textures and yet not run out of physical memory, because only the real working set will be brought into physical memory.
This is broadly analogous to hardware support for the MegaTexturing technology that John Carmack debuted in Rage.
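To make the "only the real working set" idea concrete, here's a toy Python sketch of demand-loaded texture pages. It's an analogy only; real PRT/MegaTexture support is hardware page mapping, and every name and number here is made up:

```python
PAGE = 64  # page size in texels along one axis (made-up number)

class SparseTexture:
    """A huge texture where only the touched pages become resident."""

    def __init__(self, width, height):
        self.width, self.height = width, height  # nominal size, never fully loaded
        self.resident = {}  # (page_x, page_y) -> loaded page data

    def _load_page(self, px, py):
        # stand-in for streaming one tile from disk
        return f"page({px},{py})"

    def sample(self, x, y):
        key = (x // PAGE, y // PAGE)
        if key not in self.resident:           # "page fault": bring the tile in
            self.resident[key] = self._load_page(*key)
        return self.resident[key]

tex = SparseTexture(16384, 16384)  # far too big to keep fully resident
tex.sample(10, 10)
tex.sample(70, 10)
print(len(tex.resident))  # 2 -- only the two touched pages are resident
```

The whole 16384x16384 texture is never materialised; physical memory only holds the pages the sampler actually hit, which is the behaviour the quote describes.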
This is from April. Don't see a reason why the X1 shouldn't have hUMA.
Can someone from TechGAF explain the true advantage of this technology?
What is the difference between hUMA and normal unified memory, and how does this impact the PS4 and X1 architectures?
... and it could be stated that more and more calculations for graphical related tasks too would move to compute shaders/GPGPU calculations.
But PRT is GCN technology, even PC 7xxx cards have it already.
It is now a part of DirectX, nothing Xbox specific, PCs will have PRT capabilities with it.
Fixed:
PS4 CPU = Piccolo
PS4 GPU = SSJ3 Goku
PS4 gddr5 memory = Mystic Gohan
But tech enthusiasts and especially GAF will, so it's alright.

mainstream audience will not bother with this difference at all
So please explain it. I understand contract law, not computer architecture.
Aren't both MS and Sony customers of AMD? Weird to make one of them look bad.
Although his wording/phrasing isn't quite right, it's easy to see the point he is trying to put across (which is that changes/calculations are seen by both the CPU and GPU in memory in real time, without the need to copy or flush caches to update), so dismissing his post as "Absolute bollocks" is not needed. He clearly said at the start of the post that it was not a tech interpretation; it was what he thought with his knowledge.
If you are not happy with the way the console of your choice is shaping up, please refrain from throwing the toys out the pram.