
VGleaks: Orbis Unveiled! [Updated]

Respawn

Banned
I don't think so either, but MS isn't stupid. They had a valid reason for choosing DDR3 + eSRAM over GDDR5. They had the option to go for GDDR5 but chose not to.

Let's wait and see how this pans out when everything is unveiled.
They had the option?? When was this revealed??
 
Most of GAF doesn't know the difference either. They just see that one has a bigger number (8 over 4). The 4 gigs of GDDR5 in Orbis has a bandwidth of 176GB/s, while the 8 gigs of DDR3 in Durango is at 68GB/s, with 32MB of eSRAM at 102GB/s.

The bandwidth is how fast the memory can be accessed.
The Orbis has half as much RAM, but it's about 2.6 times faster than Durango's DDR3.
And that RAM is still about 75% faster than the eSRAM (which is a meager 32MB).

Having faster RAM means more data is available per frame.
Having more RAM means you can have more data loaded.

But that extra data is useless if it can't be accessed as quickly as the faster pool.

Per frame, that GDDR5 would allow about 3 gigs to be touched at 60fps, while the DDR3 would only allow about 1 gig per frame.
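The arithmetic behind that claim is just peak bandwidth divided by frame rate. A quick sketch using the rumored figures (these are peak numbers, so real-world usage would come in lower):

```python
# Peak data touchable per frame = peak bandwidth / frame rate.
# Figures are the rumored peaks from the thread, not sustained rates.

def gb_per_frame(bandwidth_gb_s: float, fps: int) -> float:
    """Upper bound on data the GPU/CPU could move in one frame."""
    return bandwidth_gb_s / fps

gddr5 = gb_per_frame(176, 60)  # ~2.93 GB/frame (Orbis GDDR5)
ddr3 = gb_per_frame(68, 60)    # ~1.13 GB/frame (Durango DDR3)
print(f"GDDR5: {gddr5:.2f} GB/frame, DDR3: {ddr3:.2f} GB/frame")
```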

Even AMD, the chip manufacturer for both Durango's and Orbis' CPUs and GPUs, says GDDR5 provides a performance increase over DDR3.

This is why most high-end GPUs use GDDR5 over the more "budget" DDR3.

AMD's APUs are also bandwidth limited: tests have shown that increasing the bandwidth of the system memory significantly increases performance.

So, yes, Orbis "only" has 4 gigs of RAM, which would require more streaming if you wanted extremely complex open-world environments, but the bandwidth is capable of keeping up.

And yes, DDR3 allows more RAM and more data loaded at once, requiring less HDD access and less streaming, but you'd suffer in terms of frame rate, resolution, alpha resolution, shadows, texture filtering, and more.

EDIT: Mind you, the Durango still has that 32MB of eSRAM, but it still has lower bandwidth than the GDDR5. Sure, its latency (time to be accessed) is lower, but GPU workloads are built to hide latency, so that matters far less there.


Yes, give me 4gb of gold instead of 8gb silver.

These have to be two of the best posts in the entire thread. For completely different reasons of course lol.
 

Ashes

Banned
If I get banned for defending myself when others are just mindlessly attacking what I say with nothing to back themselves up, then so be it.

If members get banned for shitting up a thread, it is usually a temporary ban.
For jnrs, it's one strike and you're out. And you have, I think, a day or two left before you become a full member. Just sayin'...

But go ahead. Do as you please.

For honour and console war glory. ;P
 

daveo42

Banned
Unified RAM being easier does not imply that the PS4 is like a PC.

Split RAM is closer to how a PC operates anyway (system RAM, with VRAM hanging off the PCIe bus). This will continue to be the case until true APUs arrive with everything on one die.

Split RAM hurt the PS3 from a programming standpoint. Unified RAM on a console makes sense since you are only there to push graphics and perform physics calcs all day.
 

spwolf

Member
Whatever happens, a game with a streaming engine will be held back by the slowest data source, which will be the Blu-ray drive (or the HDD, if installed).

Sure, if you're talking about the PS3, then yes. However, the amount of RAM, the speed of the BD drive, and hardware decoding make these non-issues. It's pretty silly to talk about it. As far as I remember, the main problem with BD drives was seek times, not throughput. This is where a hardware deflate decoder helps a lot, as it keeps data together.

Next-generation game engines don't use half the RAM the PS4 has right now. 4 GB seems to be there for the future and/or for things other than video cache.
 

Lord Error

Insane For Sony
And where was any relevant evidence to prove I was wrong?
I don't think you really provided any evidence that you're right either on that front. Not that you really could, because the only evidence that would matter is how said RAM would perform vs. other types of RAM under the setup that was used on PS3 (which was not the same as what is on PC configurations)
 

Arkham

The Amiga Brotherhood
You don't know jack shit.

Come on. Keep that shit in the other thread. This one's slightly more balanced.

Take the hint: even people you'd think would support you are pointing out your douchiness. But they're civil. Take a step back for a day or two.
 
I don't think you really provided any evidence that you're right either on that front. Not that you really could, because the only evidence that would matter is how said RAM would perform vs. other types of RAM under the setup that was used on PS3 (which was not the same as what is on PC configurations)
Well, if you want to argue it's not like an ordinary CPU in regards to RAM, then I would believe you if I had some data that shows it. If neither of us has any real data, then neither of us is right or wrong. Yet people are claiming I'm wrong without proof. XDR, with all its numbers, lacks any real-world relevance. If it was so great, there would be more applicable data on it in relation to general and specialized processors.

XDR and XDR2 have their uses, but Rambus charges a ton of money for them. After Intel bailed on Rambus, they have been in the business of patent trolling and overall hindering technology. Very few products actually use their designs. Even IBM didn't license XDR for its supercomputers when they used Cell; they used DDR.

So why is XDR so useful in the Cell inside the PS3? Was it worth it for Sony to pay for it? That was the original question, and I said no. Anyone arguing against that has put forth no logical argument or remotely useful data; paper numbers from the XDR specs are the only ones offered, and those aren't so useful when you're looking at an actual implementation. I don't care to pursue this argument any further, but I still stand by what I said.
 

Perkel

Banned
Please explain what you do know and how SRAM affects memory bandwidth. And please explain how DRAM is accessed, because I would love to know how your little mind comprehends this stuff.

And where did you get the numbers for your SRAM and GDDR5 prices? SRAM is much more expensive than using GDDR5: you not only need a large die, you need components to control two pools of memory, plus additional buses. But please do tell how you got that idea.

And please do give your analysis of texture bandwidth and how it is used. And why your opinion matters.

I would love, love to laugh at what you try to write.

Why should I? You're the one dropping knowledge without any supporting material.

It's even funnier since you dismiss XDR as shitty for games while using SRAM as your main point. Both XDR and SRAM were created mainly for very low latency... but continue, it's entertaining.
 

jett

D-Member
Well, I even said from the start that it's for data caching, but why is data streaming out of the picture all of a sudden?

You also mention that PC bandwidths are slower compared to Orbis and Durango, and you're right.

Max Payne 3 with no MSAA is sub-1GB of VRAM.



With 8xMSAA and everything maxed, VRAM is still sub-2GB. (Lol, 120FPS)


I think we'll be fine.

And you won't be seeing much use of MSAA either. The future is post-AA, it seems to me.
 

Boss Man

Member
Please explain what you do know and how SRAM affects memory bandwidth. And please explain how DRAM is accessed, because I would love to know how your little mind comprehends this stuff.

And where did you get the numbers for your SRAM and GDDR5 prices? SRAM is much more expensive than using GDDR5: you not only need a large die, you need components to control two pools of memory, plus additional buses. But please do tell how you got that idea.

And please do give your analysis of texture bandwidth and how it is used. And why your opinion matters.

I would love, love to laugh at what you try to write.
Dude, come on with this. You don't look smart like that; you just come off as super agitated and defensive. It honestly seems like you've spent a lot of time reading about things you don't actually understand.
 

coldfoot

Banned
So why is XDR so useful in the Cell inside the PS3? was it worth it for sony to pay for it? This was the original question and I said no. Anyone else arguing against it has put forth no logical argument or remotely useful data, numbers in paper from the XDR specs are the only ones put forth, they are not so useful when you are looking at using them in implementation. I don't really care to pursue this argument any further but I still stand by what I said.

Could you please show us the licensing agreement between Sony and Rambus and how much Sony pays in royalties for the memory in the PS3? XDR in itself is not any more expensive to produce than DDR, as it's just a certain mm² of silicon. So unless you can show us a document that says Sony is paying X amount in royalties, you really don't know whether Sony paid too much.
 

RoboPlato

I'd be in the dick
And you won't be seeing much use of MSAA either. The future is post-AA, it seems to me.

I agree, at least on consoles. I'm hoping SMAA 2x and 4x get used some. It does some MSAA passes but complements them with a good post-AA solution to improve performance and has pretty nice IQ.
 

Pimpbaa

Member
MSAA is horrible... it eats up way too much performance; anything over 2x MSAA plummets the frame rate big time on my 7970

It's horrible in a lot of modern games that use deferred rendering. The performance hit is way smaller in games that use forward rendering, but those are becoming less and less common.
 

USC-fan

Banned
I agree with most of your RAM analysis, but what you're being criticized for is true. Data isn't useless when it's cached into RAM. Also, having a larger pool increases performance when dealing with slow interfaces like the HDD or BD drive.

The speedier RAM is nice, but size isn't pointless. Also, quoting AMD doesn't prove anything, because these consoles are designed with their own memory architectures. Would AMD have said the same thing if eSRAM could be used inside PCs? What if the DDR3 were 68GB/s instead of 28GB/s? The AMD quote compares RAM at ~30GB/s to ~70GB/s. It doesn't really relate to the next-gen consoles.

Having the eSRAM accessible in parallel can in theory reach much better efficiency than GDDR5, just because of how DRAM is accessed compared to SRAM: you waste much less of what you load with SRAM than with DRAM. When the SRAM is used in parallel, data can come from both the DRAM and the SRAM; efficiency won't be as high then, but since the eSRAM can store frequently reused data while the DRAM holds everything else, it's a very efficient solution.

Microsoft's solution is probably better in the long run. It also costs more to implement eSRAM than to just go with GDDR5. Games today come close to using 4 GB of RAM on PC (VRAM + system RAM), and I can only see that going up. Sony's solution is easier to use from the get-go since it's exactly like a PC: they won't have to deal with two pools of memory and a controller managing both.

It's not as black and white as "176>68, so 8>4 doesn't matter" or "8>4, so it's better." Both solutions have drawbacks. Anyone saying GDDR5 is better just because it's faster is simply blind.
GDDR5 is better. There is no question; performance across the board is better. Having 16GB of slow DDR3 is no better than 5GB. You can only use about 1GB per frame; the rest is just a cache, and at some point more cache stops helping.

Faster RAM lets you access more of it each frame. Xbox 720 will always be limited to about 1GB per frame at 60fps; PS4 will have access to ~3GB per frame. Massive difference....

It's clear MS designed the system with so much RAM not for gaming but for running OS/media features. It's also reported that MS has gone with some other "secret sauce" to help it keep up.

The 32MB of eSRAM will help, but even that is no match for GDDR5...
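The "parallel pools" disagreement above can be put in numbers too. A rough sketch, again using the rumored peaks; note the combined figure assumes both buses are fully saturated at once, which real workloads won't manage:

```python
# Durango's two pools vs Orbis' single pool, in peak GB per 60fps frame.
# All figures are the rumored peaks from the thread, not sustained rates.
DDR3_GB_S, ESRAM_GB_S, GDDR5_GB_S, FPS = 68, 102, 176, 60

# If DDR3 and eSRAM really are read in parallel, Durango's combined
# peak approaches Orbis' single pool...
durango_combined = (DDR3_GB_S + ESRAM_GB_S) / FPS  # ~2.83 GB/frame
orbis = GDDR5_GB_S / FPS                           # ~2.93 GB/frame

# ...but the eSRAM share only applies to whatever fits in its 32 MB;
# everything else moves at the bare DDR3 rate.
durango_ddr3_only = DDR3_GB_S / FPS                # ~1.13 GB/frame

print(durango_combined, orbis, durango_ddr3_only)
```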
 
It's horrible in a lot of modern games that use deferred rendering. The performance hit is way smaller in games that use forward rendering, but those are becoming less and less common.

AMD has a new tiled forward renderer that supposedly copes with lots of lights without degrading performance, while keeping the memory and bandwidth advantages of forward rendering.

It was even used in a tech demo, where a puppeteer adjusts a scene with knight and dragon dolls.

Edit: Maybe it will pick up as a viable competitor to deferred rendering.
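The core trick in that tiled approach is culling lights per screen tile, so each pixel only shades against the lights that can actually reach it. A toy CPU sketch of the idea (illustrative only; a real renderer does this on the GPU in a compute shader, and the tile size and circle test here are my assumptions):

```python
# Minimal sketch of tiled light culling: split the screen into tiles
# and record, per tile, which lights can affect it. Shading then loops
# over that short list instead of over every light in the scene.
from dataclasses import dataclass

TILE = 16  # screen-space tile size in pixels (a common choice)

@dataclass
class Light:
    x: float       # screen-space center
    y: float
    radius: float  # screen-space radius of influence

def cull_lights(width, height, lights):
    """Return a dict mapping (tile_x, tile_y) -> lights touching it."""
    tiles_x = (width + TILE - 1) // TILE
    tiles_y = (height + TILE - 1) // TILE
    grid = {}
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            x0, y0 = tx * TILE, ty * TILE
            x1, y1 = x0 + TILE, y0 + TILE
            hits = []
            for light in lights:
                # circle-vs-rectangle overlap: clamp center to the tile
                cx = min(max(light.x, x0), x1)
                cy = min(max(light.y, y0), y1)
                if (light.x - cx) ** 2 + (light.y - cy) ** 2 <= light.radius ** 2:
                    hits.append(light)
            grid[(tx, ty)] = hits
    return grid
```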
 
If I get banned for defending myself when others are just mindlessly attacking what I say with nothing to back themselves up, then so be it.

The problem is that you're trying to defend yourself by attacking people rather than giving legit facts to back your claims. Not to mention, people in the game/tech industry have given their two cents on this and said how much better 4GB of GDDR5 is than 8GB of DDR3. What about you? Are you a game developer? Have you worked with a PS3 devkit and seen how XDR performs, or do you at least have comments from developers who have? Do you work at Intel/AMD or in the same industry?

Lastly, please don't make yourself a victim. Seriously.
 
This is probably an incredibly stupid question that no one could possibly answer right now but I'm on the verge of buying a Logitech G27 wheel for PC and PS3 and I was wondering, should I be confident that it will continue to be supported on the PS4?
 

i-Lo

Member
Well, I even said from the start that it's for data caching, but why is data streaming out of the picture all of a sudden?

You also mention that PC bandwidths are slower compared to Orbis and Durango, and you're right.

Max Payne 3 with no MSAA is sub-1GB of VRAM.



With 8xMSAA and everything maxed, VRAM is still sub-2GB. (Lol, 120FPS)


I think we'll be fine.

Newer models have much less flash.

Anyway, you don't need more than 8 for the OS, plus another 8 for games; a relatively fast 8 gig cache would suffice.

EDIT: Also, the Vita has 4 gigs of flash, used for the OS and likely games as well.



If it has the built-in flash, it really wouldn't need to, since they'd be able to optimize off of that.

Here's hoping the flash part, especially the function most people here would want out of it, is true.

As for the rest: if those two pictures show anything, it's that with optimization and working within the limits of frame rate and resolution, the next-generation machines should deliver some truly remarkable results.

And as for PS4's capabilities: if it is indeed going with GDDR5, that will be best exploited and displayed by Sony's first parties. Despite my ultra-conservative expectations about the results, I quiver in excitement at the thought of what teams like ND, Sucker Punch, PD, GG, and SSM will deliver.

Here's hoping one of their devs creates a western RPG.

This is probably an incredibly stupid question that no one could possibly answer right now but I'm on the verge of buying a Logitech G27 wheel for PC and PS3 and I was wondering, should I be confident that it will continue to be supported on the PS4?

Yes, it should be.
 

mrklaw

MrArseFace
And you won't be seeing much use of MSAA either. The future is post-AA, it seems to me.

Hopefully post-AA applied before the HUD is composited, so HUD elements don't get blurred.


Sure, if you're talking about the PS3, then yes. However, the amount of RAM, the speed of the BD drive, and hardware decoding make these non-issues. It's pretty silly to talk about it. As far as I remember, the main problem with BD drives was seek times, not throughput. This is where a hardware deflate decoder helps a lot, as it keeps data together.

Next-generation game engines don't use half the RAM the PS4 has right now. 4 GB seems to be there for the future and/or for things other than video cache.

How doesn't it matter? If you're driving at 100mph through your world and your Blu-ray drive can only transfer 27MB/s (assuming 6x), then you can only bring in 27MB of new data each second, or barely 1MB per frame if you're running at 30fps.
And that's optimal; if you need to seek, it'll drop even lower.
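Checking that math with the 27MB/s figure (peak sequential throughput for a 6x BD drive; seeking only makes it worse):

```python
# New data available per frame from a 6x Blu-ray drive.
# 27 MB/s is peak sequential throughput, per the post; seeks lower it.
BD_6X_MB_S = 27

def mb_per_frame(throughput_mb_s: float, fps: int) -> float:
    return throughput_mb_s / fps

print(mb_per_frame(BD_6X_MB_S, 30))  # 0.9 MB of fresh data per 30fps frame
print(mb_per_frame(BD_6X_MB_S, 60))  # 0.45 MB per 60fps frame
```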

Of course you keep as much common stuff in memory as possible, and that's where Durango starts to benefit, with more space for it.

Mandatory HDD installs will help.


Edit: another great post by sebbbi on virtual texturing. A long read, but worth it for those interested.

http://forum.beyond3d.com/showpost.php?p=1580827&postcount=39

According to him, with virtual texturing you only need about 40MB of resident textures for 720p and around 100MB for 1080p, with around 15MB/s of streaming for 1080p. That's well within the bounds of Blu-ray, and modern engines are already optimised for virtual texturing, so hopefully we should be in good shape.
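Those numbers are so low because virtual texturing only keeps resident the texture pages that visible pixels actually touch. A toy sketch of that residency cache (the page size, capacity, and LRU eviction here are my assumptions for illustration, not sebbbi's implementation):

```python
# Toy virtual-texture residency cache: only pages requested by visible
# pixels stay in memory; least-recently-used pages are evicted when
# full. Each cache miss corresponds to one page streamed from disc.
from collections import OrderedDict

PAGE_BYTES = 128 * 128 * 4  # one 128x128 RGBA page (64 KB, assumed size)

class PageCache:
    def __init__(self, budget_bytes):
        self.capacity = budget_bytes // PAGE_BYTES
        self.pages = OrderedDict()  # page_id -> texel data (stubbed)
        self.misses = 0             # pages that had to be streamed in

    def request(self, page_id):
        """Called for each page a visible pixel needs this frame."""
        if page_id in self.pages:
            self.pages.move_to_end(page_id)      # mark recently used
        else:
            self.misses += 1                     # would stream from disc
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)   # evict the LRU page
            self.pages[page_id] = b"texels"      # stand-in for real data

cache = PageCache(budget_bytes=40 * 1024 * 1024)  # ~40 MB, per the post
```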
 

Fafalada

Fafracer forever
spwolf said:
This is where a hardware deflate decoder helps a lot, as it keeps data together.
I've said this before: games have been deflating optical-disc data as standard practice for the past 15 years (at least). And there is no special sauce that can make a lossless algorithm significantly more efficient; the compression rates are limited by information theory.

A hardware unit can offload the CPU overhead, but the real win would be if this is an encoder AND decoder combo, because encoding is the heavier of the two processes, and more and more data needs to be written/sent as well.
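The asymmetry is easy to see with the standard zlib module (a stock deflate implementation): compression is the expensive, tunable direction, while decompression is comparatively cheap and always round-trips losslessly:

```python
# Deflate round-trip with the standard zlib module. Compressing at
# maximum effort is the slow side; inflating it back is the cheap side,
# which is why an encoder would be the bigger win for a hardware unit.
import zlib

data = b"repetitive game asset data " * 1000

packed = zlib.compress(data, level=9)    # deflate, maximum effort
assert zlib.decompress(packed) == data   # inflate: lossless round-trip

print(f"{len(data)} -> {len(packed)} bytes")
```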
 