
WiiU technical discussion (serious discussions welcome)

You're right of course. I should've read that more carefully, sorry.
No need to say sorry, I knew it because I researched it, and had read the very same PDF you presented.
Still, that seems almost ridiculously high to me and would make the PS4 250+ W. Much more than I'd have expected. Are you sure that GDDR5 chips are not quite a bit less power hungry today on more modern production processes (sadly I can't find anything on that on the net)?
It is ridiculously high, yes.

As for them being less power hungry, not really. These chips are still stuck at 46 nm and using the same voltage as before. The best part money can buy here is 4 Gbit chips at 1.35 V; any other solution will eat up even more energy.
Interesting; wasn't aware they had 512MB chips. Thought it was all 256 still at this point (at any sane price). I was envisioning 32 of those things on the board, lol. 16 isn't as bad.
Perhaps it'll have 32 of them; but since this seems like a late change, doubling density seems like the only sane way to avoid going back to the drawing board on the whole circuit and power supply design/testing.

I'm also assuming the dummies didn't go with a second DDR3 bank because the GPU and the CPU lack a DDR3 controller and it's kinda tight to change that at this point.

Most chips are still 256 MB, a lot of manufacturers are not offering 512 MB ones or do so in limited numbers. They're also kinda expensive seeing production is clearly not fully ramped up.

But the possibility of 32 chips for 8 GB is just too much, power-drain-wise.
8 GB of GDDR5 is overkill; hell, it makes me sad thinking 1 or 2 GB of that will be allocated for OS duties and perhaps another extra chunk will go to caching Blu-ray data, being fed at a whopping 27 MB/s.
Wii U has 1 GB out of 2 GB for OS functions. How does that make you feel?
It's not about the amount of RAM, it's the type. GDDR5 for shit like OS duties and caching is like taking a Ferrari into city traffic.

And everyone is ignoring the real bottleneck of this upcoming generation: sure, fast RAM is nice and all that, but the more RAM you have, the worse the real bottleneck here gets.

And that is Blu-ray transfer rate and how many times the usable RAM fits on a disc, because most of the data populating the RAM pool comes from the disc.

To illustrate my point:


PSone had 3 MB of RAM (2 MB plus 1 MB of VRAM), and the CD-ROM drive had a 300 KB/s throughput, which means filling those 3 MB took about 10 seconds (ignoring seek time).

Those 3 MB also fit 217 times on a disc, meaning that (in a non-linear way, just defining a pattern here, as there is always shared data) you could have 217 completely separate scenes in there to fill a 650 MB disc, which is why devs could only fill one by stepping up on FMVs.


On PS2, the drive had a 5.54 MB/s transfer rate, which for 32 MB of RAM means 32 MB of data takes 6 seconds to stream; 32 MB also fits 147 times on a 4.7 GB DVD.


This generation... Most multiplatform games were bound by X360 DVD storage limits, so we're talking 8.54 GB DVDs (and throughout most of the generation they only used 7 GB, as the rest was reserved for security countermeasures). Going by the same logic, it took 33 seconds to stream 512 MB off the disc at 15.85 MB/s, and 7/8.54 GB holds 512 MB 14/17 times (respectively).

Next generation, I'll assume the PS4 will only have 6 GB for games; that's 3 minutes and 48 seconds to stream 6 GB of data at 27 MB/s, and 6 GB fits roughly 4 times on a 25 GB Blu-ray.


This means a tendency for huge loading times, short games, or repetitive assets if the whole RAM pool is used on a regular basis. It's a major bottleneck, seeing as you can't possibly make 16-minute installs mandatory before you play a game (that's how long 25 GB of data takes to transfer).
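
If you want to check the math, here's a quick back-of-envelope sketch in Python; the RAM, drive and disc figures are the rough ones used above, not official specs:

```python
# Time to fill the usable RAM pool from the disc, and how many times that pool
# fits on a disc. All figures are the rough ones quoted above, not official specs.
systems = {
    # name             (usable RAM MB, drive MB/s, usable disc MB)
    "PSone":           (3,     0.3,   650),
    "PS2":             (32,    5.54,  4700),
    "X360 (DVD)":      (512,   15.85, 7168),
    "PS4 (assumed)":   (6144,  27.0,  25600),
}

for name, (ram_mb, rate, disc_mb) in systems.items():
    fill_s = ram_mb / rate        # seconds to stream one full RAM pool off the disc
    ratio = disc_mb / ram_mb      # how many RAM-sized chunks fit on the disc
    print(f"{name:14s} fill: {fill_s:6.0f} s   disc/RAM ratio: {ratio:5.1f}x")
```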


For games that do pre-emptive caching, they could be using 176 GB/s RAM for 27 MB/s transfers; that's what makes me sad. They're pouring money into a RAM type that is overkill for the tasks the hardware has to do to compensate for the fact that the drive is hellishly slow compared with the amount of data it has to feed the machine. You have comparatively shitty DDR3 on PCs and the like for a reason: it's cheap and it consumes 2.58 W per 16-chip DIMM. This... this is nuts; it's megalomaniac at a time when I thought Sony had learned its lesson.


Wii U will probably have textures with more compression (lower resolution too, seeing as it's meant for 720p) but far more manageable load times; in the end most people won't notice and it'll feel more like a console, seeing as installs won't be mandatory and games will simply run more seamlessly, without pauses in that sense. Although I do think it could have more RAM, just in case/for future-proofing reasons, 1 GB right now seems just right.
 

USC-fan

Banned
Did you forget about the standard HDD? With an HDD, everything you said doesn't matter.

Not sure why this talk is happening in the Wii U tech thread.
 

Kenka

Member
Did you forget about the standard HDD? With an HDD, everything you said doesn't matter.

Not sure why this talk is happening in the Wii U tech thread.
Well, true, but to be fair to him, the issue he raised is precisely why an HDD is mandatory for the PS4.

But why they went with a unified pool of GDDR5 (8 GB of it!) is just beyond me. I simply don't get it. Do they expect people to shell out $600 for the console, or do they plan on letting them sign subscription-based contracts to afford it?
 

LCGeek

formerly sane
Seeing last night's demos raises my expectations for Nintendo to produce some impressive visuals in the Wii U's second wave. I know there is a gap between them and Sony, but it shouldn't be that big. As I understand it, Wii U's GPU still has the same ballpark architecture, just less powerful. So I'm expecting results like what we saw last night, just limited in scope.

* stares at Retro Studios *

They have stepped up every generation without worrying, and the more I think about what they could be doing, the more I realize I shouldn't; too much pain.

At least my base prediction is coming true: the feature set and performance differences within DX11-class hardware are going to be too small for clear gaps, especially as this generation drags on. The cityscapes and the fidelity level are finally at a respectable PC level I can tolerate, and to think it's 1080p in some cases. Regardless of resolution, this generation's games should please most people who look at them.
 

USC-fan

Banned
Well, true, but to be fair to him, the issue he raised is precisely why an HDD is mandatory for the PS4.

But why they went with a unified pool of GDDR5 (8 GB of it!) is just beyond me. I simply don't get it. Do they expect people to shell out $600 for the console, or do they plan on letting them sign subscription-based contracts to afford it?
It's clear RAM was one thing devs really wanted. I don't see a $600 launch price. The whole console is based on one chip.

Again not sure why this debate is in this thread...
 

Kenka

Member
They announced the price?
No, but 8 GB of GDDR5, of all RAM varieties, doesn't come cheap.

A link to the past

When Microsoft was finalizing hardware specifications for Xbox 360, they apparently asked developers to make a choice: a bundled hard drive standard to every Xbox 360 or 512MB RAM, compared to the previously planned 256MB RAM. Epic Games' VP Mark Rein told this story at an Xbox Community Party in Canada recently, as recorded in a Major Nelson podcast and picked up on by Team Xbox.

Epic Games chose RAM and produced a screen shot of what Gears of War would look like with just 256MB RAM for Microsoft. "...the 512 megs of RAM was way more important, cause otherwise you couldn't do this level of graphics if you had to both write your program and do your graphics in 256 megs. Nothing would really look that HD," he said.

Despite the costs, Microsoft gave into the pressure. "So the day they made the decision, we were apparently the first developer they called; we were at Game Developers Conference, was it two years ago, and then I got a call from the chief financial officer of MGS and he said 'I just want you to know you cost me a billion dollars'"
And no, memory prices haven't fallen since then. So think about how much Sony invested in those 8 GB if 256 MB was worth $1B. We're not talking about the same type of RAM obviously (GDDR3 back then as opposed to GDDR5 now), but the sacrifice is huge nonetheless.
 

Kenka

Member
Regarding Wii U talk, the gap with the PS4 looks all the more significant. But again, let's see how the scalability of the different engines used this gen will deliver the wow factor on less capable hardware. Well, or not well? We'll see with Watch Dogs.
 

lherre

Accurate
It's still going to be much more expensive for a motherboard that can take on the complexity of 16 chips; I don't see how this thing will even be the same size as the launch PS3, and that is with the PSU taken out of the box.

Although 8GB of GDDR5 did win me over. If I get another console beyond Wii U, based on tech, it will be the PS4. This is a bit off topic, but PS4 should actually exceed XB3 graphically thanks to the same architecture and a bit more grunt on the GPU dedicated to gaming (so frame rates should be more solid, and it might even produce a larger draw distance and other small effects that might look nice). The real kicker, though, is the extra power dedicated to GPGPU operations, so stuff like better physics, realistic wind effects, much more realistic destruction, and smoke/dust and other particle effects could be added on top of the XB3 game when porting over. PS4 obviously won't have as mature a development environment, since XB3 will use a modified DX11 while PS4 will have its custom API redesigned to introduce the new architecture, but that won't matter too much since deep down the basic GPUs are going to do the same thing, and even Wii U should be able to handle just about everything those GPUs are doing in scaled-back versions of the same games.

Sorry again for being a bit off topic, I just wanted to sort of tail end the discussion of the PS4 we were having here thanks to the event yesterday.

Sony basically has a "copy" of the DX11 API under another name for PS4.
 

Gahiggidy

My aunt & uncle run a Mom & Pop store, "The Gamecube Hut", and sold 80k WiiU within minutes of opening.
How does the API on Wii U compare? Will it be a hurdle for devs wanting to port games built for other platforms?
 
Did you forget about the standard HDD? With an HDD, everything you said doesn't matter.

Not sure why this talk is happening in the Wii U tech thread.
Ahah, are you serious? The HDD can also only be fed from the disc at 27 MB/s; and I could write a huge post detailing caching and loading techniques, but I get the feeling you only superficially skim through stuff and then proceed to reply with utopian one-liners like this one, so I honestly won't bother.

The transfer speed was half of the equation; but I'll reiterate: even with all the caching techniques, hard drives and tricks available to man, nothing saves you from a 3+ minute loading screen if you're using all that RAM and loading straight from the disc. And you have to, seeing as mandatory installs defeat the purpose of a console; from the moment you insert a disc, go to the start menu and start playing, probably not even 4 minutes have elapsed. Hence the transfer rate is a real bottleneck you have to tackle: start with a smaller area, or stall the player so the rest can load in the background.

The HDD is there to counter it, yes, but how much memory do you have for that task? And how many games can it support at once? You have to be really clever about it, and devtools rarely help in that matter; pre-emptive and intelligent caching takes work and can only disguise/reduce a bottleneck that evident.

Even if it caches transparently (say you reach a save point, turn off the console, and it saves that level to the HDD so loading will be faster next time around), insert another game disc (or a few) and it'll need that space and overwrite it; it's gone, hello huge loading screen the next time you decide to play.

It's a bottleneck alright, and if you don't acknowledge it, it's you who's delusional. You can design games around it, you can avoid filling the whole RAM, and you can use two types of texture LODs, but it's a bottleneck alright, one that hampers the full use of all the memory you have available.

As I said though, that's half the equation; notice how games lost scope and length this generation. Sure, content takes more time to create now, but it also has to do with storage limitations; you couldn't do it even if you wanted to, unless the solution is making it look same-ish (same assets over and over), because you simply don't have disc space; and the more RAM you have, the more disc storage you spend per scene.

That ratio gets even worse with 6 GB of RAM for games and 25 GB discs; it's kinda obvious.


I don't post in the other threads you're talking about because they move too fast, and I don't want to argue in circles with blockheads. I'm putting stuff into perspective here; you either agree or you don't, simple. I care about being read, not about stating my point over and over again for people who refuse to understand my logic.
 

Log4Girlz

Member
(...) Wii U will probably have textures with more compression (lower resolution too, seeing as it's meant for 720p) but far more manageable load times; in the end most people won't notice and it'll feel more like a console, seeing as installs won't be mandatory and games will simply run more seamlessly, without pauses in that sense. Although I do think it could have more RAM, just in case/for future-proofing reasons, 1 GB right now seems just right.

Larger amounts of RAM improve load times, and with proper design a game can hide them incredibly well. I remember very long load times on the PSone in some games. Additionally, there is nothing keeping a dev from using less than the full amount of RAM, but that wouldn't be necessary. More RAM is a boon.
 
Larger amounts of RAM improve load times, and with proper design a game can hide them incredibly well. I remember very long load times on the PSone in some games. Additionally, there is nothing keeping a dev from using less than the full amount of RAM, but that wouldn't be necessary. More RAM is a boon.
Large amounts of RAM only improve loading if you can get the data into the RAM fast enough to make that big of a difference. His/her point is that the transfer speed to the RAM isn't fast enough for the amount of RAM. That's why it's overkill; much of it will probably go unused for the reasons stated. It was never said that more RAM doesn't help loading, but that much of the expensive RAM will likely go to waste.
 

nordique

Member
(...) Next generation, I'll assume the PS4 will only have 6 GB for games; that's 3 minutes and 48 seconds to stream 6 GB of data at 27 MB/s, and 6 GB fits roughly 4 times on a 25 GB Blu-ray. (...)

I never even considered this before

very informative post, thanks lostinblue :)
 
But why they went with a unified pool of GDDR5 (8 GB of it!) is just beyond me. I simply don't get it. Do they expect people to shell out $600 for the console, or do they plan on letting them sign subscription-based contracts to afford it?
Probably two reasons.

First, Sony listens way too much to developers, and developers value their numbers way too much; x720 had 8 GB, so the pressure was on Sony to match them. Plus, they asked around "what did you dislike the most about the PS3?" and the second most popular answer must have been the separate memory pools with different kinds of RAM. Current-gen consoles were really starved for memory and used most RAM for graphics anyway, and that configuration made them move stuff around all the time (instead of just dropping it there), so it was a bottleneck on top of a bottleneck, further hampered by the fact that the in-game OS footprint on PS3 was bigger than on the X360 and it didn't have a dedicated framebuffer. Since they were both memory starved, everything counted, giving X360 the edge.

Second, it's probably too late to make significant changes to the design; the chips (CPU and GPU, or whatever coordinates the buses) would have to have DDR3 buses in place (which they don't), and doing a revised tape-out now is probably already too tight and costly in R&D, and could mean a delay or a lack of launch stock. Same for redesigning the PCB/circuit board; so if they were using sixteen 256 MB chips, they only had to go for sixteen 512 MB ones; it's a matter of soldering a different part.

IMO that's probably it.
 

joesiv

Member
PSone had 3 MB of RAM (2 MB plus 1 MB of VRAM), and the CD-ROM drive had a 300 KB/s throughput, which means filling those 3 MB took about 10 seconds (ignoring seek time).

Those 3 MB also fit 217 times on a disc, meaning that (in a non-linear way, just defining a pattern here, as there is always shared data) you could have 217 completely separate scenes in there to fill a 650 MB disc, which is why devs could only fill one by stepping up on FMVs.

On PS2, the drive had a 5.54 MB/s transfer rate, which for 32 MB of RAM means 32 MB of data takes 6 seconds to stream; 32 MB also fits 147 times on a 4.7 GB DVD.

This generation... Most multiplatform games were bound by X360 DVD storage limits, so we're talking 8.54 GB DVDs (and throughout most of the generation they only used 7 GB, as the rest was reserved for security countermeasures). Going by the same logic, it took 33 seconds to stream 512 MB off the disc at 15.85 MB/s, and 7/8.54 GB holds 512 MB 14/17 times (respectively).

Next generation, I'll assume the PS4 will only have 6 GB for games; that's 3 minutes and 48 seconds to stream 6 GB of data at 27 MB/s, and 6 GB fits roughly 4 times on a 25 GB Blu-ray.
It's an interesting way of looking at things, good post.

It also shows the trend toward larger, more seamless worlds. In older generations games were split into smaller chunks (levels); now it is much more common to have large worlds with multiple scenarios within them.

While you're right that there is a balancing act between initial load time and streaming, I think most would opt for a mix of the two. In fact, with this mentality, the game could be loading the actual game world from the moment the application has started (during logos/NISs/menus), so by the time you are actually ready to start playing there might be no initial load at all. If worlds are big enough, and LODs are managed well enough, you could stream in higher-resolution models/textures as you play, and eventually everything will be in memory. This isn't too much of a problem anyway unless you're in the sky, as you typically see only a fraction of a level at a given time.

If it takes 5-10 minutes to load in the full level without it being apparent while you're playing, and then you get zero load times after that, I'd say it's a net win.
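
As a rough check on that 5-10 minute figure, at the 27 MB/s drive rate quoted earlier in the thread (ignoring seeks, so an optimistic upper bound):

```python
# Rough upper bound on how much data can be streamed in the background at the
# 27 MB/s Blu-ray rate quoted in this thread (no seeks, purely illustrative).
RATE_MB_S = 27.0
for minutes in (5, 10):
    gb = RATE_MB_S * minutes * 60 / 1024
    print(f"{minutes:2d} min of streaming ~= {gb:.1f} GB")   # ~7.9 GB and ~15.8 GB
```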

This also highlights how the instant-on feature Sony touted (powering down but keeping the game in memory, so there's no need to reload it on startup) would be very useful.

Oh, and indeed the Wii U with a lower target resolution may do decently, though the ratios aren't really in its favor.
1080p -> 720p = around 2.25x the pixels
6-7.5 GB -> 1 GB = 6x+ the usable RAM

So even with the drop in resolution, there will have to be a greater drop in visual fidelity, and more need for load times in large worlds (though it might still be well within the feasibility of streaming).
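
For reference, the ratios above work out roughly as follows (the 6 GB game-RAM figure is this thread's assumption, not a confirmed split):

```python
# Resolution and RAM ratios referenced above; the next-gen game-RAM figure is
# an assumption from this thread, not a confirmed spec.
pixels_1080p = 1920 * 1080
pixels_720p = 1280 * 720
print(pixels_1080p / pixels_720p)       # 2.25x the pixels

assumed_nextgen_game_ram_gb = 6.0       # assumed usable game RAM next gen
wiiu_game_ram_gb = 1.0                  # Wii U RAM available to games
print(assumed_nextgen_game_ram_gb / wiiu_game_ram_gb)   # 6.0x the RAM
```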
 
Larger amounts of RAM improve load times, and with proper design a game can hide them incredibly well. I remember very long load times on the PSone in some games. Additionally, there is nothing keeping a dev from using less than the full amount of RAM, but that wouldn't be necessary. More RAM is a boon.
Yes, but that's my point. Using GDDR5 for caching and OS duties is kinda sad, overkill to say the least.

Assuming proper design and tailor-made solutions to be the norm is never a good idea. You and I could think of ways to counter these bottlenecks with smart level design (start in a small room, limit draw distance, keep surrounding areas not too big and thus easier to load while you pre-cache a bigger area, for instance), but most won't. It's a bottleneck alright, and sidestepping it while trying to get the most out of the hardware has an impact on level design.

PSone could have huge load times because the drive was CLV, seek times were actually enormous, and it was way before file duplication was thought of; keeping frequently accessed data on the outer part of the disc was also useless, seeing as the read speed was constant.
It's an interesting way of looking at things, good post.

It also shows the trend toward larger, more seamless worlds. In older generations games were split into smaller chunks (levels); now it is much more common to have large worlds with multiple scenarios within them.

While you're right that there is a balancing act between initial load time and streaming, I think most would opt for a mix of the two. In fact, with this mentality, the game could be loading the actual game world from the moment the application has started (during logos/NISs/menus), so by the time you are actually ready to start playing there might be no initial load at all. If worlds are big enough, and LODs are managed well enough, you could stream in higher-resolution models/textures as you play, and eventually everything will be in memory. This isn't too much of a problem anyway unless you're in the sky, as you typically see only a fraction of a level at a given time.

If it takes 5-10 minutes to load in the full level without it being apparent while you're playing, and then you get zero load times after that, I'd say it's a net win.

This also highlights how the instant-on feature Sony touted (powering down but keeping the game in memory, so there's no need to reload it on startup) would be very useful.

Oh, and indeed the Wii U with a lower target resolution may do decently, though the ratios aren't really in its favor.
1080p -> 720p = around 2.25x the pixels
6-7.5 GB -> 1 GB = 6x+ the usable RAM

So even with the drop in resolution, there will have to be a greater drop in visual fidelity, and more need for load times in large worlds (though it might still be well within the feasibility of streaming).
I agree with everything you said. Although I think storage isn't cooperating with varied open worlds, level design has certainly progressed from closed/contained areas.

Also, before, in order to load things you needed CPU action; these days that kind of loading can work in parallel without impact (and CPUs are more powerful too). Kinda like one of the differences between USB and FireWire (FireWire is CPU-independent, USB isn't and thus takes CPU resources).


Problem is, it's certainly manageable, but it's still a problem because you can't simply ignore it; rather, you have to take it into account and design around it. Smart loading methods have to be programmed, and different texture LODs or dynamic pre-emptive loading have to be well thought out and implemented on a game-by-game basis; assisting tools for that are not abundant.

The more complex machines become, the less likely developers are to properly use everything that's in there. I'm betting a lot of games won't really be all that different from how they are now; those games can also decide not to tackle the full RAM bank and instead go for a middle ground of sorts. It really depends on the type of game though.


As for that differential ratio, I think Nintendo probably doesn't care; they're the type of company that compromises on features in order to make the end result more predictable. That's most likely not the only factor, but I'm kinda sure they see smaller load times as an advantage. I certainly think they should have gone with 3 GB for the console though. 1 GB of RAM for games is IMO fine for now; but that's pretty subjective of course.
 

Donnie

Member
Isn't this the WiiU technical discussion thread? :) Not that this isn't a worthwhile discussion, it is, but there isn't much WiiU involved.
 
So even with the drop in resolution, there will have to be a greater drop in visual fidelity, and more need for load times in large worlds (though it might still be well within the feasibility of streaming).

On that point, how feasible would it be to stream worlds at an acceptable rate? With its disc transfer speed of 22 MB/s, it should be a bit better at this than current-gen consoles, right?
 
On that point, how feasible would it be to stream worlds at an acceptable rate? With its disc transfer speed of 22 MB/s, it should be a bit better at this than current-gen consoles, right?
Going by the proportion method I proposed, yes and theoretically no; it has double the X360's RAM available for games but doesn't double the 15.85 MB/s of X360's DVD drive, so it could potentially perform a little worse if all the RAM is used for the same scenario on both platforms, albeit with heavier assets on the Wii U. Of course, like most tech talk, that was somewhat simplified.

X360 is CAV (Constant Angular Velocity), meaning only the outer data on the disc, the very last few MBs at that, achieves 15.85 MB/s; inner disc reads will amount to less, 6.6 MB/s at worst, averaging around 11.2 MB/s.

Now, we lack much data on Wii U's drive, but the Blu-ray drives it's derived from are CLV (Constant Linear Velocity), meaning 22 MB/s is both the minimum and the maximum it'll provide. So it actually does double the X360 average.

And against PS3's 9 MB/s CLV it's a no contest.
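
Putting those drive figures side by side, a small sketch assuming the average/constant rates above and the rough "RAM available to games" figures used in this thread:

```python
# Seconds to stream a full game-RAM pool at the drive rates discussed above.
# The 22 MB/s Wii U figure and the RAM splits are this thread's rough numbers,
# not official specs.
drives = {
    # name             (avg read MB/s, game RAM MB)
    "X360 (CAV avg)":  (11.2, 512),
    "PS3 (CLV)":       (9.0,  512),
    "Wii U (CLV)":     (22.0, 1024),
}
for name, (rate, ram_mb) in drives.items():
    print(f"{name:15s} {ram_mb / rate:5.1f} s per full RAM pool")
```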

Tessellation can also help that equation, seeing as complex vertex data takes space, so one can feed simpler vertex data to the GPU in order to spare RAM and bandwidth that way. It surfaced that the 3DS actually has it in its spec for those very same reasons (I suspect because designers expected cramped RAM setups in embedded designs, so mostly for bandwidth reasons). Although I doubt it'll get much usage for those reasons.

It's not too bad; but streaming expansive worlds is always hard, you have to have a framework in place for that, and lots of scripting to define behaviour. That's the hardest part really.
Quick question

PC / Xboxnxt / PS4 - x86
vs.
WiiU - PowerPC

How much of an issue is this for cross-platform development?
Not huge, recompiling across architectures can be pretty straightforward these days.

It's an issue for PowerPC-specific optimization though, seeing as every piece of hardware has its quirks (in Wii U's CPU case I'd say probably paired singles) that you have to optimize specifically for. With the competition using the same architecture, they might have more reason to optimize for it than for the Wii U (of course, the Wii U being less powerful might also mean some custom optimization for it is mandatory rather than optional).

Thankfully though, the CPU is simple, kinda "what you see is what you get" otherwise; very little bottlenecking to take into account, so not much effort to tap it.
 
Going by the proportion method I proposed, yes and theoretically no; it has double the X360's RAM available for games but doesn't double the 15.85 MB/s of X360's DVD drive, so it could potentially perform a little worse if all the RAM is used for the same scenario on both platforms, albeit with heavier assets on the Wii U. Of course, like most tech talk, that was somewhat simplified.

X360 is CAV (Constant Angular Velocity), meaning only the outer data on the disc, the very last few MBs at that, achieves 15.85 MB/s; inner disc reads will amount to less, 6.6 MB/s at worst, averaging around 11.2 MB/s.

Now, we lack much data on Wii U's drive, but the Blu-ray drives it's derived from are CLV (Constant Linear Velocity), meaning 22 MB/s is both the minimum and the maximum it'll provide. So it actually does double the X360 average.

And against PS3's 9 MB/s CLV it's a no contest.

Tessellation can also help that equation, seeing as complex vertex data takes space, so one can feed simpler vertex data to the GPU in order to spare RAM and bandwidth that way. It surfaced that the 3DS actually has it in its spec for those very same reasons (I suspect because designers expected cramped RAM setups in embedded designs, so mostly for bandwidth reasons). Although I doubt it'll get much usage for those reasons.

It's not too bad; but streaming expansive worlds is always hard, you have to have a framework in place for that, and lots of scripting to define behaviour. That's the hardest part really. Not huge, recompiling across architectures can be pretty straightforward these days.

It's an issue for PowerPC-specific optimization though, seeing as every piece of hardware has its quirks (in Wii U's CPU case I'd say probably paired singles) that you have to optimize specifically for. With the competition using the same architecture, they might have more reason to optimize for it than for the Wii U (of course, the Wii U being less powerful might also mean some custom optimization for it is mandatory rather than optional).

Thankfully though, the CPU is simple, kinda "what you see is what you get" otherwise; very little bottlenecking to take into account, so not much effort to tap it.

Thanks for the detailed answer. One of the last points you made about the Wii U being less powerful warranting some custom optimization was something I suggested earlier, but you explained it better than I did.

The RAM won't ultimately decide whether Wii U gets downports anyway. That fate will be decided by business drivers.
Agreed.
 

disap.ed

Member
The RAM won't ultimately decide whether Wii U gets downports anyway. That fate will be decided by business drivers.

They shouldn't have cheaped out on the RAM though. I mean, it's effin' DDR3; it would have probably cost them less than 10 bucks (including the raised motherboard complexity), and they (or the devs) would ultimately have had one less concern (maybe the biggest one) regarding ports.
It would have also meant twice the bandwidth.

They should have also put 2 more CPU cores in there; it wouldn't have meant a big increase in die size or power draw. Including the DSP, this would have meant parity with the 6 XB360 threads and wouldn't be too far off the 8 PS4 threads.
 
No need to say sorry, I knew it because I researched it, and had read the very same PDF you presented. It is ridiculously high, yes.

I now think we both misunderstood the document: each GDDR5 chip is connected via a 32-bit bus. Therefore, "2 Gbit at 256 bit" means eight 2 Gbit chips that together draw 11.8W or 8.7W respectively. Sounds much more like what I'd have expected.
 

Kenka

Member
It's not too bad; but streaming expansive worlds is always hard, you have to have a framework in place for that, and lots of scripting to define behaviour. That's the hardest part really. Not huge, recompiling across architectures can be pretty straightforward these days.
OK, so that's one of the real challenges: developers will have to optimize their code to put a greater emphasis on procedural generation of terrain when they downport to Wii U, and make some effort to build "virtual backgrounds" that give the impression the world is vast while it is actually only "built" for a smaller radius around the player.

OK, now, how does this condition the speed at which you can travel across the world?

Good post, Blue! You pack them in combos, I see.
 

z0m3le

Banned
I now think we both misunderstood the document: each GDDR5 chip is connected via a 32-bit bus. Therefore, "2 Gbit at 256 bit" means eight 2 Gbit chips that together draw 11.8W or 8.7W respectively. Sounds much more like what I'd have expected.

http://originus.samsung.com/us/business/oem-solutions/pdfs/Green-GDDR5.pdf is the document I am reading; the problem I see is the bolded part in your statement.

That would be 8x 32 MByte chips, which makes absolutely no sense. I think it is more likely that it means each 256 MByte of GDDR5 uses 8.7 watts.

http://www.anandtech.com/show/3621/amds-radeon-hd-5870-eyefinity-6-edition-reviewed

If you compare the HD5870 1GB vs the 2GB eyefinity version, you'll notice the increase is more display ports but the only spec change really is a 40watt bump to the TDP, adding another 1GB is likely in line with our first line of math, which was about a ~17.5watt bump per GByte of GDDR5. (you'll have to excuse that the HD 5870 is using 2Gbit chips rather than the 4Gbit chips available today though)
 
SRAM doesn't have to wait for its banks to refresh like eDRAM does, so I believe there is a longer latency associated with eDRAM; but there was a debate on that a bunch of pages back, and it appears that at around 4 MB the latencies are pretty close, while above that eDRAM actually benefits. The Wii U CPU having 3 MB, it would probably be slower than SRAM, but having 3x more of it than they otherwise could without a bigger die probably offsets that by far.

I never heard about the 512KB thing for Gekko/Broadway, but bearing in mind the higher clock speed and more cores, more cache makes sense. Each core can get more done per second than either of the previous obviously, so they would benefit from larger caches.

Some modern CPUs have 8MB l2 and 8MB l3 for a total of 16MB just on one die, so cache obviously continues to help.
Yes, but then we have the fact that in Wii mode, the Wii U uses the same amount of eDRAM as the Wii had of SRAM.
I assume that the eDRAM on Wii U has latencies identical to the SRAM on Wii, because if that weren't the case, backward compatibility wouldn't be achievable at 100% accuracy.
 
http://originus.samsung.com/us/business/oem-solutions/pdfs/Green-GDDR5.pdf is the document I am reading; the problem I see is the bolded part in your statement.

That would be 8x 32 MByte chips, which makes absolutely no sense. I think it is more likely that it means each 256 MByte of GDDR5 uses 8.7 watts.

What I think it means is that there are 8x256MB chips. Otherwise the "256 bit" would make no sense, since GDDR5 chips are always 32 bit.


http://www.anandtech.com/show/3621/amds-radeon-hd-5870-eyefinity-6-edition-reviewed

If you compare the HD5870 1GB vs the 2GB eyefinity version, you'll notice the increase is more display ports but the only spec change really is a 40watt bump to the TDP, adding another 1GB is likely in line with our first line of math, which was about a ~17.5watt bump per GByte of GDDR5. (you'll have to excuse that the HD 5870 is using 2Gbit chips rather than the 4Gbit chips available today though)

The measured power consumption only differs by 6 watts though (http://www.anandtech.com/show/3621/amds-radeon-hd-5870-eyefinity-6-edition-reviewed/9), which fits nearly perfectly with the 11.8W in Samsung's document (for what I think is 8x 256MB = 2 GB).
 

z0m3le

Banned
What I think it means is that there are 8x256MB chips. Otherwise the "256 bit" would make no sense, since GDDR5 chips are always 32 bit.
2Gbit is 256MBytes, not 2GBytes, which is why what you are saying doesn't make sense.

The measured power consumption only differs by 6 watts though (http://www.anandtech.com/show/3621/amds-radeon-hd-5870-eyefinity-6-edition-reviewed/9), which fits nearly perfectly with the 11.8W in Samsung's document (for what I think is 8x 256MB = 2 GB).

Anandtech said:
Power Consumption
AMD did list a slight increase in power consumption for the 5870 Eyefinity 6 cards. In real world usage it amounts to a 6 - 7W increase in power consumption at idle and under load. Hardly anything to be too concerned about.

It's worth mentioning that these power numbers were obtained in a benchmark that showed no real advantage to the extra 1GB of frame buffer. It is possible that under a more memory intensive workload (say for example, driving 6 displays) the 5870 E6 would draw much more power than a hypothetical 6-display 1GB 5870.

It is far more likely that the GPU just isn't using the 2GB of memory there. If they had, for instance, used a higher-resolution test, it could have shown a more dramatic difference... For instance, 7 watts per GB is still beyond the 11.8 watts you are talking about; 8 GBytes at 11.8 watts per 2 GBytes is 47 watts, but what this test is showing is a 7-watt increase in GPU power draw with only 1 GByte being added, which makes no sense with the PDF document at all, unless of course Anandtech's second paragraph is correct and the extra 1 GByte of memory is hardly being used, thus allowing a much lower increase in power draw than if all 2 GBytes of memory were being used.

Honestly, I think you are misunderstanding the document, but I am not an engineer and can only read that document and your quotes of it for what they are, and that is 2 Gbits, not 2 GBytes.

Edit: I missed something. "In real world usage it amounts to a 6 - 7W increase in power consumption at idle and under load." This almost certainly points to the reality that the 2 GBytes are never being fully used, unless you believe that the RAM's difference between load and idle is only 1 watt.
 
OK, so that's one of the real challenges: developers will have to optimize their code to put a greater emphasis on procedural generation of terrain when they downport to Wii U, and make some effort to build "virtual backgrounds" that give the impression the world is vast while it is actually only "built" for a smaller radius around the player.

OK, now, how does this condition the speed at which you can travel across the world?

Good post, Blue! You pack them in combos, I see.
I believe that is a good method you propose, and that is a good question. I would think it would heavily depend on how detailed the world is. For faster games, you may be able to get away with blurring the background a bit during gameplay, since the player will likely have to focus more on controlling the game.
 
2Gbit is 256MBytes, not 2GBytes, which is why what you are saying doesn't make sense.

Let me explain:
The GDDR5 interface is 32 bit wide. Therefore for a 256 bit bus, 8 chips are used in parallel (32 * 8 = 256), regardless of their size. It makes no sense to speak of 256 bit when you only mean a single chip. Conclusion: What they mean is 8x 2Gbit chips.
I admit it's a bit confusing for people who don't look at such specs often.

Concerning PS4, we can even determine the RAM's clock rates depending on the size of the RAM chips:
The formula for bandwidth is "effective clock rate * bus width" --> formula for effective clock rate: "bandwidth / bus width". The effective clock rate for GDDR5 is "real clock rate * 4". The PS4's memory bandwidth is specified at 176 GB/s.
In case they are using 2 Gb chips, they need 32 of them --> 512bit = 64 Byte bus width. That'd make the clock rate: 176 / (4 * 64) = 0.6875 GHz = 687.5 MHz.
In case of 4 Gb chips, there are 16 of them --> 256bit bus width --> 1375 MHz.
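
A small sketch of that derivation, using the announced 176 GB/s figure and the bus widths given above:

```python
# Deriving the GDDR5 clock from the PS4's announced 176 GB/s bandwidth.
# GDDR5 is quad-pumped: effective transfer rate = 4x the real clock.
# The chip counts / bus widths are the two cases discussed above.
BANDWIDTH_GB_S = 176.0

for chips, bus_bits in ((32, 512), (16, 256)):
    bus_bytes = bus_bits / 8
    effective_gt_s = BANDWIDTH_GB_S / bus_bytes     # billions of transfers per second per pin
    real_clock_mhz = effective_gt_s / 4 * 1000
    print(f"{chips} chips, {bus_bits}-bit bus -> ~{real_clock_mhz:.1f} MHz real clock")
```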


It is far more likely that the GPU just isn't using the 2GB of memory there. If they had, for instance, used a higher-resolution test, it could have shown a more dramatic difference... For instance, 7 watts per GB is still beyond the 11.8 watts you are talking about; 8 GBytes at 11.8 watts per 2 GBytes is 47 watts, but what this test is showing is a 7-watt increase in GPU power draw with only 1 GByte being added, which makes no sense with the PDF document at all, unless of course Anandtech's second paragraph is correct and the extra 1 GByte of memory is hardly being used, thus allowing a much lower increase in power draw than if all 2 GBytes of memory were being used.

I don't think that the power draw varies a lot depending on usage, since DRAM has to be constantly refreshed anyway.
 

z0m3le

Banned
Let me explain:
The GDDR5 interface is 32 bit wide. Therefore for a 256 bit bus, 8 chips are used in parallel (32 * 8 = 256), regardless of their size. It makes no sense to speak of 256 bit when you only mean a single chip. Conclusion: What they mean is 8x 2Gbit chips.
I admit it's a bit confusing for people who don't look at such specs often.

Concerning PS4, we can even determine the RAM's clock rates depending on the size of the RAM chips:
The formula for bandwidth is "effective clock rate * bus width" --> formula for effective clock rate: "bandwidth / bus width". The effective clock rate for GDDR5 is "real clock rate * 4". The PS4's memory bandwidth is specified at 176 GB/s.
In case they are using 2 Gb chips, they need 32 of them --> 512bit = 64 Byte bus width. That'd make the clock rate: 176 / (4 * 64) = 0.6875 GHz = 687.5 MHz.
In case of 4 Gb chips, there are 16 of them --> 256bit bus width --> 1375 MHz.




I don't think that the power draw varies a lot depending on usage, since DRAM has to be constantly refreshed anyway.

That isn't what the PDF is saying; it is likely saying that a 256-bit bus is achievable and that 2 Gbit uses 8.7 watts, which means every 256 MB of 32-bit GDDR5 consumes 8.7 watts. Why would they give you the power consumption of eight 2 Gbit chips? That is far less useful, because it only gives you an energy consumption guide for 2 GBytes of GDDR5.

I'll have to leave it to more experienced people than myself, but this is what it looks like to me. Otherwise I don't see why GPUs with twice the memory would jump up so dramatically in TDP, as we see in the Anandtech review posted above.

I would like to point out, though, that what you are suggesting is that 256 MB of 16-bit GDDR5 consumes ~0.54 watts, thanks to the notebook part of that PDF, and if 4 Gbit chips were used, 0.54 watts could be achievable for 512 MB of GDDR5... That number seems very small; even if it is only 16 bits, you are talking about 2 GB of GDDR5 for only ~2.2 watts, or 4.3 watts at 32 bits.
 
So, tech guys, now that the PS4 specs are out and based on the leaked MS specs...

For Wii U 3rd party analysis we could drop the PS4 out of the discussion, as that system seems like it will be the top performer technically unless MS drops a surprise.

So how does Wii U 3rd party support look taking this into account?

Note: let's live in a fantasy world where all 3rd party games are ported over to Wii U for the sake of discussion.

Edit: yes, a bit off topic, my mistake. nvm
 

z0m3le

Banned
So, tech guys, now that the PS4 specs are out and based on the leaked MS specs...

For Wii U 3rd party analysis we could drop the PS4 out of the discussion, as that system seems like it will be the top performer technically unless MS drops a surprise.

So how does Wii U 3rd party support look taking this into account?

Note: let's live in a fantasy world where all 3rd party games are ported over to Wii U for the sake of discussion.

http://www.youtube.com/watch?v=VjdXAbQ11Yo I think this is what you are looking for; there isn't a real example I can give you, so I'll just show you the worst-case scenario. Remember that PS2 and Xbox were separated by a ton of graphical effects that the Xbox could pull off while the PS2 could not. This isn't the case with the next-gen consoles; they will all more or less do everything the others do, just to varying degrees. With that disclaimer, enjoy. PS: the graphical comparisons start around the 2 minute mark.
 

ozfunghi

Member
Concerning 3rd party multiplat games, I'll leave this here under spoiler tags, because it is going off topic in this "tech thread", yet it's probably more relevant to such games than the actual hardware comparison.

People are having a laugh at Iwata's expense in the old bumped thread where he has his doubts about tech advancements in PS4/Durango. I completely agree with him.

Developers and publishers need to think about what has happened over the past 7 years. Studios closing, thousands losing their jobs, losses all around. Devs are steering clear of any form of risk taking and are taking the tried and true approach that worked in the past. Having one big-budget game fail could be "game over". The only studios willing to take a risk are smaller/indie studios, which are a lot less likely to be pushing hardware on a tech level to begin with. The big-budget games, on the other hand, have everything to gain from getting on as many platforms as possible. I think a lot will depend on the common sense devs and publishers show, and on whether or not PS4 and Durango are runaway hits. If the latter turns out not to be the case, I bet Wii U multiplat games will not only be more likely, but also more likely to get decent development attention.
 

z0m3le

Banned
Maybe this can tie all of that talk together: Wii U's DDR3 is supposed to eat about 2-3 watts of power for its 4x 4Gbit, 128-bit design clocked at 800 MHz for 2 GBytes. The discussion above lends itself to the question: why didn't Wii U just use GDDR5 then? Because the power consumption of graphics-class DDR3 (GDDR5), with more than double the clock rate and double the memory bandwidth, doesn't in fact draw only 8 watts of power but over twice that, since GDDR5 is based on DDR3 memory.

In fact, since we know that DDR3 would be around 2-3 watts for 4 chips at 16 bits, we can assume it would be 4-6 watts at 32 bits with the same 800 MHz clock, so we can see why Nintendo didn't go with GDDR5 in their low-power design; it simply would have been far too power hungry. Thanks to lostinblue and the PDF on this page, we can see that 32-bit GDDR5 chips consume ~8.7 watts of power. Maybe this can help us close the book on our discussion and move along.
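
For what it's worth, a generic peak-bandwidth helper applied to the figures quoted in this post (treat the 128-bit/800 MHz DDR3 numbers as this post's assumptions; with four x16 chips on a 64-bit bus the figure halves):

```python
# Peak bandwidth = bus width (bytes) x effective transfer rate (MT/s).
# DDR3 at an 800 MHz real clock transfers 1600 MT/s.
def peak_bandwidth_gb_s(bus_bits: int, mt_per_s: float) -> float:
    return bus_bits / 8 * mt_per_s / 1000

print(peak_bandwidth_gb_s(128, 1600))   # 25.6 GB/s with the 128-bit figure assumed above
print(peak_bandwidth_gb_s(64, 1600))    # 12.8 GB/s if the four x16 chips sit on a 64-bit bus
```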
 
That isn't what the PDF is saying; it is likely saying that a 256-bit bus is achievable and that 2 Gbit uses 8.7 watts, which means every 256 MB of 32-bit GDDR5 consumes 8.7 watts. Why would they give you the power consumption of eight 2 Gbit chips? That is far less useful, because it only gives you an energy consumption guide for 2 GBytes of GDDR5.

Of course a 256-bit bus is achievable. You can do whatever you want with the modules.
As for the reason why they give the power consumption of eight 2 Gbit chips: because they want to show a real-world scenario. For graphics cards, 8 chips is a usual case.

Now if you still have doubts, just look at the following pages of the document.
On page 5 they list power consumption for a 128-bit bus. It is of course exactly half the figure from the previous page, because it's only 4 modules instead of 8.
Then on page 6 it's even more apparent: the graph shows a power consumption of ~8 watts on the Y-axis (btw. unlabeled, bad style) for GDDR5 with 128 GB/s bandwidth. This bandwidth is unachievable with a single module; it's the 8 modules from before again.
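
Purely as illustrative arithmetic (not figures from the PDF), the two readings debated here scale very differently to the sixteen-chip, 8 GB case, ignoring that 4 Gbit parts and different clocks would change the per-chip numbers:

```python
# The two readings of the Samsung figure debated above, scaled to sixteen chips
# (the 8 GB / 16-chip case discussed for PS4). Illustrative arithmetic only.
CHIPS = 16

per_chip_if_total_for_8 = 11.8 / 8   # this reading: 11.8 W covers eight 2 Gbit chips
per_chip_if_per_chip = 8.7           # other reading: 8.7 W per 2 Gbit chip

print(f"~{per_chip_if_total_for_8 * CHIPS:.0f} W if 11.8 W covers eight chips")   # ~24 W
print(f"~{per_chip_if_per_chip * CHIPS:.0f} W if 8.7 W is per chip")              # ~139 W
```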


I would like to point out, though, that what you are suggesting is that 256 MB of 16-bit GDDR5 consumes ~0.54 watts, thanks to the notebook part of that PDF, and if 4 Gbit chips were used, 0.54 watts could be achievable for 512 MB of GDDR5... That number seems very small; even if it is only 16 bits, you are talking about 2 GB of GDDR5 for only ~2.2 watts, or 4.3 watts at 32 bits.

Honestly, I'm not sure what you are talking about. 16 bit what?
 
http://www.youtube.com/watch?v=VjdXAbQ11Yo I think this is what you are looking for; there isn't a real example I can give you, so I'll just show you the worst-case scenario. Remember that PS2 and Xbox were separated by a ton of graphical effects that the Xbox could pull off while the PS2 could not. This isn't the case with the next-gen consoles; they will all more or less do everything the others do, just to varying degrees. With that disclaimer, enjoy. PS: the graphical comparisons start around the 2 minute mark.

The flaw with comparing 3D games from the previous generation (especially with GTA) is that their limitations in graphics rendering were a lot more obvious. We are now at the point in technology where current-gen vs next-gen is "good/great vs even better" rather than "rough vs not as rough and blocky".
 

z0m3le

Banned
The flaw with comparing 3D games from the previous generation (especially with GTA) is that their limitations in graphics rendering were a lot more obvious. We are now at the point in technology where current-gen vs next-gen is "good/great vs even better" rather than "rough vs not as rough and blocky".

I know; I figured showing him the absolute worst case I can think of was a good way to answer his question, and I gave him a disclaimer as well.

As for the GDDR5 128-bit case, couldn't this be achieved with 8 16-bit modules? Or does GDDR5 only come in 32 bits? That is what I was thinking from the PDF. If you are right though, then yes, GDDR5 competes with DDR3 in power consumption with twice the clock rate and a wider bus. That seems completely against logic IMO.
 

Effect

Member
Need for Speed: Most Wanted U. Is there anything that can be taken from the fact that you can change from day to night instantly in the game (with the PC textures and improved lighting system) and there appears to be nothing wrong graphically during the change?
 

Meelow

Banned
From what we know from the leaked/rumored 720 specs, is it possible that Nintendo is hoping the Wii U will fall back on 720 multiplats?
 

z0m3le

Banned
From what we know from the leaked/rumored 720 specs, is it possible that Nintendo is hoping the Wii U will fall back on 720 multiplats?

PC ports make more sense; the PC low settings should be below Wii U, so Wii U will fit into that medium/low spec range. Mostly, however, Nintendo seems to be going after exclusives with more intensity than multiplats: using their resources for collaborations with developers, doing crossovers, and paying to bring out cancelled games from other studios. That seems to be their focus more than making sure GTAV is coming.
 

joesiv

Member
Need for Speed: Most Wanted U. Is there anything that can be taken from the fact that you can change from day to night instantly in the game (with the PC textures and improved lighting system) and there appears to be nothing wrong graphically during the change?
No, not really; most engines these days can change from day to night at will. Since the other consoles have real-time lighting, I would say it doesn't tell us much. For the most part, gone are the days of baked-in lighting.
 