
WiiU technical discussion (serious discussions welcome)

Honest question - Where do we stand on the old idea that the Watson twitter leak was actually accurate and hushed by an NDA about a Power7 variant? I know that the Wii U doom and gloomers have already ruled this out for us, but where do the people who know what they are seeing and talking about stand on this old piece of leaked info from IBM? I know Thraktor had brought this up in the past, but Van Owen derailed his thoughts fairly quickly....
The procedures used to replace the L2 with eDRAM may be based on Watson/POWER7, but it appears that the claim that the processor is "based on the technology used for Watson" is to be taken very loosely.
 

Thraktor

Member
Just found something regarding the power consumption discussion:

According to Chipworks, Latte is manufactured using an "advanced 40nm TSMC process". So I checked TSMC's website, and it turns out there's a process called 40LPG. 40LPG is a "new" process in the sense that while it was scheduled for 2009, it seemingly didn't become available until 2011. It combines two different processes, namely 40LP (low power) and 40G (high performance), on a single die. Essentially, some blocks of the chip are 40G, others are 40LP, depending on their characteristics and requirements, leading to lower power consumption and potentially higher density. This process was never used in the PC space as far as I know, only in certain SoCs like Tegra 3.

Interesting. I'd thought something like that might be the case, and there are also the general improvements you get with a mature process, allowing you to hit clocks more reliably at lower voltages than you could earlier in the process's life.

Honest question - Where do we stand on the old idea that the Watson twitter leak was actually accurate and hushed by an NDA about a Power7 variant? I know that the Wii U doom and gloomers have already ruled this out for us, but where do the people who know what they are seeing and talking about stand on this old piece of leaked info from IBM? I know Thraktor had brought this up in the past, but Van Owen derailed his thoughts fairly quickly....

A Power7 core is almost certainly larger than the entire Espresso die, let alone any of the individual cores you can see in the Espresso die thread. The quote is likely referring to technologies like eDRAM cache, which was first used in the Power7.
 

tipoo

Banned
Honest question - Where do we stand on the old idea that the Watson twitter leak was actually accurate and hushed by an NDA about a Power7 variant? I know that the Wii U doom and gloomers have already ruled this out for us, but where do the people who know what they are seeing and talking about stand on this old piece of leaked info from IBM? I know Thraktor had brought this up in the past, but Van Owen derailed his thoughts fairly quickly....


What Thraktor said: a single Power7 core is huge even with all the cache stripped out. We know from a hacker (Marcan42) that the cores are all compatible with the PowerPC 750 processors (which is not the Power7 line; it's what was commonly called the PowerPC G3 in Macs), which the Gekko and Broadway are also based on, so it's likely a further tweaked version of that core. The Power7 association, which spread like wildfire, came from a statement saying it was "based on" similar technologies, and the eDRAM alone would explain that, plus maybe some other core tweaks that we don't know of. But it's certainly not a Power7 core itself.

IBM's official Watson account did later apologize and say it was mistaken, and that it was simply "Power based" and not Power7 based. Power based is pretty broad; that's like saying x86-64 based.
 

The_Lump

Banned
What happened to that interview with the TSMC guy (I think wsippel dug it up) where he was talking about 'stacking'? Does that make any more sense now given the new info/insights?


Edit: nvm. Here it is and it's not relevant at all. Not sure what I was thinking of really!

http://semimd.com/blog/tag/osat/
 

tipoo

Banned
What are the advantages of using eDRAM for cache over the type of RAM that was used for the L2 cache in Gekko/Broadway?

It's 3x as dense, so 3MB of this takes up the area of 1MB of SRAM. It also consumes less power, since it needs fewer transistors to do the same thing.
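To put rough numbers on that (purely back-of-envelope: the textbook comparison is a 6-transistor SRAM cell versus a 1-transistor-plus-1-capacitor eDRAM cell; actual area ratios are process-dependent, which is why ~3x rather than 6x is the usual figure):

```c
#include <stdio.h>

int main(void) {
    /* Rough estimate only: a classic SRAM cell uses 6 transistors per bit,
       while an eDRAM cell uses 1 transistor plus 1 capacitor. */
    const double cache_bits = 3.0 * 1024 * 1024 * 8; /* a 3MB L2, in bits */
    printf("SRAM transistors:  ~%.0f million\n", cache_bits * 6 / 1e6);
    printf("eDRAM transistors: ~%.0f million\n", cache_bits * 1 / 1e6);
    return 0;
}
```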
 
Area and power draw.
Ah, I see. So the actual performance of the cache is roughly the same, correct?

Back when we were having technical discussions about the Wii, I recall someone saying that Gekko/Broadway wouldn't benefit much from having a cache bigger than 512kB, but we now have an Espresso core with a cache 4x bigger than that. Was that previous statement very flawed?

Edit: thanks for the additional detail, Tipoo.
 
Ah, I see. So the actual performance of the cache is roughly the same, correct?

Back when we were having technical discussions about the Wii, I recall someone saying that Gekko/Broadway wouldn't benefit much from having a cache bigger than 512MB, but we now have an Espresso core with a cache 4x bigger than that. Was that previous statement very flawed?

Edit: thanks for the additional detail, Tipoo.

I think you meant KB, not MB.
 
Just found something regarding the power consumption discussion:

According to Chipworks, Latte is manufactured using an "advanced 40nm TSMC process". So I checked TSMC's website, and it turns out there's a process called 40LPG. 40LPG is a "new" process in the sense that while it was scheduled for 2009, it seemingly didn't become available until 2011. It combines two different processes, namely 40LP (low power) and 40G (high performance), on a single die. Essentially, some blocks of the chip are 40G, others are 40LP, depending on their characteristics and requirements, leading to lower power consumption and potentially higher density. This process was never used in the PC space as far as I know, only in certain SoCs like Tegra 3.
This feels like the missing piece to reconcile the expected 320 SPUs and the observed low power usage.
 
It would be two to three times as fast on paper but less efficient and very unbalanced.

Actually I don't think that would be the case with the latter points.

As far as I'm concerned you can look at the Wii U two ways:

If you ignore the release date/360/PS3 (i.e. Nintendo fans coming from the Wii), it's a very, very impressive GPU for the power it uses.

If you take into account the above (i.e. multiplat owners), the power the machine has is rather pathetic and outdated.

Pathetic I can agree with, but outdated, no, since low-power GPUs are still made and will continue to be made.

Just found something regarding the power consumption discussion:

According to Chipworks, Latte is manufactured using an "advanced 40nm TSMC process". So I checked TSMC's website, and it turns out there's a process called 40LPG. 40LPG is a "new" process in the sense that while it was scheduled for 2009, it seemingly didn't become available until 2011. It combines two different processes, namely 40LP (low power) and 40G (high performance), on a single die. Essentially, some blocks of the chip are 40G, others are 40LP, depending on their characteristics and requirements, leading to lower power consumption and potentially higher density. This process was never used in the PC space as far as I know, only in certain SoCs like Tegra 3.

Good info. I'd guess the SIMDs are made on the 40G process.


It's ok. If Nintendo makes another console in a similar fashion to Wii U in the future, then that means the next one would have like a 2TFLOP GPU. :p
 

tipoo

Banned
Ah, I see. So the actual performance of the cache is roughly the same, correct?

Back when we were having technical discussions about the Wii, I recall someone saying that Gekko/Broadway wouldn't benefit much from having a cache bigger than 512kB, but we now have an Espresso core with a cache 4x bigger than that. Was that previous statement very flawed?

Edit: thanks for the additional detail, Tipoo.

SRAM doesn't have to wait for its banks to refresh like eDRAM does, so I believe there is a longer latency associated with eDRAM. There was a debate on that a bunch of pages back, though, and it appears that at around 4MB the latencies are pretty close, and above that eDRAM actually comes out ahead. At 3MB, the Wii U CPU's cache would probably be slower than SRAM, but holding 3x more data than they could have fit without a bigger die probably offsets that by far.
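To illustrate why a bigger-but-slower cache can still come out ahead, here's a toy average-memory-access-time calculation. All the latencies and hit rates below are made-up illustrative numbers, not Espresso measurements:

```c
#include <stdio.h>

/* AMAT = hit latency + miss rate * miss penalty */
static double amat(double hit_cycles, double miss_rate, double miss_penalty) {
    return hit_cycles + miss_rate * miss_penalty;
}

int main(void) {
    const double mem_penalty = 200.0; /* assumed cycles to main RAM */
    /* hypothetical 1MB SRAM L2: faster hits, but more misses */
    printf("1MB SRAM:  %.1f cycles avg\n", amat(15.0, 0.05, mem_penalty));
    /* hypothetical 3MB eDRAM L2: slower hits, but fewer misses */
    printf("3MB eDRAM: %.1f cycles avg\n", amat(20.0, 0.02, mem_penalty));
    return 0;
}
```

With those (invented) numbers, the bigger eDRAM cache averages 24 cycles against the SRAM's 25 despite the slower hit time, because every avoided trip to main RAM is so expensive.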

I never heard about the 512KB thing for Gekko/Broadway, but bearing in mind the higher clock speed and extra cores, more cache makes sense. Each core can get more done per second than either of its predecessors, obviously, so they would benefit from larger caches.

Some modern CPUs have 8MB L2 and 8MB L3 for a total of 16MB just on one die, so cache obviously continues to help.
 

Lizardus

Member
This doesn't pertain to hardware, but is it possible for Nintendo to tie all your purchases to an account instead of the hardware? Would this require a simple update or a hardware overhaul?
 

FyreWulff

Member
This doesn't pertain to hardware, but is it possible for Nintendo to tie all your purchases to an account instead of the hardware? Would this require a simple update or a hardware overhaul?

They're already tied to an account. They just have to add the ability to transfer the account to another machine.
 

Lizardus

Member
They're already tied to an account. They just have to add the ability to transfer the account to another machine.

So it's just a matter of if and when Nintendo decides to implement it? That's kind of a relief, because I was never planning on buying digitally.
 

FyreWulff

Member
So it's just a matter of if and when Nintendo decides to implement it? That's kind of a relief, because I was never planning on buying digitally.

They actually did say it was so people could buy content and take the account forward to future Nintendo consoles.
 
SRAM doesn't have to wait for its banks to refresh like eDRAM does, so I believe there is a longer latency associated with eDRAM. There was a debate on that a bunch of pages back, though, and it appears that at around 4MB the latencies are pretty close, and above that eDRAM actually comes out ahead. At 3MB, the Wii U CPU's cache would probably be slower than SRAM, but holding 3x more data than they could have fit without a bigger die probably offsets that by far.

I never heard about the 512KB thing for Gekko/Broadway, but bearing in mind the higher clock speed and extra cores, more cache makes sense. Each core can get more done per second than either of its predecessors, obviously, so they would benefit from larger caches.

Some modern CPUs have 8MB L2 and 8MB L3 for a total of 16MB just on one die, so cache obviously continues to help.
I understand. Thanks for the thorough response.
No idea. I would imagine that LP blocks can be denser, as the leakage is lower. Considering Latte's eDRAM density seems very high, that part of the chip could be an LP block, for example, whereas the shader clusters would obviously be G blocks. If that process is used at all, that is. I have no idea if it's even possible to tell by looking at die shots.
That is some very interesting data. Perhaps someone can contact Chipworks to ask why they described Latte as being on an "advanced" 40nm process.
 
Hi Tech guys, good job.

Can this scenario work:

Wii U: like a mid-range PC (all medium options, plus maybe some nice surprise like the lighting)
PC and next gen (high or ultra high): everything maxed out, including very high textures, better particle effects and lighting.

I am curious, as such a scenario would place the Wii U well for porting some types of games. I can see a game having issues being ported later on, but for at least the next 2 years this could be the case.
 

joesiv

Member
Hi Tech guys, good job.

Can this scenario work:

Wii U: like a mid-range PC (all medium options, plus maybe some nice surprise like the lighting)
PC and next gen (high or ultra high): everything maxed out, including very high textures, better particle effects and lighting.

I am curious, as such a scenario would place the Wii U well for porting some types of games. I can see a game having issues being ported later on, but for at least the next 2 years this could be the case.
Sure, if there was a PC version this could definitely be true; it depends on the game, and on what the different profiles (low/med/high/ultra) are set up as. In some games it may be low, in others medium, in others maybe high, who knows.

However, I think it's going to be mainly about the game engine. If the game engine is really pushing the CPU, for example, changing graphics settings won't help much; instead, things like NPC counts, physics, and max player counts may need to be reduced as well.

But as you suggest, in 2, 3, 4 years' time the low-end PCs will be like the mid-range PCs of today... which means the Wii U will be dropped down another notch or two.
 

AzaK

Member
Hi Tech guys, good job.

Can this scenario work:

Wii U: like a mid-range PC (all medium options, plus maybe some nice surprise like the lighting)
PC and next gen (high or ultra high): everything maxed out, including very high textures, better particle effects and lighting.

I am curious, as such a scenario would place the Wii U well for porting some types of games. I can see a game having issues being ported later on, but for at least the next 2 years this could be the case.

I don't think you're far off. PC games support a wide variety of configurations, so I imagine there will be lower-fidelity Wii U versions whilst Orbis/Durango get the super sexy ones.

One thing I'm interested in is seeing how the cycle length goes. This one will be 7 and 8 years for Sony and MS respectively, and with high-end consoles again I can see them going long once more. What I'd like to see is Nintendo go for a 5-year gen. This would put them in the 4th year of Orbis/Durango, or close to halfway into their cycle. That would mean we'd likely get a machine a bit more powerful than them, so port-friendly, but without their successors on the horizon to hold development back.
 

A More Normal Bird

Unconfirmed Member
Is it technically feasible to port a game built around 8GB of GDDR5 RAM down to Wii U with its paltry 1GB of slow RAM?

As mentioned above, it depends on what you're using that RAM for. It's possible to take games like Skyrim and GTA4 on PC and use mods to make them choke on systems with way more system and video memory than the regular game requires, simply by increasing draw distance, texture resolution and rendering effects.
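For a sense of scale, here's the arithmetic on why texture mods alone balloon memory use (assuming uncompressed RGBA8 textures with a full mip chain, which adds roughly a third on top of the base level; real games use compressed formats, but the quadratic growth with resolution is the same):

```c
#include <stdio.h>

/* Uncompressed RGBA8 texture: width * height * 4 bytes, plus ~1/3 extra
   for the mipmap chain. Compression (e.g. DXT) shrinks this 4-8x, but
   doubling the resolution still quadruples the footprint. */
static double texture_mb(int w, int h) {
    return (double)w * h * 4 * (4.0 / 3.0) / (1024.0 * 1024.0);
}

int main(void) {
    printf("1024x1024: %5.1f MB\n", texture_mb(1024, 1024)); /*  ~5.3 MB */
    printf("2048x2048: %5.1f MB\n", texture_mb(2048, 2048)); /* ~21.3 MB */
    printf("4096x4096: %5.1f MB\n", texture_mb(4096, 4096)); /* ~85.3 MB */
    return 0;
}
```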
 

Van Owen

Banned
It will be interesting to see if the next Elder Scrolls game comes to Wii U. The PS3's memory caused a lot of problems.

And Reggie mentioned he would have games like Skyrim now that Wii U was powerful...
 

AzaK

Member
It will be interesting to see if the next Elder Scrolls game comes to Wii U. The PS3's memory caused a lot of problems.

And Reggie mentioned he would have games like Skyrim now that Wii U was powerful...

8GB was a big surprise to me. We were all expecting Durango at 8 and Orbis at 4, with about 5 and 3.5 for games respectively. Now that both of the others have 8, that's going to make it a little harder for Wii U, I think, as the baseline for games will possibly be 5 now. Wii U will maybe get 1.5 for games. This doesn't make it impossible, just a little more restrictive.

Hopefully the next gen engines scale down enough.
 

ozfunghi

Member
It will be interesting to see if the next Elder Scrolls game comes to Wii U. The PS3's memory caused a lot of problems.

And Reggie mentioned he would have games like Skyrim now that Wii U was powerful...

When did this happen??

rimshot.wav

What I'd like to see is multiplat games looking better at a lower resolution on the GamePad. Enable some extra effects, maybe low-res textures and lower-poly models for the low-res display, but with better draw distance etc. Obviously I'm thinking about Watch_Dogs etc.

8GB was a big surprise to me. We were all expecting Durango at 8 and Orbis at 4, with about 5 and 3.5 for games respectively. Now that both of the others have 8, that's going to make it a little harder for Wii U, I think, as the baseline for games will possibly be 5 now. Wii U will maybe get 1.5 for games. This doesn't make it impossible, just a little more restrictive.

Hopefully the next gen engines scale down enough.

They need to get that in order ASAP. In time for W_Dogs to use it :)
 
A

A More Normal Bird

Unconfirmed Member
It will be interesting to see if the next Elder Scrolls game comes to Wii U. The PS3's memory caused a lot of problems.

And Reggie mentioned he would have games like Skyrim now that Wii U was powerful...

If the system starts to sell really well then they might consider it. 256MB was just too small for a game like Skyrim, but I don't see the sort of information related to save bloating etc. increasing drastically for Bethesda's next-gen games. Visual fidelity is probably going to be a larger drain on RAM than underlying game functions. A more likely bottleneck would be the CPU, imo.
 

wsippel

Banned
Is it technically feasible to port a game built around 8GB of GDDR5 RAM down to Wii U with its paltry 1GB of slow RAM?
Depends on the game. Also, don't compare total RAM to usable RAM. Wii U has 2GB RAM. We don't know how much of that is available to games now, we only know how much was available a few months ago. And we have no idea how much Sony has to lock down for all the fancy background features.


8GB was a big surprise to me. We were all expecting Durango at 8 and Orbis at 4, with about 5 and 3.5 for games respectively. Now that both of the others have 8, that's going to make it a little harder for Wii U, I think, as the baseline for games will possibly be 5 now. Wii U will maybe get 1.5 for games. This doesn't make it impossible, just a little more restrictive.

Hopefully the next gen engines scale down enough.
With sufficient optimization and depending on what other background features Nintendo plans to introduce, I'm pretty sure they could get the OS memory footprint down to 128MB or less, so up to 1.9GB for games.
 

ozfunghi

Member
Depends on the game. Also, don't compare total RAM to usable RAM. Wii U has 2GB RAM. We don't know how much of that is available to games now, we only know how much was available a few months ago. And we have no idea how much Sony has to lock down for all the fancy background features.



With sufficient optimization and depending on what other background features Nintendo plans to introduce, I'm pretty sure they could get the OS memory footprint down to 128MB or less, so up to 1.9GB for games.

Let's hope so, but I'm not holding my breath. I'd be ecstatic with a 1792MB + 256MB divide. Heck, I'd be happy with an announcement that they're even considering dropping 512MB.
 
How efficient is it to stream data from the disc with these consoles? Wii U disc speed is ~5x Blu-ray, which is 2.5x faster than the PS3 disc drive, and only a little bit slower than the 6x speed of the PS4 and Durango.
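For reference, the arithmetic behind those multipliers, assuming the standard 1x Blu-ray rate of ~4.5 MB/s (the PS4/Durango drive speeds were still rumored figures at this point):

```c
#include <stdio.h>

int main(void) {
    const double bd_1x = 4.5; /* MB/s, the standard 1x Blu-ray rate */
    printf("PS3 (2x BD):         %4.1f MB/s\n", 2.0 * bd_1x); /*  9.0 */
    printf("Wii U (~5x BD):      %4.1f MB/s\n", 5.0 * bd_1x); /* 22.5 */
    printf("PS4/Durango (6x BD): %4.1f MB/s\n", 6.0 * bd_1x); /* 27.0 */
    return 0;
}
```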
 

AzaK

Member
Depends on the game. Also, don't compare total RAM to usable RAM. Wii U has 2GB RAM. We don't know how much of that is available to games now, we only know how much was available a few months ago. And we have no idea how much Sony has to lock down for all the fancy background features.



With sufficient optimization and depending on what other background features Nintendo plans to introduce, I'm pretty sure they could get the OS memory footprint down to 128MB or less, so up to 1.9GB for games.

Yeah, 128 isn't bad for an embedded system, but I guess it depends on how much they want to leave for future apps. I imagine the browser is the biggest memory hog at present.
 

Van Owen

Banned
Watching the PS4 conference really made me wish Nintendo would let someone from the West design their console. The PC-like hardware is going to make development super easy, plus it's much more capable.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Watching the PS4 conference really made me wish Nintendo would let someone from the West design their console. The PC-like hardware is going to make development super easy, plus it's much more capable.
The whole point of consoles is that they don't have to be PC-like to achieve similar results. PC-like = PC BOM.

While on the subject of memory consumption, here's something which I've been considering for a long time but never got to post about it.

Even with the current 'hard' split of main mem in the Wii U, there's a strict 'task-switching' paradigm which can be exploited here. Basically, the idea is the following:

Games have 1GB of 'persistent' RAM (content guaranteed to persist until the game exits), but they can also have an unspecified amount of 'transient' RAM allocated in the OS-reserved space. The contract is that everything stored in that memory is subject to eviction at the user's press of the home button. What does that immediately allow us to place there? For one, all kinds of 'per-frame' transient data - render targets that get re-generated on a frequent basis (yes, that does include all front framebuffers), or generally things that can *or* do get re-generated within the timespan of a frame.

Such a paradigm would allow games to behave just as seamlessly in the game -> home button -> game scenario as in the currently alleged 'hard split' case.
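To make that contract concrete, here's a hypothetical sketch of what it could look like from the game's side. None of these names are real Wii U SDK calls; they're stand-ins to illustrate the persistent/transient split (stubbed with malloc so the sketch compiles):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical API -- NOT actual Wii U SDK calls. transient_alloc() would
   hand out memory from the OS-reserved space, and transient_was_evicted()
   would report whether the OS reclaimed it during a Home-button excursion. */
static void *transient_alloc(size_t size) { return malloc(size); }
static bool transient_was_evicted(const void *block) { (void)block; return false; }

typedef struct {
    void  *pixels; /* a render target that is fully rewritten every frame */
    size_t size;
} RenderTarget;

/* Called at the top of each frame: if the user pressed the Home button and
   the OS evicted our transient block, just reallocate and regenerate it.
   Nothing in the game ever depends on this memory surviving a frame. */
static void ensure_render_target(RenderTarget *rt) {
    if (rt->pixels == NULL || transient_was_evicted(rt->pixels)) {
        rt->pixels = transient_alloc(rt->size);
        /* ...redraw into rt->pixels; it gets overwritten this frame anyway,
           so an eviction costs at most one regeneration. */
    }
}
```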
 
Watching the PS4 conference really made me wish Nintendo would let someone from the West design their console. The PC-like hardware is going to make development super easy, plus it's much more capable.
I didn't see any console design there; no console to be seen!

Seriously though, you have two extremes: being too low-end, and being overkill. 8 GB of GDDR5 is overkill; hell, it makes me sad thinking 1/2 GB of that will be allocated for OS duties, and perhaps another extra chunk will go to caching Blu-ray data, being fed at a whopping 27 MB/s.

Microsoft seems to have the more balanced machine this time around, somewhere in between, and cheaper to manufacture and turn a profit on, for sure.

8 GB of GDDR5 should waste like 140 W alone.
Yeah, 128 isn't bad for an embedded system, but I guess it depends on how much they want to leave for future apps. I imagine the browser is the biggest memory hog at present.
tbh, I wish they would offload that to a cloud service.

Kinda like how Opera Mini works: it's fed data by a server that pre-caches and compresses the pages, so there's no real browser engine there (and as a bonus it makes transfers smaller too); they'd retain the whole functionality and liberate lots of RAM that way.
 

Log4Girlz

Member
I didn't see any console design there; no console to be seen!

Seriously though, you have two extremes: being too low-end, and being overkill. 8 GB of GDDR5 is overkill; hell, it makes me sad thinking 1/2 GB of that will be allocated for OS duties, and perhaps another extra chunk will go to caching Blu-ray data, being fed at a whopping 27 MB/s.

Wii U has 1 GB out of 2 GB for OS functions. How does that make you feel?
 
8 GB of GDDR5 should waste like 140 W alone.

Certainly not. I'm not sure what the specifications are nowadays, but according to this, 2 GB of GDDR5 on a 256-bit bus should be around 9-12W (of course it depends on the exact voltage and clocks).
That's still a lot, of course, but it shouldn't be that problematic to cool, because it's distributed across the board and not all in one (hot) spot like with a CPU or GPU.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Certainly not. I'm not sure what the specifications are nowadays, but according to this, 2 GB of GDDR5 on a 256-bit bus should be around 9-12W (of course it depends on the exact voltage and clocks).
That's still a lot, of course, but it shouldn't be that problematic to cool, because it's distributed across the board and not all in one (hot) spot like with a CPU or GPU.
The fact that it's not concentrated over a small area means no large heatspreaders, but the heat disposal system still has to get rid of those 50W, lest ambient temp escalation ensue.
 
The fact that it's not concentrated over a small area means no large heatspreaders, but the heat disposal system still has to get rid of those 50W, lest ambient temp escalation ensue.

The upcoming physical teardown of the console and its contents should be most interesting, especially so that we can see the arrangement of those GDDR5 chips on the board.
 

AzaK

Member
The whole point of consoles is that they don't have to be PC-like to achieve similar results. PC-like = PC BOM.

While on the subject of memory consumption, here's something which I've been considering for a long time but never got to post about it.

Even with the current 'hard' split of main mem in the Wii U, there's a strict 'task-switching' paradigm which can be exploited there. Basically, the idea is the following:

Games have 1GB of 'persistent' RAM (content guaranteed to persist until the game exits), but they can also have an unspecified amount of 'transient' RAM allocated in the OS-reserved space. The contract is that everything stored in that memory is subject to eviction at the user's press of the home button. What does that immediately allow us to place there? For one, all kinds of 'per-frame' transient data - render targets that get re-generated on a frequent basis (yes, that does include our front/back framebuffer), or generally things that can *or* do get re-generated within the timespan of a frame.

Such a paradigm would allow games to behave just as seamlessly in the game -> home button -> game scenario as in the currently alleged 'rigid split' paradigm.
Interesting, blu, and so obvious now you've said it :) Basically a lot like the old DirectX, where certain resources could be pulled out from under you and you'd need to reallocate/reinitialise them.

Question though: wouldn't these typically generated-per-frame resources want to be in eDRAM? Given the speed of the Wii U's MEM2(?), I'd have thought the best course of action is to use it as read-only if you can.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Interesting, blu, and so obvious now you've said it :) Basically a lot like the old DirectX, where certain resources could be pulled out from under you and you'd need to reallocate/reinitialise them.

Question though: wouldn't these typically generated-per-frame resources want to be in eDRAM? Given the speed of the Wii U's MEM2(?), I'd have thought the best course of action is to use it as read-only if you can.
That's a reasonable remark, and I see what brought it - I shouldn't have said 'back buffer' in the example, only front. Let me correct that post.

So - yes, some render targets will never leave eDRAM (which BTW automatically makes them of no concern wrt main mem consumption), but others could be resolved to main RAM, and those could land directly in that transient mem. But such render targets are just low-hanging fruit. Other things could be delegated as well, including heap allocations that have the lifespan of a frame. Yet others are data structures which are long-lived, but could be re-generated within the duration of a frame.
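For the heap-allocations-with-a-frame-lifespan case, the classic tool is a linear (arena) allocator that is reset wholesale once per frame. A minimal generic-C sketch, not tied to any actual SDK:

```c
#include <stddef.h>
#include <stdint.h>

/* Linear allocator over a block of transient memory: each allocation is
   just a pointer bump, and the whole arena is reset once per frame, so
   everything living here has at most a one-frame lifespan -- exactly the
   eviction contract described above. */
typedef struct {
    uint8_t *base;     /* start of the transient block */
    size_t   capacity; /* size of the block */
    size_t   offset;   /* current bump position */
} FrameArena;

static void *arena_alloc(FrameArena *a, size_t size) {
    size = (size + 15) & ~(size_t)15; /* keep allocations 16-byte aligned */
    if (a->offset + size > a->capacity)
        return NULL; /* out of transient memory this frame */
    void *p = a->base + a->offset;
    a->offset += size;
    return p;
}

/* Call once per frame; "freeing" everything is a single store. */
static void arena_reset(FrameArena *a) {
    a->offset = 0;
}
```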
 

Gahiggidy

My aunt & uncle run a Mom & Pop store, "The Gamecube Hut", and sold 80k WiiU within minutes of opening.
Seeing last night's demos raises expectations for Nintendo to produce some impressive visuals for the Wii U's 2nd wave. I know there is a gap between them and Sony, but it shouldn't be that big. As I understand it, Wii U's GPU has still got the same ballpark architecture, just less powerful. So I'm expecting results like what we saw last night, just limited in scope.

* stares at Retro Studios *
 
Certainly not. I'm not sure what the specifications are nowadays, but according to this, 2 GB of GDDR5 on a 256-bit bus should be around 9-12W (of course it depends on the exact voltage and clocks).
That's still a lot, of course, but it shouldn't be that problematic to cool, because it's distributed across the board and not all in one (hot) spot like with a CPU or GPU.
You're doing it wrong.

From your source... 2G means a 2-gigabit chip; there's no such thing as a 2 GB GDDR5 chip. So that's 8.7/11.8W per 256 MB chip depending on the voltage; add it up until you get to 8192 MB (or 64 gigabits) for 8 GB.

They'll probably be using 4G chips (512 MB) at 1.35V, so around 8.7W per chip; that's still 16 chips for 8 GB.

8.7 * 16 = 139.2W

Look at GPU TDPs; they're severely impacted by the amount of GDDR5.
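Spelling that estimate out (using the per-chip figures quoted above; actual draw of course varies with voltage and clocks):

```c
#include <stdio.h>

int main(void) {
    /* Figures from the discussion above: a 4-gigabit (512 MB) GDDR5 chip
       at 1.35V estimated at ~8.7W. Rough datasheet-era numbers, not
       measured PS4 values. */
    const double watts_per_chip = 8.7;
    const int chip_mb  = 512;
    const int total_mb = 8 * 1024;
    const int chips    = total_mb / chip_mb; /* 16 chips */
    printf("%d chips x %.1f W = %.1f W\n",
           chips, watts_per_chip, chips * watts_per_chip); /* 139.2 W */
    return 0;
}
```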
 
You're doing it wrong.

From your source... 2G means a 2-gigabit chip; there's no such thing as a 2 GB GDDR5 chip. So that's 8.7/11.8W per 256 MB chip depending on the voltage; add it up until you get to 8192 MB (or 64 gigabits) for 8 GB.

They'll probably be using 4G chips (512 MB) at 1.35V, so around 8.7W per chip; that's still 16 chips for 8 GB.

8.7 * 16 = 139.2W

Look at GPU TDPs; they're severely impacted by the amount of GDDR5.

You're right of course. I should've read that more carefully, sorry.

Still, that seems almost ridiculously high to me, and would make the PS4 250+ W, much more than I'd have expected. Are you sure that GDDR5 chips aren't quite a bit less power-hungry today on more modern production processes? (Sadly, I can't find anything on that on the net.)
 

prag16

Banned
You're doing it wrong.

From your source... 2G means a 2-gigabit chip; there's no such thing as a 2 GB GDDR5 chip. So that's 8.7/11.8W per 256 MB chip depending on the voltage; add it up until you get to 8192 MB (or 64 gigabits) for 8 GB.

They'll probably be using 4G chips (512 MB) at 1.35V, so around 8.7W per chip; that's still 16 chips for 8 GB.

8.7 * 16 = 139.2W

Look at GPU TDPs; they're severely impacted by the amount of GDDR5.

Interesting; I wasn't aware they had 512MB chips. I thought it was all 256 still at this point (at any sane price). I was envisioning 32 of those things on the board, lol. 16 isn't as bad.
 

z0m3le

Banned
Interesting; I wasn't aware they had 512MB chips. I thought it was all 256 still at this point (at any sane price). I was envisioning 32 of those things on the board, lol. 16 isn't as bad.

It's still going to be much more expensive for a motherboard to take on the complexity of 16 chips. I don't see how this thing will even be the same size as the launch PS3, and that's with taking the PSU out of the box.

Although 8GB of GDDR5 did win me over; if I get another console beyond the Wii U based on tech, it will be the PS4. This is a bit off topic, but the PS4 should actually exceed the XB3 graphically, thanks to the same architecture and a bit more grunt on the GPU dedicated to gaming (so frame rates should be more solid, and it might even produce a larger draw distance and other small effects that might look nice). The real kicker, though, is the extra power that is dedicated to GPGPU operations, so stuff like better physics, realistic wind effects, much more realistic destruction, and smoke/dust and other particle effects could be added on top of the XB3's game when porting over. The PS4 obviously won't have as mature a development environment, since the XB3 will use a modified DX11 while the PS4 will have its custom API redesigned to accommodate the new architecture, but that won't matter too much, since deep down the basic GPUs are going to do the same thing, and even the Wii U should be able to handle just about everything those GPUs are doing in a scaled-back version of the same games.

Sorry again for being a bit off topic, I just wanted to sort of tail end the discussion of the PS4 we were having here thanks to the event yesterday.
 
So... nobody answered my question earlier :)

How efficient is it to stream data from the disc with these consoles? Wii U disc speed is ~5x Blu-ray, which is 2.5x faster than the PS3 disc drive, and only a little bit slower than the 6x speed of the PS4 and Durango.

Seeing last night's demos raises expectations for Nintendo to produce some impressive visuals for the Wii U's 2nd wave. I know there is a gap between them and Sony, but it shouldn't be that big. As I understand it, Wii U's GPU has still got the same ballpark architecture, just less powerful. So I'm expecting results like what we saw last night, just limited in scope.

* stares at Retro Studios *

I'm curious whether we may see more impressive games due to companies attempting to downport PS4/Durango games to the Wii U. Now that they are working with the other next-gen machines, they'll have higher ambitions, so they may try to tap more of the system's extra RAM and modifications.


It's still going to be much more expensive for a motherboard to take on the complexity of 16 chips. I don't see how this thing will even be the same size as the launch PS3, and that's with taking the PSU out of the box.

Although 8GB of GDDR5 did win me over; if I get another console beyond the Wii U based on tech, it will be the PS4. This is a bit off topic, but the PS4 should actually exceed the XB3 graphically, thanks to the same architecture and a bit more grunt on the GPU dedicated to gaming (so frame rates should be more solid, and it might even produce a larger draw distance and other small effects that might look nice). The real kicker, though, is the extra power that is dedicated to GPGPU operations, so stuff like better physics, realistic wind effects, much more realistic destruction, and smoke/dust and other particle effects could be added on top of the XB3's game when porting over. The PS4 obviously won't have as mature a development environment, since the XB3 will use a modified DX11 while the PS4 will have its custom API redesigned to accommodate the new architecture, but that won't matter too much, since deep down the basic GPUs are going to do the same thing, and even the Wii U should be able to handle just about everything those GPUs are doing in a scaled-back version of the same games.

Sorry again for being a bit off topic, I just wanted to sort of tail end the discussion of the PS4 we were having here thanks to the event yesterday.
I think Sony's decision with the RAM is a bit excessive, as it has increased the size, the cooling required, and the BOM of the system to possibly another level than what Durango is going for, and a lot of companies may not take full advantage of it due to the prevalence of multi-platform games. This reminds me of how they increased the Vita's main RAM from 256MB to 512MB. Nice for games, but it resulted in them making weird sacrifices to keep the price down (like not having standard memory cards).
 