The thing I'd really like to know about the memory bandwidth is how important the issue really is. Were the 360 or PS3 ever bottlenecked by RAM bandwidth? I haven't done any low-level coding for either, but I can't remember hearing many complaints about it from the engine coders at my old job, or from other devs, apart from possibly framebuffer access on the PS3, which lacks eDRAM.
The 360 should theoretically be able to access about 373 MB each frame (at 60 fps). That's nearly three quarters of its 512 MB, every frame. Obviously some memory will be accessed many times per frame, while other parts won't be touched at all, but it still seems like quite a lot to me. Most of the data for nearly any game is textures, so let's say 50% of that traffic is texture reads. At 1280x720, that's about 200 bytes of data access per pixel, meaning you could fill the entire screen 50 times with unique, uncompressed, native-res 32-bit textures. On the Wii U it should be a bit more than half of that, so 25-30 times; still not something you'd want to do.
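Just to make the arithmetic explicit, here's a minimal sketch; the 22.4 GB/s figure for the 360's GDDR3 is the commonly quoted spec, while the 12.8 GB/s for the Wii U's DDR3 is my assumption:

```python
# Back-of-the-envelope: how many times per frame could you cover the screen
# with unique, uncompressed 32-bit texels, given the texture share of bandwidth?
X360_BANDWIDTH = 22.4e9   # bytes/s, commonly quoted GDDR3 figure
WIIU_BANDWIDTH = 12.8e9   # bytes/s, speculative DDR3 figure
FPS = 60
PIXELS = 1280 * 720
TEXTURE_SHARE = 0.5       # assume half of all traffic is texture reads
BYTES_PER_TEXEL = 4       # uncompressed 32-bit texels

def unique_fills(bandwidth: float) -> float:
    texture_bytes_per_frame = (bandwidth / FPS) * TEXTURE_SHARE
    return texture_bytes_per_frame / (PIXELS * BYTES_PER_TEXEL)

print(f"360:   ~{unique_fills(X360_BANDWIDTH):.0f} screen fills per frame")  # ~51
print(f"Wii U: ~{unique_fills(WIIU_BANDWIDTH):.0f} screen fills per frame")  # ~29
```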
I'm not saying it's not important. I honestly don't know how big of an issue it is, but I doubt most of the doomsayers do either, and I'm guessing some of the backlash from this news is a bit exaggerated. Lower latencies, bigger caches and eDRAM should be able to help offset the issue in many cases, though perhaps not all of them.
It seems the real problem is not so much whether a well-coded game can live with it, as it is that a lot of games this gen were really "creative", to say the least, in how they worked; compared to that, the Wii U is looking more and more like a fixed-function architecture of sorts.
I'll explain: this generation was all about streaming and moving data from here to there, working on it, re-transferring it, and dealing with the latency that came with that.
For example, MLAA on PS3 is all about sending the finished frame to the CPU (to the SPEs) so it can do some image processing on top of the finished result and send it back to the framebuffer. I don't know exactly how they handle it on the RAM side, but for all I know they could be transferring the frame from the GPU RAM bank into the CPU bank (and dealing with the fact that they're different kinds of RAM with different latencies, so they have to wait cycles for the communication to occur), processing it, and sending it back. I noticed a fair amount of lag (for an action game) in God of War 3, for instance.
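A toy model of that round trip, just to show the data flow I'm describing; the strip-per-SPE split and the cheap blur stand-in are my own assumptions, not Sony's actual MLAA code:

```python
import numpy as np

def cpu_post_process(gpu_framebuffer: np.ndarray, num_workers: int = 5) -> np.ndarray:
    """Toy model of the PS3-style round trip: copy the finished frame out of
    'GPU memory', filter it on the 'CPU' in strips, and copy it back.
    Illustrates the data flow only, not the real MLAA implementation."""
    # 1. "DMA" the finished frame from the GPU pool to main memory (a copy).
    frame = gpu_framebuffer.copy()

    # 2. Split into horizontal strips, one per worker (an SPE stand-in).
    strips = np.array_split(frame, num_workers, axis=0)

    # 3. Each worker runs a cheap horizontal blur as a stand-in for MLAA's edge blending.
    filtered = [(s.astype(np.float32) + np.roll(s, 1, axis=1)) / 2 for s in strips]

    # 4. Reassemble and copy back into the GPU-visible buffer before scan-out;
    #    every one of these hops adds latency to the frame.
    gpu_framebuffer[:] = np.vstack(filtered).astype(gpu_framebuffer.dtype)
    return gpu_framebuffer

frame = np.random.randint(0, 256, (720, 1280, 4), dtype=np.uint8)  # fake 720p RGBA frame
cpu_post_process(frame)
```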
Anywho, that way of going about things was less than ideal (inventive, yes), but the fact is... you could do it.
The X360 also did some crazy things: some games used part of main RAM as a framebuffer or buffer of sorts, and some used solutions like the Halo 3 HDR rendering technique (instead of rendering HDR the normal way, it opted to render the image twice at two exposure levels, one darker and one lighter than the final result, then combined them). The Wii U's 32 MB of eDRAM is quite a bit, but with approaches like that it's anything but unlimited; and the thing is, they were eating into their headroom by doing some of those tasks "the wrong way", which increased latency but still got the work done.
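As a toy illustration of the dual-exposure idea (this is not Bungie's actual shader, and the exposure values are made up for the example):

```python
import numpy as np

def merge_exposures(dark_buf: np.ndarray, bright_buf: np.ndarray,
                    dark_exp: float = 0.25, bright_exp: float = 2.0) -> np.ndarray:
    """Toy version of dual-exposure HDR: the scene is rendered twice, once
    underexposed and once overexposed, and the two 8-bit buffers are merged
    back into a wider-range, scene-referred result."""
    # Undo each buffer's exposure multiplier to approximate scene-referred light.
    dark = dark_buf.astype(np.float32) / 255.0 / dark_exp        # keeps highlight detail
    bright = bright_buf.astype(np.float32) / 255.0 / bright_exp  # keeps shadow detail

    # Where the bright buffer clipped at white, fall back to the dark buffer's value.
    return np.where(bright_buf >= 255, dark, bright)
```

The cost, of course, is that you're holding (and writing) two render targets instead of one, which is exactly the kind of thing that eats a small eDRAM pool alive.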
I've only talked about exclusives so far, and that's the thing: exclusives will be just fine on the Wii U; the problem is third-party games, possibly even late ports. No developer wants to optimize a game from the ground up or cater to a lot of rules just so the game runs like it should; they just expect to drop the code in and see it run sufficiently well.
The expression "But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid" comes to mind, but it's still Nintendo's fault for making it more of a one-trick pony than a jack of all trades.
The GameCube (and by extension the Wii) was a one-trick pony; ports fared so-so there, but if you wrote a game from the ground up you'd have a whole lot of shortcuts hardwired into the hardware, and the game could shine big time.
The reason embossed surfaces, EMBM water maps and even fur shading were so cheap is that not only was it very good at texturing, it also had 1 MB of dedicated eDRAM on the GPU for textures. Those textures were often low-res (there wasn't a lot of room), but they had their own memory channel and bandwidth, and could essentially spam the GPU while being accessible at all times. Doing fur shading on the Xbox (Conker: Live & Reloaded) was comparably heavier, as is doing it out of normal main RAM on the X360 (and there's no way the 10 MB eDRAM buffer would have the headroom for it).

With that in mind it's easy to see why Nintendo went with 32 MB of eDRAM: not as a means to have a big framebuffer that can endure any kind of usage, but for main-screen rendering, the auxiliary (GamePad) screen, and then extra RAM for doing some stuff the Nintendo way (and suddenly it doesn't seem that big). Thing is, nobody's rendering pipeline is being optimized for "ex-GameCube developer logic", and being the most different kid on the block never brings the best results; even if that way of doing things works and is effectively the best, they should never have deprecated the others.
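For a sense of scale, a rough budget sketch of how that 32 MB might get carved up; the render-target formats and the GamePad-buffer assumptions are mine, not anything Nintendo has confirmed:

```python
MB = 1024 * 1024

def target_bytes(width: int, height: int, bytes_per_pixel: int = 4) -> int:
    """Size of one render target in bytes (assumes 32-bit pixels, no MSAA)."""
    return width * height * bytes_per_pixel

# Hypothetical carve-up of the Wii U's 32 MB eDRAM; formats are guesses.
main_color = target_bytes(1280, 720)   # TV color buffer
main_depth = target_bytes(1280, 720)   # TV depth/stencil buffer
pad_color  = target_bytes(854, 480)    # GamePad color buffer
pad_depth  = target_bytes(854, 480)    # GamePad depth buffer

used = main_color + main_depth + pad_color + pad_depth
print(f"Framebuffers: {used / MB:.1f} MB, leaving {(32 * MB - used) / MB:.1f} MB "
      "for textures/scratch 'the Nintendo way'")   # ~10 MB used, ~22 MB left
```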
This machine is not about moving data around; it's about putting data in the right place from the beginning and making it as effective as possible there. But we just had a generation that was pretty anarchic in that sense. They're enforcing the wrong kind of rules for this crowd and possibly getting the wrong results because of it.
The biggest problem, though, is that they've put themselves in a really vulnerable position by coming to market first. It's not about besting current-gen platforms; it's about remaining respectable (not a beast of a platform, just respectable) once the other platforms are out there. And depending on the hand the other manufacturers are holding, or are still putting together now that Nintendo has shown theirs, they might be laughing, or planning to make sure they perform decidedly better in that area (or that their architecture leans heavily on having bandwidth, and on developers using it along with compression) so that Nintendo has a hard time. And I can't blame them for doing just that.