Killzone 3 on Xbox360?
I couldn't find clear examples of CPU audio usage on Xbox360. I can at least link to my sources. Can you?
Some games (like racing games) can take up a whole core on the 360 just for the sound.
Depends on the game and the developers. Also, the guy you replied to was specifically referring to the 360. The PS3's Cell can probably handle sound resources better than the 360 due to the SPUs, though I'm not sure.

This again. An example: the audio for Killzone 3 took less than 3% of the CPU budget. Audio does not take a large part of a game's CPU budget.
The Cell is primarily designed as a multimedia processor. It's not surprising that it would handle audio without problems.
I find this insane. I mean yeah, the Dashboard/XMB can be a bit slow sometimes, but they still manage to do pretty much everything a console OS could possibly need. 1GB just seems like such a mind-boggling level of overkill.
There is a very simple explanation for the massive OS RAM: the internet browser.
They should probably be able to reduce it quite a bit. But that still puts it at a completely different level than the PS3 or 360.
Wait till you see what Durango uses, if the rumors turn out to be true.
And there is a simple rebuttal to that: the Vita.
What's wrong with limited range RGB? Assuming you're using an HDTV, that matches the levels used by HD broadcast and Blu-ray: 16-235 I think, rather than 0-255. If you're calibrated for those, then your console should be set to limited anyway.
Every game is different. IIRC Halo 3 used an entire core for audio, while Reach used a single thread.
Thanks KageMaru.
Yeah, I'm shaking my head at those rumors. If this turns out true, I wonder how much Sony will reserve.
You answered your own question. With limited range RGB you lose precision, since the transmission range is, well, limited. It's not a massive issue (though I can easily see it on some gradients), but it's really an unnecessary restriction.
On a general purpose CPU, audio can really clog up capacity.
Racing games in particular are extremely taxing on audio. A single car uses a multitude of "voices" at any time, which multiplies with each car on track.
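As a rough sketch of why that adds up (the voice and car counts here are made-up numbers, not from any real game), software mixing cost grows linearly with the number of active voices:

```python
# Toy software mixer: cost is ~ len(voices) * n_samples additions,
# so CPU time scales directly with the active voice count.
def mix(voices, n_samples):
    out = [0.0] * n_samples
    for v in voices:
        for i in range(n_samples):
            out[i] += v[i]
    return out

voices_per_car, cars = 20, 16          # illustrative figures only
active_voices = voices_per_car * cars  # 320 voices mixed per audio frame
```

A full grid of cars multiplies per-car voices into hundreds of simultaneous streams, while most other genres stay at a few dozen.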
General-purpose CPUs are perfectly fine for processing audio; the Cell does not have some big advantage (other than the fact it has around double the cores of the 360).
Audio is easy to handle.
Just because one game used a core on the 360 for something like 300 sounds at a time does not mean a damn thing for most games.
Might be easy to handle, but it's still taking up valuable threads and clock cycles that the CPU could use elsewhere. I know a few years ago they estimated that on PC you gained something like a 5-10% CPU performance boost if you grabbed a dedicated sound card versus onboard. That's not a ton, but every little bit can count!
So, is it fair to say that the Wii U is a side step rather than a jump forward compared to current-gen hardware?
Obviously. Most sound cards don't feature DSPs, and even if they do, they hardly get used anymore.
That was a very long time ago.
Modern soundcards will give you no performance increase whatsoever.
Yes, but if Sony or MS went the DDR3 route as well, increasing the amount of RAM also allows them to use a wider bus and increase overall bandwidth, and thus performance, even if the total amount/amount reserved for apps seem like overkill. If Nintendo had relented a little bit and used 8 RAM chips, like the original 360, instead of the 4 they went with to cut costs, a 128-bit bus would have been possible and we wouldn't be having this discussion right now. I fully expect MS and Sony to go with a bunch of RAM chips next gen. Like 12 perhaps.
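To put numbers on the bus-width point (treating the DDR3 as 1600 MT/s parts is an assumption for the sake of the arithmetic):

```python
# Peak DDR3 bandwidth = bus width in bytes * transfers per second.
def ddr3_peak_gbs(bus_bits, mt_per_s):
    return bus_bits / 8 * mt_per_s / 1000  # MB/s -> GB/s

four_chips = ddr3_peak_gbs(64, 1600)    # 4 x16 chips, 64-bit bus: 12.8 GB/s
eight_chips = ddr3_peak_gbs(128, 1600)  # 8 chips, 128-bit bus: 25.6 GB/s
```

Doubling the chip count at the same speed grade doubles the bus width and therefore peak bandwidth, which is the whole argument for going with 8+ chips.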
Edit: Responded to the correct post this time.
That'd be fair.
It seemingly has a GPU with a fairly modern feature set and similar capability, and more (but slower) memory.
But it's also fair to say it has drawbacks in comparison to the PS3/360: the aforementioned memory bandwidth, and a CPU that lacks any kind of grunt.
Culminating in something that should be marginally more powerful, getting there in different ways.
Every display device I'm using can be switched between full and limited range RGB (most recent ones auto-detect it).

You lose some numbers, but it's more accurate to a calibrated display - it would be calibrated so that 16 is black. If you send it values below 16 with a full range signal, you won't get very dark grey, you'll still get black, so you're clipping the signal.
Conversely if you calibrate for full range (eg a computer monitor) then playing a bluray on it would have dark grey blacks, and light grey whites. (Unless you have a PC with software bluray player set to compensate for that)
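The precision loss can be made concrete with a quick sketch (a simple linear mapping with rounding; real converters may dither):

```python
# Map a full-range level (0-255) onto the limited video range (16-235).
def full_to_limited(y):
    return round(16 + y * 219 / 255)

codes = {full_to_limited(y) for y in range(256)}
# 256 distinct inputs collapse into only 220 output codes, so some
# neighbouring full-range values end up identical - banding on gradients.
```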
It will certainly handle branchy code better than Xenon/Cell per clock. But I really wish we knew how much lower the clock is.

To be fair I think it'll handle general purpose code better than the Xenon or Cell. It's lacking in floating point performance and I imagine that's where most of the weak CPU complaints are coming from.
The whole point of paired singles was to use one double precision register to store two single precision values and then do math on both values across registers. If the values have to be in separate registers, it's not really a paired single anymore.

As I understood that post, it's not really 96 bit registers. It's that there are general 64 bit registers, but the paired single instructions don't use those 64 bits, they use 32 bits of that and another extra 32 bit register.
Which does honestly sound confusing to me (I'm used to SIMD architectures that simply pack N X-bit numbers into one N*X bit register).
PPC FPRs are normally 64bit because PPCs normally support double precision (even a lowly embedded CPU does that). From there on, when processing singles, half of the FPR is used. When processing doubles, the entire FPR is used. When using paired singles 750CL (aka Gekko, aka Hollywood) uses the low 32 bits of the FPR for ps0 (paired-single-0) and the high 32 bits for ps1 (paired-single-1). That slashdot anonymous post is a joke.

Some anonymous dude on Slashdot who claims to be a developer working on Wii U wrote something interesting: According to him, the Wii U CPU has no VMX or VSX units. Instead, Nintendo still uses a SIMD capable FPU with support for paired singles. That makes sense, paired singles would be required for Wii compatibility, so they have to be supported. No surprise here.
But he also wrote that he expected the FPRs (floating point registers) to be 64bit, enough for one double or two singles. 64bit FPRs seem to be standard for all PowerPC cores, no matter whether or not the cores support paired singles (like the 476FP). What he found instead were 96bit registers. I've never heard of a 96bit floating point unit or 96bit registers. I don't even know if there would be a point. Coordinates are typically 96bit I believe, so maybe having one vertex coordinate per register would be beneficial?
Guys, can anyone enlighten me? I would like to know how the capacities and rates we hear left and right influence the quality of the output (framerate, polygons, visual effects). All that I know is that data from the disc is read and ends up being displayed on the screen (yeah, I am that clueless). I have no idea how the components, be it GPU, CPU, RAM, eDRAM, or disc drive, work with one another to make that happen, and how crucial their attributes are. That's what I want to learn (if you could help me).
What a username!

I have a question about the form factor of the U. Is it just the way things turned out because of the parts they used, did they use specific parts in order to get a small form factor, or is there a business case for it? I'm assuming a small form factor results in higher costs.
I don't know where else to post this, and it's not worth making a new thread for (plus I don't know if it's old or not), but Panasonic are supplying the Wii U's optical disc drives, which are equipped with an optical pickup (whatever that is).
http://www.asahi.com/digital/nikkanko/NKK201211200004.html
http://ieeexplore.ieee.org/xpl/logi...rg/iel1/30/11320/00536189.pdf?arnumber=536189

ABSTRACT
The optical pick-up for DVD with CD (compact disc) compatibility is discussed. The difference of the substrate thickness between DVD and CD causes different spherical aberration and prevents the laser beam from being focused into a diffraction-limited spot size with only one objective lens. Several methods of reducing this aberration while retaining compatibility with CD are proposed. The twin-lens type optical pick-up is one solution to this problem. It incorporates objective lenses for both DVD and CD. Each lens results in an optimum focused spot without the extra spherical aberration for each type of disc.
Thanks for the explanation. I already knew a few things about how paired singles work and that they are limited to certain operations (which are listed here) - I mostly wondered if extending the system by adding a ps2 would make sense, to modify a 3D coordinate with two registers in a single cycle for example.
edit: Ok, I'm in the middle of a gargantuan build, so let me elaborate a bit. First, a disclaimer: I'm in no way familiar with U-CPU, never touched it, never smelled it, never even removed its heatsink. Yet, there's a single, very useful public bit of info re U-CPU - it's binary compatible with the Gekko/Hollywood. And that is a well-studied CPU. So back to 750CL (which is IBM's stock name for Gekko, in case somebody missed that).
Gekko supports paired singles by utilizing its super-scalar FPU pipeline and through some enhancements to its load/store unit. Like every PPC I've had contact with, Gekko features 64bit floating-point registers (FPR), and unsurprisingly, bluntly uses those to store paired singles (as I said in the part before the edit). Now, there's a catch in that - while the organisation of the singles is indeed in the logical high/low split of the FPR, the CPU _cannot_ interpret doubles as a singles pair, and vice versa. What that means is that if you load a pair of singles (through its dedicated load instruction, blessed be anonymous cowards) into an FPR, that FPR cannot be treated in a subsequent instruction as if it contained a double - no subsequent math on doubles can use that register as a double - such math will produce unpredictable results. But the restriction does not end there - not only can math not treat this FPR as a double, the store instructions that work with doubles cannot use that register as a double either. So if you naively expect that an FPR which contains a singles pair could be stored via a doubles store op - you'd be wrong! You have to use the dedicated paired-singles op to store an FPR containing paired singles.
*build done*
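The double/paired-singles aliasing hazard described above can be sketched bit-for-bit (pure illustration in Python, not Gekko code; layout as in the post, ps0 in the low 32 bits, ps1 in the high 32):

```python
import struct

# Build the 64-bit image of an FPR holding a pair of singles.
def fpr_with_paired_singles(ps0, ps1):
    lo = struct.unpack("<I", struct.pack("<f", ps0))[0]
    hi = struct.unpack("<I", struct.pack("<f", ps1))[0]
    return (hi << 32) | lo

bits = fpr_with_paired_singles(1.0, 2.0)
# Reinterpreting those same 64 bits as one IEEE double yields a value
# unrelated to either single - hence doubles math/stores must not touch
# a register holding paired singles.
as_double = struct.unpack("<d", struct.pack("<Q", bits))[0]
```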
While I would not rule out a hypothetical ps2 as having some value, 3D math is normally a 4D vector matter - homogeneous coordinates and stuff.
What does the fourth value do? I only code at a much higher level, and have only ever used three floats to translate coordinates.
Hey, while my two techie buddies are here, how would you explain that a developer could be circumspect about the alleged speed of the Wii U RAM after the recent teardowns, stating that they haven't witnessed such performance? (Yes, I received some impressions on this affair.)
I suspect there are two scenarios:
1) The whole memory layout is built in such a way that no developer could suspect that, in the middle of processing from storage to the screens, their code is having a hard time in RAM, and on the contrary the general performance of that whole area is good (the more likely case, with the eDRAM, etc., as said many times).
2) There is more to it than those rather simple analyses - at least the publicly available ones (pictures of the chips, finding the part references, searches on the net) - and the bandwidth could be clearly higher than what those teardown results lead us to think (so more than the roughly 12GB/s)?
Yeah, that really should have been there from the start. But I'm sure Nintendo will patch it in soon enough, that shouldn't be particularly hard.

Why would you calibrate a TV for RGB anything with Blu-rays and DVDs? YCbCr is what you should be using. YCbCr also happens to be what the Wii U uses in Wii mode.
I'm not going to be especially happy until Full Range RGB is patched onto the Wii U.
This seems exceedingly unlikely. The (maximum) bandwidth to the 2GB DDR3 memory is quite simply what it says on the chips.
The issue was the amount of eDRAM and the lack of CPU muscle to handle tiling in many games. IIRC eDRAM can be beneficial for deferred rendering, saving memory where the G-buffer would typically be stored.
Homogeneous coordinates are N-dimensional coordinates where one of the coordinates (read: normally the last one) is an 'out-of-this-world' component, figuratively speaking. It allows matrix transformations (among other things) to feature the entire set of spatial transformations you'd normally care about, as long as the matrices are NxN, or Nx(N-1). Basically, the extra component is an 'out-of-band' thing which carries extra information, which you cannot encode in a 'normal' vector of just the bare dimensionality.
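A minimal sketch of what that buys you in the usual 3D case (plain Python, nothing engine-specific): with w = 1 a point gets translated, while with w = 0 a direction vector ignores translation - something a bare 3x3 matrix acting on 3D vectors cannot express.

```python
# Multiply a 4x4 matrix by a 4-component homogeneous vector.
def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):  # translation only fits in the 4th column
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

point = [2.0, 3.0, 4.0, 1.0]      # w = 1: affected by translation
direction = [0.0, 0.0, 1.0, 0.0]  # w = 0: translation falls away
moved = mat_vec(translation(5, 0, 0), point)          # [7.0, 3.0, 4.0, 1.0]
unchanged = mat_vec(translation(5, 0, 0), direction)  # [0.0, 0.0, 1.0, 0.0]
```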