jeff_rigby said:
I edited Patsu's comment to explain what I was amplifying. Starting with Patsu:
Originally Posted by patsu:
(with editing) The ability for the general-purpose SPEs and the GPU to work seamlessly together, acting like an internally firmware-programmable GPU, would not be possible with other GPU packages. It's pretty forward thinking. See MLAA and deferred shading.
The original 2006 PS3 OpenGL library of calls was provided by Nvidia and had to be modified by Sony because of the SPE-RSX GPU combination. Since low-level calls would be more efficient anyway, Sony did not bother to do a thorough job of it (at least that's my guess).
Two years after the release of the PS3, someone came up with MLAA and deferred shading. Because the SPE-RSX combination could act like an internally programmable GPU, at least for some functions involving the SPEs, these features were added to a game engine that used low-level calls, and at GDC 2011 there are now two engines touting those features.
A more modern GPU might now include MLAA and some form of deferred rendering and also support OpenGL efficiently, but 2005 GPUs didn't. I didn't research whether one did, but today several GPUs designed for portables do TBDR. It's a process vs. brute-force approach, with process being more efficient as far as batteries go, and now, with memory and bandwidth bottlenecks, more GPUs are adopting TBDR.
Your explanation of OpenGL matches my understanding. Deferred shading and MLAA may now be a transparent part of the PS3 OpenGL library of calls. The PS3 SPE-RSX combination, as far as programmers are concerned, may NOW support OpenGL as efficiently as a more modern GPU that internally supports deferred rendering to increase performance.
I guess I kinda see what you are saying, but I don't think it really works that way. For one, I don't think modern PC GPUs "internally support deferred rendering" - maybe the confusion here is between the tile-based deferred rendering hardware in the PowerVR-based GPUs and the deferred shading/lighting used in many modern games; despite the similarity in the names, those algorithms are not the same.
Modern PC GPUs have added more functionality to make it easier to write a renderer using deferred shading/lighting, but this functionality is pretty low level.
I don't think there was really a plan for the SPEs and the RSX to work "seamlessly" - where I will define "seamlessly" as "transparent to the programmer" (as opposed to "the graphics pipeline is implemented to use both resources"; certainly you can argue that any final implementation is "seamless" because it works...). You always have to plan what your SPEs will do vs. what you want the RSX to do and architect your rendering pipeline accordingly.
I also do not believe that there are special PS3-specific OpenGL extensions to enable MLAA/deferred rendering (and if there were, I'm not sure I would understand what they would be - I would think that those techniques are higher level than the aim of OpenGL).
That said, I'm pretty sure there's nothing stopping you from implementing deferred rendering or MLAA even if you are using Sony's version of OpenGL. (To be fair, one thing I am not is an OpenGL expert, so maybe I have been looking at lower level stuff too long.)
It's basically analogous to an algorithm vs. a programming language. Generally speaking, a programming language doesn't have to have an algorithm "built in" for it to be possible to implement that algorithm in the language. For example, what is deferred shading/lighting? My understanding is that instead of writing color values into your intermediate buffers, you write other data to them, like normals and whatnot; then you build the final color buffer by running a pixel shader that implements your lighting, reads the values from all of your intermediate buffers as textures, and writes the color to the final framebuffer. That said, what's stopping you from implementing this in OpenGL? Just allocate the buffers as needed and write your shaders to implement the algorithm.
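To make that concrete, here's a minimal sketch of that two-pass structure in desktop OpenGL 3.x (not Sony's PSGL). The buffer formats, the shader programs, and the draw_scene()/draw_fullscreen_quad() helpers are all assumptions for illustration:

```c
/* Minimal deferred-shading sketch: pass 1 writes geometry data into a
 * G-buffer; pass 2 reads it back as textures and computes lighting. */
#include <GL/glew.h>

extern void draw_scene(void);            /* assumed helper */
extern void draw_fullscreen_quad(void);  /* assumed helper */

static GLuint gbuffer_fbo, tex_normal, tex_albedo, tex_depth;

static GLuint make_target(GLenum ifmt, GLenum fmt, GLenum type, int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, ifmt, w, h, 0, fmt, type, NULL);
    return tex;
}

void init_gbuffer(int w, int h)
{
    glGenFramebuffers(1, &gbuffer_fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, gbuffer_fbo);

    /* Instead of final colors, the first pass writes normals, albedo,
     * and depth into these intermediate buffers. */
    tex_normal = make_target(GL_RGBA16F, GL_RGBA, GL_FLOAT, w, h);
    tex_albedo = make_target(GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE, w, h);
    tex_depth  = make_target(GL_DEPTH_COMPONENT24, GL_DEPTH_COMPONENT,
                             GL_UNSIGNED_INT, w, h);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex_normal, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                           GL_TEXTURE_2D, tex_albedo, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, tex_depth, 0);

    GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, bufs);
}

void render_frame(GLuint geom_prog, GLuint light_prog)
{
    /* Pass 1: fill the G-buffer with normals/albedo/depth. */
    glBindFramebuffer(GL_FRAMEBUFFER, gbuffer_fbo);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(geom_prog);
    draw_scene();

    /* Pass 2: bind the G-buffer as textures, run the lighting shader
     * over a fullscreen quad, and write the final color buffer. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glUseProgram(light_prog);
    glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, tex_normal);
    glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, tex_albedo);
    glActiveTexture(GL_TEXTURE2); glBindTexture(GL_TEXTURE_2D, tex_depth);
    draw_fullscreen_quad();
}
```

Nothing in there needs the API to know what "deferred shading" is - it's just buffers, textures, and shaders.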
Similarly, MLAA is a post-process: once you have the near-final framebuffer, you basically throw your SPEs at it and let them process the image. Is there really any difference between rendering that near-final framebuffer with a low-level library vs. OpenGL? Why implement it as some kind of extension to OpenGL when you can just write a generic implementation that says "allocate a framebuffer, point this library at it, and it will kick off a bunch of SPE tasks to antialias your framebuffer"?
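To sketch what such a generic, renderer-agnostic library interface might look like - and to be clear, this is entirely hypothetical, not a real Sony API:

```c
/* Hypothetical interface for a standalone MLAA post-process library.
 * The library never needs to know how the framebuffer was rendered
 * (OpenGL or low-level); it just gets pixels in SPE-reachable memory
 * and fans the image out to SPE jobs. All names here are made up. */
#include <stdint.h>

typedef struct {
    uint32_t *pixels;   /* framebuffer resolved to main (XDR) memory,
                           where the SPEs can DMA it */
    uint32_t  width;
    uint32_t  height;
    uint32_t  pitch;    /* bytes per row */
} mlaa_image_t;

/* Split the image into strips, queue one SPE job per strip, and block
 * until the whole frame has been antialiased in place. */
int mlaa_process(const mlaa_image_t *img, unsigned num_spes);

/* Usage, regardless of which library rendered the frame:
 *     mlaa_image_t img = { fb_ptr, 1280, 720, 1280 * 4 };
 *     mlaa_process(&img, 5);
 */
```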
Additionally, I don't think communication between the RSX and the SPEs would ever be "seamless" because of various constraints on where your data has to live if you want the SPEs to do work on it... with the current PS3 architecture, you definitely have to do some planning on how your rendering pipeline works if you want to use the SPEs.
Edit: sorry, you're editing as I reply
This:
jeff_rigby said:
Think about it, OpenGL is a library of GPU calls, i.e. the GPU must support the OpenGL calls. The OpenGL library is a standard to allow cross-platform support for graphics.
...is not correct.
Let me put it this way - the GPU, at its core, implements an instruction set that tells it what to do. What the OpenGL library does is generate a buffer full of GPU instructions, which the GPU then goes off and runs. It's the library that "supports" the OpenGL calls; the calls into the OpenGL library generate the lower-level GPU instructions for you. Some of the overhead is that the CPU has to do a bit more work to translate things into a format the GPU understands.
The OpenGL API is generic, cross-platform, cross-GPU. Maybe you can have hardware that makes this translation closer to 1:1, but generally, IMHO, you're better off using those transistors for other things (like more shader processing units, etc.).
If you have a lower-level library, what you basically have is an API that more closely mimics the actual GPU microcode vs. the generic OpenGL API. Does that make sense? It's late here, so I am sorry if this is confusing or hard to follow.
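Here's a toy illustration of those last two points - an OpenGL-style entry point is just CPU code that writes hardware commands into a buffer, and a lower-level library lets you write something much closer to those command words yourself. Every opcode and struct below is invented for the sketch:

```c
/* Toy model of an API call compiling down to GPU command words. */
#include <stdint.h>
#include <stdio.h>

enum { CMD_SET_VTX_OFFSET = 0x10, CMD_DRAW_TRIANGLES = 0x20 };

typedef struct { uint32_t *cur; } cmd_buffer_t;   /* write pointer */

static void emit(cmd_buffer_t *cb, uint32_t word) { *cb->cur++ = word; }

/* Roughly what a glDrawArrays(GL_TRIANGLES, first, count) call boils
 * down to after the driver validates state and translates the generic
 * API enums into hardware encodings: */
static void fake_glDrawArrays(cmd_buffer_t *cb, uint32_t first,
                              uint32_t count)
{
    emit(cb, CMD_SET_VTX_OFFSET); emit(cb, first);
    emit(cb, CMD_DRAW_TRIANGLES); emit(cb, count);
    /* The GPU's command processor consumes these words later,
     * asynchronously; the "call" never executes on the GPU itself. */
}

int main(void)
{
    uint32_t ring[64];
    cmd_buffer_t cb = { ring };

    /* A low-level library skips the validation/translation layer and
     * has the application emit near-final command words directly. */
    fake_glDrawArrays(&cb, 0, 36);

    for (uint32_t *p = ring; p < cb.cur; p++)
        printf("0x%08x\n", *p);
    return 0;
}
```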
Edit #2, and I am going to bed after this!
OpenGL is needed for the WebKit port and for ports to the PS3 via PS Suite. This supports the reasons behind a rewrite of the OpenGL library.
Newer PS3 games may have Sony suggesting developers use OpenGL calls to ensure portability between the PS3 and the NGP. Support for this last thought is a developer comment: "PS3 and NGP calls just work" (with the NGP at full resolution = 1/2 the PS3). With OpenGL being used on the PS3, remote play on the NGP is easily possible, as is remote desktop if the XMB is rewritten to use OpenGL calls.
I'm not sure I follow this - what in WebKit requires OpenGL? Is it the implementation of WebGL? (If so, you could just choose not to support it, and if you did want to support it, I would think that Sony's OpenGL implementation is good enough. Even though I do not believe many modern games use it these days, it has definitely shipped a bunch of PS3 titles...)
I don't know much about NGP programming, but when I read those comments, I assumed they meant "we brought our assets, shaders, and rendering pipeline over from the PS3 and they just worked" (which makes sense - doesn't every GPU provider have some sort of shader compiler that takes HLSL/Cg shaders and outputs native GPU microcode?)
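For what it's worth, that's exactly the shape of NVIDIA's Cg runtime on PC (and Cg is what PS3 shaders are written in): hand it shader source and a hardware profile, get back the profile-native program. A minimal sketch - the profile choice and the trivial passthrough shader are illustrative, and error handling is omitted:

```c
/* Compile a Cg shader and print the profile-native output. */
#include <Cg/cg.h>
#include <stdio.h>

int main(void)
{
    CGcontext ctx = cgCreateContext();

    const char *src =
        "float4 main(float4 color : COLOR) : COLOR { return color; }";

    /* Compile the Cg source for one particular hardware profile... */
    CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, src,
                                     CG_PROFILE_ARBFP1, "main", NULL);

    /* ...and out comes the compiled, profile-specific program. */
    puts(cgGetProgramString(prog, CG_COMPILED_PROGRAM));

    cgDestroyContext(ctx);
    return 0;
}
```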
I don't understand what you mean about Remote Play. The current implementation encodes the output as video and streams it over the network; what would be different under OpenGL? You could maybe send the stream of OpenGL calls over the network instead, but both sides would need similar data loaded to render the image... and I have a gut feeling that the stream of calls would take up more bandwidth than a compressed video stream. Just a hunch; maybe I am totally wrong on that.
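To put some very rough numbers behind that hunch - every figure below is an assumption for illustration, not a measurement:

```c
/* Back-of-the-envelope: video stream vs. streaming the API calls. */
#include <stdio.h>

int main(void)
{
    /* Remote Play today: a compressed video stream; handheld-class
       streaming sits somewhere in the low single-digit Mbps. */
    double video_mbps = 3.0;                       /* assumed */

    /* Streaming the calls instead: assume a modest 3,000 draw calls
       per frame, ~64 bytes of call + parameter data each, at 30 fps,
       before counting any textures/vertex data the far side lacks. */
    double calls_per_frame = 3000.0;               /* assumed */
    double bytes_per_call  = 64.0;                 /* assumed */
    double fps             = 30.0;

    double call_mbps = calls_per_frame * bytes_per_call * fps * 8 / 1e6;

    printf("video stream: %.1f Mbps\n", video_mbps);
    printf("call stream : %.1f Mbps (excluding assets)\n", call_mbps);
    return 0;
}
```

Under those assumptions the call stream comes out around 46 Mbps before any asset data, so the video stream wins by an order of magnitude.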