There will be differences. (Maybe not at launch)
Lord Cerny says it won't happen until year 3 or 4. Will just have to see what 2017 brings us.
With regards to the above statement - I understand its benefit over PC tech, but how does it compare to PS4?
I'd assumed he meant PS4, especially given the quote was re-tweeted by Yoshida.
"What we're seeing with the consoles is actually that they are a little bit more powerful than we thought for a really long time, ESPECIALLY ONE OF THEM, but I'm not going to tell you which one," Nilsson told VideoGamer.com at Gamescom earlier today.
"And that makes me really happy. But in reality, I think we're going to have both those consoles pretty much on parity, maybe one sticking up a little bit."
Good. If XB1 is capable of the same type of CPU-GPU algorithms as PS4 then everybody wins big.
I think Yoshida jumped on the quote, but my reading of the implication is that one of them is "surprisingly more powerful than we thought" (the Xbone was the one thought to be less powerful); ergo, the Xbox One is surprising with its power, meaning they will be "consoles pretty much on parity". "One sticking up a little" was the PS4 having a slight edge over the Xbone, but not as much as raw specs have led people to believe.
However, all of this is no secret sauce. It is still "just" memory. It is there to provide additional bandwidth that the DDR3 by itself just can't provide. Its downside compared to the PS4 is that this pool is only 32MB, whereas the PS4 has more bandwidth across the entire memory pool. 32MB is enough to store the most important pixel buffers. Nevertheless, you can still saturate the pool easily if you are using deferred (two-pass) rendering, which uses multiple "auxiliary" buffers to store information gathered in the first pass. KZ:SF, for instance, uses 39MB at 1080p for pixel buffers alone.
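A rough back-of-the-envelope check of that claim (the buffer list and pixel formats below are my own illustrative assumptions, not KZ:SF's actual layout):

```python
# Rough G-buffer size estimate at 1080p with a hypothetical set of
# deferred-rendering targets (illustrative formats only).
width, height = 1920, 1080
pixels = width * height  # 2,073,600 pixels

# Assumed render targets and their bytes per pixel:
buffers = {
    "albedo (RGBA8)": 4,
    "normals (RGBA16F)": 8,
    "depth/stencil (D24S8)": 4,
    "motion vectors (RG16F)": 4,
}

total_bytes = sum(bpp * pixels for bpp in buffers.values())
print(f"total: {total_bytes / 2**20:.1f} MiB")  # ~39.6 MiB, over the 32MB eSRAM
```

Even this modest set of targets already overflows a 32MB pool, which is the saturation point the post is describing.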
It's amazing how some of you can truly think you know more (or better) than the people working (and engineering) for these companies. Let alone websites like ExtremeTech, that specialize in knowing about computer tech in extreme detail. The "they are absolutely wrong" statements made by some here really amuse me.
I can't get into a full-on tech discussion with you as you know more than I do, but the implication from the feedback trickling in is that it possibly makes up a fair margin of the perceived memory speed deficiency.
But "having a similar onion/garlic bus to PS4" is new information compared to what we knew last week, even if we don't know all the details of its operation yet
This is not a correct interpretation of what Cerny said. The context of Cerny's Year 3 or 4 quote is that there are modifications that his team made to the APU that will not be effectively utilized until year 3 or 4.
The base power of the PS4 is ~40% greater than the X1's, and it will be reflected in the games that are released, even at launch.
No it isn't! They've just used the "PS4 names".
No, they are just using the names that the respective PS4 buses have, but the information itself is not new:
Amen
Onion+ / selectively invalidating cache lines / more queues are PS4-only.
There is proprietary tech in the form of the additional "Onion+" bus which is what Cerny is referring to in this quote:
http://www.gamasutra.com/view/feature/191007/inside_the_playstation_4_with_mark_.php?page=2
Ok, well new to me as gaf told me onion/garlic bus was proprietary ps4 tech.
But maybe I was just not paying enough attention to have heard otherwise
"First, we added another bus to the GPU that allows it to read directly from system memory or write directly to system memory, bypassing its own L1 and L2 caches. As a result, if the data that's being passed back and forth between CPU and GPU is small, you don't have issues with synchronization between them anymore. And by small, I just mean small in next-gen terms. We can pass almost 20 gigabytes a second down that bus. That's not very small in today's terms -- it's larger than the PCIe on most PCs."
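Quick arithmetic to put that figure in perspective (my numbers, just dividing the quoted bandwidth by an assumed frame rate):

```python
# What "almost 20 gigabytes a second" on that extra bus means per frame
# (my arithmetic, not from the article).
bus_bw = 20e9   # bytes per second, approximate figure from the quote
fps = 60        # assumed target frame rate
per_frame = bus_bw / fps
print(f"{per_frame / 1e6:.0f} MB of CPU<->GPU traffic per frame at {fps} fps")
```

At 60 fps that is roughly 333 MB per frame of cache-bypassing traffic, which is why Cerny calls it "not very small" for the fine-grained data sharing he describes.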
Are we sure that the Xbone can't do this?
Anyone who really thought the Xbox One was going to launch $100 more and significantly weaker was just being dumb or a crazy fanboy. It may, in fact, be weaker but it would never be so much so that third party games would look different.
I never thought about it this way. Makes sense.
Lol joke post?
so you're completely disregarding the fact that PS4 is proven to have better performance by ~40% because of released known specs of both systems, and that X1 is $100 more because of kinect? because fanboys?
still no hUMA? o_o
It has better performance if you simply look at the parts used and their individual performance on paper. Clearly, based on this article, MS is doing some trickery (cue unoriginal hacks for "secret sauce" jokes) to get similar performance out of lesser parts. Specifically this huma/shared memory stuff.
As far as I can tell, it's like saying the PS4 is a vacuum and the Xbox One is an automated broom. Same idea but one is straight horsepower and the other is some bullshit MS cooked up.
Who knows? The Xbox One could be straight trash, or the PS4 could melt in its tiny casing on day 3. But, as far as I have seen, they are about the same.
What is hUMA and why is it so important all of a sudden?
but both are huma/have shared memory....
I think this is easily the most reasonable interpretation of the quote.
If you were to take it otherwise, i.e. that the PS4 was surprisingly more powerful than they thought and that the PS4 will be slightly better performing than the XB1, then the logical conclusion would be that they initially thought the XB1 was more powerful than the PS4. That's an unreasonable conclusion, based on nothing more than RAM type and GPU specs.
Why are they referring to eSRAM as a cache?
Can someone please explain the difference between what esram being a cache or scratchpad means and why the distinction is so important?
Scratchpad memory (SPM), also known as scratchpad, scratchpad RAM or local store in computer terminology, is a high-speed internal memory used for temporary storage of calculations, data, and other work in progress. In reference to a microprocessor ("CPU"), scratchpad refers to a special high-speed memory circuit used to hold small items of data for rapid retrieval.
It can be considered similar to the L1 cache in that it is the next closest memory to the ALU after the internal registers, with explicit instructions to move data to and from main memory, often using DMA-based data transfer. In contrast with a system that uses caches, a system with scratchpads is a system with Non-Uniform Memory Access latencies, because the memory access latencies to the different scratchpads and the main memory vary. Another difference with a system that employs caches is that a scratchpad commonly does not contain a copy of data that is also stored in the main memory.
In computer science, a cache (/ˈkæʃ/ kash)[1] is a component that transparently stores data so that future requests for that data can be served faster. The data that is stored within a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere. If requested data is contained in the cache (cache hit), this request can be served by simply reading the cache, which is comparatively faster. Otherwise (cache miss), the data has to be recomputed or fetched from its original storage location, which is comparatively slower. Hence, the greater the number of requests that can be served from the cache, the faster the overall system performance becomes.
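A toy sketch of the difference the two definitions above describe (my illustration, nothing like the consoles' actual hardware): with a scratchpad the program explicitly copies data into fast memory and must know what it staged; a cache intercepts every access transparently and keeps copies of main-memory data on its own.

```python
# Toy contrast: explicit scratchpad management vs. a transparent cache.
# Purely illustrative; real hardware is far more complex.

main_memory = {addr: addr * 10 for addr in range(100)}

# --- Scratchpad: software decides what lives in fast memory ---
scratchpad = {}

def scratchpad_load(addrs):
    """Explicit 'DMA-style' copy from main memory into the scratchpad."""
    for a in addrs:
        scratchpad[a] = main_memory[a]

def scratchpad_read(addr):
    # The programmer must have staged this address, or it's a bug.
    return scratchpad[addr]

# --- Cache: managed automatically, transparent to the program ---
cache, hits, misses = {}, 0, 0

def cached_read(addr):
    global hits, misses
    if addr in cache:
        hits += 1       # cache hit: served from the fast copy
    else:
        misses += 1     # cache miss: fetch from main memory, keep a copy
        cache[addr] = main_memory[addr]
    return cache[addr]

scratchpad_load([1, 2, 3])
print(scratchpad_read(2))               # 20, because we staged it ourselves
print(cached_read(5), cached_read(5))   # 50 50; the second read is a hit
print(hits, misses)                     # 1 1
```

Note the scratchpad never mirrors main memory on its own: reading an address that was not explicitly loaded fails, which is exactly the "does not contain a copy of data that is also stored in the main memory" distinction quoted above.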
But, as far as I have seen, they are about the same.
The surprise is about the PS4, since third parties always talk as if both are equal.
Scratchpad: http://en.wikipedia.org/wiki/Scratchpad_memory
cache.
The important difference: a scratchpad does not hold a transparent copy of main memory and must be managed explicitly by the programmer, whereas a cache does this automatically.
So do I understand correctly that if it's a scratchpad, it will require a bit more explicit programming to use the eSRAM, whereas if it is a cache it does this automatically?
And then of course the next question: if the computer reporters are all referring to this Hot Chips discussion as a cache, how can we be so sure it is not?
So to summarize, does it make games look better, play better, or make the process of making a game easier?
Look Better: In this case probably not, but possible, although unlikely due to the setups of both consoles.
Play Better: Yes, this will likely improve CPU-limited functions by using GPGPU in a more efficient manner, allowing for more advanced GPGPU algorithms compared to the alternative CPU-only algorithms.
Making easier: Yes.
Caches are subdivided into "cache lines", which are the atomic blocks of data managed by a cache. I think for Jaguar the size of a cache line is 64kb.
You probably mean 64 bytes.
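With 64-byte lines (as the correction says), the line a byte address falls in is just the address divided by 64. A quick illustration of how addresses map to lines:

```python
LINE_SIZE = 64  # bytes per cache line, per the correction above

def cache_line(addr):
    """Index of the cache line containing byte address `addr`."""
    return addr // LINE_SIZE

# Bytes 0..63 share one line; byte 64 starts the next line.
print(cache_line(0), cache_line(63), cache_line(64))  # 0 0 1

# A 4-byte value starting at byte 62 straddles a line boundary,
# so accessing it touches two cache lines:
print(cache_line(62), cache_line(62 + 3))  # 0 1
```

This is also why the difference between 64 bytes and "64kb" matters: the line size sets the granularity at which the cache fetches and invalidates data.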