
VGLeaks: Details multiple devkits evolution of Orbis

Wasn't there a GAFer saying Planetside 2 would be too heavy of a load for the PS4's rumored CPU?

I guess he wasn't taking into account the power of porting or optimization?

The massive number of players makes it CPU intensive, but we don't know how well they can port the engine. A Planetside 2 dev said the engine could potentially be made to work on certain other devices, strongly hinting at PS4.
 

androvsky

Member

It is possible though that the OS is 32-bit. You can install 32-bit Linux on a 64-bit processor. They don't need to address more than 4 GB of memory, so really, what does a 64-bit OS buy them other than more required memory for the OS?
32-bit operating systems have a variety of addressing limits. One is the obvious no-RAM-over-4 GB, though that can be dealt with through potentially performance-limiting paging. Also, hardware such as the GPU is addressed as though it's memory, so if you can't address more than 4 GB, you have to throw out part of your memory to make room to address the hardware. This is why on Windows you generally can't use more than 3 GB of RAM in 32-bit mode. You also generally can't DMA to addresses higher than the first GB, but I don't recall the specifics well enough to say whether that applies here (since it's an AMD platform and not a PPC one, probably not).
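
To put rough numbers on that carve-out (a minimal sketch; the reservation sizes are made up for illustration, not any real memory map):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* A 32-bit pointer can address exactly 2^32 bytes = 4 GiB. */
    uint64_t address_space = 1ULL << 32;

    /* Illustrative reservations carved out of that same window:
       GPU aperture, PCI/MMIO regions, firmware. Sizes are guesses. */
    uint64_t gpu_aperture  = 512ULL << 20;  /* 512 MiB mapped for the GPU   */
    uint64_t mmio_and_bios = 256ULL << 20;  /* other memory-mapped hardware */

    uint64_t usable_ram = address_space - gpu_aperture - mmio_and_bios;
    printf("usable RAM: %.2f GiB of 4 GiB\n",
           usable_ram / (1024.0 * 1024.0 * 1024.0));  /* -> 3.25 GiB */
    return 0;
}

That kind of subtraction is where the familiar ~3.x GB ceiling on 32-bit Windows comes from.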

Also, the 32-bit instruction set on Intel-compatible CPUs is PURE SHIT, a legacy running straight back to the architecture's debut in 1981, when it was merely not very good. The 64-bit version adds a decent number of registers and doesn't completely suck.
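
The register point, spelled out (the register lists are architectural facts; the toy code is just an illustration, and actual spill behavior depends on the compiler):

#include <stdio.h>

/* x86 (32-bit) has 8 general-purpose registers, several with fixed roles:
 *   EAX ECX EDX EBX ESP EBP ESI EDI
 * (ESP is the stack pointer and EBP is often the frame pointer, so real
 * code frequently juggles only ~6.)
 *
 * x86-64 has 16: RAX RCX RDX RBX RSP RBP RSI RDI plus R8-R15.
 * With enough simultaneously live values, a 32-bit build starts spilling
 * to the stack long before a 64-bit build of the same code does. */
int main(int argc, char **argv) {
    (void)argv;
    int a = argc + 1, b = argc + 2, c = argc + 3, d = argc + 4;
    int e = argc + 5, f = argc + 6, g = argc + 7, h = argc + 8;
    printf("%d\n", a * b + c * d + e * f + g * h);
    return 0;
}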

I think the OS is going to be 64-bit.
 

Codeblew

Member
Memory address space. You can technically get around the ~3.8-ish GB limit of 32-bit OSes, but it's more efficient to use a 64-bit OS. Plus, a 64-bit OS gains some speedup by processing data in 64-bit chunks.

I agree that it is unlikely they will use a 32-bit OS, but where are you getting the 3.8 GB? WinXP could use up to 4 GB. I thought the max would just be the biggest unsigned 32-bit integer, which is 4,294,967,296 bytes; that's the highest address a 32-bit pointer can reach.

Also, hardware such as the GPU is addressed as though it's memory, so if you can't address more than 4 GB, you have to throw out part of your memory to address the hardware.

Didn't think of that.
 

Takuya

Banned
I agree that it is unlikely they will use a 32-bit OS, but where are you getting the 3.8 GB? WinXP could use up to 4 GB. I thought the max would just be the biggest unsigned 32-bit integer, which is 4,294,967,296 bytes; that's the highest address a 32-bit pointer can reach.

There is address space reserved for hardware.
 

androvsky

Member
I agree that it is unlikely they will use a 32-bit OS, but where are you getting the 3.8 GB? WinXP could use up to 4 GB. I thought the max would just be the biggest unsigned 32-bit integer, which is 4,294,967,296 bytes; that's the highest address a 32-bit pointer can reach.

Like I mentioned above, hardware needs address space, as do internal OS functions. Screwing around with paging isn't worth it, since it'll play havoc with applications trying to use the entire address space at once (PAE-style tricks let the OS see more than 4 GB of physical RAM, but each process is still stuck with a 32-bit virtual address space). It's far, far easier to just use the 64-bit OS.
 

mattp

Member
I agree that it is unlikely they will use a 32-bit OS, but where are you getting the 3.8 GB? WinXP could use up to 4 GB. I thought the max would just be the biggest unsigned 32-bit integer, which is 4,294,967,296 bytes; that's the highest address a 32-bit pointer can reach.

you need address space to map the hardware, so you can't use the full 4gb

edit: gah too slow
 

Codeblew

Member
Like I mentioned above, hardware needs address space, as do internal OS functions. Screwing around with paging isn't worth it, since it'll play havoc with applications trying to use the entire address space at once (PAE-style tricks let the OS see more than 4 GB of physical RAM, but each process is still stuck with a 32-bit virtual address space). It's far, far easier to just use the 64-bit OS.

There is address space reserved for hardware.

you need address space to map the hardware, so you can't use the full 4gb

Yeah, I wasn't thinking about that. 64-bit OS it is.
 

mattp

Member
to me it seems the extra effort/cost would be offset by psn game sales

when stuff was only disc/cart based, it made sense that people wouldn't really buy previous-gen games for their new console. they want flashy new stuff
but when it's all just together on the psn store, there's less of a "this is last gen" stigma attached to the games. i can see people buying psn games regardless of whether they were made specifically for the ps3 or not
 
Really? All I hear is people stating how Sony is moving away from the Cell and that "means" there is no BC.
I never read an "insider" stating there would be no BC.

I think Aegis or someone said probably no BC. Also, other sources I've seen, which may or may not be accurate, are saying no BC. I'm getting the no-BC vibe.

80% no BC is what I feel.
 

Razgreez

Member
And fortunately or unfortunately, vibes don't count for anything. Though I would rather have people manage expectations and not expect BC.
 

Jack_AG

Banned
Yeah if the solution is to just run it off of cloud servers then there's no hardware problem.
The idea is that content is locked to your PSN ID; the cloud is the way to ensure 100% BC. Inserting a PS2 disc would launch the game via the cloud.

With 4K support, this is the last console you will need. PS5 gets introduced via PS4 and cloud gaming. By then we will see bandwidth numbers that can stream video/audio with very little latency.

Of course, this is all my own speculation, but the cloud will be used for gaming this gen without a doubt.
 

THE:MILKMAN

Member
I don't know if this is anything, but could be interesting. uuse5 over at Semiaccurate forums has found this image:

[image: GF+2.jpg]


It has the Yole picture of a future GPU we've seen before. But the graph now also talks about 4GB stacked GDDR5.
 
I don't know if this is anything, but could be interesting. uuse5 over at Semiaccurate forums has found this image:

[image: GF+2.jpg]


It has the Yole picture of a future GPU we've seen before. But the graph now also talks about 4GB stacked GDDR5.

Jeff has posted something similar. Not sure what it implies.
 

DieH@rd

Banned
I don't know if this is anything, but could be interesting. uuse5 over at Semiaccurate forums has found this image:

[image: GF+2.jpg]


It has the Yole picture of a future GPU we've seen before. But the graph now also talks about 4GB stacked GDDR5.

If the Orbis APU is, let's say, 280 mm², how big a 2.5D interposer would be needed for it and eight GDDR5 chips? How big is one 512 MB GDDR5 chip?
 
They don't need it. People just can't read and keep saying stuff, so others (like you) end up asking this exact question.

Huh. Very interesting!

Can we take a guess how much it might add to the cost of a PS4 to add these PEs? If it's around the $20-30 mark, it might well be worth it for Sony to do.
 

daveo42

Banned
That's why they'll include an extra single die.

They've clearly put time into R&D on the patent itself: using the PE and attaching it not only to other PEs, but to other chips as well. Since it's a single die, they'd just have to attach it to the RAM.
Nothing other than the Cell can, at this point, emulate the Cell. PS5 maybe, when we go fully 3D-stacked processors, but as of now? Hell no. Not even the top PCs can do it. RSX can be emulated, but trying to emulate the Cell right now? Impossible. The architecture is way too different, and the LS that each SPE has completely surpasses the speed of everything else. Brute-forcing it doesn't work on any other chip yet.


Yeah, they won't need to put Cell in PS5 (as mentioned above), hell they might not even need PS3 games, but going forward x86 is going to make this a lot easier.

So is this single die that's being included the compute module? I've heard the rumor flying around, after it leaked, that it was supposed to be a mini-CELL system.

However, there's a fair amount of "secret sauce" in Orbis and we can disclose details on one of the more interesting additions. Paired up with the eight AMD cores, we find a bespoke GPU-like "Compute" module, designed to ease the burden on certain operations - physics calculations are a good example of traditional CPU work that are often hived off to GPU cores. We're assured that this is bespoke hardware that is not a part of the main graphics pipeline but we remain rather mystified by its standalone inclusion, bearing in mind Compute functions could be run off the main graphics cores and that devs could have the option to utilise that power for additional graphical grunt, if they so chose.
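
For context, "physics on compute" mostly means embarrassingly parallel per-object loops like this (a plain-C sketch of the idea; on the console this sort of update would run as a compute kernel, one thread per particle):

#include <stdio.h>

#define N 100000

typedef struct { float x, y, z, vx, vy, vz; } Particle;

static Particle p[N];  /* zero-initialized */

/* Semi-implicit Euler step. Every particle is independent, which is
   exactly why this work can be hived off the CPU onto wide compute
   hardware. */
static void step(float dt) {
    for (int i = 0; i < N; i++) {
        p[i].vy -= 9.81f * dt;   /* gravity */
        p[i].x  += p[i].vx * dt;
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
    }
}

int main(void) {
    for (int frame = 0; frame < 60; frame++)
        step(1.0f / 60.0f);      /* one simulated second at 60 fps */
    printf("p[0].y after 1s of free fall: %f\n", p[0].y);
    return 0;
}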
 

RoboPlato

I'd be in the dick
I don't know if this is anything, but could be interesting. uuse5 over at Semiaccurate forums has found this image:

[image: GF+2.jpg]


It has the Yole picture of a future GPU we've seen before. But the graph now also talks about 4GB stacked GDDR5.

Huh, had no idea it would be possible to stack GDDR5. That would help offset heat and latency concerns.

Core iX is superior to anything from AMD

There is the compute module in the Orbis, which I assume is to aid the CPU.
 
So is this single die that's being included the compute module? I've heard the rumor flying around, after it leaked, that it was supposed to be a mini-CELL system.

Interesting you mention physics... Here is a quote from the patent.

FIG. 8 illustrates further details of an exemplary sub-processing unit (SPU) 508. The SPU 508 architecture preferably fills a void between general-purpose processors (which are designed to achieve high average performance on a broad set of applications) and special-purpose processors (which are designed to achieve high performance on a single application). The SPU 508 is designed to achieve high performance on game applications, media applications, broadband systems, etc., and to provide a high degree of control to programmers of real-time applications. Some capabilities of the SPU 508 include graphics geometry pipelines, surface subdivision, Fast Fourier Transforms, image processing keywords, stream processing, MPEG encoding/decoding, encryption, decryption, device driver extensions, modeling, game physics, content creation, and audio synthesis and processing.

This is no different from the SPU on the Cell, except that because of the way the PE is created, they can put in as many or as few of them as they want and connect them to different memory pools.

[image: VGLEAK post1.jpg]


Someone in the comments at VGleaks says that the SoC devkit is fake.

I don't really buy that.
 

RoboPlato

I'd be in the dick
Interesting you mention physics... Here is a quote from the patent.



This is no different from the SPU on the Cell, except because of the way the PE is created, they can put as many or as few of them as they can and connect them to different memory pools.



I don't really buy that.
DF did say that the compute module is likely for physics, which would give the CPU a ton of room to breathe considering how important I expect physics and procedural animation to be next gen. It's a good strategy for getting the most out of a Jaguar based system, especially if it's cheap.
 
DF did say that the compute module is likely for physics, which would give the CPU a ton of room to breathe considering how important I expect physics and procedural animation to be next gen. It's a good strategy for getting the most out of a Jaguar based system, especially if it's cheap.

Well, the thing is, as the patent says, the SPU is a decent middle ground. It's fantastic for physics and GPU-like tasks, but if it has to, it can do CPU-like tasks better than a GPU can. Is it as good as a GPU for graphics tasks? No. Is it as good as a CPU for general tasks? No. But if they need to, they can allocate that extra couple hundred GFLOPS to whatever the devs might need.
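
For reference, that "couple hundred GFLOPS" matches the original Cell's peak; quick arithmetic using the PS3's published clocks:

#include <stdio.h>

int main(void) {
    /* Cell SPE single-precision peak: 4-wide SIMD x 2 ops per lane
       (fused multiply-add) = 8 FLOPs per cycle per SPE. */
    double clock_hz        = 3.2e9;
    double flops_per_cycle = 4 * 2;
    int    spes            = 8;      /* full Cell die; PS3 exposed 7 */

    double peak = clock_hz * flops_per_cycle * spes;
    printf("SPE peak: %.1f GFLOPS\n", peak / 1e9);  /* -> 204.8 */
    return 0;
}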
 

Deleted member 80556

Unconfirmed Member
I really wouldn't believe much from a person who just randomly pops up in comments.

It's like if I (someone who hasn't been very active in these next-gen threads, but has been reading them carefully) just popped up and said: "Guys, Orbis is using a custom-made GPU that has 2 TFLOPS" without even replying to what people say.

Fishy all in all. Especially the Bulldozer part. I thought they were using Jaguar now.
 
Doubt very much it's true.

Yeah, you're probably right, the more I think about it. The fact that he said he was under NDA and couldn't give much information, but then gave a bunch of information right after, is a red flag. Could still be worth keeping in mind though.
 

daveo42

Banned
[image: VGLEAK post1.jpg]


Someone in the comments at VGleaks says that the SoC devkit is fake.

The use of the word "monolithic" kind of gives off the impression that this is not legitimate info. Never mind my previous statement, though: it seems like a term AMD is using for the upcoming processor, so I was completely wrong on that.

Yeah, you're probably right, the more I think about it. The fact that he said he was under NDA and couldn't give much information, but then gave a bunch of information right after, is a red flag. Could still be worth keeping in mind though.

This as well.
 

THE:MILKMAN

Member
I really wouldn't believe much from a person who just randomly pops up in comments.

It's like if I (someone who hasn't been very active in these next-gen threads, but has been reading them carefully) just popped up and said: "Guys, Orbis is using a custom-made GPU that has 2 TFLOPS" without even replying to what people say.

Fishy all in all. Especially the Bulldozer part. I thought they were using Jaguar now.

See, with all the various leaks, people very easily get confused. Jaguar is almost 100% certain to be what the CPU cores will be in the retail console at this point.

What the guy in the comments is claiming is about the dev kit. Even then, if he is a dev, it could simply be that his company has yet to receive the new "SoC-based dev kit".
 
I don't know if this is anything, but could be interesting. uuse5 over at Semiaccurate forums has found this image:

[image: GF+2.jpg]


It has the Yole picture of a future GPU we've seen before. But the graph now also talks about 4GB stacked GDDR5.

GlobalFoundries can't do any of that. Also, GlobalFoundries isn't fabbing any next-gen console chips.
 

Norml

Member
Timothy Lottes said some more in the comments.

A fast GDDR5 will be the desired option for developers. All the interesting cases for good anti-aliasing require a large amount of bandwidth and RAM. A tiny 32MB chunk of ESRAM will not fit that need even for forward rendering at 1080p. I think some developers could hit 1080p@60fps with the rumored Orbis specs even with good AA. My personal project is targeting 1080p@60fps with great AA on a 560ti which is a little slower than the rumored Orbis specs. There is no way my engine would hit that target on the rumored 720 specs. Ultimately on Orbis I guess devs target 1080p/30fps (with some motion blur) and leverage the lower latency OS stack and scan out at 60fps (double scan frames) to provide a really great lower-latency experience. Maybe the same title on 720 would render at 720p/30fps, and maybe Microsoft is dedicating a few CPU hardware threads to the GPU driver stack to remove the latency problem (assuming this is a "Windows" OS under the covers).
http://timothylottes.blogspot.com/2013/01/orbis-and-durango.html
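
The back-of-envelope math behind his 32 MB point (a rough sketch counting only the main color and depth targets, RGBA8 plus a 32-bit depth/stencil buffer):

#include <stdio.h>

int main(void) {
    double w = 1920.0, h = 1080.0;
    double color_bytes = 4.0;   /* RGBA8 */
    double depth_bytes = 4.0;   /* e.g. D24S8 */
    int    msaa = 4;            /* a "good AA" sample count */

    double total = w * h * (color_bytes + depth_bytes) * msaa;
    printf("1080p 4xMSAA color+depth: %.1f MB\n",
           total / (1024.0 * 1024.0));  /* -> ~63 MB */
    return 0;
}

That's roughly double the 32 MB ESRAM before you even count the HDR or G-buffer targets a modern renderer wants.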
 