
Epic: UE4 full feature set requires 1 TFLOP GPU, a scaled-down version exists for less

Ah, so you mean the bare minimum to run the full-feature engine. Which is kind of an oxymoron, but yeah, I getcha.

I would still say that's jumping the gun a bit. The interview says DX11 GPUs are the target and that things get interesting on 1+ TFLOP-class hardware. I doubt it's top-level performance, but that comment suggests it could land anywhere on the low-to-high settings scale.


I suppose, but it doesn't sound like that would jibe with their speech about not releasing next gen until it's an order of magnitude better. How they have these grand visions and will be held back by the hardware if it's too weak. And let's face it, 1TF isn't that impressive.

I think 1TF is basically "minimum requirements" for UE4 in PC lingo. It's not much if PS4 is pushing 2TF and many inexpensive gaming PCs will have 6+ Tflops by the time UE4 hits its stride in 2015-2016.

The "reduced" version for smartphones might run on the same toolset, but I expect the graphics to look more like UE3+.
 
So I wonder: what's the cheapest DX11 GPU that's over 1 TFLOP? A 6790 is 1.3 TFLOPS for $120.

Edit: a 7770 is the same price for about the same TFLOPS, and a 7750 doesn't break 1 TFLOP, so the 7770 may be the entry point for getting over 1 TFLOP.
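In case anyone wants to sanity-check these numbers: theoretical GFLOPS is basically shader count × clock × 2 (one multiply-add per shader per cycle). A quick C sketch using the publicly listed specs of the cards above (treat the numbers as approximate):

Code:
#include <stdio.h>

/* Theoretical GFLOPS = shader cores * clock (GHz) * 2,
   since a fused multiply-add counts as two floating-point
   ops per cycle. */
static double gflops(int shaders, double clock_ghz)
{
    return shaders * clock_ghz * 2.0;
}

int main(void)
{
    printf("HD 6790: %.0f GFLOPS\n", gflops(800, 0.840)); /* ~1344 */
    printf("HD 7770: %.0f GFLOPS\n", gflops(640, 1.000)); /* ~1280 */
    printf("HD 7750: %.0f GFLOPS\n", gflops(512, 0.800)); /* ~819, under 1 TFLOP */
    return 0;
}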

They sound like good bang-for-flop video cards, but I think I will stick to my 5700gtx for now.
 

z0m3le

Banned
They recently said to run Samaritan demo fluidly at 720p you need 4x X360.

1 teraflop is roughly 4x the X360. UE4 is more advanced than Samaritan. Ergo, with the same power, UE4 is not going to run as well as Samaritan.

1 teraflop is pretty shitty in 2012. They say it will handle the full-feature-set engine, but there is going to be a MASSIVE difference in how a 1 teraflop card runs a UE4 game versus a contemporary enthusiast card; the latest single-GPU cards of 2012 are pushing what, 3-4 teraflops? Let alone a few years from now when UE4 comes into its own.

Yeah, I think you have to be optimistic as hell to think a 1TF card is going to make UE4 games look as good as a top-level card, resolution aside.

I wouldn't be surprised if Epic got some kind of final confirmation on future GPU specs and that's why they have drawn the line in the sand at 1TF. I would assume one or more of the consoles meets that, and so that's intended to be the lowest common denominator.

Now of course the article says there is a scaled-down version, but that's not what we're talking about. 1TF is the bare-bones minimum to run the game with the global illumination and so on.

You also have to factor in resolution, frame rate, and most importantly the environment. A 250 GFLOPS GPU can't run anything that looks like Gears of War 3 or Halo 4 unless you are talking about a closed console system.

A 1 TFLOPS closed system is still pretty huge... heck, a 2 TFLOPS system would seriously strain PC dominance, so why is Epic aiming so low? Combined with Crytek's interview about next gen not looking clear, I think next-gen consoles might be quite a bit lower graphically than people are expecting. Visually you'll still easily push out that full-feature engine in a closed-box environment, and once optimized, I think it could even go beyond that. But I think everyone is in for a bit of a disappointment, except for the devs making the games, who won't need two or three times the budgets we have now.
 
A 1 TFLOPS closed system is still pretty huge... heck, a 2 TFLOPS system would seriously strain PC dominance, so why is Epic aiming so low? Combined with Crytek's interview about next gen not looking clear, I think next-gen consoles might be quite a bit lower graphically than people are expecting. Visually you'll still easily push out that full-feature engine in a closed-box environment, and once optimized, I think it could even go beyond that. But I think everyone is in for a bit of a disappointment, except for the devs making the games, who won't need two or three times the budgets we have now.


Yeah, it seems like the 1.8TF rumor for Orbis is true, although one dev said that was from an SDK from a year ago, so it could have been revised upward. That is, if Epic's fireside chats had any impact. And then we hear 1-1.5 TF for Durango. And yeah, they will still be impressive even if they're weak compared to PC, due to closed-box efficiency.

So I guess technically the GPU numbers aren't really 10x, although maybe with the APU efficiency, the fast RAM, the improved BD read speed and everything else, it will seem like 10x. Although I think today's best PCs are already at 20x. But yeah, UE4 will be the same scenario as this generation for sure. I think if you buy a good video card when the first UE4 games arrive, it will carry you through most of the generation. Much like how a G80 stayed relevant for about 4 years because PC game engines were limited by consoles.

I've heard the argument that game development costs will double or triple next gen is a myth, but I don't know the facts.
 
I remember the 2005 marketing:

PS3 was about 1.8 TFLOPS

the 360 about 1 TFLOPS

I wonder why they made such an unbelievable claim and nobody complained. I mean, today's GPUs are about 1 TFLOPS, right? How could they achieve that 7 years ago?

I know it's a marketing stunt, but I believed these consoles were about 1 TFLOPS for years. Why didn't anybody say anything??? :(

Those were Tittyflops, as someone mentioned earlier; they're different from Teraflops.
 

z0m3le

Banned
Yeah, it seems like the 1.8TF rumor for Orbis is true, although one dev said that was from an SDK from a year ago, so it could have been revised upward. That is, if Epic's fireside chats had any impact. And then we hear 1-1.5 TF for Durango. And yeah, they will still be impressive even if they're weak compared to PC, due to closed-box efficiency.

So I guess technically the GPU numbers aren't really 10x, although maybe with the APU efficiency, the fast RAM, the improved BD read speed and everything else, it will seem like 10x. Although I think today's best PCs are already at 20x. But yeah, UE4 will be the same scenario as this generation for sure. I think if you buy a good video card when the first UE4 games arrive, it will carry you through most of the generation. Much like how a G80 stayed relevant for about 4 years because PC game engines were limited by consoles.

I've heard the argument that game development costs will double or triple next gen is a myth, but I don't know the facts.

Yeah, those specs were before Sony took that $6 billion loss and changed presidents, though. To be perfectly honest, I don't think a subscription model will work, and neither will a $600 console, and those specs easily imply that. There were also rumors at the beginning of this year that pointed to 1.2-1.4 TFLOPS for PS4 (around the same time the Wii U was speculated at 1 TFLOPS). I think Sony releasing a $400 console with 1.2 TFLOPS is a much smarter decision and keeps Sony in the game.

Especially if XB3 really ends up being 1-1.5 TFLOPS, devs would never use the extra power again, and Microsoft would put out a smaller, quieter box that performs against the PS4 much the way the PS3 and 360 performed against each other this gen. Sony needs to play it smart and go with, say, a 7850SE at a reasonable clock to keep noise, size and TDP down.
 
A 250 GFLOPS GPU can't run anything that looks like Gears of War 3 or Halo 4 unless you are talking about a closed console system.

You are wrong. ATI's Radeon 2900 series is roughly equivalent to the Xenos GPU of the Xbox 360. It can easily play modern games at a higher resolution than the Xbox. Please stop perpetuating this myth of "magical" console optimizations that double or triple a GPU's performance; it's just not true.
 

z0m3le

Banned
You are wrong. ATI's Radeon 2900 series is roughly equivalent to the Xenos GPU of the Xbox 360. It can easily play modern games at a higher resolution than the Xbox. Please stop perpetuating this myth of "magical" console optimizations that double or triple a GPU's performance; it's just not true.

Which HD 2900? The Pro is 384 GFLOPS, the XT is 475 GFLOPS; can the GT play it at 288 GFLOPS?

It is actually true that targeting a certain spec allows developers to push the system to its max, it's no myth, and a closed system also doesn't deal with overhead, allowing RAM to free up.

Found YouTube links of Far Cry 2 (2008):

HD 2900XT: http://www.youtube.com/watch?v=0NlBxKwUGdk&feature=results_main&playnext=1&list=PLBB66A79794221685

Xbox360: http://www.youtube.com/watch?v=uYxfzmeqvL0&feature=related

Couldn't find the same gameplay portion, but I didn't really waste much time looking. It's clear there is a noticeable difference, despite the PC having twice the GPU processing power.
 
I wonder why they made such an unbelievable claim and nobody complained. I mean, today's GPUs are about 1 TFLOPS, right? How could they achieve that 7 years ago?

I know it's a marketing stunt, but I believed these consoles were about 1 TFLOPS for years. Why didn't anybody say anything??? :(
We did say something, back in 2005, many times.

As for the reason of using fuzzy math to get those nice round numbers: marketing. Just like most people on a first date, you want to make yourself look as good as possible. Especially true when you have invested so much money.

You can even do it by omission. Why do you think Nintendo hasn't given proper specs for their systems for many years? Because that would make them look bad.

When dealing with companies, always remember that they are trying to sell you something.
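To put numbers on the fuzzy math: the honest figure for Xenos counts only its programmable shader ALUs, while the 2005 "1 TFLOPS" figure folded in the CPU and fixed-function hardware. A rough C sketch of the commonly cited breakdown (treat the numbers as approximate):

Code:
#include <stdio.h>

int main(void)
{
    /* Commonly cited Xenos math: 48 unified shader ALUs, each doing a
       vec4 + scalar multiply-add per cycle = (4 + 1) * 2 = 10 flops,
       clocked at 500 MHz. */
    double xenos = 48 * 10 * 0.5e9 / 1e9; /* = 240 GFLOPS programmable */
    printf("Xenos programmable shading: %.0f GFLOPS\n", xenos);

    /* The marketing "1 TFLOPS" added the CPU and fixed-function units
       (texture filtering, etc.) on top -- roughly 4x the honest number.
       It's also why "1 TFLOPS is roughly 4x X360" upthread: 4 * 240 = 960. */
    return 0;
}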
 
It is actually true that targeting a certain spec allows developers to push the system to its max, it's no myth, and a closed system also doesn't deal with overhead, allowing RAM to free up.

Of course it's true. However, it's the amount of performance difference that is often exaggerated to a huge degree. People really think that the same card can perform miracles on a closed system but lose more than half its performance when used in a PC. That's just not true. If you want another example, try a search on YouTube for the X1950 Pro, a weaker card than the Xenos, running games like Crysis 2. As for RAM, the overhead doesn't matter at all for a PC, since most people have way more RAM installed than is actually needed. Certainly more than enough to easily fit the OS, the game and other programs at the same time.

So while the performance difference is there, I would say that it's not more than 15-20%. A 1 TFLOP card in a console may provide 1.2 TFLOP-like performance, but I seriously doubt it will be able to go much higher until maybe the very end of a generation.
 

z0m3le

Banned
Of course it's true. However, it's the amount of performance difference that is often exaggerated to a huge degree. People really think that the same card can perform miracles on a closed system but lose more than half its performance when used in a PC. That's just not true. If you want another example, try a search on YouTube for the X1950 Pro, a weaker card than the Xenos, running games like Crysis 2. As for RAM, the overhead doesn't matter at all for a PC, since most people have way more RAM installed than is actually needed. Certainly more than enough to easily fit the OS, the game and other programs at the same time.

So while the performance difference is there, I would say that it's not more than 15-20%. A 1 TFLOP card in a console may provide 1.2 TFLOP-like performance, but I seriously doubt it will be able to go much higher until maybe the very end of a generation.

But the games I mentioned are the very end of a generation... Also, the main reason is DX11: an AMD rep stated that consoles perform so much better because devs code to the metal, while PC devs code to DX11, which has overhead... you can find the article pretty quickly on Google, but if you need help let me know. It's a fair bit larger than 20%, but you can't just apply a flat number like that anyway; the extra performance also allows extra effects, like the grass swaying in the Far Cry 2 video on the 360 while being completely absent on a PC with a card twice as powerful.
 
But the games I mentioned are the very end of a generation...

By that point it doesn't matter if the console GPU can provide better visuals than the PC one. At that point you can find really cheap gfx cards that provide performance orders of magnitude beyond the console.

Also, the main reason is DX11: an AMD rep stated that consoles perform so much better because devs code to the metal

That's true. I have already read the article. There is a performance difference but it's smaller than most people think.


you can't just apply a flat number like that anyway; the extra performance also allows extra effects, like the grass swaying in the Far Cry 2 video on the 360 while being completely absent on a PC with a card twice as powerful

The Far Cry 2 video that you linked was recorded with FRAPS, a program that shaves off a huge part of performance while recording. Don't think that the framerate you saw is the one during gameplay; it's likely double that. As for effects, the next console generation will be DX11, which has already been available on PC for years. This is different from 2005, when the 360 had a GPU with features not available on PC at the time.

Here, check out a Radeon X1950 Pro running Crysis 2, a late-gen game: http://www.youtube.com/watch?v=jHWPGmf_A_0

I think you'll agree that performance is excellent for a card that came out in 2006.
 
You are wrong. ATI's Radeon 2900 series is roughly equivalent to the Xenos GPU of the Xbox 360. It can easily play modern games at a higher resolution than the Xbox. Please stop perpetuating this myth of "magical" console optimizations that double or triple a GPU's performance; it's just not true.

He's absolutely correct. I take it you've never actually seen a profile of a game built on machine language vs. one built on high-level APIs, but the results are quite staggering.

PCs have so much overhead on the SW and HW side that it's pretty silly to try and compare efficiencies to consoles.
 
I'm not trying to do that. I'm simply stating that console optimization cannot magically transform a card from average to powerhouse.

Edit: A Radeon X1950 Pro is rated at 0.25 Teraflops, same as the Xenos GPU on a 360. Check Youtube for examples on how this card runs modern games. Then explain to me how a 2006 card can deliver console-beating performance even with so much overhead. I'm not saying that sarcastically, I really want to know how this is even possible. I was just as surprised by this as you are.
 
A Radeon X1950 Pro is rated at 0.25 Teraflops, same as the Xenos GPU on a 360. Check Youtube for examples on how this card runs modern games. Then explain to me how a 2006 card can deliver console-beating performance even with so much overhead.

It's not attached to a system with only 256 MB of RAM, for starters.
 
Of course it's a huge advantage for PC. Even with 10 programs running, including two instances of Visual Studio, I have 800MB/2GB of physical memory left (not to even mention virtual memory, which is a luxury a console with an optional hard drive does not have). You shouldn't be surprised.
 

z0m3le

Banned
I'm not trying to do that. I'm simply stating that console optimization cannot magically transform a card from average to powerhouse.

Edit: A Radeon X1950 Pro is rated at 0.25 Teraflops, same as the Xenos GPU on a 360. Check Youtube for examples on how this card runs modern games. Then explain to me how a 2006 card can deliver console-beating performance even with so much overhead. I'm not saying that sarcastically, I really want to know how this is even possible. I was just as surprised by this as you are.

Crysis 2 looks amazing both in that video and on the 360. I would say the 360 pushes more, but it's very close; still, that system is working with a lot more resources than the 360. I never said it was twice the performance, but it can easily be that much, and it really depends on the game and the engine being used. CryEngine 3 is hugely optimized for PC and consoles, so you'll get better results than with something like UE3, but yes, in some instances it's much smaller than twice the performance from a closed-box system.
 

z0m3le

Banned
So, is it safe to say that UE4 will not run on WiiU?

No, it will definitely run it, just not full-featured... also, rumor has it that Wii U's GPU is closer to 800 GFLOPS than 500... AMD using 768 GFLOPS across a huge variety of cards sort of hints at that spec.
 

Biggzy

Member
So, is it safe to say that UE4 will not run on WiiU?

Epic are hoping to get UE4 running on just about everything, from mobile platforms to high-end consoles and PCs. So UE4 should be able to run on Wii U; whether that's with all the features remains to be seen.
 
Tech-GAF should tell us what the de facto entry point for UE4 is, whether a 2GB 6990 or a 570 or whatever.

Assuming that DX11 compatibility and 1+ TFLOPS are the requirements, then like I said earlier, I'm pretty sure your ass-end entry-level card for UE4 will be a 560 Ti or higher for the 500 series, or a 670 or higher for the 600 series, until they make a 660/660 Ti of course, which I assume will also fit the bill.

Technically, some of the high-end 400 series cards also have 1+ TFLOPS and are DX11 parts, and I guess you could SLI lower-end 500/600 series cards as well.
 

M3d10n

Member
Epic are hoping to get UE4 running on just about everything, from mobile platforms to high-end consoles and PCs. So UE4 should be able to run on Wii U; whether that's with all the features remains to be seen.

It all depends on whether Epic intends UE4 to run on DX10 cards or not (aka: the many people out there still rocking their GeForce 8xxx, 9xxx and Radeon 4xxx cards).

Also, OpenGL ES 3.0 is pretty much guaranteed to be a bit under DX10, feature-wise. It might not even have geometry shaders, so forget about phones and non-x86 tablets doing tessellation and the advanced SM5.0 compute stuff (shared buffers, scatter writes, etc.), which I'm sure is the backbone of much of UE4's dynamic lighting engine. Mobile UE4 will need to be able to scale way, way back in order to run on such devices (which are likely to hover around 0.1 TFLOPS).
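For illustration, here's a minimal C sketch of the kind of capability probe a scaled-back mobile renderer would have to do before picking a path. It assumes an OpenGL ES context is already current (setup omitted), and the extension name is just an illustrative stand-in for whatever a given driver exposes:

Code:
#include <stdio.h>
#include <string.h>
#include <GLES2/gl2.h> /* assumes an EGL context is already current */

/* Core ES lacks geometry shaders, so an engine can only probe for a
   vendor extension and fall back to a simpler lighting path without it. */
static int has_extension(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts != NULL && strstr(exts, name) != NULL;
}

int main(void)
{
    const char *ver = (const char *)glGetString(GL_VERSION);
    printf("GL: %s\n", ver ? ver : "(no context)");
    printf("geometry shaders: %s\n",
           has_extension("GL_EXT_geometry_shader") ? "yes" : "fallback path");
    return 0;
}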
 
I'm not trying to do that. I'm simply stating that console optimization cannot magically transform a card from average to powerhouse.

Edit: A Radeon X1950 Pro is rated at 0.25 Teraflops, same as the Xenos GPU on a 360. Check Youtube for examples on how this card runs modern games. Then explain to me how a 2006 card can deliver console-beating performance even with so much overhead. I'm not saying that sarcastically, I really want to know how this is even possible. I was just as surprised by this as you are.

Well, I think there is a disconnect in what we are talking about. Crysis 2 isn't coded to the metal; in fact, I don't think any game on 360 is. It's built on high-level APIs just like on PC, although the 360's will be a lot more specific and optimized.

AFAIK Sony is the only company that allows "to the metal" coding (i.e., assembly or machine language that is barely readable by humans).

Here's an example:

A standard hello world in C looks rather simple:

Code:
#include <stdio.h>

int main(void)
{
    printf("Hello, world.\n");
    return 0;
}

The same program in ARM assembly looks like this:


Code:
.syntax unified

    @ --------------------------------
    .global main
main:
    @ Stack the return address (lr) in addition to a dummy register (ip) to
    @ keep the stack 8-byte aligned.
    push    {ip, lr}

    @ Load the argument and perform the call. This is like 'printf("...")' in C.
    ldr     r0, =message
    bl      printf

    @ Exit from 'main'. This is like 'return 0' in C.
    mov     r0, #0      @ Return 0.
    @ Pop the dummy ip to reverse our alignment fix, and pop the original lr
    @ value directly into pc — the Program Counter — to return.
    pop     {ip, pc}

    @ --------------------------------
    @ Data for the printf calls. The GNU assembler's ".asciz" directive
    @ automatically adds a NULL character termination.
message:
    .asciz  "Hello, world.\n"

And that's simply to display "Hello, World". I can't imagine actual 3D in this shit. AFAIK Sony WWS is the only group that uses a significant amount of this code, due to their specialized ICE team.

So yeah, you have basically vanilla D3D/OGL, which the vast majority of games are written on; you have subsets/supersets, which consoles rely on; and then you have the truly hardcore shit.
 

M3d10n

Member
Well, I think there is a disconnect in what we are talking about. Crysis 2 isn't coded to the metal; in fact, I don't think any game on 360 is. It's built on high-level APIs just like on PC, although the 360's will be a lot more specific and optimized.

AFAIK Sony is the only company that allows "to the metal" coding (i.e., assembly or machine language that is barely readable by humans).

Here's an example:

A standard hello world in C looks rather simple:

Code:
#include <stdio.h>

int main(void)
{
    printf("Hello, world.\n");
    return 0;
}

The same program in ARM assembly looks like this:


Code:
.syntax unified

    @ --------------------------------
    .global main
main:
    @ Stack the return address (lr) in addition to a dummy register (ip) to
    @ keep the stack 8-byte aligned.
    push    {ip, lr}

    @ Load the argument and perform the call. This is like 'printf("...")' in C.
    ldr     r0, =message
    bl      printf

    @ Exit from 'main'. This is like 'return 0' in C.
    mov     r0, #0      @ Return 0.
    @ Pop the dummy ip to reverse our alignment fix, and pop the original lr
    @ value directly into pc — the Program Counter — to return.
    pop     {ip, pc}

    @ --------------------------------
    @ Data for the printf calls. The GNU assembler's ".asciz" directive
    @ automatically adds a NULL character termination.
message:
    .asciz  "Hello, world.\n"

And that's simply to display "Hello, World". I can't imagine actual 3D in this shit. AFAIK Sony WWS is the only group that uses a significant amount of this code, due to their specialized ICE team.

So yeah, you have basically vanilla D3D/OGL, which the vast majority of games are written on; you have subsets/supersets, which consoles rely on; and then you have the truly hardcore shit.

You're mistaking programming the GPU with programming the CPU. You can code the CPU "to the metal" on the 360 and even on the PC; there's nothing preventing you from writing a program fully in assembly. It's completely moronic though, since usually only the core of high-performance functions is written in assembly, because compilers are much better than humans at generating optimized assembly for complex code.

Now, about programming the GPU. While both use DirectX, there is a massive difference between the 360 and a PC: in the 360 the DirectX API libraries are compiled directly with the game code while in a PC they are isolated in dynamically loaded DLLs.

This means that, on a 360 game, the compiler can optimize both the DirectX API calls and the game code together, drastically lowering the cost of the API calls. On a PC, the calls to the DLL functions will always be indirect, which is more expensive CPU-wise.
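Here's a tiny C sketch of the call mechanics being described, with a plain function pointer standing in for a DLL-imported entry point (no actual Direct3D here, just the principle):

Code:
#include <stdio.h>

/* Console-style case: the definition is visible to the compiler (as if
   the API were statically compiled with the game), so the call can be
   inlined and optimized together with the caller. */
static int set_state(int s) { return s * 2; }

/* PC-style case: the same entry point reached through a pointer filled
   in at load time (what an import table or GetProcAddress gives you).
   The compiler must emit a real indirect call and can't see across it. */
static int (*set_state_dll)(int) = set_state;

int main(void)
{
    int a = set_state(21);     /* direct call, inlinable */
    int b = set_state_dll(21); /* indirect call, optimization barrier */
    printf("%d %d\n", a, b);
    return 0;
}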
 
You're mistaking programming the GPU with programming the CPU. You can code the CPU "to the metal" on the 360 and even on the PC; there's nothing preventing you from writing a program fully in assembly. It's completely moronic though, since usually only the core of high-performance functions is written in assembly, because compilers are much better than humans at generating optimized assembly for complex code.

Now, about programming the GPU. While both use DirectX, there is a massive difference between the 360 and a PC: in the 360 the DirectX API libraries are compiled directly with the game code while in a PC they are isolated in dynamically loaded DLLs.

This means that, on a 360 game, the compiler can optimize both the DirectX API calls and the game code together, drastically lowering the cost of the API calls. On a PC, the calls to the DLL functions will always be indirect, which is more expensive CPU-wise.

Well, I'm referring to the PS3 mostly, since there are quite a few GPU functions that run on Cell, and supposedly in the ICE engine they wrote a lot of it in low-level languages. I don't think MS has a specialized core tech group like that, but correct me if I'm wrong.

You don't want to dive low-level on PCs because of fundamental differences across the thousands of configurations of hardware (Intel opcodes vs. AMD, for example).


I have a copy of the PS3 TRC, but I don't know much at all about 360 development, so thanks for the factoid about the executables. Now that I look at it, IDK if Sony allows too much outside of the provided APIs, because they have things like this in their TRC:

Graphics command lists passed to RSX™ are only generated with the APIs or offline tools provided by SCE.

Maybe they provide ICE's work to all studios?
 
Well, I think there is a disconnect in what we are talking about. Crysis 2 isn't coded to the metal; in fact, I don't think any game on 360 is.

Ah, then would that explain why the fabled "console optimization" doesn't provide the performance benefits that people expect GPU-wise?

This means that, on a 360 game, the compiler can optimize both the DirectX API calls and the game code together, drastically lowering the cost of the API calls. On a PC, the calls to the DLL functions will always be indirect, which is more expensive CPU-wise.

Interesting. So the burden from using APIs falls on the CPU and not the GPU?
 

Apocryphon

Member
So, is it safe to say that UE4 will not run on WiiU?

Epic have said that UE4 isn't targeted towards Wii U, and as such they won't provide support... or rather, they would only provide support specific to non-Wii-U hardware should somebody license the engine for a Wii U game.

Part of the licensing costs for Unreal Engine comes down to the massive amount of engine and platform support you can get from them, but if you want to use the engine for a "non-supported" platform, some of that support wouldn't be available to you.

Whether the trade-off would be worth whatever advantages the engine has over UE3 remains to be seen, and it would be totally dependent on the skill of the programmers at the licensee studio. Fortunately, Epic allow companies to evaluate their engines before committing to a license, so it's not like some won't try and see what they could do with it that they can't with UE3.

In short, yes you could license and likely run UE4 on WiiU if you wanted to, but Epic won't provide hardware support or even guarantee that it'll run properly or that the whole feature set would be usable.
 

Desty

Banned
I didn't want to scare people away by putting this in the OP, but their approach to Global Illumination is pretty interesting:

The technique is known as SVOGI – Sparse Voxel Octree Global Illumination, and was developed by Andrew Scheidecker at Epic. UE4 maintains a real-time octree data structure encoding a multi-resolution record of all of the visible direct light emitters in the scene, which are represented as directionally-colored voxels. That octree is maintained by voxelizing any parts of the scene that change, and using a traditional set of Direct Lighting techniques, such as shadow buffers, to capture first-bounce lighting.

Performing a cone-trace through this octree data structure (given a starting point, direction, and angle) yields an approximation of the light incident along that path.

The trick is to make cone-tracing fast enough, via GPU acceleration...
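The core loop of a cone trace is simpler than it sounds: march along the cone axis, and at each step sample the octree at a coarseness matching the cone's current radius, blending results front to back. Here's a heavily simplified CPU sketch in C, with the octree replaced by a stub so it runs at all (illustrative only, not Epic's implementation):

Code:
#include <math.h>
#include <stdio.h>

typedef struct { float r, g, b, a; } Sample; /* a = occlusion */

/* Stub: the real engine samples the sparse voxel octree at the mip
   level whose voxel size matches 'radius'. Constant emitter here. */
static Sample sample_octree(const float pos[3], float radius)
{
    (void)pos; (void)radius;
    Sample s = { 0.10f, 0.10f, 0.10f, 0.05f };
    return s;
}

/* One cone: front-to-back accumulation of incident light along a path,
   stopping early once the cone is effectively occluded. */
static Sample cone_trace(const float o[3], const float d[3],
                         float half_angle, float max_dist)
{
    Sample acc = { 0, 0, 0, 0 };
    float t = 0.01f;
    while (t < max_dist && acc.a < 0.99f) {
        float radius = t * tanf(half_angle); /* cone footprint at t */
        float p[3] = { o[0] + d[0] * t, o[1] + d[1] * t, o[2] + d[2] * t };
        Sample s = sample_octree(p, radius);
        float w = (1.0f - acc.a) * s.a;      /* front-to-back blend */
        acc.r += w * s.r; acc.g += w * s.g; acc.b += w * s.b;
        acc.a += w;
        t += radius + 0.01f;                 /* step grows with footprint */
    }
    return acc;
}

int main(void)
{
    float o[3] = { 0, 0, 0 }, d[3] = { 0, 0, 1 };
    Sample gi = cone_trace(o, d, 0.3f, 10.0f);
    printf("incident light ~ (%.2f, %.2f, %.2f)\n", gi.r, gi.g, gi.b);
    return 0;
}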

Nice find. Thanks for posting this. Totally missed this info in the E3 onslaught. It will be very interesting to see how the Luminous Engine, Crytek, and Fox Engine handle indirect lighting.

As I recall, SVOGI ran at something like 30 fps at 640x480 with 4 cones on a test scene. Pretty heavy on the graphics card (as all the real-time global illumination techniques are). You could easily use up processing cycles by increasing the fidelity a little or having lots of characters in motion, but it is nice to see it there at all.
 
Yeah, those specs were before Sony took that $6 billion loss and changed presidents, though. To be perfectly honest, I don't think a subscription model will work, and neither will a $600 console, and those specs easily imply that. There were also rumors at the beginning of this year that pointed to 1.2-1.4 TFLOPS for PS4 (around the same time the Wii U was speculated at 1 TFLOPS). I think Sony releasing a $400 console with 1.2 TFLOPS is a much smarter decision and keeps Sony in the game.

Especially if XB3 really ends up being 1-1.5 TFLOPS, devs would never use the extra power again, and Microsoft would put out a smaller, quieter box that performs against the PS4 much the way the PS3 and 360 performed against each other this gen. Sony needs to play it smart and go with, say, a 7850SE at a reasonable clock to keep noise, size and TDP down.

$600? I don't think so. I've seen expert opinion on B3D that a Pitcairn-class 2TF GPU is very doable from a thermal/power/cost perspective in a $400 retail console. And if it's got an SoC/APU on top of that, it could be a little higher.

I don't think many people really believe PS4 will only have 1.2 TFLOPS in the final spec. The fact that the 1.2-1.4 rumor was going around at the same time as the 1TF Wii U rumor kind of brings down the credibility of the 1.2-1.4 rumor IMO, since a 1TF Wii U seems way above what people are now expecting. 1TF would let Wii U meet Epic's minimum for full-featured UE4, but they've already said UE4 isn't made for Wii U. So logically that implies Wii U has less. And other rumors, like one from an Ubisoft developer, suggest something closer to 400 or 500 GFLOPS.


Well, you can say the power would go to waste against Xbox, and that could be a valid point. Even if PS4 is twice as powerful, it would only amount to an Xbox/PS2 scenario, and third-party multiplatform games would try to reach a happy medium.

But maybe Sony just took the middle road on the GPU, and for whatever reason MS decided to go a little lower-spec on the GPU. Perhaps because they plan to spend more of the budget on RAM for Kinect 2 and a tablet controller like the Wii U's? So it wouldn't be as if Sony was doing anything special; MS just took a different route.

2TF is not as outrageous as you make it sound.
 

thuway

Member
Yeah, those specs were before Sony took that $6 billion loss and changed presidents, though. To be perfectly honest, I don't think a subscription model will work, and neither will a $600 console, and those specs easily imply that. There were also rumors at the beginning of this year that pointed to 1.2-1.4 TFLOPS for PS4 (around the same time the Wii U was speculated at 1 TFLOPS). I think Sony releasing a $400 console with 1.2 TFLOPS is a much smarter decision and keeps Sony in the game.



It's hilarious that you want Sony to release a console in late 2013 - early 2014 with 1.2 teraflops. In 2012, Sony can build a console with 2.8 teraflops without going crazy on heat or cost (7950M + 6850; $399). By 2014 I'm sure we could see it break into the 3.5 teraflops threshold with the introduction of the 8000 series cards. This will ensure that, at the very least, Unreal Engine 4 will run at 1080p with bells and whistles galore.

Edit: A $600 console in early 2014 would net you almost 5 teraflops.

Especially if XB3 really ends up being 1-1.5 TFLOPS, devs would never use the extra power again.


The first party will shine once again. Do you forget that Sony has the largest and arguably the most diverse set of first-party studios? A large part of the appeal of titles like Killzone, Uncharted, The Last of Us, Beyond, God of War, and Gran Turismo is the graphical fidelity. Sony needs to design hardware with their first party in mind, something Nintendo has been doing since the inception of the NES (whatever Miyamoto wanted, he got). If third parties choose to ignore what's under the hood, that's fine; the first party stands out that much more.
 


KageMaru

Member
I haven't looked that much into the comment; is the scaled-down version supposed to just be for mobile development and such?

You are wrong. ATI's Radeon 2900 series is roughly equivalent to the Xenos GPU of the Xbox 360. It can easily play modern games at a higher resolution than the Xbox. Please stop perpetuating this myth of "magical" console optimizations that double or triple a GPU's performance; it's just not true.

Well there are other factors such as the amount of memory and CPU type.

A better way to look at it would be to create a PC similar to what we see in the PS360 and see if you can get games to look anywhere near as good as the best looking PS360 games.

I don't care to put a number behind the measurement, but having a software abstraction layer does cripple or limit flexibility in many ways. One way to look at it: if a GPU supports 10,000 colors but the API only supports up to 1,000 colors, then it doesn't matter what the GPU can do. That's an ugly, rough example, but hopefully it gives you an idea of what we're talking about. Although the development of things like compute shaders goes a long way toward working around such limitations.

2.8 teraflops? I thought the current rumors are ~1.8 teraflops for PS4 and 1.1 to 1.5 teraflops for Durango.

It's too early to assume anything on flop counts just yet.
 
A better way to look at it would be to create a PC similar to what we see in the PS360 and see if you can get games to look anywhere near as good as the best looking PS360 games.

With the exception of memory (which is logical, since a PC has to run a full-fledged OS), I think you can get pretty close. A tri-core PC with a gfx card similar to Xenos could absolutely run a game at 360 levels of quality. Here's Crysis 2 running on a dual-core with 2GB of RAM and a Radeon X1950 Pro:

http://www.youtube.com/watch?v=jHWPGmf_A_0
 

KageMaru

Member
With the exception of memory (which is logical, since a PC has to run a full-fledged OS), I think you can get pretty close. A tri-core PC with a gfx card similar to Xenos could absolutely run a game at 360 levels of quality. Here's Crysis 2 running on a dual-core with 2GB of RAM and a Radeon X1950 Pro:

http://www.youtube.com/watch?v=jHWPGmf_A_0

Where can I even find a tri-core PC CPU?

Also, while the OS is something to be considered, it's not taking up all 2GB, not to mention the 512MB of faster memory on the GPU alone.

This is in no way comparable.
 
Yeah, those specs were before Sony took that $6 billion loss and changed presidents, though. To be perfectly honest, I don't think a subscription model will work, and neither will a $600 console, and those specs easily imply that. There were also rumors at the beginning of this year that pointed to 1.2-1.4 TFLOPS for PS4 (around the same time the Wii U was speculated at 1 TFLOPS). I think Sony releasing a $400 console with 1.2 TFLOPS is a much smarter decision and keeps Sony in the game.

You gotta stop beating this drum. You have this weird agenda about wanting the PS4 to be weak and as close to the Wii U as possible. Cherry-picking rumors (lol @ Wii U being 1 TFLOP) to fit this fantasy is not going to make it happen. There is nothing groundbreaking or expensive about the rumored target specs of the PS4, even if they end up going with 4GB of RAM.
 
*Sigh* People wanting cheap systems. Sadly... they outnumber the people willing to pay $599/$699/$799 (like me) for beastly consoles. Even still... $399 in 2005 dollars is about $480 in 2012 dollars. Regardless... we must never have a console exceed $399. Even 20 years from now. Health of the gaming hardware industry be damned.
 
I don't know if this question got answered earlier, but in regards to Wii U: the developer guy here said that Wii U is closer to 800 than 500 GFLOPS. I take that to mean less than 1000. So will Wii U run the scaled-down version?
 

donny2112

Member
I don't know if this question got answered earlier, but in regards to Wii U: the developer guy here said that Wii U is closer to 800 than 500 GFLOPS. I take that to mean less than 1000. So will Wii U run the scaled-down version?

Depends on the featureset of the chips and UE4 req's. We won't know for sure until someone comes out and says "This game was created with UE4 Lite." If they said "This game was created with UE4" then they probably just forgot to mention the "Lite" part. ;)
 
Neither Orbis nor Durango will launch under 2.5 teraflops. That's my guess.

Well, I think if either company responds to Epic's demands, there will be pressure on the other to do the same. The rumors now, which are based on old info, suggest the PS4 GPU is stronger, but not by a huge margin, something like 1.8 vs 1.3. But everything I've read says the Durango GPU is certainly not finalized, and I expect it to end up with more than the 1-1.5 max figure the latest leak gave out. I also think the PS4 will have a minimum of 2, and 2.5 is not far out, especially if it's got all the HSA graphics logic retained on the APU, which some rumors, like IGN's, suggest is the case.
 
Depends on the featureset of the chips and UE4 req's. We won't know for sure until someone comes out and says "This game was created with UE4 Lite." If they said "This game was created with UE4" then they probably just forgot to mention the "Lite" part. ;)

Well, the premise of this thread is that UE4 requires 1000 GFLOPS, which the developer's comment says the Wii U falls short of. I would guess the Wii U has at least a DX10-class GPU.
 