
Graphical Fidelity I Expect This Gen

CamHostage

Member
I think you people don't understand: I'm not just talking about graphics, I'm also talking about simulation and physics on a Pixar level.
Pixar doesn't generally run realtime physics. They don't have to; it's a movie. But also, they're animators. They animate their physics, albeit with help from simulation techniques. And it's not even the same "physics simulator" for each element; they may run a fire effect, output it, then add it into the scene alongside other already-rendered effects.

Pre-made "physics" works like this are in games already. Many of the coolest destruction effects in a game like Horizon or the Matrix demo aren't specifically realtime, they are animations built earlier. And Marvel 1943 imports VDBs to do stuff like the smoke effects praised as seeming next-gen in this thread. (The ability to include VDB files is itself a cool advancement for UE, that's 3d data which can react to the scene better than previous canned animation, but still, it's no longer exactly a "physics sim".)

Like games, Pixar movies are not real. It's tricks, it's showmanship, it's entertainment.
 

Lethal01

Member
I think you people don't understand: I'm not just talking about graphics, I'm also talking about simulation and physics on a Pixar level.

This doesn't change the answer, really. It might make it take longer, but physics generally has all the same issues. Processing is processing.

the "general computation" taken for a single frame of a modern pixar movie is reported to be a like 200,000 more than that of a game right now.

And going by what we can predict based on recent trends in hardware miniaturization and general increases in computing power over the last 5 to 10 years (meaning not just saying "AI, ML, Sora etc. will improve this somehow"), we are looking at something like a 1.5x jump every 5 years on the low end and 5x on the high end, but with those jumps getting smaller every year.

But if we assume that these rates don't slow down (which we don't have a reason to believe, but let's cross our fingers),

it could be anywhere from 150 down to about 40 years, according to this napkin math.
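For what it's worth, that napkin math checks out; a minimal sketch, assuming the 200,000x figure above and a constant growth rate:

```python
import math

GAP = 200_000  # reported per-frame compute gap: modern Pixar movie vs. a game

# Assumed growth scenarios from above: improvement factor per 5-year period
for label, factor in [("low end, 1.5x per 5 years", 1.5),
                      ("high end, 5x per 5 years", 5.0)]:
    periods = math.log(GAP) / math.log(factor)  # 5-year periods to close the gap
    print(f"{label}: ~{periods * 5:.0f} years")

# low end, 1.5x per 5 years: ~151 years
# high end, 5x per 5 years: ~38 years
```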

If things continue to slow down, it could easily take 1,000 years, or basically just never happen.

On the other hand, we could stumble into "paradigm shifting" tech and have it in 15 years. But again, we've got a hundred different teams talking about how we will be able to increase power by 100x using "future, unproven, theoretical AI tech", but 90% of them will fail and another 9% will take longer and be less effective than originally claimed.
 

RavionUHD

Member
This is Jusant, a beautiful UE5 game with Nanite and Lumen (4K Ultrawide with DLSS on PC).

[Eight Jusant screenshots]
 
Thank you - you are very kind. And I agree with you on cutscenes - too many, too long and in certain parts too frequent (mostly because of cuts, again).

Alas, not being able to skip them was, again, driven more by trying to deliver on a "feel" than anything else. One of the pillars of the game was "no loading screens, ever, other than loading a save game." Something that was a point of pride for all our games ever since Daxter (which also had no loading screens, in spite of UMD read bandwidth being hard-to-believe pathetic).

Cutscenes were basically regular "streamed level chunks" with interaction removed, so to make them skippable we would have had to force a loading screen to catch up to having the required assets ready as they would be if the scene played on. Lots of technical considerations for this, including the fact that memory management had to be accurately calibrated to always have enough in reserve for smooth streaming while dealing with such high complexity, but in the end, yeah... we just needed to show a loading screen in response to a request to skip. It was decided that such a thing did not fit the design and so, no skipping. I assure you I would vote otherwise if we had a chance for a do-over. ;)

I loved the game and played it 3 times. It still looks incredible, and if Sony wasn't such a lazy company they'd give it a 4K update and none of the "softness" complaints would be an issue. The graphics are so good they transcend generations. I think Sony doesn't want their current batch of PS5 exclusives to be upstaged by a PS4 game!
 
We'll at some point have to think differently about how processing works and stacks, and not all of those ideas are clear yet, if they even exist. So whatever's coming is less likely to be some multi-petaflop cybermonster box 100x+ beyond what we have today. PC makers are trying to work smarter, not more powerfully.
Physicists say that we are far from reaching the limits of computation. Racetrack memory had people talking of petabytes of high-speed memory.

Pixar never had ray-tracing acceleration hardware in the past; until about a decade ago they weren't even doing ray tracing.

Nanite now allows for effectively unlimited detail on static geometry (models with billions of polygons can render in realtime), and soon skeletal meshes will be able to do that too. Path tracing is beyond the lighting used in many past Pixar movies.

Sora-like technology will allow for beyond-Hollywood-CG levels of realism in realtime, potentially within years.
 
Looks interesting, is it on Game Pass?
Edit: cool, it's on Game Pass. Gonna check it out.

It's a great game; I finished it a few months ago. Usually when people recommend me some indie game I find them boring, but Jusant was like The Last Guardian for me in regards to the sound design and that sense of loneliness. It's not frustrating, nor is it a walking simulator. The game mechanics change with each chapter, so it keeps it fresh. Def play it with headphones on... there are so many tiny wind-howling sounds you would probably miss listening on the TV, and it adds that much more to the experience.
 

SlimySnake

Flashless at the Golden Globes
Nanite was said to be data- and streaming-intensive since they first showed it running on PS5, but the "I/O has no impact on visuals" crowd continued to be in denial.
It's not that much data. Epic told DF that for the Matrix demo it is 300 MB/s. The PS5 can do 5.5 GB/s. Most SATA SSDs can do 500 MB/s.

The I/O has had zero impact on other UE5 games so far.
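Putting those numbers side by side (a quick sketch, taking the quoted figures at face value):

```python
# Streaming headroom using the rates quoted above, all in MB/s
matrix_demo = 300   # Epic's reported streaming rate for the Matrix demo
ps5_ssd = 5500      # PS5 SSD raw throughput
sata_ssd = 500      # typical SATA SSD

print(f"PS5:  {ps5_ssd / matrix_demo:.1f}x the demo's rate")   # 18.3x
print(f"SATA: {sata_ssd / matrix_demo:.1f}x the demo's rate")  # 1.7x
```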
 

GymWolf

Member
Pixar doesn't generally run realtime physics. They don't have to; it's a movie. But also, they're animators. They animate their physics, albeit with help from simulation techniques. And it's not even the same "physics simulator" for each element; they may run a fire effect, output it, then add it into the scene alongside other already-rendered effects.

Pre-made "physics" works like this are in games already. Many of the coolest destruction effects in a game like Horizon or the Matrix demo aren't specifically realtime, they are animations built earlier. And Marvel 1943 imports VDBs to do stuff like the smoke effects praised as seeming next-gen in this thread. (The ability to include VDB files is itself a cool advancement for UE, and there's 3d data in there which can react to the scene better than previous canned animation, but still, it's no longer exactly a "physics sim".)

Like games, Pixar movies are not real. It's tricks, it's showmanship, it's entertainment.
Let me reformulate the question then: how long before we get actual physics that looks as good and real as the trickery that Pixar uses?

I thought it was clear that I was only talking about how it looks, but applied to videogame usage, where you can interact with stuff, so it must be actual working physics and not just the right look.
 

CamHostage

Member
Let me reformulate the question then: how long before we get actual physics that looks as good and real as the trickery that Pixar uses?

Can't help you any further; you're going to have to study up yourself if you want to answer your own hypothetical questions.
This guy knows some stuff about physics sims:

 

GymWolf

Member
Already watched the videos in that channel; they are what dreams are made of :lollipop_grinning_sweat:

I think my whole question was hypothetical from the beginning, dude; I didn't really expect a precise response, because no one knows the next paradigm-shifting tech discovery.

It was mostly just to spark the discussion on future tech evolution.

Don't you people think that we are gonna have powerful cloud gaming well before having a physical console that can do Pixar graphics and physics?

Like what Microsoft tried to do with Crackdown 3, but on steroids.
 

Hunnybun

Member
Already watched the videos in that channel; they are what dreams are made of :lollipop_grinning_sweat:

I think my whole question was hypothetical from the beginning, dude; I didn't really expect a precise response, because no one knows the next paradigm-shifting tech discovery.

It was mostly just to spark the discussion on future tech evolution.

Don't you people think that we are gonna have powerful cloud gaming well before having a physical console that can do Pixar graphics and physics?

Like what Microsoft tried to do with Crackdown 3, but on steroids.

You're still talking about real time rendering at 30 or 60fps compared to offline at hours per frame. You can't get round that by remote computation.

Where I do take issue with the direct comparison to offline rendering is with the failure to recognise the vastly diminishing returns beyond a certain level of fidelity.

PS7 or 8 won't get close to the level of actual operations per frame that current CG leverages, but imo it will get very close to how the final product actually LOOKS. And that's the key metric here.
 

Polygonal_Sprite

Gold Member
I'm completely ok with that.
Same here, but that doesn't change the fact that, given the choice between industry-leading visuals at 200GB or PS4-level-looking games at 4K/60fps at 50-100GB, most people will choose the latter, for a combination of not wanting to fill 25% of their console storage with a single game and their internet speed / not wanting to throttle their whole family's internet while their 200GB game downloads, which in a lot of places in North America will take several hours if not more.
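For scale, the download-time arithmetic looks like this (a rough sketch; the connection speeds are just assumed examples):

```python
# Hours to download a 200GB game at a few assumed connection speeds
GAME_GB = 200
for mbps in (25, 100, 500):                    # megabits per second
    hours = GAME_GB * 8 * 1000 / mbps / 3600   # GB -> gigabits -> megabits -> hours
    print(f"{mbps:>4} Mbps: {hours:4.1f} hours")

#   25 Mbps: 17.8 hours
#  100 Mbps:  4.4 hours
#  500 Mbps:  0.9 hours
```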

If file size wasn't a limitation, a lot of modern games would have much more asset variety and far more detailed textures.

On a related topic, all three console makers should include a bandwidth limiter in their download tab options, like Steam.
 

Polygonal_Sprite

Gold Member
I know what people mean when they say that, and they are technically correct, but boy, it's laughable when I read people being OK with what we have now because they think we are already almost at the top...

The sad thing is that before we get Pixar levels of render-farm power for videogames, we are all gonna be dead.
The thing is, the term "diminishing returns" was being bandied about far too early, like in the PS360 era. Now, yes, we're definitely in the realm of diminishing returns.

The Insomniac leak said it best: "is the $100+ million spent on SM2 versus the first game evident on screen?" And I'd argue that no, it is not worth the extra $100 million and the probable 18 extra months of development time for the overall project.

Some of the people in this thread are truly on the verge of insanity. I've asked before, but what exactly is it that you want? People posted Witcher / AC / Sony CGI expecting that in real time on a Series S (because remember, games have to run on Series S because it's the baseline hardware, which will be lowered to Switch 2 soon enough). Even on PS6, if it has a CPU with 4x the performance, 64GB of RAM and a 50-teraflop GPU, games will still not look like those pre-rendered videos. And those are relatively low-end in the world of pre-rendered CGI, so forget Hollywood CGI.

But let's dream for a second. Say PS6 can deliver visuals on the level of those game marketing CGI videos… how many people will be needed to create and deliver on that level of fidelity? How many years will it take those people? How large will the file sizes be: 300/400/500GB!? And most importantly of all, how much will these games cost in an industry that's already struggling with $150 million budgets, never mind the average AAA game costing $300-$400 million… a game would need to sell in the region of 10 million copies at full price just to break even.
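That break-even figure roughly checks out; a napkin sketch, where the marketing spend and the 30% platform cut are my own assumptions:

```python
# Rough break-even for a $300-400M AAA game (all inputs are assumptions)
budget = 350e6        # midpoint of the $300-400M development range
marketing = 150e6     # assumed marketing spend
price = 70.0          # full-price copy
platform_cut = 0.30   # typical store fee

net_per_copy = price * (1 - platform_cut)  # $49 to the publisher per copy
copies = (budget + marketing) / net_per_copy
print(f"~{copies / 1e6:.1f}M full-price copies to break even")  # ~10.2M
```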

Rift Apart, Returnal, Forbidden West, GoW Ragnarok, Spider-Man 2, Starfield and Forza Horizon 5 all look great, and Hellblade II, Star Wars Outlaws, Black Panther and GTA VI all look a step above those. Don't worry. Be happy.
 

Hunnybun

Member
The thing is, the term "diminishing returns" was being bandied about far too early, like in the PS360 era. Now, yes, we're definitely in the realm of diminishing returns.

The Insomniac leak said it best: "is the $100+ million spent on SM2 versus the first game evident on screen?" And I'd argue that no, it is not worth the extra $100 million and the probable 18 extra months of development time for the overall project.

Some of the people in this thread are truly on the verge of insanity. I've asked before, but what exactly is it that you want? People posted Witcher / AC / Sony CGI expecting that in real time on a Series S (because remember, games have to run on Series S because it's the baseline hardware, which will be lowered to Switch 2 soon enough). Even on PS6, if it has a CPU with 4x the performance, 64GB of RAM and a 50-teraflop GPU, games will still not look like those pre-rendered videos. And those are relatively low-end in the world of pre-rendered CGI, so forget Hollywood CGI.

But let's dream for a second. Say PS6 can deliver visuals on the level of those game marketing CGI videos… how many people will be needed to create and deliver on that level of fidelity? How many years will it take those people? How large will the file sizes be: 300/400/500GB!? And most importantly of all, how much will these games cost in an industry that's already struggling with $150 million budgets, never mind the average AAA game costing $300-$400 million… a game would need to sell in the region of 10 million copies at full price just to break even.

Rift Apart, Returnal, Forbidden West, GoW Ragnarok, Spider-Man 2, Starfield and Forza Horizon 5 all look great, and Hellblade II, Star Wars Outlaws, Black Panther and GTA VI all look a step above those. Don't worry. Be happy.

I think this assumption that fidelity and development costs must always rise in close proportion needs some serious scrutiny.
 

GymWolf

Member
The thing is, the term "diminishing returns" was being bandied about far too early, like in the PS360 era. Now, yes, we're definitely in the realm of diminishing returns.

The Insomniac leak said it best: "is the $100+ million spent on SM2 versus the first game evident on screen?" And I'd argue that no, it is not worth the extra $100 million and the probable 18 extra months of development time for the overall project.

Some of the people in this thread are truly on the verge of insanity. I've asked before, but what exactly is it that you want? People posted Witcher / AC / Sony CGI expecting that in real time on a Series S (because remember, games have to run on Series S because it's the baseline hardware, which will be lowered to Switch 2 soon enough). Even on PS6, if it has a CPU with 4x the performance, 64GB of RAM and a 50-teraflop GPU, games will still not look like those pre-rendered videos. And those are relatively low-end in the world of pre-rendered CGI, so forget Hollywood CGI.

But let's dream for a second. Say PS6 can deliver visuals on the level of those game marketing CGI videos… how many people will be needed to create and deliver on that level of fidelity? How many years will it take those people? How large will the file sizes be: 300/400/500GB!? And most importantly of all, how much will these games cost in an industry that's already struggling with $150 million budgets, never mind the average AAA game costing $300-$400 million… a game would need to sell in the region of 10 million copies at full price just to break even.

Rift Apart, Returnal, Forbidden West, GoW Ragnarok, Spider-Man 2, Starfield and Forza Horizon 5 all look great, and Hellblade II, Star Wars Outlaws, Black Panther and GTA VI all look a step above those. Don't worry. Be happy.
Devs are gonna find ways to cut the costs, with AI or something.

Let us dream, dude; nobody really knows how graphics are gonna evolve in the future.
 

hlm666

Member
It's not that much data. Epic told DF that for the Matrix demo it is 300 MB/s. The PS5 can do 5.5 GB/s. Most SATA SSDs can do 500 MB/s.

The I/O has had zero impact on other UE5 games so far.
Not sure where that comment you replied to came from in relation to that video, because there is a part in the video where you can see the streaming at around ~250MB/s. It was about saving disk space using tessellation at the cost of some GPU perf, with the added bonus, I guess, of making streaming even lighter, to like ~30MB/s.
 
You're still talking about real time rendering at 30 or 60fps compared to offline at hours per frame. You can't get round that by remote computation.

Where I do take issue with the direct comparison to offline rendering is with the failure to recognise the vastly diminishing returns beyond a certain level of fidelity.

PS7 or 8 won't get close to the level of actual operations per frame that current CG leverages, but imo it will get very close to how the final product actually LOOKS. And that's the key metric here.
I just want a real-time fluid sim happening in some capacity that is at least passable. It doesn't have to be current Pixar level.
 

Polygonal_Sprite

Gold Member
I think this assumption that fidelity and development costs must always rise in close proportion needs some serious scrutiny.
Follow the graph of development time, and thus development budget, increasing from PS1 to PS5. It will continue on the same trajectory unless the aim of development changes from chasing photorealism.

Sony and MS have conditioned their customers to demand a large leap in visuals every generation, though. At this stage it would be near impossible to change their business model. It's why Nintendo were clever to change the perception of what a generation was with the Wii, by creating new ways to play instead of just creating an HD console in 2005. 20 years later we will get the first Nintendo console that runs games at 1080p with the Switch 2, and they've just come off a generation where their software alone has made more profit ($25 billion) than Xbox has made in its entire brand history.

PlayStation especially need to focus on something other than visual fidelity (VR is a great idea, and they need to support it far more, along with a decent price cut to get it into people's homes), because even with putting their games out on PC they will not survive long with $400 million development budgets. No company would. Two let-downs and you've lost a billion dollars when you include marketing…

This industry is beyond mad, and it's going to get even worse before it gets better. We might even get another crash like in the mid '80s.
 

Polygonal_Sprite

Gold Member
The "softness" of the image might have not been to your liking, but it was very much part of the artistic vision as well. Our artists wanted to try something different and very specific, and we had to create lots of new tech to enable those ideas. Some worked out and some not so much. This is why when we introduced the Photo Mode we added full control over all postprocessing (and much more), and your changes could be persistent. You could make the image as sharp and free of simulated camera artifacts as you pleased, then play the game that way. It again goes to show you that it had nothing to do with rendering limits or the technology pushing the platform too far - the choices were deliberate and our systems and the PS4 were perfectly capable to deliver as designed.

Same goes for the black bars - the aspect ratio of an image has profound implications on the way scenes are populated, framed, lit and animated - the visuals of the game were conceived and concepted from the ground up around these notions, the effect of the black bars was negligible as far as performance goes. As I said many times, our PC prototype from 2011 and all the concept art from this period (years before the PS4 specs were finalized) have the same aspect ratio and the black bars. Again, decisions were made, feel free to dislike them, as you do.

And finally, I will not argue on me being "full of shit" or otherwise, but you don't have the full picture on a lot of things, and also what you say is not at all what was described or how it was supposed to work - sometimes marketing talking points don't come from where you think they do, let's just leave it at that. Beyond this, the way the whole cinemelee thing was envisioned never made it into the game (like many other things we were aiming for). As we were running out of time and were not given the option to extend any further, drastic cuts were made, across the board.

We needed another year, that is the truth of it... at the end it was our fault (by our I mean we in a position of leadership, our team did nothing but the most outstanding job possible). Some of the ambitious bets we made worked out well, some did not. I take responsibility for the latter - even when the issues originated from external forces, we should have fought harder or figured out ways to compensate. I am very sorry that the final output did not live up to expectations.
You probably didn't see it, but I replayed The Order last year on PS5 and was still blown away by what you and your team achieved with a notebook CPU and 1.8 teraflops of compute. It's still one of the best-looking games, and I'd stack it up against any other game visually, even the big hitters from 2023 like Phantom Liberty with all their RT running on hardware that costs $3000. I can't give any more of a compliment than that.

In terms of structure, I was honestly fine with the pacing of the walking + talking scenes, the exploration, the shooting and the cutscenes; it just needed to be at least twice as long for a full-price game imo. I'd still rate it a good 7.5/10. People forget that the in-vogue thing at the time of development was linear third-person shooters with crazy visuals and set pieces. It's strange Uncharted 1 didn't get the same shit when that's not far over the running time of The Order.

One question I would like to ask, if you have a moment: when the frame-rate limiter was off, how near or far was it from 60fps, especially during the slower exploration and talking sections? I only ask because the Shadow Fall, Second Son and Driveclub directors all said they were around the 40-45fps range but limited their games to 30fps for consistency. A pity we didn't have VRR screens back then!

All the best in your future endeavours and thanks again to the hard work you and your team put into the development of The Order! I hope somehow, someway we get a sequel. I always thought it would be the same leap as Uncharted to Uncharted 2 in terms of scope, level design and tightening up the shooting mechanics further.
 

Hunnybun

Member
Follow the graph of development time, and thus development budget, increasing from PS1 to PS5. It will continue on the same trajectory unless the aim of development changes from chasing photorealism.

Sony and MS have conditioned their customers to demand a large leap in visuals every generation, though. At this stage it would be near impossible to change their business model. It's why Nintendo were clever to change the perception of what a generation was with the Wii, by creating new ways to play instead of just creating an HD console in 2005. 20 years later we will get the first Nintendo console that runs games at 1080p with the Switch 2, and they've just come off a generation where their software alone has made more profit ($25 billion) than Xbox has made in its entire brand history.

PlayStation especially need to focus on something other than visual fidelity (VR is a great idea, and they need to support it far more, along with a decent price cut to get it into people's homes), because even with putting their games out on PC they will not survive long with $400 million development budgets. No company would. Two let-downs and you've lost a billion dollars when you include marketing…

This industry is beyond mad, and it's going to get even worse before it gets better. We might even get another crash like in the mid '80s.

If path tracing can be used for all lighting, does that imply a large increase in development resources, or a large reduction?

If Nanite can eliminate multiple LODs, does that imply a large increase in development resources, or a large reduction?

If AI can seriously reduce the time spent on simpler development tasks, does that imply a large increase in development resources, or a large reduction?

Etc.
 

Polygonal_Sprite

Gold Member
If path tracing can be used for all lighting, does that imply a large increase in development resources, or a large reduction?

If Nanite can eliminate multiple LODs, does that imply a large increase in development resources, or a large reduction?

If AI can seriously reduce the time spent on simpler development tasks, does that imply a large increase in development resources, or a large reduction?

Etc.
No, because from the info I have access to, almost every single multiplatform game, past, present and future, is coming to the next Nintendo platform, so that will be used as the development base for the next decade, meaning developers can't just throw out the old techniques and use RT for GI, AO and reflections. And that's even ignoring that Series S exists and still needs standard rasterisation techniques for its versions of many games, versus the much more powerful Series X and PS5.

Note*

Switch 2, or whatever it gets called, can use RT as it has the tensor cores Ninty wanted for DLSS (and it has the tech to denoise RT, forget the name of it), but at some point it's not enough to support techniques if it doesn't have the raw power to pull them off at acceptable framerates. See the RT effects in the PS4/XBO/Switch version of Crysis Remastered. From what I've been told, Switch 2 is about on par with a Series S in terms of what you see on screen visually, but has worse framerates and slower load times due to having a much weaker CPU, slower RAM and a slower drive to read data from. I'm in no way bad-mouthing it, because Switch 2 is a quantum leap over the current Switch and will mean many, many, many PS4/XBO ports, a ton of current-gen ports and some downright astounding-looking exclusive games.
 

SlimySnake

Flashless at the Golden Globes
No, because from the info I have access to, almost every single multiplatform game, past, present and future, is coming to the next Nintendo platform, so that will be used as the development base for the next decade, meaning developers can't just throw out the old techniques and use RT for GI, AO and reflections. And that's even ignoring that Series S exists and still needs standard rasterisation techniques for its versions of many games, versus the much more powerful Series X and PS5.

Note*

Switch 2, or whatever it gets called, can use RT as it has the tensor cores Ninty wanted for DLSS (and it has the tech to denoise RT, forget the name of it), but at some point it's not enough to support techniques if it doesn't have the raw power to pull them off at acceptable framerates. See the RT effects in the PS4/XBO/Switch version of Crysis Remastered. From what I've been told, Switch 2 is about on par with a Series S in terms of what you see on screen visually, but has worse framerates and slower load times due to having a much weaker CPU, slower RAM and a slower drive to read data from. I'm in no way bad-mouthing it, because Switch 2 is a quantum leap over the current Switch and will mean many, many, many PS4/XBO ports, a ton of current-gen ports and some downright astounding-looking exclusive games.
Nobody gives a shit about acceptable framerates and resolutions on the Series S or the Switch. None of these games will use the Switch 2 as the base, just like how the Series S wasn't used as the base. PS5 is the lead platform for virtually all games this gen, save for a few MS exclusives and PC games like Cyberpunk.

This is what the Switch ports looked like last gen.

[Four Switch port screenshots]


This is what Series S looked like when running RT games: 512p. Some games actually drop below 480p. That's what the devs will do on the Switch. The PS5 and XSX will continue utilizing RT techniques until next gen, when they switch to path tracing if the hardware power is available; if not, it will be more intensive RT, but it most definitely will not be held back by the Switch 2.

[Series S screenshot]
 

SlimySnake

Flashless at the Golden Globes
Already watched the videos in that channel; they are what dreams are made of :lollipop_grinning_sweat:

I think my whole question was hypothetical from the beginning, dude; I didn't really expect a precise response, because no one knows the next paradigm-shifting tech discovery.

It was mostly just to spark the discussion on future tech evolution.

Don't you people think that we are gonna have powerful cloud gaming well before having a physical console that can do Pixar graphics and physics?

Like what Microsoft tried to do with Crackdown 3, but on steroids.

Cloud gaming will never be practical because of the costs associated with it. How many cloud servers would you need at launch? 10 million? 20 million? That's 20 million x $500, or a $10 billion loss on day one. And that's assuming a mediocre next-gen upgrade. If you want a 4090 equivalent for powerful cloud gaming, you will be looking at $1,000 minimum. That's $20 billion. And why bother when you can just have gamers foot that bill?

MS likely dropped the Crackdown tech because it was impossible to put together enough servers to do the calculations without putting people in a queue like they do today for xCloud. Imagine playing a game on your console and boom, the servers run out, you are in the queue, and the game goes from using path tracing to Switch graphics. You would riot.

Apparently they are all investing in AI. Nvidia and Microsoft seem to be all in on this stuff, but I am a traditionalist. I know there is no secret sauce. I see all these new things as get-rich-quick schemes that never fully pan out. You will always need a powerful GPU. AI will not be able to get a Switch GPU to do path tracing. MS will not offer the biggest leap seen between console generations. You will get a 30-40 teraflop console with enhanced ray-tracing capabilities due to slow and steady improvements in hardware and tech. If we are lucky, we get path tracing. If not, we get RT with some pretty fancy visuals on par with certain Pixar movies from 10 years ago.

As for physics, let's wait and see what Rockstar does with GTA, Ubisoft does with the next AC, and Kojima with Death Stranding 2. I want to say let it go, but DS2 seems to have some realtime floods, earthquakes and destruction, and AC was rumored to have all kinds of destruction, so that might just lead to more physics-based stuff later next gen. I think the tech is already there; the Switch is doing physics with 15 watts of power. Devs just have to use it.
 

Polygonal_Sprite

Gold Member
Nobody gives a shit about acceptable framerates and resolutions on the Series S or the Switch. None of these games will use the Switch 2 as the base, just like how the Series S wasn't used as the base. PS5 is the lead platform for virtually all games this gen, save for a few MS exclusives and PC games like Cyberpunk.

This is what the Switch ports looked like last gen.

[Four Switch port screenshots]


This is what Series S looked like when running RT games: 512p. Some games actually drop below 480p. That's what the devs will do on the Switch. The PS5 and XSX will continue utilizing RT techniques until next gen, when they switch to path tracing if the hardware power is available; if not, it will be more intensive RT, but it most definitely will not be held back by the Switch 2.

[Series S screenshot]
I don't really get the point you're attempting to make by spamming original Switch screenshots. Developers didn't know Nintendo were using the Tegra X1 until late 2015, which is two-plus years after the launch of PS4/XBO. What you're making is an apples-to-oranges comparison. Developers have known Nintendo were using chip T239 with DLSS since early 2020, BEFORE the launch of PS5/XBSS/X.

The next Mon Hunter, for instance, was built with the geometric and memory limitations of Switch 2 in mind. Yes, its main platform is PS5 (it's actually PC, but go off king), but they keep the limitations of the lowest common denominator in mind because they can sell another 10 million copies of Mon Hunter on Switch 2 and 2 million on Series S. Switch missed out on 99% of PS4/XBO AAA games because this wasn't the case. I can't really spell it out any clearer for you than that, other than... -

Switch 1 = devs didn't know what Nintendo were up to and DIDN'T CARE because of the disaster of the Wii U. Some publishers genuinely thought Nintendo were done.
Switch 2 = devs have known the ballpark specs and feature set since BEFORE the PS5/Series consoles launched. Publishers are beating down Nintendo's door to get their games on Switch 2 after Nintendo sold 150 million Switch consoles.

You're in for a rude awakening, but sure, keep building yourself up for the next AAA game just because they show off an "in engine" trailer or spout some shit about "unlimited polygons" and "movie quality textures", or even some BS tech demo. Switch 2 will also have a BS tech demo when they show it off, which even first-party games will never achieve. It's the equivalent of the burger in a McDonalds commercial versus the real thing lol...

You'd think we'd all be wise to this. Remember the EA Sports games and Killzone from the PS3 reveal? Or how about the Deep Down dragon and its fire attack from the PS4 reveal? Or the CGI-like Breath of the Wild reveal, or even, in the more recent past, the UE5 demo with the character flying through a massive environment with pixel-perfect shading / shadowing / ridiculously high-quality textures and no LODs whatsoever... rofl.
 

Hunnybun

Member
No because from the info I have access to almost every single multiplatform game from past, present and future is coming to the next Nintendo platform so that will be used as the development base for the next decade meaning developers can't just throw out the old techniques and use RT for GI, AO and reflections. Even ignoring that Series S exists and still needs standard rasterisation techniques for it's versions of many games versus the much more powerful Series X and PS5.

Note*

Switch 2 or whatever it gets called can use RT as it has tensor cores Ninty wanted for DLSS (and it has the tech do denoise RT, forget the name of it) but at some point it's not enough to support techniques if it doesn't have the raw power to pull them off at acceptable framerates. See the RT effects in the PS4/XBO/Switch version of Crysis Remastered. From what I've been told Switch 2 is about on par with a Series S in terms of what you see on screen visually but has worse framerates and slower load times due to having a much weaker CPU, slower RAM and a slower drive to read data from. I'm in no way bad mouthing it because Switch 2 is a quantum leap over the current Switch and will mean many, many, many PS4/XBO ports, a ton of current gen ports and some downright astounding looking exclusive games.

That's not a defence of your original point; it just means that the time at which these techniques can be fully leveraged is somewhat delayed. But ultimately they'll still happen, won't they? Which is what I said.
 

Polygonal_Sprite

Gold Member
That's not a defence of your original point; it just means that the time at which these techniques can be fully leveraged is somewhat delayed. But ultimately they'll still happen, won't they? Which is what I said.
Not really, because even when PS6 arrives there will still be a cross-gen period of 2-3 years, so we're talking 2030. What will gaming even look like then? Local hardware will probably not even be a thing anymore, and developers will more than likely mostly be AI lol.

Also, during that cross-gen period they will still want to support Switch 2 as the base platform, so it circles around to my last point perfectly.
 

SlimySnake

Flashless at the Golden Globes
I don't really get the point you're attempting to make by spamming original Switch screenshots. Developers didn't know Nintendo were using the Tegra X1 until late 2015, which is two-plus years after the launch of PS4/XBO. What you're making is an apples-to-oranges comparison. Developers have known Nintendo were using chip T239 with DLSS since early 2020, BEFORE the launch of PS5/XBSS/X.

The next Mon Hunter, for instance, was built with the geometric and memory limitations of Switch 2 in mind. Yes, its main platform is PS5 (it's actually PC, but go off king), but they keep the limitations of the lowest common denominator in mind because they can sell another 10 million copies of Mon Hunter on Switch 2 and 2 million on Series S. Switch missed out on 99% of PS4/XBO AAA games because this wasn't the case. I can't really spell it out any clearer for you than that, other than... -

Switch 1 = devs didn't know what Nintendo were up to and DIDN'T CARE because of the disaster of the Wii U. Some publishers genuinely thought Nintendo were done.
Switch 2 = devs have known the ballpark specs and feature set since BEFORE the PS5/Series consoles launched. Publishers are beating down Nintendo's door to get their games on Switch 2 after Nintendo sold 150 million Switch consoles.

You're in for a rude awakening, but sure, keep building yourself up for the next AAA game just because they show off an "in engine" trailer or spout some shit about "unlimited polygons" and "movie quality textures", or even some BS tech demo. Switch 2 will also have a BS tech demo when they show it off, which even first-party games will never achieve. It's the equivalent of the burger in a McDonalds commercial versus the real thing lol...

You'd think we'd all be wise to this. Remember the EA Sports games and Killzone from the PS3 reveal? Or how about the Deep Down dragon and its fire attack from the PS4 reveal? Or the CGI-like Breath of the Wild reveal, or even, in the more recent past, the UE5 demo with the character flying through a massive environment with pixel-perfect shading / shadowing / ridiculously high-quality textures and no LODs whatsoever... rofl.
You seem to know a lot about what developers know / don't know / don't care about. None of what you said has been corroborated by anyone. What are your sources?

The proof is in the pudding. We all thought Series S would hold back next gen. It didn't. There was no rude awakening. Devs simply removed RT and other next-gen features from Series S games, or removed 60fps modes, or reduced resolution all the way down to 480p.

We literally saw a repeat of what they did with the Switch. Downport and move on. Matrix on Series S looks like a VCD movie we all pirated back in 1999, full of blocky artifacts, and this was after MS intervened and sent The Coalition to fix the Series S version because Epic couldn't give two shits about it. I have no idea what the CG trailers of Killzone and Madden have to do with the Switch becoming the de facto console. Those are two different conversations.

By your logic, GTA6, 1943, Avatar, Alan Wake 2 and all the UE5 titles were held back by the Switch 2, because devs were somehow aware of the Switch 2 specs and decided to make Switch 2 the lead platform. How do you not see how insane this sounds?
 

Polygonal_Sprite

Gold Member
You seem to know a lot about what developers know / don't know / don't care about. None of what you said has been corroborated by anyone. What are your sources?

The proof is in the pudding. We all thought Series S would hold back next gen. It didn't. There was no rude awakening. Devs simply removed RT and other next-gen features from Series S games, or removed 60fps modes, or reduced resolution all the way down to 480p.

We literally saw a repeat of what they did with the Switch. Downport and move on. Matrix on Series S looks like a VCD movie we all pirated back in 1999, full of blocky artifacts, and this was after MS intervened and sent The Coalition to fix the Series S version because Epic couldn't give two shits about it. I have no idea what the CG trailers of Killzone and Madden have to do with the Switch becoming the de facto console. Those are two different conversations.

By your logic, GTA6, 1943, Avatar, Alan Wake 2 and all the UE5 titles were held back by the Switch 2, because devs were somehow aware of the Switch 2 specs and decided to make Switch 2 the lead platform. How do you not see how insane this sounds?
There was no rude awakening? I must be imagining you and others in here pissing and moaning on a daily basis for the entire four years of this gen calling Insomniac and other devs lazy and unambitious etc lol.

My sources are mates that work not just in the industry but at two of the biggest AAA houses. Games are built around the constraints of the Series S even though PS5 is the "lead platform", and as of the past year multiplat games are now built around the constraints of NG Nintendo hardware, because the meshes and other systems have to run on it and Series S, as well as low-end PCs, PS5 and Series X.

There's no talking to you, mate. Until we have Avatar 2 in real time you'll always find a reason to piss and whine. Play the games we have and, for fuck's sake, enjoy them. They look great, and a lot of good people work 80 hours a week to bring them to you and the rest of us.
 

SlimySnake

Flashless at the Golden Globes
There was no rude awakening? I must be imagining you and others in here pissing and moaning on a daily basis for the entire four years of this gen calling Insomniac and other devs lazy and unambitious etc lol.
Yes, we all know Insomniac, Guerrilla Games, Naughty Dog and Sucker Punch were held back by the Series S and the 2025 handheld Nintendo Switch 2. Not because they tied themselves to 1.8-teraflop GPUs, 1.6 GHz CPUs and HDDs.
My sources are mates that work not just in the industry but at two of the biggest AAA houses. Games are built around the constraints of the Series S even though PS5 is the "lead platform", and as of the past year multiplat games are now built around the constraints of NG Nintendo hardware, because the meshes and other systems have to run on it and Series S, as well as low-end PCs, PS5 and Series X.
PS5 can't be both the lead platform and be limited by the constraints of Nintendo hardware. The devs will treat the Switch 2 like they have treated the Series S: they will downport the games with no regard to actual quality. No one gives a shit about these shitty consoles. That is what those Switch screenshots were supposed to show you. The ports will likely be handled by outside porting studios. CD Projekt isn't designing The Witcher 4 around Switch 2 specs lol
There's no talking to you, mate. Until we have Avatar 2 in real time you'll always find a reason to piss and whine. Play the games we have and, for fuck's sake, enjoy them. They look great, and a lot of good people work 80 hours a week to bring them to you and the rest of us.
This has nothing to do with anything, but I played pretty much every major release last year and was one of the few to point out their graphics achievements while others bitched and moaned about the lack of Matrix-quality leaps. You must have me confused with someone else.
 

Hunnybun

Member
Not really, because even when PS6 arrives there will still be a cross-gen period of 2-3 years, so we're talking 2030. What will gaming even look like then? Local hardware will probably not even be a thing anymore, and developers will more than likely mostly be AI lol.

Also, during that cross-gen period they will still want to support Switch 2 as the base platform, so it circles around to my last point perfectly.

No.
 

Polygonal_Sprite

Gold Member
Yes, we all know Insomniac, Guerrilla Games, Naughty Dog and Sucker Punch were held back by the Series S and the 2025 handheld Nintendo Switch 2. Not because they tied themselves to 1.8-teraflop GPUs, 1.6 GHz CPUs and HDDs.

PS5 can't be both the lead platform and be limited by the constraints of Nintendo hardware. The devs will treat the Switch 2 like they have treated the Series S: they will downport the games with no regard to actual quality. No one gives a shit about these shitty consoles. That is what those Switch screenshots were supposed to show you. The ports will likely be handled by outside porting studios. CD Projekt isn't designing The Witcher 4 around Switch 2 specs lol

This has nothing to do with anything, but I played pretty much every major release last year and was one of the few to point out their graphics achievements while others bitched and moaned about the lack of Matrix-quality leaps. You must have me confused with someone else.
Come on now. How many times have you moaned about AAA third-party games in the past four years? But yes, you have a particular fetish hatred for Sony's first-party devs for some reason. Sony's PS5 games aren't wowing you as much as you thought they would because the leap from PS4 Pro to PS5 isn't anything like the leap from PS3 to PS4. We went from 600p/720p @ 20-25fps on PS3 to full native 1080p @ locked 30fps on base PS4, and 1440p/4KCB @ either locked 30fps (or unlocked 50-60fps performance modes in some games) on PS4 Pro. Then we had new industry standards with PBR, per-object motion blur, good-quality DoF, clever approximations of bounce lighting, ridiculously good-looking character models and good-quality particle effects, most of which couldn't be done on PS3.

In development terms, "lead platform" doesn't mean that the game is built around it. It simply means the version that gets the most attention (usually the most popular platform, so PS4 and now PS5). I'm sure you know a game's true lead platform is a development environment on PC first, then console dev kits. And I'm sure you know what I meant, but when game worlds are being designed in terms of geometry, scale etc., the particular game's lowest-powered platform is always used as the base, and they build the other versions around that skeleton. They do not "downport" games to Series S after creating them on the crowning glory that is PS5 hardware. A console that has to run some games at 720p in the year of our lord 2024…

I won't engage with you further, because I know you're an intelligent guy but are selectively quoting and misinterpreting my posts on purpose in an effort to not lose an argument on a message board. It's really not that serious, dude. I'm simply passing on knowledge from people who have worked in the industry at the absolute highest level for 25+ years and are at this very moment working on the world's most anticipated video game, maybe of all time. They know a thing or two. Don't shoot the messenger.
 
I'm just shocked that people actually care if they can fit more than 5 games at once on their console. Even if you have to wait a bit to redownload a game, why does that even matter? People should value patience far more than that. Even with slow internet, it takes less time to download a game today than it used to take to visit Blockbuster and return with one back in the day.
Same reason why people root for a digital-only future (giving up their ownership rights: "own nothing and be happy"). No logic in their brains… Sometimes gamers complain about the stupidest things.
 
The thing is, the term "diminishing returns" was being bandied about far too early, like in the PS360 era. Now, yes, we're definitely in the realm of diminishing returns.

The Insomniac leak said it best: "is the $100+ million spent on SM2 versus the first game evident on screen?" And I'd argue that no, it is not worth the extra $100 million and the probable 18 extra months of development time for the overall project.

Some of the people in this thread are truly on the verge of insanity. I've asked before, but what exactly is it that you want? People posted Witcher / AC / Sony CGI expecting that in real time on a Series S (because remember, games have to run on Series S because it's the baseline hardware, which will be lowered to Switch 2 soon enough). Even on PS6, if it has a CPU with 4x the performance, 64GB of RAM and a 50-teraflop GPU, games will still not look like those pre-rendered videos. And those are relatively low-end in the world of pre-rendered CGI, so forget Hollywood CGI.

But let's dream for a second. Say PS6 can deliver visuals on the level of those game marketing CGI videos… how many people will be needed to create and deliver on that level of fidelity? How many years will it take those people? How large will the file sizes be: 300/400/500GB!? And most importantly of all, how much will these games cost in an industry that's already struggling with $150 million budgets, never mind the average AAA game costing $300-$400 million… a game would need to sell in the region of 10 million copies at full price just to break even.

Rift Apart, Returnal, Forbidden West, GoW Ragnarok, Spider-Man 2, Starfield and Forza Horizon 5 all look great, and Hellblade II, Star Wars Outlaws, Black Panther and GTA VI all look a step above those. Don't worry. Be happy.
The Matrix Awakens demo, Marvel 1943 and Hellblade 2 say differently. Look at the Marbles and cars demos by Nvidia on a 4090… CGI-quality realtime graphics aren't hard to achieve with Nanite- and Lumen-like technology…

Also, to the second part of your post: this is where new compression methods, tools and bigger hard drives come in. And the industry higher-ups need to stop being greedy; there is plenty of money to fund these projects, it's the higher-ups that don't want to lose a bonus paycheck.
 

SlimySnake

Flashless at the Golden Globes
The Matrix Awakens demo, Marvel 1943 and Hellblade 2 say differently. Look at the Marbles and cars demos by Nvidia on a 4090… CGI-quality realtime graphics aren't hard to achieve with Nanite- and Lumen-like technology…

Also, to the second part of your post: this is where new compression methods, tools and bigger hard drives come in. And the industry higher-ups need to stop being greedy; there is plenty of money to fund these projects, it's the higher-ups that don't want to lose a bonus paycheck.
Insomniac was saying that 30GB of the file size is just baked lighting texture data; that goes away, or is reduced to a very small percentage, when they switch to ray-traced or path-traced GI. Horizon FW came in at 100GB, and even the last-gen version on PS4 was 90GB, for a game essentially the same size as the first one. I wouldn't be surprised if that had to do with them doubling the time-of-day bakes. Spider-Man 2 did the same as well.

I wouldn't worry about file sizes holding back developers. Star Wars came in at 150GB, and FF7 Rebirth at 145GB and shipped on two discs, so they could've gone up to 200GB.

Last gen, devs had a limit of 50GB Blu-ray discs, and by the end TLOU2 and RDR2 blew past that. Ragnarok was over 100GB on PS4. That's like the last thing they will worry about.
 
My sources are mates that work not just in the industry but at two of the biggest AAA houses. Games are built around the constraints of the Series S even though PS5 is the "lead platform", and as of the past year multiplat games are now built around the constraints of NG Nintendo hardware, because the meshes and other systems have to run on it and Series S, as well as low-end PCs, PS5 and Series X.
This is why weak hardware isn’t “Next Gen”. Series S was a horrible idea…
 
Do any of you use DLDSR? For a 2560x1440 monitor, which factor do you recommend? DL or legacy?

I use DLDSR any chance I get (only way to make more use of 24GB of VRAM, really) and consider it Nvidia's killer feature over everything else. 2.25x can be brutal at the 4K I use, but 1.78x can still look great; some single-player stuff I'll lock at 40 or 60fps to use it.

Works well with DLSS too. Using 1.78x DLDSR with DLSS Performance has it upscaling from the same internal resolution as DLSS Quality at native; you get great IQ from the downsample and a performance boost, and now you actually do get that fabled "better than native" as well.
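The resolution math behind that claim, as a quick sketch (assuming the usual figures: DLDSR's 1.78x/2.25x factors are pixel-count multipliers, i.e. 4/3 and 3/2 per axis, and DLSS Quality/Performance render at 2/3 and 1/2 of the output resolution per axis):

```python
# Internal render resolution when DLSS upscales to a DLDSR super-resolution
def internal_res(w, h, dldsr_axis, dlss_axis):
    # DLDSR scales the output up per axis; DLSS renders at a fraction of it
    return round(w * dldsr_axis * dlss_axis), round(h * dldsr_axis * dlss_axis)

# Native 1440p + DLSS Quality (2/3 per axis):
print(internal_res(2560, 1440, 1.0, 2 / 3))    # (1707, 960)
# 1.78x DLDSR (3413x1920 output) + DLSS Performance (1/2 per axis):
print(internal_res(2560, 1440, 4 / 3, 1 / 2))  # (1707, 960)
```

Both configurations render internally at ~1707x960, which is why the combo gives Quality-level input resolution with the downsampling pass layered on top.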
 
I use DLDSR any chance I get (only way to make more use of 24GB of VRAM, really) and consider it Nvidia's killer feature over everything else. 2.25x can be brutal at the 4K I use, but 1.78x can still look great; some single-player stuff I'll lock at 40 or 60fps to use it.

Works well with DLSS too. Using 1.78x DLDSR with DLSS Performance has it upscaling from the same internal resolution as DLSS Quality at native; you get great IQ from the downsample and a performance boost, and now you actually do get that fabled "better than native" as well.
Absolutely enable both 1.78x and 2.25x DLDSR. I myself am on a 4070 with a 1440p monitor, and even 1.78x returns excellent IQ and performance. Managed to play (and beat) the notoriously jaggy Yakuza 6 and Yakuza Kiwami 2 with DLDSR 1.78x. Cleans the image up a hell of a lot better than standard DSR, and at better performance to boot.
 

CamHostage

Member
Indie devs can do this, but not AAAA industry devs…



Relax with the hyperbole. Test footage from a game several years away (using a few recent features of Unreal Engine familiar to professional animators) is a different topic of discussion from products completely produced and playable today. Deniz is doing great work, but they still have to actually make the game...

 