Still not as photorealistic as Ace Combat 6 though
I see a lot of guys taking one model and instancing the crap out of it and thinking that the graphics card is churning through it with very little cost. Here's the deal: these are instances of the same exact object. The engine is smart enough to keep a single copy of the model in memory and have every instance reference it through a pointer. It can even cache the shader as well. The only per-instance cost is the transformation matrix that places each copy in world space. This test should be taken with a grain of salt. It does NOT push the graphics pipeline thoroughly enough. In order to do this, you would need several *different* assets and *different materials* on those assets. THEN try instancing them all over the place and you will watch the GPU come to a crawl. That would be a proper stress test, since a game is far more likely to have many unique objects in a rendered frame during gameplay.
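To make the cost argument concrete, here is a minimal back-of-the-envelope sketch (not engine code; the byte figures are hypothetical) of why instancing N copies of one mesh is nearly free: the mesh and shader live in memory once, and each extra instance only adds a 4x4 world transform.

```python
# Illustrative memory model for instanced rendering.
# The mesh and compiled shader are stored once; every instance just
# references them and carries its own 4x4 float32 transform (64 bytes).

MESH_BYTES = 50_000_000      # hypothetical 50 MB high-poly model
SHADER_BYTES = 200_000       # hypothetical compiled shader
TRANSFORM_BYTES = 16 * 4     # one 4x4 float32 matrix per instance

def gpu_memory_for_instances(n_instances: int) -> int:
    """Total cost: one shared mesh + shader, plus a matrix per instance."""
    return MESH_BYTES + SHADER_BYTES + n_instances * TRANSFORM_BYTES

# 20,000 instances add only ~1.28 MB of transforms on top of one model,
# whereas 20,000 *unique* meshes would cost 20,000x the mesh storage.
extra = gpu_memory_for_instances(20_000) - gpu_memory_for_instances(1)
print(extra)  # 19_999 * 64 bytes
```

The same logic is why a stress test needs many distinct assets and materials: unique data can't be shared, so both memory and per-draw state changes scale with object count instead of staying flat.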
A multi-million-polygon photoscan is one thing. A game is another. Who gives a shit until we actually see the engine on a proper game.
Taking it with a grain of salt when imagining an interactive, motion-filled, multi-object world, sure; it would be wise to temper expectations...
But even replicating the exact same model is a feat worth recognizing. You can flip Nanite on and off for objects in the UE editor, and your framerate will sink with every extra instance of a high-poly model placed in a stage unless Nanite takes over. This demo, for instance, chugs on the guy's machine when he places 40 of these statues in his game world with Nanite off; with it on, a field of 20,000 statues happily runs at the same or better framerate.
Now, I don't know what happens if you try to apply texture stamps or even different color changes to each one of those 20,000 statues with Nanite (at some point, its efficiency must drop when it can't replicate blindly, right?), much less physics and animation. But even with just replica models, it's fascinating that it maintains performance at that volume.
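The intuition about per-statue variation can be sketched simply (all names here are hypothetical, not Nanite's actual internals): a renderer can typically collapse objects that share the same (mesh, material) pair into one instanced draw call, so giving every copy its own material makes the batch count explode.

```python
# Illustrative batching model: one instanced draw call per distinct
# (mesh, material) pair. Unique per-object materials defeat batching.

def count_draw_calls(objects) -> int:
    """Count distinct (mesh, material) pairs, i.e. instanced batches."""
    return len({(o["mesh"], o["material"]) for o in objects})

# 20,000 statues sharing one material: a single batch.
shared = [{"mesh": "statue", "material": "stone"} for _ in range(20_000)]

# The same statues, each with its own tinted material: 20,000 batches,
# each with its own state changes and shader parameter uploads.
unique = [{"mesh": "statue", "material": f"stone_{i}"} for i in range(20_000)]

print(count_draw_calls(shared))  # 1
print(count_draw_calls(unique))  # 20000
```

This is only the batching side of the cost; animation and physics add per-object CPU and memory work on top, which is why "20,000 identical static statues" is the best case for any instancing scheme.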
I don't know how many models are in the Land of the Ancients demo world map, for instance, but by kitbashing & copying a few rocks, spinning and flipping and sizing them into all different shapes to slap into the stage then letting light spill across it all, they have an amazing and expansive desert landscape that performs well on target hardware and that looks great and unique from almost any angle you find while exploring it. (...But, it also takes up a 100GB project file and 25GB playable demo, so there are some big-ass assets in that file set.)
The real question is: what's the frame budget? How many polygons per frame are there? Also, this is one asset. We saw the 3090 struggling with the demo, unable to hold 60fps at 1440p.
In what demo? You read it several times that running the editor is not the same as running the baked demo. I even told you this as well as others, repeatedly in the other threads. So what demo did the 3090 barely run 60fps @ 1440? Cause that same gpu, the 3090 and 3080 ran more than 60 fps at native 4k. I'd love to know what demo you are talking about, as I just have missed something over this weekend or something?
You've been in that thread spreading laugh emojis a few days back, a thread by SlimySnake. Anyway, go search for it if you're interested.
EDIT: You've been there:
- 3090 at 4K outperforms 6900 XT: 10-20% better performance.
- 6900 XT at 1440p outperforms 3090: 10-15% better performance.
- 6900 XT is only 20 TFLOPs; 3090 is 36 TFLOPs.
- VRAM usage at both 1440p and 4K is around 6GB. Demo allocates up to 7GB.
- System RAM usage at both 1440p and 4K goes all the way up to 20+ GB. Total usage is 2x more than PS5 and XSX.
- PS5 and XSX only have 13.5 GB of total RAM available, which means their I/O is doing a lot of the heavy lifting.
- 6900 XT is roughly 50-60 fps at 1440p, 28-35 fps at native 4K.
- 3090 is roughly 45-50 fps at 1440p, 30-38 fps at native 4K.
Unreal Engine 5 Benchmarks. 6900xt outperforms 3090 at 1440p. 28 GB RAM usage.
Yea, that tells you how inefficient Unreal is on copy/pasting the same asset and NOT using Nanite. Nanite fixes an issue that shouldn't be there.
As an example, we took a Buddha model with a complete subsurface scattering BSDF and a refractive (glass) BTDF and spammed well over 200 instances of it in offline rendering using Arnold (a full path tracer), and the increase in render time was negligible. Arnold is extremely efficient at full bidirectional path tracing. Do the same thing with RenderMan back in the day and it would choke when turning on RT mode. It wasn't even a comparison when we were pitting the two against each other as the renderer of choice for the studio. PRMan has changed since then, though.
People seem to think this means we'll get levels with 4 trillion polys in them... we won't, not because we can't, but because it's dumb and makes no sense... unless you want 5 TB games.
Polys have been cheap for a long time; it's material resolution and lighting that's expensive.
What this DOES mean, however, is caring less or worrying less about poly count overhead... and it's already something people don't worry about much anymore, except for mobile games.
It just means even less thinking about it, or having to do more cleanup later.
It does mean a lot more polygons without increasing size, since there are tons of extremely repetitive things in environments, like pebbles and whatnot.
yes, but you still wouldn't do that; cleanup is an important part of reuse.
And it's been stated a million times before, that's in the editor, not the baked demo, which barely uses any resources. Not sure why you keep saying that, while knowing that's not real performance. So why do you keep doing it? Repeating it won't make it true.
You make it sound like I'm arguing against Unreal 5... I use it every day for a living to make environments, and I love it. You don't seem to understand: I'm saying the feature is great, but it doesn't mean you are dumping endless meshes into the game; you still follow intelligent pipelines. The point isn't to just dump endless meshes like people think, the point is to have one more aspect of the pipeline you don't have to fret over.
What? We already have tons of reuse. With Nanite you could easily make those meshes higher quality at the same cost, in addition to increasing the draw distance on them.