This definitely was not aimed for my gaming tastes.
If I wanted to be a game developer...I'd have got a job doing so. I don't want to create my levels, rules, objects, etc. About all I want to make is my character and customize their appearance throughout an experience.
Games like this are forever lost on me.
It looks neat, and the tech is impressive no doubt. Nonetheless, I feel like this won't sell.
Based on what? It's not like it needs the COD/brofisting audience in order to sell. Truth is, we NEED more games like this because most of the other new/inventive ideas are coming from the indie scene, but they don't have the budget this game seems to have.
Anton Mikhailov now works at Media Molecule? Hmm, I wonder how that came about. I thought he was neck-deep in Morpheus and new Sony R&D stuff; it probably means they will try to implement VR in some fashion with Dreams.
Whatever it was, it had some of the most beautiful, mind-blowing visuals I've ever seen.
Are you sure this wasn't really aimed at your gaming tastes? Ok, you don't want to be a game developer, but you need game developers to make the games you like to play, right? A piece of software like this is a means of further democratizing the game development process. YOU still don't have to be a game developer, but lots of other people can crowdsource basic game development with software like this.
As for only wanting to make your character and customize their appearance throughout an experience: that's ironic, because it's something Dreams seems particularly well suited to letting you do, above and beyond anything else out there.
Can someone post a graphics 101 on how these graphics are created?
Preferably in Food Babe-level language...
Anyone?
I hope they do not make the same mistakes they made with LBP, limiting some creation stuff in order to have better control over DLC sales.
From the looks of it, this will enable players to create any type of game, and it is obviously not just a movie-creation suite.
I'm really excited for this. Even if it's $60, I'll be getting it day one. I'm incredibly interested to see more in-depth stuff with creation and sculpting. It would be amazing if they had a way to upload models made in Blender or Maya so you aren't limited to just the sculpt tool and whatever prefabs they include.
That might uneven the playing field and demoralise players just using the in-game tools.
link to him working at MM?
So have they touched on the actual gameplay yet or is it just a case of walking through an art gallery of dreams?
Allovermahboday.
I guess this game isn't for me. Didn't like it when they revealed the trailer.
We saw a bear beating up a bunch of zombies and a robot racing on a flying motorcycle. I'm sure they wouldn't have chosen those scenarios if they couldn't at least make them playable, imo.
I'm an optimist. One of the best moments of my gaming life was the moment I played the LBP beta. Like this, it was overwhelming and magical.
Ooh! Who better to help implement the motion controls?
VR would/could be perfect for this... the whole concept of sharing digital 'dreams' crossed with the idea of 'Morpheus' seems all too obvious. It could be a Media Molecule metaverse for VR. BUT... I have extreme doubts they could get this working at the framerate required for VR. I would guess this is targeting 1080p/30fps.
If this takes off, though, it's easy to imagine a VR version in the future. For now I think the fidelity they're targeting might be too much for PS4 VR :/
In graphics, you're going to have some kind of geometry representation. The typical one is a mesh of triangles approximating a shape. So typical that it's universal in games.
But you can also use mathematical equations that describe a shape - let's say a sphere.
One such function takes a point in 3D space and returns the shortest distance from that point to the surface of the shape.
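For the concrete-minded, a sphere's distance function is tiny. Here's a throwaway Python sketch (the names are mine, nothing to do with MM's code):

```python
import math

def sphere_distance(px, py, pz, cx, cy, cz, radius):
    # Distance from the point to the sphere's centre, minus the radius:
    # positive outside the surface, zero on it, negative inside.
    dx, dy, dz = px - cx, py - cy, pz - cz
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

# Unit sphere at the origin: a point 2 units from the centre is 1 unit from the surface.
print(sphere_distance(2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0))  # 1.0
```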
You can use that function in rendering - in a ray tracer, for example, to figure out the point on the sphere that a camera's pixel should be rendering.
So instead of putting a bunch of polygons representing a sphere down a rendering pipeline, you can trace a ray per pixel and evaluate precisely what point on the sphere that pixel should be looking at. You've probably heard of 'per pixel' effects in other contexts - this would be like 'per pixel geometry'.
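The usual way of doing that per-pixel evaluation is 'sphere tracing' / ray marching: the distance value at the current point tells you how far you can safely step along the ray. A toy Python version of the idea (purely illustrative, not how MM's renderer is actually written):

```python
import math

def march_ray(origin, direction, sdf, max_steps=100, hit_eps=1e-4, max_dist=100.0):
    # Walk along the ray; the SDF value at the current point is a safe step size,
    # because nothing in the scene can be closer than that distance.
    t = 0.0
    for _ in range(max_steps):
        p = [origin[i] + direction[i] * t for i in range(3)]
        d = sdf(p[0], p[1], p[2])
        if d < hit_eps:      # close enough to the surface: call it a hit
            return t
        t += d               # advance by the distance we just read
        if t > max_dist:     # the ray escaped the scene
            return None
    return None

# Unit sphere at the origin as the whole 'scene'.
unit_sphere = lambda x, y, z: math.sqrt(x*x + y*y + z*z) - 1.0

# A ray starting at z = -3 looking down +z hits the sphere at t = 2.
print(march_ray((0.0, 0.0, -3.0), (0.0, 0.0, 1.0), unit_sphere))  # ~2.0
```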
You can do interesting things with these functions. You can very simply blend shapes together with a mathematical operation between two shapes' functions. You can subtract one shape from the other, with another operation. You can deform shapes in lots of interesting ways. Add noise, twist them. For example, here's a shape with a little deformation on the surface described by a function using 'sin':
This render was produced with a tracing of the function - at every pixel the function was evaluated multiple times to figure out the point the pixel was looking at. Notice how smooth it is - you're not seeing any polygonal edges or the like here. It's a very precise kind of way of rendering geometry.
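In code, those blend/subtract/deform tricks really are one-liners on top of the distance functions. Another rough sketch with made-up names (whatever smoothing/displacement MM actually use is their business):

```python
import math

def sphere(x, y, z, r=1.0):
    return math.sqrt(x*x + y*y + z*z) - r

def box(x, y, z, half=0.8):
    # Signed distance to an axis-aligned box with half-extent 'half' on each axis.
    qx, qy, qz = abs(x) - half, abs(y) - half, abs(z) - half
    outside = math.sqrt(max(qx, 0.0)**2 + max(qy, 0.0)**2 + max(qz, 0.0)**2)
    inside = min(max(qx, qy, qz), 0.0)
    return outside + inside

def union(d1, d2):     return min(d1, d2)    # weld two shapes together
def subtract(d1, d2):  return max(d1, -d2)   # carve shape 2 out of shape 1
def intersect(d1, d2): return max(d1, d2)    # keep only the overlap

def wobbly_sphere(x, y, z):
    # A sphere with a small 'sin' ripple added to its surface.
    return sphere(x, y, z) + 0.05 * math.sin(10 * x) * math.sin(10 * y) * math.sin(10 * z)

# e.g. the distance to 'a box with a sphere carved out of it' at one point:
print(subtract(box(0.9, 0.0, 0.0), sphere(0.9, 0.0, 0.0)))
```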
Now this isn't exactly what Media Molecule is doing. And here's where I diverge into speculation based on tweets and stuff.
(I think) MM is taking mathematical functions mentioned above and evaluating them at points in a 3D texture. So turning them into something they can look up at discrete points - which is a lot faster than calculating the functions from scratch. So, when you're sculpting in the game, it'll be baking the results of these geometry defining mathematical functions and operations into a 3D texture. So that's turning the distance functions into a more explicit representation which you might see referred to as a distance field.
To render the object represented by that texture, they have a couple of options. They could trace a ray and look up this 3D texture as necessary to figure out the point on the surface to be shaded - which is what I thought they were doing previously. But a more recent tweet suggests they are 'splatting' the distance field to the screen, which is sort of a reverse way of doing things. They'll be explaining this at Siggraph.
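To make the 'bake it into a 3D texture' bit concrete - and this is my speculation about the general technique, not their actual pipeline - you evaluate the analytic function once per grid point, then answer later distance queries with a cheap trilinear lookup instead of redoing the maths:

```python
import math

N = 32                    # grid resolution per axis
CELL = 2.0 / (N - 1)      # the grid covers [-1, 1] on each axis

def sphere(x, y, z, r=0.8):
    return math.sqrt(x*x + y*y + z*z) - r

# "Bake": evaluate the analytic distance function once at every grid point.
field = [[[sphere(-1.0 + i * CELL, -1.0 + j * CELL, -1.0 + k * CELL)
           for k in range(N)] for j in range(N)] for i in range(N)]

def sample(x, y, z):
    # Cheap lookup: trilinear interpolation between the 8 surrounding grid values.
    fx, fy, fz = (x + 1.0) / CELL, (y + 1.0) / CELL, (z + 1.0) / CELL
    i, j, k = (min(int(v), N - 2) for v in (fx, fy, fz))
    tx, ty, tz = fx - i, fy - j, fz - k
    lerp = lambda a, b, t: a + (b - a) * t
    c00 = lerp(field[i][j][k],     field[i+1][j][k],     tx)
    c10 = lerp(field[i][j+1][k],   field[i+1][j+1][k],   tx)
    c01 = lerp(field[i][j][k+1],   field[i+1][j][k+1],   tx)
    c11 = lerp(field[i][j+1][k+1], field[i+1][j+1][k+1], tx)
    return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz)

# Baked lookup vs. the analytic function at an arbitrary point inside the grid.
print(sample(0.3, 0.2, -0.1), sphere(0.3, 0.2, -0.1))
```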
The big advantage is the ease of deforming geometry with relatively simple operations. Doing robust deformation and boolean (addition/subtraction/intersection etc.) operations with polygonal meshes is really hard. Knowledge of 'the shortest distance from a given point to that surface' can also be applied in lots of other areas - ambient occlusion, shadowing, lighting, physics (collision detection). It's a handy representation to have for doing things that are trickier with polygons. UE4 has recently added the option to represent geometry with distance fields for high-quality shadowing.
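As one flavour of that: the textbook cheap ambient-occlusion trick is just a handful of extra distance lookups along the surface normal - if the field values come back smaller than the distance you've stepped, something is crowding that point. (This is the standard trick from the literature; no idea if MM do it this way.)

```python
import math

def cheap_ao(point, normal, sdf, samples=5, step=0.1):
    # Step away from the surface along the normal. In open space the SDF value
    # should equal how far we've stepped; any shortfall means nearby occluders.
    occlusion, weight = 0.0, 1.0
    for i in range(1, samples + 1):
        dist = i * step
        p = [point[j] + normal[j] * dist for j in range(3)]
        occlusion += weight * max(dist - sdf(p[0], p[1], p[2]), 0.0)
        weight *= 0.5                     # nearer samples matter more
    return max(0.0, 1.0 - occlusion)

unit_sphere = lambda x, y, z: math.sqrt(x*x + y*y + z*z) - 1.0
# The top of an isolated sphere is fully open to the sky: AO comes out as 1.0.
print(cheap_ao((0.0, 1.0, 0.0), (0.0, 1.0, 0.0), unit_sphere))
```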
The disadvantage of this - and of using it from top to toe in your pipeline! - is that it's tricky to do fast, and obviously GPUs and content pipelines etc. are so based around the idea of triangle rasterisation. But GPUs have gotten a lot more flexible lately, so maybe as time wears on we'll see even more non-traditional 'software rendering' on the GPU.
Awesome post, most of which I understood.
A couple more questions:
Are the disadvantages such that it's only now with GPUs capable of managing heavy compute loads that this technique is viable for use in games?
Can it be blended with traditional rasterising rendering in hybrid solutions?
This is worth a re-watch.
https://youtu.be/MtY12ziHuII?t=3m10s especially from 3:10 onward.
Not just for the 3D modelling but the "futuristic (3D) interface."