Power of the Cloud - Microsoft Cloud Demo (simulating ~35,000 falling chunks)
Power of the GPU - Resogun (no internet required)
they are doing completely different things
Power of the Cloud - Microsoft Cloud Demo (simulating ~35,000 falling chunks)
Power of the GPU - Resogun (no internet required)
Physics engine vs. particle rendering, i.e. completely different things.
"it's not a particle effect, they're actual physical cubes. In gameplay, dynamic cubes with collisions"
Nothing like a complex and unreliable solution to a self-created problem. The cloud has a place in all future tech, but compute won't be a good fit for several years. Put a good GPU in the next console, please.
More tweets -
Age of Ascent paying $75 per hour for 100,000 users.
Power of the Cloud - Microsoft Cloud Demo (simulating ~35,000 falling chunks)
Power of the GPU - Resogun (no internet required)
This is a valid point imo. What game really needs the cloud? They need to justify the cloud for people to buy into the idea.
See, I brought this up earlier in the thread and was dismissed. If a game uses the cloud intensively, it will be quite expensive. So for devs to get really extreme results from the cloud, there needs to be a way to make that money back. If you pay $75/hr for 100k users, that's roughly $12,600 a week.
If there is no money to pay to keep the lights on 'so to speak' then the lights won't be on.
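To sanity-check the arithmetic in the posts above (purely illustrative; this assumes the $75/hour Age of Ascent figure from the tweet holds around the clock, which a real game's load wouldn't):

```python
# Back-of-the-envelope cloud cost at a flat hypothetical rate.
RATE_PER_HOUR = 75  # the $75/hr figure quoted in the thread

per_week = RATE_PER_HOUR * 24 * 7   # hours in a week at constant load
per_year = per_week * 52

print(f"${per_week:,}/week, ${per_year:,}/year")
# -> $12,600/week, $655,200/year
```

Real usage would scale up and down with concurrent players, so this is an upper bound for that rate, not a realistic bill.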
notsureifserious.gif or neogaf.gif? Choose wisely.
Power of the Cloud - Microsoft Cloud Demo (simulating ~35,000 falling chunks)
Power of the GPU - Resogun (no internet required)
But it's not particles....
I'm not sure that's how the Azure virtual servers work. I think they're spun up as the game requests them.
Physics engine vs. particle rendering, i.e. completely different things.
"The entire environment - everything - is built up from these voxels," explains Kruger. "All of the cubes that you see flying around - there's no gimmick, no point sprites, it's not a particle effect, they're actual physical cubes."
How is rendering a cube any different than rendering a chunk of concrete or glass?
they are doing completely different things
Roughly $655,000 a year... Good point; it brings us back to MS saying devs have free access to Thunderhead on XB1, yet not even all of their titles with MP use it. There has to be some clause to that, IMO. Although XB1 devs would probably also get a good discount, as Respawn said a while ago.
Problem is, they have a history of releasing half-baked ideas like Kinect. If they were like Apple, who won't announce until it's ready, people wouldn't be this skeptical.
And if developers have to pay anything to use the cloud, they would have to consider whether they really need to use the cloud to implement slightly more impressive destruction or something. Implementing and testing stuff like this will also cost extra. It's a lot of cost, so they would want the cloud stuff to be really worth it. That means there needs to be a really compelling use case for the cloud, which we haven't really seen yet.
No developer is going to pay several grand a week to have their explosions look slightly more impressive.
Well, it's not just "slightly more impressive".
Anyway, I don't think the whole idea of cloud gaming is to move more polygons, but to create persistent worlds, a much-debated topic that is especially interesting for MMOs.
Did you not even read the quote I supplied?
How is rendering a cube any different than rendering a chunk of concrete or glass?
Slightly OT, but I'm pretty sure MS has data centers around the world, not just in the USA; I know there is one in Dublin, for example. Honestly not sure about Sony & PSNow (I would assume they are the same).
The problem I have with the MS Azure cloud and Sony PSNow is that all the latency stuff assumes everyone lives in the USA with a data centre down the road.
As a person in the EU, I just can't see how this stuff will translate to a worldwide audience.
Taking COD for example, the time to kill for 2 shots at 720 RPM is 75 milliseconds. For some guns it's 100 ms.
For me, online games' ping should be way less than 50 ms to make many games playable; add multiple people and I just can't see streaming or cloud compute working for anything other than a select few (worldwide).
Personally, I think distributed servers worldwide would do a lot more for gamers, as would games running at 60 FPS with decent net code, than this PR nonsense.
The rendering is actually no different, but it's not the difficult part. Rendering millions of polygons per frame is something any GPU has been able to do for years.
One of the much more difficult parts is interaction (I won't even consider the destructible part of the building demo, which is no easy task either). Just to give you a quick idea: if you want to simulate the interaction of 1,000 objects with a static environment, you have to do 1,000 tests of physical interaction, one per object. If you want to simulate the interaction of 1,000 objects with each other (admittedly in a naive and straightforward way), you have to do 1,000 × 999 = 999,000 tests of physical interaction.
It's not even in the same order of magnitude.
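The counting argument above can be sketched directly (function names here are illustrative, not from any engine):

```python
# Object-vs-static-world checks grow linearly, while naive object-vs-object
# checks grow quadratically: every object is tested against every other one.
def static_tests(n: int) -> int:
    """One test per object against the static environment."""
    return n

def pairwise_tests(n: int) -> int:
    """Naive all-pairs interaction: n * (n - 1) ordered pairs."""
    return n * (n - 1)

n = 1000
print(static_tests(n))    # 1000 tests
print(pairwise_tests(n))  # 999000 tests, three orders of magnitude more
```

Doubling the object count doubles the first number but roughly quadruples the second, which is why self-interacting debris is so much harder than debris bouncing off level geometry.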
The problem I have with the MS Azure cloud and Sony PSNow is that all the latency stuff assumes everyone lives in the USA with a data centre down the road.
As a person in the EU, I just can't see how this stuff will translate to a worldwide audience.
Taking COD for example, the time to kill for 2 shots at 720 RPM is 75 milliseconds. For some guns it's 100 ms.
For me, online games' ping should be way less than 50 ms to make many games playable; add multiple people and I just can't see streaming or cloud compute working for anything other than a select few (worldwide).
Personally, I think distributed servers worldwide would do a lot more for gamers, as would games running at 60 FPS with decent net code, than this PR nonsense.
Are you serious with this?
Power of the Cloud - Microsoft Cloud Demo (simulating ~35,000 falling chunks)
Power of the GPU - Resogun (no internet required)
This will be very interesting, but how doctored and controlled will this test be? They could easily cook these numbers. Latency is a real factor here. You can't compare the world-class internet connection of a huge corporation to some dude who lives in bumfuck Nebraska.
Well, like you said, that would be a highly inefficient way to code the collision detection. The better way would be to set up some kind of hierarchy of areas and only check for collisions with nearby objects. If you know that one group of objects isn't anywhere near another group, then there is no reason to check for any interactions between the objects in the first group and those in the second.
Besides, don't graphics cards have to do essentially the same thing in order to determine which items are blocking the view of other items? It's my understanding that that's why GPUs are so good at physics simulations: they are already doing similar tasks.
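The "hierarchy of areas" idea from the post above is what physics engines call a broad phase. A minimal uniform-grid sketch (names like `cell_of` and `broad_phase_pairs` are made up for illustration, not from any real engine):

```python
# Hash each object into a coarse grid cell and only test pairs that share
# a cell; everything else is culled without a pairwise check.
from collections import defaultdict
from itertools import combinations

CELL = 4.0  # coarse cell size; a grid hit only *may* be a real collision

def cell_of(pos):
    x, y = pos
    return (int(x // CELL), int(y // CELL))

def broad_phase_pairs(objects):
    """objects: {id: (x, y)}. Returns candidate pairs for narrow-phase tests."""
    grid = defaultdict(list)
    for oid, pos in objects.items():
        grid[cell_of(pos)].append(oid)
    pairs = set()
    for ids in grid.values():
        pairs.update(combinations(sorted(ids), 2))
    return pairs

objects = {1: (0.5, 0.5), 2: (1.0, 1.0), 3: (9.0, 9.0)}
print(broad_phase_pairs(objects))  # only {(1, 2)} survives the broad phase
```

Note this toy version misses pairs that straddle a cell boundary; a real broad phase also checks neighbouring cells or uses object extents rather than points.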
The problem I have with the MS Azure cloud and Sony PSNow is that it all the latency stuff assumes everyone lives in USA with a data centre down the road.
I do wonder what will happen if you disconnect from the internet and decide to play crackdown. Will the buildings just have super shitty destruction physics or will the game won't work at all?
That's true, but even with the smartest heuristics possible, there's no way you can get a linear-time algorithm for the self-interaction of objects (I'd say the best-case scenario is N log N complexity instead of N²). Hence the two problems are very different.
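One standard way to get the roughly N log N behaviour mentioned above is sweep-and-prune: sort interval endpoints once (the N log N part), then sweep, emitting only pairs whose extents actually overlap on that axis. A minimal single-axis sketch (the function name and box format are illustrative):

```python
# Sweep-and-prune on one axis: sort 2N endpoint events, then walk them,
# pairing each newly opened interval with the currently active ones.
def sweep_and_prune(boxes):
    """boxes: list of (id, x_min, x_max). Returns x-overlapping id pairs."""
    events = []
    for oid, lo, hi in boxes:
        events.append((lo, 0, oid))  # 0 = open; sorts before a close at same x
        events.append((hi, 1, oid))  # 1 = close
    events.sort()
    active, pairs = set(), []
    for _, kind, oid in events:
        if kind == 0:
            pairs.extend((other, oid) for other in active)
            active.add(oid)
        else:
            active.discard(oid)
    return pairs

boxes = [("a", 0, 2), ("b", 1, 3), ("c", 5, 6)]
print(sweep_and_prune(boxes))  # [('a', 'b')]
```

The sweep itself is linear in the number of events plus the number of overlaps found, so the total cost is dominated by the sort when objects are mostly spread out, which is exactly the N log N best case the post describes.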
Why wouldn't a similar approach work with collision detection? Objects are plotted in z-order over a grid. If, when an object is plotted, there is already something there, then a collision has occurred. For speed, that grid could have much lower resolution than the real world, so that a collision on the grid would just indicate a potential collision in the world. At that point you check to see whether an actual collision occurred. It's just doing as I suggested before: breaking the world up into smaller areas and only testing for collisions inside an area.
I'm not aware of the more advanced functions of graphics cards, but the most common approach is more straightforward: the GPU draws everything, from furthest to closest, so the close objects overwrite those behind them.
The thing that makes GPUs good at some computations is that they're designed to do many tasks in parallel, because they're supposed to process pixels or small areas of an image independently. That's why they're great for something like Resogun (or so I suppose): each voxel is processed independently of the others, and when they're displayed all together it looks cool. But if you want the voxels to interact with each other, it's not as easily split into separate tasks, since they'll have to communicate with each other.
"In gameplay, dynamic cubes with collisions, floating around, you can get up to 200,000."
...
"The actual cubes - the physics and the collisions that you see bouncing, the geometry - that's all done on the GPU side."
Why wouldn't a similar approach work with collision detection?
Btw, Resogun does check for collisions.
All this cloud talk still sounds to me like a last-ditch effort to justify some of their original PR talk. None of it is going to matter to people who bought a One and don't have a stable internet connection. It may as well be locked behind Gold.
The GPU already has to sort by distance in order to handle one object occluding another. You even mentioned it in how you described it working. Why would it be more complex? At this point we are only trying to decide whether one object has the potential to interact with another. That is pretty straightforward.
You would still need to sort the objects by distance, and that's an N log N algorithm. Besides, collision for physics is probably a bit more complex than just drawing objects on top of each other.
Edit: I just want to say that I'm not claiming Resogun and the Microsoft demo are doing the exact same thing, only that the calculations are similar. The GPU makes a far better physics simulator than the CPU, which is what Microsoft was using in the demo.