
Phil Spencer: Next Cloud Demo Being Planned Will Show BW/CPU/Latency Info

manual4

Banned
Power of the Cloud - Microsoft Cloud Demo (simulating ~35,000 falling chunks)
[image: 05X3QVF.jpg]


Power of the GPU - Resogun (no internet required)

[image: 3Pe9.gif]

they are doing completely different things
 

nynt9

Member
Nothing like a complex and unreliable solution to a self created problem. Cloud has a place in all future tech but compute won't be a good fit for several years. Put a good gpu in the next console please.

This is a valid point imo. What game really needs the cloud? They need to justify the cloud for people to buy into the idea.

More tweets -


Age of Ascent is paying $75 per hour for 100,000 users.

See, I brought this up earlier in the thread and was dismissed. If a game uses the cloud intensively, it will be quite expensive. So for devs to get really extreme results from the cloud, there needs to be a way to make that money back. If you pay $75/hr for 100k users that's 12 grand a week.
 

Kayant

Member
This is a valid point imo. What game really needs the cloud? They need to justify the cloud for people to buy into the idea.



See, I brought this up earlier in the thread and was dismissed. If a game uses the cloud intensively, it will be quite expensive. So for devs to get really extreme results from the cloud, there needs to be a way to make that money back. If you pay $75/hr for 100k users that's 12 grand a week.

About $657,000 a year if it runs around the clock..... Good point. It brings us back to MS saying devs have free access to Thunderhead on XB1, yet not even all of their own titles with MP use it. There has to be some clause to that, IMO. Although XB1 devs would probably get a good discount too, as Respawn said a while ago.
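The quoted rate is easy to sanity-check. A minimal sketch, assuming the flat $75/hour figure from the tweet and round-the-clock usage (real Azure pricing varies by instance type and region):

```python
# Back-of-envelope cloud cost at the flat rate quoted in the thread.
# The $75/hour figure is from the Age of Ascent tweet; everything else
# here is simple arithmetic, not real Azure pricing.
HOURLY_RATE = 75  # dollars per hour for ~100,000 users

weekly = HOURLY_RATE * 24 * 7    # 168 hours in a week
yearly = HOURLY_RATE * 24 * 365  # 8,760 hours in a year

print(f"per week: ${weekly:,}")  # per week: $12,600
print(f"per year: ${yearly:,}")  # per year: $657,000
```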
 

Alx

Member
But it's not particles....

Voxels, particles, it doesn't change the fact that it's not the same task. Handling small items independently is relatively common and can indeed be parallelized on a GPU (I don't think those voxels bounce off each other).
Modelling the non-scripted interaction of all the parts of a destructible building is another story. Do you really think they would bother doing a technical demo at a big conference if all there was to it was "moving 30,000 objects"?

I'm not sure that's how the Azure virtual servers work. I think they're spun up as the game requests them.

Yeah it's usually an "on demand" model. If nobody needs a server, none will be running and nobody will have to pay for it.
 
Physics engine vs. particle rendering, i.e. completely different things.

Did you not even read the quote I supplied?

"The entire environment - everything - is built up from these voxels," explains Kruger. "All of the cubes that you see flying around - there's no gimmick, no point sprites, it's not a particle effect, they're actual physical cubes."

they are doing completely different things
How is rendering a cube any different from rendering a chunk of concrete or glass?
 

nynt9

Member
About $657,000 a year if it runs around the clock..... Good point. It brings us back to MS saying devs have free access to Thunderhead on XB1, yet not even all of their own titles with MP use it. There has to be some clause to that, IMO. Although XB1 devs would probably get a good discount too, as Respawn said a while ago.

And if developers have to pay anything to use the cloud, they would have to consider whether they really need to use the cloud to implement slightly more impressive destruction or something. Implementing and testing stuff like this will also cost extra. It's a lot of cost, so they would want the cloud stuff to be really worth it. That means there needs to be a really compelling use case for the cloud, which we haven't really seen yet.

No developer is going to pay several grand a week to have their explosions look slightly more impressive.
 

the_champ

Banned
Problem is they have a history of releasing half-baked ideas like Kinect. If they were like Apple, who won't announce something till it is ready, people wouldn't be this skeptical.

This is a good point, indeed. Kinect didn't have a reasonable software catalog when it was released. Hopefully Microsoft learned this lesson. Or not.
 

the_champ

Banned
And if developers have to pay anything to use the cloud, they would have to consider whether they really need to use the cloud to implement slightly more impressive destruction or something. Implementing and testing stuff like this will also cost extra. It's a lot of cost, so they would want the cloud stuff to be really worth it. That means there needs to be a really compelling use case for the cloud, which we haven't really seen yet.

No developer is going to pay several grand a week to have their explosions look slightly more impressive.

Well, it's not just "slightly more impressive".
Anyway, I don't think the whole idea of cloud gaming is to move more polygons, but to create the concept of persistent worlds, a much-debated topic that is especially interesting for MMOs.
 

nynt9

Member
Well, it's not just "slightly more impressive".
Anyway, I don't think the whole idea of cloud gaming is to move more polygons, but to create the concept of persistent worlds, a much-debated topic that is especially interesting for MMOs.

But we already have that in pretty much every MMO ever, so what's new here? (Rhetorical question)
 
Did you not even read the quote I supplied?




How is rendering a cube any different from rendering a chunk of concrete or glass?

If I'm not mistaken, concave and convex shapes require different amounts of computational resources when it comes to physics interaction. But I'm not comfortable enough with the material to make a strong claim.
 

Alx

Member
How is rendering a cube any different from rendering a chunk of concrete or glass?

The rendering is actually no different, but it's not the difficult part. Rendering millions of polygons per frame is something any GPU has been able to do for years.
One of the much more difficult parts is interaction (I won't even consider the destructible part of the building demo, which is no easy task either). Just to give you a quick idea: if you want to simulate the interaction of 1,000 objects with a static environment, you have to do 1,000 tests of physical interaction, one per object. If you want to simulate the interaction of 1,000 objects with each other (admittedly in a naive and straightforward way), you have to do 1,000 × 999 = 999,000 tests of physical interaction.
It's not even in the same order of magnitude.
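The counting argument above can be sketched directly. A minimal example, assuming the naive all-pairs scheme the post describes (every object tested against every other object; the function name is made up for illustration):

```python
# Compare interaction-test counts for N objects: against a static
# environment (one test per object) vs. against each other (every
# ordered pair of distinct objects, as in the naive scheme above).

def interaction_tests(n):
    vs_environment = n            # one test per object
    vs_each_other = n * (n - 1)   # every ordered pair of distinct objects
    return vs_environment, vs_each_other

env, pairs = interaction_tests(1000)
print(env, pairs)  # 1000 999000
```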
 

cheezcake

Member
How is rendering a cube any different from rendering a chunk of concrete or glass?

One is computing (in real time) variable-size chunk destruction; the other is just subdividing a structure into fixed-size cubes. Depending on the collision detection model, it's probably a lot cheaper to perform on the Resogun cubes as well.
 
J

JoJo UK

Unconfirmed Member
The problem I have with the MS Azure cloud and Sony PS Now is that all the latency stuff assumes everyone lives in the USA with a data centre down the road.

As a person in the EU, I just can't see how this stuff will translate to a worldwide audience.

Taking COD as an example, the time to kill for 2 shots at 720 RPM is 75 milliseconds. For some guns it's 100 ms.

For me, online games' ping should be way less than 50 ms to make many games playable; add multiple people and I just can't see streaming or cloud compute working for anyone other than a select few (worldwide).

Personally I think distributed servers worldwide would do a lot more for gamers, as would games running at 60 FPS with decent netcode, rather than this PR nonsense.
Slightly OT, but I'm pretty sure MS has data centers around the world, not just in the USA; I know there is one in Dublin, for example. Honestly not sure about Sony and PS Now (I would assume they are the same).
 
The rendering is actually no different, but it's not the difficult part. Rendering millions of polygons per frame is something any GPU has been able to do for years.
One of the much more difficult parts is interaction (I won't even consider the destructible part of the building demo, which is no easy task either). Just to give you a quick idea: if you want to simulate the interaction of 1,000 objects with a static environment, you have to do 1,000 tests of physical interaction, one per object. If you want to simulate the interaction of 1,000 objects with each other (admittedly in a naive and straightforward way), you have to do 1,000 × 999 = 999,000 tests of physical interaction.
It's not even in the same order of magnitude.

Well, like you said, that would be a highly inefficient way for the collision detection to be coded. The better way would be to set up some kind of hierarchy of areas and only check for collisions with nearby objects. If you know that one group of objects isn't anywhere near another group of objects, then there is no reason to check for any interactions between the objects in the first group and those in the second.

Besides, don't graphics cards have to do essentially the same thing in order to determine which items are blocking the view of other items? It's my understanding that that is why GPUs are so good at physics simulations: they are already doing similar tasks.
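The "hierarchy of areas" idea is essentially a broad-phase collision step. A minimal sketch, assuming point-like objects bucketed into a uniform 2-D grid (cell size, coordinates, and the helper name are made up for illustration); only objects in the same or neighbouring cells are ever paired:

```python
# Broad-phase collision via a uniform grid: bucket objects by cell,
# then only propose pairs that share a cell or sit in adjacent cells.
# Far-apart groups never generate a test against each other.
from collections import defaultdict
from itertools import combinations

def grid_candidate_pairs(positions, cell_size=1.0):
    cells = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        cells[(int(x // cell_size), int(y // cell_size))].append(i)
    pairs = set()
    for (cx, cy), members in cells.items():
        # gather objects in this cell and its 8 neighbours
        nearby = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nearby.extend(cells.get((cx + dx, cy + dy), []))
        pairs.update(combinations(sorted(set(nearby)), 2))
    return pairs

# Two clusters far apart: no cross-cluster pairs are ever proposed.
pts = [(0.1, 0.1), (0.2, 0.2), (50.0, 50.0), (50.1, 50.1)]
print(grid_candidate_pairs(pts))  # {(0, 1), (2, 3)}
```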
 
The problem I have with the MS Azure cloud and Sony PSNow is that it all the latency stuff assumes everyone lives in USA with a data centre down the road.

As a person in EU I just cant see how this stuff will translate to a world wide audience ?

Taking COD for example, the time to kill for 2 shots at 720 RPM is 75 milliseconds. For some guns its 100 ms.

For me online games ping should be way less than 50 ms to make many games playable, add multiple people and I just cant see adding streaming or cloud compute is going to work other than for a select few (world wide).

Personally I think distributed servers world wide would do allot more for gamers, as would games running in 60 FPS with decent net code than this PR nonsense,

That's not true for Azure. MS has data centers around the world, and it was one of the reasons they launched so late in other countries.
 

Hellshy.

Member
This will be very interesting. But how doctored and controlled is this test going to be? They could easily cook these numbers. Latency is a real factor here. You can't compare the world-class internet connection of a huge corporation to some dude who lives in bumfuck Nebraska.

You can't compare to a dude who lives in New York or LA either. It would be nice if they added the distance from the server to the list of specs.
 

Alx

Member
Well, like you said, that would be a highly inefficient way for the collision detection to be coded. The better way would be to set up some kind of hierarchy of areas and only check for collisions with nearby objects. If you know that one group of objects isn't anywhere near another group of objects, then there is no reason to check for any interactions between the objects in the first group and those in the second.

That's true, but even with the smartest heuristics possible, there's no way you can get a linear algorithm for self-interaction of objects (I'd say the best-case scenario would be N log N complexity, instead of N²). Hence why the two problems are very different.

Besides, don't graphics cards have to do essentially the same thing in order to determine which items are blocking the view of other items? It's my understanding that that is why GPUs are so good at physics simulations: they are already doing similar tasks.

I'm not aware of the more advanced functions of graphics cards, but the most common approach is more straightforward: the GPU draws everything, from furthest to closest, so the close objects overwrite those behind them.
The thing that makes GPUs good at some computations is that they're designed to do many tasks in parallel, because they're supposed to process pixels or small areas of an image independently. That's why they're great for something like Resogun (or so I suppose): each voxel is processed independently from the others, and when they're displayed all together it looks cool. But if you want the voxels to interact with each other, then it's not as easily split into separate tasks, since they'll have to communicate with each other.
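The back-to-front drawing described above is the classic painter's algorithm, and it can be sketched in a few lines. This toy version is an assumption-heavy illustration (one "pixel" per object, made-up scene names), not how a real GPU works:

```python
# Painter's algorithm in miniature: sort by depth, draw the farthest
# objects first, and let nearer objects overwrite them in the buffer.

def paint(objects):
    framebuffer = {}
    # reverse sort => largest depth (farthest) drawn first
    for depth, name, pixel in sorted(objects, reverse=True):
        framebuffer[pixel] = name  # closer objects overwrite farther ones
    return framebuffer

scene = [(10.0, "wall", (0, 0)), (2.0, "player", (0, 0)), (5.0, "crate", (1, 0))]
print(paint(scene))  # {(0, 0): 'player', (1, 0): 'crate'}
```

The player (depth 2.0) ends up in front of the wall (depth 10.0) at the shared pixel, which is exactly the overwrite behaviour the post describes.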
 
I do wonder what will happen if you disconnect from the internet and decide to play Crackdown. Will the buildings just have super shitty destruction physics, or will the game not work at all?
 

AmFreak

Member
The problem I have with the MS Azure cloud and Sony PS Now is that all the latency stuff assumes everyone lives in the USA with a data centre down the road.

If anything, most of the EU should have an advantage because the population density is much higher in most countries. Put one datacentre in the middle of Germany and you reach >100 million people within a perimeter of ~500 km.
 

bidguy

Banned
I do wonder what will happen if you disconnect from the internet and decide to play Crackdown. Will the buildings just have super shitty destruction physics, or will the game not work at all?

It could be online-only, like The Crew, The Division and Destiny.
 

gofreak

GAF's Bob Woodward
I'm not sure about the claims of Resogun's particles and self-intersection, but being cubes-only, there'd be some optimisation possible that wouldn't be available with more general shapes.

I'm not sure the cloud demo of 35k objects is impressive because of its scale so much as it just being a proof of concept, though.

I think local power and GPU power are able to do very high-scale simulation, higher than 35k objects I'm sure. We've seen 'big' physics demos running on local and, increasingly, GPU hardware. On the console side, remember the Havok demo at the PS4 reveal? I think that was something like a million objects, and IIRC they gathered together after falling onto the scene, so there was self-collision going on there.

I'm not sure the cloud demo was 'gee-whizz' because of the scale. I think the idea was to just show that you could do physics simulation server side...(which, tbh, is 'just' what many MP games have done with servers for a long time, just probably not typically at this scale or for what some might consider peripheral effects).
 
That's true, but even with the smartest heuristics possible, there's no way you can get a linear algorithm for self-interaction of objects (I'd say the best-case scenario would be N log N complexity, instead of N²). Hence why the two problems are very different.

Like I said, it's not the N² blow-up you first stated.

I'm not aware of the more advanced functions of graphics cards, but the most common approach is more straightforward: the GPU draws everything, from furthest to closest, so the close objects overwrite those behind them.
Why wouldn't a similar approach work with collision detection? Objects are plotted in z-order over a grid. If, when an object is plotted, there is already something there, then a collision has occurred. For speed, that grid could have much less resolution than the real world, so that a collision on the grid would just indicate a potential collision in the world. At that point you check to see if an actual collision occurred. It's just doing as I suggested before: breaking the world up into smaller areas and only testing for collisions inside an area.

The thing that makes GPUs good at some computations is that they're designed to do many tasks in parallel, because they're supposed to process pixels or small areas of an image independently. That's why they're great for something like Resogun (or so I suppose): each voxel is processed independently from the others, and when they're displayed all together it looks cool. But if you want the voxels to interact with each other, then it's not as easily split into separate tasks, since they'll have to communicate with each other.

They only have to communicate with nearby voxels, and as I suggested, could do so in a manner similar to the way a GPU already works. I have no expertise in this at all, so any game programmer feel free to correct me, but the concept appears to be sound.

Btw, Resogun does check for collisions.
In gameplay, dynamic cubes with collisions, floating around, you can get up to 200,000.
...
The actual cubes - the physics and the collisions that you see bouncing, the geometry - that's all done on the GPU-side

Edit: I just want to say that I am not claiming Resogun and the Microsoft demo are doing the exact same thing, only that the calculations are similar. The GPU makes a far better physics simulator than the CPU, which is what Microsoft was using in the demo.
 

Alx

Member
Why wouldn't a similar approach work with collision detection?

You would still need to sort the objects by distance, and that's an N log N algorithm. ;) Besides, collision for physics is probably a bit more complex than just adding objects on top of each other.
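The sort-then-test idea can be made concrete with "sweep and prune", a standard broad-phase technique: sort objects along one axis (the O(N log N) part), then only test pairs whose intervals overlap on that axis. A minimal 1-D sketch, with intervals standing in for bounding boxes (the function name and sample data are made up for illustration):

```python
# Sweep and prune in 1-D: after sorting by interval start, the inner
# scan can stop as soon as the next interval starts past the current
# interval's end, so far-apart objects are never tested against each other.

def sweep_and_prune(intervals):
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    candidates = []
    for idx, i in enumerate(order):
        lo_i, hi_i = intervals[i]
        for j in order[idx + 1:]:
            lo_j, _ = intervals[j]
            if lo_j > hi_i:
                break  # sorted order: no later interval can overlap either
            candidates.append(tuple(sorted((i, j))))
    return candidates

boxes = [(0, 1), (0.5, 1.5), (10, 11)]
print(sweep_and_prune(boxes))  # [(0, 1)]
```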

Btw, Resogun does check for collisions.

It's not clear to me whether those collisions are between two voxels, or between a voxel and the environment. In the second case it's a linear problem; in the first case it's not. And since it's not really apparent in the game content, I suppose they chose the simple path (because why choose the difficult one if you can't tell the difference in the end?).
But if they do have complex interaction with each other, then I could go with gofreak's explanation about the possibility of a strong optimization allowed by a simple scenario.
 
All this cloud talk still sounds to me like some last-ditch effort to justify some of their original PR talk. None of it is going to matter to people who bought a One and don't have a stable internet connection. It may as well be locked behind Gold.

So for the people who own a console and do have solid internet connections, should we care? Or should companies solely target the least-common-denominator?


I'm glad people with much different attitudes than this are the ones out there creating stuff, trying new stuff, maybe even pushing boundaries and looking toward the future. Imagine a world where everybody just looked at the current state of things and made every single decision and plan based on that.
 
You would still need to sort the objects by distance, and that's an N log N algorithm. ;) Besides, collision for physics is probably a bit more complex than just adding objects on top of each other.
The GPU already has to sort by distance in order to handle one object occluding another; you even mentioned it in how you described it working. Why would it be more complex? At this point we are only trying to decide if one object has the potential to interact with another. That is pretty straightforward.

Btw, I believe a similar technique was used on the PS3 in order to compensate for its weaker GPU. The Cell processor was used to eliminate parts of the scene that couldn't be seen, so the GPU didn't have to blindly draw everything.
 

cheezcake

Member
Edit: I just want to say that I am not claiming Resogun and the Microsoft demo are doing the exact same thing, only that the calculations are similar. The GPU makes a far better physics simulator than the CPU, which is what Microsoft was using in the demo.

But they're not. Resogun just subdivides a model into identical cubes and makes them explode outwards; the Microsoft demo is real-time deformation of structures (i.e. they're not precomputed destruction chunks), and it calculates trajectories and such based on projectile collisions. You're trying to reduce the issue to just the collision model, which isn't correct.
 