
Microsoft Irides (a.k.a. cloud-powered VR) demo video

Ishan

Junior Member
Of course, "dead end" wasn't the appropriate word; it's research, after all. But even if game streaming becomes big in the future, how much latency could this technology help mitigate? Keep in mind that VR requires absolute minimum latency and high bandwidth because of the high resolution and refresh rate. Current infrastructure doesn't allow it, so how many years will it take to get past this limitation? By the time it does, hardware won't be a limitation.

But you're right, it's just research; with streaming evolving fast, someone's gotta do it.

Lag is an issue irrespective of hardware, unless you think we can do faster-than-light communication in the near future (under 100 years). This is definitely a concern and viable research.
 

mrklaw

MrArseFace
For demo purposes this is cool, but how practical would this be if it were popular? Compute power in the cloud isn't very high at the moment - it's more meant for database shit - so if you suddenly needed to spin up Titan X SLIs for everyone accessing cloud VR, the entire infrastructure would just fart and fall over.

I think this may have interesting side effects for wireless VR in the home, though. Using your normal PC as the 'cloud' and having a small amount of processing on the HMD to perform timewarping could be interesting.
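
To make that concrete, here's a minimal sketch of the HMD-side warp, assuming a simple uniform-shift reprojection (the function name and numbers are hypothetical; real warps run per-pixel on the GPU):

Code:
import math

# Sketch: the PC renders a frame for the head pose it sampled; the HMD
# re-samples its IMU just before scan-out and shifts the image to match
# the newer pose. A uniform shift is only valid for small rotations.
def timewarp_shift(yaw_at_render, pitch_at_render, yaw_now, pitch_now,
                   fov_h=math.radians(100), fov_v=math.radians(100),
                   width=1080, height=1200):
    dx = (yaw_now - yaw_at_render) / fov_h * width       # horizontal pixels
    dy = (pitch_now - pitch_at_render) / fov_v * height  # vertical pixels
    return dx, dy

# e.g. the head turned 2 degrees right since the frame was rendered:
print(timewarp_shift(0.0, 0.0, math.radians(2), 0.0))  # ~(21.6, 0.0)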
 

LeleSocho

Banned
The stupidity of this is baffling: they pick a thing like VR, where the one thing that absolutely cannot be present is latency, and power it with a technology that has latency as its major drawback.
 

orava

Member
Sony uses shortcuts and hacks to bypass hardware limitations

GAF: Brilliant! They truly are geniuses!

MS uses shortcuts and hacks to bypass hardware limitations

GAF: LOL this will never work.
 

DieH@rd

Banned
They are basing all of their hopes on a crapton of error correction, which is not really a rock-solid approach. Too many things can go wrong, and not everything can be solved with timewarp.

In the end, this could work, but I doubt it will be easy to implement and offer to everyone.
 

mrklaw

MrArseFace
Sony uses shortcuts and hacks to bypass hardware limitations

GAF: Brilliant! They truly are geniuses!

MS uses shortcuts and hacks to bypass hardware limitations

GAF: LOL this will never work.

If you're talking about Morpheus, then Sony just seems to be using a variation of timewarping, which makes sense when you have a locally rendered approach.

The MS research seems similar, only now you have internet infrastructure in between, so you potentially get slower responses to compensate for, or even no response in time.
 

Fafalada

Fafracer forever
jediyoshi said:
Interesting they'd go this route while Oculus and Valve backed away from time warp.
I don't know about Valve, but everyone else is using 2D image warp to correct for latency. Short of chasing vblank, it's the only way to get sub-20ms with current headset refresh rates.
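
For a rough sense of the numbers (all assumed, purely to illustrate the budget): at 90Hz a refresh lasts ~11ms, so a conventional sample-render-scanout pipeline already blows past 20ms, while a late warp re-samples the pose just before scan-out:

Code:
refresh_hz = 90
frame_ms = 1000 / refresh_hz   # ~11.1 ms per refresh

pose_sample_ms = 2             # sensor read + CPU submit (assumed)
gpu_render_ms = frame_ms       # a full frame of GPU work (assumed)
scanout_ms = frame_ms          # display scan-out

no_warp = pose_sample_ms + gpu_render_ms + scanout_ms  # ~24 ms motion-to-photon
late_warp = 1 + scanout_ms     # re-warp with a pose sampled ~1 ms before scan-out

print(round(no_warp, 1), round(late_warp, 1))  # 24.2 12.1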

bj00rn_ said:
I wonder how Carmack feels about Sony and MS using 2D warp in the ways they do.
These IBR methods are ~3 decades old, and in the '90s MS did more research into them than most.
 

Zaptruder

Banned
The stupidity of this is baffling: they pick a thing like VR, where the one thing that absolutely cannot be present is latency, and power it with a technology that has latency as its major drawback.

The technology is about reducing latency through prediction. It seems like the ideal candidate tech for applying this sort of solution.

In the medium-to-long-term future (5-10 years), this will likely be a critical part of VR technology for many users.
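
As a sketch of what "prediction" means here (constant-velocity dead reckoning; real systems filter the signal and add acceleration terms, and these names are hypothetical):

Code:
def predict_yaw(yaw_deg, yaw_velocity_dps, round_trip_s):
    # Extrapolate where the head will point one network round trip from
    # now, so the server renders the view the client will need on arrival
    # rather than the view it asked for.
    return yaw_deg + yaw_velocity_dps * round_trip_s

# e.g. 120 ms RTT with the head turning at 60 deg/s:
print(predict_yaw(0.0, 60.0, 0.120))  # render ~7.2 degrees ahead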
 

M3d10n

Member
While latency for cloud-based rendering is a major problem, the concept could be applied to local rendering and make wireless VR glasses possible.

Right now, all glasses either use cables to display images rendered on another device at low latency, or have limited graphical capabilities due to local rendering (Samsung Gear VR). By mixing both, the graphics could be produced by an external device like a PC or console, and the hardware in the glasses would work to mask the wireless latency.

Heck, this could be done today using the Gear VR and a PC.
 
The fundamental problem with this is the same as the fundamental problem with assuming it's possible to have an "always on" system at all - it would require reinventing the way the internet actually works.

It's a long series of links in a chain, and any corruption, disruption, or sub-optimal routing at any point in that chain fucks the entire thing.

I mean, it's easy to compare internet accessibility to something like electricity as an infrastructure service in terms of usefulness, but a bunch of dicks on Twitter can't knock out a power grid like they can a network cluster.
 

Harlequin

Member
I honestly think that making so many sacrifices for mobility isn't justified. To properly use mobility in VR games, the player actually needs the space to walk around in. Sure, some of it can be faked, as has been discussed in previous threads, but even then you still need a certain minimum amount of open, empty space, and mechanics like jumping, hand-to-hand combat, or many forms of environment interaction will be either too strenuous or too dangerous (you might jump into your living room wall, knock over your TV, or slip, fall, and land on your VR headset). I think the best way to combine VR with traditional gameplay is to simply sit or stand in place and control character movement with the controller, the way it's done in non-VR games. You can still mix that with actual gesture control for certain actions, like crouching down to make your character crouch, or aiming your controller in a specific direction to make your character aim their gun that way, but controlling actual movement through the game world with real-life walking is probably going to be difficult to combine with traditional gameplay.

That being said, this is merely research, not a consumer product, and researching possible solutions to problems isn't a bad thing IMO. Worst case scenario, you figure out what doesn't work.
 

Josman

Member
Lag is an issue irrespective of hardware unless you think we can do faster than light speed communication in the near future (under 100 years) . This is definitely a concern and viable research .

That's not what I said. I meant that by the time connection latency becomes as short as local computing, there won't be any need for cloud processing: it's going to take decades, and by then hardware will be more powerful and cheaper than this method of powering VR devices. Rendering methods like foveated rendering will also reduce the horsepower required. At present it's a concern but not viable; in the future it won't be a concern.

Maybe it can work for local streaming, but it would still add more latency at 1440p 90Hz.
 
That's not what I said. I meant that by the time connection latency becomes as short as local computing, there won't be any need for cloud processing: it's going to take decades, and by then hardware will be more powerful and cheaper than this method of powering VR devices. Rendering methods like foveated rendering will also reduce the horsepower required. At present it's a concern but not viable; in the future it won't be a concern.

Maybe it can work for local streaming, but it would still add more latency at 1440p 90Hz.
Exactly. Streaming would NEVER "catch up" to local processing, because both technologies benefit from the same improvements. The faster you can process on a server, the faster you can process locally. It is literally impossible for a server to magically do a job many times faster than a desktop when they are purchased in the same year.
All the downsides of local VR will disappear BEFORE streaming VR is actually tolerable. One can't assume the competition will be stagnant; it never is.

I wonder how many people even know what the Red Queen Effect is?
 

RowdyReverb

Member
I honestly think that making so many sacrifices for mobility isn't justified. To properly use mobility in VR games, the player actually needs the space to walk around in. Sure, some of it can be faked, as has been discussed in previous threads, but even then you still need a certain minimum amount of open, empty space, and mechanics like jumping, hand-to-hand combat, or many forms of environment interaction will be either too strenuous or too dangerous (you might jump into your living room wall, knock over your TV, or slip, fall, and land on your VR headset). I think the best way to combine VR with traditional gameplay is to simply sit or stand in place and control character movement with the controller, the way it's done in non-VR games. You can still mix that with actual gesture control for certain actions, like crouching down to make your character crouch, or aiming your controller in a specific direction to make your character aim their gun that way, but controlling actual movement through the game world with real-life walking is probably going to be difficult to combine with traditional gameplay.

That being said, this is merely research, not a consumer product, and researching possible solutions to problems isn't a bad thing IMO. Worst case scenario, you figure out what doesn't work.
Suppose they were able to figure out a way to make VR work with no local hardware aside from the goggles, and could guarantee good experiences for the end user with little to no tweaking or tinkering on their part. Such a product would sell many millions and would be embraced by the general public much like the Wii was.

The biggest hurdle for market acceptance of VR is that it is not a casual-use product in its current state. It demands some amount of tech know-how from its users and requires costly computing equipment to provide a high-quality VR experience. Releasing a standalone VR headset that only requires an internet connection to play games is the holy grail for this emerging market. Mobility is very important to making this thing viable.
 

bj00rn_

Banned
I don't know about Valve, but everyone else is using 2D image warp to correct for latency. Short of chasing vblank, it's the only way to get sub-20ms with current headset refresh rates.


These IBR methods are ~3 decades old, and in the '90s MS did more research into them than most.

That wasn't my point. Warping the whole scene means losing updates to depth/occlusion details; in that context, IIRC both Carmack and, recently, Nvidia advised that developers shouldn't use it deliberately to increase framerates (like Sony does with Morpheus (edit: well, it's an option for developers to (ab)use if they like)). With that said, I guess it's necessary to do just that on a greater scale on mobile phones, where you need to hit 60fps at all times but have throttled processors.

Disclaimer: Not claiming to be an expert..
 
Did you even watch the video?

Are you aware that in order to predict what the player wants to do, the server would need to do at least twice as much processing as it would if everything were running locally?
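
Roughly, because speculation means rendering every plausible future, not just one. A toy model of that cost (names and numbers hypothetical):

Code:
# Server side: render one frame per candidate future head pose.
def render_speculative_frames(candidate_yaws, render):
    return [{"yaw": y, "image": render(y)} for y in candidate_yaws]

# Client side: keep whichever prediction landed closest, then timewarp
# away the residual error locally.
def client_pick(frames, actual_yaw):
    best = min(frames, key=lambda f: abs(f["yaw"] - actual_yaw))
    return best["image"], actual_yaw - best["yaw"]

# 3 candidate yaws = 3x the rendering work of a local rig per shown frame.
frames = render_speculative_frames([-5.0, 0.0, 5.0], lambda y: f"img@{y}")
print(client_pick(frames, actual_yaw=3.0))  # ('img@5.0', -2.0)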

This is something I keep noticing whenever "The Cloud" is brought up. Supporters of the Cloud assume that the servers running it are powered by magic, and argue that "it is better to not have to upgrade your home computer".

Well, guess what? To do what MS wants streamed VR to be, MS would have to build a massive gaming computer PER USER and then rent it out to you. And the outcome would be worse than just using that same computer in your own house, running VR locally.

How do you suppose MS is going to get their money back on these gaming desktop servers? By charging you through the nose, making sure you pay for the cost of those servers. That means you are renting these computers, ending up with both a worse gaming experience AND spending more money.
 

EvB

Member
Are you aware that in order to predict what the player wants to do, the server would need to do at least twice as much processing as it would if everything were running locally?

It's not all about gaming; this technology is totally applicable to AR projects as well. Unless you plan on walking around with a PC strapped to your back, they need to be thinking about solutions that let this kind of technology work out and about.

While there may be powerful local hardware available, it increases the entry cost for users, and there will always be more computing available in the cloud than a single user can carry with them.

And it's not even just about having it locally; it's about having the computing power connected untethered, which is a huge pain in the ass for VR.
 

low-G

Member
Does anyone on NeoGAF realize the latencies necessary for immersive VR? Using cloud computing for VR is literally impossible with traditional physics. The only thing MS can do is supplement complicated physics calculations (a crumbling building) or lighting. Just trying to estimate where the person's head will be is completely useless and won't help immersion.

That is, the warping technology will never allow for half decent VR.

Crap & lies project for investment dollars.
 
Sounds like a waste of resources when they could just make a decently powered VR headset with an on-board GPU to help with latency on the local/micro level.
 

Noobcraft

Member
^^ @low-g Didn't see the tech demo. I'm not saying the technology is viable on a mass scale, especially not currently, but as someone who doesn't care what Microsoft spends their money on: if they think this can work, they should go for it.
 

DavidDesu

Member
Seems like a viable solution when you're working with a static or predictable VR scene, since predicted frames are being created and sent down in advance of knowing what is about to happen (if I understand correctly), but surely this method goes out the window when dealing with an interactive, player-dependent scenario, or worse, a multiplayer scenario with other players in your VR world...
 

Helznicht

Member
I remember prediction in Quake to overcome internet latency on mouse & KB inputs. It felt smoother but totally affected gameplay, because it got it wrong more than right. I usually turned it off and lag-aimed.
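
For anyone who never saw it, the idea was roughly this (a sketch with hypothetical names, not Quake's actual code):

Code:
# Client-side prediction: start from the last server-acknowledged
# position and locally re-apply every input the server hasn't confirmed
# yet. Input feels instant; when the server disagrees, the client gets
# snapped back - that correction is the "got it wrong" feeling.
def predicted_position(server_pos, unacked_moves, speed=5.0, dt=1/60):
    pos = server_pos
    for direction in unacked_moves:  # e.g. +1 forward, -1 backward
        pos += direction * speed * dt
    return pos

print(predicted_position(10.0, [1, 1, 1]))  # ~10.25, before the server acks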
 

low-G

Member
^^ @low-g Didn't see the tech demo. I'm not saying the technology is viable on a mass scale, especially not currently, but as someone who doesn't care what Microsoft spends their money on: if they think this can work, they should go for it.

^^ didn't understand the technology.

It's not viable, period. Their casting of the scenario is misleading, but only to confuse the technologically soft-headed.

As another poster mentioned, this would work for one of those roller coaster simulations, as long as your head is strapped down tight, because the second you take a quick peek in any direction it is literally impossible for a server to predict that, and boom, immersion is lost. Done.

They're also casting false dilemmas. Thermal radiation near the eyes? Really?
 
It's interesting that, by outsourcing part of the processing to the cloud, this device could be much cheaper than your standard VR solution. Certainly not as good, but the broader approach may give it an advantage in the market.
My main concern about VR is the cost (not only for gaming but overall), and this seems like a fair solution.
 

shandy706

Member
For those that don't want to, or can't, watch the video:

Without the technology:

[animated GIF]

With the technology:

[animated GIF]
 

ZehDon

Gold Member
While latency for cloud based rendering is a major problem, the concept could be applied to local rendering and make wireless VR glasses possible...
Exactly my thinking, too. Microsoft's example used a total latency of 120ms, which is well above what all but the best internet connections (at least in my part of the world) could reliably provide. But that's actually pretty achievable on a home network. Could be interesting.
 
This tech is limited more by internet infrastructure than anything. We still have people in the US on dial-up.
So this won't work until everyone has amazing internet speeds? I don't understand this kind of thinking; people on dial-up likely won't even be interested in this.
 

strata8

Member
Exactly my thinking, too. Microsoft's example used a total latency of 120ms, which is well above what all but the best internet connections (at least in my part of the world) could reliably provide. But that's actually pretty achievable on a home network. Could be interesting.

I think something's wrong with your internet connection if you're getting much worse than 120ms. I'm 25km from Sydney and get a ping of around 10ms, and I've seen people 500-600km away with pings of 30-35ms. On a home network it should be 5ms or less.
 
I think many people fail to realize that these big research departments are not looking to release a product next week; they are exploring technology that could become viable in 10 or even 20 years' time.

Internet latencies will always be behind local hardware, but that's irrelevant. If it works, it works; there doesn't always have to be one way of doing things, and each approach can have its own strengths and weaknesses.

There are already many millions of people who have pretty fast internet as it is, and the speeds and numbers are rising daily. Technology advances by pushing forward, not by staying in the past.
 

Juanfp

Member
^^ didn't understand the technology.

It's not viable, period. Their casting of the scenario is misleading, but only to confuse the technologically soft-headed.

As another poster mentioned, this would work for one of those roller coaster simulations, as long as your head is strapped down tight, because the second you take a quick peek in any direction it is literally impossible for a server to predict that, and boom, immersion is lost. Done.

They're also casting false dilemmas. Thermal radiation near the eyes? Really?

Impossible for the server? That's what the statistics and predictions are for. Saying "it's not viable, period" is very close-minded thinking. And it doesn't necessarily break immersion; we have to see what solution they use when the server gets the prediction wrong.
 
Because they will kill the servers if the game doesn't do well.

Cloud servers don't really work that way. They are virtual servers that can be spun up on demand. If a game is no longer being played, the cloud hardware will simply be doing something else.
 

gofreak

GAF's Bob Woodward
Cloud + VR is pretty exciting, IMO. I'm not sure this particular approach is necessarily the best end-game either - I think there are other avenues to explore that don't rely so much on prediction and could be more resilient/better.
 

Juanfp

Member
Because they will kill the servers if the game doesn't do well.

From what they show, this is not exclusive to one game, so it could be applicable to a lot of games. And even if they "kill" the servers, which I don't think MS has ever done, you could still use the game, because this is meant to enhance your experience, not replace it: without internet you would just play the way the VR headset was designed.
 
From what they show, this is not exclusive to one game, so it could be applicable to a lot of games. And even if they "kill" the servers, which I don't think MS has ever done, you could still use the game, because this is meant to enhance your experience, not replace it: without internet you would just play the way the VR headset was designed.

The point of a thin client is that if a server is not present, it does nothing.
EDIT:
Like a Chromebook. No internet, no worky.
 

Ovek

7Member7
The server predicting head movements and the "frame warping" it does to correct incorrect predictions won't be at all annoying.
 

JaggedSac

Member
I remember prediction in Quake to overcome internet latency on mouse & KB inputs. It felt smoother but totally affected gameplay, because it got it wrong more than right. I usually turned it off and lag-aimed.

The vast majority of games still do prediction in MP.
 

Ishan

Junior Member
^^ didn't understand the technology.

It's not viable, period. Their casting of the scenario is misleading, but only to confuse the technologically soft-headed.

As another poster mentioned, this would work for one of those roller coaster simulations, as long as your head is strapped down tight, because the second you take a quick peek in any direction it is literally impossible for a server to predict that, and boom, immersion is lost. Done.

They're also casting false dilemmas. Thermal radiation near the eyes? Really?

Are you really treading into the territory of implying that MSR researchers are trying to mislead and aren't smart? Maybe the video is misleading, but let me assure you, MSR has some of the brightest minds on the planet. In R&D on algorithms and computer science they are far superior to Sony or any other gaming company.
 