
Microsoft Irides (a.k.a. cloud-powered VR) demo video

nicjac

Member
Are you even treading into the territory of trying to imply MSR researchers are trying to mislead and aren't smart? Maybe the video is misleading, but let me assure you MSR has some of the brightest minds on the planet. On an R&D scale, in algorithms and comp sci, they are far superior to Sony or any other gaming company.

Indeed. I am lucky enough to be able to attend MSR seminars on a regular basis and they are usually far beyond most other research groups in many fields (machine learning, depth imaging, computer graphics), be it in industry or academia. They do fundamental research that sometimes has tangential commercial applications (e.g. advances in decision tree classifiers and depth imaging --> Kinect).

Rest assured that all the arguments that you guys bring up in this thread were considered by the research team. This was presented at seminars and conferences, with the brightest minds in the field trying to find flaws in their methodology. Of course it is not perfect or generalisable to all situations. This is fundamental research, and only a small piece of the puzzle.
 
It's not all about gaming; this technology is totally applicable to AR projects also.
Well, unless you plan on walking around with a PC strapped to your back, they need to be thinking about solutions that let this kind of technology work out and about.

Whilst there may be powerful local hardware available, it increases the entry cost for users, and there will always be more computing power available in the cloud than a single user can carry with them or has available locally.

And it's not even just about having it locally; it's about having the computing power available untethered, which is a huge pain in the ass for VR.

You are NOT going to save money. The infrastructure would be so expensive that MS would have to essentially rent you a gaming desktop remotely, by the hour. Seriously... you want paid online this badly? Because I don't.
 
While GAF is not a hivemind, and should not be presented as such, assigning an average response to a group of people saying different things is hardly new.
Still doesn't make it the right thing to do, because most of the time these aren't the same people who said these things.
 
We'll see how it works out when they actually release it.

I don't have anything against MS, but they have a habit of making their products look a lot better than they actually are.

Look at the whole Kinect thing and the recent HoloLens.
 
I don't know, but perhaps he's talking about a certain group of people's lack of criticism around Morpheus' "reprojection", which in some ways shares some similarities with Irides?

So, just PS4 owners then? Is that what you're trying to say, but too scared to push the boat all the way out? Be brave, just name these people and be done.
 

Shpeshal Nick

aka Collingwood
For anything cloud related to really work, Microsoft should really focus on compression techniques.

What are Netflix using? My brother-in-law was running a movie on his phone via Netflix with ONE bar of reception and it streamed flawlessly. I can't even watch preview trailers on Xbox One without buffering.

If MS can get their compression right on all the Azure based stuff, then I see the cloud computing processes having real world impact.
 
This tech is limited more by internet infrastructure than anything. We still have people in the US with dial up.

Yup, this is why PS Now and other live streaming systems are doomed at this point. Until we all have fiber access that doesn't cost more than a mortgage payment, this is all just a dream.

Anything based off a US server for my Canadian Rogers internet basically automatically has a 30-50ms penalty lols
 

Noobcraft

Member
For anything cloud related to really work, Microsoft should really focus on compression techniques.

What are Netflix using? My brother-in-law was running a movie on his phone via Netflix with ONE bar of reception and it streamed flawlessly. I can't even watch preview trailers on Xbox One without buffering.

If MS can get their compression right on all the Azure based stuff, then I see the cloud computing processes having real world impact.
Netflix used Silverlight (made by Microsoft) until 2014; now IIRC Netflix just uses HTML5.
Also, with 2-3 bars over 4G LTE it's not uncommon to have download speeds over 10 Mbps.
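The "one bar of reception" case is mostly down to adaptive bitrate streaming: the player measures throughput and drops to a lower-bitrate encode rather than buffering. A minimal sketch of that selection logic, with a made-up bitrate ladder (the numbers and resolutions are illustrative, not Netflix's actual renditions):

```python
# Hypothetical bitrate ladder (kbps -> resolution), loosely modeled on how
# adaptive streaming (DASH/HLS) ladders are structured. Illustrative only.
LADDER = [
    (235, "320x240"),
    (750, "608x342"),
    (1750, "768x432"),
    (3000, "1280x720"),
    (5800, "1920x1080"),
]

def pick_rendition(measured_kbps, safety=0.8):
    """Pick the highest rendition that fits under a safety margin of the
    measured throughput; fall back to the lowest if nothing fits."""
    budget = measured_kbps * safety
    best = LADDER[0]
    for bitrate, resolution in LADDER:
        if bitrate <= budget:
            best = (bitrate, resolution)
    return best

# One bar might still give ~2 Mbps, which is enough for SD video.
print(pick_rendition(2000))   # -> (750, '608x342')
print(pick_rendition(12000))  # -> (5800, '1920x1080')
```

The same principle would apply to an Azure-hosted game stream, though a game can't buffer ahead the way video on demand can.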
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
the video addresses this

using clever warp tricks to mask latency is really cool

Now you know that, practically speaking, this will never be better than rendering the data locally without the need for a network, right?
 
Yup, this is why PS Now and other live streaming systems are doomed at this point. Until we all have fiber access that doesn't cost more than a mortgage payment, this is all just a dream.

Anything based off a US server for my Canadian Rogers internet basically automatically has a 30-50ms penalty lols

The video has a demonstration where they drop from 120 to 13 ms of lag. I think you're covered.

Now you know that, practically speaking, this will never be better than rendering the data locally without the need for a network, right?

Define "better," though. Not everybody has/wants to buy a gaming PC to support local processing that's better than this.
 

Siphorus

Member
The amount of bandwidth that this would require would be way too much. Even assuming they somehow get 2160x1200 res (which is currently the upcoming norm for VR) down to 5 Mbps for video and audio, they still have to recalculate the frame (which requires source code), and they have to recalculate in multiple different directions for their software to interpolate the difference. Assuming it's the base case, that probably brings the requirement to 20 Mbps (4 directions, X and Y, assuming the interpolation handles X rotation and Y rotation). Then add in the fact that if it's a real game it would require additional frames to guess player input (character direction etc.), and the bandwidth requirement is probably going to be in the 50-60 Mbps ballpark, and that's with a very low estimate.

If they plan on guessing player input as well (incoming action, etc., to reduce streaming latency), this thing is pretty much a goner until 100 Mbps is available everywhere, and by then the processing power on the user end is solved, and this has more application in a wireless or mobile space.
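For what it's worth, here is that back-of-envelope estimate written out; every figure is a rough guess from the post above, not a measured number:

```python
# Back-of-envelope version of the estimate above. All inputs are the post's
# rough guesses, not measured figures.
base_stream_mbps = 5          # one 2160x1200 video+audio stream
view_directions = 4           # extra renders to cover X/Y head rotation
input_speculation_factor = 3  # crude multiplier for speculated player inputs

rotation_total = base_stream_mbps * view_directions                  # 20 Mbps
with_input_speculation = rotation_total * input_speculation_factor   # 60 Mbps

print(rotation_total, with_input_speculation)  # 20 60
```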
 

magnumpy

Member
120ms of latency is still too much. 20ms is the specified latency for the head tracking on the Oculus Rift. Find a way to lower the latency even further, or else what is the point in releasing a new product that is inferior to what's already available?
 
I honestly think that making so many sacrifices for mobility isn't justified. I mean, to properly use mobility in VR games the player actually needs to have the space to walk around in. Sure, some of it can be faked, as has been discussed in previous threads, but even then you still need a certain minimum amount of open, empty space, and mechanics like jumping or hand-to-hand combat or many forms of environment interaction will be either too strenuous or too dangerous (considering you may jump into your living room wall, knock over your TV, or slip/fall and land on your VR headset...). I think the best way to combine VR with traditional gameplay is to simply sit or stand in place and control character movement using the controller the way it's done in non-VR games (you can still mix that with actual gesture control for certain actions, like crouching down to make your character crouch or actually aiming your controller in a specific direction to make your character aim their gun in that direction, etc., but controlling actual movement through the game world with real-life walking is probably going to be difficult to combine with traditional gameplay).

That being said, this is merely research, not a consumer product, and researching possible solutions to problems isn't a bad thing IMO. Worst case scenario, you figure out what doesn't work.

What sacrifices? The point of this research is to make no sacrifice at all; they want it all: mobility, response time, and image quality.

Now, whether they are able to deliver is another story.

120ms of latency is still too much. 20ms is the specified latency for the head tracking on the Oculus Rift. Find a way to lower the latency even further, or else what is the point in releasing a new product that is inferior to what's already available?

120ms is the network latency the setup is able to hide. They reduce the perceived latency from 120ms down to 13ms in that video.
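The basic trick, as I understand it (simplified well past whatever the actual system does), is to render on the server for a predicted head pose and then warp the received frame to the latest pose right before display, so what you perceive is the local warp time rather than the network round trip. A toy sketch with made-up numbers and a 1-D "pose" standing in for a full orientation:

```python
# Toy sketch of latency hiding via pose prediction + late warp.
# Not MSR's actual pipeline; names and numbers are illustrative.

def predict_pose(pose, angular_velocity, lookahead_s):
    """Dead-reckon the head pose ahead by the expected round-trip time."""
    return pose + angular_velocity * lookahead_s

def warp(frame, rendered_pose, current_pose):
    """Stand-in for an image-space reprojection by the pose delta."""
    delta = current_pose - rendered_pose
    return f"{frame} warped by {delta:+.3f} rad"

round_trip_s = 0.120            # 120 ms of network + server render time
pose_now, ang_vel = 0.50, 0.8   # rad, rad/s (made-up values)

requested_pose = predict_pose(pose_now, ang_vel, round_trip_s)
# ... 120 ms later the frame arrives; the head has moved on ...
pose_at_display = 0.61
print(warp("cloud_frame_0042", requested_pose, pose_at_display))
```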
 

magnumpy

Member
What sacrifices? The point of this research is to make no sacrifice at all; they want it all: mobility, response time, and image quality.

Now, whether they are able to deliver is another story.

The Oculus Rift (and I would note they are using an OR DK1 in the video) doesn't seek to address mobility or image quality; those are two artificial limitations they have chosen to place upon themselves. OR achieves what it does entirely by addressing the response time. OR has addressed mobility through the Gear VR product. Image quality is a function of whatever host system is doing the actual rendering of the scene.

They need to achieve what OR already does first before trying to surpass it, else they end up with a product which is not even as good as what's come before. This video seems like early prototyping of potential future products, rather than actual development of a product which has a firm release date (Q1 2016 in the case of the Rift).

120ms is the network latency the setup is able to hide. They reduce 120ms down to 13ms in that video.

They attempt to hide latency by rendering all potential future frames in the cloud and discarding everything except the frames that are actually needed. I'll believe it when I see it. The latency that is added by the network (any network) is a hell of an additional problem to overcome; how about you just attempt to solve the problems that are already addressed in the OR?
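That "render every likely future, keep one" idea is the speculation step in a nutshell. A toy illustration (the input names and structure are made up; presumably the real system speculates further ahead and patches mispredictions with the warp tricks mentioned earlier in the thread):

```python
# Toy version of speculative cloud rendering: the server renders one frame per
# plausible input ahead of time, and the client keeps only the one that
# matches what the player actually did. Names are hypothetical.

CANDIDATE_INPUTS = ["turn_left", "turn_right", "move_forward", "idle"]

def server_speculate(tick):
    """Pretend to render one frame per candidate input for a future tick."""
    return {inp: f"frame[{tick}:{inp}]" for inp in CANDIDATE_INPUTS}

def client_select(speculated, actual_input):
    """Keep the frame matching the real input; drop the rest. Falling back
    to 'idle' stands in for misprediction handling."""
    return speculated.get(actual_input, speculated["idle"])

frames = server_speculate(tick=42)         # sent ~1 round trip before needed
print(client_select(frames, "turn_left"))  # frame[42:turn_left]
```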
 

Noobcraft

Member
For those that don't want to, or can't, watch the video.

Without the technology:
[prhwbi.gif]

With the technology:
[kqkibv.gif]
 

Fafalada

Fafracer forever
bj00rn_ said:
That wasn't my point. Warping the whole scene and losing updated depth/occlusion details, in that context, IIRC both Carmack and recently Nvidia advised that developers shouldn't use it deliberately to increase framerates.
IIRC they were cautioning against careless async-interpolation use, which can actually smooth out uneven framerate to an extent - but you shouldn't expect it to just "fix" framerate for you. AFAIK Sony cautions against that too.
A 60Hz upsample will interpolate about 2x as much missing sensor data as 90Hz native, so obviously the potential for noticing edge artifacts is higher, but there are ways to compensate - sensor predictions are used to render frames "in the future", increased pixel coverage to avoid missing edge data, etc.
In the end, user experiences will be the measuring stick, and empirical evidence suggests it's harder to differentiate it from native 90/120 than one might think.

I'd also note current interpolation is primitive, and we can do much better with data games typically have access to (velocity, depth and other buffers) - there's a lot of room to evolve VR rendering principles.
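To make the "we can do better than a plain warp" point concrete, here is a toy per-pixel reprojection that scatters pixels along a velocity buffer instead of shifting the whole frame by a single head-rotation delta. Everything here is invented for illustration; a real renderer would also use the depth buffer to resolve occlusion and fill holes properly:

```python
import numpy as np

# Toy motion-vector reprojection on a tiny "frame". Buffer sizes, contents,
# and the hole-filling strategy are all made up for illustration.
H, W = 4, 6
color = np.arange(H * W, dtype=np.float32).reshape(H, W)  # last rendered frame
velocity = np.zeros((H, W, 2), dtype=np.int32)            # per-pixel (dy, dx)
velocity[:, :3] = (0, 1)   # left half of the image drifting right by one pixel

def reproject(color, velocity):
    """Scatter each pixel along its motion vector to synthesize the next frame.
    Holes are naively filled by keeping the previous frame's pixel."""
    out = color.copy()
    ys, xs = np.mgrid[0:H, 0:W]
    ny = np.clip(ys + velocity[..., 0], 0, H - 1)
    nx = np.clip(xs + velocity[..., 1], 0, W - 1)
    out[ny, nx] = color[ys, xs]
    return out

print(reproject(color, velocity))
```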
 