
Microsoft Gave Take-Two's CEO A Demo Of Its Mysterious VR Tech

JordanN

Banned
hey if you don't want to do it I'll just start the company and try to do it and if successful give you 40% of the profits.

Ha, it was just something I thought up on the spot. Realistically, I wouldn't know where to begin (I don't have the money to start a company) and I got so many other things I need to do first, but I would jump back on the idea if an opportunity arises.
 
Ha, it was just something I thought up on the spot. Realistically, I wouldn't know where to begin (I don't have the money to start a company) and I got so many other things I need to do first, but I would jump back on the idea if an opportunity arises.

This kind of stuff is exactly why VR is so exciting for me. It's like the Wild West of the early days of the internet all over again, where all sorts of possibilities are literally just waiting for the right person, at the right time, with the right amount of drive, to make them a
virtual lol
reality.
 

Krejlooc

Banned
Ok, let's try it another way, then. I just googled "siggraph augmented reality" to find a state-of-the-art example, and this was one of the first results:
http://www.siggraph.org/discover/news/ar-sandbox-cutting-edge-earth-science-visualization

[Image: AR sandbox diagram and setup]

https://www.youtube.com/watch?v=g6fSS3cynDo

It's definitely augmented reality (adding information to a real image, which in that case is 100% real), and in fact it uses technology entirely different from what is used for VR.
So what now: are they wrong to call it AR, or will you claim that it is still an extension of VR?

Uh, this is inside-out tracking technology, which is used extensively in VR. This is, essentially, positional tracking, a major component of Virtual Reality. You have just proven my argument.
 

Alx

Member
Uh, this is inside-out tracking technology, which is used extensively in VR. This is, essentially, positional tracking, a major component of Virtual Reality. You have just proven my argument.

Seriously?
First, there's barely any tracking in that demo. All it does is 3D scanning and rendering from a different point of view. The only part you could associate with tracking is the hand detection used to trigger water generation, and even that's a stretch (you don't even need to track the hand, just detect "something big enough above a given height"). Associating that with VR is an even bigger stretch; it's like saying "it uses projective geometry, and VR does that too!".
What it does prove is that AR isn't tied to any particular technology. As long as you're displaying artificial information on a real object, or on the image of a real object, you're doing AR. Projecting images onto sand is AR; so is drawing shapes on sports broadcasts, outlining targets in a visor image, or drawing funny hats and moustaches on your video chat. You won't always need a headset, inertial sensors, or even user detection to do AR.
As a side comment, it's also an example of how AR applications can be less sensitive to issues like resolution and latency. That system is obviously low-resolution and high-latency (limited by the Kinect v1, the projector, and a whole setup that isn't optimized for speed), but in the end that doesn't matter much for what it's trying to do.
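The whole sandbox pipeline described above (depth scan → projected colour bands → a height threshold standing in for "hand detection") can be sketched in a few lines. This is a minimal toy sketch with made-up numbers, not the actual AR Sandbox code: the depth frame is random data rather than a real Kinect stream, and the cutoff values are invented for illustration.

```python
import numpy as np

# Fake stand-in for a Kinect depth frame: height of the sand surface
# above the table, in millimetres (the real sandbox gets this from a
# depth camera looking straight down at the box).
rng = np.random.default_rng(0)
sand_height = rng.uniform(0.0, 120.0, size=(240, 320))

# 1. The augmentation: map each height to an elevation colour band,
#    which is what the projector paints back onto the real sand.
bands = np.array([40.0, 80.0, 120.0])           # hypothetical cutoffs, mm
colour_index = np.digitize(sand_height, bands)  # one band id per pixel

# 2. The "hand detection": no pose estimation, no skeleton, just
#    "something big enough above a given height" triggers rain there.
HAND_HEIGHT_MM = 200.0
MIN_BLOB_PIXELS = 400
sand_height[10:40, 10:40] = 250.0               # simulate a hand over one corner
hand_mask = sand_height > HAND_HEIGHT_MM
rain_triggered = int(hand_mask.sum()) >= MIN_BLOB_PIXELS

print(rain_triggered)  # True: the 30x30 "hand" patch clears both thresholds
```

Note that nothing here is tracking in the positional sense: the loop never estimates where anything is, it only thresholds a depth image.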
 

Mohonky

Member
Ok, let's try it another way, then. I just googled "siggraph augmented reality" to find a state-of-the-art example, and this was one of the first results:
http://www.siggraph.org/discover/news/ar-sandbox-cutting-edge-earth-science-visualization

[Image: AR sandbox diagram and setup]

https://www.youtube.com/watch?v=g6fSS3cynDo

It's definitely augmented reality (adding information to a real image, which in that case is 100% real), and in fact it uses technology entirely different from what is used for VR.
So what now: are they wrong to call it AR, or will you claim that it is still an extension of VR?


That's really cool.

Imagine using that to play From Dust. That would be awesome.
 
I can't imagine even Google wanting to invest the ridiculous amount of resources that would be required to make such a thing. Would there even be a big market for it?

We already know they invest crazy resources in Google Earth and Google Street View, and I see this as the natural heir to both. Also, you're envisioning a world where all this technology has improved except asset creation, whereas in reality asset creation will continue to get cheaper, maybe even to the point of being essentially free. At some point in the next few decades (you can see the genesis of this in products like Photosynth), an AI will be handed a pile of historical documents, art, photos and maps, and spit out a fair 3D approximation of the period at hand.
 

Synth

Member
Seriously?
First, there's barely any tracking in that demo. All it does is 3D scanning and rendering from a different point of view. The only part you could associate with tracking is the hand detection used to trigger water generation, and even that's a stretch (you don't even need to track the hand, just detect "something big enough above a given height"). Associating that with VR is an even bigger stretch; it's like saying "it uses projective geometry, and VR does that too!".
What it does prove is that AR isn't tied to any particular technology. As long as you're displaying artificial information on a real object, or on the image of a real object, you're doing AR. Projecting images onto sand is AR; so is drawing shapes on sports broadcasts, outlining targets in a visor image, or drawing funny hats and moustaches on your video chat. You won't always need a headset, inertial sensors, or even user detection to do AR.
As a side comment, it's also an example of how AR applications can be less sensitive to issues like resolution and latency. That system is obviously low-resolution and high-latency (limited by the Kinect v1, the projector, and a whole setup that isn't optimized for speed), but in the end that doesn't matter much for what it's trying to do.

This would all be a whole lot more convincing if the person you were arguing with hadn't been constantly posting examples of such tech being used for VR, and apparently working with the creator of the example in question.

Everything in that example is applicable to VR. The only difference is that the "camera" dealing with your viewpoint isn't attached to your head. You could play with that exact same box in VR and the only difference would be that the results would be drawn onto the screen strapped to your face.
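The "only the camera moves" point can be sketched concretely: the scanned sandbox geometry is identical either way, and a fixed projector pose versus a head-tracked pose are both just rigid transforms applied to the same world-space points. The coordinates and poses below are invented for illustration (translation only, no rotation, to keep the sketch short), not taken from any real tracker.

```python
import numpy as np

def view_matrix(camera_position):
    """4x4 transform moving world-space points into a camera frame.
    Translation only for brevity; a real head tracker adds rotation."""
    m = np.eye(4)
    m[:3, 3] = -np.asarray(camera_position, dtype=float)
    return m

# One scanned sand point in world space, metres (hypothetical).
p_world = np.array([0.3, 0.0, 0.5, 1.0])

projector_view = view_matrix([0.0, 0.0, 0.0])   # fixed rig over the table
head_view = view_matrix([0.1, 1.6, -0.4])       # hypothetical head position

# Same geometry, two viewpoints: swapping the projector for an HMD
# only changes which matrix you multiply by each frame.
print(projector_view @ p_world)
print(head_view @ p_world)
```

Everything downstream (the scanning, the water simulation, the colour mapping) is untouched by that swap; the render target and the pose source are the only moving parts.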
 

Alx

Member
This would all be a whole lot more convincing if the person you were arguing with hadn't been constantly posting examples of such tech being used for VR, and apparently working with the creator of the example in question.

Everything in that example is applicable to VR. The only difference is that the "camera" dealing with your viewpoint isn't attached to your head. You could play with that exact same box in VR and the only difference would be that the results would be drawn onto the screen strapped to your face.

Trying to reproduce the same setup in VR would miss the core idea, which is digging in the sand. Real sand, that you can feel, touch and shape, is what makes it intuitive, interactive, fun, and suited to groups of children.
Besides, even setting the lack of tactile feedback aside, emulating that system in VR would be much more difficult, while it is very simple in AR: you would face the usual issues of latency and pixel resolution in the rendered VR view, which aren't a problem when you're watching the real sandbox.
So it is a counter-example to another of Krejlooc's opinions, that "things have to be done right in VR before they're done right in AR". In that specific case, AR can already do something that would be less convincing in VR.
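The latency point bears a quick back-of-the-envelope check. A projected image stays registered to the real sand no matter how you move, but on a headset, every millisecond of head motion during the pipeline delay shows up as angular error. Both numbers below are assumed round figures for illustration, not measurements of the actual AR Sandbox:

```python
# Rough arithmetic: angular error = head speed × pipeline latency.
head_turn_deg_per_s = 100.0   # a brisk but ordinary head turn (assumed)
pipeline_latency_ms = 150.0   # plausible for a Kinect v1 + projector chain (assumed)

error_deg = head_turn_deg_per_s * (pipeline_latency_ms / 1000.0)
print(error_deg)  # 15.0 degrees the image would lag behind your head in an HMD
```

Fifteen degrees of swim would be nauseating in a headset; on a projector bolted above a sandbox, the same delay only means the colour bands update a beat after you dig.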
 