
Valve engineer explains how Lighthouse (positional tracking) works (Tested interview)

Durante

Member
Speculation:

Each base station uses a different frequency of light
That's also my speculation, but I don't know enough about photosensors -- it might not be viable.


I wonder about the setup process for the whole system. At some point, you presumably need to provide the exact position of the base stations (and walls etc.).
 

Crispy75

Member
That's also my speculation, but I don't know enough about photosensors -- it might not be viable.


I wonder about the setup process for the whole system. At some point, you presumably need to provide the exact position of the base stations (and walls etc.).

If the base stations can see each other, and they can both see a tracked object of known dimensions, that's all you need to know the relative positions of the base stations. But yeah, you'd have to manually put a sensor on a wall surface in order to know where it is.

"Please rub your controller on all the walls of the room now"
 
Well they're not little computers if that's what you mean. But they're sensors that know when they've been hit by the scanning laser. Smarter than an always-on LED.
I don't think you're right. They are emitters, not sensors. You say it's not inside-out tracking, it is. The sensors are the photocells on the headset and controllers. As far as I understand it, the lighthouse base stations are dumb.
 

mrklaw

MrArseFace
Right. Oculus aren't in this to get a monopoly or unbeatable lead in the headset hardware. They just want to help and be a part of the growth of VR tech. Business wise, they are more focused on the content/ecosystem side.

and from Facebook's point of view, if they can own the next VR social space, they won't care whose headset you're using as long as it runs their software. The Oculus purchase was an investment to help ensure that happens, but if more people coming onboard makes the success of VR more likely, they'll be perfectly happy.
 

Crispy75

Member
I don't think you're right. They are emitters, not sensors. You say it's not inside-out tracking, it is. The sensors are the photocells on the headset and controllers. As far as I understand it, the lighthouse base stations are dumb.

We both understand it right, I think we're just calling it different things. It's not really inside out or outside in, but a hybrid of the two.

EDIT: Too many pronouns can distort meaning :)
 
I don't think you're right. They are emitters, not sensors. You say it's not inside-out tracking, it is. The sensors are the photocells on the headset and controllers. As far as I understand it, the lighthouse base stations are dumb.

The base stations look to me like tiny laser scanners, complete with spinning emitters and a receiver. The HMD and controllers also receive, but handle the precision timing on their end, and the rest of the laser field is used to build a point cloud of your surroundings where timing isn't important.

...but Lighthouse can supposedly exist in a form much more basic than a mini laser scanner, so I dunno. I'm not entirely sure what's going on, tbh.
 

Raticus79

Seek victory, not fairness
We both understand it right, I think we're just calling it different things. It's not really inside out or outside in, but a hybrid of the two.

Yeah, the line is really blurred there. Inside-out vs outside-in doesn't capture enough information for this scenario. It would be useful to have more descriptive terms based on what (if anything) needs to be placed in the environment for a system to work, assuming the user has a backpack computer and would be otherwise free to wander. Maybe simple classifiers won't work and we'll just stick with describing systems like this as Lighthouse-based due to the complexity.
 

efyu_lemonardo

May I have a cookie?
It really has very little to do with the Wiimote (though it's closer than the Move). The Wiimote uses a camera; there are no cameras here, which is what makes it so efficient, fast and scalable (and hopefully inexpensive).
I may have misunderstood. Is it mentioned specifically what kind of photocells are used? I was under the impression that they are more than simple photoresistors, and that each cell has the ability to accurately distinguish between different frequencies of pulses, in addition to different durations and intensities, among other parameters such as coding and maybe polarity.

In essence I thought there were a few kinds of multiplexing going on in parallel. But you're right that the spatial resolution of each sensor is tiny, and that temporal resolution plays a much bigger role, due to the nature of the periodic laser.
 

Crispy75

Member
A thought occurs to me: The laser must be scanning *really* fast, or it would report different positions for the top and bottom of a moving object. Kind of like screen tearing, or the "bendy" effect you get with a rolling shutter video camera.
 

Durante

Member
A thought occurs to me: The laser must be scanning *really* fast, or it would report different positions for the top and bottom of a moving object. Kind of like screen tearing, or the "bendy" effect you get with a rolling shutter video camera.
Most likely, yeah. Another thought: if you increased the number of base stations, couldn't you completely eliminate the need for scanning? Simple pulsing should be sufficient.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Really impressive engineering. /tips hat at valve

A thought occurs to me: The laser must be scanning *really* fast, or it would report different positions for the top and bottom of a moving object. Kind of like screen tearing, or the "bendy" effect you get with a rolling shutter video camera.
I don't think they need the readouts of all the (or the majority of) sensors, all the time. That's why the distribution of sensors is so important.
 

Raticus79

Seek victory, not fairness
A thought occurs to me: The laser must be scanning *really* fast, or it would report different positions for the top and bottom of a moving object. Kind of like screen tearing, or the "bendy" effect you get with a rolling shutter video camera.

Yeah, especially considering the fine angular resolution you need to cover to ensure you hit those small sensors most of the time within that 5x5m area. They said it updates at 100Hz, so whatever area needs to be scanned at that resolution by each laser needs to be completed in 10ms.

So, imagine covering a wall 3 metres away with dots of that size (say 5mm each, so 200 per linear metre). Say each laser covers a 9 square metre area at that distance (50 degrees horizontal and vertical). The logic's similar to a CRT monitor here: the resolution is 600x600 and the refresh rate is 100Hz. It's actually not that bad when you look at it that way.

Laser setups usually have one horizontal and one vertical mirror, so one would be working at 100Hz to cover one axis, then the other has to work overtime - covering 600 lines, 100 times a second, so 60kHz oscillation (or continuous rotation I guess?). I have no idea if that's realistic. I looked up some stats for random DJ laser gear and one mentioned being able to hit 50,000 points per second, so maybe it's in the ballpark. If it's not, then they just reduce the scanning resolution and accept some misses at the max distances thanks to the redundancy of the sensors.
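The arithmetic in that estimate is easy to double-check with a quick sketch. Every input here (5mm spots, 50-degree coverage, a two-mirror scanner) is this post's assumption, not a Valve spec:

```python
import math

# Back-of-envelope check of the scan-rate estimate above; all inputs
# are the post's assumptions, not measured Lighthouse parameters.
distance_m = 3.0    # distance to the wall being "painted"
spot_mm = 5.0       # assumed laser spot size
fov_deg = 50.0      # assumed coverage per axis
update_hz = 100     # stated tracking update rate

# Width of the wall patch swept by a 50-degree fan at 3 m:
wall_m = 2 * distance_m * math.tan(math.radians(fov_deg / 2))
spots_per_line = round(wall_m / (spot_mm / 1000))  # ~560, close to the 600 above
lines = spots_per_line                             # square coverage

# Two-mirror model: the slow mirror sweeps one axis once per frame
# (100 Hz), the fast mirror draws every line of every frame:
fast_mirror_hz = lines * update_hz                 # ~56,000, near the 60 kHz guess
```

So the 600x600-at-100Hz figure holds up to within rounding, which is why the CRT comparison works.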
 

jmga

Member
This positional tracking system could be used as a walker.

Bj8Fsic.gif
 
This positional tracking system could be used as a walker.

Bj8Fsic.gif
That strikes me as a bad idea. I've approximated it with the DK2 a few times, physically walking around a space within the camera bounds (superb), and then transitioning to alternative movement (gamepad left stick or keyboard button) while standing still as I reach the edge. It is jarring, quite easy to lose my balance, and made me feel sick after a while.
 

IMACOMPUTA

Member
Well they're not little computers if that's what you mean. But they're sensors that know when they've been hit by the scanning laser. Smarter than an always-on LED.

Speculation:

Each base station uses a different frequency of light
Max. tracked sensors is essentially infinite: since no two sensors can occupy the same point in space, they will always produce time-distinct readings.

Ohhh. Your wording read like you meant the lasers were "reading" the position of the headset.

I get it now.

EDIT: Ok, now I think that *is* what you meant. You keep talking about "lasers reading something".

From what I've read, the laser boxes are completely dumb. All they do is project IR lasers all over the place.
The HMD and controllers have photocells that read the positioning of the IR lasers to determine where they are in the room. I don't think it's been explained, but I would imagine you set up your lighthouse boxes and then scan the room with your headset/device to calibrate.
Really it sounds a lot like the Wii remote / IR sensor bar, except using photocells instead of cameras, and lasers instead of two dots.

Also, why would the base stations need to use different light frequencies?
 

IMACOMPUTA

Member
That strikes me as a bad idea. I've approximated it with the DK2 a few times, physically walking around a space within the camera bounds (superb), and then transitioning to alternative movement (gamepad left stick or keyboard button) while standing still as I reach the edge. It is jarring, quite easy to lose my balance, and made me feel sick after a while.

The feeling of sliding around is what makes me most nauseous. HL2 is unbearable. I think I'd hate this walker idea.
 

efyu_lemonardo

May I have a cookie?
Also, why would the base stations need to use different light frequencies?

You need some way to know which base station you're referencing with any given measurement. This doesn't necessarily have to be accomplished by differentiating between the lasers (whether by frequency division, coding or other methods); an internal magnetometer and gyroscope might provide sufficient information, for example. But it probably makes the most sense to do it anyway, to minimize interference.

edit: Also, listening to the first minutes of the video in the OP again, I think it's implied that timing (and angle) information is encoded into the laser pulses in some form. To do that you're going to need some bandwidth and some kind of multiplexing scheme anyway.

edit2: by the way, the depressions in the headset are likely to improve contrast for the sensors by minimizing interference from indirect light, as well as possibly adding some angular information. The depressions around the eyes of animals are hypothesized to have evolved to serve a similar purpose :)
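One way the "coding" idea above could work is giving each station's sync flash a distinct pulse width. This is purely a sketch of that speculation, with made-up durations:

```python
# Sketch of distinguishing base stations by sync-pulse width
# (time-coded flashes). All values are hypothetical illustrations,
# not real Lighthouse timings.
STATION_PULSE_US = {"A": 65, "B": 104}   # assumed pulse widths per station
TOLERANCE_US = 10                        # allowed measurement jitter

def identify_station(pulse_width_us):
    """Return which base station emitted a sync flash of this measured
    width, or None if it doesn't match any known code."""
    for station, width in STATION_PULSE_US.items():
        if abs(pulse_width_us - width) <= TOLERANCE_US:
            return station
    return None
```

The receiver then tags every subsequent laser-sweep hit with the station the last sync flash identified, so no extra wavelength filtering is needed.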
 

Crispy75

Member
Ohhh. Your wording read like you meant the lasers were "reading" the position of the headset.

I get it now.

EDIT: Ok, now I think that *is* what you meant. You keep talking about "lasers reading something".
When I say "tracked sensors" I mean by software.
Also, why would the base stations need to use different light frequencies?
So that the sensor knows which base station it just got scanned by.

EDIT: I'll stop typing there, cos lemonardo nails it :)
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
GPS taken to the next level.

LPS, perhaps.
 
I like Sony's method more for console versatility (the camera doubles for video chat), but this is a better method for scalability and, I'd imagine, latency as well. Though Sony's camera seems to have been designed with speed in mind.

It's more practical to have two of these on opposite ends of the room than two cameras as well. Do they know the room's boundaries by placing receptors on them?
 

Durante

Member
I like Sony's method more for console versatility (the camera doubles for video chat), but this is a better method for scalability and, I'd imagine, latency as well. Though Sony's camera seems to have been designed with speed in mind.

It's more practical to have two of these on opposite ends of the room than two cameras as well. Do they know the room's boundaries by placing receptors on them?
I think for static obstacles you'd just need to "scan" their position relative to the base stations once, in some kind of setup process. No reason to use extra hardware for that. We discussed this a bit earlier in the thread.
 

Crispy75

Member
I think for static obstacles you'd just need to "scan" their position relative to the base stations once, in some kind of setup process. No reason to use extra hardware for that. We discussed this a bit earlier in the thread.

Unless the base stations also have time-of-flight sensors, in which case they could map the shape of your room automatically...
 

mrklaw

MrArseFace
I like Sony's method more for console versatility (the camera doubles for video chat), but this is a better method for scalability and, I'd imagine, latency as well. Though Sony's camera seems to have been designed with speed in mind.

It's more practical to have two of these on opposite ends of the room than two cameras as well. Do they know the room's boundaries by placing receptors on them?

Also more practical if they are standalone and don't need wiring back to the host computer.
 

Durante

Member
Do we have any estimates on the cost of Valve's tracking solution in comparison to Oculus's and Sony's?
Now that we know that the sensors are just individual photocells, it shouldn't be too bad really. I don't have the expertise to say more than that though.
 

Raticus79

Seek victory, not fairness
Ha, cute animation. I don't think that'll be it though. Put a beam through a lens and you still have a beam on the other side (maybe redirected depending on where it hit the lens), not a fanned-out plane like that.
 
Watched this yesterday and it blew me away. I was expecting more VR products to compete with Oculus, but I wasn't expecting anything like this. I primarily game in my living room, so not sure if this would work for me, but I am really excited for this thing!
 

HariKari

Member
This positional tracking system could be used as a walker.

One of the Tested guys mentioned that, because you lose spatial awareness, they can do a trick where you feel like you're constantly moving forward but are actually walking in a bit of a circle. Any idea how that might look/work?
 
One of the Tested guys mentioned that, because you lose spatial awareness, they can do a trick where you feel like you're constantly moving forward but are actually walking in a bit of a circle. Any idea how that might look/work?
Redirected walking:

https://www.youtube.com/watch?v=bSd8yKYO9H8

http://cb.nowan.net/blog/2008/12/02/redirected-walking-playing-with-your-perceptions-limits/

Very cool, but you need a pretty wide space for it to be a universal solution. The kind of living room spaces that most people would want to use aren't really practical, but I think it could be achieved for certain types of games with very careful level design.
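The core trick in those links can be sketched as a small per-step rotation gain: walking a "straight" virtual line maps onto a large physical arc. The 22m radius below is roughly the imperceptibility threshold the redirected-walking write-ups mention, but treat all numbers as illustrative:

```python
def redirect_yaw(user_yaw_rad, step_length_m, curvature_radius_m=22.0):
    """Inject a tiny physical yaw offset per step so that walking a
    'straight' virtual line follows a physical circle of the given
    radius. 22 m is roughly the reported detection threshold for
    this kind of curvature; treat it as an illustrative value."""
    # Arc geometry: moving step_length along a circle of radius r
    # turns the walker by step_length / r radians.
    return user_yaw_rad + step_length_m / curvature_radius_m

# Walking 1,000 steps of 0.7 m "straight ahead" quietly accumulates
# about five full physical turns (700 m along a ~138 m circumference):
yaw = 0.0
for _ in range(1000):
    yaw = redirect_yaw(yaw, 0.7)
```

The catch is exactly the one mentioned above: a 22m turning radius means a roughly 44m-wide physical space before the technique becomes fully transparent, so smaller rooms need either perceptible gains or level design that hides the resets.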
 
No, he misunderstood. He thought the lasers were painting your room with a known pattern of IR dots, which were then picked up via cameras on the headset. This is inside-out tracking.

Lighthouse is outside in. But instead of trying to track dumb IR dots with a camera, the dots are *smart* and know exactly when they've been scanned.

Sorry for the late reply and thanks.

I was curious about whether what Keljrooc talked about was what ended up being implemented with the Vive.
 

MaLDo

Member
This positional tracking system could be used as a walker.

Bj8Fsic.gif

I don't like it. Movement must be consistent.

What I would like is Segway-style movement, where I move using body tilts. This is for open-world games with a lot of movement, where my character is walking/running.

For seated games, I would like moments of full 1:1 movement: checking my car's wheel pressure, opening my car trunk, or fixing my mech's jetpack by getting behind my chair and moving relays and switches.
 

Ty4on

Member
This positional tracking system could be used as a walker.

Bj8Fsic.gif
This reminds me of the early Wii games where you aimed at the edge of the screen to turn. I think, like in those games, it would always feel clunky. The movement/aiming we have today works well because it is always the same and doesn't suddenly change when you hit an arbitrary border.
I've never tried VR, but it's also easy to see why people who have say it feels very weird when you suddenly start sliding. So making any off-centre movement trigger it (like Metroid Prime) would probably be even worse.

Edit: This seems like an issue we just have to live with for now with stuff like teleportation and games designed with this limitation in mind. This would still be awesome for something like a cockpit where you could walk around to different posts.
 

IMACOMPUTA

Member
When I say "tracked sensors" I mean by software.

So that the sensor knows which base station it just got scanned by.

EDIT: I'll stop typing there, cos lemonardo nails it :)

But the sensor is scanning the base station's output. Not the other way around.
 
That strikes me as a bad idea. I've approximated it with the DK2 a few times, physically walking around a space within the camera bounds (superb), and then transitioning to alternative movement (gamepad left stick or keyboard button) while standing still as I reach the edge. It is jarring, quite easy to lose my balance, and made me feel sick after a while.
What about ramping up the speed from a much smaller dead spot?

Edit: I've seen the comments above and see the issue even with this.
 
What about ramping up the speed from a much smaller dead spot?

Edit: I've seen the comments above and see the issue even with this.
Indeed. In the same way that people are avoiding right-stick movement for looking because it is too smooth, instead preferring flicks of the mouse or instant changes of angle, this also applies to left-stick movement. It's not as bad as the right stick, but still feels unnaturally smooth. By creating a smaller dead spot in this room configuration and ramping the speed up towards the edges, you're effectively creating a giant left stick, which is not what you want.
 

IMACOMPUTA

Member
That's exactly what I meant. "Scanned" was a poor choice of word - I meant it in the sense that the electron beam in a CRT "scans" from side to side.
OK. Sorry. I don't mean to be a nitpicker or anything, but there's been a lot of confusion with how this thing works. If anything I'm just trying to make sure I understand it properly. Thanks for clarifying.
 

Durante

Member
Indeed. In the same way that people are avoiding right stick movement for looking because it is too smooth, instead preferring flicks of the mouse or instant changes of angle, this also applies to left stick movement. It's not as bad as right stick, but still feels unnaturally smooth. By creating a smaller dead spot in this room configuration, ramping up the speed until the edges, you're effectively creating a giant left stick, which is not what you want.
Yeah. It's confusing before you try it, but abrupt changes which would seem to be worse at first glance are actually much less disturbing in VR than gradual ones. This applies to direction, velocity and position.

As such, e.g. warping from room to room could be a lot better than anything which moves the environment around you.
 

efyu_lemonardo

May I have a cookie?
uploadvr article said:
Even further, an additional nugget of information surfaced at the Maker Faire. It was discovered that the Lighthouse basestations can detect disruptions. If they get picked up, knocked over, or adjusted in any way, the system can sense the movements. This means that internal algorithms can make adjustments from there, keeping the experience as stable and streamlined as possible.

Now this is really taking it a step further :)

hizook blog post said:
The IR LEDs provide the start of a timing sequence. A microcontroller (attached to the photodiode) starts a counter (with fine-grained time resolution) when it receives that initial sync signal, and then waits for the X and Y line lasers to illuminate the diode. Depending on the time elapsed, the microcontroller can directly map the time delay to X and Y angular measurements. Using multiple photodiodes and knowing the rigid body relationship between them, the microcontroller can calculate the entire 6-DoF pose of the receiver.
This is also clever, as there's no need to synchronize with a base station if you don't have line of sight with it.
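The time-to-angle mapping in that quote is simple to sketch. Assuming (purely for illustration) a sweep rotor at 60 rotations per second:

```python
# Sketch of the sync-pulse timing scheme described in the quote above.
# The 60 rotations/sec sweep rate is an assumption for illustration,
# not a confirmed Lighthouse figure.
SWEEP_HZ = 60.0
DEG_PER_SEC = 360.0 * SWEEP_HZ   # 21,600 degrees of sweep per second

def hit_time_to_angle(sync_us, hit_us):
    """Convert the elapsed microseconds between the omnidirectional IR
    sync flash and the laser line hitting a photodiode into the sweep
    angle (in degrees) at which that diode sits."""
    elapsed_s = (hit_us - sync_us) * 1e-6
    return DEG_PER_SEC * elapsed_s

# A diode hit 4,000 us after the sync flash sits 86.4 degrees into
# the sweep; one such angle per axis, across several diodes with a
# known rigid-body layout, is what yields the 6-DoF pose.
angle = hit_time_to_angle(0, 4000)
```

Note how all the precision hardware this needs on the receiving side is a fast counter per photodiode, which is why the sensors can stay so cheap.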
 