
NVIDIA VR Technical Deep Dive: GameWorks VR + Q&A

viveks86

Member
Came across this technical deep dive on NVIDIA's GameWorks VR, which is scheduled for public release in the next few months:

Introduction & VR SLI

Multi-Resolution Shading

Multi-Resolution Shading demo video from E3

Direct Mode, Front Buffer Rendering & Context Priorities

Nvidia has also released a detailed document on each of these features. Not sure if this was released after the last thread on the announcement. Since there is no link to it in that thread, I'll leave it here: https://developer.nvidia.com/sites/default/files/akamai/gameworks/vr/GameWorks_VR_2015_Final_handouts.pdf


Community Q&A:

When can we expect a public release of VR SLI capable drivers?
Within the next few months. (However, drivers by themselves won’t automatically enable VR SLI—it will require integration into each game engine, so support for it will be up to individual developers.)
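
For a rough idea of what that engine-side integration involves: shared work is broadcast to both GPUs, and each eye's draw calls are then restricted to one GPU. Below is a minimal conceptual sketch in C++; the SetGpuMask/RenderEye/CopyToDisplayGpu helpers are hypothetical stand-ins for engine and vendor-SDK calls, not actual NVAPI entry points.

#include <cstdint>

// Conceptual sketch of per-eye GPU affinity in a VR SLI renderer.
// SetGpuMask, RenderSharedPasses, RenderEye and CopyToDisplayGpu are
// hypothetical stand-ins for engine/vendor-SDK calls, not real NVAPI names.
enum Eye { LeftEye, RightEye };
enum Gpu : uint32_t { Gpu0 = 1u << 0, Gpu1 = 1u << 1 };

void SetGpuMask(uint32_t) {}        // restrict subsequent commands to these GPUs (stub)
void RenderSharedPasses() {}        // shadow maps, particle sim, etc. (stub)
void RenderEye(Eye) {}              // render one eye's view (stub)
void CopyToDisplayGpu(Eye) {}       // cross-GPU transfer of a finished eye image (stub)

void RenderStereoFrame()
{
    SetGpuMask(Gpu0 | Gpu1);        // broadcast work that both eyes share
    RenderSharedPasses();

    SetGpuMask(Gpu0);               // left eye renders on GPU 0 ...
    RenderEye(LeftEye);

    SetGpuMask(Gpu1);               // ... right eye renders on GPU 1, in parallel
    RenderEye(RightEye);

    CopyToDisplayGpu(RightEye);     // bring the right eye back to the GPU driving the headset
    SetGpuMask(Gpu0 | Gpu1);
}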

Can you confirm what GPUs will receive what GameWorks VR features, i.e. is it restricted to the 9xx series?
Almost all of the features work all the way back to the 6xx series! The only one that requires the 9xx series is multi-resolution shading, because that depends on the multi-projection hardware that only exists in Maxwell, our latest GPU architecture. For the professional users out there, GameWorks VR is also fully supported on our corresponding Quadro cards. Of course, VR rendering is pretty demanding, so we recommend GeForce GTX 970 or higher performance for folks looking to get their PCs ready for VR.
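
For a back-of-the-envelope sense of what multi-resolution shading saves: the image is split into a grid, the center region keeps full resolution, and the outer regions are shaded at a reduced scale. A small sketch follows; the 70% center size and 50% peripheral scale are illustrative assumptions, not NVIDIA's published defaults.

#include <cstdio>

// Rough pixel-count estimate for a 3x3 multi-resolution split.
// centerFrac: fraction of width/height kept at full resolution in the center.
// outerScale: resolution scale applied to the outer bands.
double multiResPixelFraction(double centerFrac, double outerScale)
{
    // Per axis, the shaded extent is the full-res center plus the scaled-down
    // outer bands; the 2D pixel fraction is that extent squared.
    double perAxis = centerFrac + (1.0 - centerFrac) * outerScale;
    return perAxis * perAxis;
}

int main()
{
    double frac = multiResPixelFraction(0.7, 0.5);  // assumed values, for illustration only
    std::printf("Shaded pixels: %.0f%% of full resolution (%.0f%% saved)\n",
                frac * 100.0, (1.0 - frac) * 100.0);  // ~72% shaded, ~28% saved
    return 0;
}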

Can you comment on the recently uncovered patent applications RE: an NVIDIA VR headset? (link)
NVIDIA regularly files patents across a wide range of graphics and display-related fields.

Can you explain the differences between GameWorks VR and VR Direct – will one replace the other?
GameWorks VR is the new name for what we previously called VR Direct. As we’ve grown the feature set of our VR SDK, it made sense to roll the capabilities into our overall GameWorks initiative.

A lot of people are banking on SLI working in VR as a solution to the extremely high performance requirements. What are the chances of you getting one-GPU-per-eye latency down to the same levels as single card setups?
We already have! In our VR SLI demo app we can see one-GPU-per-eye working without increasing latency at all relative to a single-GPU system.
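
A quick way to see why the per-eye split doesn't have to add latency: both eyes render at the same time, so the frame is gated by the slower eye plus the cross-GPU copy rather than by both eyes back to back. A toy sketch with made-up timings:

#include <algorithm>
#include <cstdio>

int main()
{
    // Made-up per-eye render times and transfer cost, in milliseconds.
    double leftEyeMs  = 9.5;
    double rightEyeMs = 10.0;
    double transferMs = 0.5;   // copying the remote eye's image to the display GPU

    // One GPU: the two eyes render back to back.
    double singleGpuMs = leftEyeMs + rightEyeMs;

    // VR SLI: one eye per GPU, in parallel; the frame is ready when the
    // slower eye finishes and its image has been transferred.
    double vrSliMs = std::max(leftEyeMs, rightEyeMs) + transferMs;

    std::printf("Single GPU: %.1f ms per frame\n", singleGpuMs);  // 19.5 ms
    std::printf("VR SLI:     %.1f ms per frame\n", vrSliMs);      // 10.5 ms
    return 0;
}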

Any plans for GameWorks VR features for Linux operating systems?
Eventually yes, but that’s further down the road.

Regarding Foveated Rendering and VR SLI: do you have plans to enable SLI with two or more graphics cards or GPUs from different performance sectors? Like an enthusiast GM200/GP100 for the center area and a lower specced GM206/GPxxx for the peripheral vision? Any plans for a special VR dual GPU card that works like this?
SLI requires all the GPUs to be the exact same model, so no, this isn’t something we can do currently. Heterogeneous multi-GPU (i.e. different classes of GPU working together) is potentially interesting as a research project, but not something we are actively looking at right now.

What are your long term plans to increase GPU utilization without increasing latency?
Not sure I understand this question exactly, but we’re constantly making driver improvements so that we can keep the GPU fed with work and not have to wait for the CPU unnecessarily, and we’re improving multitasking and preemption support with each new GPU architecture as well.

Any plans for LOD-based rendering that is stereoscopic up close and monoscopic after a certain distance?
This is up to the individual game developer to implement if it makes sense for them.
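
For illustration, an engine-side take on that idea might look something like the sketch below, rendering nearby objects once per eye and distant objects a single time shared by both eyes (the distance threshold and helper names are hypothetical):

#include <vector>

// Minimal sketch of distance-based stereo/mono rendering.
// Object, RenderForEye and RenderMonoShared are hypothetical engine helpers.
struct Object { float distanceToViewer = 0.0f; /* mesh, materials, ... */ };

void RenderForEye(const Object&, int /*eye*/) {}  // draw into one eye's view (stub)
void RenderMonoShared(const Object&) {}           // draw once, composite into both eyes (stub)

void RenderScene(const std::vector<Object>& objects, float monoDistance)
{
    for (const Object& obj : objects) {
        if (obj.distanceToViewer < monoDistance) {
            // Up close, parallax between the eyes matters: render per eye.
            RenderForEye(obj, /*eye=*/0);
            RenderForEye(obj, /*eye=*/1);
        } else {
            // Far away, the eye-to-eye difference is negligible:
            // render once and reuse the result for both eyes.
            RenderMonoShared(obj);
        }
    }
}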

I see you have added monitor detection for headsets. Do you intend to extend this to a plug-and-play VR setup, so that when you plug the headset in, its resolution and capabilities can be passed down automatically?
As mentioned, we’re implementing Direct Mode so that headsets are recognized and treated as such instead of as a desktop monitor, but beyond that it’s up to the runtime/drivers provided by the headset maker to communicate with applications about the headset’s capabilities.

How big is the VR R&D team at NVIDIA?
It’s a bit difficult to say because there isn’t really one “VR team”; there are people focusing on VR across nearly every organization in the company. It’s a big initiative for us.

Can NVIDIA make a driver better than Morgan Freeman in Driving Miss Daisy?
It’s a huge challenge to make a driver as smooth as Morgan Freeman, but we’re sure as hell going to try!
 

TAJ

Darkness cannot drive out darkness; only light can do that. Hate cannot drive out hate; only love can do that.
So basically a poor man's foveated rendering, with the benefit of not requiring eye tracking? That's cool, I guess.
 

tokkun

Member
A lot of people are banking on SLI working in VR as a solution to the extremely high performance requirements. What are the chances of you getting one-GPU-per-eye latency down to the same levels as single card setups?
We already have! In our VR SLI demo app we can see one-GPU-per-eye working without increasing latency at all relative to a single-GPU system.

I am skeptical of this answer. Assuming both eyes' displays need to refresh in sync (it seems like it would be disorienting if they didn't), then aren't they always limited to the worse of the two frame times?

I guess maybe they mean relative to a single GPU with the same performance as one of the SLI'd cards.

To provide the most comfortable VR experience, games would ideally run perfectly consistently at 90 FPS, never missing a single frame. Unfortunately, with multitasking operating systems, memory paging, background streaming, and so forth, there are a lot of things going on that can cause even a perfectly optimized game to occasionally stutter or hitch. Missed frames are annoying even when playing on a regular screen, but in VR they’re incredibly distracting and even sickening.

To help fight this problem, we enable the option to create a special high-priority D3D11 device in our driver. When work is submitted to this device, it preempts whatever else is running on the GPU, switches to the high-priority context, then goes back to what it was originally doing afterward. This supports features like async timewarp, which can re-warp the previous frame if a game ever stutters or hitches and doesn’t deliver a new frame on schedule.
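
For concreteness, the core of the timewarp step described above is a rotation-only reprojection: take the head orientation the stale frame was rendered with, compare it with the newest orientation sampled just before scan-out, and warp the old image by the delta. A minimal sketch of that delta computation follows; Mat3 and the helpers are small local types, not a VR runtime API, and real implementations apply the warp per pixel on the GPU.

#include <array>

using Mat3 = std::array<std::array<float, 3>, 3>;  // 3x3 rotation matrix

Mat3 transpose(const Mat3& m)
{
    Mat3 t{};
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            t[r][c] = m[c][r];
    return t;
}

Mat3 multiply(const Mat3& a, const Mat3& b)
{
    Mat3 out{};
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            for (int k = 0; k < 3; ++k)
                out[r][c] += a[r][k] * b[k][c];
    return out;
}

// Both arguments are assumed to be head-to-world rotations. The result maps
// view-space directions of the latest pose into the stale frame's view space,
// telling the warp pass where in the old image each new pixel should sample.
Mat3 timewarpDelta(const Mat3& renderedRotation, const Mat3& latestRotation)
{
    return multiply(transpose(renderedRotation), latestRotation);
}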

This is confusing. Why isn't this already an option in non-VR fullscreen games? And how does it obviate memory paging on multicore CPUs?
 

Xyber

Member
Looks like a really good way to save quite a lot of performance. It sucks that it can't be added automatically, because that means a lot of games probably won't use it.

And then there's the problem of MRS only working on the 900 series and above, so AMD would be excluded as well (and I doubt Nvidia would let them implement something similar so it would work for them too).

Stuff like this should be an open standard, or it will most likely only be used occasionally, and that doesn't really benefit most people.
 

Flai

Member
Looks like a really good way to save quite a lot of performance. It sucks that it can't be added automatically, because that means a lot of games probably won't use it.

And then there's the problem of MRS only working on the 900 series and above, so AMD would be excluded as well (and I doubt Nvidia would let them implement something similar so it would work for them too).

Stuff like this should be an open standard, or it will most likely only be used occasionally, and that doesn't really benefit most people.

I think MRS is a hardware feature, which is why earlier GPUs (or AMD) don't support it.

Maybe it will be standardized when DX13 comes :)
 