
AMD's Raja Koduri says that we need 16,000x16,000 at 240Hz for "true immersion" in VR

Durante

Member
Unfortunately that could not use the rendered output the game produced. An entirely separate image would need to be produced.
Sure. But compared to rendering for the level of "realistic" VR we are talking about here, rendering that image for a stream is basically peanuts. Especially since it also doesn't have any significant latency constraints. So you don't need eye tracking for that.

However, that brings up an interesting point. Could future games and hardware allow for a separate GPU to produce content specifically for streaming? Game streaming is big business for the streamers as well as free advertising for games. No doubt that would be enough to warrant the extra development and hardware costs to make VR streaming content as enticing as possible.
That should be viable, especially on the hardware side. On the software side, the developers would need to consider it important enough, though.
 

Falk

that puzzling face
The other solution would be to just save all pertinent data for re-rendering.

CSGO, Dota 2, etc., with their replay functionality, allow going back to the game, booting up a replay, and re-rendering at the highest settings available at the time, in slow motion if need be (a sketch of the idea follows below).

Again, software devs would have to consider it important enough, but then again I curse every game that doesn't have some form of accessible replay functionality.

basically all of them

(Also doesn't solve the real-time issue for broadcasting)
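To make that concrete: a minimal sketch of input-based replay recording in Python, assuming a fully deterministic engine. All names here are hypothetical, not any real engine's API. Store the initial seed plus per-tick inputs, and the match can be re-simulated (and thus re-rendered) later at any resolution, settings, or camera:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Replay:
    seed: int                          # fixes all randomness up front
    ticks: list = field(default_factory=list)

    def record(self, tick: int, inputs: dict) -> None:
        # Log exactly what the player did on this simulation tick.
        self.ticks.append({"tick": tick, "inputs": inputs})

    def save(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump(asdict(self), f)

replay = Replay(seed=42)
replay.record(0, {"move": [0.0, 1.0], "look": [0.1, 0.0], "fire": False})
replay.record(1, {"move": [0.0, 1.0], "look": [0.2, 0.0], "fire": True})
replay.save("match.replay.json")
```

Input logs are tiny compared to video, so storage is a non-issue; the hard requirement is that the simulation be deterministic enough to replay bit-for-bit.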
 

Reallink

Member
What is the primary technological barrier to producing an 8k or 16k mobile-sized display? Strictly cramming that many pixels in? IIRC Sony's HMZ line's micro OLEDs were over 2000 PPI (0.7" 1280x720), and the HMZ-T1 launched way back in 2011 with two screens for $799. You would think a 5" 8k panel (~1700 PPI) would be pretty easy by now, with 16k being cutting edge.
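For what it's worth, those PPI figures check out. A quick back-of-the-envelope in Python (panel sizes from the post above; the 16k line is my own extrapolation):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch, from resolution and diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

print(ppi(1280, 720, 0.7))    # Sony HMZ-T1 micro OLED -> ~2098 PPI
print(ppi(7680, 4320, 5.0))   # hypothetical 5" 8k panel -> ~1762 PPI
print(ppi(15360, 8640, 5.0))  # hypothetical 5" 16k panel -> ~3525 PPI
```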
 

Extollere

Sucks at poetry
All this makes me think... I wonder what kind of advanced prototypes Oculus and Valve have tucked away in some secret research rooms.

You think they have some $30,000 8k 200° FOV monster headsets in there?
 
Funny, sad and true at the same time.

Here's to hoping AMD can turn things around this year. I know they probably won't have a massive turnaround, but honestly, even as a 10+ year Nvidia patron, I'm really wanting a reason to jump ship. Especially with the announcement that Nvidia has no plans to support adaptive sync via DisplayPort. Nothing pisses me off more than anti-consumerism.
 

Piggus

Member
1080p or even 1440p is indeed far too low for VR at the moment. A lot of people will find this out soon enough. However, you have to start somewhere. 4K VR will be a major step forward, and that's not far off.
 

Renekton

Member
No one will do PS5X2 GPUs if AMD dies. Nvidia will sue the pants off anyone who tries to do non-mobile GPUs.

They already got hundreds of millions from Intel.
 
lol, this statement has been taken the wrong way.

We DO NOT need to render two 16k displays to make VR indistinguishable from reality. Obviously that would be the simple and extremely inefficient way of doing it, but as other people have pointed out here, the key to all of this is foveated rendering and changing the way we think about making displays.

4k at a respectable refresh rate, seated right in our fovea centralis' degree of view, would create an image indistinguishable from reality.

So once display technology reaches 16k at, say, a 3-inch display size (which will happen in the next 10-12 years), we should have VR that is identical in quality to how we perceive reality.

And all of this only requires your GPU to render two 4k images in real time at a high refresh rate, which will definitely be doable, maybe even late next year or in early 2018 with Nvidia's Volta GPUs.
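Rough numbers behind that claim, as a sketch. The region sizes and the quarter-resolution periphery below are illustrative assumptions, not measurements:

```python
# Illustrative pixel budgets: brute force vs. foveated rendering.
FULL_EYE  = 15360 * 8640    # one 16k panel per eye, rendered naively
FOVEA     = 3840 * 2160     # assumed 4k high-detail region tracking the fovea
PERIPHERY = FULL_EYE // 16  # rest of the panel at 1/4 linear resolution

brute_force = 2 * FULL_EYE             # both eyes, every pixel full-detail
foveated    = 2 * (FOVEA + PERIPHERY)  # both eyes, eye-tracked budget

print(f"brute force: {brute_force / 1e6:.0f} Mpix/frame")   # ~265
print(f"foveated:    {foveated / 1e6:.0f} Mpix/frame")      # ~33
print(f"savings:     {brute_force / foveated:.1f}x fewer shaded pixels")

# At the 240 Hz target:
print(f"brute force: {brute_force * 240 / 1e9:.1f} Gpix/s")  # ~63.7
print(f"foveated:    {foveated * 240 / 1e9:.1f} Gpix/s")     # ~8.0
```

Under these assumptions the foveated budget is roughly an 8x saving, which is what makes "two 4k-class renders instead of two 16k renders" plausible.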
 

AmyS

Member
Who will make the chips for the PS5/Xbone 2?

Imagination. We're back to Dreamcast days.

Yay, Ray Tracing!

 

Pagusas

Elden Member
We need to discover/invent a totally new way of rendering things before we start dreaming of those resolutions.
 
I wonder if developing some other type of technology would be more practical than attempting to push our current rendering technology to 16K at 240Hz. I believe that as technology continues to improve over time we might find another way to immerse ourselves. Maybe brain to machine technology will improve to an extent that interfacing them becomes commercially practical.
 

Drkirby

Corporate Apologist
I wonder if developing some other type of technology would be more practical than attempting to push our current rendering technology to 16K at 240Hz. I believe that as technology continues to improve over time we might find another way to immerse ourselves. Maybe brain to machine technology will improve to an extent that interfacing them becomes commercially practical.

I don't know; unless we suddenly run into a brick wall, 16k@240Hz sounds like something we could achieve in 10 to 20 years.
 
Sure. But compared to rendering for the level of "realistic" VR we are talking about here, rendering that image for a stream is basically peanuts. Especially since it also doesn't have any significant latency constraints. So you don't need eye tracking for that.

You are talking about totally re-rendering an image from a different point of view. It's not even about how much extra processing that would take, but that no game would leave that much processing on the table just so a few people could stream. That's why I could envision a special streaming VR version of a console that had an extra lower-power GPU just to handle the creation of the video for streaming.

Perhaps an even better solution would be for the game to stream position and game information to the cloud, where processors there could create the streaming content. That way the console doesn't need any hardware changes.
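A rough sketch of what that state stream might look like; the packet layout and field names below are entirely made up for illustration. The point is that a per-frame pose update is tens of bytes, versus megabits per second for encoded video:

```python
import struct

# Hypothetical per-frame state packet: frame id, timestamp,
# position (x, y, z), orientation quaternion (x, y, z, w).
STATE_FMT = "<If7f"

def pack_state(frame: int, t: float, pos, quat) -> bytes:
    return struct.pack(STATE_FMT, frame, t, *pos, *quat)

packet = pack_state(1024, 17.066, (1.0, 1.7, -3.2), (0.0, 0.0, 0.0, 1.0))
print(len(packet), "bytes per update")  # 36 bytes; ~2 KB/s at 60 updates/s
```

The cloud renderer would also need the match's deterministic game state (or a full entity stream), but either way the uplink cost stays tiny compared to video.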
 

_woLf

Member
Hearing leaders at tech companies set goals like this is great, in my opinion. As outrageous as it sounds, it gives them a target for innovation, even if it takes 25 years :p
 
Where is he getting 240hz from? I thought Oculus did psychological studies that found ~88hz was where people stopped unconsciously noticing flickering.
88 Hz is the critical flicker fusion rate for their HMD, or the effective framerate of the eye.
But temporal aliasing artifacts are still visible way beyond that.

You could use some type of eye-adaptive (i.e. using eye tracking) motion blur. Or the 240 Hz "brute force" method, which is to temporal aliasing what downsampling is to spatial aliasing. Or both. (Some rough numbers follow below the images.)

[Images: stroboscopic stepping at 60 Hz vs. 120 Hz]

Strobing, one type of temporal aliasing on low-persistence displays. Ideally those ghost images should look blurred together. (via blurbusters)
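To put numbers on that strobing: on a low-persistence display, an object the eye is tracking jumps a fixed angular step every frame, and that step is what reads as discrete ghosts. A sketch with assumed figures (the eye-movement speed and panel geometry are illustrative, not measured):

```python
# Angular step per frame during smooth pursuit on a low-persistence display.
EYE_SPEED_DEG_S = 300.0        # assumed brisk eye/head tracking speed
PIX_PER_DEG = 16000 / 110      # hypothetical 16k-per-eye, ~110 deg FOV panel

for hz in (60, 90, 120, 240):
    step_deg = EYE_SPEED_DEG_S / hz
    print(f"{hz:3d} Hz: {step_deg:5.2f} deg/frame "
          f"(~{step_deg * PIX_PER_DEG:5.0f} pixels between ghosts)")
```

Even at 240 Hz the step is over a degree under these assumptions, i.e. still well above what such a panel could resolve, which is why 240 Hz alone is brute force rather than a complete fix.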
 