
DX12 update for Tomb Raider out now

Not something worth the performance hit for me, and there's something about it only being available on Maxwell and forward - let alone Nvidia only - that really bugs me. But that's a real, tangible difference.
If this implementation is similar to the described implementation, then there is no reason why it cannot run on other hardware (NV or otherwise); rather, it would just be noticeably slower and perhaps even less accurate.

Let's hope it eventually gets a non-NVAPI implementation for everyone else.
 

Arkanius

Member
VXAO's impact compared to HBAO+ changes depending on the resolution you're using.



VXAO is perfectly usable on 980 Ti cards. If you're willing to lower some other settings then you should be able to hit 60fps at 1080p on a 970/980 with VXAO as well. So I don't see how "the h/w is not here yet".

I'm a 980 Ti user, but my target resolution/frame rate is 1440p/144Hz :)

For Tomb Raider I locked it at 60fps at 1440p; VXAO destroys that perfect balance.

I think two more years of GPU evolution and we've got ourselves a new winner in AO tech.
 
I'm a 980 Ti user, but my target resolution/frame rate is 1440p/144Hz :)

For Tomb Raider I locked it at 60fps at 1440p; VXAO destroys that perfect balance.

I think two more years of GPU evolution and we've got ourselves a new winner in AO tech.

Get a G-Sync monitor and get the most out of your 980 Ti.
 

Neo_Geo

Banned
They put out another version of the drivers shortly after the ones that caused the issues. I don't know if it fixes the issues though.

The new certified drivers didn't fix the issue for me; I'm still experiencing freezes while idling when two monitors are attached.
 

Bebpo

Banned
Hmmm, so since Maxwell GPUs can use the DX12 feature, does that mean Maxwell GPUs are DX12 compatible after all?

I remember when they came out, people were unsure whether DX12 compatibility was just marketing speak, and thought that Pascal would be the first real DX12 cards on the market.
 

Kezen

Banned
Hmmm, so since Maxwell GPUs can use the DX12 feature, does that mean Maxwell GPUs are DX12 compatible after all?

I remember when they came out, people were unsure whether DX12 compatibility was just marketing speak, and thought that Pascal would be the first real DX12 cards on the market.

Maxwell GPUs were always DX12 capable. You must be confusing this with something else entirely.
Maxwell is feature level 12_1.
 

seph1roth

Member
Hmmm, so since Maxwell GPUs can use the DX12 feature, does that mean Maxwell GPUs are DX12 compatible after all?

I remember when they came out, people were unsure whether DX12 compatibility was just marketing speak, and thought that Pascal would be the first real DX12 cards on the market.

The first real DX12 GPUs are AMD's GCN architecture. Take a look at Hitman 2016...
 

seph1roth

Member
None of the current GPUs are fully DX12 featured.

AMD has a big advantage with async compute though, since it's a feature widely used in the current-gen consoles.

What's worse: a technique that even old GPUs like the ones in the current-gen consoles can use, but not Nvidia with its "new" Maxwell architecture...
 

Arkanius

Member
What's worse: a technique that even old GPUs like the ones in the current-gen consoles can use, but not Nvidia with its "new" Maxwell architecture...

I doubt even Pascal will support async compute like GCN does. I bet only with Volta will Nvidia introduce something that allows both graphics and compute queues at the same time.

Nvidia had their hardware optimized for DX11 and serialized workloads. They were great at that, and tailored for it. Suddenly DX12 and Vulkan hit, and AMD's long game of tailoring their uArchs for the future pays off...

Nvidia will suffer a bit in the next two years, in my opinion, if DX12 and Vulkan catch on like wildfire. It might not happen because it would fuck over 80% of the current market share though...
 

seph1roth

Member
I doubt even Pascal will support async compute like GCN does. I bet only with Volta will Nvidia introduce something that allows both graphics and compute queues at the same time.

Nvidia had their hardware optimized for DX11 and serialized workloads. They were great at that, and tailored for it. Suddenly DX12 and Vulkan hit, and AMD's long game of tailoring their uArchs for the future pays off...

Nvidia will suffer a bit in the next two years, in my opinion, if DX12 and Vulkan catch on like wildfire. It might not happen because it would fuck over 80% of the current market share though...

That's why AMD is a better option than Nvidia, at least for the next two years.

Sharing the same architecture with the consoles, introducing new features like async compute, and even HBM2, is the real deal... even if they don't have GameWorks or 80% of the market.

The facts are on the table; let's see how Nvidia reacts. Their "auto-pilot" mode probably will not work in the near future.

Let's see if AMD launches a Fury X v2 with Polaris this year; that will be my next purchase for sure.
 

Kezen

Banned
Even with the DX12 update I don't think this game makes use of async compute, or at least not heavily.

It does use it, but they have not specified to what extent.

Another big feature, which we are also using on Xbox One, is asynchronous compute. This allows us to re-use GPU power that would otherwise go to waste, and do multiple tasks in parallel
http://tombraider.tumblr.com/post/140859222830/dev-blog-bringing-directx-12-to-rise-of-the-tomb
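For anyone curious what "asynchronous compute" actually means at the API level, here's a minimal D3D12 sketch (not the game's code, just an illustration of the queue setup; the helper name is mine): work submitted to a second, compute-type queue is allowed to overlap with the graphics queue.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Hypothetical helper: creates a graphics queue plus a separate compute queue.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));

    // Command lists submitted to computeQueue may run concurrently with the
    // graphics queue; whether they actually do is up to the hardware/driver,
    // which is exactly where the GCN vs. Maxwell debate comes from.
    // Cross-queue synchronization is done with ID3D12Fence.
}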
 

dr_rus

Member
The first real DX12 GPUs are AMD's GCN architecture. Take a look at Hitman 2016...

The first "real" DX12 architecture is Radeon 7790's GCN. GCN 1.0 do not support FL12_0 and as such is not a DX12 architecture.

And Maxwell 2 is above the latest GCN version in its DX12 features. No need to take a look at any AMD sponsored game. All DX12 GPUs support async compute.
 

dr_rus

Member
GCN 1.0 cards do support FL12_0.

Really? So this is wrong?

[image: DX12 feature level support table]
 

Brandon F

Well congratulations! You got yourself caught!
Single 970 here with 16GB RAM and an i7 6700K @ 4.5GHz (OC), on 362.00 Nvidia drivers. This is at 2560x1080 (21:9).

With everything set to 'Very High' I get ~30fps with DX11 + VXAO, though the performance benchmark shows it can range from the low 20s to the 40s depending on the zone. Bumping up to native 3440x1440 I average in the unplayable teens.

DX12 with HBAO+ doesn't fare much differently. Guess I just need to suck it up and lower my visual settings across the board with a 970 to reach that 60fps level. Pascal can't come soon enough.
 

Tubie

Member
Just tried VXAO and it drops my FPS by about 15 on average; doesn't seem like a good trade for what it does.

I guess we need the 1000 series (or whatever they end up calling them) Nvidia cards to come out to really take advantage of DX12 features.
 

tuxfool

Banned

Yeah. I see a lot of DX12 benchmarks with the 280X, which is a GCN 1.0 card. I mean, it has to be, because you can't support DX12 without FL12_0. Fermi cards are theoretically supported but I don't know if Nvidia has added support for those yet.

e: That table is just misleading. Those feature levels are basically the ones the card was targeted to support, but 11_1, for example, exposes the same hardware features as 12_0. However, you have oddities like those 11_0 Nvidia cards which also support DX12. I was under the impression that 11_1 did define a few hardware features over 11_0, but those may have just been driver features.
 

dr_rus

Member
Yeah. I see a lot of DX12 benchmarks with the 280X, which is a GCN 1.0 card. I mean, it has to be, because you can't support DX12 without FL12_0. Fermi cards are theoretically supported but I don't know if Nvidia has added support for those yet.
You can support DX12 with the FL11_0 feature level. If your code doesn't use anything from higher feature levels, or if you've implemented a fallback to FL11_0, you're good to go with the D3D12 runtime on any FL11_0 card, including the 280X. Keplers are all FL11_0, and they are all benchmarked in whatever DX12 renderers we have at the moment.
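A rough sketch of the fallback described above (the function name is mine, not from any particular engine): ask D3D12CreateDevice for the highest feature level your renderer can use and walk down to FL11_0.

#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Try feature levels from highest to lowest; any adapter that gets a device
// created here "supports DX12" as far as the runtime is concerned.
ComPtr<ID3D12Device> CreateDeviceWithFallback(IDXGIAdapter1* adapter,
                                              D3D_FEATURE_LEVEL& chosen)
{
    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_12_1,   // Maxwell 2
        D3D_FEATURE_LEVEL_12_0,   // GCN 2+ (7790 onward)
        D3D_FEATURE_LEVEL_11_1,   // GCN 1.0
        D3D_FEATURE_LEVEL_11_0,   // Kepler
    };
    ComPtr<ID3D12Device> device;
    for (D3D_FEATURE_LEVEL level : levels) {
        if (SUCCEEDED(D3D12CreateDevice(adapter, level, IID_PPV_ARGS(&device)))) {
            chosen = level;
            break;
        }
    }
    return device; // null if the adapter can't even do FL11_0
}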

That table is just misleading. Those feature levels are basically the ones the card was targeted to support, but 11_1, for example, exposes the same hardware features as 12_0.
This table is based on data from MS's own capability checker, as far as I'm aware. And prior to that, the guy who maintains the page wrote his own checker which showed basically the same thing - GCN1 is FL11_1; FL12_0 starts with GCN2/7790. The difference is small, obviously - only one Tiled Resources tier below the spec - but it keeps GCN1 cards from being FL12_0 compatible.
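That kind of checker boils down to a single runtime query: create a device at the minimum level, then ask for the highest level the adapter actually reports. A minimal sketch (my own, not the checker's code):

#include <d3d12.h>

// Returns the highest feature level the device reports, e.g. FL11_1 on
// GCN 1.0 and FL12_0 on GCN 2 / the 7790.
D3D_FEATURE_LEVEL QueryMaxFeatureLevel(ID3D12Device* device)
{
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS info = {};
    info.NumFeatureLevels        = UINT(sizeof(requested) / sizeof(requested[0]));
    info.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &info, sizeof(info));
    return info.MaxSupportedFeatureLevel;
}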

Btw, is Polaris a fully new technology or a revision of the GCN architecture, like a GCN 1.3?

Polaris is a GPU architecture name which includes the GCN4 multiprocessor architecture. So it's a revision, but you could call Maxwell a revision of Fermi as well - it's hard to draw many conclusions from the fact that Polaris is built on the same GCN basis.
 

tuxfool

Banned
This table is based on data from MS's own capability checker, as far as I'm aware. And prior to that, the guy who maintains the page wrote his own checker which showed basically the same thing - GCN1 is FL11_1; FL12_0 starts with GCN2/7790. The difference is small, obviously - only one Tiled Resources tier below the spec - but it keeps GCN1 cards from being FL12_0 compatible.

Yup, you're right, I do get confused about the feature levels. Honestly, this feature level stuff is a bit inexact; we'd probably be better served by Vulkan-style capability checking. Though I suppose there is an argument to be made for both.
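For what it's worth, D3D12 does expose Vulkan-style per-capability checks alongside the feature level umbrella. A sketch (function name is mine) of querying the exact cap dr_rus mentioned, the one that keeps GCN 1.0 below FL12_0:

#include <d3d12.h>

// FL12_0 requires Tiled Resources tier 2; GCN 1.0 only reports tier 1,
// which is the "one tier below the spec" difference mentioned above.
bool MeetsFL12_0TiledResourcesReq(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
        return false;
    return options.TiledResourcesTier >= D3D12_TILED_RESOURCES_TIER_2;
}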
 

seph1roth

Member
Polaris is a GPU architecture name which includes the GCN4 multiprocessor architecture. So it's a revision, but you could call Maxwell a revision of Fermi as well - it's hard to draw many conclusions from the fact that Polaris is built on the same GCN basis.

And Pascal?
 

wbEMX

Member
[benchmark screenshot]


Benchmarks seem to run a bit better, but actual gameplay runs a little worse.
Settings: 1920x1080, High preset except for Depth of Field, which is set to Very High. Pure Hair is also on, and I use HBAO+. Hovers around 50-60fps; runs a bit worse in the Geothermal Valley.
Specs: i7-4790K @ 4.2GHz; 8GB DDR3 1333MHz; ASUS GTX 970 STRIX OC
 

dr_rus

Member
This is so confusing. I swear PC gaming is getting way too complicated with all this DX12 stuff. What happened to just buying a powerful GPU and being future-proofed to enjoy every graphical feature for 3-4 years?

This is mostly for developers though, and even for them the real choices are FL11_0 (because it will run on any h/w back to the GTX 680) and FL12_0 (because this is what both XBO and PS4 have at the h/w level in their GPUs).

So far there is no indication that any game coming out this year or even next won't run on FL11_0 h/w, and whatever usage of higher levels we have is contained to either IHV-specific effects like VXAO and HFTS (which can be implemented at lower feature levels with a bigger performance hit) or just more performance at the same graphics quality.

So nothing really happened - with a powerful modern GPU you are future-proofed for a couple of years at least.
 
This is so confusing. I swear PC gaming is getting way too complicated with all this DX12 stuff. What happened to just buying a powerful GPU and being future-proofed to enjoy every graphical feature for 3-4 years?

This is how it's always been. Hell, it used to be MORE complicated years ago with older DX versions.

As a consumer, all you need to know is whether the card supports DX12. Any card you buy today supports DX12, end of story, and you'll be good to go for a few years. Unless you're a developer you don't need to worry about the technical minutiae of feature levels, supported capabilities within a feature level, etc. Those exist so developers can know what sort of capabilities they can expect to use in their games.
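And that consumer-level question ("does this card do DX12 at all?") is basically one call against the runtime. A minimal sketch: passing null for the output device just asks whether the adapter can reach the minimum feature level, without creating anything.

#include <d3d12.h>
#include <dxgi.h>

// No device is actually created when the last argument is null; the call
// simply reports whether the adapter supports at least FL11_0 under D3D12.
bool SupportsDX12(IDXGIAdapter* adapter)
{
    return SUCCEEDED(D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0,
                                       __uuidof(ID3D12Device), nullptr));
}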
 

Exactly.

Since optimization on PC is already a disaster because of too many configurations, let's make it even more complicated with more incompatibilities: having to support DX11 + DX12 + every graphics card's unknown quirks + different feature sets.

That's why earlier in this thread (and for months) I've been preaching that the result of DX12 will be WORSE optimization and worse performance.
 

Durante

Member
This is how it's always been. Hell, it used to be MORE complicated years ago with older DX versions.
Yeah, the generation of PC gamers who joined in the twilight years of the last console cycle has a really warped perspective of what "complicated" means.

Back in the early '00s, every other year you'd have entirely new hardware features enabling effects completely different from everything that had come before. Those were the days. Now a 2+ year old architecture might not support some feature which makes some particular effect slightly more accurate and/or efficient, and people panic.
 

Kezen

Banned
This is so confusing. I swear PC gaming is getting way too complicated with all this DX12 stuff. What happened to just buying a powerful GPU and being future-proofed to enjoy every graphical feature for 3-4 years?

That never existed. Newer games can always tax high-end GPUs to the point where max settings at 60fps are not realistic.

And it better stay that way.
 

Kezen

Banned
Middleware such as UE4 will become more important to maintain good code paths for every uArch.

Yes, it is obvious engine vendors must be very happy about the rise of low-level APIs on PC. It only makes their proposition more attractive, for a number of reasons.
 

DieH@rd

Banned
Any performance benchmarks for those with AMD cards?

The early batch of DX12 games will be more about utilizing the resources of CPUs that were unable to cope with the single-threaded focus of DX11.

The better question to ask is, "How does the game run on weaker CPUs?"
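A rough sketch of why that is (my own illustration, not any shipping engine's code): DX12 lets several threads record command lists at once, where DX11's immediate context funneled nearly everything through one thread.

#include <d3d12.h>
#include <thread>
#include <vector>

// Each worker thread records into its own command list/allocator pair (assumed
// already created and in the closed state), so no locking is needed; everything
// is then submitted to the queue in a single call.
void RecordInParallel(ID3D12CommandQueue* queue,
                      std::vector<ID3D12GraphicsCommandList*>& lists,
                      std::vector<ID3D12CommandAllocator*>& allocators)
{
    std::vector<std::thread> workers;
    for (size_t i = 0; i < lists.size(); ++i) {
        workers.emplace_back([&, i] {
            lists[i]->Reset(allocators[i], nullptr);
            // ... record draw calls for this thread's slice of the scene ...
            lists[i]->Close();
        });
    }
    for (std::thread& t : workers) t.join();

    queue->ExecuteCommandLists(
        static_cast<UINT>(lists.size()),
        reinterpret_cast<ID3D12CommandList* const*>(lists.data()));
}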
 

Mohasus

Member
Update: As of 16-03-2016 the patch is also available on the Windows 10 Store and will be downloaded automatically for users.

736.1 MB, it will take a while.

Specs:
3570k
GTX 970
16GB 2133MHz
1440p display

DX11 high preset: [benchmark screenshot]

DX12 high preset: [benchmark screenshot]

DX11 highest settings (except for textures and AA): [benchmark screenshot]

DX12 highest settings (same as above): [benchmark screenshot]


The game crashes sometimes when fast traveling.
 

Sanctuary

Member
I'm not agreeing with him, but I think that comparison is comparing ambient occlusion at the 'On' setting to 'VXAO', rather than HBAO+ to VXAO. The difference is still going to be there of course, but that particular example doesn't address his point.

That's what I was seeing too. It didn't look like HBAO+ at all. In fact, I could barely see any AO at all, as if it were a comparison between no AO and HBAO+.
 