
The Tomorrow Children GDC Tech presentation

wachie

Member
Thanks for posting this.

asynctraos.png

They also talk about how using async compute gives them a performance increase of about 18%.
 

jbw

Banned
So is this game going to allow for single player? Or will it require you to go online and play with others?

Sorry for the newbie question, but I have searched and can't find an answer.
 

HiVision

Member
So is this game going to allow for single player? Or will it require you to go online and play with others?

Sorry for the newbie question, but I have searched and can't find an answer.

It is an online game but not "multiplayer" in the general sense of the word. If you've played Dark Souls it'll help you understand where we are coming from.

Or maybe even Animal Crossing as we have elements of that too. This game is very difficult to put in a single genre which I think is one of the things that makes it exciting.
 

Shin-Ra

Junior Member
I thought the wobbly edges were a style choice so no worries there!


The main things (visually) that bothered me during the alpha were specular aliasing, the sometimes chuggy framerate and lack of feedback when digging so I'm keen to see how it's progressed.
 

HiVision

Member
I thought the wobbly edges were a style choice so no worries there!



The main things (visually) that bothered me during the alpha were specular aliasing, the sometimes chuggy framerate and lack of feedback when digging so I'm keen to see how it's progressed.

All those are addressed. :)
The framerate wasn't chuggy in the alpha though, except for the half-second or so when a new island appeared, and that is fixed of course. You might have been encountering bugs, of course (it has been heard of!).
 

Shin-Ra

Junior Member
All those are addressed. :)
The framerate wasn't chuggy in the alpha though, except for the half-second or so when a new island appeared, and that is fixed of course. You might have been encountering bugs, of course (it has been heard of!).
I couldn't pinpoint the cause so it could've been something out of view. Sometimes it seemed to be when players finished digging and a hole opened up.
 

HiVision

Member
Did you actually get into the game or were you watching a stream? Some streams give an impression of chuggy-ness when there isn't any and we made a note of that.

Anyway, since then even more optimizations are in place so even the occasional cases we know about are as smooth as butter.
 

Shin-Ra

Junior Member
I got in to play, but I trust you'll polish things up nicely for the 'shipping' version despite the ambitious rendering goals.

I'll be taking more pictures of beautiful polygonal handles and stuff. ;)

 

okonomiyonda

Neo Member
some more clarifications (just on terminology)

To oversimplify a bit, the compute units (CUs) are the things that run your shader. So whether you are doing graphics shaders or compute, they are all executing on the compute units. The async compute pipes are there to give you a way to supply more work to the GPU. Imagine your graphics work looks like this

  1. write to a texture
  2. wait for it to finish
  3. use texture as input for the next shader

In a naive implementation, that second shader can't run until the first finishes. That means towards the end there will be a lot of GPU hardware sitting around idle until the previous shader finishes. The compute pipes give you a way of supplying more work to the GPU that can fill in the gaps left by graphics.
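The gap-filling above can be sketched with a toy timeline simulation. This is purely illustrative — the pass timings, stall times, and the `frame_time` helper are all made up for the example, not the game's actual scheduler:

```python
# Toy model: GPU time is wasted when a graphics pass must stall waiting on
# a dependency; an async compute queue can fill that idle gap instead.
# All numbers below are invented for illustration.

def frame_time(graphics_passes, compute_work, async_enabled):
    """Return total GPU-busy milliseconds for one frame.

    graphics_passes: list of (work_ms, idle_ms) pairs, where idle_ms is
    time the graphics queue stalls waiting on the previous pass.
    compute_work: total ms of independent compute jobs this frame.
    """
    total = 0.0
    remaining_compute = compute_work
    for work_ms, idle_ms in graphics_passes:
        total += work_ms
        if async_enabled:
            # Async compute fills the stall; only unfilled idle time counts.
            filled = min(idle_ms, remaining_compute)
            remaining_compute -= filled
            total += idle_ms - filled
        else:
            total += idle_ms
    # Any compute that didn't fit into the gaps runs serially afterwards.
    return total + remaining_compute

passes = [(8.0, 2.0), (10.0, 3.0), (6.0, 1.0)]  # (work, stall) per pass
print(frame_time(passes, compute_work=5.0, async_enabled=False))  # 35.0
print(frame_time(passes, compute_work=5.0, async_enabled=True))   # 25.0
```

Same total work either way; the async version just hides the compute inside the graphics stalls instead of paying for it at the end.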

Sorry if that's all a bit basic and oversimplified, but I'm not sure what level of detail is best to post.

Regarding CPU sync, we have almost none. We use the CPU to kick GPU work and the CPU never stalls waiting for the GPU during our "frame". We do have some weirdness where an end of pipe interrupt wakes up our vsync thread which sleeps waiting for vsync and then writes a label allowing the GPU to continue, but that's just because work after the label may write to the current display buffer and we want to flip away from it before continuing. We basically tried to make everything as async as possible to avoid render and main thread involvement of any kind and minimize stalls.
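The "write a label at vsync" trick can be modeled in miniature: the GPU stream blocks on a label before touching the current display buffer, and a dedicated vsync thread writes that label once the flip has happened. Here a `threading.Event` stands in for the GPU label — none of this is real GPU code, just a sketch of the synchronization shape:

```python
import threading
import time

# The Event plays the role of the GPU label; the sleep stands in for
# "sleeping until vsync". Names and timings are invented for illustration.
label = threading.Event()
results = []

def gpu_stream():
    results.append("frame work done")   # safe: doesn't touch display buffer
    label.wait()                        # stall until the label is written
    results.append("overwrite display buffer")

def vsync_thread():
    time.sleep(0.01)                    # pretend to sleep until vsync
    results.append("flipped away from buffer")
    label.set()                         # "write the label": GPU may continue

g = threading.Thread(target=gpu_stream)
v = threading.Thread(target=vsync_thread)
g.start(); v.start(); g.join(); v.join()
print(results)
```

The point is that only the tiny vsync thread ever blocks — the render and main threads never wait on the GPU at all.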

Finally, while you can build command buffers and kick them from the GPU if you're clever about it, that is a topic for another day.
 

okonomiyonda

Neo Member
They also talk about how using async compute gives them a performance increase of about 18%.

I prefer to think in milliseconds rather than percentages. On a 33.333 ms frame, we saved roughly 6 ms by using the async compute pipes. That's where the 18% comes from. Those savings went up to 10 ms on our stress test level. Really, if you're working on an AMD GCN GPU, I can't recommend looking into async compute enough!
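The arithmetic behind that percentage is just the savings over the frame budget:

```python
# 30 fps gives a ~33.333 ms frame budget; 6 ms reclaimed via async compute.
frame_ms = 1000.0 / 30
saved_ms = 6.0
print(f"{saved_ms / frame_ms:.0%}")  # -> 18%
```

By the same math, the 10 ms saved on the stress test level works out to roughly 30% of the frame.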
 
This game is so watchable on streams. There's something relaxing about watching it because it seems to always be organically changing, never the same.

I can't wait to play it myself though. It's an instant purchase for me.
 

stryke

Member
some more clarifications (just on terminology)

I already understood the concept but it's great to see some affirmation.

How flexible is async compute? Is it something malleable enough to implement even late in game dev or do you have to think about it from day one of preproduction?

And as a developer, are you hearing or seeing any other devs using it or are we still far out from it being common practice?
 

okonomiyonda

Neo Member
How flexible is async compute? Is it something malleable enough to implement even late in game dev or do you have to think about it from day one of preproduction?

Depending on what kind of work you are doing and what dependencies it has, it's possible to hack it in later in the dev cycle and still get very good results. But like most things in life, you can often get far better results if you plan for it upfront.

And as a developer, are you hearing or seeing any other devs using it or are we still far out from it being common practice?

We were super early adopters (guinea pigs) of both compute and async compute, and at that time not many devs were using it. However, from talking to friends at other companies, it seems people are starting to discover it now and I think you'll see it become much more common this year.
 
some more clarifications (just on terminology)

To oversimplify a bit, the compute units (CUs) are the things that run your shader. So whether you are doing graphics shaders or compute, they are all executing on the compute units. The async compute pipes are there to give you a way to supply more work to the GPU. Imagine your graphics work looks like this

  1. write to a texture
  2. wait for it to finish
  3. use texture as input for the next shader

In a naive implementation, that second shader can't run until the first finishes. That means towards the end there will be a lot of GPU hardware sitting around idle until the previous shader finishes. The compute pipes give you a way of supplying more work to the GPU that can fill in the gaps left by graphics.

Sorry if that's all a bit basic and oversimplified, but I'm not sure what level of detail is best to post.

Regarding CPU sync, we have almost none. We use the CPU to kick GPU work and the CPU never stalls waiting for the GPU during our "frame". We do have some weirdness where an end of pipe interrupt wakes up our vsync thread which sleeps waiting for vsync and then writes a label allowing the GPU to continue, but that's just because work after the label may write to the current display buffer and we want to flip away from it before continuing. We basically tried to make everything as async as possible to avoid render and main thread involvement of any kind and minimize stalls.

Finally, while you can build command buffers and kick them from the GPU if you're clever about it, that is a topic for another day.

That's fascinating! Thanks!
I can see how big this can be from now on!
 

RoboPlato

I'd be in the dick
Depending on what kind of work you are doing and what dependencies it has, it's possible to hack it in later in the dev cycle and still get very good results. But like most things in life, you can often get far better results if you plan for it upfront.



We were super early adopters (guinea pigs) of both compute and async compute, and at that time not many devs were using it. However, from talking to friends at other companies, it seems people are starting to discover it now and I think you'll see it become much more common this year
Do you think we'll see it implemented at all in multiplat games, or will it strictly be in exclusives?
 

HiVision

Member
I got in to play, but I trust you'll polish things up nicely for the 'shipping' version despite the ambitious rendering goals.

I'll be taking more pictures of beautiful polygonal handles and stuff. ;)

Lol, those handles are particularly pretty :)
 

okonomiyonda

Neo Member
Do you think we'll see implemented at all in multiplat games or will it strictly be in exclusives?

Oh, totally multiplatform. You'd be insane not to use it. Two out of the three major gaming platforms use AMD GCN now, so if you're targeting Xbox and PS4 (sorry WiiU devs!) you can do similar implementations. Supposedly the Xbox One has 4x fewer compute pipes* than PS4, but I would expect that they are totally exposed** to developers and usable.

* Based on publicly available info, but I'm not an Xbox dev so the info could be wrong
** Once again, not an Xbox One dev, so best to let them explain
 

TalonJH

Member
I really enjoyed the alpha. Just take those damn slide puzzles out, or ship the game with a "slide puzzles for dummies" manual.



Dev has curbed my fear of the slide puzzles though.
 

c0de

Member
Maybe next year we can convince Jero to do a GDC talk about the landscape tech. That's actually just as interesting as the lighting, IMHO.

Oh, welcome to the community! Now we have another user who knows his stuff but probably won't answer because NDA'd ;)
 

R_Deckard

Member
Oh, totally multiplatform. You'd be insane not to use it. Two out of the three major gaming platforms use AMD GCN now, so if you're targeting Xbox and PS4 (sorry WiiU devs!) you can do similar implementations. Supposedly the Xbox One has 4x fewer compute pipes* than PS4, but I would expect that they are totally exposed** to developers and usable.

* Based on publicly available info, but I'm not an Xbox dev so the info could be wrong
** Once again, not an Xbox One dev, so best to let them explain

Really interesting stuff and thanks for sharing.

On the PS4, can you use both GCPs to handle render and compute jobs, or are they split?
 