
Monday, 21 November 2016

Guildford Game Jam 2016 - Follow-up

Tidy Up

After finishing the game-jam there were a few things I needed (wanted) to do before uploading it for anyone to play.
  1. Rendering stutter - performance on lower spec machines was terrible.
  2. Collision and physics - so you could walk around the maze, confined to the rooms.
  3. Additional detail - the walls were a bit plain and were supposed to be brickwork.
  4. Difficulty tweaks - the maze size should have been linked to difficulty and I wasn't sure if it was fair.
I didn't think it was unreasonable to try to polish off these issues, as each was either a bit of a show-stopper or had already been started during the jam.

Render Performance

It seemed that slower machines, particularly ones with only a couple of CPU cores, suffered badly from low frame-rates and bad stalls whilst moving around the maze.  I had a few ideas about where the problem might be, but really needed to gather more intel.  There were several avenues of investigation open to me:
  1. I started with the Remotery integration already in Apparance, to get some ideas what was going on.
  2. Running the GPU and CPU profiling tools in Visual Studio.
  3. Adding hard-coded timing and logging to suspect areas of the engine (a sketch of the sort of helper I mean follows the list).
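
As an aside, this is a minimal sketch of the kind of hard-coded timing helper I have in mind, assuming a Win32/C++ codebase; the class and its names are illustrative, not actual Apparance engine code.

    #include <windows.h>
    #include <cstdio>

    // Scoped timer: logs elapsed milliseconds for a named block on destruction.
    class ScopedTimer
    {
    public:
        explicit ScopedTimer( const char* name ) : m_name( name )
        {
            QueryPerformanceFrequency( &m_freq );
            QueryPerformanceCounter( &m_start );
        }
        ~ScopedTimer()
        {
            LARGE_INTEGER end;
            QueryPerformanceCounter( &end );
            double ms = 1000.0*double(end.QuadPart - m_start.QuadPart)/double(m_freq.QuadPart);
            char buf[128];
            sprintf_s( buf, "%s: %.3f ms\n", m_name, ms );
            OutputDebugStringA( buf );   // appears in the debugger output window
        }
    private:
        const char*   m_name;
        LARGE_INTEGER m_freq;
        LARGE_INTEGER m_start;
    };

    // Usage: drop one into a suspect function and watch the numbers.
    void UpdateCamera()
    {
        ScopedTimer timer( "UpdateCamera" );
        // ...suspect code under measurement...
    }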

I wasn't having much luck, and after a few red herrings I wondered what else I had changed or added in the past that could be causing such issues.  It was a tricky issue to diagnose as there are various systems between the camera movement and the rendering that could be at fault.  Performance on my main dev machine (many cores) was fine, so I suspected something to do with the threaded rendering or the main message loop could be to blame.  To resolve this I tried a few things out of desperation:
  1. Boost the priority of the render thread.
  2. Boost the priority of the main thread.
  3. Run the renderer on the main thread (single-threading model).
  4. Switch the message loop to non-blocking (needed for 3).
I made these command-line switchable so I could try different combinations on a couple of machines.
The priority boosts made no noticeable difference, but running the renderer on the main thread did help on machines with only a couple of cores.
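
For illustration, the experiments looked roughly like this in Win32 terms; the flag names and structure here are my own invention, not actual Apparance code.  The key details are SetThreadPriority for the boosts, and PeekMessage instead of the blocking GetMessage for the single-threaded case.

    #include <windows.h>

    // Hypothetical command-line switches (names illustrative).
    bool g_boostRenderThread = false;  // -boostrender
    bool g_boostMainThread   = false;  // -boostmain
    bool g_singleThreaded    = false;  // -singlethread

    // Raise a thread's priority; returns false if the OS refused.
    bool BoostThread( HANDLE thread )
    {
        return SetThreadPriority( thread, THREAD_PRIORITY_ABOVE_NORMAL ) != 0;
    }

    // Non-blocking message pump, needed when the renderer shares the main
    // thread: GetMessage would block and starve rendering, PeekMessage won't.
    bool PumpMessages()
    {
        MSG msg;
        while (PeekMessage( &msg, nullptr, 0, 0, PM_REMOVE ))
        {
            if (msg.message == WM_QUIT)
                return false;
            TranslateMessage( &msg );
            DispatchMessage( &msg );
        }
        return true;
    }

    void MainLoop()
    {
        if (g_boostMainThread)
            BoostThread( GetCurrentThread() );
        while (PumpMessages())
        {
            // update simulation...
            if (g_singleThreaded)
            {
                //RenderFrame();   // renderer runs here instead of its own thread
            }
        }
    }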
My investigations eventually led to two things that seemed to explain my woes.
  1. My slightly hacky camera smoothing system was completely broken.
  2. I was logging synth errors to debug output.
The first was only showing up on slower machines, so I just disabled it (something to revisit later).  The second should only have affected development and debug builds of the engine, or running under the debugger, but I disabled it anyway.
Once all these had been fixed and tweaked, performance was much better and so I moved on to collision.

Collision and Physics

Aware that implementing a physics engine is no lightweight task, I tried to keep it focused and simple.  I had implemented this sort of collision successfully before, for an experimental Quake clone built in .Net and set in a cube-based world.  The collision requirements there were sufficiently constrained that it wasn't too difficult to do.  Based on this previous experience I decided I could get it done quickly.
After an initial late-night foray into hacking together a solution during the jam itself I decided to try and finish this implementation.
My proposed design went like this:
  1. Meshes can be tagged with a special material that signifies collision properties.
  2. The material will normally be hard-wired not to render anything, but can be enabled to visualise the collision surfaces.
  3. The engine will gather together any meshes tagged in this way that are within a certain bounds around the camera.
  4. A custom camera controller (similar to the free-cam one) will handle FPS-style player movement.
  5. This controller will hook into the engine to request collision information.
  6. Based on this information, collision tests and simulation will be performed to give the impression of solid walls and floors.
  7. A swept sphere-triangle collision test will be used to implement this (a simplified sketch follows the list).
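
For the curious, here is a much-simplified C++ sketch of the face-contact part of a swept sphere-triangle test; a real implementation also needs the edge and vertex cases, which is where most of the fiddliness (and my jitter problems) lived.  All names are illustrative, not actual engine code.

    #include <cmath>

    struct Vec3 { float x, y, z; };
    static Vec3  operator-( Vec3 a, Vec3 b ) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
    static Vec3  operator+( Vec3 a, Vec3 b ) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
    static Vec3  operator*( Vec3 a, float s ) { return { a.x*s, a.y*s, a.z*s }; }
    static float Dot( Vec3 a, Vec3 b ) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  Cross( Vec3 a, Vec3 b )
    {
        return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    }

    // Sweep a sphere (centre, radius) moving by 'motion' against one triangle.
    // Returns true with the fraction of the motion at first contact in 'tHit'.
    // Face contact only; edge and vertex contacts are omitted for brevity.
    bool SweptSphereTriangle( Vec3 centre, float radius, Vec3 motion,
                              Vec3 a, Vec3 b, Vec3 c, float& tHit )
    {
        Vec3 n = Cross( b - a, c - a );              // triangle normal
        float len = std::sqrt( Dot( n, n ) );
        if (len <= 0.0f)
            return false;                            // degenerate triangle
        n = n * (1.0f/len);

        float dist  = Dot( n, centre - a );          // signed distance to plane
        float speed = Dot( n, motion );              // approach rate along normal
        float side  = (dist >= 0.0f) ? 1.0f : -1.0f; // work on the sphere's side
        dist  *= side;
        speed *= side;
        if (speed >= 0.0f)
            return false;                            // moving away or parallel
        tHit = (dist <= radius) ? 0.0f               // already touching the plane
                                : (dist - radius)/-speed;
        if (tHit > 1.0f)
            return false;                            // contact beyond this step

        // point where the sphere surface touches the plane
        Vec3 p = centre + motion*tHit - (n*side)*radius;

        // p must lie on the inner side of all three edges to hit the face
        if (Dot( Cross( b - a, p - a ), n ) < 0.0f) return false;
        if (Dot( Cross( c - b, p - b ), n ) < 0.0f) return false;
        if (Dot( Cross( a - c, p - c ), n ) < 0.0f) return false;
        return true;
    }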
I had most of this working during the game-jam, but didn't get the actual motion simulation and collision working well enough.  This is what I spent most of the follow-up time on.  I got quite close, with floor and wall collision generally holding up smoothly, but the wall collision was still jittery and there were plenty of failure cases where you could fall through the geometry.  I also found that the collision geometry was going to need quite different handling from regular geometry with regard to detail levels.  This would be a lot of work.
In the end I decided that I was flogging a dead horse and should leave it out.  I didn't want to spend the additional time at this stage.

Additional Detail

The walls I had used to build the maze rooms were based on a brick wall procedure written ages ago, which did include additional brickwork detail, even down to modelled bricks with mortar between them.  However, it was implemented before the current detail management system was in place, while I was still working out how detail should be controlled.  The wall procedure had a manual detail level control that you had to drive at the top level.  Unfortunately this didn't integrate well with the block-based detail control and couldn't be used.  I would have had to re-do the brick wall procedures from scratch, and I just didn't have the time.

Difficulty Tweaks

This was one thing I did manage to have a go at, and the maze size now increases with difficulty setting.

Release

I had already set up the release build generation process, so it was simple to package up a build for upload.  The final zip file was almost exactly 1 MB, which was a nice size to show off the compression that procedural generation brings.
The game is available to play via:
I also submitted it to The Procedural Generation Jam that was going on at the time:
It even got included in a Let's Play of all the entries by Jupiter Hadley.
https://www.youtube.com/watch?v=-NtPCzF2mZI (it's the first one at 00:54)
Download it and have a play!

(Oh, but don't forget to pretend the walls are solid :O)

Sunday, 3 July 2016

The Rendering System

Previously we covered how geometry is created.  This time we will look at how it is managed and rendered.

Models

It was mentioned in my last post that geometry is built into fixed-size buffers.  This limits the amount of geometry that can be created by a single procedure.  For a small object, or one of low detail (farther away), this may not be a problem, but if we are to build huge, detailed worlds then it most certainly is.  To overcome this, a number of systems and techniques are used.

Refinement

Models are managed within a spatial octree, each node being responsible for any models that fit reasonably within its own bounds.  Smaller models are managed by the smaller nodes deeper in the octree.
During the synthesis process, the sub-procedures used (and any bounding information that can be obtained from them) are analysed, and in certain cases stored.  The aim here is to capture a set of sub-procedures that fully represent the model built, but as smaller component parts.  These parts can then be used to build more detailed versions of parts of the whole model, and can be managed by the smaller octree nodes that are more suitably sized.  This effectively provides a way of re-synthesising successively smaller parts of any model as we need the extra detail deeper in the octree.  This 'refinement' process is driven by proximity to the viewpoint, using the deeper, more detailed model parts in areas nearer the camera.  A rough code sketch of this selection follows the figure below.
Successive octree levels, and the geometry managed by each
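
To make the idea concrete, here is a rough sketch of distance-driven refinement over an octree; the structure, names, and threshold are illustrative only, not the actual Apparance implementation.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    struct OctreeNode
    {
        Vec3        centre;
        float       halfSize;       // halves at each level down
        OctreeNode* children[8];    // null until refined
    };

    static float Distance( Vec3 a, Vec3 b )
    {
        float dx = a.x-b.x, dy = a.y-b.y, dz = a.z-b.z;
        return std::sqrt( dx*dx + dy*dy + dz*dz );
    }

    // Walk the octree deciding which nodes' models to render.  A node is
    // detailed enough when it is small relative to its distance from the
    // viewpoint; otherwise we descend into (or request re-synthesis of)
    // its children, which hold smaller parts of the model.
    void SelectDetail( OctreeNode* node, Vec3 viewpoint )
    {
        const float kDetailRatio = 0.5f;  // size/distance cut-off (made up)
        float d = Distance( node->centre, viewpoint );
        if (node->halfSize > d*kDetailRatio)
        {
            for (int i = 0; i < 8; i++)
            {
                if (node->children[i])
                    SelectDetail( node->children[i], viewpoint );
                //else: request synthesis of this child's model parts
            }
        }
        else
        {
            //render the model(s) managed at this node's detail level
        }
    }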

Authoring

Procedures do need to be built with this process in mind somewhat.  There are certainly ways to help or hinder the process and prevent the system from operating at its best, but the tools provide feedback and diagnostics to help you optimise them.  This is another area that I will dig into in more detail in another post.

Rendering

The rendering engine for Apparance has always been fairly basic as most of the work has been in proving out the procedure synthesis and detail refinement techniques.  All that the renderer needed to be able to do was render some coloured triangles with a couple of fixed light sources.  This was implemented in DirectX 9 and based on a fairly simple cube rendering sample.  Even with no materials, no texturing, and simple primitives I have been able to make quite a wide range of examples.
Small sample of results achieved with basic renderer
The renderer itself has been written to be fairly robust and flexible: it supports multiple viewports, cameras, and scenes, runs on its own thread, and handles window resizing and device loss properly.

Shaders

Driven mainly by the need to start blending between meshes of different detail levels, I decided that I needed to add shader support, and this is my current focus.
With the flexibility and power shader-based rendering brings, I will be able to implement an elegant blending system, as well as better lighting, and start experimenting with more realistic surface properties.
I decided that I should certainly allow run-time authoring of shaders, as this is an important premise of the Apparance tool philosophy.  To do this I also decided that the shader code should be procedurally constructed by the same systems the models are built with.  Not only does this mean I can easily re-use shader functions and constructs, but also arbitrary pieces of code, and it even allows parameterisation of the shader code itself.  This should have all sorts of interesting effect-creation potential.
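
As a flavour of the idea (though not how Apparance actually does it, since it uses the procedure system itself), here is a trivial sketch of building parameterised HLSL source from a reusable fragment by token substitution; everything here is illustrative.

    #include <string>
    #include <map>

    // Build pixel-shader source from a reusable fragment, substituting
    // {NAME} tokens so the generated code itself is parameterised.
    std::string BuildPixelShader( const std::map<std::string,std::string>& params )
    {
        std::string body =
            "float4 main( float3 normal : NORMAL ) : SV_Target\n"
            "{\n"
            "    float light = saturate( dot( normalize( normal ), float3( 0.5, 0.7, 0.5 ) ) );\n"
            "    return float4( {TINT} * light, 1.0 );\n"
            "}\n";

        // replace each {NAME} token with its parameter value
        for (const auto& p : params)
        {
            std::string token = "{" + p.first + "}";
            size_t pos;
            while ((pos = body.find( token )) != std::string::npos)
                body.replace( pos, token.size(), p.second );
        }
        return body;   // compile at run-time, e.g. with D3DCompile
    }

    // e.g. BuildPixelShader( { { "TINT", "float3( 1.0, 0.8, 0.6 )" } } );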

Trouble

During testing of DirectX 9 shaders I hit some nasty snags to do with background compilation of shaders during rendering, shader lifetime management, and finally a crash on ending and releasing shader resources that I couldn't resolve.  Even using my simple training app I couldn't solve the issue, and it turns out that under Windows 10 debugging and diagnostics for DirectX 9 aren't supported, so no help there.  My solution was to bite the bullet and upgrade the engine to DirectX 11, which represents a significant improvement in features and support, as well as being fully integrated into the OS and having proper debugging support.  Unfortunately this did mean learning about all the differences and writing another learning app, but it seems like a good move in the long run: I was probably going to need it at some point anyway, and DirectX 11 has some nice improvements in the way shaders are handled that it will be good to get used to.
New rendering and shader test app for DirectX 11

Graphics Fu

Eventually I am going to need some fairly fancy rendering features to show off the models properly, such as multi-texturing, advanced light sources, high quality shadows, ambient occlusion, and maybe even global illumination.  I am treating these as 'solved' problems and prioritising many other, more unique, features over them.  I am also likely to need help with the harder graphics tech and should start to involve others in the project more closely, but that will depend on how much interest I can raise in the project and whether I can find funds to build a team around it in the future. We shall see...

Next

I was going to describe my development setup a little here, but I think I'll leave it until a later post.  Next time I'll talk about the editor and how it is used to develop procedures.