Monday, 18 July 2016
Monday, 11 July 2016
For this post I'll show you round the different parts of the editor application's user interface. Next time I'll dig into the editing functionality and how to use Apparance to actually build procedures.
Why do we need an editor? Well, we are trying something very different here, in workflow, modelling paradigm, and output. These are essential parts of the Apparance concept, and building a custom editing application was the only way to meet such bespoke requirements. Some of the important features it needs are:
- Creation and management of procedures
- Data-flow based visual graph editing
- Preview of resulting procedure output
- Real-time, interactive authoring and tweaking
|The Apparance Editor|
Good design means factoring functionality out into smaller, re-usable chunks, and consequently we will need to be able to work with many procedures. At the moment, procedures are organised in a simple two-level hierarchy by Category and Name. This will probably need expanding in the future, for larger projects, but provides a way of grouping procedures together for now.
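The two-level Category/Name organisation might look something like the following sketch. This is purely illustrative; the class and method names are my own, not the actual Apparance types.

```python
from collections import defaultdict

# Hypothetical two-level procedure registry: Category -> Name -> procedure,
# mirroring how the browser panel groups procedures for display.
class ProcedureLibrary:
    def __init__(self):
        self._categories = defaultdict(dict)  # category -> {name: procedure}

    def add(self, category, name, procedure):
        self._categories[category][name] = procedure

    def get(self, category, name):
        return self._categories[category][name]

    def categories(self):
        return sorted(self._categories)

    def names(self, category):
        return sorted(self._categories[category])

# Example population; the category and procedure names are invented.
lib = ProcedureLibrary()
lib.add("Buildings", "Tower", object())
lib.add("Buildings", "House", object())
lib.add("Terrain", "Hills", object())
```

A deeper hierarchy would just replace the category key with a path, which is one way the "expanding in the future" could go.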
|Procedure/Operator browser and properties of selected procedure|
A browsing panel lists all the procedures, and as a navigation aid there is a filter box to narrow down those displayed. The fundamental operators that procedures are built from are also listed, in their own browsing panel, and can be filtered in the same way.
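The filter box behaviour can be sketched as a simple case-insensitive substring match over the listed names (an assumption on my part; the real matching rules may differ):

```python
# Minimal sketch of the browser filter: keep names containing the query,
# ignoring case. Operator names below are invented for illustration.
def filter_names(names, query):
    q = query.lower()
    return [n for n in names if q in n.lower()]

matches = filter_names(["Extrude", "Repeat", "RepeatGrid", "Bevel"], "rep")
```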
Once you create a procedure you need to start specifying the functionality within it and the connections in and out. This is done within the main area of the editor, in a scrollable, zoomable window.
|Zoomed-out overview of a large procedure in the editing window|
Often your operator graph will fit within the window, but for more complicated creations you will need to zoom out or pan around. Operators are boxes with the name of the operation at the top, inputs on the left, and outputs on the right. The procedure itself has its inputs on the left and outputs on the right too. Consequently, the natural visual 'flow of data' is from left to right, most connections and chains of functionality propagating information to the right. This doesn't mean you can't make connections in any direction and create all manner of spaghetti. Careful factoring out of messy bits into sub-procedures helps here.
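As a rough data model, each box in the graph can be thought of as an operator with named input and output pins, and each wire as a directed connection between pins. The class names here are illustrative, not the actual Apparance structures:

```python
# Illustrative operator-graph model: boxes with named inputs (drawn on the
# left) and outputs (drawn on the right), plus directed connections.
class Operator:
    def __init__(self, name, inputs, outputs):
        self.name = name
        self.inputs = list(inputs)    # pins on the left of the box
        self.outputs = list(outputs)  # pins on the right of the box

class Connection:
    def __init__(self, src_op, src_output, dst_op, dst_input):
        self.src = (src_op, src_output)  # data flows from here...
        self.dst = (dst_op, dst_input)   # ...to here, typically rightwards

# Hypothetical example: a box model fed into a move operation.
box = Operator("Box", inputs=["Size"], outputs=["Model"])
move = Operator("Move", inputs=["Model", "Offset"], outputs=["Model"])
wire = Connection(box, "Model", move, "Model")
```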
The inputs and outputs of the procedure that you specify and name here are what you will see and be able to connect to when you place your procedure down within another procedure.
|Procedure IO editing|
There is a rendering window in the corner of the editor where you can view a procedure's output. At the moment all output is 3D model geometry, and as we are targeting 3D worlds, this is all you need to see a model in place.
|The 3D preview window|
By electing to view a procedure, you are specifying the starting point of the geometry synthesis process. To do this with procedures that have inputs, you need to be able to specify their values. This is done where you edit the input connections to your procedure (see above); these are effectively the default values your procedure comes with. It means you can preview any procedure, as each comes with some starting values. These are also the values your procedure starts with at its inputs when you place it down.
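The resolution of input values for previewing could be sketched like this, with explicit overrides winning and the stored defaults filling the gaps (the structure and parameter names are assumptions for illustration):

```python
# Sketch of preview input resolution: start from the defaults stored with
# the procedure's IO spec, then apply any per-placement overrides.
def resolve_inputs(defaults, overrides=None):
    values = dict(defaults)
    values.update(overrides or {})
    return values

# Hypothetical procedure with two inputs.
defaults = {"Width": 2.0, "Height": 4.0}
previewed = resolve_inputs(defaults)                  # view with defaults
placed = resolve_inputs(defaults, {"Height": 6.0})    # placed with a tweak
```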
The 3D view-port has pretty standard camera navigation controls, with orbit, and FPS style movement as well as an auto-rotate mode for showing off a model.
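An orbit camera like the one described boils down to circling the eye around a target point; auto-rotate just advances the yaw each frame. This is a generic sketch of the maths, not the editor's actual camera code:

```python
import math

# Generic orbit-camera sketch: position the eye on a sphere of the given
# radius around the target, from yaw (horizontal) and pitch (elevation).
def orbit_eye(target, radius, yaw, pitch):
    x = target[0] + radius * math.cos(pitch) * math.sin(yaw)
    y = target[1] + radius * math.sin(pitch)
    z = target[2] + radius * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

# yaw = pitch = 0 places the eye straight down the +Z axis from the target.
eye = orbit_eye((0.0, 0.0, 0.0), 10.0, yaw=0.0, pitch=0.0)
```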
To help with construction and spatial orientation, a ground-plane grid is drawn for you. This is implemented as another procedure that can be edited just like any other if it needs customising (e.g. turn off, adjust colour/intensity, spacing, scale, etc).
To get a better look at your scene you can expand the 3D view to occupy the whole editing and browser area. This leaves the property editing panel visible (expanded to occupy the space where the 3D view was). This mode is ideal for tweaking values: simply select the operators whose inputs you want to change and switch to expanded mode.
Most editing environments include some form of property panel where the individual adjustable elements of an object are listed. The Apparance editor uses this for editing (and viewing) a number of things, such as: operator input constant values, procedure IO names and descriptions, new procedure names and descriptions, renderer settings and statistics, view-port visualisation modes (see below) and diagnostics, and grid settings.
|Property viewing and editing panel|
Most data types are fully editable, some with specific enhancements such as sliders for floating point values and toggle buttons for enumerations. Sliders have editable min/max values too so you can set them to a sensible range for the value the slider controls.
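A slider with editable min/max is essentially a linear mapping between the slider's normalised position and the chosen value range. A minimal sketch, assuming a simple linear mapping (the editor may well do the same, but I'm not asserting that):

```python
# Map a normalised slider position t in [0, 1] onto a value range, and back.
def slider_to_value(t, lo, hi):
    return lo + t * (hi - lo)

def value_to_slider(v, lo, hi):
    return (v - lo) / (hi - lo)

# A quarter of the way along a 0..8 slider gives 2.0.
quarter = slider_to_value(0.25, 0.0, 8.0)
```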
In line with the live/interactive editing model adopted here, most of the user interface can be updated at run-time. This has made development of the UI much, much faster and allowed a level of polish that would otherwise have been left out. The editor UI is implemented in WPF, which supports dynamic loading/parsing of the backing XAML design data. Custom text editing panels can be expanded to allow live editing of most of the editor interface.
|Live editing of the editor UI|
The synthesis process can be monitored in a custom panel showing each of the synthesisers, with a timeline of the jobs each works on. For each job a breakdown of memory use and any issues encountered is displayed. This is needed to diagnose any technical modelling problems.
|Synthesis statistics and diagnostics|
Another panel allows exploration of the internal engine structure and any properties exposed by each part.
|Engine exploration; here showing view-port modes and settings|
There are a few ways to analyse the operation of the engine, the synthesiser, the procedures, and the tools, including GraphViz dumps of each synthesis run, the scene hierarchy, and the procedure capture analysis process, as well as in-editor visualisations of the detail refinement hierarchy, the editor tool stack, and the UI stack. All are helpful in working out why things aren't going as expected, and important for understanding how best to build procedures that work well with the engine.
Next time I will talk about procedure creation, editing, and viewing.
Sunday, 3 July 2016
Previously we covered how geometry is created. This time we will look at how it is managed and rendered.
It was mentioned in my last post that geometry is built into fixed-size buffers. This limits the amount of geometry that can be created by a single procedure. For a small object, or one of low detail (farther away), this may not be a problem, but if we are to build huge, detailed worlds then it most certainly is. To overcome this, a number of systems and techniques are used.
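The fixed-size buffer constraint can be sketched as follows; the capacity and triangle representation are invented for illustration, but the behaviour (geometry beyond the cap is simply not accepted) is the point:

```python
# Sketch of a fixed-capacity geometry buffer: once full, further triangles
# are rejected, which is why large models must be split into smaller parts.
class GeometryBuffer:
    def __init__(self, max_triangles):
        self.max_triangles = max_triangles
        self.triangles = []

    def add(self, tri):
        if len(self.triangles) >= self.max_triangles:
            return False  # buffer full: this geometry cannot be emitted
        self.triangles.append(tri)
        return True

# Tiny capacity to demonstrate the limit.
buf = GeometryBuffer(max_triangles=2)
results = [buf.add(t) for t in ("a", "b", "c")]
```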
Models are managed within a spatial octree, each node being responsible for any models that fit reasonably within its own bounds. Smaller models are managed by the smaller nodes deeper in the octree.
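The assignment of models to octree nodes can be sketched as pushing each model down to the deepest node whose bounds still fully contain it, so small models land in small, deep nodes. This is a toy version with invented names, not the engine's implementation:

```python
# Toy octree sketch. Boxes are axis-aligned: ((min x,y,z), (max x,y,z)).
# A model sinks to the deepest node that fully contains its bounds.
class OctreeNode:
    def __init__(self, lo, hi, depth=0, max_depth=4):
        self.lo, self.hi = lo, hi
        self.depth, self.max_depth = depth, max_depth
        self.models = []
        self.children = None

    def _contains(self, box):
        blo, bhi = box
        return all(self.lo[i] <= blo[i] and bhi[i] <= self.hi[i]
                   for i in range(3))

    def _split(self):
        mid = tuple((self.lo[i] + self.hi[i]) / 2 for i in range(3))
        self.children = []
        for octant in range(8):  # bit i of octant selects low/high half on axis i
            lo = tuple(mid[i] if (octant >> i) & 1 else self.lo[i] for i in range(3))
            hi = tuple(self.hi[i] if (octant >> i) & 1 else mid[i] for i in range(3))
            self.children.append(OctreeNode(lo, hi, self.depth + 1, self.max_depth))

    def insert(self, box, model):
        if self.depth < self.max_depth:
            if self.children is None:
                self._split()
            for child in self.children:
                if child._contains(box):
                    return child.insert(box, model)
        self.models.append(model)  # fits here but in no single child
        return self

root = OctreeNode((0, 0, 0), (16, 16, 16))
small_node = root.insert(((1, 1, 1), (2, 2, 2)), "pebble")    # sinks deep
big_node = root.insert(((2, 2, 2), (14, 14, 14)), "castle")   # straddles the middle, stays at root
```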
During the synthesis process, the sub-procedures used (and any bounding information that can be obtained from them) are analysed and, in certain cases, stored. The aim here is to capture a set of sub-procedures that fully represent the model built, but as smaller component parts. These parts can then be used to build more detailed versions of parts of the whole model, and can be managed by the smaller octree nodes that are more suitably sized. This effectively provides a way of re-synthesising successively smaller parts of any model as we need the extra detail deeper in the octree. This 'refinement' process is driven by proximity to the viewpoint, using the deeper, more detailed model parts in areas nearer the camera.
|Successive octree levels, and the geometry managed by each|
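The proximity-driven refinement described above can be sketched as choosing an octree depth (detail level) from the distance between the viewpoint and a node. The thresholds here are invented; the real engine presumably derives them from node size and on-screen error:

```python
import math

# Sketch of proximity-driven detail selection: closer nodes refine deeper.
# Each successive level kicks in at half the distance of the previous one.
def detail_level(camera, centre, max_depth=6, base_distance=64.0):
    d = math.dist(camera, centre)
    level = 0
    threshold = base_distance
    while d < threshold and level < max_depth:
        level += 1
        threshold /= 2.0
    return level

near = detail_level((0, 0, 0), (1, 0, 0))     # close: fully refined
far = detail_level((0, 0, 0), (100, 0, 0))    # distant: coarsest level
```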
Procedures do need to be built with this process in mind somewhat. There are certainly ways to help or hinder the process and prevent the system from operating at its best, but the tools provide feedback and diagnostics to help you optimise them. This is another area that I will dig into in more detail in another post.
The rendering engine for Apparance has always been fairly basic as most of the work has been in proving out the procedure synthesis and detail refinement techniques. All that the renderer needed to be able to do was render some coloured triangles with a couple of fixed light sources. This was implemented in DirectX 9 and based on a fairly simple cube rendering sample. Even with no materials, no texturing, and simple primitives I have been able to make quite a wide range of examples.
|Small sample of results achieved with basic renderer|
The renderer itself has been written to be fairly robust and flexible: it supports multiple viewports, cameras, and scenes, runs on its own thread, and handles window resizing and device loss properly.
Driven mainly by the need to start blending between meshes of different detail levels, I decided that I needed to add shader support and this is my current focus.
With the flexibility and power shader-based rendering brings, I will be able to implement an elegant blending system, as well as better lighting, and start experimenting with more realistic surface properties.
I decided that I should certainly allow run-time authoring of shaders, as this is an important premise of the Apparance tool philosophy. To do this, I also decided that the shader code should be procedurally constructed by the same systems the models are built with. Not only does this mean I can easily re-use shader functions, constructs, and pieces of code, it even allows parameterisation of the shader code itself. This should have all sorts of interesting effect-creation potential.
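Procedural shader construction in this spirit might look like the sketch below: reusable source snippets with parameter slots, assembled and substituted into a final shader string. The snippet text and parameter names are entirely invented, and this is not the actual Apparance mechanism:

```python
from string import Template

# A reusable, parameterised HLSL-style snippet; $lx/$ly/$lz are the slots.
LIGHT_FN = Template("""
float3 ApplyLight(float3 colour, float3 normal)
{
    return colour * saturate(dot(normal, float3($lx, $ly, $lz)));
}
""")

# Assemble a shader from snippets, filling in parameter values.
def build_shader(snippets, params):
    return "\n".join(t.substitute(params) for t in snippets)

# Hypothetical parameterisation: light direction straight down +Y.
source = build_shader([LIGHT_FN], {"lx": 0.0, "ly": 1.0, "lz": 0.0})
```

Because the snippets are data, the same re-use and parameterisation machinery that builds models could in principle build these strings too, which is the appeal described above.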
During the testing of DirectX 9 shaders I hit some nasty snags to do with background compilation of shaders during rendering, shader lifetime management, and finally a crash on ending and releasing shader resources that I couldn't resolve. Even using my simple training app I couldn't solve the issue, and it turns out that under Windows 10, debugging and diagnostics for DirectX 9 aren't supported, so no help there. My solution was to bite the bullet and upgrade the engine to DirectX 11, which represents a significant improvement in features and support, as well as being fully integrated into the OS and coming with significant debugging support. Unfortunately this did mean learning about all the differences and writing another learning app, but it seems like a good move in the long run: I was probably going to need it at some point anyway, and DirectX 11 has some nice improvements in the way you handle shaders that it will be good to get used to.
|New rendering and shader test app for DirectX 11|
Eventually I am going to need some fairly fancy rendering features to show off the models properly, such as multi-texturing, advanced light sources, high quality shadows, ambient occlusion, and maybe even global illumination. I am treating these as 'solved' problems and prioritising many other, more unique, features over them. I am also likely to need help with the harder graphics tech and should start to involve others in the project more closely, but that will depend on how much interest I can raise in the project and whether I can find funds to build a team around it in the future. We shall see...
I was going to describe my development setup a little here, but I think I'll leave it until a later post. Next time I'll talk about the editor and how it is used to develop procedures.