A (single) post-process effect is achieved by first rendering the scene to a framebuffer with appropriate attachments so that the result ends up in a texture, applying that texture to a full-screen quad, specifying an output framebuffer (usually the default framebuffer), and then executing a shader over the texture to render into the output framebuffer (the screen). To chain effects, you take the texture one effect rendered to and feed it as input to the next effect, using the same sequence as before. The final shader (the last link in the chain) renders to the screen (i.e. the default framebuffer). For performance reasons it's of course better to have an uber-shader, but the support's there. I've made it so adding post-processing shaders to the engine / editor is painless: you simply drop them into a special folder that the editor scans on start-up, which then generates a factory / menu item entry for each effect (the same approach I take for regular, non-post-process shaders in the engine / editor). Here's a video of a vignette -> pixelate -> chromatic aberration post-processing chain in action in the editor:
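The chaining itself can be sketched independently of the graphics API: each effect reads the previous effect's output texture and writes its own, and the last result is what gets drawn to the screen. A minimal sketch, with a texture reduced to a plain value and all the GL work (binding FBOs, drawing the quad, running the shader) elided; the names are illustrative, not the engine's actual API:

```cpp
#include <functional>
#include <vector>

// Stand-in for a texture handle; in the real engine each effect would bind
// an off-screen framebuffer, draw a full-screen quad and run its shader.
using Texture = int;
using Effect = std::function<Texture(Texture)>;

// Run a chain of post-process effects: the scene texture feeds the first
// effect, each effect's output feeds the next, and the final result is
// what gets rendered to the default framebuffer (the screen).
Texture runPostProcessChain(Texture scene, const std::vector<Effect>& chain) {
    Texture current = scene;
    for (const Effect& effect : chain)
        current = effect(current);
    return current; // final link in the chain: drawn to the screen
}
```

In the real renderer the effects alternate ("ping-pong") between two off-screen render targets so that effect N can safely read what effect N-1 wrote.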
As SimulationStarterKit is C++ / CMake based, I've adopted CTest as the unit testing framework (for the time being, although GoogleTest's benchmarking support looks good). This is especially helpful when you're trying to cover many platforms (for me, currently Linux, macOS and Windows). Every time I implement a feature in the engine I try to create a corresponding test first, both to help define what constitutes correct operation and sometimes as a design aid to discover what a usable API for the new feature might look like. The video below demonstrates how this looks in Visual Studio.
When rendering objects I want to:
- Minimise state switches (i.e. shader activation – glUseProgram())
- Render opaque objects front to back to reduce overdraw
- Render transparent objects back to front to get transparency
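All three goals can be met with a single sort key over the frame's draw calls: opaque items first, grouped by shader and ordered front to back; transparent items last, ordered back to front. A minimal sketch, assuming a hypothetical per-draw record (the field names are illustrative, not the engine's):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-draw-call record.
struct DrawItem {
    int shaderId;    // grouping key, to minimise glUseProgram() switches
    float viewDepth; // distance from the camera along the view axis
    bool transparent;
};

// Opaque items: grouped by shader, then front to back to reduce overdraw.
// Transparent items: last, back to front so blending composites correctly.
void sortDrawItems(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  if (a.transparent != b.transparent)
                      return !a.transparent;            // opaque first
                  if (a.transparent)
                      return a.viewDepth > b.viewDepth; // back to front
                  if (a.shaderId != b.shaderId)
                      return a.shaderId < b.shaderId;   // group by shader
                  return a.viewDepth < b.viewDepth;     // front to back
              });
}
```

Note the trade-off: within the opaque items, sorting by shader first means depth order is only maintained per shader group, which is usually a reasonable compromise between state switches and overdraw.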
In this modern age of dockerized apps, there are still times when it's desirable to bundle an app, its dependencies and its resources as a self-contained, installable bundle on Linux.
I wanted to start using the AWS SDK from a native app with the least amount of friction to simplify porting it to other platforms and thought about sharing the approach in case it’s of use to others.
A standard feature of all 3D scene editors is a set of object transform manipulation tools, i.e. translate, rotate and scale. Whilst there are articles that describe the behaviour, I couldn't find one covering the implementation details, so I thought I'd write up the algorithm I use. Continue reading Pixel perfect object movement
A C/C++ library that you want to cross compile for Android might come in a few forms:
- An Autotools project
- A CMake project
To get started, let's make a standalone Android toolchain (a toolchain being the compilers, libraries and headers needed to cross compile our source code for a specific target architecture and platform ABI). Continue reading Cross compiling C/C++ libraries for Android (updated)
Occlusion culling complements frustum culling by also discarding objects that are hidden behind other objects. Frustum culling is an optimisation technique that discards meshes sitting outside the viewing volume by testing each mesh against the six frustum planes. The culling can be accelerated with hierarchical spatial partitioning, whereby the scene is carved up into a tree, with each node representing a smaller region of space. A node contains all renderable objects enclosed in its region, allowing fast inclusion / rejection of those objects. If a node is found to intersect the viewing volume, the algorithm recurses down into the node's child nodes, and so on.
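The per-mesh test is cheap when done against a bounding sphere: the sphere is outside the frustum if its centre lies further than its radius behind any of the six planes. A minimal sketch, assuming planes stored as (normal, d) with normals pointing into the frustum (the struct names are illustrative):

```cpp
#include <array>

// A plane n·p + d = 0, with the normal pointing into the frustum.
struct Plane { float nx, ny, nz, d; };

// A mesh's bounding sphere.
struct Sphere { float x, y, z, radius; };

// Signed distance from a point to a plane (positive = inside half-space).
static float signedDistance(const Plane& p, float x, float y, float z) {
    return p.nx * x + p.ny * y + p.nz * z + p.d;
}

// Cull a sphere only if it lies entirely behind some frustum plane;
// spheres intersecting a plane are conservatively kept.
bool insideFrustum(const Sphere& s, const std::array<Plane, 6>& frustum) {
    for (const Plane& p : frustum)
        if (signedDistance(p, s.x, s.y, s.z) < -s.radius)
            return false; // fully behind this plane: cull
    return true;
}
```

The same test applied to a tree node's bounding volume is what drives the hierarchical version: a rejected node rejects everything it contains without testing the individual meshes.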
That’s great however more can still be done. Continue reading Occlusion culling
Added post-processing shader support to the engine. Post-processing shaders can be used to apply full-screen effects to scenes; examples include full-screen anti-aliasing, vignette effects and motion blur, to name but a few. With a post-processing shader, the scene is typically rendered to a texture bound to an off-screen framebuffer with appropriate depth and color attachments. The texture is then applied to a full-screen quad and rendered to the screen. This short clip demonstrates the post-processing shader.
The entity component system (ECS) architecture enables the easy addition of components (behaviours) and of systems, which allocate and update components of a specific type in a cache-friendly manner. Components can be attached to scene items to create more advanced scene elements and interactions. The engine supports the pluggable addition of systems and components: they can be defined within a shared library (aka a “plugin”) and exported for loading into the engine and editor. The editor displays all loaded systems in the Systems tab, and all loaded components can be attached to scene elements via the editor's components panel. As with scene items, the editor also reflects on system and component properties so they can be configured either manually (in the editor) or programmatically through script. Like scene elements, both system and component properties are serialised into the scene in a versionable format. Below is a short clip demonstrating the Physics plugin's Physics System and the RigidBodyObjectComponent it exports.
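The cache-friendly part comes from each system owning a contiguous array of its component type, so an update is a linear sweep over memory rather than a pointer chase through scattered objects. A minimal sketch of that layout, assuming hypothetical names (this is not the engine's plugin API):

```cpp
#include <cstdint>
#include <vector>

// An entity is just an id; component data lives in the system, not on the entity.
using Entity = std::uint32_t;

// A hypothetical component type a plugin might export.
struct VelocityComponent {
    Entity owner;
    float vx, vy, vz;        // velocity
    float x = 0, y = 0, z = 0; // integrated position
};

// The system allocates components of one type contiguously and updates them
// in a single linear pass, which is what makes the layout cache friendly.
class VelocitySystem {
public:
    void attach(Entity e, float vx, float vy, float vz) {
        components_.push_back({e, vx, vy, vz});
    }

    void update(float dt) {
        for (VelocityComponent& c : components_) {
            c.x += c.vx * dt;
            c.y += c.vy * dt;
            c.z += c.vz * dt;
        }
    }

    const std::vector<VelocityComponent>& components() const { return components_; }

private:
    std::vector<VelocityComponent> components_; // contiguous per-type storage
};
```

A plugin would export a factory for the system and register its component type so the editor can list it in the components panel and reflect on its properties.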