Added componentised (as in Entity Component System based) navigation mesh support via a new Navigation plugin (available for download here) that uses Mikko Mononen’s Recast library. The key components are TiledNavMeshComponent, NavMeshCrowdComponent, NavMeshObstacle and NavMeshAgent. As the video shows, you can now import models into the editor and then generate navigable areas for agents to move around. Within the editor you can then wire up input devices to scripts that perform ray -> mesh intersection tests to move your agents. The entire scene (models, nav mesh, agents, scripts and input handling) can then be saved to file or published as a code project from within the editor. A typical script (saved into the scene file) is included below. You would typically trigger such a script in response to an event, e.g. an input device event (by adding a Mouse input device to the scene and wiring its onMouseButtonDown event to the script’s execute method) or an object entering a collision volume (see the collision volume support in the video log).
-- find the agent, nav mesh and crowd
scene = sdk:GetScene()
agent = scene:Find('Agent')
navMesh = scene:Find('NavMesh')
agentComponent = ext_NavigationScript.GetComponent_Navigation_NavMeshAgentComponent(agent)
crowdComponent = ext_NavigationScript.GetComponent_Navigation_NavMeshCrowdComponent(navMesh)
navMeshComponent = ext_NavigationScript.GetComponent_Navigation_TiledNavMeshComponent(navMesh)
-- unproject the current mouse pos at both near / far clip planes
mousePos = sdk:GetMousePosition()
pickRayNear = fireflyscript.vec3()
pickRayFar = fireflyscript.vec3()
sdk:UnprojectNearFar(mousePos:GetX(), mousePos:GetY(), pickRayNear, pickRayFar)
-- obtain navigation mesh transform
navMesh = fireflyscript.CastSceneItemToPickableSceneItem(navMesh)
worldToLocal = navMesh:GetWorldMatrixInverse()
localToWorld = navMesh:GetWorldMatrix()
-- test pick ray against nav mesh geometry
geom = navMeshComponent:GetMesh()
localPickRayNear = worldToLocal * fireflyscript.vec4(pickRayNear, 1)
localPickRayFar = worldToLocal * fireflyscript.vec4(pickRayFar, 1)
localIntersectionPoint = fireflyscript.vec3()
-- only retarget the crowd if the pick ray actually hits the nav mesh
-- (this assumes LineTest returns true on intersection)
if geom:LineTest(fireflyscript.vec3(localPickRayNear), fireflyscript.vec3(localPickRayFar), localIntersectionPoint) then
    -- move the agent(s): convert the hit point back to world space
    crowdComponent:SetTargetPoint(fireflyscript.vec3(localToWorld * fireflyscript.vec4(localIntersectionPoint, 1)))
end
A (single) post-process effect is achieved by first rendering the scene to a framebuffer with appropriate attachments, outputting the result to a texture; the texture is then applied to a full-screen quad, an output framebuffer is specified (usually the default framebuffer), and a shader is executed over the texture to render into that output framebuffer (the screen). To chain effects you take the output framebuffer and feed it as the input to another post-process effect, using the same sequence as before. The output of the final shader (the last link in the chain) then goes to the screen (i.e. the main framebuffer). For performance reasons it’s of course better to fold the effects into an uber-shader, but the support’s there. I’ve made adding post-processing shaders to the engine / editor painless: you simply drop them into a special folder that the editor scans on start-up, which then generates a factory / menu item entry for each effect (the same approach I take for regular non-post-process shaders in the engine / editor). Here’s a video of a vignette -> pixelate -> chromatic aberration post-processing chain in action in the editor:
Added post-processing shader support to the engine. Post-processing shaders apply full-screen effects to scenes; examples include full-screen anti-aliasing, vignette effects and motion blur, to name but a few. With a post-processing shader the scene is typically rendered to a texture bound to an off-screen framebuffer with appropriate depth and color attachments. The texture is then applied to a full-screen quad and rendered to the screen. This short clip demonstrates the post-processing shader.
The entity component system architecture enables the easy addition of components (behaviours) and systems (which allocate and update components of a specific type in a cache-friendly manner). Components can be attached to scene items to create more advanced scene elements and interactions. The engine supports the pluggable addition of systems and components: they can be defined within a shared library (aka a “plugin”) and exported for loading into the engine and editor. The editor displays all loaded systems in the Systems tab, and all loaded components can be attached to scene elements via the editor’s components panel. As with scene items, the editor also reflects on system and component properties so they can be configured either manually (in the editor) or programmatically through script. Like scene elements, both system and component properties are serialised into the scene in a versionable format. Below is a short clip demonstrating the Physics plugin’s Physics System and the RigidBodyObjectComponent exported by the plugin.
The engine and editor support CSG courtesy of the CSG plugin. This short clip demonstrates using the “Union” operation to union together all static meshes in a scene, from which a navigation mesh is then generated. The CSG plugin is actually two plugins: one that plugs into the editor, adding the Union, Intersection and Difference tools, and its engine counterpart that plugs into the engine, adding the implementation of the CSG functions.