The Voxels library

A few days ago I finally released the first public alpha of my Voxels library project. I’ve been interested in volume rendering for real-time applications for quite some time and I believe it has a lot of future applications.

The library is the fruit of some of my work in that field but alas I work on it only in my extremely limited spare time, as I’m focused on my company – Coherent Labs. The main ideas behind the library have already been highlighted in my previous posts and in the talk I gave at Chaos Group’s CG2 seminar in October 2013. Preparing the library for release has been a much longer process than I expected, but the Windows version at least is finally downloadable from GitHub here, along with a very detailed sample here.

A hand-sculpted surface started as a ball. Voxels supports multiple materials and normal mapping

Some internal detail

The polygonization algorithm used is TransVoxel. I chose it because it is very fast, proven correct and easy to parallelize. Eric Lengyel’s entire Ph.D. thesis on the subject is very interesting and I recommend it to anyone interested not only in volume rendering but in real-time rendering in general. The algorithm addresses one of the most important issues with volume rendering techniques – the need for LOD. Currently the meshes produced by the library are very “organic” in their shape (due to the Marching Cubes roots of the algorithm) and are best suited for terrains and other earth-like surfaces.

My implementation produces correct meshes relatively fast, scales extremely well and tries to keep memory usage low. Currently I’m using simple RLE compression on the grid, which works surprisingly well, giving very fast run times and large compression ratios (30x+). Lengyel reports using it in his implementation too, with satisfactory results.
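To illustrate why RLE works so well here: large regions of the grid are entirely inside or entirely outside the surface, so the (clamped or quantized) distance values form long runs. Below is a minimal sketch of run-length encoding such a grid – the value type, the quantization and the run layout are assumptions for the example and not the library’s actual storage format.

#include <cstdint>
#include <utility>
#include <vector>

// Minimal run-length encoding of a quantized voxel grid.
// Each run stores (length, value); long uniform regions inside or
// outside the surface compress to a single pair.
std::vector<std::pair<std::uint32_t, std::int8_t>>
RleCompress(const std::vector<std::int8_t>& voxels)
{
    std::vector<std::pair<std::uint32_t, std::int8_t>> runs;
    for (std::size_t i = 0; i < voxels.size(); ) {
        std::size_t j = i + 1;
        while (j < voxels.size() && voxels[j] == voxels[i])
            ++j;
        runs.emplace_back(static_cast<std::uint32_t>(j - i), voxels[i]);
        i = j;
    }
    return runs;
}

std::vector<std::int8_t>
RleDecompress(const std::vector<std::pair<std::uint32_t, std::int8_t>>& runs)
{
    std::vector<std::int8_t> voxels;
    for (const auto& run : runs)
        voxels.insert(voxels.end(), run.first, run.second);
    return voxels;
}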

The polygonization process is multi-threaded and uses all the available machine cores. There is much room for API improvement here, to set limits on the resources used and possibly on the exact cores employed.

In the sample application I’ve also added an implementation of an octree LOD class that culls blocks of the mesh and, more importantly, decides which blocks to draw at which LOD level and when to draw the transitions (the transitions are meshes that fill the gaps between adjacent blocks of different LOD levels).
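For context, a common way to pick LOD levels for blocks is to recurse through the octree and stop subdividing once a node is far enough from the camera relative to its size. The sketch below shows only that idea – the node layout, the distance heuristic and the function names are illustrative and not the sample’s actual class. Where two selected neighbours end up at different LOD levels, the transition meshes mentioned above fill the cracks between them.

#include <cmath>
#include <vector>

struct Float3 { float x, y, z; };

struct OctreeNode {
    Float3 center;
    float halfSize;            // half the edge length of the block
    OctreeNode* children[8];   // all nullptr for leaves (finest LOD)
};

// Recursively select which blocks to draw. A node is drawn at its own
// LOD when it is far enough away relative to its size; otherwise we
// descend into its children, which correspond to a finer LOD level.
void SelectBlocks(const OctreeNode* node, const Float3& camera,
                  float lodFactor, std::vector<const OctreeNode*>& outBlocks)
{
    const float dx = node->center.x - camera.x;
    const float dy = node->center.y - camera.y;
    const float dz = node->center.z - camera.z;
    const float distance = std::sqrt(dx * dx + dy * dy + dz * dz);

    const bool isLeaf = node->children[0] == nullptr;
    if (isLeaf || distance > lodFactor * node->halfSize) {
        outBlocks.push_back(node); // draw this block at the node's LOD
        return;
    }
    for (const OctreeNode* child : node->children)
        SelectBlocks(child, camera, lodFactor, outBlocks);
}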

Future

I intend to continue the active development of the library. Currently the focus will be on adding support for Linux and maybe Mac OS X, and on improving the polygonization speed even further – especially when editing the voxel grid. The API also needs some polishing – I’m currently working on an integration of the library with an open-source engine and I see some issues with it.

I’d also like to update the sample, or create a new one, that draws the entire current state of the mesh in one draw call through some of the indirect rendering APIs.

Feedback is extremely appreciated. If you find the library interesting and would like to use it for something or have any suggestions/ideas – drop me a line.

Rendering experiments framework

Framework available on http://github.com/stoyannk/dx11-framework

I dedicate most of my professional time and 99% of my spare programming time to real-time graphics. Some years ago I started a small framework that I use on a daily basis for all the graphics experiments and demos I do.

Today I’m open-sourcing this framework in the hope that it might help somebody else fast-prototype something.

General notes

  • The framework is entirely geared towards fast prototyping of graphics techniques and algorithms. The current version was started at least 3-4 years ago and grew organically. Some parts are ancient and taken from previous file collections I used before for prototypes.
  • It is NOT a game engine, it is NOT a full graphics engine, it shouldn’t be used in production.
  • It doesn’t abstract anything related to graphics, to leave as much room for experimentation as possible.
  • It is Windows, DirectX 11 only.

Structure

The sole goal of the framework is to quickly prototype ideas and algorithms for real-time (usually game) rendering. The framework is currently divided into 4 static libraries:

AppCore

Contains a base application class that takes care of window creation, input forwarding, the message loop etc. It’s pretty minimal and graphics back-end agnostic.

AppGraphics

Contains the classes that initialize an Application with a renderer. Currently only a Dx11 rendering app can be created.

Rendering

Contains all the graphics stuff. Everything is tightly DX11 bound except the loaders/savers.

  • The DxRenderer class that holds the DX11 device and the immediate context. It creates the default back-buffer and depth-stencil buffer. It also contains a list of all the rendering routines that will execute in turn every frame.
  • DxRenderingRoutine is an abstract class that allows specifying rendering passes. Most of the prototypes I’ve created with the framework are in essence a bunch of inheritors of this class. The routines are registered with the DxRenderer and called in turn each frame (a rough sketch of this pattern follows after this list).
  • A Camera class for looking around
  • Mesh and Subset classes. A mesh is a Vertex Buffer and a collection of subsets. Every subset has an Index buffer, a material, an OOBB and an AABB.
  • Texture manager – a simple convenience class for loading, creating and managing textures with different formats.
  • Shader manager – a class for compiling and creating shaders from files. It also contains wrappers for easier creation of constant buffers.
  • Material shader manager – can inject into the shader information about the type of material that will be drawn. It inserts “static bool” variables into the shader code, depending on the properties of the selected material, that can be used for static branching later in the shader code. It also keeps a map from each material to its compiled shader so that shaders can easily be reused.
  • ScreenQuad – simple class for drawing a full-screen quad
  • FrustumCuller – culls subsets provided a view and projection matrix
  • DepthCuller – an unfinished class for software occlusion culling
  • SoftwareRasterizer – an unfinished and super experimental software rasterizer. I think it can currently just draw a couple of triangles.
  • OBJ file loader. Supports most of the OBJ and MTL format. I almost exclusively use the Sponza mesh for testing, so everything used in it is supported.
  • Raw file loader. “Raw” files are just memory dumps with vertex and index data plus information about the materials and textures used.
  • Raw file saver – saves a raw mesh.
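To give an idea of how these pieces fit together, here is a rough sketch of the routine-based frame loop mentioned above. The method names and signatures are illustrative and not the framework’s exact API.

#include <memory>
#include <vector>

// Illustrative interfaces only - the real DxRenderer/DxRenderingRoutine
// classes wrap the D3D11 device/context and have different signatures.
class DxRenderingRoutine {
public:
    virtual ~DxRenderingRoutine() = default;
    virtual void Render(float deltaTime) = 0; // one rendering pass
};

class DxRenderer {
public:
    void AddRoutine(std::unique_ptr<DxRenderingRoutine> routine) {
        m_Routines.push_back(std::move(routine));
    }
    // Called once per frame; executes the registered passes in order.
    void RenderFrame(float deltaTime) {
        for (auto& routine : m_Routines)
            routine->Render(deltaTime);
        // ...Present the back-buffer here...
    }
private:
    std::vector<std::unique_ptr<DxRenderingRoutine>> m_Routines;
};

// A prototype is then mostly a set of inheritors:
class GBufferRoutine : public DxRenderingRoutine {
public:
    void Render(float /*deltaTime*/) override { /* fill the G-buffer */ }
};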

Utilities

  • Logging – a multi-threaded, fast, easy to use logging system with custom severities, facility information and unlimited arguments for the log message.
  • Alignment – base classes and allocators for aligned types
  • Assertions – custom asserts
  • MathInlines – a couple of math functions to deal with a VS bug explained here.
  • Random number generator
  • STL allocators supporting a custom allocator
  • Some smart pointers for COM interfaces (all D3D objects)

That’s pretty much it. I also plan to open-source some of my demos/experiments shortly, so concrete usage of the framework will be shown there.

Usage & Dependencies

The framework depends on Boost (1.55+) and requires Visual Studio 2012+. To set up the library you need to configure the property sheets it uses to find its dependencies.

The property sheets are located in the “Utilities” folder and are named “PathProperty_template.props”, “Libraries_x86_template.props” and “Libraries_x64_template.props”. You must rename them to “PathProperty.props”, “Libraries_x86.props” and “Libraries_x64.props” and edit them so that they point to your local Boost build. “PathProperty.props” sets the include paths, while the other two set the link libraries for x86 and x64.

The repository also contains glm as a third-party dependency. The framework itself doesn’t use it, but it’s widely used in some of the demos I made, so it’s here.

A sample made with the framework – light-pre-pass, motion blur, FXAA

Future

I will continue to use the libs for my Dx11 experiments in the future, so I’ll update the framework when I need something else or find an issue. I don’t plan to abstract it enough to support OpenGL or OSes other than Windows.

That said, I already need another framework for easily prototyping OpenGL stuff and for Linux testing, so a new “OGL” version will probably be born when I have some more time to dedicate to it.

License

I’m licensing the framework under the “New BSD License”, so you can pretty much do whatever you want with it. If you happen to use something, credit is always appreciated.

Feedback welcome.

Framework available on http://github.com/stoyannk/dx11-framework

Practical Volume Rendering for realtime applications – presentation

In October 2013 I gave a talk on Chaos Group’s CG2 seminar.

I’m now sharing the English version of the slides. The talk briefly covers the current state of volume rendering and the potential I see in it. We are seeing more and more uses of volume rendering in games and many new applications are still emerging. The second part of the slides gives details about the TransVoxel algorithm by Eric Lengyel, my implementation of it and the lessons I learned from that. Some of the blog posts I’ve written are also based on this same research.

Soon I plan to publish a C++ library for Volume rendering for games that encompasses everything highlighted in the slides and many other improvements.

Overview of modern volume rendering techniques for games – Part II

This post has been published also in Coherent Labs’s blog – the company I co-founded and work for.

In this blog series I write about some modern volume rendering techniques for real-time applications and why I believe their importance will grow in the future.

If you have not read part one of the series, please check it out here – it is an introduction to the topic and an overview of volume rendering techniques.

In this second post of the multi-post series on volume rendering for games, I’ll explain the technical basics that most solutions share. Throughout the series I’ll concentrate on ‘realistic’, smooth rendering – not the ‘blocky’ one you can see in games like Minecraft.

Types of techniques

Volume rendering techniques can be divided into two main categories – direct and indirect.

Direct techniques produce a 2D image from the volume representation of the scene. Almost all modern algorithms use some variation of ray-casting and do their calculations on the GPU. You can read more on the subject in the papers/techniques “Efficient Sparse Voxel Octrees” and “Gigavoxels”.

Although direct techniques produce great looking images, they have some drawbacks that hinder their wide usage in games:

  1. Relatively high per-frame cost. The calculations rely heavily on compute shaders and, while modern GPUs handle them well, they are still primarily designed to draw triangles.

  2. Difficulty mixing with other meshes. For some parts of the virtual world we might still want to use regular triangle meshes. The tools for editing them are well-known to artists, and moving them to a voxel representation may be prohibitively difficult.

  3. Interop with other systems is difficult. Most physics systems for instance require triangle representations of the meshes.

Indirect techniques, on the other hand, generate an intermediate representation of the mesh – effectively, they create a triangle mesh from the volume. Moving to the more familiar triangle mesh has many benefits.

The polygonization (the transformation from voxels to triangles) can be done only once – on game/level load. After that the triangle mesh is rendered every frame. GPUs are designed to work well with triangles, so we expect better per-frame performance. We also don’t need radical changes to our engine or third-party libraries, because they probably work with triangles anyway.

In all the posts in this series I’ll talk about indirect volume rendering techniques – both the polygonization process and the way we can effectively use the created mesh and render it fast – even if it’s huge.

What is a voxel?

A voxel is the building block of our volume surface. The name ‘voxel’ comes from ‘volume element’ and is the 3D counterpart of the more familiar pixel. Every voxel has a position in 3D space and some properties attached to it. Although we can have any property we’d like, all the algorithms we’ll discuss require at least a scalar value that describes the surface. In games we are mostly interested in rendering the surface of an object and not its internals – this gives us some room for optimizations. More technically speaking we want to extract an isosurface from a scalar field (our voxels).

The set of voxels that will generate our mesh is usually box-shaped (a parallelepiped) and is called a ‘voxel grid’. If we use a voxel grid, the positions of the voxels in it are implicit.

The scalar we set in every voxel is usually the value of the distance function at the point in space where the voxel is located. The distance function has the form f(x, y, z) = d, where d is the shortest distance from the point (x, y, z) to the surface. If the voxel is “inside” the mesh, the value is negative.

If you imagine a ball as the mesh in our voxel grid, all voxels “inside” the ball will have negative values, all voxels outside the ball positive ones, and all voxels that are exactly on the surface will have a value of 0.
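To make the convention concrete, here is a small sketch that fills a voxel grid with the signed distance to a sphere; the grid layout, types and function name are illustrative, not any library’s API.

#include <cmath>
#include <vector>

// Fill a dim x dim x dim voxel grid with the signed distance to a sphere.
// Negative values are inside the surface, positive outside, zero exactly on it.
std::vector<float> MakeSphereGrid(int dim, float voxelSize,
                                  float cx, float cy, float cz, float radius)
{
    std::vector<float> grid(static_cast<std::size_t>(dim) * dim * dim);
    for (int z = 0; z < dim; ++z)
    for (int y = 0; y < dim; ++y)
    for (int x = 0; x < dim; ++x) {
        const float px = x * voxelSize - cx;
        const float py = y * voxelSize - cy;
        const float pz = z * voxelSize - cz;
        const float distanceToCenter = std::sqrt(px * px + py * py + pz * pz);
        grid[(z * dim + y) * dim + x] = distanceToCenter - radius;
    }
    return grid;
}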

Cube polygonized with a MC-based algorithm – notice the loss of detail on the edge

Marching cubes

The simplest and most widely known polygonization algorithm is called ‘Marching cubes’. There are many techniques that give better results, but its simplicity and elegance are still well worth looking at. Marching cubes is also the base of many more advanced algorithms and gives us a frame of reference in which we can more easily compare them.

The main idea is to take 8 voxels at a time that form the eight corners of an imaginary cube. We work with each cube independently from all others and generate triangles in it – hence we “march” on the grid.

To decide what exactly we have to generate, we use just the signs of the voxels on the corners and form one of 256 cases (there are 2^8 possible cases). A precomputed table of those cases tells us which vertices to generate, where and how to combine them in triangles.

The vertices are always generated on the edges of the cube and their exact positions are computed by interpolating the values of the voxels at the endpoints of each edge.
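To make the mechanics concrete, the sketch below computes the 8-bit case index from the corner signs and places a vertex on one edge by interpolation. The precomputed triangle tables are omitted – they are widely available, for example in Paul Bourke’s article listed in the references at the end of this post.

#include <cstdint>

struct Float3 { float x, y, z; };

// Build the 8-bit case index: one bit per corner, set when the corner
// is inside the surface (negative distance value).
std::uint8_t CaseIndex(const float corner[8])
{
    std::uint8_t index = 0;
    for (int i = 0; i < 8; ++i)
        if (corner[i] < 0.0f)
            index |= static_cast<std::uint8_t>(1u << i);
    return index; // 0 and 255 mean fully outside/inside - no triangles generated
}

// Place a vertex on the edge between corners p0 and p1 where the scalar
// field crosses zero, by linear interpolation of the two corner values.
Float3 EdgeVertex(const Float3& p0, const Float3& p1, float v0, float v1)
{
    const float t = v0 / (v0 - v1); // v0 and v1 have opposite signs on a crossed edge
    return { p0.x + t * (p1.x - p0.x),
             p0.y + t * (p1.y - p0.y),
             p0.z + t * (p1.z - p0.z) };
}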

I’ll not go into the details of the implementation – it is pretty simple and widely available on the Internet – but I want to underline some points that are valid for most MC-based algorithms.

  1. The algorithm expects a smooth surface. Vertices are never created inside a cube, only on its edges. If a sharp feature happens to fall inside a cube (very likely), it will be smoothed out. This makes the algorithm good for meshes with more organic forms – like terrain – but unsuitable for surfaces with sharp edges like buildings. To reproduce a sufficiently sharp feature you’d need a very high resolution voxel grid, which is usually unfeasible.

  2. The algorithm is fast. The very difficult calculation of what triangles should be generated in which case is pre-computed in a table. The operations on each cube itself are very simple.

  3. The algorithm is easily parallelizable. Each cube is independent of the others and can be calculated in parallel. The algorithm is in the “embarrassingly parallel” family.

After marching all the cubes, the mesh is composed of all the generated triangles.

Marching cubes tends to generate many tiny triangles. This can quickly become a problem if we have large meshes.

If you plan to use it in production, beware that it doesn’t always produce ‘watertight’ meshes – there are configurations that will generate holes. This is pretty unpleasant and is fixed by later algorithms.

In the next post in the series I’ll discuss the requirements of a good volume rendering implementation for a game in terms of polygonization speed and rendering performance, and I’ll look into ways to achieve them with more advanced techniques.

References:

Cyril Crassin, Fabrice Neyret, Sylvain Lefebvre, Elmar Eisemann. 2009. GigaVoxels : Ray-Guided Streaming for Efficient and Detailed Voxel Rendering.

Samuli Laine, Tero Karras. 2010. Efficient Sparse Voxel Octrees.

Paul Bourke. 1994. Polygonising a scalar field.

Marching cubes on Wikipedia.

PS:

I gave a talk entitled “Practical Volume Rendering for real-time applications” at Chaos Group‘s annual CG2 conference in Sofia.

Available here in Bulgarian.

Overview of modern volume rendering techniques for games – Part I

This post has been published also in Coherent Labs’s blog – the company I co-founded and work for.

A couple of months ago Sony revealed their upcoming MMO title “EverQuest Next”. What made me really excited about it was their decision to base their world on a volume representation. This enables them to show amazing videos like this one. I’ve been very interested in volume rendering for a long time and in this blog series I’d like to point out the techniques that are most suitable for games today and in the near future.

Over the series I’ll explain the details of some of the algorithms as well as their practical implementations.

This first post introduces the concept of volume rendering and its greatest benefits for games.

Volume rendering is a well known family of algorithms that project a set of 3D samples onto a 2D image. It is used extensively in a wide range of fields such as medical imaging (MRI and CT visualization), industry, biology, geophysics etc. Its usage in games however is relatively modest, with some interesting use cases in games like Delta Force, Outcast, C&C Tiberian Sun and others. The usage of volume rendering faded until recently, when we saw an increase in its popularity and a sort of “rediscovery”.

A voxel-based scene with complex geometry

In games we are usually interested just in the surface of a mesh – its internal composition is seldom of interest, in contrast to medical applications. Relatively few applications have selected volume rendering in place of the usual polygon-based mesh representations. Volumes however have two characteristics that are becoming increasingly important for modern games – destructibility and procedural generation.

Games like Minecraft have shown that players are very much engaged by the possibility of creating their own worlds and shaping them the way they want. On the other hand, titles like Red Faction place an emphasis on the destruction of the surrounding environment. Both these games, although very different, have essentially the same technology requirement.

Destructibility (and of course constructability) is a property that game designers are actively seeking.

One way to achieve mesh modification is to apply it directly to traditional polygonal models. This has proved to be quite complicated. Middleware solutions like NVIDIA Apex solve polygon mesh destructibility, but usually still require input from a designer, and the construction part remains largely unsolved.

Minecraft unleashed the creativity of users

Volume rendering can help a lot here. Representing the mesh as a 3D grid of volume elements (voxels) is much more natural than a collection of triangles. The volume already contains the important information about the shape of the object, and modifying it is close to what happens in the real world: we either add or subtract volumes from one another. Many artists already work in a similar way in tools like ZBrush.

Voxels themselves can contain any data we like, but usually they define a distance field – that means every voxel encodes a value indicating how far we are from the surface of the mesh. Material information is also embedded in the voxel. With such a definition, constructive solid geometry (CSG) operations on voxel grids become trivial: we can freely add or subtract any volume we’d like from our mesh. This brings a tremendous amount of flexibility to the modelling process.
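For example, with the signed distance convention used earlier (negative inside), the basic CSG operations reduce to a per-voxel min/max. A minimal sketch over two grids of equal dimensions:

#include <algorithm>
#include <vector>

// CSG on two signed-distance voxel grids of the same dimensions,
// using the convention that negative values are inside the surface.
void CsgUnion(std::vector<float>& dst, const std::vector<float>& tool)
{
    for (std::size_t i = 0; i < dst.size(); ++i)
        dst[i] = std::min(dst[i], tool[i]);   // keep whatever is "more inside"
}

void CsgSubtract(std::vector<float>& dst, const std::vector<float>& tool)
{
    for (std::size_t i = 0; i < dst.size(); ++i)
        dst[i] = std::max(dst[i], -tool[i]);  // carve the tool volume out
}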

Procedural generation is another important feature with many advantages. First and foremost, it can save a lot of human effort and time. Level designers can generate a terrain procedurally and then just fine-tune it, instead of having to start from absolute zero and work out every tedious detail. This saving is especially relevant when very large environments have to be created – as in MMORPG games. With the new generation of consoles having more memory and power, players will demand much more and better content. Only with procedural generation of content will the creators of virtual worlds be able to achieve the variety needed for future games.

In short, procedural generation means that we create the mesh from a mathematical function that has relatively few input parameters. No sculpting is required from an artist, at least for the first raw version of the model.
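As a toy example of such a function, take a ground plane and perturb its height with a few octaves of noise – everything below the perturbed plane is inside the surface. The sketch below is illustrative only; the noise function is a cheap stand-in for a real Perlin/simplex implementation and the parameters are made up.

#include <cmath>

// Cheap stand-in for a real gradient-noise implementation (Perlin, simplex...),
// here only so the sketch is self-contained.
static float FakeNoise3D(float x, float y, float z)
{
    return std::sin(x * 12.9898f + y * 78.233f + z * 37.719f);
}

// Signed-distance-like density for a simple procedural terrain:
// negative below the surface, positive above. groundHeight and the
// octave parameters are the "relatively few input parameters".
float TerrainDensity(float x, float y, float z, float groundHeight)
{
    float height = groundHeight;
    float amplitude = 8.0f;
    float frequency = 0.01f;
    for (int octave = 0; octave < 4; ++octave) {
        height += amplitude * FakeNoise3D(x * frequency, 0.0f, z * frequency);
        amplitude *= 0.5f;   // each octave adds finer, weaker detail
        frequency *= 2.0f;
    }
    return y - height; // a point above the terrain yields a positive (outside) value
}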

Developers can also achieve high compression ratios and save a lot of download bandwidth and disk space by using procedural content generation. The surface is represented implicitly, with functions and coefficients, instead of heightmaps or 3D voxel grids (two popular methods for surface representation used in games). We already see huge savings from procedurally generated textures – why shouldn’t the same apply to 3D meshes?

The use of volume rendering is not restricted to meshes – today we see other uses too. Some of them include:

Global illumination (see the great work in Unreal Engine 4)

Fluid simulation

GPGPU ray-marching for visual effects

In the next posts in the series I’ll give a list and details on modern volume rendering algorithms that I believe have the greatest potential to be used in current and near-future games.

PS:

I gave a talk entitled “Practical Volume Rendering for real-time applications” at Chaos Group‘s annual CG2 conference in Sofia.

Available here in Bulgarian.

Modern Game UI with the Oculus Rift – Part II

This post has been published also in Coherent Labs’s blog – the company I co-founded and work for.

In this second part of the series I’d like to share some thoughts about how to create, integrate and organize UI in Virtual Reality (VR), and more specifically on the Oculus Rift.
The conclusions I reached are very similar to the ones the Valve team reached in their work porting Team Fortress 2 to the device. I’ll also mention some ideas I will try in the future but haven’t had enough time to complete.

UI in Virtual reality

In traditional applications the UI can be divided conceptually into two types – UI elements residing in the 3D world and elements that get composed directly on the screen and hence ‘live’ in the 2D plane of the display. Recently the distinction between these types of interfaces has been diminishing – almost all modern game UIs have 3D or pseudo-3D elements in their HUDs and menus. In VR the difference vanishes, as we’ll see later in the post.

Some UI elements rendered with Coherent UI in a traditional non-VR application

3D world UI elements usually need no change when transitioning a game to VR. They are already in the game world so no special care has to be taken. The overlay UI however will need significant modifications in its rendering, and probably also in the elements themselves, to cope with the specifics of the Rift.

If you leave the same 2D overlay for the UI in VR you’ll get something like this:

client_wrong_overlay

This result is obviously wrong; most of the elements won’t be visible at all because they fall outside the field of view of the player. The central region of each eye in the Rift is where the player sees most clearly, everything else is periphery – the same applies to your eyes in the ‘real’ world.
If we composite the HUD before the distortion we’ll get this:

client_wrong_hud

Stereo UI

I did the same thing the Valve team did in TF 2 – I drew the UI on a plane that always stays in front of the player.

The correct result

TF 2 is an FPS game where you have a body that holds a weapon while you can freely rotate your head. Valve made a very clever decision after noticing that players were forgetting which way their bodies were facing after looking around with their virtual heads: they always put the UI in front of the body, not the head. In this way the player always has a point of reference and can return to facing forward with respect to their body.
In the Coherent UI demo we have a free-flying camera, so I locked the UI plane to always face the camera. This produced the very cool effect of feeling like the pilot of a fighter jet. The 3D effect that can be seen on some of the HUD elements becomes even more convincing in VR and adds a ‘digital helmet’ feeling.
Notice in the screenshot how small and concentrated the UI becomes – this is the position I personally felt most comfortable with. It is unpleasant if you need to move your eyes inside the Oculus to look at a gauge or indicator that is too far from your focus point. The UI is semi-transparent so it doesn’t get in the way, with the exception of the big element in the upper right corner with the CU logo. It is too big.
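For reference, locking a quad to the camera can be as simple as rebuilding its world matrix every frame from the camera’s orientation plus a fixed offset along the view direction. The sketch below uses DirectXMath with made-up parameter names and assumes a row-major, left-handed setup – it is an illustration, not the demo’s actual code.

#include <DirectXMath.h>
using namespace DirectX;

// Build a world matrix for a UI quad that always faces the camera and
// floats a fixed distance in front of it. cameraWorld is the camera's
// world transform (the inverse of the view matrix); uiDistance and
// uiScale are tuning parameters.
XMMATRIX UiPlaneWorldMatrix(FXMMATRIX cameraWorld,
                            float uiDistance, float uiScale)
{
    // The camera's forward axis and position in world space
    // (rows 2 and 3 in this row-major convention).
    const XMVECTOR forward = cameraWorld.r[2];
    const XMVECTOR position =
        XMVectorAdd(cameraWorld.r[3], XMVectorScale(forward, uiDistance));

    // Reuse the camera's rotation so the quad stays perpendicular to the
    // view direction, then scale it and move it in front of the camera.
    XMMATRIX world = cameraWorld;
    world.r[3] = XMVectorSetW(position, 1.0f);
    return XMMatrixMultiply(XMMatrixScaling(uiScale, uiScale, uiScale), world);
}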

UI design considerations for VR

This brings me to the point that having a UI that is correct in VR is not enough – it must be tailored for it. What looks very good and usable in non-VR will most probably look very different in VR.
First, notice that the aspect ratio of the HUD is different – in non-VR it is very wide and matches the aspect of the screen. In the Rift however it needs to be replicated for each eye, and each eye by itself has a much narrower aspect. This means that some elements that were very far apart will get closer and might even overlap.
UI elements close to and in front of the player also mean that they shouldn’t get in the way of gameplay. I think transparency mostly solves this, because the HUD is still in the semi-peripheral region of the player’s sight.
The resolution of the current generation of the Rift dev. kit is very low and makes reading text a real pain. The UI should be created with that in mind – numerical and textual information should be kept to a minimum and replaced with more pictographic and color-coded elements.

In his TF 2 presentation Joe Ludwig argues that UI should be avoided in VR, but I think it actually becomes even more interesting and compelling. The jet-pilot-helmet feeling I got after adding the HUD felt a lot more immersive to me than the 3D world alone.

I decided to also modify the sample menu scene with which the demo starts. The normal scene is just the logo of Coherent UI with some 3D animated buttons and a cool background. It is nice in 2D but looks somewhat dull in VR.

client_menu_novr

I did a very simple modification – I just removed the background and placed the menu in an empty 3D world with just a skybox. This allows the player to look around even in the menu scene, and the game responds to head movement, immediately immersing the player in VR.

client_menu_vr

Future

There are some ideas that I plan to try out but didn’t have the time for while doing this integration.
The most interesting plan I have is to try to simulate true 3D stereo elements in the HUD. Currently the gadgets are transformed in 3D in the space of the UI itself and then the resulting image is splatted on a plane in the game world. As Coherent UI supports all CSS3 transformations, it is possible to pass all the relevant matrices to the UI itself to draw the elements as if they were correctly positioned in the 3D world, and then just composite the whole image on the screen.
As far as the Rift goes, the biggest challenge in the context of UI is still the resolution. It is very difficult and tiring to read text. This however makes creating VR-aware UI even more interesting, as new ways of expressing content must be found and employed.

UI in VR is a very new topic (at least for games) with many challenges still in the way and I will continue to share our experiments and ideas in the field.

Modern Game UI with the Oculus Rift – Part I

This post has been published also in Coherent Labs’s blog – the company I co-founded and work for.

In this series I would like to share the impressions I had while porting the Coherent UI demo graphics framework and demo interfaces to the Oculus Rift dev. kit. In the first part I’ll cover the rendering integration of the Rift, while in the following posts I’ll talk about the strictly UI-related issues in virtual reality.
Many of the problems I’ll cover have been tackled by Valve in their port of Team Fortress 2 to the Rift, as well as by their R&D team. Extremely valuable resources on the current state of VR that helped me a lot while doing the port are given below in the references section.

The Rift

The Oculus Rift is a device that combines a head-mounted display with an array of sensors that track the orientation of the head. A good explanation of how to perform the integration, and of the details of the device, can be found in the Rift SDK. It can be freely downloaded after registration and I encourage anybody who is interested in VR to take a look at it even if you don’t have a Rift device yet. The dev. kit is still a ‘beta’ version of the final product. The major issues I currently see are the somewhat low resolution of the display and some yaw drift that really makes a developer’s life tough. Oculus are working on these and I’m positive that the final consumer product will be amazing. Other than that, the experience of playing and developing with the Rift is a great one and I’d encourage anyone who hasn’t ordered their kit yet to hurry up and do it.

Oculus VR development kit case headset

Porting the demo client (application)

At Coherent Labs we have a small 3D application we use for some of our demos. It is based on DirectX 11 and an in-house framework designed for rapid prototyping of rendering tasks. It is not a complete engine but has a fairly good rendering backbone and a complete integration with Coherent UI – a lot of the functionality is exposed to JavaScript and you can create Coherent UI views as well as animate and record the camera movement for demos through the script in the views themselves.
The task at hand was to add support for VR, implemented via the Oculus Rift.
I’ll give a brief summary of what the process looked like for our framework. The Oculus SDK is very good at pointing out what should be done and the process is not very complicated either. If the graphics pipeline of the engine is written with VR in mind it is actually trivial. Ours was not, so modifications were necessary.

From this..

.. to this

The pipeline of the framework we use is based on a list of render phases that get executed in order every frame. We use the light pre-pass (LPP) technique and have several post-processing effects.
In order to support stereo rendering, some phases must be done twice – once for the left eye and once for the right. When drawing for the eyes we simply draw into the left and right halves of the render target for each eye respectively, with different view and projection matrices.

The non-VR events look like this:

1) Set View & Projection matrices for the frame
2) Shadow maps building
3) Clear render targets
4) Fill GBuffer
5) Fill lights buffer (draw lights)
6) Resolve lighting (re-draw geometry)
7) Draw UI Views in the world
8) Motion blur
9) HDR to LDR (just some glow)
10) FXAA
11) Draw HUD
12) Present

Of those, steps 4-7 must be done for each eye. LPP can be quite costly in terms of draw calls and vertex processing, and even more so in the VR case. Our scenes are simple, so we didn’t have any problems, but that’s something to be aware of.
I removed the motion blur outright because it really makes me sick in VR, and the Oculus documentation also points out that motion blur should be avoided. I also removed the HUD drawing, as it is handled differently than with a full-screen quad, as I’ll explain in the next posts.

The VR pipeline looks like:

1) Set central View & Projection matrices for the frame
2) Shadow maps building
3) Clear render targets
4) Set left eye View & Projection
4.1) Fill GBuffer
4.2) Fill lights buffer (draw lights)
4.3) Resolve lighting (re-draw geometry)
4.4) Draw UI Views in the world
4.5) Draw HUD
5) Set right eye View & Projection
5.1) Fill GBuffer
5.2) Fill lights buffer (draw lights)
5.3) Resolve lighting (re-draw geometry)
5.4) Draw UI Views in the world
5.5) Draw HUD
6) HDR to LDR
7) FXAA
8) VR Distortion
9) Present

Conceptually it is not that much different or complicated, but the post-effects in particular have to be modified to work correctly.

As I said, I draw the left and right eyes into the same render target.
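In practice that means setting, per eye, a half-width viewport (and the corresponding view matrix) before running the per-eye steps above. A rough D3D11 sketch of the viewport part – the function and parameter names are illustrative, not the framework’s API:

#include <d3d11.h>

// Set up the viewport for one eye: each eye renders into its own half
// of the shared render target. rtWidth/rtHeight are the (possibly
// scaled-up) render target dimensions.
void SetEyeViewport(ID3D11DeviceContext* context,
                    bool leftEye, float rtWidth, float rtHeight)
{
    D3D11_VIEWPORT viewport = {};
    viewport.TopLeftX = leftEye ? 0.0f : rtWidth * 0.5f;
    viewport.TopLeftY = 0.0f;
    viewport.Width = rtWidth * 0.5f;
    viewport.Height = rtHeight;
    viewport.MinDepth = 0.0f;
    viewport.MaxDepth = 1.0f;
    context->RSSetViewports(1, &viewport);
}

The per-eye view matrix adjustment is sketched further down in this post.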

The render-target before the distortion

Render routines modifications

The shared render target has several implications for the post-processing routines. The HDR to LDR routine in our application does a glow effect by smearing bright spots in the frame into a low-res texture that gets re-composed onto the main render target. This means that some smearing might cross the edge between the eyes and ‘bleed’ onto the other one. Imagine a bright light on the right side of the left eye (near the middle of the image) – if no precautions are taken, the halo of the light will cross into the right eye and appear on its left side. This is noticeable and unpleasant, looking like some kind of dust on the eye.
Post-process anti-aliasing algorithms might also suffer, as they usually look for edges as discontinuities in the image and will identify one where the left and right images meet. That edge is perfectly vertical however, so no changes should be needed.

The VR Distortion routine is the one that creates the interesting ‘goggle’ effect seen in screenshots and videos for the Rift. The lenses of the HMD introduce a large pincushion distortion that has to be compensated for in software with a barrel distortion. The shader performing this is provided in the Oculus SDK and can be used verbatim. It also modifies the colors of the image slightly, because when viewing images through lenses the colors get distorted by a phenomenon called “chromatic aberration”, and the shader compensates for that too.

An important point mentioned in the Oculus documentation is that you should use a bigger render target to draw the image and have the shader distort it down to the final size of the back-buffer (1280×800 on the current model of the Rift). If you use the same size, the image is correct but the FOV is limited. This is extremely important. At least for me, having the image synthesized from a same-size texture was very sickness-inducing, as I was seeing the ‘end’ of the image. The coefficient to scale the render target by is provided by the StereoConfig::GetDistortionScale() method in the Rift library. In my implementation, steps 4-8 are actually performed on the bigger render target.

The StereoConfig helper class is provided in the SDK and is very convenient. The SDK works with a right-handed coordinate system while we use a left-handed one – this requires attention when taking the orientation of the sensor (the head) from the device, and when directly using the projection and view adjustment matrices provided by the helper classes. I decided to just calculate them myself from the provided parameters – the required projection matrix is documented in the SDK and the view adjustment is trivial, because it only involves moving each eye half the inter-eye distance left or right.
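As a sketch of that last point, using DirectXMath: offset the central view matrix by half the inter-eye distance along the view-space X axis for each eye. The sign convention depends on handedness and on how you build your view matrix, so treat this as illustrative rather than as the SDK’s prescribed math.

#include <DirectXMath.h>
using namespace DirectX;

// Offset the central ("head") view matrix for one eye. Shifting the world
// by +ipd/2 along the view-space X axis is equivalent to moving the camera
// to the left, i.e. it yields the left eye; the right eye uses -ipd/2.
// Verify the signs against your own coordinate conventions.
XMMATRIX EyeViewMatrix(FXMMATRIX centerView,
                       float interpupillaryDistance, bool leftEye)
{
    const float halfIpd = interpupillaryDistance * 0.5f;
    const XMMATRIX eyeOffset =
        XMMatrixTranslation(leftEye ? halfIpd : -halfIpd, 0.0f, 0.0f);
    return XMMatrixMultiply(centerView, eyeOffset); // view first, then shift in view space
}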

One small detail that kept me wondering for an hour is that if you plug the distortion parameters directly into the shader for both eyes (as given by StereoConfig::GetDistortionConfig()), the image will not be symmetric – the outline of the right eye will look like the left one. For the right eye you have to negate DistortionConfig::XCenterOffset. This is done in the Oculus demo, but not very prominently, and while there usually are parameter getters for both eyes, there is just one for the DistortionConfig, which leads you to think it might be the same for both eyes. If you analyze the code in the shader carefully you notice the discrepancy, but the API successfully puzzled me for some time.

In the next posts I’ll specifically talk about UI in the Rift.

References

Michael Abrash’s blog
Lessons learned porting Team Fortress 2 to Virtual Reality
John Carmack – Latency Mitigation Strategies
Oculus Dev. Center