Geo-Baking in VR

One of the most common techniques used when preparing a model for GPU rendering is to essentially “paint” smaller geometric features into the model’s textures. Surface features such as bricks, rivets, and bullet holes are common examples, but even larger features such as vines, pipes, windows, and doors are sometimes baked in to spare the precious GPU cycles that would be required to render those features as actual geometry. Using normal maps to sell the dimensionality of the otherwise flat decals can help a lot – at least when the images are viewed on a monoscopic flat-screen display. But when viewed in VR on a stereoscopic head-mounted display (HMD), this technique can fail badly.

When your eyes converge on a nearby object, your brain receives critical spatial information. The first piece of that information arrives as a pair of offset images – one from the right eye and one from the left. The slight differences between those two images, combined with the degree of convergence of the eyes (how “crossed” they are as the images are focused onto the retinas), give you an intuitive understanding of the 3D-ness of your environment.

Geo-baking textures on a 3D model works without stereoscopic vision.

In the absence of stereoscopic vision, you have no way of determining whether this surface has recessed bullet holes punched through the metal, or has simply had photorealistic bullet-hole decals applied.

When a game is displayed on a standard PC monitor (or any other single screen), the brain no longer has access to a pair of stereoscopic images. Regardless of whether you’re looking at distant or nearby objects on screen, your eyes don’t converge any differently, because you’re really focusing on screen pixels, not on physical objects in space. Game devs have long taken advantage of this spatial blindness by removing render hogs like detailed geometry and replacing them with lightweight texture-based substitutes, and for all but the most oblique viewing angles, this has worked well enough.

Full 3D model geometry versus textures only in VR.

For conventional game development, the hack on the right works well enough in many cases. With the help of normal maps it will even dynamically receive light fairly well. Viewed stereoscopically, however, the illusion falls apart just as it would if it were an actual surface half a meter away from you with actual stickers applied — you would know, and you would know immediately.

Viewed close-up in VR, these texture-for-geo techniques can come off as cheap hacks and rapidly pull users out of in-world presence. With all that in mind, here are some things to consider:

Baked Geo Textures Remain an Option for Distant Objects in VR

Items that will remain fairly distant – say, ten meters or more – are seen almost identically by the left and right eye, with very little convergence. So, if you know that an object will remain distant, the old tricks still work. This is very context-dependent, however, so always consider feature size and complexity when deciding how far is far enough.
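To get a feel for how quickly convergence falls off with distance, you can compute the vergence angle directly: it’s the angle between the two eyes’ lines of sight when fixating a point. The sketch below assumes an average interpupillary distance of about 64 mm; the function name and constants are illustrative, not taken from any engine API.

```python
import math

IPD = 0.064  # assumed average interpupillary distance, in meters

def vergence_deg(distance_m):
    """Angle (degrees) between the eyes' lines of sight at a given fixation distance."""
    return math.degrees(2 * math.atan(IPD / (2 * distance_m)))

for d in (0.5, 2.0, 10.0, 50.0):
    print(f"{d:5.1f} m -> {vergence_deg(d):.2f} degrees of vergence")
# At 0.5 m the eyes converge by roughly 7.3 degrees; at 10 m it is
# already under 0.4 degrees, which is why baked detail holds up out there.
```

The steep drop-off is the whole story: past ten meters the two eyes are pointed almost exactly the same way, so the stereo pair carries very little extra information for the brain to contradict the fake detail with.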

Parallax Occlusion Mapping (POM) Can Be a Great Compromise

Like displacement mapping, parallax occlusion mapping uses a height map to create depth, but it does so without actually modifying geometry. At a GPU cost comparable to normal mapping, POM delivers what bump and normal mapping cannot: the convincing appearance of self-occlusion and even fairly accurate shadow casting and receiving. It isn’t perfect and takes some wrangling, but it can really deliver when you need the detail at a fraction of the GPU overhead of real geometry. Impressive results come easily on broad, flat surfaces such as floors, walls, and ceilings, and with some extra work even more complex surfaces can take advantage of POM. The major game engines have all embraced POM, so it’s definitely worth considering when trying to shed polys.
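To make the idea concrete, here is a toy CPU sketch of the core ray march that POM performs per-pixel in a fragment shader: step a view ray down through a depth field until it dips below the sampled surface, then sample the texture at the shifted UV. Real implementations also interpolate between the last two samples and add self-shadowing; the function name, conventions, and constants here are illustrative assumptions, not engine API.

```python
def parallax_occlusion_uv(uv, view_dir, depth_at, depth_scale=0.05, steps=32):
    """March a tangent-space view ray through a depth field; return the shifted UV.

    view_dir: tangent-space (x, y, z), z pointing out of the surface.
    depth_at: function (u, v) -> depth in [0, 1], where 0 is the surface.
    """
    # UV shift per step: the view ray projected onto the surface plane.
    du = view_dir[0] / view_dir[2] * depth_scale / steps
    dv = view_dir[1] / view_dir[2] * depth_scale / steps
    layer = 1.0 / steps  # depth covered by each march step
    u, v = uv
    ray_depth = 0.0
    while ray_depth < depth_at(u, v):
        u -= du  # march the sample point toward the viewer
        v -= dv
        ray_depth += layer
    return u, v

# A flat surface (depth 0 everywhere) leaves the UVs untouched...
flat = parallax_occlusion_uv((0.5, 0.5), (0.6, 0.0, 0.8), lambda u, v: 0.0)
# ...while a uniformly recessed surface shifts the sample toward the viewer.
deep = parallax_occlusion_uv((0.5, 0.5), (0.6, 0.0, 0.8), lambda u, v: 0.5)
print(flat, deep)
```

Because the UV shift depends on the view direction, each eye samples the height field from its own angle, which is exactly why POM holds up under stereoscopy where a flat decal does not.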

Normal maps versus parallax maps for 3D modeling.

Normal map on the left, and parallax map on the right – each surface is composed of just two triangles.

More Info: Unity: Height Mapping | Unreal: Parallax Occlusion Mapping

Avoid Planar Construction of Foliage and Ground Cover

Plant life is another place where complex geometry is commonly mapped onto simple surfaces to avoid GPU-intensive geometry, and again, stereoscopy greatly diminishes the effectiveness of the technique. Trees are complicated, and there aren’t many great alternatives, so use whatever geometric density your scene can stand – but plant life, especially plant life seen up close, should be structured more like its real-world counterpart whenever possible.

Planar construction doesn't work in VR.

The tree above was constructed by texturing three intersecting planes, as seen on the right. On a monitor, it looks sort of OK. In VR, it looks like three intersecting planes with trees painted on them, and could not possibly be mistaken for a tree.

It’s All About Proximity

Producing models that work well on stereoscopic VR displays may seem complicated and difficult to judge – especially if you don’t personally have access to a VR headset. The thing to remember is that stereoscopic convergence is what causes geo-baking hacks to fail, so the more parallel the eyes’ lines of sight are, the less sensitive the user will be to the hack. Once a surface comes within ten meters or so of the user, 2D hacks become progressively less convincing as the distance closes. So keep proximity and feature size in mind when deciding what geometry to bake and what should be modeled or parallax mapped.