The Inner Product, August 2002
Jonathan Blow (jon@number-none.com)
Last updated 16 January 2004
 

Rendering Level-of-Detail Forecast


Related articles:

Unified Rendering LOD, Part 1 (March 2003)
Unified Rendering LOD, Part 2 (April 2003)
Unified Rendering LOD, Part 3 (May 2003)
Unified Rendering LOD, Part 4 (June 2003)
Unified Rendering LOD, Part 5 (July 2003)
 

Note: This article is currently missing its figures. 
We will attempt to remedy this situation in the future.



This month I'm going to talk about rendering level-of-detail (LOD) algorithms, and make some recommendations about what to do in the game you're currently designing.

Just now I was careful to say "rendering level-of-detail", by which I mean reducing the number of triangles that are used to visually represent a 3D scene, at runtime, in a way that tries to put most of the geometric detail where it will be most effective. From now on I will just say "LOD" to be brief. It used to be a given that if you said "LOD" you meant "rendering LOD", but these days, we often talk about LOD for a number of different systems, like AI and physics. So don't be lulled into thinking that, if you solve the rendering LOD problem, your game is golden; other LOD-style problems may still be lurking.

For rendering LOD, we have a number of types of system to choose from. We can use a Continuous Level-Of-Detail (CLOD) system, progressive meshes, or the old low-tech solution whereby we have a number of precomputed levels of detail for each mesh, and we just switch meshes based on viewpoint distance criteria ("static mesh substitution"). The conclusion of this article is that you ought to be as low-tech as possible and use static mesh substitution. I'll give more detail eventually, but first I will justify this conclusion by criticizing the other methods of LOD. I'll start with CLOD, since that's what I have worked with the most.


A basic property of LOD

To see how CLOD schemes are flawed, let's take a look at the basic nature of LOD'd geometry. Figure 1 shows some screenshots from a terrain project I worked on a couple of years ago. Figure 1a shows the rendered image, and 1b shows a wireframe representation of the terrain geometry; I have circled groups of triangles in the scene based on their distance from the viewer. The triangles circled in red are very close to the camera; yellow indicates triangles in the middle distance, and green indicates triangles that are very far away.

In this image, roughly half the triangles on the screen are close to the viewpoint. This is the expected case for LOD'd geometry, assuming that the world is shaped mostly two-dimensionally (like a terrain, or most first-person shooter maps) and that the idealized geometry does not consist of large, flat surfaces. By "idealized geometry", I mean the geometry of your world at the maximum level of detail, which might even contain infinitely fine features (if your surfaces are fractally shaped, for example).

You can do some simple math showing that this behavior is expected. You start by writing down the constraint that your LOD algorithm uses in order to split triangles (usually it bounds the screenspace-projected length of some world-space distance, like Lindstrom-Koller's delta values or ROAM's wedgies). Assuming that the average value of this world-space distance is proportional to the length of a triangle's edge (meaning that detail does not diminish with scale), you can see that the LOD algorithm is trying to make the triangles projected to the screen be of roughly the same size. Then, do some trig: since the map is mostly 2D, idealize it as an infinite plane being projected to the screen. Imagine a frustum with a 90-degree vertical field of view, so that the vector from the eye through the bottom of the screen is pointing at your feet in worldspace, and the vector through the top of the screen is pointing toward the horizon. Then, if your eye is at a height 'h' from the ground, half the triangles on the screen are within distance 'h' of your feet! Now, a 90 degree vertical field of view is pretty excessive, and you don't usually walk around in such a way that you can see your feet, so this result is a little bit high. But if you play around with this on a piece of paper, you'll see that the numbers don't get all that much better as you tweak the parameters.
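
If you'd rather let the computer do the paper-and-pencil work, here's a small standalone C++ sketch of that trig. The 2-meter eye height and the loop step are just illustrative numbers, not anything from a real engine.

#include <cstdio>
#include <cmath>

int main() {
    const double pi         = 3.14159265358979323846;
    const double eye_height = 2.0;      // meters; the 'h' from the argument above
    const double half_fov   = pi / 4.0; // 90-degree vertical field of view

    // The view direction is pitched 45 degrees down, so screen coordinate s
    // in [-1, +1] (bottom edge to top edge) maps to a ray that is
    // (45 degrees - atan(s * tan(half_fov))) below horizontal.
    for (double s = -1.0; s < 0.9; s += 0.25) {
        double angle_below     = half_fov - atan(s * tan(half_fov));
        double ground_distance = eye_height / tan(angle_below);
        printf("screen height %+5.2f -> ground distance %6.2f m\n", s, ground_distance);
    }

    // The s = 0 row (the vertical middle of the screen) comes out at exactly
    // eye_height meters: the whole bottom half of the screen is showing
    // ground within 'h' of your feet.
    return 0;
}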


The problem with CLOD algorithms

So now I use this fact to harsh on CLOD. Most CLOD algorithms maintain a persistent mesh that they take great pains to update incrementally, under the misconception that these meshes will not change quickly. As the ROAM paper says: "ROAM execution time is proportionate to the number of triangle changes per frame, which is typically a few percent of the output mesh size, hence ROAM performance is insensitive to the resolution and extent of the input terrain". There's a contradiction inherent in that sentence.

If you, the avatar in a game world, are two meters tall, half your tessellated mesh is within two meters of you. By definition then, if you move two meters forward, most of the active tessellation will have to be recomputed. How much time does it take you to walk two meters in real life? How long does it take in a game world, where we often move at unrealistic speeds?

Most CLOD algorithms do a lot of bookkeeping to continuously change the mesh; that bookkeeping is much slower than the clean "discard this, use that" approach of static mesh substitution. When your terrain is detailed enough, and you move quickly enough, these algorithms drown in their own bookkeeping.

These problems don't usually arise in the research papers that present CLOD algorithms, because those papers use relatively low-density tessellations, low viewpoint speeds, and viewpoints that are very far from the terrain being viewed (without corresponding enlargement of the viewable terrain distance). But anyone who tries to render a 70,000-triangle CLOD terrain will need to sweat very hard to make the system run acceptably. At the same time, though, you can render a 500,000-triangle terrain with static mesh substitution, and you don't even need to think much; you need maybe 1/5th of the software engineering effort to get the program working, and afterward, the program is much easier to maintain.

In addition, CLOD systems usually impose topology constraints that we don't want (e.g. the terrain algorithms require the input data to be a height map, so you can't have overhangs or holes). And I have a lot of arguments about how nobody's even shown that CLOD algorithms provide good tessellation efficiency, which would supposedly have been the whole point of using them. But there's not enough space in the magazine for that particular rant.

The only time I can recommend use of a CLOD system is when your data set is constantly changing (like ocean waves during a storm) and you use a non-persistent form of CLOD, like Seumas McNally's "split-only ROAM".
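
To make "non-persistent" concrete, here's a rough sketch of the split-only idea over a height field: re-tessellate from scratch every frame by recursively splitting a binary triangle tree until a projected error bound is met, with none of the split/merge queues or frame-to-frame bookkeeping of full ROAM. This isn't Seumas's code; the height field, camera, and error threshold are toy stand-ins, and crack prevention between neighboring triangles is omitted entirely.

#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };

// Toy stand-ins so the sketch runs by itself: a procedural height field,
// a fixed camera position, and a "renderer" that just counts triangles.
static float height_at(float x, float y) { return 5.0f * std::sin(x * 0.1f) * std::cos(y * 0.1f); }
static const float cam_x = 40.0f, cam_y = 40.0f;
static int triangles_emitted = 0;
static void emit_triangle(Vec2, Vec2, Vec2) { triangles_emitted++; }

static float distance_to_camera(float x, float y) {
    float dx = x - cam_x, dy = y - cam_y;
    return std::sqrt(dx * dx + dy * dy);
}

// Recursively split one right triangle of the binary triangle tree until the
// projected error at its split point is small enough (or we hit max_depth).
// apex is the right-angle corner; left and right are the hypotenuse ends.
static void tessellate(Vec2 apex, Vec2 left, Vec2 right,
                       int depth, int max_depth, float error_threshold)
{
    Vec2 mid = { (left.x + right.x) * 0.5f, (left.y + right.y) * 0.5f };

    // World-space error: how far the interpolated surface is from the real
    // height field at the point where this triangle would split.
    float interpolated = 0.5f * (height_at(left.x, left.y) + height_at(right.x, right.y));
    float error = std::fabs(height_at(mid.x, mid.y) - interpolated);

    // Crude screen-space projection: divide by distance to the viewer.
    float projected = error / std::max(distance_to_camera(mid.x, mid.y), 1.0f);

    if (depth >= max_depth || projected < error_threshold) {
        emit_triangle(apex, left, right);
        return;
    }
    tessellate(mid, apex, left,  depth + 1, max_depth, error_threshold);
    tessellate(mid, right, apex, depth + 1, max_depth, error_threshold);
}

int main() {
    // Two root triangles covering a 256x256 patch, split along its diagonal.
    Vec2 sw = {0, 0}, se = {256, 0}, ne = {256, 256}, nw = {0, 256};
    tessellate(se, sw, ne, 0, 20, 0.02f);
    tessellate(nw, ne, sw, 0, 20, 0.02f);
    printf("emitted %d triangles this frame\n", triangles_emitted);
    return 0;
}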

Once upon a time when I was very enthusiastic about CLOD, I had an email conversation with Charles Bloom and Tom Forsyth. They were proponents of using VIPM, a certain kind of progressive mesh, to represent terrains, but I was convinced that CLOD was better. Well, I've clearly changed my mind about CLOD, but I don't think progressive meshes are very good either, so I will now spend some time dissing them too.


The problem with progressive meshes

Like CLOD terrain, progressive meshes have a certain technological wow-factor, because they also provide a continuum of sorts: you can adjust the level-of-detail of a rendered object to a granularity of 1 triangle. And with certain progressive mesh implementations, like the VIPM implementation shown to me by Tom and Charles, you can render those objects efficiently with modern 3D hardware.
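
This is not Tom and Charles's implementation, but a toy sketch of the general progressive-mesh idea may help show where that one-triangle granularity comes from: vertices are ordered so that detail level k means "the first k edge collapses have been applied", and rendering just remaps indices through the collapse chain and drops triangles that became degenerate. All the names here are made up for the sketch.

#include <cstddef>
#include <cstdint>
#include <vector>

struct ProgressiveMesh {
    // Collapse i removes vertex (num_vertices - 1 - i) and redirects it to
    // collapse_target[that vertex]. We assume collapse_target[v] < v, i.e.
    // every collapse merges into a vertex that survives at least as long.
    std::vector<uint32_t> collapse_target;   // one entry per vertex
    std::vector<uint32_t> indices;           // full-detail index list, in triples
    uint32_t num_vertices;
};

// Build an index list with the first 'collapses_applied' collapses applied.
// The granularity is one vertex (one or two triangles) per step.
std::vector<uint32_t> build_indices(const ProgressiveMesh &pm, uint32_t collapses_applied)
{
    uint32_t first_dead = pm.num_vertices - collapses_applied;

    // Follow the collapse chain until we land on a vertex that still exists.
    auto remap = [&](uint32_t v) {
        while (v >= first_dead) v = pm.collapse_target[v];
        return v;
    };

    std::vector<uint32_t> out;
    for (std::size_t i = 0; i + 2 < pm.indices.size(); i += 3) {
        uint32_t a = remap(pm.indices[i + 0]);
        uint32_t b = remap(pm.indices[i + 1]);
        uint32_t c = remap(pm.indices[i + 2]);
        if (a != b && b != c && a != c)      // skip triangles that collapsed away
            out.insert(out.end(), { a, b, c });
    }
    return out;
}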

But when rendering an object in a game, like a character or a vehicle, we just don't need fine-grained control of the object's triangle count. When our total triangle budget is in the millions, small changes in triangle count are completely in the noise. When building static levels of detail, the number of triangles we want to discard is usually proportional to the number of triangles in the mesh. If you're building static LOD for a 5000-triangle object, it makes sense to build a reduced mesh at 2500 triangles, but building a mesh at 4900 triangles would be silly.

One might think that the single-vertex-at-a-time path provided by progressive meshes would help with tasks like geomorphing. But it doesn't; when geomorphing, you need to be able to skip across large numbers of collapses at once, because otherwise you're limiting the speed at which you can adjust the object's detail, and that limit leads to poor system behavior.

So progressive meshes do not provide a concrete benefit over static substitution. But they're more complicated, so they incur more software engineering overhead, making your game harder to finish. When approaching problems like building normal maps to augment geometry, static meshes let you consider each resolution of the mesh in isolation and get good results. With progressive meshes, triangles of different detail levels overlap in uv-space, so you have to write more complicated code to solve the problem.


The Plan

So if you're designing an LOD system for your next game, my recommendation is to switch between precomputed static meshes, just like we did in the mid-1990s. Back then, because of the low triangle counts, we would usually have art guys hand-create the low-res meshes. These days, there is no good reason to do that. High-quality mesh reduction algorithms, like Garland-Heckbert Error Quadric Simplification, do a good job of generating low-resolution meshes, and your artists' time is valuable.
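
For reference, the core of the quadric idea is small; this is a sketch assembled from the Garland-Heckbert paper rather than lifted from any particular library. Each vertex accumulates a symmetric 4x4 matrix built from the planes of its incident triangles, and the cost of moving that vertex to a position v is the quadratic form v^T Q v. A full simplifier also needs edge selection, a priority queue, and topology cleanup, none of which is shown.

#include <cmath>

struct Vec3 { double x, y, z; };
struct Quadric { double m[4][4] = {}; };   // symmetric 4x4, zero-initialized

// Plane (a, b, c, d) through a triangle, normalized so a^2 + b^2 + c^2 = 1.
void triangle_plane(Vec3 v0, Vec3 v1, Vec3 v2, double out[4]) {
    Vec3 e1 = { v1.x - v0.x, v1.y - v0.y, v1.z - v0.z };
    Vec3 e2 = { v2.x - v0.x, v2.y - v0.y, v2.z - v0.z };
    Vec3 n  = { e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x };
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    out[0] = n.x / len;  out[1] = n.y / len;  out[2] = n.z / len;
    out[3] = -(out[0] * v0.x + out[1] * v0.y + out[2] * v0.z);
}

// Fold one incident triangle's plane into a vertex's quadric: Q += p * p^T.
void add_plane(Quadric &q, const double p[4]) {
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            q.m[i][j] += p[i] * p[j];
}

// Error of placing a vertex with accumulated quadric q at position v: the
// sum of squared distances to all the planes that were folded into q.
double quadric_error(const Quadric &q, Vec3 v) {
    const double h[4] = { v.x, v.y, v.z, 1.0 };
    double e = 0.0;
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            e += h[i] * q.m[i][j] * h[j];
    return e;
}

To collapse an edge, the simplifier sums the two endpoints' quadrics and puts the merged vertex wherever that combined error is smallest (the paper solves a small linear system for this; the better of the two endpoints or the midpoint is a serviceable fallback), then repeatedly collapses the cheapest edge until it reaches the target triangle count.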

I've recently been playing around with the normal-map generation technique put to good use in Doom 3. Using this technique we can render meshes that appear to have a large amount of geometric detail, but that really consist of a moderate number of triangles. But aside from just looking good, this technique helps make LOD more effective than it has been in the past. LOD is about removing vertices from a mesh, but historically, we have tended to color objects with vertex-based lighting, so removing vertices also removed the maxima and minima of our lighting functions. This caused a lot of popping.

With dot3 normal mapping, we can greatly reduce the extent to which the object's appearance depends on individual vertex positions, and thus the popping decreases. This is a case where static mesh switching benefits tremendously, yet the more-complex CLOD systems find it very difficult to leverage this technique, so they are left further behind.
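
Here's a bare-bones sketch of what "dot3" means in this context, written as plain C++ rather than a shader, with the normal-map fetch stubbed out: the diffuse term comes from a normal sampled per pixel out of a texture, so the interpolated vertex data only matters through the tangent frame, not through the lighting extrema themselves.

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Stand-in for the normal-map fetch; a real version decodes a texel from
// [0, 255] into a unit tangent-space normal instead of returning "flat".
static Vec3 sample_normal_map(float /*u*/, float /*v*/) { return { 0.0f, 0.0f, 1.0f }; }

// Per-pixel diffuse term. light_ts is the light direction rotated into
// tangent space at this pixel; that rotation is the only place the mesh's
// vertex data still matters, which is why swapping to a coarser mesh pops
// so much less than it does with per-vertex lighting.
float dot3_diffuse(float u, float v, Vec3 light_ts) {
    float d = dot(sample_normal_map(u, v), light_ts);
    return d > 0.0f ? d : 0.0f;   // clamp negatives, as the dot3 combiner does
}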


What about the map?

So that's what I recommend for objects, but what about big things, like terrains or office complexes?

Casey Muratori and I tend to make the same predictions about future environment LOD. There will be a unification, with the same kind of system used for indoors and outdoors. It'll involve chopping the world up into a bunch of blocks, and then performing a process like mipmapping, where each of these blocks of geometry is like a texel from a texture map. To get a lower level of detail for the whole world, combine every 4 neighboring blocks and detail-reduce the resulting mesh (4 blocks if the world mostly extends through 2 dimensions; 8 blocks if the world is more volumetric). Keep doing this until you get all the way down to 1 block representing the entire world.
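
Here's a rough sketch of what that offline build might look like, assuming a mostly-2D world (four children per node; a volumetric world would use eight). Mesh, merge(), and reduce() are deliberately minimal stand-ins; reduce() is where a real tool would run something like quadric simplification, and merge() would also weld the vertices shared along block seams.

#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

struct Mesh {
    std::vector<float>    vertices;   // xyz triples
    std::vector<uint32_t> indices;    // triangle index triples
};

// Concatenate two blocks of geometry (stand-in; no seam welding).
static Mesh merge(const Mesh &a, const Mesh &b) {
    Mesh out = a;
    uint32_t base = (uint32_t)(a.vertices.size() / 3);
    out.vertices.insert(out.vertices.end(), b.vertices.begin(), b.vertices.end());
    for (uint32_t i : b.indices) out.indices.push_back(i + base);
    return out;
}

// Stand-in for the detail-reduction step; a real version would simplify the
// merged mesh down to roughly one block's triangle budget.
static Mesh reduce(const Mesh &m, std::size_t /*target_triangles*/) { return m; }

struct Chunk {
    Mesh mesh;                        // this node's representation of its region
    std::unique_ptr<Chunk> child[4];  // null at the leaves (full-detail blocks)
};

// Build the chunk covering the 2^level x 2^level group of leaf blocks whose
// lower-left corner is (x, y); leaves is a row-major grid of full-detail blocks.
static std::unique_ptr<Chunk> build(const std::vector<Mesh> &leaves, int grid_width,
                                    int x, int y, int level)
{
    auto node = std::make_unique<Chunk>();
    if (level == 0) {
        node->mesh = leaves[y * grid_width + x];
        return node;
    }
    int half = 1 << (level - 1);
    node->child[0] = build(leaves, grid_width, x,        y,        level - 1);
    node->child[1] = build(leaves, grid_width, x + half, y,        level - 1);
    node->child[2] = build(leaves, grid_width, x,        y + half, level - 1);
    node->child[3] = build(leaves, grid_width, x + half, y + half, level - 1);

    // Combine the four children, then reduce back toward one block's budget,
    // so every level of the hierarchy costs about the same per chunk to draw.
    Mesh combined = merge(merge(node->child[0]->mesh, node->child[1]->mesh),
                          merge(node->child[2]->mesh, node->child[3]->mesh));
    node->mesh = reduce(combined, combined.indices.size() / 3 / 4);
    return node;
}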

At runtime, you start near the camera, rendering high-resolution blocks; then, as you traverse the scene and go further from the viewpoint, you switch to lower-resolution blocks. At places where the resolution transitions occur, you will need to stitch together blocks of neighboring detail levels, or else cracks will appear in the scene. This stitching is trivial for the case of a height map; it becomes more difficult with arbitrary topologies crossing the detail boundaries, but the solution is largely a matter of bookkeeping. Since your algorithm performed the detail-reduction steps for the blocks, it can effectively trace backward along that sequence of reductions to see how edges should match up.
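
And the runtime half of the same sketch, reusing Chunk and Mesh from the build step above. chunk_center() and geometric_error() are hooks you would fill in from data recorded during the offline reduction; error_scale is a made-up factor that converts world-space error over distance into rough pixel counts. The stitching described in the paragraph above is exactly the part that is not shown here.

#include <cmath>

struct Vec3 { float x, y, z; };

Vec3  chunk_center(const Chunk &c);      // center of the chunk's bounding volume
float geometric_error(const Chunk &c);   // worst deviation from full detail, in meters
void  draw(const Mesh &m);               // submit one chunk's mesh to the renderer

// error_scale is roughly viewport_height / (2 * tan(vertical_fov / 2)).
void render(const Chunk &c, Vec3 camera, float error_scale, float max_screen_error)
{
    Vec3 p = chunk_center(c);
    float dx = p.x - camera.x, dy = p.y - camera.y, dz = p.z - camera.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (dist < 1.0f) dist = 1.0f;

    // Projected error in pixels falls off with distance, so far-away regions
    // pass the test with coarse chunks while nearby ones recurse to the leaves.
    float screen_error = geometric_error(c) * error_scale / dist;

    bool is_leaf = (c.child[0] == nullptr);
    if (is_leaf || screen_error < max_screen_error) {
        draw(c.mesh);   // plus stitching against coarser-level neighbors, not shown
        return;
    }
    for (int i = 0; i < 4; i++)
        render(*c.child[i], camera, error_scale, max_screen_error);
}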

Thatcher Ulrich implemented something like this for terrain rendering. Sometime soon, I plan to do it for triangle soups; drop me a line if you beat me to it.


References:

Mark Duchaineau et al., "ROAMing Terrain: Real-time Optimally Adapting Meshes", Proceedings of IEEE Visualization 1997.

Seumas McNally, "The Tread Marks Engine", part of "Two Advanced Terrain Rendering Systems", Game Developers Conference 2000.

The Virtual Terrain Project, http://vterrain.org.

Thatcher Ulrich, "Chunking LOD", http://tulrich.com/geekstuff/chunklod.html.

Michael Garland and Paul Heckbert, "Surface Simplification Using Quadric Error Metrics", SIGGRAPH 1997. http://graphics.cs.uiuc.edu/~garland/research/quadrics.html.



Figures (currently missing)

Figure 1a: A rendered terrain.
Figure 1b: A wireframe of that terrain, with triangles grouped by distance.