A Survey of Polygonal Simplification Algorithms
UNC Technical Report TR97-045
Department of Computer Science
University of North Carolina at Chapel Hill
Polygonal simplification is at once a very current and a very old topic in computer graphics. As early as 1976 James Clark described the benefits of representing objects within a scene at several resolutions, and flight simulators have long used handcrafted multi-resolution models of airplanes to guarantee a constant frame rate [Clark 76, Cosman 81]. Recent years have seen a flurry of research into generating such multi-resolution representations of objects automatically by simplifying the polygonal geometry of the object. This paper surveys the field of polygonal simplification, describing the current state-of-the-art as well as attempting to identify the major issues and trends in the field to date.
Polygonal models currently dominate the field of interactive three-dimensional computer graphics. This is largely because their mathematical simplicity allows rapid rendering of polygonal datasets, which in turn has led to widely available polygon rendering hardware. In addition, polygons serve as a sort of lowest common denominator for computer models, since almost any model representation (spline, implicit-surface, volumetric) may be converted with arbitrary accuracy to a polygonal mesh. For these and other reasons, polygonal models are the most common representation for visualization of medical, scientific, and CAD datasets.
In many cases the complexity of such models exceeds the ability of graphics hardware to render them interactively. Three approaches are used to alleviate this problem:
• Augmenting the raw polygonal data to convey more visual detail per polygon. Gouraud shading and texture mapping fall into this category.
• Using information about the model to cull away large portions of the model that are occluded from the current viewpoint. The visibility processing approach described by Seth Teller and Carlo Sequin is an excellent example [Teller 91].
• Polygonal simplification methods simplify the polygonal geometry of small or distant objects to reduce the rendering cost without a significant loss in the visual content of the scene. These methods are the subject of this paper.
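To make the third approach concrete, the sketch below shows one common way such simplifications are applied at run time: several levels of detail (LODs) are precomputed for an object, and one is selected each frame from the object's projected screen size. The thresholds and the `select_lod` helper are hypothetical illustrations, not taken from the survey.

```python
import math

def select_lod(lod_triangle_counts, object_radius, distance, fov_y, screen_height):
    """Pick an index into a list of LODs (ordered finest to coarsest).

    The projected screen-space height of the object's bounding sphere
    serves as a rough measure of its visual importance. Thresholds here
    are arbitrary, for illustration only.
    """
    # Approximate projected height in pixels of a sphere of object_radius
    # at the given distance, under a perspective projection with vertical
    # field of view fov_y (radians).
    projected = (object_radius / (distance * math.tan(fov_y / 2))) * screen_height
    if projected > 200:      # large on screen: use the full-detail model
        return 0
    elif projected > 50:     # medium: use an intermediate simplification
        return min(1, len(lod_triangle_counts) - 1)
    else:                    # small or distant: coarsest model suffices
        return len(lod_triangle_counts) - 1

lods = [10000, 2000, 400]   # triangle counts of three precomputed LODs
near = select_lod(lods, object_radius=1.0, distance=2.0,
                  fov_y=math.pi / 3, screen_height=1080)
far = select_lod(lods, object_radius=1.0, distance=100.0,
                 fov_y=math.pi / 3, screen_height=1080)
```

A nearby object selects the full 10,000-triangle model, while the same object at a hundred units of distance selects the 400-triangle simplification, reducing rendering cost with little visual loss.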
Note that terrains, or tessellated height fields, are a special category of polygonal models. The regularity and two-dimensional nature of these models simplify some aspects of the simplification problem; most of the problems facing researchers in polygonal simplification have been solved a year or two earlier for the restricted domain of terrain datasets. At the risk of injustice to some elegant work on terrains, this survey focuses on solutions that apply to the more general realm of polygonal meshes.
A bewildering variety of simplification techniques have appeared in the recent literature; the next section attempts to classify the important similarities and differences among these techniques. A catalog of ten published algorithms follows, briefly describing each approach and placing it in this taxonomy. The next sections discuss some important issues and trends in the field of simplification, and speculate on possible avenues for future work. A few informal remarks close the paper.
The various simplification approaches described in the computer graphics literature of the last five years can be categorized along several axes. Some algorithms simplify the scene by iteratively removing polygons while others collapse multiple vertices together, some algorithms preserve topology while others ignore it, and so on. Polygonal simplification is by no means a solved problem; this section identifies some of the important areas in which existing solutions differ or resemble each other.
A primary classification of any simplification algorithm is the underlying mechanism it uses to remove polygons from the scene. Nearly every simplification technique in the literature uses some variation or combination of four basic polygon elision mechanisms: sampling, adaptive subdivision, decimation, and vertex merging.
• Sampling schemes begin by sampling the geometry of the initial model. These samples can be points on the 2-D manifold surfaces in the model or voxels in a 3-D grid superimposed upon the model. The algorithm then tries to create a polygonal simplification that closely matches the sampled data. Varying the number of samples taken regulates the accuracy of the created simplification.
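The voxel variant of this idea can be sketched in a few lines: superimpose a 3-D grid on the model and record which voxels its surface samples occupy. A simplification would then be fit to those occupied voxels; only the sampling step is shown here, and the `voxelize` helper is an illustrative assumption rather than any published algorithm.

```python
def voxelize(points, grid_size):
    """Map each (x, y, z) surface sample in the unit cube to an occupied voxel.

    A coarser grid (smaller grid_size) merges nearby samples into one
    voxel, so grid resolution directly regulates the accuracy of the
    eventual simplification.
    """
    occupied = set()
    for x, y, z in points:
        # Clamp so samples on the upper boundary fall in the last cell.
        ix = min(int(x * grid_size), grid_size - 1)
        iy = min(int(y * grid_size), grid_size - 1)
        iz = min(int(z * grid_size), grid_size - 1)
        occupied.add((ix, iy, iz))
    return occupied

samples = [(0.1, 0.1, 0.1), (0.12, 0.11, 0.09), (0.9, 0.9, 0.9)]
coarse = voxelize(samples, grid_size=2)    # coarse grid merges the two nearby samples
fine = voxelize(samples, grid_size=100)    # finer grid keeps all three distinct
```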
• Adaptive subdivision approaches create a very simple polygonal approximation of the scene, called the base model. The base model consists of triangles or squares, shapes that lend themselves to recursive subdivision. This process of subdivision is applied to the base model until the resulting surface lies within some user-specified threshold of the original surface. Conceptually simple, adaptive subdivision methods suffer two disadvantages. First, creating the base model involves the very problem of polygonal simplification that the algorithm is attempting to solve. For this reason adaptive subdivision approaches have been more popular with the specialized case of terrains, whose base model is simple to calculate. Second, a recursive subdivision of the base model may not be able to capture the exact geometry of the original model, especially around sharp corners and "creases" in the mesh [Hoppe 96].
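The control flow of adaptive subdivision can be illustrated with a one-dimensional analogue, refining a piecewise-linear approximation of a terrain height function until every segment's midpoint error falls below the user-specified threshold. Real algorithms recurse over triangles or quads rather than intervals, but the structure is the same; this sketch is a hypothetical simplification, not a published method.

```python
def subdivide(f, x0, x1, threshold):
    """Return knot x-coordinates of a piecewise-linear fit to f on [x0, x1].

    Each segment is split recursively until the error of its linear
    interpolant, measured at the segment midpoint, is within threshold.
    """
    mid = (x0 + x1) / 2
    # Error of the linear interpolant at the midpoint of this segment.
    error = abs(f(mid) - (f(x0) + f(x1)) / 2)
    if error <= threshold:
        return [x0, x1]                       # segment is accurate enough
    left = subdivide(f, x0, mid, threshold)   # otherwise recurse on each half
    right = subdivide(f, mid, x1, threshold)
    return left + right[1:]                   # drop the duplicated midpoint

terrain = lambda x: x * x                     # a smooth stand-in "height field"
knots = subdivide(terrain, 0.0, 1.0, threshold=0.01)
```

Tightening the threshold yields more knots (more polygons, in the mesh setting); loosening it yields a coarser approximation. The cited weakness also shows up here: if `terrain` had a sharp crease between grid-aligned knots, uniform midpoint refinement could need very deep recursion to capture it.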
• Decimation techniques iteratively remove vertices or faces from the mesh, retriangulating the resulting hole after each step. This process continues until it reaches a user-specified degree of simplification. Since most decimation algorithms do not permit a vertex or face removal that will change the local topology of the mesh, the decimation process may be unable to effect high degrees of simplification.
• Vertex merging schemes operate by merging two or more vertices of a triangulated model together into a single vertex, which can in turn be merged with other vertices. Triangles whose corners have been collapsed together become degenerate and can be eliminated, decreasing the total polygon count. Vertex merging approaches do not necessarily require manifold topology, though some algorithms have made that assumption implicitly. These algorithms use a limited vertex merge called an edge collapse, in