To run the mesh viewer, call the executable with the data-file name as an argument, e.g. `MeshDecimator.exe torus.off`. The program accepts OFF and OFFPM format files. The GUI accepts mouse movements and a handful of key presses. The following actions apply to both OFF and OFFPM inputs.
For our data structure, we chose a vertex-to-face adjacency map: for each vertex we store the set of faces that contain it as a corner. While not necessarily the most compact data structure, it is complete in the sense that it allows us to traverse any connected region of the mesh through purely local operations. For instance, to move along an edge, we simply look up one of the adjacent faces and select one of its vertices (not equal to the current one). We subsequently describe how we create this adjacency map from the OFF file.
The OFF file contains a sequence of vertex positions followed by a sequence of triangular faces, where each face is defined by the three indices of its corners. For example, if a face is {0, 5, 2}, then its corners are the first, sixth, and third vertices in the OFF file. The corner indices are listed in counterclockwise order, so that the direction of the face normal is unambiguous. To create our adjacency map, each time a face f = {i, j, k} appears, we simply add f to adjacency(vi), adjacency(vj), and adjacency(vk). That is all there is to initializing our data structure.
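As an illustrative sketch (in Python, with hypothetical names; the viewer itself is not written this way), building the adjacency map from the face list amounts to:

```python
from collections import defaultdict

def build_adjacency(faces):
    """Map each vertex index to the set of face ids containing it as a corner."""
    adjacency = defaultdict(set)
    for face_id, corners in enumerate(faces):
        for v in corners:  # corners are stored in counterclockwise order
            adjacency[v].add(face_id)
    return adjacency

# Two triangles sharing the edge 0--2:
faces = [(0, 1, 2), (0, 2, 3)]
adj = build_adjacency(faces)
```

Vertices 0 and 2 end up adjacent to both faces, while vertices 1 and 3 touch only one each.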
To decimate an edge between vertices u and v, we first find the intersection of their adjacency sets. These are the shared faces that contain u--v as an edge. Since these shared faces will no longer exist after collapsing u--v, we go to each corner vertex of each shared face f and remove f from the adjacencies of those corners. Next, since u and v now become the same vertex w ≡ v, we go to each adjacent f ∈ adjacency(u) (note that the shared faces are no longer in this set), replace the corner index u with index v, and add f to adjacency(v). Finally, we remove u from the adjacency map, or equivalently, set adjacency(u) = ∅. (Note that throughout all of this, the counterclockwise order of the face corners is preserved.)
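A minimal sketch of these steps (Python, hypothetical names; a stand-in for, not a copy of, the actual implementation):

```python
from collections import defaultdict

def collapse_edge(u, v, faces, adjacency):
    """Collapse vertex u into v, updating faces and adjacency in place."""
    shared = adjacency[u] & adjacency[v]      # faces containing edge u--v
    for f in shared:                          # these faces disappear entirely
        for corner in faces[f]:
            adjacency[corner].discard(f)
    for f in adjacency[u]:                    # u's surviving faces
        faces[f] = [v if c == u else c for c in faces[f]]  # CCW order kept
        adjacency[v].add(f)
    adjacency[u] = set()                      # u is gone

# A fan of three triangles around vertex 0; collapse vertex 2 into 3.
faces = [[0, 1, 2], [0, 2, 3], [0, 3, 4]]
adjacency = defaultdict(set)
for fid, corners in enumerate(faces):
    for c in corners:
        adjacency[c].add(fid)
collapse_edge(2, 3, faces, adjacency)
```

After the collapse, the shared face {0, 2, 3} is gone, face {0, 1, 2} has become {0, 1, 3}, and vertex 2 has an empty adjacency.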
Vertex and face normals are derived quantities that are not needed to define the mesh, and are therefore, in some sense, redundant. However, we chose to store both. The motivation is that vertex normals are necessary for shading the geometry, while face normals are used for quadric simplification (more on this later). In any case, updating normals is straightforward. After performing the above steps, we go to adjacency(v) (which now contains all of u's original adjacency minus the shared faces) and update the face normals. Finally, we go to all the corner vertices of adjacency(v) and update the corresponding vertex normals (and quadrics).
We chose to define the vertex normal as a simple average of the adjacent face normals. That said, under the assumption that the underlying geometry (which the mesh is trying to represent) is continuous, something slightly more reasonable might be to weight each face normal by the inverse square root of the face area (square root, because that is the symmetric length scale of an equilateral triangle). Better yet, one could explicitly compute the distance from the current vertex to each adjacent face's barycentre, and weight by the inverse of this distance.
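For concreteness, here is a sketch of the area-weighted variant (Python; the function names are ours and the scheme is the speculative one above, not necessarily what the viewer does):

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def vertex_normal(adjacent_faces):
    """adjacent_faces: list of (face_normal, face_area) pairs.
    Weight each normal by 1/sqrt(area), the face's symmetric length scale."""
    acc = [0.0, 0.0, 0.0]
    for normal, area in adjacent_faces:
        w = 1.0 / math.sqrt(area)
        for i in range(3):
            acc[i] += w * normal[i]
    return normalize(acc)

# A small face pointing along z and a larger one along y:
n = vertex_normal([((0.0, 0.0, 1.0), 1.0), ((0.0, 1.0, 0.0), 4.0)])
```

The smaller face dominates the result, as intended: nearby geometry should influence the vertex normal more than distant geometry.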
Shown below are some illustrative examples of edge decimation. In the previous section we described the process through which we update our adjacency list. Something to add here is that we also perform fin removal. A fin is a pair of faces that share the exact same corner vertices. This poses a problem for the shader: the two faces have opposing normals, but since the shader interpolates between the vertex normals, both faces are shaded in the same way. One solution is to replicate the three corner vertices so that the two faces can be uniquely defined. Alternatively, we can remove the fin altogether. For a closed mesh, it certainly seems more reasonable to remove the fin, since in this case the mesh is volumetric while the fin is not. For the testpatch shown below (which is not closed, at least superficially), the two options seem equally reasonable. For the plane, which is completely planar, fin removal is obviously the correct choice. Regardless, we chose to remove fins whenever they appear. To do so, we call the edge-collapse function recursively. That is, suppose that w is a shared vertex between u and v, and that upon collapsing u into v, w becomes part of a fin. We want to remove w, so we collapse w into v. Note that collapsing w into v may generate yet more fins to remove, hence the recursion.
To find such a w for fin removal, we first save a set of candidates {wi}: the vertices of the shared faces between u and v, excluding u and v themselves. Then, after collapsing u into v, we look for f, f' ∈ adjacency(wi) such that f and f' have the same corner vertices. If such a pair of degenerate faces is found, we collapse wi into v as described above.
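The candidate check can be sketched like so (Python, illustrative names; comparing corner sets ignores winding, which is exactly what identifies a fin):

```python
from collections import defaultdict

def fin_vertices(candidates, faces, adjacency):
    """Return the candidates w whose adjacency contains two faces
    with identical corner sets (i.e. a fin)."""
    fins = []
    for w in candidates:
        seen = set()
        for f in adjacency[w]:
            key = frozenset(faces[f])
            if key in seen:
                fins.append(w)
                break
            seen.add(key)
    return fins

# Faces 0 and 1 form a fin (same corners, opposite winding):
faces = [[0, 1, 2], [2, 1, 0], [0, 2, 3]]
adjacency = defaultdict(set)
for fid, corners in enumerate(faces):
    for c in corners:
        adjacency[c].add(fid)
fins = fin_vertices([1, 3], faces, adjacency)
```

Each reported vertex would then be collapsed into v, possibly triggering the recursion described above.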
*(figures: edge decimation on the testpatch and plane meshes)*
Below, we show an example of how the adjacency data is updated during a collapse. Note that for this plane OFF file, the vertices are enumerated from left to right and then from bottom to top (scanline order). We collapse vertices 11 and 23 like so...
*(figures: adjacency data for the collapse of vertices 11 and 23)*
Collapsing random edges with midpoint approximation is far from optimal. Much better is quadric simplification, which takes into account the approximate curvature at each vertex. For the mathematical formulation, we refer the reader to Garland and Heckbert's 1997 paper on quadric mesh simplification.
To implement quadric simplification, we need to keep track of the quadric at each vertex, along with the quadric error metric for every pair of collapsable vertices. To be collapsable, a pair must either form an edge or lie within some distance threshold t, which allows very close but not necessarily connected vertices to be fused; t=0 restricts us to edge collapses only.
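In Garland's formulation, each vertex accumulates a 4×4 quadric from the planes of its adjacent faces, and the cost of placing a point x is xᵀQx in homogeneous coordinates. A sketch of that bookkeeping (Python with plain nested lists; the function names are ours, and this is illustrative rather than the viewer's code):

```python
def plane_quadric(n, d):
    """Fundamental quadric of the plane n.x + d = 0 (n a unit normal)."""
    p = (n[0], n[1], n[2], d)
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_quadrics(A, B):
    """Quadrics accumulate by simple matrix addition."""
    return [[A[i][j] + B[i][j] for j in range(4)] for i in range(4)]

def quadric_error(Q, x):
    """Cost x^T Q x, with x homogenized to (x, y, z, 1)."""
    h = (x[0], x[1], x[2], 1.0)
    return sum(h[i] * Q[i][j] * h[j] for i in range(4) for j in range(4))

# Quadric of the plane z = 0: the error at a point is its squared height.
Q = plane_quadric((0.0, 0.0, 1.0), 0.0)
```

The metric for a pair (u, v) is then the error of the combined quadric add_quadrics(Qu, Qv) evaluated at the chosen placement (the midpoint in the simplest variant; Garland's paper solves for the optimal position).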
To keep track of the collapsable pairs that remain, we create a new data structure: a vertex-to-vertex adjacency list (call it collapsablePartners(v)), where vertex u is adjacent to vertex v if they are connected by an edge or if they satisfy the threshold t. Whenever u is collapsed into v, we update this data structure so that wherever u once appeared, v now takes its place. We then add collapsablePartners(u) to collapsablePartners(v) and subsequently empty collapsablePartners(u). It is worth remarking that if we were only considering edge collapses, the original vertex-to-face adjacency would suffice; with nonzero t, however, we need something new like this. Besides, it is faster to look up edges in the vertex-to-vertex adjacency anyway.
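A sketch of that update (Python; names hypothetical, mirroring the collapsablePartners description above):

```python
from collections import defaultdict

def merge_partners(u, v, partners):
    """Collapse u into v in the vertex-to-vertex adjacency:
    everywhere u appeared, v takes its place."""
    partners[u].discard(v)
    partners[v].discard(u)
    for w in partners[u]:          # redirect u's remaining partners to v
        partners[w].discard(u)
        partners[w].add(v)
        partners[v].add(w)
    partners[u] = set()            # empty collapsablePartners(u)

# Triangle 0-1-2 plus a threshold pair 0-3; collapse 0 into 1.
partners = defaultdict(set, {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}})
merge_partners(0, 1, partners)
```

Vertex 3, which was only a threshold partner of the vanished vertex 0, is now a partner of 1.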
Next comes the order of edge collapses. We adopt the usual greedy approach of always decimating the pair with the minimum quadric error metric. To this end, we place our collapsable pairs in a priority queue so that the pair with the least error always appears on top. Since we only have immediate access to the top of the queue, it is impractical to rebuild the heap after each collapse. Instead, we keep pushing elements and allow the heap to grow and shrink within reason. The queue is therefore bound to contain stale, outdated elements. To discriminate between valid and invalid elements, we mark the pairs in the following way: each vertex v carries a timestamp Tv that is incremented whenever v is updated, and each queued pair records the timestamps of its two vertices at push time. When a pair reaches the top of the queue, we compare its recorded timestamps against the current ones; any mismatch means the pair is outdated and gets discarded.
Finally, we address the updates that need to be made after each collapse. Recomputing the relevant quadrics is simple: whenever a collapse is performed, the vertices whose quadrics need updating (call this set {vupdate}) are precisely those whose normals need updating. We already explained in section 3.2 how we use the adjacency information to find the vertices that need their normals updated; the only difference here is that instead of computing a normal, we compute a quadric. And since we have updated {vupdate}, we mustn't forget to update the timestamps T in tandem.
Updating the metrics is no more difficult, at least in principle. For each v ∈ {vupdate}, we find all the pairs (v, w) in our vertex-to-vertex adjacency (which by now has been updated) and compute m = metric(v, w). We then look up (tv, tw) = (Tv, Tw), package the information as {(v, w), m, (tv, tw)}, and push it into the priority queue.
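Putting the per-vertex timestamps and the lazy queue together (a Python sketch using heapq; class and method names are illustrative):

```python
import heapq
from collections import defaultdict

class CollapseQueue:
    """Priority queue with lazy deletion: each entry records the timestamps
    of its two vertices at push time; stale entries are skipped on pop."""
    def __init__(self):
        self._heap = []
        self.T = defaultdict(int)      # current timestamp of each vertex

    def push(self, metric, u, v):
        heapq.heappush(self._heap, (metric, u, v, self.T[u], self.T[v]))

    def touch(self, v):
        self.T[v] += 1                 # queued pairs involving v are now stale

    def pop_valid(self):
        while self._heap:
            metric, u, v, tu, tv = heapq.heappop(self._heap)
            if tu == self.T[u] and tv == self.T[v]:
                return metric, u, v    # recorded timestamps still current
        return None                    # queue exhausted

q = CollapseQueue()
q.push(2.0, 0, 1)
q.push(1.0, 1, 2)   # will become stale
q.touch(2)          # vertex 2 was updated by a collapse
q.push(3.0, 2, 3)   # fresh entry carrying the new timestamp
```

The stale entry (1.0, 1, 2) sits at the top of the heap but is silently discarded on the first pop, so the best valid pair (2.0, 0, 1) is returned instead.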
The following images show the bunny mesh collapsed to various complexities. The animations show random edge collapse sequences (with midpoint approximation) alongside the deterministic quadric collapse sequence. As can be seen, quadric simplification preserves the geometry much better.
*(figures and animations: the bunny at various complexities; random midpoint collapses vs. quadric simplification)*
To save the sequence of collapses, we output the data to a file. Without going into the nuances of the exact format, the data for each collapse of u into v is as follows:
We also store the current mesh data (in the same format as OFF) at the time the OFFPM file is created. All of this suffices to reconstruct the full mesh by undoing the collapses in the OFFPM file in reverse order. Of course, once we have increased the complexity, we can also go back down by performing the collapses sequentially.
For a smooth visual transition, we allow the complexity to take on floating-point values, and interpolate between the nearest whole-number-complexity meshes. This is the Hoppe geomorph (see the testpatch animations in section 3.4 for slow-paced examples). The vertex positions are linearly interpolated. The surface normals, on the other hand, are a bit more problematic. For one, interpolating normals (even with normalization) doesn't seem very reasonable on a fundamental level. Visually, though, this isn't a big deal; a smooth transition masks the underlying imprecision. A bigger concern is that each collapse affects neighboring vertex normals as well. It isn't difficult to save all these changes, but doing so is exceedingly costly in terms of the size of the representation. With this in mind, we only save the changes to the normals of the collapsed vertices and forego updating the neighbors.

For the majority of the mesh, the appearance is still satisfactory. However, as can be seen in the examples below, many models have unappealing speckles. If one wishes to fix the appearance, a press of r in the GUI will rebuild the adjacency data structure from the index buffer and recompute the normals. This was precisely how the static bunny images in section 3.4 were created: first, an OFFPM file was generated; then, using the resulting mesh history, the bunny was collapsed to various complexities, and at each one the adjacency was rebuilt and the vertex normals recomputed.

The reason we don't update the adjacency at each step is that it is substantially slower to insert and remove elements in the adjacency data structure than it is to update the index buffer. We felt that one of the main advantages of progressive meshes is the speed with which we can increase and decrease the mesh complexity. So to preserve interactive framerates, we only update the adjacency when needed (for instance, when the normals begin to deviate from reality).
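The per-vertex interpolation itself is simple; a sketch (Python, illustrative names, for positions and the crude normal interpolation described above):

```python
import math

def lerp(a, b, t):
    """Componentwise linear interpolation between two 3-vectors."""
    return tuple((1.0 - t) * x + t * y for x, y in zip(a, b))

def geomorph_position(p_coarse, p_fine, frac):
    """Linearly interpolate a vertex position between the two
    whole-number-complexity meshes (frac in [0, 1])."""
    return lerp(p_coarse, p_fine, frac)

def geomorph_normal(n_coarse, n_fine, frac):
    """Interpolate then renormalize: crude on a fundamental level,
    but the smooth transition masks the imprecision."""
    n = lerp(n_coarse, n_fine, frac)
    length = math.sqrt(sum(x * x for x in n)) or 1.0
    return tuple(x / length for x in n)
```

Halfway between two meshes, a vertex sits at the midpoint of its two recorded positions, and its normal is the renormalized average of its two recorded normals.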
*(figures: geomorph examples; note the speckled normals on some models)*