Todd Kosloff's Final Project

Interactive Mesh Deformation


Problem Statement

The Need For Deformation

3D meshes are of central importance in geometric modeling. Whether we are making models for video games or computer-generated movies, or designing parts that will be manufactured, we are interested in creating shapes. Creating a good model is a lot of work. Countless hours might have been spent with CAD tools to create the shape. Perhaps the model was generated procedurally, in which case code had to be written and debugged. Alternatively, the model may have come from a 3D range scanner, giving us a computer model of a real-world object we happened to have on hand. Sometimes we will have a beautiful model created using one of these techniques, or others, but something isn't quite right. Perhaps we have a horse, but the horse is looking to the left, whereas we want it looking straight ahead. Or perhaps we have a bunny rabbit with drooping ears, and we would like a rabbit with ears that point upward. It would be painful to recreate these models from scratch just to make a slight change such as this.

Previous Approaches

The solution is mesh deformation. We want a program that takes a mesh as input and, through user interaction, changes the mesh to meet the new requirements. Systems such as these have existed for a long time.

One such class of systems comprises those that warp the underlying space, such as Free-Form Deformation (FFD) [1]. The classic FFD technique deforms space using a trivariate Bezier patch. Each vertex in the mesh is associated with a point in parameter space. When the control points of the Bezier patch are moved from their original locations, a deformation is induced in the patch. When we evaluate the parameter values associated with mesh vertices, we get different points in world space. This mapping from parameter space to deformed world space deforms any object placed within the patch, much as if space were a block of transparent jello that we squish, with objects embedded in the jello being squished as well.
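
To make the hyperpatch idea concrete, here is a minimal sketch of the classic FFD evaluation in plain C++, assuming a 4x4x4 lattice of cubic Bezier control points; the types and names are illustrative, not drawn from any particular implementation:

    #include <array>
    #include <cmath>

    struct Vec3 { double x, y, z; };

    // Cubic Bernstein polynomial B_i^3(t).
    static double bernstein3(int i, double t) {
        static const double binom[4] = {1, 3, 3, 1};
        return binom[i] * std::pow(t, i) * std::pow(1.0 - t, 3 - i);
    }

    // Classic FFD: a 4x4x4 lattice of control points defines a trivariate
    // cubic Bezier volume. A vertex with parameter coordinates (s, t, u)
    // in [0,1]^3 maps to a Bernstein-weighted sum of the (possibly
    // displaced) control points; moving control points deforms the space.
    Vec3 ffdEvaluate(const std::array<Vec3, 64>& control,
                     double s, double t, double u) {
        Vec3 p{0, 0, 0};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k) {
                    double w = bernstein3(i, s) * bernstein3(j, t) * bernstein3(k, u);
                    const Vec3& c = control[(i * 4 + j) * 4 + k];
                    p.x += w * c.x; p.y += w * c.y; p.z += w * c.z;
                }
        return p;
    }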

Space deformation introduces a layer of indirection between what the user wants ('move the mesh into a specific configuration') and the direct manipulations the user makes ('move these control points'). It might be better to allow the user to directly manipulate the mesh itself. One approach to deformation via direct manipulation is to use physical simulation to model elasticity and plasticity. The user can then pull at one point on an object, and the object as a whole will deform in a physically plausible way. Accurately simulating physics, however, is overkill for this task. Calculating stresses, strains, and so forth is not really necessary when all we desire is a visually pleasing deformation. Therefore other techniques have been created that allow the user to place 'handles' on the mesh and then tug on them [2] [3]. As the handles are tugged, the mesh deforms in such a way that vertices near the handles move a lot, so as to stay near the handle, while vertices far from the handle remain relatively static, achieving local control.

The approaches mentioned above are good, but have shortcomings. Hyperpatch approaches are not intuitive. What does it mean to warp space using a Bezier hyperpatch? How do the control points relate to the deformation? Handle-based approaches are better, but what if we want to deform an entire section of a mesh, such as the leg of a horse, rather than just one point, such as the nose? If our goal is specifically to deform cylindrical portions of meshes using an intuitive direct-manipulation interface, we can come up with a system that is relatively simple to implement while being quite easy to use.

What I Have Accomplished

The Sketching Mesh Deformations paper [4] presents some good ideas, which I used as the basis for my project.

First, the deformation happens in two-dimensional screen space. While the object being deformed is in fact 3D, we see a 2D projection on our screen, and can only directly manipulate 2D coordinates with the mouse. If a genuinely 3D deformation is desired, the user can perform a series of 2D deformations; if each 2D deformation is done from a different 3D point of view, the net effect is a 3D deformation.

Second, the deformation is controlled by a simple curve. Curves are easy to draw, easy to manipulate, and naturally conform to the geometry of an important class of objects: those with a roughly cylindrical shape. Examples include arms, legs, tails, and necks.

Both in my system and in the Sketching Mesh Deformations paper that inspired it, interaction proceeds as follows.

  • Draw a reference curve. The user draws a curve down the portion of the mesh that is to be deformed. For example, a curve is drawn down the leg of a horse mesh. For simplicity, my curves are piecewise linear polylines. The polyline is 'drawn' by clicking at several points. The points are connected by line segments.
  • Deform the reference curve. The user moves the vertices of the polyline to create a new curve. This new curve encodes the deformation. For example, if the reference curve was a straight line going down the leg of a horse, the user might bend the curve to indicate that the deformation should result in a horse with a bent leg.
  • The above steps are then repeated, as desired. Implicit in drawing the reference curve is the selection of a region of interest. When a curve is drawn down a leg, the user is indicating to the system that only the leg is to be deformed. The system automatically determines which vertices belong to the given reference curve. (The sketch after this list shows the small amount of data this interaction produces.)
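
The data this interaction produces is small: two polylines with matching vertex counts, one drawn over the mesh and one dragged into the desired shape. A minimal sketch, with hypothetical names:

    #include <vector>

    struct Vec2 { double x, y; };

    // The interaction yields two polylines with the same number of vertices:
    // the reference curve drawn over the mesh, and the deformed curve obtained
    // by dragging its vertices. Together they encode the deformation.
    struct DeformationStroke {
        std::vector<Vec2> reference; // grown by one vertex per mouse click
        std::vector<Vec2> deformed;  // starts as a copy; vertices are then dragged
    };

    void addClick(DeformationStroke& s, Vec2 p) {
        s.reference.push_back(p);
        s.deformed.push_back(p); // the deformed curve starts identical to the reference
    }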

The Inner Workings

To implement this system, I used the Coin3D implementation of the Inventor API. Inventor provides a convenient high-level interface for specifying and rendering geometry, shielding me from some of the boilerplate details that a direct OpenGL implementation would have required.

The first interesting algorithmic aspect of this system is region-of-interest selection.

For this step I closely follow the Sketching Mesh Deformations paper. We can assume with reasonable confidence that if the curve obscures a visible vertex of the mesh, then that vertex ought to be included in the region of interest. However, the user most likely does not intend to move only the vertices that lie immediately under an infinitely thin curve. Rather, the user wishes to move vertices that are near the curve, in some sense. I define 'near' in this case in only a lengthwise sense. That is, a vertex lying 'next to' the curve is included in the region of interest, but a vertex lying beyond the end or 'before' the beginning of the curve will not be selected. To implement this, I define a cutting plane at the start and another cutting plane at the end of the reference curve. These cutting planes pass through their respective curve vertices and are normal to the first or last segment of the reference curve. Next, a vertex is selected that represents the region. This vertex is chosen to be the vertex closest to a point midway between the first and second points on the polyline. Next, a vertex floodfill is performed, seeded by the selected vertex and bounded by the cutting planes. All vertices reached by the floodfill are considered part of the region of interest. This method works well for selecting things like arms, legs, and necks, as these objects can naturally be bounded by two planes.
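
A sketch of this selection step, operating on the projected 2D vertex positions (in screen space the cutting planes reduce to lines) and on a precomputed vertex adjacency list; occlusion testing is omitted, and all names are illustrative:

    #include <queue>
    #include <vector>

    struct Vec2 { double x, y; };

    static Vec2 sub(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
    static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

    // In screen space the two "cutting planes" reduce to lines through the
    // first and last polyline vertices, normal to the first and last segments.
    // A vertex lies inside the slab when it is on the inward side of both.
    static bool insideSlab(Vec2 v, const std::vector<Vec2>& curve) {
        Vec2 startNormal = sub(curve[1], curve[0]);                   // points inward
        Vec2 endNormal = sub(curve[curve.size() - 2], curve.back());  // points inward
        return dot(sub(v, curve[0]), startNormal) >= 0.0 &&
               dot(sub(v, curve.back()), endNormal) >= 0.0;
    }

    // Flood fill over the mesh adjacency graph, seeded at the vertex nearest
    // the midpoint of the curve's first segment, bounded by the cutting planes.
    std::vector<bool> selectRegion(const std::vector<Vec2>& projected,            // projected mesh vertices
                                   const std::vector<std::vector<int>>& neighbors, // vertex adjacency
                                   const std::vector<Vec2>& curve) {
        Vec2 mid{(curve[0].x + curve[1].x) / 2, (curve[0].y + curve[1].y) / 2};
        int seed = 0;
        double best = 1e300;
        for (int i = 0; i < (int)projected.size(); ++i) {
            Vec2 d = sub(projected[i], mid);
            double dist2 = dot(d, d);
            if (dist2 < best) { best = dist2; seed = i; }
        }
        std::vector<bool> selected(projected.size(), false);
        std::queue<int> frontier;
        if (insideSlab(projected[seed], curve)) { selected[seed] = true; frontier.push(seed); }
        while (!frontier.empty()) {
            int v = frontier.front(); frontier.pop();
            for (int n : neighbors[v])
                if (!selected[n] && insideSlab(projected[n], curve)) {
                    selected[n] = true;
                    frontier.push(n);
                }
        }
        return selected;
    }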

Performing the deformation means moving mesh vertices. A mesh vertex moves in analogy to the way an associated point along the reference curve moves. Two components of this motion are considered: translation and rotation. Consider a curve consisting of just one line segment. The following reasoning can be applied to the curve as a whole by considering it on a segment-by-segment basis. If a point along the curve is translated up by a distance dy and right by a distance dx, then associated mesh vertices are also moved up by dy and right by dx. This results in deformations consisting of rigid translations, nonuniform scales, and shears. The rigid translation should obviously happen: if the entire curve is translated, the corresponding region of the mesh should clearly translate in the same way. The nonuniform scale also makes sense: if the line segment changes length, the mesh should change length accordingly.

The shear, on the other hand, deserves closer examination. When a line segment changes in a way that is neither a pure translation nor a pure change of length, the remaining component can be described either as a shear or as a rotation. For objects more complicated than line segments, a shear is not the same as a rotation. Intuitively, it makes more sense to interpret this kind of deformation as a rotation. After all, it is quite likely that a user would wish to rotate an arm, but fairly unlikely that the user wants to shear an arm. Therefore I explicitly apply rotations: mesh vertices are rotated about the associated curve point by the same angle theta through which the Frenet frame at that curve point rotates.
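
A sketch of the per-vertex motion, keeping just the translation and rotation components described above; in 2D the Frenet frame rotation is simply the change in tangent angle, and all names are illustrative:

    #include <cmath>

    struct Vec2 { double x, y; };

    // Move a mesh vertex the way its associated curve point moved: carry it
    // along with the curve point, then rotate it about the new curve point by
    // the angle through which the local tangent turned. 'refPoint'/'refTangent'
    // come from the reference curve, 'defPoint'/'defTangent' from the deformed one.
    Vec2 deformVertex(Vec2 v, Vec2 refPoint, Vec2 refTangent,
                      Vec2 defPoint, Vec2 defTangent) {
        // Offset of the vertex from its associated reference curve point.
        Vec2 offset{v.x - refPoint.x, v.y - refPoint.y};

        // Angle between the reference tangent and the deformed tangent.
        double theta = std::atan2(defTangent.y, defTangent.x) -
                       std::atan2(refTangent.y, refTangent.x);
        double c = std::cos(theta), s = std::sin(theta);

        // Rotate the offset by theta, then re-attach it at the deformed curve point.
        Vec2 rotated{c * offset.x - s * offset.y, s * offset.x + c * offset.y};
        return {defPoint.x + rotated.x, defPoint.y + rotated.y};
    }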

Line segments are easy to manipulate, but the system, when implemented as described above, suffers severe discontinuities in the deformation where one line segment transitions into another. To smooth out these discontinuities, I use the polyline not as a polyline per se, but rather as the control polygon of a cubic B-spline curve. To ensure that the spline interpolates the start and end of the curve, the start and end points of the polyline are automatically tripled. The explanation above of how the deformation works still applies, except that instead of line segments we have parametric cubic segments. The smoothness of the spline leads to a smooth deformation. The user still manipulates a polyline; the spline is entirely behind the scenes, showing its presence purely in the smoothness of the deformation. Upon careful examination, the spline shows up in one more way: as the control points are moved, the mesh lags behind a bit. This is a consequence of the approximating, rather than interpolating, nature of the B-spline. While users unfamiliar with approximating splines may need a moment to get used to this, a bit of interactive exploration quickly teaches the user that the control points should overshoot the intended deformation.
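
A sketch of the spline evaluation under these assumptions (uniform knots, with the endpoints tripled so the curve interpolates them); names are illustrative:

    #include <vector>

    struct Vec2 { double x, y; };

    // Evaluate a uniform cubic B-spline defined by the user's polyline. The
    // first and last control points are tripled so the spline interpolates the
    // curve's endpoints; interior control points are only approximated, which
    // is the "lag" the user sees while dragging.
    Vec2 evalSpline(const std::vector<Vec2>& polyline, double t /* in [0,1] */) {
        std::vector<Vec2> cp;
        cp.push_back(polyline.front());
        cp.push_back(polyline.front());
        cp.insert(cp.end(), polyline.begin(), polyline.end());
        cp.push_back(polyline.back());
        cp.push_back(polyline.back());

        int spans = (int)cp.size() - 3;       // number of cubic segments
        double x = t * spans;
        int seg = (int)x;
        if (seg > spans - 1) seg = spans - 1; // clamp t == 1 into the last span
        double u = x - seg;

        // Uniform cubic B-spline basis functions (already divided by 6).
        double b0 = (1 - u) * (1 - u) * (1 - u) / 6.0;
        double b1 = (3 * u * u * u - 6 * u * u + 4) / 6.0;
        double b2 = (-3 * u * u * u + 3 * u * u + 3 * u + 1) / 6.0;
        double b3 = u * u * u / 6.0;

        const Vec2 &p0 = cp[seg], &p1 = cp[seg + 1], &p2 = cp[seg + 2], &p3 = cp[seg + 3];
        return {b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
                b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y};
    }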

Blending is needed not only between line segments of the polyline, but also between the deforming part of the mesh and the undeformed portion. I found it best to rely on the B-spline for this purpose as well. To ensure a smooth blend, the user should leave the first and last control points static. When the intermediate control points are moved, the smooth blend inherent to the spline ensures that the deformation fades smoothly into the undeformed mesh.

Before the deformation can be applied, each vertex of the mesh must be associated with an appropriate point along the curve. The appropriate point is clearly the point on the curve closest to the mesh vertex in question. To find the closest point, I use the simplest, most naive algorithm I could think of: I exhaustively consider a large number of points along the curve, regularly sampling parameter space, compute each point's distance to the vertex in question, and keep the nearest. This algorithm, simple as it is, produces reasonably good results with a reasonably fast running time.
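
A sketch of this search, reusing the Vec2 and evalSpline sketches above; the sample count is the granularity tuned visually in the next section:

    // Exhaustive closest-point search: regularly sample the spline's parameter
    // space and keep the sample nearest the vertex. 'evalSpline' is the
    // B-spline evaluator sketched above.
    double closestParameter(Vec2 vertex, const std::vector<Vec2>& polyline, int samples) {
        double bestT = 0.0, bestDist2 = 1e300;
        for (int i = 0; i <= samples; ++i) {
            double t = (double)i / samples;
            Vec2 p = evalSpline(polyline, t);
            double dx = p.x - vertex.x, dy = p.y - vertex.y;
            double d2 = dx * dx + dy * dy;
            if (d2 < bestDist2) { bestDist2 = d2; bestT = t; }
        }
        return bestT;
    }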

The Value of Visual Debugging

Printing out a series of numbers, or setting breakpoints and inspecting variables, are techniques of limited utility for debugging geometric algorithms. I found color coding to be an extremely useful debugging tool. By coloring vertices to indicate what was selected and what was not, I can immediately see what my flood fill and cutting planes are doing. Furthermore, I had to decide how many points along the curve to sample when searching for the closest point. To aid in this task, I color code vertices to indicate which point along the curve each vertex was associated with, then increase the number of samples until the vertex coloration appears continuous. Before I could tune the granularity of the search, I had to verify that the search was working at all. To do this, I implemented an interactive debugging aid: I draw a circle on the screen at the point of the curve closest to the mouse pointer, so I can see how this closest point changes as the mouse moves.
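
As an illustration, the parameter-to-color mapping can be as simple as a linear ramp; visible color bands then indicate that the closest-point search is sampling too coarsely:

    struct Color { float r, g, b; };

    // Map a vertex's associated curve parameter t in [0,1] to a blue-to-red
    // ramp. If the closest-point search is too coarse, neighboring vertices
    // snap to the same curve sample and the ramp shows visible bands; the
    // sample count is increased until the coloration looks continuous.
    Color debugColor(double t) {
        return {(float)t, 0.0f, (float)(1.0 - t)};
    }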

Results

References

    [1] "Free-form deformation of solid geometric models", T. Sederberg and S. Parry, SIGGRAPH 1986

    [2] "Laplacian Surface Editing", O. Sorkine and Y. Lipman and D. Cohen-Or and M. Alexa and C. Rossl and H.-P. Seidel, SIGGRAPH symposium on Geometry processing 2004

    [3] "Poisson Mesh Editing", Y. Yu and K. Zhou and D. Xu and X. Shi and H. Bao and B. Guo and H. Shum, SIGGRAPH 2004

    [4] "Sketching Mesh Deformations", Y. Kho and M. Garland, I3D 2005

User's Manual