One such class of systems is those that warp the underlying space, such as Free Form Deformation (FFD) [1]. The classic FFD technique deforms space using a trivariate Bezier patch. Each vertex in the mesh is associated with a point in parameter space. When the control points of the Bezier patch are moved from their original locations, a deformation is induced in the patch. Evaluating the deformed patch at the parameter values associated with the mesh vertices then yields new points in world space. This mapping from parameter space to deformed world space deforms any object placed within the patch, much as if space were a block of transparent jello: as we squish the jello, objects embedded in it are squished as well.
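To make that mapping concrete, here is a minimal sketch of evaluating a trivariate Bezier lattice with Bernstein polynomials. The array layout and the function names (ffd_evaluate, bernstein) are illustrative assumptions, not taken from any particular FFD implementation; the parameter coordinates (s, t, u) for each vertex are assumed to have been computed once from the undeformed lattice.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    # i-th Bernstein basis polynomial of degree n, evaluated at t in [0, 1]
    return comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def ffd_evaluate(control_points, s, t, u):
    # control_points: array of shape (l+1, m+1, n+1, 3) holding the lattice.
    # (s, t, u): the vertex's coordinates in the lattice's parameter space.
    l, m, n = (d - 1 for d in control_points.shape[:3])
    pos = np.zeros(3)
    for i in range(l + 1):
        for j in range(m + 1):
            for k in range(n + 1):
                w = bernstein(l, i, s) * bernstein(m, j, t) * bernstein(n, k, u)
                pos += w * control_points[i, j, k]
    return pos
```

Re-evaluating ffd_evaluate for every vertex after the lattice's control points have been moved produces the deformed mesh.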
Space deformation introduces a layer of indirection between what the user wants (move the mesh into a specific configuration) and what the user directly manipulates (the control points). It might be better to let the user directly manipulate the mesh itself. One approach to deformation via direct manipulation is to use physical simulation to model elasticity and plasticity. The user can then pull at one point on an object, and the object as a whole deforms in a physically plausible way. Accurately simulating physics, however, is overkill for this task: calculating stresses, strains, and so forth is not really necessary when all we desire is a visually pleasing deformation. Other techniques have therefore been created that allow the user to place 'handles' on the mesh and then tug on them [2] [3]. As the handles are tugged, the mesh deforms in such a way that vertices near a handle move a lot, so as to stay near the handle, while vertices far from the handle remain relatively static, achieving local control.
The approaches mentioned above are good, but they have shortcomings. Hyperpatch approaches are not intuitive: what does it mean to warp space using a Bezier hyperpatch, and how do the control points relate to the deformation? Handle-based approaches are better, but what if we want to deform an entire section of a mesh, such as the leg of a horse, rather than just one point, such as the nose? If our goal is specifically to deform cylindrical portions of meshes using an intuitive direct-manipulation interface, we can build a system that is relatively simple to implement while being quite easy to use.
First, the deformation happens in two-dimensional screen space. While the object being deformed is in fact 3D, we see a 2D projection of it on our screen, and we can only directly manipulate 2D coordinates with the mouse. If a genuinely 3D deformation is desired, the user can perform a series of 2D deformations; if each 2D deformation is done from a different 3D point of view, the combined effect is a 3D deformation.
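As a rough illustration of what "deforming in screen space" can mean, the sketch below warps only the view-plane components of a vertex and leaves depth untouched. It assumes a rigid 4x4 view matrix and a placeholder deform_2d callback standing in for whatever 2D warp the curve induces; neither name comes from the system described here.

```python
import numpy as np

def deform_in_screen_plane(vertex_world, view_matrix, deform_2d):
    # Transform into camera space, apply a 2D deformation to the (x, y)
    # components only, keep the depth (z) unchanged, and transform back.
    view_inv = np.linalg.inv(view_matrix)
    v = view_matrix @ np.append(np.asarray(vertex_world, dtype=float), 1.0)
    x, y = deform_2d(v[0], v[1])   # any 2D warp defined in the view plane
    deformed = np.array([x, y, v[2], 1.0])
    return (view_inv @ deformed)[:3]
```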
Second, the deformation is controlled by a simple curve. Curves are easy to draw, easy to manipulate, and naturally conform to the geometry of an important class of objects: those with a roughly cylindrical shape, such as arms, legs, tails, and necks.
Both in my system and in the Sketching Mesh Deformations paper that inspired it [4], interaction proceeds as follows. The user draws a reference curve along the part of the mesh to be deformed. The system then associates nearby mesh vertices with points along that curve. Finally, the user drags the curve's control points, and the mesh deforms to follow the modified curve.
The above steps are then repeated, as desired. Implicit in drawing the reference curve is selection of a region of interest. When a curve is drawn down a leg, the user is indicating to the system that only the leg is to be deformed. The system automatically determines which vertices belong to the given reference curve.
The first interesting algorithmic aspect of this system is region of interest selection.
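One simple way to pick the region of interest, offered here purely as an illustrative sketch rather than the exact criterion used by the system, is to select every vertex that lies within some radius of the drawn reference curve. The function name, the radius parameter, and the dense-sample representation of the curve are all assumptions for the sake of the example.

```python
import numpy as np

def select_region_of_interest(vertices_2d, curve_points, radius):
    # Hypothetical criterion: a vertex belongs to the region of interest
    # if its distance to the reference curve falls under 'radius'.
    # 'curve_points' is a dense sampling of the drawn curve in screen space.
    selected = []
    for idx, v in enumerate(vertices_2d):
        v = np.asarray(v, dtype=float)
        d = min(np.linalg.norm(v - np.asarray(c, dtype=float)) for c in curve_points)
        if d < radius:
            selected.append(idx)
    return selected
```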
The shear, on the other hand, deserves closer examination. When a line segment moves in a way that is neither a pure translation nor a pure change in length, the remaining component can be described either as a shear or as a rotation. For objects more complicated than line segments, a shear is not the same as a rotation. Intuitively, it makes more sense to interpret this kind of deformation as a rotation: it is quite likely that a user would wish to rotate an arm, but fairly unlikely that the user wants to shear one. Therefore I explicitly apply rotations. Each mesh vertex is rotated about its associated curve point by an angle theta, the same theta by which the Frenet frame at that curve point rotates.
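In 2D screen space this rotation reduces to a rotation about a point. The sketch below is a minimal version under that assumption, with the curve's tangent direction standing in for the Frenet frame and theta measured between the original and deformed tangents; the function names are illustrative.

```python
import numpy as np

def frame_rotation_angle(tangent_before, tangent_after):
    # Signed angle between the original and deformed tangent directions.
    a0 = np.arctan2(tangent_before[1], tangent_before[0])
    a1 = np.arctan2(tangent_after[1], tangent_after[0])
    return a1 - a0

def rotate_about_curve_point(vertex_2d, curve_point_2d, theta):
    # Rotate a vertex about its associated curve point by theta, the
    # angle by which the curve's frame rotated at that point.
    v = np.asarray(vertex_2d, dtype=float)
    p = np.asarray(curve_point_2d, dtype=float)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return p + rot @ (v - p)
```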
Line segments are easy to manipulate, but the system, when implemented as described above, leads to severe discontinuities in the deformation where one line segment transitions into another. To smooth out these discontinuities, I use the polyline not as a polyline per se, but rather as the control polyline of a cubic B-spline curve. To ensure that the spline interpolates the start and end of the curve, the start and end points of the polyline are automatically tripled. The explanation of the deformation above still applies, except that instead of line segments we have parametric cubic segments. The smoothness of the spline leads to a smooth deformation. The user still manipulates a polyline; the spline is entirely behind the scenes, showing its presence purely in the smoothness of the deformation. Upon careful examination, the spline shows up in one more way: as the control points are moved, the mesh lags behind a bit. This is a consequence of the approximating, rather than interpolating, nature of the B-spline. Users unfamiliar with approximating splines may need a moment to get used to this, but a bit of interactive exploration quickly teaches that the control points should overshoot the intended deformation.
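For reference, evaluating such a spline with the endpoints tripled might look like the sketch below. It assumes a uniform cubic B-spline over the 2D control polyline ctrl and a global parameter t in [0, 1]; the specific parameterization and sampling details are illustrative rather than the system's exact code.

```python
import numpy as np

def cubic_bspline_point(ctrl, t):
    # Evaluate a uniform cubic B-spline defined by the control polyline
    # 'ctrl' (a list of 2D points) at global parameter t in [0, 1].
    # The first and last points are tripled so the curve starts and ends
    # exactly on the polyline's endpoints.
    pts = [ctrl[0], ctrl[0]] + list(ctrl) + [ctrl[-1], ctrl[-1]]
    pts = np.asarray(pts, dtype=float)
    n_segments = len(pts) - 3
    # Map the global parameter onto a segment index i and local parameter u.
    x = min(max(t, 0.0), 1.0 - 1e-9) * n_segments
    i = int(x)
    u = x - i
    p0, p1, p2, p3 = pts[i], pts[i + 1], pts[i + 2], pts[i + 3]
    b0 = (1 - u) ** 3 / 6.0
    b1 = (3 * u**3 - 6 * u**2 + 4) / 6.0
    b2 = (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6.0
    b3 = u**3 / 6.0
    return b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3
```

At t = 0 the tripled first point makes the basis weights sum entirely onto that point, which is exactly why the curve interpolates the polyline's endpoints while merely approximating the interior control points.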
Blending is needed not only between line segments of the polyline, but also between the deforming part of the mesh and the undeformed portion. I found it best to rely on the B-spline for that purpose. To ensure a smooth blend, the user should leave the first and last control points static; when the intermediate control points are moved, the smooth blend inherent to the spline ensures that the deformation fades out smoothly into nothing.
Before the deformation can be applied, each vertex of the mesh must be associated with an appropriate point along the curve, namely the point on the curve closest to that vertex. To find the closest point, I use the simplest, most naive algorithm I could think of: I exhaustively consider a large number of points along the curve, regularly sampling parameter space, and for each of these points I compute the distance to the vertex in question. As simple as it is, this algorithm produces reasonably good results with a reasonably fast running time.
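A sketch of that exhaustive search is below. The function name, the callback curve_point_at(t), and the default sample count are assumptions made for the example; any curve evaluator, such as the cubic_bspline_point sketch above wrapped in a lambda, would do.

```python
import numpy as np

def closest_curve_parameter(vertex, curve_point_at, n_samples=1000):
    # Regularly sample the curve's parameter space and return the
    # parameter whose curve point lies closest to 'vertex'.
    v = np.asarray(vertex, dtype=float)
    best_t, best_d = 0.0, float('inf')
    for i in range(n_samples + 1):
        t = i / n_samples
        d = np.linalg.norm(np.asarray(curve_point_at(t), dtype=float) - v)
        if d < best_d:
            best_t, best_d = t, d
    return best_t
```

For example, t = closest_curve_parameter(v, lambda t: cubic_bspline_point(ctrl, t)) binds vertex v to its curve point once, before any dragging begins.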
[2] "Laplacian Surface Editing", O. Sorkine and Y. Lipman and D. Cohen-Or and M. Alexa and C. Rossl and H.-P. Seidel, SIGGRAPH symposium on Geometry processing 2004
[3] "Poisson Mesh Editing", Y. Yu and K. Zhou and D. Xu and X. Shi and H. Bao and B. Guo and H. Shum, SIGGRAPH 2004
[4] "Sketching Mesh Deformations", Y. Kho and M. Garland, I3D 2005