Next: Experiments Up: Hybrid rigid and non-rigid Previous: Training

Synthesis

Synthesis is conceptually specified by new fist/elbow/shoulder positions. Our current method for computing these new positions from user input is described in Appendix A.

Given the new fist/elbow/shoulder positions and their joint angles $\Theta$, it is straightforward to synthesize new images. The shape and texture of the new image are interpolated with:

\begin{displaymath}
\vec{N} = C \vec{r}\end{displaymath}

where $\vec{r}_j = \phi(\vert\vert\Delta(\Theta - \Theta_j)\vert\vert)$. We then reshape $\vec{N}$ to N, using the same dimensions as P and Q. N represents the new shape and texture of the limb.
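The interpolation step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the basis function $\phi$ is assumed to be a Gaussian, and the $\Delta$ weighting is taken as the identity; the names `C`, `thetas`, and `theta_new` are illustrative.

```python
import numpy as np

def synthesize_limb(C, thetas, theta_new, out_shape, phi=None):
    """Interpolate new shape/texture via N = C r, where
    r_j = phi(||Theta - Theta_j||).

    C         : (d, m) coefficient matrix learned during training
    thetas    : (m, k) training joint angles Theta_j
    theta_new : (k,) new joint angles Theta
    out_shape : 2-D shape of N (same dimensions as P and Q)
    phi       : radial basis function (a Gaussian is assumed here)
    """
    if phi is None:
        phi = lambda r: np.exp(-r ** 2)  # assumed basis; the paper's phi may differ
    # r_j measures the distance from the new pose to each training pose
    r = phi(np.linalg.norm(theta_new - thetas, axis=1))
    N_vec = C @ r
    # reshape the vector N back into an image-sized array
    return N_vec.reshape(out_shape)
```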

Finally, we transform N to the new elbow/fist or fist/shoulder position as before. The transformation consists of a rotation to align the rectified limb direction with the new direction, and a translation to align the rectified pivot point to the new pivot point. The same synthesis process is applied to any other segments, and all of the segments are composited in an image.
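A sketch of the rotate-then-translate step for one segment, assuming 2D points; the function name and argument layout are illustrative, not from the paper:

```python
import numpy as np

def transform_segment(points, rect_dir, new_dir, rect_pivot, new_pivot):
    """Rotate rectified points so rect_dir aligns with new_dir, then
    translate so rect_pivot lands on new_pivot.

    points : (n, 2) array of (x, y) coordinates in the rectified frame
    """
    # rotation angle between the rectified and new limb directions
    a = np.arctan2(new_dir[1], new_dir[0]) - np.arctan2(rect_dir[1], rect_dir[0])
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    # rotate about the rectified pivot, then move the pivot to its new position
    return (points - np.asarray(rect_pivot)) @ R.T + np.asarray(new_pivot)
```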

The final transformation could also include a global camera transformation. For 3D images, this would allow viewing the new figure from any angle.

In our current system, many holes appear in these images due to sampling problems -- the same holes that appear in any simple forward-mapping implementation. Additionally, a small fissure sometimes appears at the elbow joint. A hole-filling algorithm is used to eliminate these holes. Our current hack just creates a blurred version of the source image, and then fills all 0's in the source with the values from the blurred image. This method has the side-effect of creating some unnecessary halo artifacts around the segments.
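The hole-filling hack can be sketched as below. The blur used in the paper is unspecified; a simple box blur is assumed here, and the halo artifact arises because blurred foreground values bleed into nearby background pixels before the fill.

```python
import numpy as np

def fill_holes(image, ksize=5):
    """Fill zero-valued holes with values from a blurred copy of the image.

    image : 2-D float array where 0 marks a hole
    ksize : side length of the (assumed) box-blur window
    """
    filled = image.astype(float).copy()
    k = ksize // 2
    pad = np.pad(filled, k, mode='edge')
    # box blur via local window averaging
    h, w = filled.shape
    blurred = np.zeros_like(filled)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = pad[i:i + ksize, j:j + ksize].mean()
    # replace only the hole pixels; everything else keeps its source value
    holes = filled == 0
    filled[holes] = blurred[holes]
    return filled
```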


Trevor Darrell
10/29/1998