We present a unified deformation model for the markerless capture of multiple
scales of human movement, including facial expressions, body motion, and hand
gestures. An initial model is generated by locally stitching together models of
the individual parts of the human body, which we refer to as the "Frankenstein"
model. This model enables the full expression of part movements, including face
and hands by a single seamless model. Using a large-scale capture of people
wearing everyday clothes, we optimize the Frankenstein model to create "Adam".
Adam is a calibrated model that shares the same skeleton hierarchy as the
initial model but can express hair and clothing geometry, making it directly
usable for fitting people as they normally appear in everyday life. Finally, we
demonstrate the use of these models for total motion tracking, simultaneously
capturing the large-scale body movements and the subtle face and hand motion of
a social group of people.