3D tracking of non-rigid articulated objects

Abstract

Articulated structures are found in many living beings, and tracking them is essential for interpreting their behavior. This thesis describes a framework for learning the relationship between the state of an articulated object and its appearance, and shows how to use this representation to track the object's state.

The learning phase produces populations of models, each describing the appearance of a small region of the object over a small region of the state space. To make off-line training efficient, the appearance of the object is modeled as a function of its state. The local models apply Principal Component Analysis (PCA) to windowed regions of the projected object; manifolds in the PCA subspace represent the appearance of these small local regions as they undergo deformation.

The tracking algorithm recursively matches the link appearances while searching the state space of the articulated object. To match the observed appearance to the model, a coarse search first identifies the active models; the error of the projected object image at the new, unknown state is then minimized in model subspace by fine-tuning the state.

Algorithm performance is evaluated on real and synthetic data of a 4-d.o.f. finger following arbitrary 3-D paths. The results show that the local PCA models capture the deformations successfully even after some of the bases are discarded; these deformations account for key features essential to the matching process. Moreover, the way the appearance data is partitioned permits a fast and efficient caching strategy, allowing the algorithm to meet real-time constraints. Finally, the merging of predictions and observations makes the algorithm very robust to outliers.
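To make the representation concrete, the following is a minimal sketch, not the thesis's implementation, of a local PCA appearance model in Python/NumPy: it fits PCA bases to a stack of windowed image patches rendered for nearby states (one local region of state space), and scores a candidate state by the reconstruction error of the observed patch in that subspace. The function names (fit_local_pca, reconstruction_error), patch size, and the synthetic training data are assumptions for illustration only.

    import numpy as np

    def fit_local_pca(patches, n_components):
        """Fit a local PCA appearance model to flattened image patches.

        patches: (n_samples, patch_dim) array of windowed regions of the
                 projected object, sampled over a small region of state space.
        Returns the patch mean and the leading principal bases.
        """
        mean = patches.mean(axis=0)
        centered = patches - mean
        # SVD of the centered data; rows of vt are principal directions.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        bases = vt[:n_components]   # keep only the leading bases,
        return mean, bases          # discarding the rest

    def reconstruction_error(patch, mean, bases):
        """Distance between a patch and its projection onto the subspace.

        A small error means the local model explains the observed
        appearance, i.e. the candidate state is consistent with the image.
        """
        centered = patch - mean
        coeffs = bases @ centered   # project into the PCA subspace
        recon = bases.T @ coeffs    # reconstruct from the kept bases
        return np.linalg.norm(centered - recon)

    # Toy usage: train on synthetic patches, score a noisy observation.
    rng = np.random.default_rng(0)
    train = rng.normal(size=(200, 16 * 16))   # 200 flattened 16x16 patches
    mean, bases = fit_local_pca(train, n_components=8)
    observed = train[0] + 0.01 * rng.normal(size=train.shape[1])
    print(reconstruction_error(observed, mean, bases))

In a coarse-to-fine tracker of the kind the abstract describes, a score like this would be evaluated for the models selected by the coarse search, and the state then fine-tuned to minimize it.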
