Computers should be able to detect and track the articulated 3-D pose of a human being moving through a video sequence. Current tracking methods are often slow and unreliable, and many must be initialized by a human operator before they can track a sequence. This paper introduces a simple yet effective algorithm for tracking articulated pose, based on looking up observed silhouettes in a collection of known poses. The new algorithm runs quickly, can initialize itself without human intervention, and can automatically recover from critical tracking errors made in previous frames of a video sequence.
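As a rough illustration of the silhouette-lookup idea, a nearest-neighbour search over flattened binary silhouette masks might look like the sketch below. This is a hypothetical minimal version, not the paper's actual implementation: the descriptor (a raw flattened mask) and the distance measure (sum of squared differences) are assumptions for illustration only.

```python
import numpy as np

def silhouette_descriptor(mask):
    """Flatten a binary silhouette mask into a float vector descriptor.
    (Hypothetical choice; the paper's actual descriptor may differ.)"""
    return np.asarray(mask, dtype=np.float32).ravel()

def lookup_pose(observed_mask, library_masks, library_poses):
    """Return the library pose whose silhouette best matches the observation.

    library_masks: list of binary masks rendered from known poses
    library_poses: corresponding pose labels (same length/order)
    Matching uses simple sum-of-squared-differences as a stand-in distance.
    """
    obs = silhouette_descriptor(observed_mask)
    dists = [float(np.sum((silhouette_descriptor(m) - obs) ** 2))
             for m in library_masks]
    best = int(np.argmin(dists))  # index of the closest stored silhouette
    return library_poses[best], dists[best]
```

Because each frame is matched against the library independently, such a lookup can start tracking without manual initialization and is not locked into an erroneous estimate from a previous frame, which is consistent with the self-initialization and error-recovery properties described above.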