Pose and illumination variations remain a persistent problem for face recognition algorithms. In this paper we present a method for accurately estimating pose and illumination conditions, and we use it for registration and tracking in video-based face recognition. This is achieved with a joint motion, illumination and shape model that is bilinear in the motion and illumination variables. Motion is represented as translation and rotation of the object centroid, and illumination is represented using a spherical harmonics linear basis. We start by estimating a rough pose by projecting the image onto the spherical harmonics basis functions. This pose estimate initializes the registration algorithm, which minimizes a squared-error criterion in the bilinear model. Thereafter, 3D tracking proceeds by alternately estimating the motion and illumination parameters. The method does not assume any model for the variation of the illumination conditions: lighting can change slowly or drastically and can originate from a combination of point and extended sources. We demonstrate the effectiveness of our methods on several real-world video sequences under severe changes of lighting conditions.
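The alternating estimation of motion and illumination parameters exploits the bilinearity of the model: with either set of variables held fixed, the other enters linearly, so each subproblem is an ordinary least-squares solve. A minimal sketch of this alternating scheme on synthetic data follows; the dimensions, the coupling tensor `C`, and all variable names are illustrative assumptions, not the paper's actual imaging model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bilinear model: each observation y_i = m^T C_i l, where m holds
# motion parameters (illustratively 6: translation + rotation) and l the
# 9 spherical-harmonic illumination coefficients. C is a hypothetical
# coupling tensor standing in for the model's image-formation terms.
n_obs, dm, dl = 200, 6, 9
C = rng.normal(size=(n_obs, dm, dl))
m_true = rng.normal(size=dm)
l_true = rng.normal(size=dl)
y = np.einsum('ijk,j,k->i', C, m_true, l_true)

def residual(m, l):
    return np.linalg.norm(np.einsum('ijk,j,k->i', C, m, l) - y)

# Random initialization, then alternating least squares: because the
# model is bilinear, fixing one factor makes the other linear in y.
m = rng.normal(size=dm)
l = rng.normal(size=dl)
res_init = residual(m, l)
for _ in range(50):
    # Fix m, solve for l: y ~ (m^T C) l is a linear least-squares problem.
    A_l = np.einsum('ijk,j->ik', C, m)
    l, *_ = np.linalg.lstsq(A_l, y, rcond=None)
    # Fix l, solve for m: y ~ (C l) m.
    A_m = np.einsum('ijk,k->ij', C, l)
    m, *_ = np.linalg.lstsq(A_m, y, rcond=None)
res_final = residual(m, l)
```

Each alternation solves its subproblem exactly, so the squared error is non-increasing across iterations; this monotonicity is what makes the alternating scheme stable even though the joint problem is non-convex.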