Abstract — We present a real-time framework for Action Unit (AU) and expression recognition based on facial feature tracking and AdaBoost. Accurate feature tracking faces several challenges due to changes in illumination, subjects' skin color, large head rotations, partial occlusions, and fast head movements. We use models based on Active Shape Models to localize facial features in a generic pose. The shapes of facial features undergo non-linear transformations as the head rotates from frontal view to profile view. We learn this non-linear shape manifold as multiple overlapping subspaces, with different subspaces representing different head poses. We then use the tracked features to accurately extract bounded faces from a video sequence and use them for recognizing facial expressions. Our approach is based on coded dynamical features: in order to capture the dynamical characteristics of facial events, we design dynamical Haar-like features.