
    Hierarchical eyelid and face tracking

    Most Human-Computer Interaction (HCI) applications need to extract the movements of the user's face while avoiding high memory and time costs. Moreover, HCI systems usually rely on low-cost cameras, whereas current face tracking techniques depend strongly on image resolution. In this paper, we tackle eyelid tracking with Appearance-Based Models, achieving accurate estimates of eyelid movements without cues that require high-resolution faces, such as edge detectors or colour information. Consequently, we can track the fast and spontaneous movements of the eyelids, a very hard task given the small resolution of the eye regions. We then combine the eyelid tracking results with estimates of other facial features, such as the eyebrows and the lips. The result is a hierarchical tracking framework: we demonstrate that combining two appearance-based trackers yields accurate estimates of the eyelids, eyebrows, lips and 3D head pose in real time with low-cost video cameras. Our approach is therefore suitable for further facial-expression analysis.
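    A minimal sketch of the appearance-based idea described above: fit the eyelid parameters by minimizing the difference between a warped eye patch and a slowly updated appearance template. The parameterization, function names, and patch size are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import least_squares

PATCH_SHAPE = (24, 32)  # assumed template size (rows, cols)

def warp_eye_patch(frame, params):
    """Crop/warp an eye patch; params = (row, col, opening).
    A vertical scale stands in for the degree of eyelid opening
    (illustrative, not the paper's deformable eyelid model)."""
    row, col, opening = params
    matrix = np.array([[1.0 / max(opening, 1e-3), 0.0],
                       [0.0, 1.0]])
    return affine_transform(frame, matrix, offset=(row, col),
                            output_shape=PATCH_SHAPE, order=1)

def track_eyelid(frame, template, init_params):
    """One tracking step: minimize the patch-vs-template residual."""
    def residual(p):
        return (warp_eye_patch(frame, p) - template).ravel()
    return least_squares(residual, init_params, method="lm").x

def update_template(template, patch, alpha=0.05):
    """Online appearance update: a slow exponential blend keeps the
    template adaptive to lighting and appearance changes."""
    return (1.0 - alpha) * template + alpha * patch
```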

    FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality

    We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. In addition to these face reconstruction components, FaceVR incorporates photo-realistic re-rendering in real time, allowing artificial modifications of face and eye appearance. For instance, we can alter facial expressions, change gaze directions, or remove the VR goggles in realistic re-renderings. We apply these newly introduced algorithmic components in a live setup with a source and a target actor. The source actor wears a VR device, and we capture their facial expressions and eye movements in real time. For the target video, we mimic a similar tracking process; however, we use the source input to drive the animations of the target video, thus enabling gaze-aware facial reenactment. To render the modified target video on a stereo display, we augment our capture and reconstruction process with stereo data. In the end, FaceVR produces compelling results for a variety of applications, such as gaze-aware facial reenactment, reenactment in virtual reality, removal of VR goggles, and re-targeting of somebody's gaze direction in a video conferencing call.
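    The core reenactment step, keeping the target's identity while driving the expression with parameters tracked from the source, can be sketched with a linear blendshape face model. The formulation and names below are illustrative assumptions; FaceVR's actual parametric model and renderer are more elaborate.

```python
import numpy as np

def blendshape_face(mean, id_basis, expr_basis, id_coeffs, expr_coeffs):
    """Linear parametric face: mean shape plus identity and expression
    offsets. A standard blendshape formulation, assumed for illustration."""
    return mean + id_basis @ id_coeffs + expr_basis @ expr_coeffs

def reenact_frame(model, target_id_coeffs, source_expr_coeffs):
    """Reenactment: the target's identity coefficients stay fixed, while
    the per-frame expression coefficients come from the source actor."""
    mean, id_basis, expr_basis = model
    return blendshape_face(mean, id_basis, expr_basis,
                           target_id_coeffs, source_expr_coeffs)
```

    In a live setup, the source expression coefficients would be re-estimated every frame from the HMD-mounted capture and the returned geometry handed to the photo-realistic re-renderer.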

    Hierarchical online appearance-based tracking for 3D head pose, eyebrows, lips, eyelids, and irises

    In this paper, we propose an On-line Appearance-Based Tracker (OABT) for simultaneous tracking of the 3D head pose, lips, eyebrows, eyelids and irises in monocular video sequences. In contrast to previous tracking approaches, which deal with face and gaze tracking separately, our OABT handles eyelid and iris tracking as well as tracking of the 3D head pose and the facial actions of the lips and eyebrows. Furthermore, our approach learns changes in the appearance of the tracked target on-line, so the prior training of appearance models, which usually requires a large amount of labeled facial images, is avoided. The proposed method is built upon a hierarchical combination of three OABTs, optimized using a Levenberg–Marquardt Algorithm (LMA) enhanced with line-search procedures. This, in turn, makes the method robust to changes in lighting conditions, occlusions and translucent textures, as evidenced by our experiments. Finally, the proposed method tracks the head and facial actions in real time.
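    The optimizer named above, a Levenberg-Marquardt Algorithm enhanced with a line search, can be sketched generically as follows. The residual function stands in for the appearance error of one of the trackers; all names and constants are illustrative, not the authors' implementation.

```python
import numpy as np

def lm_line_search(residual, jacobian, p, n_iters=50, lam=1e-3):
    """Levenberg-Marquardt with a backtracking line search.
    residual(p): appearance-error vector; jacobian(p): its Jacobian."""
    for _ in range(n_iters):
        r = residual(p)
        J = jacobian(p)
        cost = r @ r
        # Damped Gauss-Newton system: (J^T J + lam*I) step = -J^T r
        H = J.T @ J + lam * np.eye(p.size)
        step = np.linalg.solve(H, -J.T @ r)
        # Backtracking line search along the LM step.
        t = 1.0
        while t > 1e-4:
            r_new = residual(p + t * step)
            if r_new @ r_new < cost:
                break
            t *= 0.5
        if t > 1e-4:
            p = p + t * step           # step accepted: relax the damping
            lam = max(lam * 0.5, 1e-9)
        else:
            lam *= 10.0                # no improvement: increase damping
    return p
```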

    Circle-based Eye Center Localization (CECL)

    We propose an improved eye center localization method based on the Hough transform, called Circle-based Eye Center Localization (CECL), which is simple, robust, and achieves accuracy on a par with typically more complex state-of-the-art methods. The CECL method relies on color and shape cues that distinguish the iris from other facial structures. Its accuracy is demonstrated through a comparison with 15 state-of-the-art eye center localization methods against five error thresholds, as reported in the literature. The CECL method achieved an accuracy of 80.8% to 99.4% and ranked first for 2 of the 5 thresholds. It is concluded that the CECL method offers an attractive alternative to existing methods for automatic eye center localization.
    Comment: Published and presented at The 14th IAPR International Conference on Machine Vision Applications, 2015. http://www.mva-org.jp/mva2015
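    A minimal sketch of the circle-based idea using OpenCV's circular Hough transform for the shape cue, with the darkest candidate interior as a simple color cue. All parameter values are illustrative assumptions, not the published CECL tuning.

```python
import cv2
import numpy as np

def locate_eye_center(eye_bgr):
    """Estimate the iris center in a cropped eye image (BGR).
    Hough circles supply shape candidates; mean darkness picks the iris."""
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress eyelash and specular edges
    h = gray.shape[0]
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=h,
                               param1=100, param2=15,
                               minRadius=h // 8, maxRadius=h // 2)
    if circles is None:
        return None
    # Color cue: the iris interior is darker than skin or sclera.
    def darkness(c):
        x, y, _ = np.round(c).astype(int)
        return gray[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3].mean()
    x, y, _ = min(circles[0], key=darkness)
    return int(round(x)), int(round(y))
```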

    A review on automated facial nerve function assessment from visual face capture

    Get PDF