682 research outputs found

    A Comprehensive Performance Evaluation of Deformable Face Tracking "In-the-Wild"

    Recently, technologies such as face detection, facial landmark localisation and face recognition and verification have matured enough to provide effective and efficient solutions for imagery captured under arbitrary conditions (referred to as "in-the-wild"). This is partially attributed to the fact that comprehensive "in-the-wild" benchmarks have been developed for face detection, landmark localisation and recognition/verification. A very important technology that has not been thoroughly evaluated yet is deformable face tracking "in-the-wild". Until now, performance has mainly been assessed qualitatively, by visually inspecting the result of a deformable face tracking technology on short videos. In this paper, we perform the first, to the best of our knowledge, thorough evaluation of state-of-the-art deformable face tracking pipelines using the recently introduced 300VW benchmark. We evaluate many different architectures, focusing mainly on the task of on-line deformable face tracking. In particular, we compare the following general strategies: (a) generic face detection plus generic facial landmark localisation, (b) generic model free tracking plus generic facial landmark localisation, as well as (c) hybrid approaches using state-of-the-art face detection, model free tracking and facial landmark localisation technologies. Our evaluation reveals future avenues for further research on the topic. Comment: E. Antonakos and P. Snape contributed equally and have joint second authorship.
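    To make the compared strategies concrete, the following is a minimal sketch of pipelines (a) and (b), using dlib's frontal face detector, correlation tracker and 68-point shape predictor as stand-ins for the "generic" components. The paper benchmarks many alternative detectors, model-free trackers and landmark localisers on 300VW, so these particular component choices, the model file name and the video path are illustrative assumptions only.

```python
# Sketch of two of the tracking strategies compared above, with dlib components
# as placeholders for the generic detector / tracker / landmark localiser.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()          # generic face detection
predictor = dlib.shape_predictor(                    # generic landmark localisation
    "shape_predictor_68_face_landmarks.dat")         # publicly available dlib model
tracker = dlib.correlation_tracker()                 # generic model-free tracking

def landmarks(frame, rect):
    """Run the landmark localiser inside a face rectangle."""
    shape = predictor(frame, rect)
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]

def strategy_a(frame):
    """(a) Detection + localisation, re-run independently on every frame."""
    return [landmarks(frame, r) for r in detector(frame, 1)]

def strategy_b(frame, init_rect=None):
    """(b) Model-free tracking + localisation: detect once, then track."""
    if init_rect is not None:
        tracker.start_track(frame, init_rect)
    else:
        tracker.update(frame)
    pos = tracker.get_position()
    rect = dlib.rectangle(int(pos.left()), int(pos.top()),
                          int(pos.right()), int(pos.bottom()))
    return landmarks(frame, rect)

cap = cv2.VideoCapture("video.mp4")                  # illustrative input video
ok, frame = cap.read()
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
points = strategy_b(rgb, detector(rgb, 1)[0])        # initialise from a detection
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    points = strategy_b(rgb)                         # track + localise thereafter
```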

    GrabCut-Based Human Segmentation in Video Sequences

    In this paper, we present a fully-automatic Spatio-Temporal GrabCut human segmentation methodology that combines tracking and segmentation. GrabCut initialization is performed using HOG-based subject detection, face detection, and a skin color model. Spatial information is included by Mean Shift clustering, whereas temporal coherence is enforced using a history of Gaussian Mixture Models. Moreover, full face and pose recovery is obtained by combining human segmentation with Active Appearance Models and Conditional Random Fields. Results on public datasets and on a new Human Limb dataset show robust segmentation and recovery of both face and pose using the presented methodology.
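    As an illustration of the initialisation step, the sketch below seeds OpenCV's GrabCut with a HOG-based person detection. The face detection, skin colour model, Mean Shift clustering and GMM history that make the published method spatio-temporal are not reproduced here, and the input frame path is an assumption.

```python
# Minimal sketch: a HOG person detection seeds a GrabCut segmentation.
import cv2
import numpy as np

def segment_person(frame):
    # HOG-based person (subject) detection.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(rects) == 0:
        return None
    x, y, w, h = rects[0]                        # first detection seeds GrabCut

    mask = np.zeros(frame.shape[:2], np.uint8)   # GrabCut working mask
    bgd_model = np.zeros((1, 65), np.float64)    # background GMM parameters
    fgd_model = np.zeros((1, 65), np.float64)    # foreground GMM parameters
    cv2.grabCut(frame, mask, (x, y, w, h), bgd_model, fgd_model,
                iterCount=5, mode=cv2.GC_INIT_WITH_RECT)

    # Definite or probable foreground pixels form the person mask.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                    255, 0).astype(np.uint8)

frame = cv2.imread("frame.png")                  # illustrative video frame
person_mask = segment_person(frame)
```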

    A graphical model based solution to the facial feature point tracking problem

    In this paper, a facial feature point tracker motivated by applications such as human-computer interfaces and facial expression analysis systems is proposed. The proposed tracker is based on a graphical model framework. The facial features are tracked through video streams by incorporating statistical relations in time as well as spatial relations between feature points. By exploiting the spatial relationships between feature points, the proposed method provides robustness in real-world conditions such as arbitrary head movements and occlusions. A Gabor feature-based occlusion detector is developed and used to handle occlusions. The performance of the proposed tracker has been evaluated on real video data under various conditions, including occluded facial gestures and head movements. It is also compared to two popular methods, one based on Kalman filtering exploiting temporal relations, and the other based on active appearance models (AAM). Improvements provided by the proposed approach are demonstrated through both visual displays and quantitative analysis.
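    The occlusion-handling idea can be sketched as follows: Gabor responses sampled at a tracked feature point are compared with the responses stored while the point was last reliably visible, and a drop in similarity flags an occlusion. The filter-bank parameters and similarity threshold below are illustrative assumptions, not the values used in the paper.

```python
# Illustrative Gabor-feature occlusion check in the spirit of the detector above.
import cv2
import numpy as np

def gabor_bank(ksize=21, sigmas=(4.0,), thetas=np.arange(0, np.pi, np.pi / 4)):
    """Small bank of Gabor kernels at several orientations (illustrative parameters)."""
    return [cv2.getGaborKernel((ksize, ksize), s, t, lambd=10.0, gamma=0.5)
            for s in sigmas for t in thetas]

def gabor_jet(gray, point, bank):
    """Stack the filter responses at one pixel into a feature vector.
    (The full frame is filtered for clarity; a real tracker would use a local patch.)"""
    x, y = int(point[0]), int(point[1])
    return np.array([cv2.filter2D(gray, cv2.CV_32F, k)[y, x] for k in bank])

def is_occluded(gray, point, reference_jet, bank, threshold=0.6):
    """Flag occlusion when normalised similarity to the stored reference drops."""
    jet = gabor_jet(gray, point, bank)
    sim = np.dot(jet, reference_jet) / (
        np.linalg.norm(jet) * np.linalg.norm(reference_jet) + 1e-8)
    return sim < threshold
```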

    3D Face Recognition


    Generative Interpretation of Medical Images


    Computer Vision for Timber Harvesting


    Wize Mirror - a smart, multisensory cardio-metabolic risk monitoring system

    In recent years, personal health monitoring systems have been gaining popularity, both as a result of the pull from the general population, keen to improve well-being and to detect possibly serious health conditions early, and the push from industry, eager to translate the recent significant progress in computer vision and machine learning into commercial products. One such system is the Wize Mirror, built as a result of the FP7-funded SEMEOTICONS (SEMEiotic Oriented Technology for Individuals CardiOmetabolic risk self-assessmeNt and Self-monitoring) project. The project aims to translate the semeiotic code of the human face into computational descriptors and measures, automatically extracted from videos, multispectral images, and 3D scans of the face. The multisensory platform developed in the project, in the form of a smart mirror, looks for signs related to cardio-metabolic risk. The goal is to enable users to self-monitor their well-being status over time and improve their lifestyle via tailored user guidance. This paper focuses on the part of the system that uses computer vision and machine learning techniques to perform 3D morphological analysis of the face and to recognise psycho-somatic status, both of which are linked with cardio-metabolic risk. The paper describes the concepts, methods and developed implementations, and reports results obtained on both real and synthetic datasets.