    Automatic facial analysis for objective assessment of facial paralysis

    Facial paralysis is a condition causing decreased movement on one side of the face. A quantitative, objective, and reliable assessment system would be an invaluable tool for clinicians treating patients with this condition. This paper presents an approach based on the automatic analysis of patient video data. Facial feature localization and facial movement detection methods are discussed. An algorithm is presented to process the optical flow data to obtain the motion features in the relevant facial regions. Three classification methods are applied to provide quantitative evaluations of regional facial nerve function and the overall facial nerve function based on the House-Brackmann scale. Experiments show the radial basis function (RBF) neural network to have superior performance.
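The regional motion-feature step described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the region layout, flow values, and the left/right asymmetry measure are all assumptions.

```python
import numpy as np

def regional_motion_features(flow, regions):
    """Average optical-flow magnitude per facial region.

    flow    : (H, W, 2) array of per-pixel (dx, dy) displacements.
    regions : dict name -> (row_slice, col_slice) bounding boxes.
    Returns a dict name -> mean motion magnitude in that region.
    """
    magnitude = np.linalg.norm(flow, axis=2)
    return {name: float(magnitude[rs, cs].mean())
            for name, (rs, cs) in regions.items()}

# Synthetic example: motion only on the left half of the frame,
# mimicking reduced movement on a paralysed right side.
H, W = 100, 100
flow = np.zeros((H, W, 2))
flow[:, :50, 0] = 3.0                 # horizontal motion, left half only

regions = {"left_mouth":  (slice(60, 90), slice(10, 45)),
           "right_mouth": (slice(60, 90), slice(55, 90))}
feats = regional_motion_features(flow, regions)

# A simple asymmetry score; classifiers such as an RBF network would
# consume features like these, mapped onto the House-Brackmann scale.
asymmetry = feats["left_mouth"] - feats["right_mouth"]
```

In practice the flow field would come from an optical-flow estimator run on consecutive video frames, and the regions from the facial feature localization step.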

    Biomedical image sequence analysis with application to automatic quantitative assessment of facial paralysis

    Facial paralysis is a condition causing decreased movement on one side of the face. A quantitative, objective, and reliable assessment system would be an invaluable tool for clinicians treating patients with this condition. This paper presents an approach based on the automatic analysis of patient video data. Facial feature localization and facial movement detection methods are discussed. An algorithm is presented to process the optical flow data to obtain the motion features in the relevant facial regions. Three classification methods are applied to provide quantitative evaluations of regional facial nerve function and the overall facial nerve function based on the House-Brackmann scale. Experiments show the radial basis function (RBF) neural network to have superior performance.

    Automatic landmark annotation and dense correspondence registration for 3D human facial images

    Dense surface registration of three-dimensional (3D) human facial images holds great potential for studies of human trait diversity, disease genetics, and forensics. Non-rigid registration is particularly useful for establishing dense anatomical correspondences between faces. Here we describe a novel non-rigid registration method for fully automatic 3D facial image mapping. This method comprises two steps: first, seventeen facial landmarks are automatically annotated, mainly via PCA-based feature recognition following 3D-to-2D data transformation. Second, an efficient thin-plate spline (TPS) protocol is used to establish the dense anatomical correspondence between facial images, under the guidance of the predefined landmarks. We demonstrate that this method is robust and highly accurate, even for different ethnicities. The average face is calculated for individuals of Han Chinese and Uyghur origins. Fully automatic and computationally efficient, this method enables high-throughput analysis of human facial feature variation. (Comment: 33 pages, 6 figures, 1 table)
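The second step, landmark-guided TPS warping, can be illustrated with a minimal 2-D thin-plate-spline solver. This is the standard TPS formulation, not necessarily the exact protocol used in the paper:

```python
import numpy as np

def tps_warp(src, dst, pts):
    """Fit a 2-D thin-plate spline on landmark pairs (src -> dst),
    then apply it to the query points `pts`."""
    def U(r2):
        # TPS radial kernel U(r) = r^2 log(r^2); define U(0) = 0.
        with np.errstate(divide="ignore", invalid="ignore"):
            out = r2 * np.log(r2)
        return np.nan_to_num(out)

    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = U(d2)
    P = np.hstack([np.ones((n, 1)), src])
    # Standard TPS linear system: [[K, P], [P^T, 0]] [w; a] = [dst; 0].
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    Y = np.zeros((n + 3, 2))
    Y[:n] = dst
    params = np.linalg.solve(L, Y)
    w, a = params[:n], params[n:]        # kernel weights + affine part

    d2q = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    return U(d2q) @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a

# Sanity check: landmarks related by a pure translation,
# for which the TPS must reduce to that translation.
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
dst = src + np.array([2.0, -1.0])
warped = tps_warp(src, dst, np.array([[0.25, 0.75]]))
```

The paper works with 3D surfaces and seventeen landmarks; the same linear system generalizes, with the kernel replaced by its 3-D counterpart.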

    Facial Action Recognition for Facial Expression Analysis from Static Face Images

    Automatic recognition of facial gestures (i.e., facial muscle activity) is rapidly becoming an area of intense interest in the research field of machine vision. In this paper, we present an automated system that we developed to recognize facial gestures in static, frontal- and/or profile-view color face images. A multi-detector approach to facial feature localization is utilized to spatially sample the profile contour and the contours of the facial components such as the eyes and the mouth. From the extracted contours of the facial features, we extract 10 profile-contour fiducial points and 19 fiducial points of the contours of the facial components. Based on these, 32 individual facial muscle actions (AUs) occurring alone or in combination are recognized using rule-based reasoning. With each scored AU, the utilized algorithm associates a factor denoting the certainty with which the pertinent AU has been scored. A recognition rate of 86% is achieved.
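One such rule with an attached certainty factor might look like the sketch below. It is only in the spirit of the rule-based reasoning described above; the AU choice, threshold, and saturation values are hypothetical, not the paper's rules.

```python
# Hypothetical rule: score AU12 (lip-corner puller) when a mouth corner
# rises relative to the neutral face, with a certainty factor that grows
# with the size of the displacement.

def score_au12(neutral_corner_y, current_corner_y,
               threshold=2.0, saturation=10.0):
    """Return (scored, certainty) for AU12 from one mouth-corner height.

    Image y-axis points down, so a *decrease* in y means the corner rose.
    `threshold` and `saturation` (pixels) are illustrative values.
    """
    raise_px = neutral_corner_y - current_corner_y
    if raise_px < threshold:
        return False, 0.0
    # Certainty ramps linearly from 0 at threshold to 1 at saturation.
    certainty = min(1.0, (raise_px - threshold) / (saturation - threshold))
    return True, certainty

scored, certainty = score_au12(neutral_corner_y=120.0,
                               current_corner_y=112.0)
```

The full system would apply rules of this kind over all 29 fiducial points to score the 32 AUs, alone or in combination.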

    A quantitative assessment of 3D facial key point localization fitting 2D shape models to curvature information

    This work addresses the localization of 11 prominent facial landmarks in 3D by fitting state-of-the-art shape models to 2D data. Quantitative results are provided for 34 scans at high resolution (texture maps of 10 M-pixels) in terms of accuracy (with respect to manual measurements) and precision (repeatability on different images from the same individual). We obtain an average accuracy of approximately 3 mm, and a median repeatability of inter-landmark distances typically below 2 mm, values comparable to current algorithms for automatic localization of facial landmarks. We also show that, in our experiments, replacing texture information with curvature features produced little change in performance, an important finding as it suggests the applicability of the method to any type of 3D data.
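The two reported metrics, accuracy against manual annotations and repeatability of inter-landmark distances across repeated scans, can be computed as in the sketch below. The landmark data is synthetic; this is not the paper's evaluation code.

```python
import numpy as np

def localization_accuracy(auto, manual):
    """Mean Euclidean distance (same units as input, e.g. mm) between
    automatically located and manually annotated 3-D landmarks."""
    return float(np.linalg.norm(auto - manual, axis=1).mean())

def interlandmark_repeatability(scans):
    """Spread (std. dev.) of each inter-landmark distance across repeated
    scans of one individual; returns the median spread over all pairs."""
    scans = np.asarray(scans)            # (n_scans, n_landmarks, 3)
    i, j = np.triu_indices(scans.shape[1], k=1)
    dists = np.linalg.norm(scans[:, i] - scans[:, j], axis=2)
    return float(np.median(dists.std(axis=0)))

# Three synthetic landmarks (mm) and a perturbed automatic localization.
manual = np.array([[0., 0., 0.], [30., 0., 0.], [15., 40., 5.]])
auto = manual + np.array([[1., 0., 0.], [0., 2., 0.], [0., 0., 2.]])

acc = localization_accuracy(auto, manual)            # mean of 1, 2, 2 mm
rep = interlandmark_repeatability([manual, manual])  # identical scans
```

Identical repeated scans give zero spread; real repeated captures of the same individual yield the sub-2 mm repeatability figures quoted above.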

    Multi-stream gaussian mixture model based facial feature localization=Çoklu gauss karÄ±ĆŸÄ±m modeli tabanlı yĂŒz öznitelikleri bulma algoritması

    This paper presents a new facial feature localization system which estimates the positions of the eyes, nose, and mouth corners simultaneously. In contrast to conventional systems, we use the multi-stream Gaussian mixture model (GMM) framework to represent the structural and appearance information of facial features. We construct a GMM for the region of each facial feature, where principal component analysis is used to extract each facial feature. We also build a GMM which represents the structural information of a face, i.e., the relative positions of the facial features. These models are combined in the multi-stream framework, which reduces the computation time needed to search the region of interest (ROI). We demonstrate the effectiveness of our algorithm through experiments on the BioID Face Database.
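Combining the appearance and structure streams as weighted log-likelihoods can be sketched as follows; the stream weights, Gaussian parameters, and candidate features are illustrative assumptions, and single Gaussians stand in for the mixtures.

```python
import numpy as np

def log_gauss(x, mean, var):
    """Diagonal-covariance Gaussian log-density."""
    x, mean, var = map(np.asarray, (x, mean, var))
    return float(-0.5 * np.sum(np.log(2 * np.pi * var)
                               + (x - mean) ** 2 / var))

def multistream_score(appearance_ll, structure_ll, weights=(0.6, 0.4)):
    """Weighted combination of the two stream log-likelihoods, as in
    multi-stream frameworks (the weights here are illustrative)."""
    wa, ws = weights
    return wa * appearance_ll + ws * structure_ll

# Toy example: pick the eye-corner candidate whose appearance (PCA
# coefficients) and relative position both fit their Gaussians best.
candidates = {
    "good": {"pca": [0.1, -0.2], "rel_pos": [0.0, 0.1]},
    "bad":  {"pca": [2.5,  3.0], "rel_pos": [1.5, 2.0]},
}
scores = {
    name: multistream_score(
        log_gauss(c["pca"],     mean=[0., 0.], var=[1., 1.]),
        log_gauss(c["rel_pos"], mean=[0., 0.], var=[0.5, 0.5]))
    for name, c in candidates.items()
}
best = max(scores, key=scores.get)
```

Because the structure stream already penalizes implausible configurations, poorly placed candidates score low and can be pruned early, which is how the combination shrinks the ROI search.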