
    Comparing landmarking methods for face recognition

Good registration (alignment to a reference) is essential for accurate face recognition. We use the locations of facial features (eyes, nose, mouth, etc.) as landmarks for registration. Two landmarking methods are explored and compared: (1) the Most Likely-Landmark Locator (MLLL), based on maximizing the likelihood ratio [1], and (2) Viola-Jones detection [2]. Further, a landmark-correction method based on projection into a subspace is introduced. Both landmarking methods have been trained on the landmarked images in the BioID database [3]. The MLLL has been trained to locate 17 landmarks and the Viola-Jones method 5. The localization error and its effect on the equal-error rate (EER) have been measured, using ground-truth data as a reference. The results are as follows:
1. The localization errors obtained on the FRGC database are 4.2, 8.6 and 4.6 pixels for Viola-Jones, the MLLL, and the MLLL after landmark correction, respectively, at an inter-eye distance of 100 pixels in the reference face. The MLLL with landmark correction scores best in the verification experiment.
2. Using more landmarks decreases both the average localization error and the EER.
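To illustrate the subspace-based correction step, a minimal sketch is given below: the landmark coordinates are stacked into a shape vector, a low-dimensional basis is learned from training shapes, and a detected shape is projected into that subspace and reconstructed. The plain-PCA model and all names are illustrative assumptions, not the authors' code.

```python
# Sketch of landmark correction by projection into a shape subspace
# (simplified reading of the correction step; plain PCA is an assumption).
import numpy as np

def fit_shape_subspace(train_shapes, n_components=8):
    """train_shapes: (N, 2K) array, each row the x/y coords of K landmarks."""
    mean = train_shapes.mean(axis=0)
    # Principal directions of landmark-configuration variation.
    _, _, vt = np.linalg.svd(train_shapes - mean, full_matrices=False)
    return mean, vt[:n_components]          # rows span the shape subspace

def correct_landmarks(shape, mean, basis):
    """Project a detected shape onto the subspace and reconstruct it,
    pulling mislocated landmarks back toward plausible configurations."""
    coeffs = basis @ (shape - mean)
    return mean + basis.T @ coeffs
```

With 17 landmarks each shape vector has 34 entries; a gross mislocation of a single landmark is smoothed out because the reconstruction can only move within the low-dimensional subspace of plausible face configurations.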

    A landmark paper in face recognition

Good registration (alignment to a reference) is essential for accurate face recognition. The effects of the number of landmarks on the mean localization error and the recognition performance are studied. Two landmarking methods are explored and compared for that purpose: (1) the Most Likely-Landmark Locator (MLLL), based on maximizing the likelihood ratio, and (2) Viola-Jones detection. Both use the locations of facial features (eyes, nose, mouth, etc.) as landmarks. Further, a landmark-correction method (BILBO) based on projection into a subspace is introduced. The MLLL has been trained to locate 17 landmarks and the Viola-Jones method 5. The mean localization errors and the effects on verification performance have been measured. It was found that on the eyes, the Viola-Jones detector is about 1% of the inter-ocular distance more accurate than the MLLL-BILBO combination; on the nose and mouth, the MLLL-BILBO combination is about 0.5% of the inter-ocular distance more accurate than the Viola-Jones detector. Using more landmarks results in lower equal-error rates, even when the landmarking is not very accurate. If the same landmarks are used, the most accurate landmarking method gives the best verification performance.
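Verification performance is reported here as the equal-error rate. As a minimal sketch (the score arrays and threshold sweep are illustrative, not the paper's evaluation code), an EER can be computed from genuine and impostor similarity scores as follows:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: the operating point where the false-accept and false-reject
    rates coincide. `genuine`/`impostor` are similarity-score arrays."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))        # threshold where the curves cross
    return (far[i] + frr[i]) / 2.0
```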

Performance evaluation of an ASM-based facial feature tracking algorithm using Kinect

This paper presents a performance evaluation of a facial-feature tracking algorithm based on active shape models (ASM), using the Kinect sensor as the image-capture device. The implementation was built with the OpenCV libraries on a laptop PC with a 2.4 GHz Core i5 processor and 4 GB of RAM, running Windows 7. For the evaluation, the algorithm was run to observe its response to different head poses and facial expressions. The stabilization times of the points on the image were recorded, and the localization of each point on the image was analysed point by point using human judgement. To simplify the analysis, the points are grouped by facial region: face contour, eyebrows, nose, eyes and mouth. Finally, results are presented for the average model-fitting time, the average number of frames, and the average positioning error under the different facial conditions, which shows the robustness of this work and its adaptability for future work.
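The per-region error analysis can be sketched as follows; the grouping of ASM point indices into facial regions is a made-up placeholder, not the point numbering used in this work:

```python
import numpy as np

# Hypothetical grouping of ASM point indices by facial region.
REGIONS = {
    "face_contour": range(0, 15),
    "eyebrows":     range(15, 25),
    "nose":         range(25, 32),
    "eyes":         range(32, 42),
    "mouth":        range(42, 58),
}

def mean_error_per_region(tracked, ground_truth):
    """tracked/ground_truth: (K, 2) arrays of point coordinates in pixels.
    Returns the mean Euclidean positioning error for each facial region."""
    errors = np.linalg.norm(tracked - ground_truth, axis=1)
    return {name: float(errors[list(idx)].mean())
            for name, idx in REGIONS.items()}
```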

    Precise eye localization through a general-to-specific model definition

We present a method for precise eye localization that uses two Support Vector Machines trained on properly selected Haar wavelet coefficients. Evaluation of our technique on several standard databases shows very good performance. Furthermore, we study the strong correlation between the eye localization error and the face recognition rate.
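A minimal sketch of the combination named in the abstract — Haar wavelet coefficients used as features for an SVM — is shown below. PyWavelets and scikit-learn stand in for whatever toolchain the authors used, and the paper's coefficient selection and two-SVM setup are not reproduced:

```python
import numpy as np
import pywt                      # PyWavelets
from sklearn.svm import SVC

def haar_features(patch):
    """2-D Haar wavelet transform of a grayscale patch; the flattened
    coefficients serve as the feature vector."""
    cA, (cH, cV, cD) = pywt.dwt2(patch, "haar")
    return np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

def train_eye_classifier(patches, labels):
    """patches: list of equally sized (H, W) grayscale arrays;
    labels: 1 = eye region, 0 = non-eye region."""
    X = np.stack([haar_features(p) for p in patches])
    return SVC(kernel="rbf", probability=True).fit(X, labels)
```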

    Data Fusion for Real-time Multimodal Emotion Recognition through Webcams and Microphones in E-Learning

The original article is available on the Taylor & Francis Online website at the following link: http://www.tandfonline.com/doi/abs/10.1080/10447318.2016.1159799?journalCode=hihc20
This paper describes the validation study of our software that uses combined webcam and microphone data for real-time, continuous, unobtrusive emotion recognition as part of our FILTWAM framework. FILTWAM aims at deploying a real-time multimodal emotion recognition method for providing more adequate feedback to learners during an online communication-skills training. Timely feedback is needed that reflects the intended emotions they show and that increases learners' awareness of their own behaviour. In particular, a reliable and valid software interpretation of performed face and voice emotions is needed to warrant such adequate feedback. This validation study therefore calibrates our software. The study uses a multimodal fusion method. Twelve test persons performed computer-based tasks in which they were asked to mimic specific facial and vocal emotions. All test persons' behaviour was recorded on video, and two raters independently scored the emotions shown, which were contrasted with the software's recognition outcomes. A hybrid method for multimodal fusion in our software shows accuracies between 96.1% and 98.6% for the best-chosen WEKA classifiers over the predicted emotions. The software fulfils its requirements of real-time data interpretation and reliable results.
The Netherlands Laboratory for Lifelong Learning (NELLL) of the Open University Netherlands.
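As an illustration of decision-level fusion of the two modalities (not the paper's specific hybrid WEKA setup), per-class probabilities from a face classifier and a voice classifier can be combined with a weighted average; the emotion set and weight below are assumptions:

```python
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]  # illustrative set

def late_fusion(face_probs, voice_probs, w_face=0.6):
    """Weighted average of per-class probabilities from the two modalities;
    the weight is a tunable assumption, not a value from the paper."""
    fused = w_face * np.asarray(face_probs) + (1 - w_face) * np.asarray(voice_probs)
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: face says "happy" strongly, voice agrees weakly.
label, scores = late_fusion([0.70, 0.10, 0.10, 0.05, 0.05],
                            [0.50, 0.20, 0.10, 0.10, 0.10])
```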

    A practical subspace approach to landmarking

A probabilistic, maximum a posteriori approach to finding landmarks in a face image is proposed, which provides a theoretical framework for template-based landmarkers. One such landmarker, based on a likelihood-ratio detector, is discussed in detail. Special attention is paid to training and implementation issues in order to minimize storage and processing requirements. In particular, a fast approximate singular value decomposition method is proposed to speed up the training process, and an implementation of the landmarker in the Fourier domain is presented that speeds up the search process. A subspace method for outlier correction and an iterative implementation of the landmarker are both shown to improve accuracy. The impact of carefully tuning the many parameters of the method is illustrated. The method is extensively tested and compared with alternatives.
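The Fourier-domain implementation exploits the convolution theorem: scoring a template at every image position reduces to a pointwise product of spectra followed by an inverse transform. A minimal NumPy sketch, with plain cross-correlation standing in for the paper's likelihood-ratio score:

```python
import numpy as np

def fft_correlate(image, template):
    """Cross-correlate `template` with `image` via the FFT; the peak of
    the response map marks the best-matching landmark position."""
    H, W = image.shape
    spec_img = np.fft.rfft2(image)
    # Conjugating the zero-padded template spectrum yields correlation
    # rather than convolution.
    spec_tpl = np.conj(np.fft.rfft2(template, s=(H, W)))
    response = np.fft.irfft2(spec_img * spec_tpl, s=(H, W))
    return np.unravel_index(np.argmax(response), response.shape)
```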

    Image analysis for extracapsular hip fracture surgery

PhD Thesis. During the implant-insertion phase of extracapsular hip fracture surgery, a surgeon visually inspects digital radiographs to infer the best position for the implant. The inference is made by "eye-balling", which clearly leaves room for trial and error and is not ideal for the patient. This thesis presents an image-analysis approach to estimating the ideal positioning for the implant using a variant of the deformable-templates model known as the Constrained Local Model (CLM). The model is a synthesis of shape and local appearance models learned from a set of annotated landmarks and their corresponding local patches extracted from digital femur x-rays. The CLM in this work features both Principal Component Analysis (PCA) and Probabilistic PCA as regularisation components; the PPCA variant is a novel adaptation of the CLM framework that accounts for landmark-annotation error, which the PCA version does not. Our CLM implementation is used to express two clinical metrics, namely the Tip-Apex Distance and Parker's Ratio (routinely used by clinicians to assess the positioning of the surgical implant during hip fracture surgery), within the image-analysis framework. With our model, we were able to automatically localise significant landmarks on the femur, which were subsequently used to measure Parker's Ratio directly from digital radiographs and to determine an optimal placement for the surgical implant in 87% of the instances, thereby achieving fully automatic measurement of Parker's Ratio as opposed to the manual measurements currently performed in the surgical theatre during hip fracture surgery.
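As an illustration of how such a metric follows from localised landmarks, the sketch below computes Parker's Ratio from three points; the point names and the simplified reading of the clinical definition (position of the screw axis across the femoral head/neck, as a percentage of its width) are assumptions:

```python
import numpy as np

def parkers_ratio(edge_a, edge_c, screw_b):
    """Parker's Ratio = AB / AC * 100, with A and C the cortical edges of
    the femoral head/neck along one line on the radiograph and B the point
    where the lag-screw axis crosses that line (simplified reading)."""
    a, b, c = map(np.asarray, (edge_a, screw_b, edge_c))
    ac = np.linalg.norm(c - a)
    # Project B onto the A-C line so small annotation offsets off the
    # line do not distort the ratio.
    ab = float(np.dot(b - a, c - a) / ac)
    return 100.0 * ab / ac
```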