7 research outputs found

    Desarrollo de un Modelo de Reconocimiento y Clasificación de Rostro Utilizando Técnicas de Inteligencia Artificial (Lambda-Fuzzy)

    Get PDF
    This article presents a strategy for the recognition of static images, specifically facial recognition, using a novel and recent classification technique called the LAMDA method (Learning Algorithm for Multivariable Data Analysis). The strategy consists of three stages that make up the recognition and classification model presented in this work: the first, called pre-processing, is responsible for conditioning the images through filtering and compression processes; the second is the feature extraction stage, which obtains the attributes of the images so that they can be distinguished correctly; and finally the classification stage, which relates the classes to the analysed images using the LAMDA technique.
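
    As a rough illustration of the classification stage, the sketch below assigns a sample to a class the way a standard LAMDA classifier does: marginal adequacy degrees from the fuzzy binomial function, combined with a mixed min/max connective. The class prototypes, exigency parameter alpha and non-informative class are assumptions taken from the generic LAMDA formulation, not the authors' code.

import numpy as np

def lamda_classify(x, prototypes, alpha=0.7):
    """Assign a feature vector x (values normalised to [0, 1]) to a class.

    prototypes: dict mapping class name -> vector rho of per-feature means.
    alpha: exigency parameter mixing the min (strict) and max (permissive)
           aggregation of the marginal adequacy degrees.
    Minimal sketch of the generic LAMDA formulation, not the authors' model.
    """
    gads = {}
    for label, rho in prototypes.items():
        rho = np.clip(np.asarray(rho, dtype=float), 1e-6, 1 - 1e-6)
        # Marginal adequacy degree (fuzzy binomial) per feature.
        mad = rho ** x * (1.0 - rho) ** (1.0 - x)
        # Global adequacy degree: mixed min/max connective.
        gads[label] = alpha * mad.min() + (1.0 - alpha) * mad.max()
    # Non-informative class: prototype of 0.5 for every feature (GAD = 0.5).
    gads["NIC"] = 0.5
    return max(gads, key=gads.get)

# Example: two toy classes described by three normalised image attributes.
prototypes = {"person_A": [0.8, 0.2, 0.6], "person_B": [0.3, 0.7, 0.4]}
print(lamda_classify(np.array([0.75, 0.25, 0.55]), prototypes))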

    Noise modelling for denoising and 3D face recognition algorithms performance evaluation

    Get PDF
    This study proposes an algorithm to quantitatively evaluate the performance of three‐dimensional (3D) holistic face recognition algorithms when various denoising methods are used. First, a method is proposed to model the noise on 3D face datasets. The model not only identifies those regions of the face which are sensitive to noise but can also be used to simulate noise for any given 3D face. Then, by incorporating the noise model in a novel 3D face recognition pipeline, seven different classification and matching methods and six denoising techniques are used to quantify the face recognition algorithms' performance for different noise powers. The outcome: (i) shows the most reliable parameters for the denoising methods to be used in a 3D face recognition pipeline; (ii) shows which parts of the face are more vulnerable to noise and require further post‐processing after data acquisition; and (iii) compares the performance of three different categories of recognition algorithms: training‐free matching‐based, subspace projection‐based and training‐based (without projection) classifiers. The results show the high performance of the bootstrap aggregating (bagging) tree classifiers and of median filtering for very high‐intensity noise. Moreover, when different noisy/denoised samples are used as probes or in the gallery, the matching algorithms significantly outperform the training‐based (including the subspace projection) methods.
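
    To give a flavour of the kind of pipeline step described above, the sketch below perturbs a 3D face scan with synthetic noise along the vertex normals and then denoises a range (depth) image with a median filter, which the abstract reports as robust for very high-intensity noise. The per-region noise weights, the Gaussian noise and the SciPy-based implementation are illustrative assumptions, not the study's fitted noise model.

import numpy as np
from scipy.ndimage import median_filter

def add_vertex_noise(vertices, normals, sigma, region_weights=None, seed=0):
    """Perturb mesh vertices along their normals with Gaussian noise.

    region_weights optionally scales the noise per vertex, mimicking the idea
    that some facial regions are more noise-sensitive than others
    (an illustrative assumption, not the study's learned noise model).
    """
    rng = np.random.default_rng(seed)
    amplitude = rng.normal(0.0, sigma, size=len(vertices))
    if region_weights is not None:
        amplitude *= region_weights
    return vertices + amplitude[:, None] * normals

def denoise_range_image(depth, kernel=5):
    """Median-filter a 2D range image (simple, spike-robust denoising)."""
    return median_filter(depth, size=kernel)

# Toy usage: noisy vertices, then a flat 10x10 range image with one spike.
verts = np.zeros((2, 3))
norms = np.array([[0., 0., 1.], [0., 0., 1.]])
print(add_vertex_noise(verts, norms, sigma=0.01))
depth = np.ones((10, 10))
depth[3, 4] = 5.0
print(denoise_range_image(depth)[3, 4])  # spike suppressed back towards 1.0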

    Facial feature point fitting with combined color and depth information for interactive displays

    Get PDF
    Interactive displays are driven by natural interaction with the user, necessitating a computer system that recognizes body gestures and facial expressions. User inputs are not easily or reliably recognized for a satisfying user experience, as the complexities of human communication are difficult to interpret in real time. Recognizing facial expressions in particular is a problem that requires high accuracy and efficiency for stable interaction environments. The recent availability of the Kinect, a low-cost, low-resolution sensor that supplies simultaneous color and depth images, provides a breakthrough opportunity to enhance the interactive capabilities of displays and the overall user experience. This new RGBD (RGB + depth) sensor generates an additional channel of depth information that can be used to improve the performance of existing state-of-the-art technology and to develop new techniques. The Active Shape Model (ASM) is a well-known deformable model that has been extensively studied for facial feature point placement. Previous shape-model techniques have applied 3D reconstruction using multiple cameras or other statistical methods for producing 3D information from 2D color images. These methods showed improved results compared to using only color data, but required an additional deformable model or expensive imaging equipment. In this thesis, an ASM is trained using the RGBD image produced by the Kinect. The real-time information from the depth sensor is registered to the color image to create a pixel-for-pixel match. To improve the quality of the depth image, a temporal median filter is applied to reduce random noise produced by the sensor. The resulting combined model is designed to produce more robust fitting of facial feature points compared to a purely color-based active shape model.
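
    The temporal median filter mentioned above is simple enough to sketch: keep a short ring buffer of registered depth frames and take the per-pixel median, which suppresses the sensor's random depth noise. The buffer length and the NumPy-based implementation below are assumptions for illustration, not the thesis code.

from collections import deque
import numpy as np

class TemporalMedianFilter:
    """Per-pixel median over the last `window` depth frames, a common way to
    reduce random noise in Kinect-style depth streams."""

    def __init__(self, window=5):
        self.frames = deque(maxlen=window)

    def update(self, depth_frame):
        self.frames.append(np.asarray(depth_frame, dtype=np.float32))
        # Median over the temporal axis; with few frames this is simply the
        # median over whatever has been seen so far.
        return np.median(np.stack(self.frames), axis=0)

# Toy usage: three noisy 4x4 depth frames around a true value of 1000 mm.
rng = np.random.default_rng(0)
filt = TemporalMedianFilter(window=5)
for _ in range(3):
    smoothed = filt.update(1000 + rng.normal(0, 20, size=(4, 4)))
print(smoothed.mean())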

    3D face morphology classification for medical applications

    Get PDF
    Classification of facial morphology traits is an important problem for many medical applications, especially with regard to determining associations between facial morphological traits or facial abnormalities and genetic variants. A modern approach to the classification of facial characteristics (traits) is to use three-dimensional facial images. In clinical practice, classification is usually performed manually, which makes the process very tedious, time-consuming and prone to operator error. Moreover, simple landmark-to-landmark facial measurements may not accurately represent the underlying complex three-dimensional facial shape. This thesis presents the first automatic approach for the classification and categorisation of facial morphological traits, with application to lip and nose traits. It also introduces new 3D geodesic curvature features obtained along the geodesic paths between 3D facial anthropometric landmarks. These geometric features were used for lip and nose trait classification and categorisation. Finally, the influence of the discovered categories on facial physical appearance is analysed using a new visualisation method in order to gain insight into the suitability of the categories for describing the underlying facial traits. The proposed approach was tested on the ALSPAC (Avon Longitudinal Study of Parents and Children) dataset consisting of 4747 3D full-face meshes. The classification accuracy obtained using expert manual categories was not very high, in the region of 72%-79%, indicating that the manual categories may be unreliable. In an attempt to improve these accuracies, an automatic categorisation method was applied. In general, the classification accuracies based on the automatic lip categories were higher than those obtained using the manual categories by at least 8%, and the automatic categories were found to be statistically more significant in the lip area than the manual categories. The same approach was used to categorise the nose traits, the results indicating that the proposed categorisation approach is capable of categorising any facial morphological trait without ground truth about its trait categories. Additionally, to test the robustness of the proposed features, they were used in the popular problem of gender classification and analysis. The results demonstrated classification accuracy superior to that of comparable methods. Finally, a discovery phase of a genome-wide association study (GWAS) was carried out for 11 automatic lip and nose trait categories. As a result, statistically significant associations were found between four traits and six single nucleotide polymorphisms (SNPs). This is a very good result considering that, for the 27 manual lip trait categories provided by a medical expert, associations were found between only two traits and two SNPs. This result indicates that the method proposed in this thesis for automatic categorisation of 3D facial morphology has considerable potential for application to GWAS.
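
    A rough sketch of the idea behind geodesic features between anthropometric landmarks: approximate the geodesic path on the face mesh as the shortest path through the edge graph and summarise the geometry along it, here just the path length and a crude turning-angle proxy for curvature. The graph approximation via SciPy and the specific summary statistics are illustrative assumptions, not the thesis's exact features.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_path_features(vertices, edges, src, dst):
    """Approximate geodesic features between two landmark vertices.

    vertices: (N, 3) array of mesh vertex positions.
    edges:    (M, 2) array of vertex index pairs.
    Returns the approximate geodesic length and the mean turning angle along
    the edge-graph shortest path (a crude stand-in for geodesic curvature).
    """
    i, j = edges[:, 0], edges[:, 1]
    w = np.linalg.norm(vertices[i] - vertices[j], axis=1)
    n = len(vertices)
    graph = csr_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])), shape=(n, n))
    _, pred = dijkstra(graph, indices=src, return_predecessors=True)

    # Reconstruct the vertex path dst -> src, then reverse it.
    path = [dst]
    while path[-1] != src:
        path.append(int(pred[path[-1]]))
    pts = vertices[path[::-1]]

    segs = np.diff(pts, axis=0)
    length = np.linalg.norm(segs, axis=1).sum()
    if len(segs) < 2:
        return length, 0.0
    u = segs / np.linalg.norm(segs, axis=1, keepdims=True)
    angles = np.arccos(np.clip((u[:-1] * u[1:]).sum(axis=1), -1.0, 1.0))
    return length, float(angles.mean())

# Toy example: a bent 4-vertex polyline standing in for a mesh path.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [2., 1., 0.], [3., 1., 0.]])
edges = np.array([[0, 1], [1, 2], [2, 3]])
print(geodesic_path_features(verts, edges, src=0, dst=3))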

    Profile-based 3D-aided face recognition

    No full text
    This paper presents a framework for automatic face recognition based on a silhouetted face profile (URxD-PV). Previous research has demonstrated the high discriminative potential of this biometric. Compared to traditional approaches to profile-based recognition, our approach is not limited to standard side-view faces only. We propose to explore the feature space of profiles under various rotations with the aid of a 3D face model. In the enrollment mode, 3D data of subjects are acquired and used to create profiles under different rotations, and the features extracted from these profiles are used to train a classifier. In the identification mode, the profile is extracted from the side-view image and its metadata is matched against the gallery metadata. We validate the accuracy of URxD-PV using data from publicly available databases.
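
    To give a flavour of the enrollment step described above, the sketch below rotates a 3D face point cloud about the vertical axis and extracts a crude silhouette profile per rotation as the extreme depth in each height band. The choice of rotation axis, the binning scheme and the NumPy implementation are illustrative assumptions rather than the URxD-PV pipeline.

import numpy as np

def rotate_y(points, angle_deg):
    """Rotate a 3D point cloud about the vertical (y) axis."""
    a = np.radians(angle_deg)
    rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                    [0.0,       1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    return points @ rot.T

def silhouette_profile(points, n_bins=64):
    """Crude silhouette profile: for each band of height (y), take the
    maximum depth (z) of the rotated points facing the camera."""
    y, z = points[:, 1], points[:, 2]
    bins = np.linspace(y.min(), y.max(), n_bins + 1)
    idx = np.clip(np.digitize(y, bins) - 1, 0, n_bins - 1)
    profile = np.full(n_bins, np.nan)
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            profile[b] = z[mask].max()
    return profile

# Enrollment-style loop: profiles of one synthetic "face" under rotations.
rng = np.random.default_rng(1)
face = rng.normal(size=(2000, 3))            # stand-in for a 3D face scan
gallery = {angle: silhouette_profile(rotate_y(face, angle))
           for angle in (-30, -15, 0, 15, 30)}
print({a: round(float(np.nanmean(p)), 3) for a, p in gallery.items()})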