12 research outputs found

    Integration of 2D Textural and 3D Geometric Features for Robust Facial Expression Recognition

    Get PDF
    Recognition of facial expressions is critical for successful social interactions and relationships. Facial expressions transmit emotional information that is essential for human-machine interaction; significant research in computer vision has therefore been conducted, with promising findings on facial expression detection in both academia and industry. 3D images have gained enormous popularity owing to their ability to overcome some of the constraints inherent in 2D imagery, such as sensitivity to lighting and pose variation. In this article, we present a method for recognizing facial expressions that combines features extracted from 2D texture images and 3D geometric data using the Local Binary Pattern (LBP) and the 3D Voxel Histogram of Oriented Gradients (3DVHOG), respectively. We performed various pre-processing operations on the MDPA-FACE3D and Bosphorus datasets, then classified the images into the seven universal expression categories: anger, disgust, fear, happiness, sadness, neutral, and surprise. Using a Support Vector Machine classifier, we achieved accuracies of 88.5% and 92.9% on the MDPA-FACE3D and Bosphorus datasets, respectively.
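The 2D half of the pipeline described above can be sketched in a few lines; this is a minimal numpy illustration of a basic 8-neighbor LBP descriptor, not the authors' implementation (the 3DVHOG half and the SVM stage are omitted, and all function names are ours):

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbor Local Binary Pattern over a 2D grayscale array."""
    h, w = img.shape
    center = img[1:-1, 1:-1]
    # neighbor offsets, one per bit, clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes

def lbp_histogram(img):
    """Normalized 256-bin LBP histogram: the 2D texture descriptor that
    would be concatenated with the 3D geometric features before the SVM."""
    hist, _ = np.histogram(lbp_codes(img), bins=256, range=(0, 256))
    return hist / hist.sum()
```

In a full system, this histogram would be concatenated with the 3DVHOG descriptor and fed to the classifier.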

    Binary Pattern Analysis for 3D Facial Action Unit Detection

    Full text link
    In this paper we propose new binary pattern features for the problem of 3D facial action unit (AU) detection. Two representations of 3D facial geometry are employed: the depth map and the Azimuthal Projection Distance Image (APDI). To these we apply the traditional Local Binary Pattern, along with Local Phase Quantisation, Gabor filters and Monogenic filters, each followed by the binary pattern feature extraction step. A feature vector is formed for each feature type by concatenating histograms of the resulting binary codes. Feature selection is then performed using a two-stage GentleBoost approach. Finally, we apply Support Vector Machines as classifiers for the detection of each AU. The system is tested in two ways: first with 10-fold cross-validation on the Bosphorus database, and then with cross-database testing by training on this database and testing on apex frames from the D3DFACS database, achieving promising results in both cases.
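The histogram-concatenation step described above can be sketched as follows, assuming a 2D map of binary-pattern codes has already been computed; the grid size and function name are illustrative choices, not taken from the paper:

```python
import numpy as np

def concat_block_histograms(code_map, grid=(4, 4), bins=256):
    """Split a map of binary-pattern codes into grid blocks and concatenate
    per-block normalized histograms into a single feature vector, as is
    typical in LBP-style pipelines."""
    h, w = code_map.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = code_map[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=bins, range=(0, bins))
            feats.append(hist / max(hist.sum(), 1))  # guard empty blocks
    return np.concatenate(feats)  # length = grid[0] * grid[1] * bins
```

One such vector per feature type (LBP, LPQ, Gabor, Monogenic) would then go through feature selection and into the per-AU classifiers.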

    Expression recognition with Kinect

    Get PDF
    Recent progress in the automatic analysis of facial and behavioural features has opened the door to new applications for identifying non-verbal behaviour. In healthcare, for example, computer vision algorithms can help clinical staff improve communication through telemedicine, and automatic behaviour descriptors can add quantitative information to professional-patient interactions. Another field of application is psychology, where a relevant use of this kind of non-verbal technology is the development of robust behaviour descriptors that correlate with psychological disorders such as depression, anxiety, post-traumatic stress disorder or hyperactivity. The goal of this final-year project has therefore been to detect the facial variations that appear on a person's face in response to different stimuli. In this case, the stimuli consisted of videos shown to the participants of a pilot study carried out at the Fundación Jiménez Díaz. The responses were captured with the new Kinect sensor, available on the market since mid-2014, which allows a detailed analysis of facial movements. A collection of three short videos was shown to the participants, each with a specific objective. The results of the study are analysed in terms of FACS, considered one of the most effective methods for measuring facial behaviour; this system divides facial expressions into Action Units (AUs) rather than classifying them into a few basic emotions.

    Forensic comparison of fired cartridge cases: Feature-extraction methods for feature-based calculation of likelihood ratios

    Get PDF
    We describe and validate a feature-based system for the calculation of likelihood ratios from 3D digital images of fired cartridge cases. The system includes a database of 3D digital images of the bases of 10 cartridges fired per firearm from approximately 300 firearms of the same class (semi-automatic pistols that fire 9 mm diameter centre-fire Luger-type ammunition and that have hemispherical firing pins and parallel breech-face marks). The images were captured using Evofinder®, an imaging system commonly used by operational forensic laboratories. A key component of the reported research is the comparison of different feature-extraction methods. The feature sets compared include those previously proposed in the literature, plus Zernike-moment-based features. Comparisons are also made between feature sets extracted from the firing-pin impression, from the breech-face region, and from the whole region of interest (firing-pin impression + breech-face region + flowback, if present). Likelihood ratios are calculated using a statistical modelling pipeline that is standard in forensic voice comparison. Validation is conducted and the results are assessed using procedures, metrics and graphics that are standard in forensic voice comparison.
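As a rough illustration of score-to-likelihood-ratio conversion (a much-simplified stand-in for the statistical modelling pipeline the abstract refers to), one can fit Gaussians to same-source and different-source comparison scores and take the ratio of their densities at the questioned score; all names and numbers here are hypothetical:

```python
import math

def _fit_gaussian(scores):
    """Sample mean and (unbiased) standard deviation of a score list."""
    mu = sum(scores) / len(scores)
    var = sum((s - mu) ** 2 for s in scores) / (len(scores) - 1)
    return mu, math.sqrt(var)

def _pdf(x, mu, sd):
    """Gaussian density at x."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def score_to_llr(score, same_source, diff_source):
    """Map a comparison score to a log10 likelihood ratio using Gaussian
    models of same-source and different-source training scores."""
    numerator = _pdf(score, *_fit_gaussian(same_source))
    denominator = _pdf(score, *_fit_gaussian(diff_source))
    return math.log10(numerator / denominator)
```

A positive log-LR supports the same-source hypothesis, a negative one the different-source hypothesis; operational systems add calibration and validation steps on top of this.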

    Recognition of compound facial expressions in 3D images: forced vs. spontaneous environments

    Get PDF
    Advisor: Profa. Dra. Olga Regina Pereira Bellon. Master's dissertation, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 16/12/2017. Includes references: p. 56-60. Area of concentration: Computer Science.
    Abstract: This research investigates Compound Facial Expressions (EFCs) in 3D images captured in two domains: forced and spontaneous. The work explores a modern categorization of facial expressions that differs from the basic facial expressions in that each category is constructed by combining two basic emotions. The investigation uses 3D images because of their intrinsic advantages: they do not suffer from problems due to variations in pose, lighting and other changes in facial appearance. Two expression-capture domains are considered: forced (the subject is instructed to perform the expression) and spontaneous (the subject produces the expression in response to stimuli), with the aim of comparing how the two behave with respect to EFC recognition, since they differ in many dimensions, including complexity, temporal dynamics and intensity. Finally, a method for EFC recognition is proposed. The method is a new application of existing detectors of facial muscle movements; the movements to be detected are denoted in the Facial Action Coding System (FACS) as Action Units (AUs). Accordingly, 3D facial AU detectors are implemented based on Local Depth Binary Patterns (LDBP). The method was then applied to two public databases of 3D images: Bosphorus (forced domain) and BP4D-Spontaneous (spontaneous domain). The method does not differentiate the EFCs that share the same AU configuration ("sadly disgusted", "appalled" and "hateful"), so these expressions are treated as a "special case". In total, 14 EFCs are considered, plus the "special case" and images without EFCs. The results confirm the existence of EFCs in 3D images, some characteristics of which were exploited. In addition, the spontaneous domain proved better at recognizing EFCs, both with the AUs annotated in the database and with the automatically detected AUs, recognizing more EFC cases and with better performance. To the best of our knowledge, this is the first time EFCs have been investigated in 3D images. Keywords: compound facial expressions, FACS, AU detection, posed domain, spontaneous domain.
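The AU-configuration lookup implied by the method above can be sketched as a set-containment test over detected Action Units; the AU combinations below are illustrative placeholders, not the prototypes used in the dissertation:

```python
# Illustrative mapping from detected Action Unit sets to compound-expression
# labels. The AU numbers are placeholders chosen for the sketch.
COMPOUND_PROTOTYPES = {
    "happily surprised": frozenset({1, 2, 12, 25}),
    "sadly angry": frozenset({4, 15, 17}),
    "special case": frozenset({4, 10, 17}),  # EFCs sharing one AU configuration
}

def classify_compound(detected_aus):
    """Return the first compound label whose AU prototype is contained in
    the detected AU set, or a non-EFC label otherwise."""
    detected = frozenset(detected_aus)
    for label, prototype in COMPOUND_PROTOTYPES.items():
        if prototype <= detected:
            return label
    return "no compound expression"
```

EFCs whose prototypes coincide would all match the same entry, which is exactly why the dissertation collapses them into a single "special case".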

    Computational Modeling of Facial Response for Detecting Differential Traits in Autism Spectrum Disorders

    Get PDF
    This dissertation proposes novel computational modeling and computer vision methods for the analysis and discovery of differential traits in subjects with Autism Spectrum Disorders (ASD) using video and three-dimensional (3D) images of face and facial expressions. ASD is a neurodevelopmental disorder that impairs an individual’s nonverbal communication skills. This work studies ASD from the pathophysiology of facial expressions which may manifest atypical responses in the face. State-of-the-art psychophysical studies mostly employ naïve human raters to visually score atypical facial responses of individuals with ASD, which may be subjective, tedious, and error prone. A few quantitative studies use intrusive sensors on the face of the subjects with ASD, which in turn, may inhibit or bias the natural facial responses of these subjects. This dissertation proposes non-intrusive computer vision methods to alleviate these limitations in the investigation for differential traits from the spontaneous facial responses of individuals with ASD. Two IRB-approved psychophysical studies are performed involving two groups of age-matched subjects: one for subjects diagnosed with ASD and the other for subjects who are typically-developing (TD). The facial responses of the subjects are computed from their facial images using the proposed computational models and then statistically analyzed to infer about the differential traits for the group with ASD. A novel computational model is proposed to represent the large volume of 3D facial data in a small pose-invariant Frenet frame-based feature space. The inherent pose-invariant property of the proposed features alleviates the need for an expensive 3D face registration in the pre-processing step. The proposed modeling framework is not only computationally efficient but also offers competitive performance in 3D face and facial expression recognition tasks when compared with that of the state-of-the-art methods. 
    This computational model is applied in the first experiment to quantify subtle facial muscle response from the geometry of 3D facial data. Results show a statistically significant asymmetry in a specific pair of facial muscle activations (p < 0.05) for the group with ASD, which suggests the presence of a psychophysical trait (also known as an ’oddity’) in the facial expressions. For the first time in the ASD literature, the Facial Action Coding System (FACS) is employed to classify the spontaneous facial responses based on facial action units (FAUs). Statistical analyses reveal a significantly (p < 0.01) higher prevalence of smile expression (FAU 12) for the ASD group when compared with the TD group. The high prevalence of smile co-occurred with significantly averted gaze (p < 0.05) in the group with ASD, which is indicative of an impaired reciprocal communication. The metric associated with incongruent facial and visual responses suggests a behavioral biomarker for ASD. The second experiment shows a higher prevalence of mouth frown (FAU 15) and significantly lower correlations between the activation of several FAU pairs (p < 0.05) in the group with ASD when compared with the TD group. The proposed computational modeling in this dissertation offers promising biomarkers, which may aid in early detection of subtle ASD-related traits, and thus enable an effective intervention strategy in the future.
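The kind of group comparison described above can be illustrated with a simple per-subject asymmetry index and a permutation test on group means; this is a generic sketch, not the dissertation's statistical procedure, and the data in the test are invented:

```python
import numpy as np

def asymmetry_index(left, right):
    """Normalized left/right muscle-activation difference per subject."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    return np.abs(left - right) / (np.abs(left) + np.abs(right) + 1e-12)

def permutation_pvalue(group_a, group_b, n_perm=5000, seed=0):
    """Two-sided permutation test on the difference of group means:
    shuffle the pooled values, re-split, and count how often the shuffled
    mean difference is at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[:a.size].mean() - pooled[a.size:].mean()) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids p == 0
```

A p-value below the chosen threshold (e.g. 0.05) would indicate a group-level difference of the kind the dissertation reports.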