
    Py-Feat: Python Facial Expression Analysis Toolbox

    Studying facial expressions is a notoriously difficult endeavor. Recent advances in the field of affective computing have yielded impressive progress in automatically detecting facial expressions from pictures and videos. However, much of this work has yet to be widely disseminated in social science domains such as psychology. Current state-of-the-art models require considerable domain expertise that is not traditionally incorporated into social science training programs. Furthermore, there is a notable absence of user-friendly, open-source software that provides a comprehensive set of tools and functions supporting facial expression research. In this paper, we introduce Py-Feat, an open-source Python toolbox that provides support for detecting, preprocessing, analyzing, and visualizing facial expression data. Py-Feat makes it easy for domain experts to disseminate and benchmark computer vision models, and for end users to quickly process, analyze, and visualize facial expression data. We hope this platform will facilitate increased use of facial expression data in human behavior research. (Comment: 25 pages, 3 figures, 5 tables)
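
    The workflow the abstract describes can be reduced to a few lines. Below is a minimal sketch, assuming the Detector API from the py-feat package; the default model choices and the input file name are illustrative placeholders, not prescriptions from the paper.

    ```python
    # Sketch: end-to-end detection with Py-Feat. "photo.jpg" is a placeholder.
    from feat import Detector

    detector = Detector()  # loads the toolbox's default detection models

    # Detect faces, landmarks, action units, and emotions in one call.
    predictions = detector.detect_image("photo.jpg")

    # Results come back as a Fex dataframe; emotion predictions can be
    # inspected directly.
    print(predictions.emotions)
    ```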

    Automatic landmark annotation and dense correspondence registration for 3D human facial images

    Dense surface registration of three-dimensional (3D) human facial images holds great potential for studies of human trait diversity, disease genetics, and forensics. Non-rigid registration is particularly useful for establishing dense anatomical correspondences between faces. Here we describe a novel non-rigid registration method for fully automatic 3D facial image mapping. The method comprises two steps: first, seventeen facial landmarks are automatically annotated, mainly via PCA-based feature recognition following a 3D-to-2D data transformation; second, an efficient thin-plate spline (TPS) protocol establishes dense anatomical correspondence between facial images, guided by the predefined landmarks. We demonstrate that this method is robust and highly accurate, even across different ethnicities. The average face is calculated for individuals of Han Chinese and Uyghur origins. Fully automatic and computationally efficient, this method enables high-throughput analysis of human facial feature variation. (Comment: 33 pages, 6 figures, 1 table)
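
    As a rough illustration of the second step, a landmark-guided TPS warp can be built with SciPy's RBFInterpolator, which supports a thin-plate spline kernel. This is a minimal sketch under the assumption that corresponding landmarks are already annotated; the arrays are placeholders, not the paper's data or protocol.

    ```python
    # Sketch: thin-plate spline warp guided by 17 corresponding landmarks,
    # mapping a dense source mesh into anatomical correspondence with a target.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    src_landmarks = np.random.rand(17, 3)  # placeholder: annotated source landmarks
    dst_landmarks = np.random.rand(17, 3)  # placeholder: matching target landmarks

    # Fit one TPS mapping from source landmark space to target landmark space.
    tps = RBFInterpolator(src_landmarks, dst_landmarks, kernel="thin_plate_spline")

    # Apply the warp to every vertex of the dense source mesh.
    src_vertices = np.random.rand(5000, 3)  # placeholder dense mesh vertices
    warped_vertices = tps(src_vertices)     # shape (5000, 3)
    ```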

    Automatic Analysis of Facial Expressions Based on Deep Covariance Trajectories

    In this paper, we propose a new approach for facial expression recognition using deep covariance descriptors. The solution is based on the idea of encoding local and global Deep Convolutional Neural Network (DCNN) features extracted from still images into compact local and global covariance descriptors. These covariance matrices lie on the manifold of Symmetric Positive Definite (SPD) matrices. By classifying static facial expressions with a Support Vector Machine (SVM) using a valid Gaussian kernel on the SPD manifold, we show that deep covariance descriptors are more effective than standard classification with fully connected layers and softmax. In addition, we propose a novel solution for modeling the temporal dynamics of facial expressions as deep trajectories on the SPD manifold. Extending the classification pipeline for covariance descriptors, we apply SVM with valid positive definite kernels derived from global alignment to classify deep covariance trajectories. Extensive experiments on the Oulu-CASIA, CK+, and SFEW datasets show that both the proposed static and dynamic approaches achieve state-of-the-art performance for facial expression recognition, outperforming many recent approaches. (Comment: A preliminary version of this work appeared in "Otberdout N, Kacem A, Daoudi M, Ballihi L, Berretti S. Deep Covariance Descriptors for Facial Expression Recognition, in British Machine Vision Conference 2018, BMVC 2018, Northumbria University, Newcastle, UK, September 3-6, 2018; 2018:159." arXiv admin note: substantial text overlap with arXiv:1805.0386)
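
    For concreteness, the static pipeline can be sketched as follows. The log-Euclidean Gaussian kernel used here is one common valid Gaussian kernel on the SPD manifold and stands in for the paper's kernel; the feature matrices and labels are random placeholders, not DCNN outputs.

    ```python
    # Sketch: SVM classification of covariance descriptors with a
    # log-Euclidean Gaussian kernel on the SPD manifold.
    import numpy as np
    from sklearn.svm import SVC

    def covariance_descriptor(features, eps=1e-6):
        """Covariance of feature vectors (rows), regularized to stay SPD."""
        cov = np.cov(features, rowvar=False)
        return cov + eps * np.eye(cov.shape[0])

    def spd_log(X):
        """Matrix logarithm of an SPD matrix via eigendecomposition."""
        w, V = np.linalg.eigh(X)
        return (V * np.log(w)) @ V.T

    def log_euclidean_gaussian_kernel(descriptors, sigma=1.0):
        """K[i, j] = exp(-||log(X_i) - log(X_j)||_F^2 / (2 sigma^2))."""
        logs = [spd_log(d) for d in descriptors]
        n = len(logs)
        K = np.zeros((n, n))
        for i in range(n):
            for j in range(i, n):
                d2 = np.linalg.norm(logs[i] - logs[j], "fro") ** 2
                K[i, j] = K[j, i] = np.exp(-d2 / (2.0 * sigma ** 2))
        return K

    # 20 placeholder "images", each with 50 local feature vectors of dimension 8.
    rng = np.random.default_rng(0)
    descriptors = [covariance_descriptor(rng.standard_normal((50, 8)))
                   for _ in range(20)]
    labels = rng.integers(0, 2, size=20)

    K = log_euclidean_gaussian_kernel(descriptors)
    clf = SVC(kernel="precomputed").fit(K, labels)
    ```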

    Automated Privacy Protection for Mobile Device Users and Bystanders in Public Spaces

    As smartphones have gained popularity over recent years, they have provided users convenient access to services and integrated sensors that were previously only available through larger, stationary computing devices. This trend of ubiquitous, mobile devices provides unparalleled convenience and productivity for users who wish to perform everyday actions such as taking photos, participating in social media, reading emails, or checking online banking transactions. However, the increasing use of mobile devices in public spaces has negative implications for users' own privacy and, in some cases, that of bystanders around them. Specifically, digital photography trends in public have negative implications for bystanders, who can be captured inadvertently in users' photos. Those who are captured often have no knowledge of being photographed and no control over how photos of them are distributed. To address this growing issue, a novel system is proposed for protecting the privacy of bystanders captured in public photos. A fully automated approach to accurately distinguishing the intended subjects from strangers is explored. A feature-based classification scheme utilizing entire photos is presented. Additionally, the privacy-minded case of utilizing only local face images, with no contextual information from the original image, is explored with a convolutional neural network-based classifier. Three methods of face anonymization are implemented and compared: black boxing, Gaussian blurring, and pose-tolerant face swapping. To validate these methods, a comprehensive user survey is conducted to understand the difference in viability between them. Beyond photography, the privacy of mobile device users can also be impacted in public spaces, as visual eavesdropping or "shoulder surfing" attacks on device screens become feasible. Malicious individuals can easily glean personal data from smartphone and mobile device screens while they are viewed. To protect displayed user content, a novel, sensor-based visual eavesdropping detection scheme using integrated device cameras is proposed. To selectively obfuscate private content while an attacker is nearby, a dynamic scheme for detecting and hiding private content is also developed, utilizing User-Interface-as-an-Image (UIaaI). A deep, convolutional object detection network is trained and used to identify sensitive content under this scheme. To allow users to customize the types of content to hide, dynamic training sample generation is introduced to retrain the content detection network with very few original UI samples. Web applications are also considered, with a Chrome browser extension that automates the detection and obfuscation of sensitive web page fields through HTML parsing and CSS injection.
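
    As an illustration of the Gaussian-blurring anonymization method compared above, the sketch below uses OpenCV's stock Haar cascade in place of the thesis's subject/bystander classifier; file names and parameters are placeholders.

    ```python
    # Sketch: Gaussian-blur anonymization of detected faces. A real system
    # would first separate intended subjects from bystanders and blur only
    # the latter.
    import cv2

    image = cv2.imread("public_photo.jpg")  # placeholder input photo
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # OpenCV's bundled Haar cascade stands in for the thesis's detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        roi = image[y:y + h, x:x + w]
        # Kernel size controls blur strength and must be odd.
        image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)

    cv2.imwrite("public_photo_anonymized.jpg", image)
    ```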

    'Aariz: A Benchmark Dataset for Automatic Cephalometric Landmark Detection and CVM Stage Classification

    The accurate identification and precise localization of cephalometric landmarks enable the classification and quantification of anatomical abnormalities. The traditional way of marking cephalometric landmarks on lateral cephalograms is a monotonous and time-consuming job. Endeavours to develop automated landmark detection systems have persistently been made; however, they remain inadequate for orthodontic applications due to the unavailability of a reliable dataset. We propose a new state-of-the-art dataset to facilitate the development of robust AI solutions for quantitative morphometric analysis. The dataset includes 1000 lateral cephalometric radiographs (LCRs) obtained from 7 different radiographic imaging devices with varying resolutions, making it the most diverse and comprehensive cephalometric dataset to date. The clinical experts on our team meticulously annotated each radiograph with 29 cephalometric landmarks, including the most significant soft tissue landmarks ever marked in any publicly available dataset. Additionally, our experts labelled the cervical vertebral maturation (CVM) stage of the patient in each radiograph, making this dataset the first standard resource for CVM classification. We believe this dataset will be instrumental in the development of reliable automated landmark detection frameworks for use in orthodontics and beyond.
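
    Although the contribution here is the dataset itself, detection frameworks trained on such data are commonly scored with the mean radial error (MRE) between predicted and ground-truth landmarks. Below is a minimal sketch with placeholder arrays rather than 'Aariz annotations, and an assumed pixel-to-millimetre scale.

    ```python
    # Sketch: mean radial error (MRE) for one radiograph, given 29 predicted
    # and ground-truth (x, y) landmark positions. Arrays are placeholders.
    import numpy as np

    def mean_radial_error(pred, gt, pixels_per_mm=10.0):
        """Mean Euclidean landmark error in mm; inputs have shape (29, 2)."""
        radial_errors = np.linalg.norm(pred - gt, axis=1) / pixels_per_mm
        return radial_errors.mean()

    pred = np.random.rand(29, 2) * 2000  # placeholder predictions (pixels)
    gt = np.random.rand(29, 2) * 2000    # placeholder annotations (pixels)
    print(f"MRE: {mean_radial_error(pred, gt):.2f} mm")
    ```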

    Performance of distance-based matching algorithms in 3D facial identification

    Facial image identification is an area of forensic science in which an expert provides an opinion on whether or not two or more images depict the same individual. The primary concern for facial image identification is that it must be based on sound scientific principles. The recent extensive development of 3D recording technology, which is presumed to enhance performance on identification tasks, has made it essential to question the conditions under which 3D images can yield accurate and reliable results. The present paper explores the effects of mesh resolution, the adequacy of selected measures of dissimilarity, and the number of variables employed to encode identity-specific facial features, on a dataset of 528 3D face models sampled from the Fidentis 3D Face Database (N≈2100). To match 3D images, two quantitative approaches were tested: the first based on closest point-to-point distances computed from registered surface models, and the second on Procrustes distances derived from discrete 3D facial points collected manually on textured 3D facial models. The results, expressed in terms of rank-1 identification rates, ROC curves, and likelihood ratios, show that under optimized conditions the tested algorithms can provide very accurate and reliable results. Their performance is, however, highly dependent on mesh resolution and on the number of variables employed in the task. The results also show that, in addition to numerical measures of dissimilarity, various 3D visualization tools can assist in decision-making.
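
    The landmark-based approach can be illustrated with SciPy's procrustes function, which removes translation, scale, and rotation before measuring the residual shape difference. This is a minimal sketch with placeholder landmark sets, not the paper's pipeline.

    ```python
    # Sketch: Procrustes disparity between two sets of corresponding 3D
    # facial landmarks. scipy.spatial.procrustes aligns the configurations,
    # then returns the residual sum of squared differences.
    import numpy as np
    from scipy.spatial import procrustes

    probe = np.random.rand(20, 3)    # placeholder probe landmarks
    gallery = np.random.rand(20, 3)  # placeholder gallery landmarks

    _, _, disparity = procrustes(probe, gallery)
    print(f"Procrustes disparity: {disparity:.4f}")

    # For identification, the gallery face with the smallest disparity to
    # the probe would be reported as the rank-1 match.
    ```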