    Face detection in profile views using fast discrete curvelet transform (FDCT) and support vector machine (SVM)

    Human face detection is an indispensable component in face processing applications, including automatic face recognition, security surveillance, facial expression recognition, and the like. This paper presents a profile face detection algorithm based on curvelet features, as the curvelet transform offers good directional representation and can capture edge information in the human face from different angles. First, a simple skin color segmentation scheme based on the HSV (hue, saturation, value) and YCgCr (luminance, green chrominance, red chrominance) color models is used to extract skin blocks. The segmentation scheme utilizes only the S and CgCr components and is therefore luminance independent. Features extracted from three frequency bands of the curvelet decomposition are used to detect a face in each block. A support vector machine (SVM) classifier is trained for the classification task. In the performance test, the results showed that the proposed algorithm can detect profile faces in color images with a good detection rate and a low misdetection rate.
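
    The segmentation step is concrete enough to sketch. Below is a minimal Python/OpenCV illustration of a luminance-independent skin mask built from the S channel of HSV and the Cg/Cr chrominance planes; the BT.601-style chrominance coefficients are standard, but the threshold ranges are illustrative assumptions rather than the paper's published values.

        # Minimal sketch: luminance-independent skin mask from S (HSV) and Cg/Cr
        # (YCgCr). Threshold ranges below are illustrative assumptions only.
        import cv2
        import numpy as np

        def skin_mask(bgr):
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
            s = hsv[:, :, 1].astype(np.float32)

            b, g, r = [c.astype(np.float32) for c in cv2.split(bgr)]
            # BT.601-style chrominance planes for 8-bit input; Cg replaces
            # the usual Cb component of YCbCr.
            cr = 128.0 + 0.4392 * r - 0.3677 * g - 0.0714 * b
            cg = 128.0 - 0.3180 * r + 0.4392 * g - 0.1212 * b

            # Assumed skin ranges; in practice these are tuned on labeled
            # skin patches. Note that luminance (V, Y) is never used.
            mask = ((s > 25) & (s < 180) &
                    (cr > 135) & (cr < 175) &
                    (cg > 105) & (cg < 130))
            return mask.astype(np.uint8) * 255

    Connected components of the resulting mask would then serve as the candidate skin blocks passed to the curvelet/SVM stage.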

    Exploring Human attitude during Human-Robot Interaction

    The aim of this work is to provide an automatic analysis to assess the user's attitude when interacting with a companion robot. In detail, our work focuses on defining which combination of social cues the robot should recognize, and how, so as to stimulate the ongoing conversation. The analysis is performed on video recordings of 9 elderly users. From each video, low-level descriptors of the user's behavior are extracted using open-source automatic tools that capture information on the voice, the body posture, and the face landmarks. The assessment of 3 types of attitude (neutral, positive, and negative) is performed with 3 machine learning classification algorithms: k-nearest neighbors, random decision forest, and support vector regression. Since intra- and inter-subject variability could affect the results of the assessment, this work shows the robustness of the classification models in both scenarios. Further analysis is performed on the type of representation used to describe the attitude: a raw and an auto-encoded representation are applied to the descriptors. The results of the attitude assessment show high accuracy values (>0.85) for both unimodal and multimodal data. The outcome of this work can be integrated into a robotic platform to automatically assess the quality of interaction and to modify its behavior accordingly.
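
    As a rough illustration of the evaluation described above, the scikit-learn sketch below compares the three classifier families under leave-one-subject-out cross-validation, one way to probe the inter-subject robustness the abstract mentions. The file names and data layout are hypothetical, and an SVC stands in for the paper's support-vector model.

        # Sketch under assumed data layout: one descriptor vector per sample
        # (voice + posture + face landmarks), labels 0=neutral, 1=positive,
        # 2=negative, and a subject id per sample for grouped CV.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X = np.load("descriptors.npy")       # hypothetical (n_samples, n_features)
        y = np.load("attitude_labels.npy")   # hypothetical (n_samples,)
        subjects = np.load("subject_ids.npy")

        models = {
            "k-NN": KNeighborsClassifier(n_neighbors=5),
            "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
            "SVM": SVC(kernel="rbf"),  # stand-in for the support-vector model
        }
        # Leave-one-subject-out CV: every fold tests on an unseen subject,
        # which directly exposes inter-subject variability.
        for name, model in models.items():
            pipe = make_pipeline(StandardScaler(), model)
            scores = cross_val_score(pipe, X, y, groups=subjects, cv=LeaveOneGroupOut())
            print(f"{name}: mean accuracy {scores.mean():.2f}")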

    Exploiting Emotional Dependencies with Graph Convolutional Networks for Facial Expression Recognition

    Over the past few years, deep learning methods have shown remarkable results in many face-related tasks, including automatic facial expression recognition (FER) in-the-wild. Meanwhile, numerous models describing human emotional states have been proposed by the psychology community. However, we have no clear evidence as to which representation is more appropriate, and the majority of FER systems use either the categorical or the dimensional model of affect. Inspired by recent work in multi-label classification, this paper proposes a novel multi-task learning (MTL) framework that exploits the dependencies between these two models using a Graph Convolutional Network (GCN) to recognize facial expressions in-the-wild. Specifically, a shared feature representation is learned for both discrete and continuous recognition in an MTL setting. Moreover, the facial expression classifiers and the valence-arousal regressors are learned through a GCN that explicitly captures the dependencies between them. To evaluate the performance of our method under real-world conditions, we perform extensive experiments on the AffectNet and Aff-Wild2 datasets. The results of our experiments show that our method is capable of improving the performance across different datasets and backbone architectures. Finally, we also surpass the previous state-of-the-art methods on the categorical model of AffectNet. Comment: 9 pages, 8 figures, 5 tables, revised submission to the 16th IEEE International Conference on Automatic Face and Gesture Recognition
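
    A minimal PyTorch sketch of the multi-task idea, under explicit assumptions: a shared CNN backbone provides features, and a single graph-convolution layer, patterned after multi-label GCN work, maps learnable node embeddings (one node per discrete expression plus valence and arousal) to classifier/regressor weight vectors. The backbone, layer sizes, adjacency matrix, and output activations are placeholders, not the authors' configuration.

        # Assumed setup: 8 discrete expressions + 2 continuous targets (valence,
        # arousal); a uniform adjacency matrix stands in for the learned one.
        import torch
        import torch.nn as nn
        import torchvision.models as models

        NUM_EXPR, EMB = 8, 64

        class MTLFER(nn.Module):
            def __init__(self, adj):
                super().__init__()
                backbone = models.resnet18(weights=None)   # placeholder backbone
                feat_dim = backbone.fc.in_features
                backbone.fc = nn.Identity()
                self.backbone = backbone
                # One embedding per task node: NUM_EXPR classes + valence + arousal.
                self.nodes = nn.Parameter(torch.randn(NUM_EXPR + 2, EMB))
                self.register_buffer("adj", adj)           # row-normalized adjacency
                self.gcn = nn.Linear(EMB, feat_dim)        # one graph-conv layer

            def forward(self, x):
                f = self.backbone(x)                       # shared representation
                # Graph conv produces one weight vector per task node, so the
                # classifiers and regressors are coupled through the graph.
                w = torch.relu(self.gcn(self.adj @ self.nodes))
                out = f @ w.t()                            # (batch, NUM_EXPR + 2)
                logits = out[:, :NUM_EXPR]                 # expression logits
                va = torch.tanh(out[:, NUM_EXPR:])         # (valence, arousal) in [-1, 1]
                return logits, va

        adj = torch.full((NUM_EXPR + 2, NUM_EXPR + 2), 1.0 / (NUM_EXPR + 2))
        logits, va = MTLFER(adj)(torch.randn(2, 3, 224, 224))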

    Region-based facial expression recognition in still images

    In Facial Expression Recognition Systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) to these areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eye, nose, and mouth areas) using Haar-feature-based cascade classifiers, and these region-based features are stored in separate image files as a preprocessing step. Then, LBP is applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. The one-vs-rest SVM, a popular multi-class classification method, is employed with the Radial Basis Function (RBF) kernel for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate this approach.
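
    A hedged sketch of this pipeline in Python: Haar cascades locate facial regions, a uniform LBP histogram is computed per region, and the concatenated histograms feed a one-vs-rest RBF SVM. The eye and smile cascades below ship with OpenCV (a dedicated nose cascade lives in opencv_contrib); the paper's exact cascades, LBP parameters, and training data are not reproduced here.

        # Region detection -> per-region uniform LBP histogram -> concatenation.
        import cv2
        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.svm import SVC

        CASCADES = ["haarcascade_eye.xml", "haarcascade_smile.xml"]

        def region_lbp_features(gray):
            hists = []
            for xml in CASCADES:
                cascade = cv2.CascadeClassifier(cv2.data.haarcascades + xml)
                boxes = cascade.detectMultiScale(gray, 1.1, 5)
                if len(boxes) == 0:
                    continue  # region not found; a real system must handle this
                x, y, w, h = boxes[0]
                region = gray[y:y + h, x:x + w]
                # Uniform LBP with P=8 yields codes 0..9, hence 10 bins.
                lbp = local_binary_pattern(region, P=8, R=1, method="uniform")
                hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
                hists.append(hist)
            return np.concatenate(hists)  # one feature vector per face

        # Classification stage, given stacked features and expression labels:
        # clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(X_train, y_train)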

    Facial Expression Recognition

    Hybrid image representation methods for automatic image annotation: a survey

    In most automatic image annotation systems, images are represented with low-level features using either global methods or local methods. In global methods, the entire image is used as a unit. Local methods divide images either into blocks, where fixed-size sub-image blocks are adopted as sub-units, or into regions, using segmented regions as sub-units. In contrast to typical automatic image annotation methods that use either global or local features exclusively, several recent methods have considered incorporating both kinds of information, on the premise that combining the two levels of features is beneficial for annotating images. In this paper, we provide a survey of automatic image annotation techniques from the perspective of feature extraction and, to complement existing surveys in the literature, we focus on the emerging class of hybrid methods that combine both global and local features for image representation.
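
    To make the global/local distinction concrete, here is a small illustrative sketch of a hybrid representation: a global color histogram over the whole image concatenated with simple statistics from fixed-size sub-image blocks. The descriptor choices are placeholders for whatever global and local features a given annotation system actually uses.

        # Hybrid representation sketch: global descriptor + block-local descriptors.
        import numpy as np

        def hybrid_features(img, grid=4):
            # Global: joint RGB color histogram over the entire image.
            glob, _ = np.histogramdd(img.reshape(-1, 3), bins=(4, 4, 4),
                                     range=((0, 256),) * 3, density=True)
            # Local: simple statistics from each fixed-size sub-image block.
            h, w = img.shape[:2]
            bh, bw = h // grid, w // grid
            local = []
            for i in range(grid):
                for j in range(grid):
                    block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                    local.extend([block.mean(), block.std()])
            # Concatenation of the two levels is the "hybrid" step.
            return np.concatenate([glob.ravel(), np.array(local)])

        feat = hybrid_features(np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8))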

    Automatic Prediction of Facial Trait Judgments: Appearance vs. Structural Models

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations, and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system: multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings, applying them to make interaction more natural and to improve system performance. Here, we experimentally test whether automatic prediction of facial trait judgments (e.g. dominance) can be made using the full appearance information of the face, or whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information, and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict the perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of the perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of the face appearance; and c) for some traits, such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
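
    The structural model lends itself to a short sketch: scale-normalized pairwise distances among facial salient points serve as features for a standard learner that predicts a perceived-trait rating. The landmark source, the regressor choice, and the random data below are assumptions for illustration only.

        # Structural-model sketch: relations among salient points -> trait rating.
        from itertools import combinations
        import numpy as np
        from sklearn.svm import SVR

        def structural_features(points):
            # points: (n_landmarks, 2) salient-point coordinates for one face.
            d = np.array([np.linalg.norm(points[i] - points[j])
                          for i, j in combinations(range(len(points)), 2)])
            return d / d.max()   # scale-invariant relations among the points

        # Hypothetical training data: landmark sets and human trait ratings.
        faces = [np.random.rand(20, 2) for _ in range(50)]
        ratings = np.random.rand(50)          # e.g. perceived dominance per face
        X = np.vstack([structural_features(p) for p in faces])
        model = SVR(kernel="rbf").fit(X, ratings)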