
    First impressions: A survey on vision-based apparent personality trait analysis

    Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. In the past few years it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most considered cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures, and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and the potential impact that such methods could have on society, we present in this paper an up-to-date review of existing vision-based approaches to apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review aspects of subjectivity in data labeling and evaluation, as well as current datasets and challenges organized to push research in the field forward.

    Recent Trends in Deep Learning Based Personality Detection

    Recently, the automatic prediction of personality traits has received a lot of attention. In particular, personality trait prediction from multimodal data has emerged as a hot topic within affective computing. In this paper, we review significant machine learning models that have been employed for personality detection, with an emphasis on deep learning-based methods. The review provides an overview of the most popular approaches to automated personality detection, the available computational datasets, their industrial applications, and state-of-the-art machine learning models for personality detection, with a specific focus on multimodal approaches. Personality detection is a very broad and diverse topic: this survey covers only computational approaches and leaves out psychological studies of personality detection.

    Audio-visual deep learning regression of apparent personality

    Bachelor's thesis (Treball Final de Grau) in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona, year: 2020, supervisors: Sergio Escalera Guerrero, Cristina Palmero Cantariño, and Julio Jacques Junior. Personality perception is grounded in the relationship between a human being and the individuals around them. This kind of perception allows one to draw conclusions from the analysis and interpretation of observable cues, mainly facial expressions, tone of voice, and other nonverbal signals, enabling the construction of an apparent personality (or first impression) of a person. Apparent personality (or first impressions) are subjective, and subjectivity is an inherent property of perception, based exclusively on the point of view of each individual. In this project, we approximate such subjectivity using a multimodal deep neural network with audiovisual signals as input and a late-fusion strategy for handcrafted features, achieving accurate results. The aim of this work is to analyze the influence on the automatic prediction of apparent personality (based on the Big Five model) of the following characteristics: raw audio, visual information (sequences of face images), and high-level features, including Ekman's universal basic emotions, gender, and age. To this end, we define different modalities, combine them, and determine how much each contributes to the regression of apparent personality traits. The most remarkable results obtained through the experiments are as follows: in all modalities except audio-only, females have a higher average accuracy than males; for the happy emotion, the best accuracy score is found in the Conscientiousness trait; the Extraversion and Conscientiousness traits obtain the highest accuracy scores for almost all emotions; visual information is the input that most positively influences the results; and the chosen combination of high-level features slightly improves prediction accuracy.
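    The late-fusion design described in this abstract can be illustrated with a short sketch. This is a minimal, hypothetical example, not the thesis implementation: the feature dimensions, branch layers, and the assumption that audio and visual inputs arrive as precomputed feature vectors are all illustrative.

```python
# Minimal sketch of an audio-visual late-fusion regressor for the Big Five
# traits. Branch architectures and dimensions are illustrative assumptions,
# not the thesis's actual configuration.
import torch
import torch.nn as nn

class LateFusionPersonalityRegressor(nn.Module):
    def __init__(self, audio_dim=128, visual_dim=512, handcrafted_dim=10):
        super().__init__()
        # Audio branch: embeds a precomputed audio feature vector.
        self.audio_branch = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU())
        # Visual branch: embeds a face-image feature vector (e.g. CNN output).
        self.visual_branch = nn.Sequential(nn.Linear(visual_dim, 64), nn.ReLU())
        # Late fusion: concatenate branch embeddings with high-level
        # handcrafted features (emotion, gender, age) and regress 5 traits.
        self.head = nn.Sequential(
            nn.Linear(64 + 64 + handcrafted_dim, 32), nn.ReLU(),
            nn.Linear(32, 5), nn.Sigmoid())  # traits scored in [0, 1]

    def forward(self, audio, visual, handcrafted):
        fused = torch.cat([self.audio_branch(audio),
                           self.visual_branch(visual),
                           handcrafted], dim=1)
        return self.head(fused)

# Usage: one sample with random stand-in features.
model = LateFusionPersonalityRegressor()
traits = model(torch.randn(1, 128), torch.randn(1, 512), torch.randn(1, 10))
print(traits.shape)  # torch.Size([1, 5]) -> Big Five predictions
```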

    Assessment of ethnic and gender bias in automated first impression analysis

    This thesis investigates possible gender and ethnic biases in state-of-the-art deep learning methods for first impression analysis. Businesses that analyze candidates with such software want to find the best candidate without the person being judged by their gender or ethnicity. To this end, a first impression dataset annotated with the Big Five personality traits, along with each person's gender and ethnic background, was used. Biases were investigated with models trained on both balanced and imbalanced data, where balanced refers to equalizing the number of frames used from people classified as Asian, African-American, or Caucasian in the dataset. The results with the balanced and imbalanced datasets were similar. With all models, the accuracy for Asians was much higher than for the other groups, which may stem from the dataset not including enough variance in the Asian data, so that at evaluation time all Asians were treated similarly.
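    Frame-level balancing of this kind amounts to downsampling every group to the frame count of the smallest group. Below is a minimal sketch, assuming frames arrive as (frame_id, group) records; the record layout and label names are hypothetical, not the thesis's actual data format.

```python
# Minimal sketch of frame-level balancing across ethnicity labels.
import random
from collections import defaultdict

def balance_frames(records, seed=0):
    """records: list of (frame_id, group) pairs, e.g. ('vid1_f3', 'Asian')."""
    by_group = defaultdict(list)
    for frame_id, group in records:
        by_group[group].append(frame_id)
    # Downsample every group to the smallest group's frame count.
    n_min = min(len(frames) for frames in by_group.values())
    rng = random.Random(seed)
    balanced = []
    for group, frames in by_group.items():
        balanced.extend((f, group) for f in rng.sample(frames, n_min))
    return balanced
```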

    Interpretable Convolutional Neural Networks

    This paper proposes a method to modify traditional convolutional neural networks (CNNs) into interpretable CNNs, in order to clarify knowledge representations in the high conv-layers of a CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. We do not need any annotations of object parts or textures to supervise the learning process; instead, the interpretable CNN automatically assigns an object part to each filter in a high conv-layer during learning. Our method can be applied to different types of CNNs with different structures. The clear knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e., which patterns the CNN bases its decisions on. Experiments showed that filters in an interpretable CNN were more semantically meaningful than those in traditional CNNs.
    Comment: In this version, we release the website of the code. Compared to the previous version, we have corrected all values of location instability in Tables 3-6 by dividing them by sqrt(2), i.e., a = a/sqrt(2). These revisions do NOT decrease the significance of our method's superior performance, because we make the same correction to the location-instability values of all baselines.
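    To make the filter-to-part assignment concrete, here is a loose sketch in the spirit of the paper: each filter's activation map is pushed toward the best-matching single-peak template, so the filter consistently fires at one part location. This is a simplified approximation, not the paper's exact template and mutual-information formulation; the function names, the template shape, and the loss form are assumptions.

```python
# Simplified interpretability loss: reward each filter for aligning its
# activation map with one single-peak spatial template (one per position).
import torch

def part_templates(size, tau=0.5):
    """One template per spatial position, peaked at 1 at that position."""
    ys, xs = torch.meshgrid(torch.arange(size), torch.arange(size),
                            indexing='ij')
    centers = torch.stack([ys.flatten(), xs.flatten()], dim=1).float()
    grid = torch.stack([ys, xs], dim=-1).float()            # (H, W, 2)
    dist = (grid[None] - centers[:, None, None]).norm(dim=-1)  # (P, H, W)
    return torch.clamp(1 - dist / (tau * size), min=-1)

def filter_part_loss(feat):
    """feat: high conv-layer activations, shape (batch, channels, H, W)."""
    T = part_templates(feat.shape[-1]).to(feat.device)      # (P, H, W)
    # Response of each filter's map to each template.
    resp = torch.einsum('bchw,phw->bcp', feat, T)
    # Reward each filter's alignment with its best-matching template,
    # which pushes activations toward a single consistent peak.
    return -resp.max(dim=2).values.mean()

# Usage sketch: add to the task loss with a small weight.
feat = torch.relu(torch.randn(4, 64, 7, 7))
loss = filter_part_loss(feat)
```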