6 research outputs found

    Estimating self-assessed personality from body movements and proximity in crowded mingling scenarios

    This paper focuses on the automatic classification of self-assessed personality traits from the HEXACO inventory during crowded mingle scenarios. We exploit acceleration and proximity data from a wearable device hung around the neck. Unlike most state-of-the-art studies, addressing personality estimation during mingle scenarios provides a challenging social context, as people interact dynamically and freely in a face-to-face setting. While many former studies use audio to extract speech-related features, we present a novel method of extracting an individual's speaking status from a single body-worn triaxial accelerometer, which scales easily to large populations. Moreover, by fusing both speech- and movement-energy-related cues from acceleration alone, our experimental results show improvements in the estimation of Humility over features extracted from a single behavioral modality. We validated our method on 71 participants, obtaining an accuracy of 69% for Honesty, Conscientiousness and Openness to Experience. To our knowledge, this is the largest validation of personality estimation carried out in such a social context with simple wearable sensors.
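    The accelerometer-based speaking-status idea from this abstract can be illustrated with a minimal sketch. This is not the authors' actual pipeline; the sampling rate, window length, and variance threshold below are illustrative assumptions:

    ```python
    import numpy as np

    def movement_energy(acc, fs=20, win_s=1.0):
        """Per-window energy of a triaxial accelerometer signal.

        acc: array of shape (n_samples, 3); fs: sampling rate in Hz;
        win_s: window length in seconds. Returns one energy value per
        non-overlapping window (variance of the acceleration magnitude).
        """
        win = int(fs * win_s)
        n = (len(acc) // win) * win               # drop the ragged tail
        mag = np.linalg.norm(acc[:n], axis=1)     # per-sample magnitude
        return mag.reshape(-1, win).var(axis=1)   # variance as energy proxy

    def speaking_status(acc, fs=20, thresh=0.05):
        """Crude binary speaking proxy: windows whose torso-vibration
        energy exceeds a threshold are labeled 'speaking'."""
        return movement_energy(acc, fs) > thresh
    ```

    The same windowed energies can serve double duty as movement features, which is the kind of single-sensor fusion the abstract describes.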

    Comparing Social Science and Computer Science Workflow Processes for Studying Group Interactions

    In this article, a team of authors from the Geeks and Groupies workshop in Leiden, the Netherlands, compare prototypical approaches to studying group interaction in the social science and computer science disciplines, which we call workflows. To help social and computer science scholars understand and manage these differences, we organize workflows into three major stages: research design, data collection, and analysis. For each stage, we offer a brief overview of how scholars from each discipline work. We then compare those approaches and identify potential synergies and challenges. We conclude our article by discussing potential directions for more integrated and mutually beneficial collaboration that go beyond the producer–consumer model.

    Personality Trait Classification via Co-Occurrent Multiparty Multimodal Event Discovery

    This paper proposes a novel feature extraction framework from multi-party multimodal conversation for inference of personality traits and emergent leadership. The proposed framework represents multimodal features as the combination of each participant's nonverbal activity and group activity. This feature representation enables comparison of the nonverbal patterns extracted from participants of different groups in a metric space. It captures how the target member exhibits nonverbal behavior observed in a group (e.g., the member speaks while all members move their bodies), and can be applied to any kind of multiparty conversation task. Frequent co-occurrent events are discovered using graph clustering from multimodal sequences. The proposed framework is applied to the ELEA corpus, an audiovisual dataset collected from group meetings. We evaluate the framework on the binary classification task of 10 personality traits. Experimental results show that the model trained with co-occurrence features obtained higher accuracy than previous related work in 8 out of 10 traits. In addition, the co-occurrence features improve the accuracy by 2% to 17%.
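    The co-occurrence representation can be sketched as simple frame-level rates over binary activity channels. This is a hypothetical simplification for illustration; the paper's actual features come from graph clustering of frequent multimodal event patterns:

    ```python
    import numpy as np

    def cooccurrence_features(target, others):
        """Rates of co-occurrent nonverbal events between a target
        participant and the rest of the group.

        target: binary array (n_frames,), e.g. 'target is speaking'.
        others: binary array (n_members, n_frames), e.g. 'member moves body'.
        """
        target = np.asarray(target, bool)
        others = np.asarray(others, bool)
        any_other = others.any(axis=0)    # at least one other member active
        all_others = others.all(axis=0)   # every other member active
        return {
            "target_and_any": float(np.mean(target & any_other)),
            "target_and_all": float(np.mean(target & all_others)),
            "target_alone":   float(np.mean(target & ~any_other)),
        }
    ```

    Because these rates are plain numbers rather than raw sequences, participants from different groups can be compared in a common feature space, which is the property the abstract highlights.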

    First impressions: A survey on vision-based apparent personality trait analysis

    © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most considered cues of information for analyzing personality. However, recently there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and of the potential impact that this sort of method could have on society, we present in this paper an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, aspects of subjectivity in data labeling/evaluation, as well as current datasets and challenges organized to push research in the field, are reviewed.