
    Detecting Low Rapport During Natural Interactions in Small Groups from Non-Verbal Behaviour

    Rapport, the close and harmonious relationship in which interaction partners are "in sync" with each other, has been shown to result in smoother social interactions, improved collaboration, and better interpersonal outcomes. In this work, we are the first to investigate automatic prediction of low rapport during natural interactions within small groups. This task is challenging given that rapport manifests only in subtle non-verbal signals that are, in addition, subject to the influence of group dynamics as well as interpersonal idiosyncrasies. We record videos of unscripted discussions of three to four people using a multi-view camera system and microphones. We analyse a rich set of non-verbal signals for rapport detection, namely facial expressions, hand motion, gaze, speaker turns, and speech prosody. Using facial features, we can detect low rapport with an average precision of 0.7 (chance level at 0.25), while incorporating prior knowledge of participants' personalities even enables early prediction without a drop in performance. We further provide a detailed analysis of different feature sets and of the amount of information contained in different temporal segments of the interactions. Comment: 12 pages, 6 figures.
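    The reported figures (average precision of 0.7 against a chance level of 0.25) can be read as a summary of a precision-recall curve whose baseline equals the prevalence of the positive class. The sketch below illustrates that evaluation on placeholder data with scikit-learn; the features, labels, and classifier are hypothetical and do not reproduce the authors' multi-view pipeline.

    # Hypothetical sketch: scoring a low-rapport detector with average precision.
    # Features, labels, and model are placeholders, not the paper's pipeline.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import average_precision_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Placeholder "facial feature" vectors for 400 interaction segments;
    # ~25% are labelled low-rapport, matching the reported 0.25 chance level.
    X = rng.normal(size=(400, 32))
    y = (rng.random(400) < 0.25).astype(int)
    X[y == 1] += 0.8  # give the positive class a detectable offset

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]

    # Average precision summarises the precision-recall curve; its chance level
    # equals the positive-class prevalence.
    print("average precision:", average_precision_score(y_te, scores))
    print("chance level:", y_te.mean())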

    Towards responsive Sensitive Artificial Listeners

    This paper describes work in the recently started project SEMAINE, which aims to build a set of Sensitive Artificial Listeners – conversational agents designed to sustain an interaction with a human user despite limited verbal skills, through robust recognition and generation of non-verbal behaviour in real time, both when the agent is speaking and when it is listening. We report on data collection and on the design of a system architecture with a view to real-time responsiveness.

    Recognizing Emotions in a Foreign Language

    Expressions of basic emotions (joy, sadness, anger, fear, disgust) can be recognized pan-culturally from the face, and it is assumed that these emotions can likewise be recognized from a speaker's voice, regardless of an individual's culture or linguistic ability. Here, we compared how monolingual speakers of Argentine Spanish recognize basic emotions from pseudo-utterances ("nonsense speech") produced in their native language and in three foreign languages (English, German, Arabic). Results indicated that vocal expressions of basic emotions could be decoded in each language condition at accuracy levels exceeding chance, although Spanish listeners performed significantly better overall in their native language ("in-group advantage"). Our findings suggest that the ability to understand vocally expressed emotions in speech is partly independent of linguistic ability and involves universal principles, although this ability is also shaped by linguistic and cultural variables.

    First impressions: A survey on vision-based apparent personality trait analysis

    Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most widely considered cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures, and behaviors, and to use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and of the potential impact that such methods could have on society, we present in this paper an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review aspects of subjectivity in data labeling and evaluation, as well as current datasets and challenges organized to push research in the field forward.
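    To make the kind of pipeline the survey covers concrete, the sketch below regresses apparent Big Five trait scores from pre-extracted visual descriptors on synthetic data. It is an illustration under assumed names and shapes, not any specific method from the surveyed literature; real systems would obtain such descriptors from face or body analysis models.

    # Illustrative sketch: regressing apparent Big Five traits from visual
    # descriptors. The data are synthetic; no surveyed method is reproduced.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    TRAITS = ["openness", "conscientiousness", "extraversion",
              "agreeableness", "neuroticism"]

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 128))                 # placeholder per-video descriptors
    W = rng.normal(size=(128, 5)) * 0.1
    Y = X @ W + 0.5 + rng.normal(scale=0.05, size=(300, 5))  # synthetic trait scores

    # One ridge regressor per apparent trait, scored with 5-fold cross-validated R^2.
    for t, trait in enumerate(TRAITS):
        r2 = cross_val_score(Ridge(alpha=1.0), X, Y[:, t], cv=5).mean()
        print(f"{trait:18s} mean R^2 = {r2:.2f}")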

    The time course of authenticity and valence perception in nonverbal emotional vocalizations

    There is evidence that the recognition of sadness and happiness in nonverbal vocalizations reaches an adult standard before the recognition of anger and fear, and that men and women are equally good at recognizing emotions, regardless of whether male or female speakers produce them. Still, there is no evidence regarding how much time we need to identify the authenticity of vocal emotional expressions, or the type of vocalization itself. How much acoustic information do we need to perceive whether a vocal expression, such as laughter, is authentic or voluntary? How long does it take to perceive whether it is laughter or crying? The present study addresses these questions. The main objective is to determine the time course of authenticity and vocalization-type recognition in laughter and crying sounds. For this purpose, a gating paradigm was used with a sample of 395 participants. Results showed that the recognition accuracy of nonverbal vocalizations improves as the gate duration increases, and that identification of the type of vocalization (laughter vs. crying) happens at earlier stages than identification of its authenticity (authentic vs. voluntary).
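    In a gating paradigm, each stimulus is presented in segments ("gates") of increasing duration, and recognition is probed after each gate. The snippet below shows how such gated stimuli could be prepared from an audio file; the file name, gate durations, and the use of the soundfile library are assumptions for illustration, not details from the study.

    # Minimal sketch of gating-paradigm stimulus preparation: each "gate" is the
    # original vocalization truncated at a progressively longer duration.
    # The file name and gate lengths are hypothetical.
    import soundfile as sf

    GATE_DURATIONS_S = [0.2, 0.4, 0.6, 0.8, 1.0]       # assumed gate lengths (seconds)

    audio, sr = sf.read("laughter_authentic_01.wav")   # placeholder stimulus file

    for i, dur in enumerate(GATE_DURATIONS_S, start=1):
        n_samples = int(dur * sr)
        gate = audio[:n_samples]                       # keep only the first `dur` seconds
        sf.write(f"laughter_authentic_01_gate{i}.wav", gate, sr)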

    Personalized Core Vocabulary Books In A Federal Four Special Education School

    The world of special education has progressed by leaps and bounds over the past 25 years or more. Students who were previously underserved, or not served at all, are now being fully included within the educational environment. Through teaching students with moderate to severe disabilities, the author determined that an area of special education lacking adequate supports was communication, specifically supports involving augmentative and alternative communication. Core vocabulary is part of best-practice systems for supporting students who are non-verbal or are early communicators. Though this strategy was considered by many to be best practice (Cannon and Edmond, 2009), there was a clear need for resources to better instruct students in how to use these words. Because of this, the purpose of the capstone project is to answer the question: what are the attributes of personalized books that could help students in a special education setting acquire core vocabulary? To develop the capstone project, current best-practice strategies for teaching students with autism and communication needs were reviewed, in addition to reflecting on personal teaching experiences in a federal four setting special education school. The capstone project found that core vocabulary should be presented in a clear manner and that personalized core vocabulary books could be an effective strategy for teaching students to use core vocabulary words functionally. Additional elements of personalization can be included, such as student names and pictures, a student's likes and dislikes, and personalized situations. After the completion of the capstone project, it was determined that, to ease the creation of the books, a software program or template should be developed so that additional books with differing core vocabulary words can be created.
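    The closing recommendation, a software program or template for producing further books, could be prototyped along the lines sketched below. The data model, page wording, and word list are hypothetical illustrations; the capstone project does not specify an implementation.

    # Minimal sketch of a template-driven, personalized core-vocabulary book
    # generator. Student details, page wording, and word list are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Student:
        name: str
        likes: list[str]

    CORE_WORDS = ["go", "stop", "more", "want", "help"]   # example core vocabulary

    PAGE_TEMPLATE = '{name} says "{word}". {name} can use "{word}" when playing with {like}.'

    def build_book(student: Student, words: list[str]) -> list[str]:
        """Return one personalized page of text per core vocabulary word."""
        pages = []
        for i, word in enumerate(words):
            like = student.likes[i % len(student.likes)]
            pages.append(PAGE_TEMPLATE.format(name=student.name, word=word, like=like))
        return pages

    if __name__ == "__main__":
        for page in build_book(Student(name="Alex", likes=["trains", "bubbles"]), CORE_WORDS):
            print(page)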