
    Virtual Meeting Rooms: From Observation to Simulation

    Virtual meeting rooms are used to simulate real meeting behavior and can show how people behave during conversations: how they gesture, move their heads and bodies, and direct their gaze. They are used for visualising models of meeting behavior and for evaluating these models. They are also used to show the effects of controlling certain parameters on behavior, and in experiments on how communication is affected when various channels of information (speech, gaze, gesture, posture) are switched off or manipulated in other ways. The paper presents the various stages in the development of a virtual meeting room and illustrates its uses by presenting results of experiments on whether human judges can infer conversational roles in a virtual meeting situation when they can only see the head movements of the participants.

    GLIMPSED: Improving natural language processing with gaze data


    Mapping dynamic interactions among cognitive biases in depression

    Depression is theorized to be caused in part by biased cognitive processing of emotional information. Yet, prior research has adopted a reductionist approach that does not characterize how biases in cognitive processes such as attention and memory work together to confer risk for this complex multifactorial disorder. Grounded in affective and cognitive science, we highlight four mechanisms to understand how attention biases, working memory difficulties, and long-term memory biases interact and contribute to depression. We review evidence for each mechanism and highlight time- and context-dependent dynamics. We outline methodological considerations and recommendations for research in this area. We conclude with directions to advance the understanding of depression risk, cognitive training interventions, and transdiagnostic properties of cognitive biases and their interactions.

    Self-Supervised Vision-Based Detection of the Active Speaker as Support for Socially-Aware Language Acquisition

    This paper presents a self-supervised method for visual detection of the active speaker in a multi-person spoken interaction scenario. Active speaker detection is a fundamental prerequisite for any artificial cognitive system attempting to acquire language in social settings. The proposed method is intended to complement the acoustic detection of the active speaker, thus improving the system's robustness in noisy conditions. The method can detect an arbitrary number of possibly overlapping active speakers based exclusively on visual information about their faces. Furthermore, the method does not rely on external annotations, consistent with the constraints of cognitive development: instead, it uses information from the auditory modality to support learning in the visual domain. This paper reports an extensive evaluation of the proposed method on a large multi-person face-to-face interaction dataset. The results show good performance in a speaker-dependent setting, but significantly lower performance in a speaker-independent setting. We believe that the proposed method represents an essential component of any artificial cognitive system or robotic platform engaging in social interactions.
    Comment: 10 pages, IEEE Transactions on Cognitive and Developmental Systems
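
    The abstract describes the approach only at a high level, but the core idea (the audio modality generating training targets for a visual model) can be sketched. The following is a minimal, hypothetical illustration in PyTorch, not the authors' implementation: the network architecture, the audio_vad_labels pseudo-labels from an off-the-shelf voice activity detector, and all names are assumptions made for this sketch.

        # Hypothetical sketch: audio-derived voice-activity labels supervise a
        # visual active-speaker classifier, so no manual annotation is needed.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class FaceSpeakerNet(nn.Module):
            """Small CNN mapping a face crop to a speaking/not-speaking logit."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(32, 1)

            def forward(self, x):
                return self.head(self.features(x).flatten(1)).squeeze(1)

        def train_step(model, optimizer, face_crops, audio_vad_labels):
            # face_crops: (B, 3, H, W) tensors; audio_vad_labels: (B,) floats in
            # {0., 1.} produced by an audio voice activity detector; this is the
            # cross-modal signal that replaces manual annotation.
            loss = F.binary_cross_entropy_with_logits(model(face_crops),
                                                      audio_vad_labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()

        model = FaceSpeakerNet()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        train_step(model, opt, torch.randn(8, 3, 64, 64),
                   torch.randint(0, 2, (8,)).float())

    At inference time only the visual branch runs, which is what makes such a detector useful when the acoustic channel is noisy.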

    Automatic Gaze Classification for Aviators: Using Multi-task Convolutional Networks as a Proxy for Flight Instructor Observation

    In this work, we investigate how flight instructors observe aviator scan patterns and assign quality to an aviator's gaze. We first establish that instructors reliably assign similar quality ratings to an aviator's scan patterns, and then investigate methods to automate this quality assessment using machine learning. In particular, we focus on the classification of gaze for aviators in a mixed-reality flight simulation. We create and evaluate two machine learning models for classifying the gaze quality of aviators: a task-agnostic model and a multi-task model. Both models use deep convolutional neural networks to classify the quality of gaze patterns for 40 pilots, operators, and novices, as compared to visual inspection by three experienced flight instructors. Our multi-task model can automate the process of gaze inspection with an average accuracy of over 93.0% across three separate flight tasks. Our approach could assist existing flight instructors in providing feedback to learners, or it could open the door to more automated feedback for pilots learning to carry out different maneuvers.
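
    The abstract does not give architectural details; as a hedged illustration of what a multi-task convolutional classifier for gaze quality could look like, here is a minimal PyTorch sketch. The shared trunk, the per-task heads, the assumption that scan patterns are rasterized into single-channel fixation heatmaps, and names such as MultiTaskGazeNet are all illustrative, not the paper's design.

        # Hypothetical sketch: one shared convolutional trunk over gaze fixation
        # heatmaps, with a separate quality-classification head per flight task.
        import torch
        import torch.nn as nn

        class MultiTaskGazeNet(nn.Module):
            def __init__(self, n_tasks=3, n_quality_classes=2):
                super().__init__()
                self.trunk = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                # One head per flight task; all tasks share the trunk features.
                self.heads = nn.ModuleList(
                    [nn.Linear(32, n_quality_classes) for _ in range(n_tasks)]
                )

            def forward(self, heatmaps, task_id):
                return self.heads[task_id](self.trunk(heatmaps))

        model = MultiTaskGazeNet()
        logits = model(torch.randn(8, 1, 64, 64), task_id=0)  # batch for task 0

    A shared trunk with per-task heads is one common way to realize multi-task learning: scarce per-task data can share a gaze representation, which may explain the accuracy gain over a task-agnostic model.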

    First impressions: A survey on vision-based apparent personality trait analysis

    Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most widely considered cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures, and behaviors, and to use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and the potential impact such methods could have on society, we present in this paper an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review aspects of subjectivity in data labeling and evaluation, as well as current datasets and challenges organized to push research in the field forward.

    Attention to the model's face when learning from video modeling examples in adolescents with and without autism spectrum disorder

    We investigated the effects of seeing the instructor's (i.e., the model's) face in video modeling examples on students' attention and their learning outcomes. Research with university students suggested that the model's face attracts students' attention away from what the model is doing, but this did not hamper learning. We aimed to investigate whether we would replicate this finding in adolescents (prevocational education) and to establish how adolescents with autism spectrum disorder, who have been found to look less at faces generally, would process video examples in which the model's face is visible. Results showed that typically developing adolescents who did see the model's face paid significantly less attention to the task area than typically developing adolescents who did not see the model's face. Adolescents with autism spectrum disorder paid less attention to the model's face and more to the task demonstration area than typically developing adolescents who saw the model's face. These differences in viewing behavior, however, did not affect learning outcomes. This study provides further evidence that seeing the model's face in video examples affects students' attention but not their learning outcomes.