
    Analysing the Direction of Emotional Influence in Nonverbal Dyadic Communication: A Facial-Expression Study

    Identifying the direction of emotional influence in a dyadic dialogue is of increasing interest in the psychological sciences, with applications in psychotherapy, the analysis of political interactions, and interpersonal conflict behavior. Facial expressions are widely described as automatic and thus hard to overtly influence, which makes them a useful measure for better understanding unintentional behavioral cues about social-emotional cognitive processes. With this view, this study analyses the direction of emotional influence in dyadic dialogue based on facial expressions only. We exploit computer vision capabilities along with causal inference theory for quantitative verification of hypotheses on the direction of emotional influence, i.e., causal effect relationships, in dyadic dialogues. We address two main issues. First, in a dyadic dialogue, emotional influence occurs over transient time intervals and with intensity and direction that vary over time. To this end, we propose a relevant interval selection approach that we apply prior to causal inference to identify the transient intervals where causal inference should be performed. Second, we propose to use fine-grained facial expressions that are present when strong distinct facial emotions are not visible. To specify the direction of influence, we apply the concept of Granger causality to the time series of facial expressions over the selected relevant intervals. We tested our approach on newly collected experimental data. Based on the quantitative verification of hypotheses on the direction of emotional influence, we show that the proposed approach is promising for revealing the causal effect pattern in various instructed interaction conditions. (arXiv admin note: text overlap with arXiv:1810.1217)
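    The abstract specifies only that Granger causality is applied to facial-expression time series over the selected intervals. As a rough illustration of that kind of test (not the authors' implementation), the following sketch runs a pairwise Granger test with the statsmodels library on hypothetical action-unit intensity series; the variable names, placeholder data, and lag choice are assumptions.

```python
# Sketch of a pairwise Granger-causality test between two facial-expression time
# series (e.g., action-unit intensities of person A and person B over one selected
# "relevant interval"). Variable names, placeholder data, and maxlag are
# illustrative assumptions, not taken from the paper.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
au_a = rng.normal(size=200)                                   # AU intensity of person A (placeholder)
au_b = 0.6 * np.roll(au_a, 2) + 0.4 * rng.normal(size=200)    # B lags A, so A should "Granger-cause" B

# grangercausalitytests checks whether the SECOND column helps predict the FIRST:
# here, whether A's past improves prediction of B.
data = np.column_stack([au_b, au_a])
results = grangercausalitytests(data, maxlag=4, verbose=False)

for lag, res in results.items():
    f_stat, p_value = res[0]["ssr_ftest"][:2]
    print(f"lag={lag}: F={f_stat:.2f}, p={p_value:.4f}")
```

    In such a setup, a small p-value at some lag would be read as evidence that person A's expressions help predict person B's, i.e., influence in the A-to-B direction within that interval.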

    Affective reactions towards socially interactive agents and their computational modeling

    Over the past 30 years, researchers have studied human reactions towards machines using the Computers Are Social Actors paradigm, which contrasts reactions towards computers with reactions towards humans. The same period has seen improvements in technology that have led to tremendous changes in computer interfaces and to the development of Socially Interactive Agents. This raises the question of how humans react to Socially Interactive Agents. Answering it requires knowledge from several disciplines, which is why this interdisciplinary dissertation is positioned within psychology and computer science. It aims to investigate affective reactions to Socially Interactive Agents and how these can be modeled computationally. After a general introduction and background, the thesis first provides an overview of the Socially Interactive Agent system used in this work. Second, it presents a study comparing a human and a virtual job interviewer, which shows that both interviewers induce shame in participants to the same extent. Third, it reports on a study investigating obedience towards Socially Interactive Agents; the results indicate that participants obey human and virtual instructors in similar ways, and that both types of instructors evoke feelings of stress and shame to the same extent. Fourth, it presents a stress management training that uses biofeedback together with a Socially Interactive Agent; the study shows that a virtual trainer can teach coping techniques for emotionally challenging social situations. Fifth, it introduces MARSSI, a computational model of user affect; the evaluation of the model shows that sequences of social signals can be related to affective reactions while taking emotion regulation processes into account. Finally, the Deep method is proposed as a starting point for deeper computational modeling of internal emotions. The method combines social signals, verbalized introspection information, context information, and theory-driven knowledge; an exemplary application to the emotion shame and a schematic dynamic Bayesian network for its modeling are illustrated. Overall, this thesis provides evidence that human reactions towards Socially Interactive Agents are very similar to those towards humans, and that it is possible to model these reactions computationally.
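    The abstract mentions a schematic dynamic Bayesian network for modeling shame but gives no structure or parameters. Purely as an illustration of how such a two-slice model can be filtered over time, here is a minimal sketch with a hypothetical binary "shame" state observed through a single gaze-aversion signal; all states, signals, and probabilities are invented for the example and are not taken from MARSSI or the Deep method.

```python
# Minimal sketch of a two-slice dynamic Bayesian network (HMM-style filter) for a
# hidden binary "shame" state observed through a binary "gaze aversion" signal.
# All state names, signals, and probabilities are illustrative assumptions.
import numpy as np

# P(shame_t | shame_{t-1}): rows = previous state, cols = next state (0 = no, 1 = yes)
transition = np.array([[0.9, 0.1],
                       [0.3, 0.7]])
# P(gaze_aversion | shame): rows = shame state, cols = observation (0 = no aversion, 1 = aversion)
emission = np.array([[0.8, 0.2],
                     [0.4, 0.6]])

belief = np.array([0.95, 0.05])          # prior P(shame) at t = 0
observations = [0, 1, 1, 1, 0]           # hypothetical per-timestep gaze-aversion detections

for t, obs in enumerate(observations, start=1):
    predicted = belief @ transition          # time update through the transition model
    updated = predicted * emission[:, obs]   # measurement update with the observed signal
    belief = updated / updated.sum()         # renormalise to a valid distribution
    print(f"t={t}: P(shame)={belief[1]:.3f}")
```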

    Defining turn taking in intervention for young children with autism: A review of the literature

    Turn taking is a form of preverbal, dyadic, reciprocal communication that may support key areas of development, such as language and joint attention, and may serve different functions depending on each communicative partner's intent. As such, it has been incorporated in interventions targeting various outcomes in young children with autism. However, researchers define turn taking inconsistently, and explorations of how turn taking is defined across these interventions have not yet been reported in the literature. Therefore, the purpose of this review was to investigate how turn taking is operationally defined, based on communicative intent, in the current literature on interventions for young children with autism, and to explore additional intervention content to provide fuller context on how turn taking has been promoted. A search was conducted across databases to identify intervention studies for young children with autism that incorporated an embedded turn-taking component. Peer-reviewed articles were then coded based on turn-taking communicative intent, and additional intervention content was categorized. Findings across 14 studies indicate variability among turn-taking definitions in both communicative function and form. The results also reveal that turn taking has been promoted through different intervention approaches that incorporate diverse agents, settings, and methodologies. Researchers and practitioners should consider specificity and clarity when defining turn taking to optimally meet the developmental needs of young children with autism in future interventions.

    Can Real-time, Adaptive Human-Robot Motor Coordination Improve Humans’ Overall Perception of a Robot?

    Previous research on social interaction among humans suggests that interpersonal motor coordination can help to establish social rapport. Our research addresses the question of whether, in a human-humanoid interaction experiment, a person's overall perception of a robot can be improved by realising motor coordination behaviour that allows the robot to adapt in real time to the person's behaviour. A synchrony detection method using information distance was adopted to realise the real-time human-robot motor coordination behaviour, guiding the humanoid robot to coordinate its movements with the person's by measuring the behaviour synchrony between robot and human. Participant feedback indicated that most participants preferred to interact with the humanoid robot when it had the adaptive motor coordination capability. The results of this proof-of-concept study suggest that the motor coordination mechanism improved humans' overall perception of the humanoid robot. Together with our previous finding that humans actively coordinate their behaviours with a humanoid robot's behaviours, this study further supports the hypothesis that bidirectional motor coordination can be a valid approach to facilitate adaptive human-humanoid interaction.
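    The abstract names information distance as the basis of the synchrony detection method but does not give the formulation. The sketch below shows one plausible mutual-information-based distance between two discretised movement signals; the binning, the normalisation, and the signal names are assumptions rather than the paper's method.

```python
# Rough sketch of an information-distance style synchrony score between two 1-D
# movement signals (e.g., human and robot joint velocities). Binning scheme and
# normalisation are illustrative assumptions, not the method used in the paper.
import numpy as np

def discretise(signal, n_bins=8):
    """Map a continuous signal to integer bin indices using quantile edges."""
    edges = np.quantile(signal, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(signal, edges)

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def joint_entropy(x, y):
    pairs = x.astype(np.int64) * (y.max() + 1) + y   # encode each (x, y) pair as one label
    return entropy(pairs)

def information_distance(x, y):
    """One normalised variant: 0 = fully shared information, 1 = independent signals."""
    hx, hy = entropy(x), entropy(y)
    mi = hx + hy - joint_entropy(x, y)
    return 1.0 - mi / max(hx, hy)

rng = np.random.default_rng(1)
human = rng.normal(size=500)
robot = 0.7 * human + 0.3 * rng.normal(size=500)     # partially coordinated robot motion

x, y = discretise(human), discretise(robot)
print(f"information distance: {information_distance(x, y):.3f}")  # lower = more synchronous
```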

    Understanding joint action: Current theoretical and empirical approaches

    Joint actions are omnipresent, ranging from a handshake between two people to the coordination of groups of people playing in an orchestra. We are highly skilled at coordinating our actions with those of others to reach common goals and rely on this ability throughout our daily lives. What are the social, cognitive and neural processes underlying this ability? How do others around us influence our task representations? How does joint action influence interpersonal interactions? How do language and gesture support joint action? What differentiates joint action from individual action? This article forms an introductory editorial to the field of joint action. It accompanies contributions to the special issue entitled "Current Issues in Joint Action Research". The issue brings together conceptual and empirical approaches on different topics, ranging from lower-level issues such as the link between perception and joint action, to higher-level issues such as language as a form of joint action.

    Sensing, interpreting, and anticipating human social behaviour in the real world

    Low-level nonverbal social signals like glances, utterances, facial expressions, and body language are central to human communication and have been shown to be connected to important high-level constructs such as emotions, turn-taking, rapport, or leadership. A prerequisite for creating social machines that can support humans in, e.g., education, psychotherapy, or human resources is the ability to automatically sense, interpret, and anticipate human nonverbal behaviour. While promising results have been shown in controlled settings, automatically analysing unconstrained situations, e.g. daily-life settings, remains challenging. Furthermore, anticipation of nonverbal behaviour in social situations is still largely unexplored. The goal of this thesis is to move closer to the vision of social machines in the real world. It makes fundamental contributions along the three dimensions of sensing, interpreting, and anticipating nonverbal behaviour in social interactions. First, robust recognition of low-level nonverbal behaviour lays the groundwork for all further analysis steps. Advancing human visual behaviour sensing is especially relevant, as the current state of the art is still not satisfactory in many daily-life situations. While many social interactions take place in groups, current methods for unsupervised eye contact detection can only handle dyadic interactions. We propose a novel unsupervised method for multi-person eye contact detection that exploits the connection between gaze and speaking turns. Furthermore, we make use of mobile device engagement to address the calibration drift that occurs during daily-life use of mobile eye trackers. Second, we improve the interpretation of social signals in terms of higher-level social behaviours. In particular, we propose the first dataset and method for emotion recognition from bodily expressions of freely moving, unaugmented dyads. Furthermore, we are the first to study low rapport detection in group interactions and to investigate a cross-dataset evaluation setting for the emergent leadership detection task. Third, human visual behaviour is special because it functions as a social signal and also determines what a person is seeing at a given moment in time. Being able to anticipate human gaze opens up the possibility for machines to share attention with humans more seamlessly, or to intervene in a timely manner if humans are about to overlook important aspects of the environment. We are the first to propose methods for the anticipation of eye contact in dyadic conversations, as well as in the context of mobile device interactions during daily life, thereby paving the way for interfaces that are able to proactively intervene and support interacting humans.
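    The thesis abstract states only that the unsupervised multi-person eye contact detector exploits the connection between gaze and speaking turns. As a schematic illustration of that weak-supervision idea (not the thesis method), the sketch below clusters one listener's gaze directions and assigns each cluster to the person whose speaking turns it co-occurs with most often; the feature shapes, number of clusters, and assignment rule are assumptions.

```python
# Schematic sketch of weakly supervised eye-contact detection for one listener in a
# group conversation: gaze directions are clustered, and each cluster is assigned to
# the co-present person whose speaking frames it overlaps most. Placeholder data only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_frames, n_people = 1000, 3                 # frames for one listener; 3 potential gaze targets

gaze = rng.normal(size=(n_frames, 2))        # per-frame 2-D gaze direction (placeholder)
speaker = rng.integers(0, n_people, size=n_frames)   # per-frame ID of the current speaker

# 1) Cluster gaze directions into one cluster per potential target.
clusters = KMeans(n_clusters=n_people, n_init=10, random_state=0).fit_predict(gaze)

# 2) Assign each gaze cluster to the person whose speaking frames it overlaps most.
cluster_to_person = {}
for c in range(n_people):
    in_cluster = clusters == c
    overlap = [np.sum(in_cluster & (speaker == p)) for p in range(n_people)]
    cluster_to_person[c] = int(np.argmax(overlap))

# 3) "Eye contact with the speaker" = frames where the assigned target is the speaker.
looking_at = np.array([cluster_to_person[c] for c in clusters])
eye_contact = looking_at == speaker
print(f"estimated eye-contact ratio: {eye_contact.mean():.2f}")
```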

    Bridging the Research Gap: Making HRI Useful to Individuals with Autism

    While there is a rich history of studies involving robots and individuals with autism spectrum disorders (ASD), few of these studies have made a substantial impact in the clinical research community. In this paper we first examine how differences in approach, study design, evaluation, and publication practices have hindered uptake of these research results. Based on ten years of collaboration, we suggest a set of design principles that satisfy the needs (both academic and cultural) of the robotics and clinical autism research communities. Using these principles, we present a study that demonstrates a quantitatively measured improvement in human-human social interaction for children with ASD, effected by interaction with a robot.

    A Spectrum of Tech: An Integrated Literature Review of Technologies to Target Social Skills in Students with Autism Spectrum Disorders

    Students with autism spectrum disorders (ASD) often have limited social or communication skills and thus need extra assistance in learning when and how to engage in appropriate interactions with those around them. However, because several individual skills (e.g., joint attention, emotional expression) fall under the categories of social and communication skills, and there are even more devices and programs to choose from within assistive technology (AT) and instructional technology (IT), it can seem daunting to find the right technology to meet a specific child's needs and to determine whether that technology produces lasting results. The purpose of this integrated literature review was to investigate whether devices used for social skills interventions in PreK-12 students with ASD function as AT or IT, with the secondary goal of determining which technologies promote better maintenance and generalization than others in such interventions. Analysis of published research studies on Virtual Reality, Augmented Reality, Games, Video Modeling, Social Robots, and Wearable Assistive Technologies demonstrates that many of these technologies can function as either AT or IT, depending on the context. Furthermore, it was found that certain technologies, specifically Video Modeling and Social Robots, promote better maintenance and generalization.

    Developing an engagement and social interaction model for a robotic educational agent

    Effective educational agents should accomplish four essential goals during a student's learning process: 1) monitor engagement, 2) re-engage when appropriate, 3) teach novel tasks, and 4) improve retention. In this dissertation, we address all of these objectives through the use of a teaching device (computer, tablet, or virtual reality game) and a robotic educational agent. We begin by developing and validating an engagement model based on the interactions between the student and the teaching device. This model uses time, performance, and/or eye gaze to determine the student's level of engagement. We then create a framework for implementing verbal and nonverbal (gestural) behaviors on a humanoid robot and evaluate its perception and effectiveness for social interaction. These verbal and nonverbal behaviors are applied throughout the learning scenario to re-engage students when the engagement model deems it necessary. Finally, we describe and validate the entire educational system, which uses the engagement model to activate the behavioral strategies embedded in the robot when a new task is being learned. We then follow up this study to evaluate student retention when using the system. The outcome of this research is an educational system that effectively monitors student engagement, applies behavioral strategies, teaches novel tasks, and improves student retention to achieve individualized learning.
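    The abstract says only that the engagement model uses time, performance, and/or eye gaze, without giving its form. As a purely illustrative sketch of how such cues could be combined into a score that triggers re-engagement behaviours, consider the following; the weights, thresholds, and feature definitions are assumptions, not the dissertation's model.

```python
# Minimal sketch of an engagement estimate combining the three cues named in the
# abstract (time, performance, eye gaze). Weights, thresholds, and the linear
# combination are illustrative assumptions, not the dissertation's model.
from dataclasses import dataclass

@dataclass
class EngagementCues:
    response_time_s: float      # time taken on the current task step
    accuracy: float             # fraction of recent answers that were correct, 0..1
    gaze_on_task_ratio: float   # fraction of recent frames with gaze on the teaching device, 0..1

def engagement_score(cues: EngagementCues,
                     expected_time_s: float = 10.0,
                     weights: tuple[float, float, float] = (0.3, 0.3, 0.4)) -> float:
    """Return an engagement score in [0, 1]; lower values would trigger re-engagement."""
    # Slower-than-expected responses count against engagement, clipped to [0, 1].
    time_score = max(0.0, min(1.0, expected_time_s / max(cues.response_time_s, 1e-6)))
    w_time, w_acc, w_gaze = weights
    return w_time * time_score + w_acc * cues.accuracy + w_gaze * cues.gaze_on_task_ratio

cues = EngagementCues(response_time_s=18.0, accuracy=0.5, gaze_on_task_ratio=0.4)
score = engagement_score(cues)
if score < 0.5:                 # hypothetical re-engagement threshold
    print(f"score={score:.2f}: trigger a verbal or gestural re-engagement behaviour")
else:
    print(f"score={score:.2f}: student appears engaged")
```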