
    Modeling the dynamics of nonverbal behavior on interpersonal trust for human-robot interactions

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2011. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 105-108).

    We describe the design, implementation, and validation of a computational model for recognizing interpersonal trust in social interactions. We begin by leveraging pre-existing datasets to understand how synchronous movement, mimicry, and gestural cues relate to trust. We found that although synchronous movement was not predictive of trust, it is positively correlated with mimicry: people who mimicked each other more frequently also moved more synchronously in time. And revealing the versatile nature of unconscious mimicry, we found mimicry to be predictive of liking between participants rather than trust. We reconfirmed that four negative gestural cues (leaning backward, face touching, hand touching, and crossing arms), taken together, are predictive of lower levels of trust, while three positive gestural cues (leaning forward, keeping arms in lap, and open arms) are predictive of higher levels of trust. We train and validate a probabilistic graphical model using natural social interaction data from 74 participants. By observing how these seven gestures unfold throughout a social interaction, our Trust Hidden Markov Model predicts with 94% accuracy whether an individual is willing to behave cooperatively or uncooperatively with a novel partner. By simulating the resulting model, we found that not only the frequency with which the predictive gestures are emitted matters, but also the sequence in which negative and positive cues are emitted. We attempt to automate this recognition process by detecting these trust-related behaviors through 3D motion capture technology and gesture recognition algorithms. Finally, we test how accurately our entire system, combining low-level gesture recognition with high-level trust recognition, can predict whether an individual finds another to be trustworthy or untrustworthy.

    by Jin Joo Lee. S.M.
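    The classification idea described in this abstract can be sketched as likelihood-based labeling with class-conditional hidden Markov models over gesture observations. Everything below is an illustrative assumption, not the trained model from the thesis: the two-state structure, all probabilities, and the gesture encoding are invented for the example.

```python
import numpy as np

# The seven gestural cues named in the abstract; indices are arbitrary.
GESTURES = ["lean_back", "face_touch", "hand_touch", "crossed_arms",
            "lean_forward", "arms_in_lap", "open_arms"]
IDX = {g: i for i, g in enumerate(GESTURES)}

def forward_log_likelihood(obs, log_pi, log_A, log_B):
    """Log-likelihood of an observation sequence under one HMM (forward algorithm)."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # logsumexp over previous states, then emit the current gesture
        alpha = log_B[:, o] + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)

def classify(obs, models):
    """Pick the label whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda label: forward_log_likelihood(obs, *models[label]))

def toy_model(pos_mass):
    """Illustrative 2-state HMM whose emissions favor positive or negative cues."""
    pi = np.full(2, 0.5)                      # uniform initial state distribution
    A = np.array([[0.8, 0.2], [0.2, 0.8]])    # sticky state transitions
    B = np.empty((2, len(GESTURES)))
    B[:, :4] = (1.0 - pos_mass) / 4.0         # four negative cues
    B[:, 4:] = pos_mass / 3.0                 # three positive cues
    return np.log(pi), np.log(A), np.log(B)

models = {"cooperative": toy_model(0.8), "uncooperative": toy_model(0.2)}
seq = [IDX[g] for g in ["lean_forward", "open_arms", "arms_in_lap", "lean_forward"]]
print(classify(seq, models))  # mostly positive cues -> "cooperative"
```

    Because the HMM scores whole sequences, two interactions with the same gesture counts but different orderings can receive different likelihoods, which is the property the abstract highlights when it notes that the sequence of negative and positive cues matters, not just their frequency.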

    A Bayesian theory of mind approach to nonverbal communication for human-robot interactions : a computational formulation of intentional inference and belief manipulation

    Thesis: Ph.D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2017. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 115-122).

    Much of human social communication is channeled through our facial expressions, body language, gaze direction, and many other nonverbal behaviors. A robot's ability to express and recognize the emotional states of people through these nonverbal channels is at the core of artificial social intelligence. The purpose of this thesis is to define a computational framework for nonverbal communication in human-robot interactions. We address both sides of nonverbal communication, the decoding and encoding of social-emotional states through nonverbal behaviors, and demonstrate their shared underlying representation. We use our computational framework to model engagement and attention in storytelling interactions. Storytelling is an interaction that is mutually regulated between storytellers and listeners, and a key dynamic is the back-and-forth process of speaker cues and listener responses. Listeners convey attentiveness through nonverbal backchannels, while storytellers use nonverbal cues to elicit this feedback. We demonstrate that storytellers employ plans, albeit short ones, to influence and infer the attentive state of listeners using these speaker cues. We computationally model this intentional inference of storytellers as a planning problem of getting listeners to pay attention. When accounting for this intentional context of storytellers, our attention estimator outperforms current state-of-the-art approaches to emotion recognition. By formulating emotion recognition as a planning problem, we can apply a recent artificial intelligence method of inverting planning models to perform belief inference. We computationally model emotion expression as a combined process of estimating a person's beliefs through this inference inversion and then producing nonverbal expressions to affect those beliefs. We demonstrate that a robotic agent operating under our belief manipulation paradigm communicates an attentive state more effectively than current state-of-the-art approaches, which cannot dynamically capture how the robot's expressions are interpreted by the human partner.

    Jin Joo Lee. Ph.D.
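    The belief-manipulation paradigm can be illustrated with a deliberately simplified stand-in: a Bayes filter over a listener's hidden attentive state, paired with a greedy one-step choice of the speaker cue expected to raise attention. The cue set, response model, and all probabilities below are hypothetical, and the thesis formulates this as inverting full planning models rather than the one-step approximation shown here.

```python
import numpy as np

STATES = ["attentive", "inattentive"]          # hidden listener states
CUES = ["eye_contact", "pause", "gesture"]     # hypothetical speaker cues
RESPONSES = ["backchannel", "none"]            # observed listener feedback

# Hypothetical parameters, not values learned in the thesis.
# p(response | state): attentive listeners backchannel far more often.
P_RESP = np.array([[0.8, 0.2],    # attentive
                   [0.2, 0.8]])   # inattentive
# p(next state is attentive | current state, cue): each cue nudges attention.
P_ATTEND = {"eye_contact": np.array([0.95, 0.5]),
            "pause":       np.array([0.90, 0.4]),
            "gesture":     np.array([0.92, 0.6])}

def update_belief(belief, cue, response):
    """One Bayes-filter step: predict the effect of the speaker cue,
    then condition on the observed listener response."""
    p_att = P_ATTEND[cue] @ belief                  # predicted p(attentive)
    predicted = np.array([p_att, 1.0 - p_att])
    posterior = predicted * P_RESP[:, RESPONSES.index(response)]
    return posterior / posterior.sum()

def choose_cue(belief):
    """Greedy one-step 'plan': pick the cue that maximizes the
    expected probability that the listener becomes attentive."""
    return max(CUES, key=lambda c: P_ATTEND[c] @ belief)

belief = np.array([0.5, 0.5])                       # uniform prior over states
for resp in ["none", "none", "backchannel"]:
    cue = choose_cue(belief)
    belief = update_belief(belief, cue, resp)
    print(f"cue={cue:11s} response={resp:11s} p(attentive)={belief[0]:.2f}")
print("next cue:", choose_cue(belief))  # cue choice shifts as the belief changes
```

    In this toy trace, the estimated attention decays while the listener stays silent and jumps once a backchannel arrives, and the preferred cue changes with the belief. A real planner would look further ahead than one step, which is where inverting a planning model, as the thesis does, goes beyond this sketch.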