
    Detecting Low Rapport During Natural Interactions in Small Groups from Non-Verbal Behaviour

    Rapport, the close and harmonious relationship in which interaction partners are "in sync" with each other, has been shown to result in smoother social interactions, improved collaboration, and better interpersonal outcomes. In this work, we are the first to investigate automatic prediction of low rapport during natural interactions within small groups. This task is challenging given that rapport manifests only in subtle non-verbal signals that are, in addition, subject to the influence of group dynamics as well as interpersonal idiosyncrasies. We record videos of unscripted discussions of three to four people using a multi-view camera system and microphones. We analyse a rich set of non-verbal signals for rapport detection, namely facial expressions, hand motion, gaze, speaker turns, and speech prosody. Using facial features, we can detect low rapport with an average precision of 0.7 (chance level at 0.25), while incorporating prior knowledge of participants' personalities even enables early prediction without a drop in performance. We further provide a detailed analysis of different feature sets and of the amount of information contained in different temporal segments of the interactions. Comment: 12 pages, 6 figures.

    Conversing with a devil’s advocate: Interpersonal coordination in deception and disagreement

    This study investigates the presence of dynamical patterns of interpersonal coordination in extended deceptive conversations across multimodal channels of behavior. Using a novel "devil’s advocate" paradigm, we experimentally elicited deception and truth across topics on which conversational partners either agreed or disagreed, and where one partner was surreptitiously asked to argue an opinion opposite to what he or she really believed. We focus on interpersonal coordination as an emergent behavioral signal that captures interdependencies between conversational partners, both as the coupling of head movements over the span of milliseconds, measured via a windowed lagged cross-correlation (WLCC) technique, and as more global temporal dependencies across speech rate, using cross-recurrence quantification analysis (CRQA). Moreover, we considered how interpersonal coordination might be shaped by strategic, adaptive conversational goals associated with deception. We found that deceptive conversations displayed more structured speech rate and higher head-movement coordination, the latter peaking in deceptive disagreement conversations. Together the results allow us to posit an adaptive account, whereby interpersonal coordination is not beholden to any single functional explanation, but can strategically adapt to diverse conversational demands. The article is published at http://journals.plos.org/plosone/article?id=10.1371/journal.pone.017814
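The windowed lagged cross-correlation technique named above can be sketched in a few lines. This is a minimal illustration, not the study's implementation; the window, step, and lag sizes are placeholder values in samples.

```python
import numpy as np

def wlcc(x, y, win=100, step=50, max_lag=20):
    """Windowed lagged cross-correlation between two movement series.

    For each window of x, compute the normalized correlation with lagged
    windows of y. The peak of each row shows how strongly, and at what
    lag, the two partners' movements are locally coupled.
    """
    lags = np.arange(-max_lag, max_lag + 1)
    rows = []
    for start in range(max_lag, len(x) - win - max_lag, step):
        xw = x[start:start + win] - x[start:start + win].mean()
        row = []
        for lag in lags:
            yw = y[start + lag:start + lag + win]
            yw = yw - yw.mean()
            denom = np.sqrt((xw ** 2).sum() * (yw ** 2).sum())
            row.append(float(xw @ yw / denom) if denom > 0 else 0.0)
        rows.append(row)
    return lags, np.array(rows)  # one correlation-by-lag profile per window
```

A positive peak lag in a window means the second partner's movement follows the first's; in practice the peak magnitudes and lags are then aggregated across windows.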

    The role of closeness in the relationship between nonverbal mimicry and cooperation

    The ‘social glue’ function of nonverbal mimicry has received much support in the empirical literature, with research demonstrating its prosocial consequences, including increased cooperation. When looking to explain why nonverbal mimicry affects behaviour, some research has pointed to interpersonal closeness. However, in these studies a robust measurement of nonverbal mimicry and closeness is absent, making it impossible to argue confidently that the observed mimicry resulted from increased closeness and not a third factor. Likewise, without a reliable measure of nonverbal mimicry it is not possible to determine that nonverbal mimicry was manipulated sufficiently. This thesis addresses this by testing the impact of nonverbal mimicry on cooperation through closeness, using rigorous measures. In Chapter 3 I use high-resolution motion tracking (Xsens MVN systems) to demonstrate that increased closeness towards a partner is associated with more nonverbal mimicry of that partner, and I identify regions of mimicry (discrete body movements) that are related to closeness. Chapter 4 showed a positive relationship between nonverbal mimicry and closeness but found no mediation effect of closeness on the relationship between mimicry and cooperation. In Chapter 5, I controlled for the methodological limitations of Chapter 4 and found a positive relationship between nonverbal mimicry and cooperation, supporting a mediating effect of closeness. Extending beyond mimicry within the dyad, Chapter 6 showed that third-party observers were more willing to engage in conversation with dyads who showed more nonverbal mimicry than with dyads who showed less, an effect that was likewise mediated by closeness towards the dyad.
Overall, this thesis provides robust evidence for closeness as one of the psychological mechanisms underpinning how nonverbal mimicry increases cooperation, and it provides new insight into the relationship between nonverbal mimicry and social judgements.

    The role of facial movements in emotion recognition

    Most past research on emotion recognition has used photographs of posed expressions intended to depict the apex of the emotional display. Although these studies have provided important insights into how emotions are perceived in the face, they necessarily leave out any role of dynamic information. In this Review, we synthesize evidence from vision science, affective science and neuroscience to ask when, how and why dynamic information contributes to emotion recognition, beyond the information conveyed in static images. Dynamic displays offer distinctive temporal information such as the direction, quality and speed of movement, which recruits higher-level cognitive processes and supports social and emotional inferences that enhance judgements of facial affect. The positive influence of dynamic information on emotion recognition is most evident in suboptimal conditions, when observers are impaired and/or facial expressions are degraded or subtle. Dynamic displays further recruit early attentional and motivational resources in the perceiver, facilitating the prompt detection and prediction of others’ emotional states, with benefits for social interaction. Finally, because emotions can be expressed in various modalities, we examine the multimodal integration of dynamic and static cues across different channels, and conclude with suggestions for future research.

    Using novel methods to examine the role of mimicry in trust and rapport

    Without realising it, people unconsciously mimic each other’s postures, gestures and mannerisms. This ‘chameleon effect’ is thought to play an important role in creating affiliation, rapport and trust. Existing theories propose that mimicry is used as a social strategy to bond with other members of our social groups. There is strong behavioural and neural evidence for the strategic control of mimicry. However, evidence that mimicry leads to positive social outcomes is less robust. In this thesis, I aimed to rigorously test the prediction that mimicry leads to rapport and trust, using novel virtual reality methods with high experimental control. In the first study, we developed a virtual reality task for measuring implicit trust behaviour in a virtual maze. Across three experiments we demonstrated the suitability of this task over existing economic games for measuring trust towards specific others. In the second and third studies we tested the effects of mimicry from virtual characters whose other social behaviours were tightly controlled. In the second study, we found that virtual mimicry significantly increased rapport, and this was not affected by the precise time delay in mimicking. In the third study we found this result was not replicated using a strict, pre-registered design, and the effects of virtual mimicry did not change depending on the ingroup or outgroup status of the mimicker. In the fourth study we went beyond mimicry to explore new ways of modelling coordinated behaviour as it naturally occurs in social interactions. We used high-resolution motion capture to record motion in dyadic conversations and calculated levels of coordination using wavelet analysis. We found a reliable pattern of decoupling as well as coordination in people’s head movements. I discuss how the findings of our experiments relate to theories about the social function of mimicry and suggest directions for future research.

    Computational Modeling of Facial Response for Detecting Differential Traits in Autism Spectrum Disorders

    This dissertation proposes novel computational modeling and computer vision methods for the analysis and discovery of differential traits in subjects with Autism Spectrum Disorders (ASD) using video and three-dimensional (3D) images of the face and facial expressions. ASD is a neurodevelopmental disorder that impairs an individual’s nonverbal communication skills. This work studies ASD through the pathophysiology of facial expressions, which may manifest atypical responses in the face. State-of-the-art psychophysical studies mostly employ naïve human raters to visually score atypical facial responses of individuals with ASD, which may be subjective, tedious, and error prone. A few quantitative studies use intrusive sensors on the face of the subjects with ASD, which, in turn, may inhibit or bias the natural facial responses of these subjects. This dissertation proposes non-intrusive computer vision methods to alleviate these limitations in the investigation of differential traits in the spontaneous facial responses of individuals with ASD. Two IRB-approved psychophysical studies are performed involving two groups of age-matched subjects: one of subjects diagnosed with ASD and the other of subjects who are typically developing (TD). The facial responses of the subjects are computed from their facial images using the proposed computational models and then statistically analyzed to infer differential traits of the group with ASD. A novel computational model is proposed to represent the large volume of 3D facial data in a small pose-invariant Frenet frame-based feature space. The inherent pose-invariant property of the proposed features alleviates the need for an expensive 3D face registration in the pre-processing step. The proposed modeling framework is not only computationally efficient but also offers competitive performance in 3D face and facial expression recognition tasks when compared with state-of-the-art methods.
This computational model is applied in the first experiment to quantify subtle facial muscle responses from the geometry of 3D facial data. Results show a statistically significant asymmetry in the activation of a specific pair of facial muscles (p < 0.05) for the group with ASD, which suggests the presence of a psychophysical trait (also known as an ’oddity’) in the facial expressions. For the first time in the ASD literature, the facial action coding system (FACS) is employed to classify the spontaneous facial responses based on facial action units (FAUs). Statistical analyses reveal a significantly (p < 0.01) higher prevalence of the smile expression (FAU 12) for the ASD group when compared with the TD group. The high prevalence of smiles co-occurred with significantly averted gaze (p < 0.05) in the group with ASD, which is indicative of impaired reciprocal communication. The metric associated with incongruent facial and visual responses suggests a behavioral biomarker for ASD. The second experiment shows a higher prevalence of mouth frown (FAU 15) and significantly lower correlations between the activation of several FAU pairs (p < 0.05) in the group with ASD when compared with the TD group. The proposed computational modeling in this dissertation offers promising biomarkers, which may aid in the early detection of subtle ASD-related traits and thus enable effective intervention strategies in the future.
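The kind of left/right asymmetry test reported above can be illustrated with a paired comparison on simulated activation intensities. The data below are invented for the sketch, not the dissertation's measurements.

```python
import numpy as np

# Hypothetical activation intensities for one left/right facial muscle pair,
# one value per subject (n = 20); simulated, with a built-in asymmetry.
rng = np.random.default_rng(42)
left = rng.normal(1.0, 0.2, size=20)
right = left + rng.normal(0.15, 0.05, size=20)  # systematic right-side bias

# Paired t statistic on the per-subject differences.
d = right - left
t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# |t| > 2.09 (two-sided critical value for df = 19 at alpha = 0.05)
# indicates a significant left/right asymmetry.
significant = abs(t) > 2.09
```

With real data, the same comparison would be run per muscle pair, with a correction for testing multiple pairs.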

    The eyes have it


    Cross-correlation- and entropy-based measures of movement synchrony: Non-convergence of measures leads to different associations with depressive symptoms

    Get PDF
    Background: Several algorithms have been proposed to quantify synchronization, but little is known about their convergent and predictive validity. Methods: The sample included 30 persons who completed a manualized interview focusing on psychosomatic symptoms. The intensity of body motions was measured using motion-energy analysis. We computed several measures of movement synchrony based on the time series of the interviewer and participant: mutual information, windowed cross-recurrence analysis, cross-correlation, rMEA, SUSY, SUCO, WCLC–PP and WCLR–PP. Depressive symptoms were assessed with the Patient Health Questionnaire (PHQ-9). Results: According to the exploratory factor analyses, all the variants of cross-correlation and all the measures of SUSY, SUCO and rMEA–WCC led to similar synchrony values and could be assigned to the same factor. All the mutual-information measures, rMEA–WCLC, WCLC–PP–F, WCLC–PP–R2, WCLR–PP–F, and WinCRQA–DET loaded on the second factor. Depressive symptoms correlated negatively with WCLC–PP–F and WCLR–PP–F and positively with rMEA–WCC, SUCO–ES–CO, and MI–Z. Conclusion: More standardization efforts are needed because different synchrony measures have little convergent validity, which can lead to contradictory conclusions about the association between depressive symptoms and movement synchrony even on the same dataset.
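The non-convergence of correlation-based and entropy-based measures is easy to reproduce with the simplest member of each family: a lag-maximized cross-correlation and a histogram-based mutual-information estimate. The sketch below is illustrative, not the paper's implementations; it constructs a nonlinearly coupled signal pair that the correlation measure scores near zero while the entropy measure scores well above an independent pair.

```python
import numpy as np

def peak_xcorr(x, y, max_lag=10):
    """Maximum absolute normalized cross-correlation over small lags."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            r = x[:n - lag] @ y[lag:] / (n - lag)
        else:
            r = x[-lag:] @ y[:n + lag] / (n + lag)
        best = max(best, abs(float(r)))
    return best

def mutual_info(x, y, bins=16):
    """Histogram-based mutual-information estimate in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal over y
    py = pxy.sum(axis=0, keepdims=True)  # marginal over x
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
y = x ** 2  # strongly dependent on x, yet (nearly) uncorrelated with it
```

Here `peak_xcorr(x, y)` stays near zero while `mutual_info(x, y)` is clearly above the value for an independent pair, so the two measures rank the same dyad very differently. The published measures add windowing, smoothing, and peak-picking choices on top of such primitives, which is consistent with the divergent factor loadings reported above.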

    Coordination of Nods in Dialogue.

    PhD Thesis. Behavioral mimicry has been claimed to be a nonconscious behavior that evokes prosocial effects (liking, trust, empathy, persuasiveness) between interaction partners. Recently, Intelligent Virtual Agents (IVAs) and Immersive Virtual Environments (IVEs) have provided rich new possibilities for nonverbal behavior studies such as mimicry studies. One of the best-known effects is the "Digital Chameleons" effect, in which an IVA appears to be more persuasive if it automatically mimics a listener's head nods. However, this effect has not been consistently replicated. This thesis explores the basis of the "chameleon effects" using a customized IVE integrated with a full-body motion capture system that supports real-time behavior manipulation. Two replications explore the effectiveness of the virtual speaker and the head nodding behavior of interaction partners in agent-listener and avatar-listener interactions by manipulating the virtual speaker's head nods, and they provide mixed results. The first experiment fails to replicate the original finding of mimicry leading to higher ratings of an agent's effectiveness. The second experiment shows a higher rating for agreement with a mimicking avatar. Overall, an avatar speaker appears more likely to activate an effect of behavioral mimicry than an agent speaker, probably because the avatar speaker provides richer nonverbal cues. Detailed analysis of the motion data for speaker and listener head movements reveals systematic differences in a) head nodding between a speaker producing a monologue and a speaker engaged in a dialogue, b) head nodding of speakers and listeners in the high and low frequency domains, and c) the reciprocal dynamics of head nodding under different virtual speaker nodding behaviors.
We conclude that: i) the activation of behavioral mimicry requires a certain number of nonverbal cues, ii) speakers behave differently in monologue and dialogue, iii) speakers and listeners nod asymmetrically in different frequency domains, iv) the coordination of head nods in natural dialogue is no more than we would expect by chance, and v) speakers' and listeners' head nods become coordinated through spontaneous collaborative adjustment.
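Conclusion iv) presupposes a chance baseline for coordination. A common construction, sketched generically below rather than as the thesis's actual procedure, compares the genuine pair's synchrony against surrogate pairings created by circularly time-shifting one partner's signal: each signal keeps its own dynamics, but any real-time alignment between partners is destroyed.

```python
import numpy as np

def max_xcorr(x, y, max_lag=25):
    """Peak normalized cross-correlation with y lagging x by 0..max_lag."""
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    n = len(x)
    return max(
        abs(float(x[:n - lag] @ y[lag:] / (n - lag)))
        for lag in range(max_lag + 1)
    )

def chance_baseline(x, y, n_surrogates=200, seed=0):
    """Permutation-style test of coordination against shifted surrogates."""
    rng = np.random.default_rng(seed)
    observed = max_xcorr(x, y)
    surrogate = np.array([
        max_xcorr(x, np.roll(y, int(rng.integers(50, len(y) - 50))))
        for _ in range(n_surrogates)
    ])
    # One-sided permutation p-value with the standard +1 correction.
    p = (np.sum(surrogate >= observed) + 1) / (n_surrogates + 1)
    return observed, p
```

If the genuine pair's value does not exceed the surrogate distribution (large p), the apparent nod coordination is no more than chance, which is the comparison behind conclusion iv); a genuinely coupled pair yields a small p.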