Detecting Low Rapport During Natural Interactions in Small Groups from Non-Verbal Behaviour
Rapport, the close and harmonious relationship in which interaction partners
are "in sync" with each other, was shown to result in smoother social
interactions, improved collaboration, and improved interpersonal outcomes. In
this work, we are the first to investigate automatic prediction of low rapport
during natural interactions within small groups. This task is challenging given
that rapport only manifests in subtle non-verbal signals that are, in addition,
subject to influences of group dynamics as well as inter-personal
idiosyncrasies. We record videos of unscripted discussions of three to four
people using a multi-view camera system and microphones. We analyse a rich set
of non-verbal signals for rapport detection, namely facial expressions, hand
motion, gaze, speaker turns, and speech prosody. Using facial features, we can
detect low rapport with an average precision of 0.7 (chance level at 0.25),
while incorporating prior knowledge of participants' personalities can even
achieve early prediction without a drop in performance. We further provide a
detailed analysis of different feature sets and the amount of information
contained in different temporal segments of the interactions.
Comment: 12 pages, 6 figures
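As a rough, self-contained sketch of the evaluation metric mentioned above (average precision compared against a ~0.25 chance level), the snippet below scores a generic binary low-rapport classifier. The random-forest model and the synthetic feature matrix are placeholders, not the paper's actual features or pipeline; because the data are random, the resulting score only illustrates how the chance baseline arises from the positive rate.

```python
# Hypothetical sketch: scoring a low-rapport detector with average precision.
# Random data stands in for facial-feature descriptors aggregated per interaction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 200, 64          # e.g. pooled facial action unit statistics
X = rng.normal(size=(n_samples, n_features))
y = rng.random(n_samples) < 0.25         # ~25% "low rapport" positives -> chance AP ~ 0.25

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]

ap = average_precision_score(y_test, scores)
chance = y_test.mean()                   # baseline AP equals the positive rate
print(f"average precision: {ap:.2f} (chance level ~ {chance:.2f})")
```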
I Probe, Therefore I Am: Designing a Virtual Journalist with Human Emotions
By utilizing different communication channels, such as verbal language,
gestures or facial expressions, virtually embodied interactive humans hold a
unique potential to bridge the gap between human-computer interaction and
actual interhuman communication. The use of virtual humans is consequently
becoming increasingly popular in a wide range of areas where such a natural
communication might be beneficial, including entertainment, education, mental
health research and beyond. Behind this development lies a series of
technological advances in a multitude of disciplines, most notably natural
language processing, computer vision, and speech synthesis. In this paper we
discuss a Virtual Human Journalist, a project employing a number of novel
solutions from these disciplines with the goal of demonstrating their viability
by producing a humanoid conversational agent capable of naturally eliciting and
reacting to information from a human user. A set of qualitative and
quantitative evaluation sessions demonstrated the technical feasibility of the
system whilst uncovering a number of deficits in its capacity to engage users
in a way that would be perceived as natural and emotionally engaging. We argue
that naturalness should not always be seen as a desirable goal and suggest that
deliberately suppressing the naturalness of virtual human interactions, such as
by altering the agent's personality cues, might in some cases yield more desirable
results.
Comment: eNTERFACE16 proceedings
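To make the eliciting-and-reacting loop described above concrete, here is a minimal, hypothetical sketch of such an agent cycle: perceive the user's turn and emotion, acknowledge it, then elicit more information. The class names, canned acknowledgements, and questions are invented for illustration and do not correspond to the actual modules of the Virtual Human Journalist.

```python
# Minimal, hypothetical elicit-and-react loop for a conversational agent.
from dataclasses import dataclass

@dataclass
class Turn:
    text: str
    emotion: str  # e.g. "neutral", "happy", "annoyed"

class DialogueManager:
    """Chooses the next question and a simple affective acknowledgement."""
    QUESTIONS = ["What brings you here today?", "How did that make you feel?"]

    def __init__(self):
        self._i = 0

    def next_utterance(self, user_turn: Turn) -> str:
        # React to the perceived emotion before eliciting more information.
        ack = {"happy": "Glad to hear it.", "annoyed": "I understand, let's move on."}
        prefix = ack.get(user_turn.emotion, "")
        question = self.QUESTIONS[min(self._i, len(self.QUESTIONS) - 1)]
        self._i += 1
        return f"{prefix} {question}".strip()

def interview(turns):
    dm = DialogueManager()
    for user_turn in turns:
        print(f"user : {user_turn.text} [{user_turn.emotion}]")
        print(f"agent: {dm.next_utterance(user_turn)}")

interview([Turn("I just moved to the city.", "happy"),
           Turn("Finding a flat was stressful.", "annoyed")])
```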
Continuous Interaction with a Virtual Human
Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, and be able to attend to its interaction partner while it is speaking – and modify its communicative behavior on-the-fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE’10. The project resulted in progress on several aspects of continuous interaction such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that are released for public access.
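A minimal sketch of what modifying communicative behaviour on-the-fly can look like in code, assuming a hypothetical chunked delivery of an utterance and a stand-in listener-response classifier; this is not the behaviour scheduler developed in the project, only an illustration of interrupting and adapting ongoing speech.

```python
# Hypothetical on-the-fly adaptation: a planned utterance is delivered chunk by
# chunk and can be cut short or acknowledged when a listener response is detected.
from typing import Iterable, Optional

def classify_listener_response(signal: Optional[str]) -> Optional[str]:
    # Stand-in for an automatic listener-response classifier.
    return signal  # e.g. "backchannel", "interruption", or None

def speak_continuously(chunks: Iterable[str], listener_signals: Iterable[Optional[str]]):
    for chunk, signal in zip(chunks, listener_signals):
        response = classify_listener_response(signal)
        if response == "interruption":
            print("(agent stops speaking and yields the turn)")
            return
        if response == "backchannel":
            print("(agent nods and continues)")
        print(f"agent says: {chunk}")

speak_continuously(
    ["So, about the schedule", "we could start on Monday", "and review on Friday."],
    [None, "backchannel", "interruption"],
)
```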
Robust Modeling of Epistemic Mental States
This work identifies and advances some research challenges in the analysis of
facial features and their temporal dynamics in relation to epistemic mental states in
dyadic conversations. The epistemic states considered are Agreement, Concentration,
Thoughtful, Certain, and Interest. In this paper, we perform a number of
statistical analyses and simulations to identify the relationship between
facial features and epistemic states. Non-linear relations are found to be more
prevalent, while temporal features derived from the original facial features show
a strong correlation with intensity changes. We then propose a
novel prediction framework that takes facial features and their nonlinear
relation scores as input and predicts different epistemic states in videos. The
prediction of epistemic states is boosted when the classification of emotion-change
regions, such as rising, falling, or steady-state, is incorporated with
the temporal features. The proposed predictive models can predict the epistemic
states with significantly improved accuracy: correlation coefficient (CoERR)
for Agreement is 0.827, for Concentration 0.901, for Thoughtful 0.794, for
Certain 0.854, and for Interest 0.913.
Comment: Accepted for Publication in Multimedia Tools and Applications, Special Issue: Socio-Affective Technologies
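As an illustration of the evaluation described above (a correlation coefficient between predicted and annotated epistemic-state intensity), the following sketch regresses a synthetic intensity signal from synthetic facial-feature inputs. The SVR model, feature dimensions, and data are assumptions for demonstration, not the paper's actual framework or results.

```python
# Illustrative sketch: regress a continuous epistemic-state intensity (e.g.
# "Agreement") from facial features and score it with the correlation coefficient.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 40))                    # per-clip facial features + relation scores
w = rng.normal(size=40)
y = np.tanh(X @ w) + 0.1 * rng.normal(size=300)   # synthetic nonlinear ground-truth intensity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = SVR(kernel="rbf").fit(X_tr, y_tr)
r, _ = pearsonr(y_te, model.predict(X_te))
print(f"correlation between predicted and annotated intensity: {r:.3f}")
```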
Automatic Context-Driven Inference of Engagement in HMI: A Survey
An integral part of seamless human-human communication is engagement, the
process by which two or more participants establish, maintain, and end their
perceived connection. Therefore, to develop successful human-centered
human-machine interaction applications, automatic engagement inference is one
of the tasks required to achieve engaging interactions between humans and
machines, and to make machines attuned to their users, hence enhancing user
satisfaction and technology acceptance. Several factors contribute to
engagement state inference, including the interaction context and the
interactants' behaviours and identities. Indeed, engagement is a multi-faceted
and multi-modal construct that requires high accuracy in the analysis and
interpretation of contextual, verbal and non-verbal cues. Thus, the development
of an automated and intelligent system that accomplishes this task has so far
proven challenging. This paper presents a comprehensive survey of
previous work on engagement inference for human-machine interaction, covering
its interdisciplinary definition, engagement components and factors, publicly
available datasets, ground truth assessment, and most commonly used features
and methods, serving as a guide for the development of future human-machine
interaction interfaces with reliable context-aware engagement inference
capability. An in-depth review across embodied and disembodied interaction
modes, together with an emphasis on the interaction context in which engagement
perception modules are integrated, sets the presented survey apart from existing
surveys.
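To illustrate what a context-aware, multi-modal engagement classifier of the kind surveyed here might minimally look like, the sketch below concatenates hypothetical contextual, verbal, and non-verbal feature blocks before a single classifier (early, feature-level fusion). The feature dimensions, fusion strategy, and synthetic labels are assumptions for illustration, not a method proposed by the survey.

```python
# Toy sketch of multi-modal engagement inference via feature-level fusion.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 400
context = rng.normal(size=(n, 5))     # e.g. task type, group size, interaction phase
verbal = rng.normal(size=(n, 20))     # e.g. lexical / prosodic descriptors
nonverbal = rng.normal(size=(n, 30))  # e.g. gaze, head pose, facial action units
X = np.hstack([context, verbal, nonverbal])   # early (feature-level) fusion
y = rng.integers(0, 2, size=n)                # engaged vs. disengaged labels

acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```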