Explorations in engagement for humans and robots
This paper explores the concept of engagement, the process by which
individuals in an interaction start, maintain and end their perceived
connection to one another. The paper reports on one aspect of engagement among
human interactors--the effect of tracking faces during an interaction. It also
describes the architecture of a robot that can participate in conversational,
collaborative interactions with engagement gestures. Finally, the paper reports
on findings of experiments with human participants who interacted with a robot
when it either performed or did not perform engagement gestures. Results of the
human-robot studies indicate that people become engaged with robots: they
direct their attention to the robot more often in interactions where engagement
gestures are present, and they find interactions more appropriate when
engagement gestures are present than when they are not.
Comment: 31 pages, 5 figures, 3 tables
Analyzing Input and Output Representations for Speech-Driven Gesture Generation
This paper presents a novel framework for automatic speech-driven gesture
generation, applicable to human-agent interaction including both virtual agents
and robots. Specifically, we extend recent deep-learning-based, data-driven
methods for speech-driven gesture generation by incorporating representation
learning. Our model takes speech as input and produces gestures as output, in
the form of a sequence of 3D coordinates. Our approach consists of two steps.
First, we learn a lower-dimensional representation of human motion using a
denoising autoencoder neural network, consisting of a motion encoder MotionE
and a motion decoder MotionD. The learned representation preserves the most
important aspects of the human pose variation while removing less relevant
variation. Second, we train a novel encoder network SpeechE to map from speech
to a corresponding motion representation with reduced dimensionality. At test
time, the speech encoder and the motion decoder networks are combined: SpeechE
predicts motion representations based on a given speech signal and MotionD then
decodes these representations to produce motion sequences. We evaluate
different representation sizes in order to find the most effective
dimensionality for the representation. We also evaluate the effects of using
different speech features as input to the model. We find that mel-frequency
cepstral coefficients (MFCCs), alone or combined with prosodic features,
perform the best. The results of a subsequent user study confirm the benefits
of the representation learning.
Comment: Accepted at IVA '19. Shorter version published at AAMAS '19. The code
is available at
https://github.com/GestureGeneration/Speech_driven_gesture_generation_with_autoencode
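The two-step pipeline above can be sketched in a few lines. The names MotionE, MotionD, and SpeechE come from the abstract; the dimensions and the stand-in linear layers are assumptions for illustration — the actual model uses trained deep networks, not random weights.

```python
import numpy as np

rng = np.random.default_rng(0)

POSE_DIM = 45      # e.g. 15 joints x 3D coordinates (assumed)
LATENT_DIM = 32    # representation size; the paper evaluates several
MFCC_DIM = 26      # speech features per frame (assumed)

class Linear:
    """Minimal linear map standing in for a trained network."""
    def __init__(self, d_in, d_out):
        self.W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)
    def __call__(self, x):
        return x @ self.W

motion_encoder = Linear(POSE_DIM, LATENT_DIM)   # MotionE (used only in training)
motion_decoder = Linear(LATENT_DIM, POSE_DIM)   # MotionD
speech_encoder = Linear(MFCC_DIM, LATENT_DIM)   # SpeechE

def generate_gestures(mfcc_frames):
    """Test-time pipeline: SpeechE predicts motion representations,
    and MotionD decodes them into pose sequences."""
    latent = speech_encoder(mfcc_frames)         # (T, LATENT_DIM)
    return motion_decoder(latent)                # (T, POSE_DIM)

speech = rng.standard_normal((100, MFCC_DIM))    # 100 frames of MFCCs
poses = generate_gestures(speech)
print(poses.shape)  # (100, 45)
```

Note that the motion encoder is only needed during training of the denoising autoencoder; at test time the speech encoder is chained directly to the motion decoder.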
Motion Generation during Vocalized Emotional Expressions and Evaluation in Android Robots
Vocalized emotional expressions such as laughter and surprise often occur in natural dialogue interactions and are important factors to be considered in order to achieve smooth robot-mediated communication. Miscommunication may be caused if there is a mismatch between audio and visual modalities, especially in android robots, which have a highly humanlike appearance. In this chapter, motion generation methods are introduced for laughter and vocalized surprise events, based on analysis results of human behaviors during dialogue interactions. The effectiveness of controlling different modalities of the face, head, and upper body (eyebrow raising, eyelid widening/narrowing, lip corner/cheek raising, eye blinking, head motion, and torso motion control) and different motion control levels are evaluated using an android robot. Subjective experiments indicate the importance of each modality in the perception of motion naturalness (humanlikeness) and the degree of emotional expression.
ANGELICA: choice of output modality in an embodied agent
The ANGELICA project addresses the problem of modality choice in information presentation by embodied, humanlike agents. The output modalities available to such agents include both language and various nonverbal signals such as pointing and gesturing. For each piece of information to be presented by the agent, it must be decided whether it should be expressed using language, a nonverbal signal, or both. In the ANGELICA project, a model of the different factors influencing this choice will be developed and integrated in a natural language generation system. The application domain is the presentation of route descriptions by an embodied agent in a 3D environment. Evaluation and testing form an integral part of the project. In particular, we will investigate the effect of different modality choices on the effectiveness and naturalness of the generated presentations and on the user's perception of the agent's personality.
Structuring information through gesture and intonation
Face-to-face communication is multimodal. In unscripted spoken discourse we can observe the interaction of several “semiotic layers”, modalities of information such as syntax, discourse structure, gesture, and intonation. We explore the role of gesture and intonation in structuring and aligning information in spoken discourse through a study of the co-occurrence of pitch accents and gestural apices. Metaphorical spatialization through gesture also plays a role in conveying the contextual relationships between the speaker, the government and other external forces in a naturally-occurring political speech setting.
Human-Robot Interaction architecture for interactive and lively social robots
International Mention in the doctoral degree.
Society is experiencing a series of demographic changes that can result in an imbalance between
the active working and non-working age populations. One of the solutions considered to mitigate
this problem is the inclusion of robots in multiple sectors, including the service sector. But for
this to be a viable solution, among other features, robots need to be able to interact with humans
successfully. This thesis seeks to endow a social robot with the abilities required for natural
human-robot interaction. The main objective is to contribute to the body of knowledge in the area
of Human-Robot Interaction with a new, platform-independent, modular approach that focuses on
giving roboticists the tools required to develop applications that involve interactions with humans. In
particular, this thesis focuses on three problems that need to be addressed: (i) modelling interactions
between a robot and a user; (ii) endowing the robot with the expressive capabilities required for
successful communication; and (iii) endowing the robot with a lively appearance.
The approach to dialogue modelling presented in this thesis proposes to model dialogues as a
sequence of atomic interaction units, called Communicative Acts, or CAs. They can be parametrized
at runtime to achieve different communicative goals, and are endowed with mechanisms for handling
some of the uncertainties that arise during interaction. Two dimensions have been used to identify the
required CAs: initiative (whether the robot or the user holds it), and intention (whether to obtain
information or to provide it). These basic CAs can be combined hierarchically to create more complex, reusable
structures. This approach simplifies the creation of new interactions, by allowing developers to focus
exclusively on designing the flow of the dialogue, without having to re-implement functionalities
that are common to all dialogues (like error handling, for example).
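The CA structure described above can be sketched as a small recursive data type. The two dimensions (initiative and intention) come from the thesis; the class name, fields, and the example dialogue are illustrative assumptions, not the thesis API.

```python
from dataclasses import dataclass, field
from typing import List

# The two dimensions used to identify CAs in the thesis:
# who holds the initiative, and whether the goal is to
# obtain or to provide information.
INITIATIVES = ("robot", "user")
INTENTIONS = ("obtain_info", "provide_info")

@dataclass
class CommunicativeAct:
    """Atomic dialogue unit (CA), parametrizable at runtime."""
    name: str
    initiative: str                                  # "robot" or "user"
    intention: str                                   # "obtain_info" or "provide_info"
    children: List["CommunicativeAct"] = field(default_factory=list)

    def flatten(self):
        """Depth-first expansion of a hierarchical CA structure
        into the sequence of atomic acts to execute."""
        if not self.children:
            return [self.name]
        out = []
        for child in self.children:
            out.extend(child.flatten())
        return out

# A reusable composite: ask a question, then confirm the answer.
ask = CommunicativeAct("ask_name", "robot", "obtain_info")
confirm = CommunicativeAct("confirm_name", "robot", "obtain_info")
greeting_dialogue = CommunicativeAct(
    "get_user_name", "robot", "obtain_info", children=[ask, confirm])

print(greeting_dialogue.flatten())  # ['ask_name', 'confirm_name']
```

The point of the hierarchy is reuse: a composite like `get_user_name` can be dropped into any new dialogue, so a developer only designs the flow, not the error handling inside each atomic act.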
The expressiveness of the robot is based on the use of a library of predefined multimodal gestures,
or expressions, modelled as state machines. The module managing the expressiveness receives requests
for performing gestures, schedules their execution in order to avoid any possible conflict that might
arise, loads them, and ensures that their execution completes without problems. The proposed approach
can also generate expressions at runtime from a list of unimodal actions (an utterance, the motion of
a limb, etc.). One of the key features of the proposed expressiveness management
approach is the integration of a series of modulation techniques that can be used to modify the
robot’s expressions at runtime. This allows the robot to adapt them to the particularities of a
given situation (which also increases the variability of the robot’s expressiveness), and to display
different internal states with the same expressions.
Considering that being recognized as a living being is a requirement for engaging in social
encounters, the perception of a social robot as a living entity is a key requirement to foster
human-robot interactions. In this dissertation, two approaches have been proposed. The first
method generates actions for the different interfaces of the robot at certain intervals. The frequency
and intensity of these actions are defined by a signal that represents the pulse of the robot, which can
be adapted to the context of the interaction or the internal state of the robot. The second method
enhances the robot’s utterances by predicting the appropriate non-verbal expressions that should
accompany them, according to the content of the robot’s message, as well as its communicative
intention. A deep learning model receives the transcription of the robot’s utterance, predicts
which expressions should accompany it, and synchronizes them so that each selected gesture starts at
the appropriate time. The model has been developed using a combination of a Long Short-Term
Memory (LSTM) network-based encoder and a Conditional Random Field (CRF) for generating a sequence of
gestures that are combined with the robot’s utterance.
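The first liveliness method, idle actions driven by a "pulse" signal, can be illustrated with a toy sketch. The pulse function, threshold, and parameters below are hypothetical stand-ins for the thesis's actual signal; they only show the idea that a higher internal arousal produces more frequent and more intense idle actions.

```python
import math

def pulse(t, base_rate=1.0, arousal=0.5):
    """Hypothetical pulse signal in [0, 1]: both its rate and its
    amplitude grow with the robot's internal arousal (assumption)."""
    rate = base_rate * (1.0 + arousal)                       # cycles per second
    amplitude = 0.5 + 0.5 * arousal                          # peak value
    return 0.5 * (1.0 + math.sin(2 * math.pi * rate * t)) * amplitude

def idle_actions(duration, step=0.1, threshold=0.8, arousal=0.5):
    """Emit an idle action (blink, small head motion, ...) whenever
    the pulse crosses the threshold; intensity follows the pulse."""
    actions = []
    t = 0.0
    while t < duration:
        p = pulse(t, arousal=arousal)
        if p > threshold:
            actions.append((round(t, 1), round(p, 2)))       # (time, intensity)
        t += step
    return actions

calm = idle_actions(5.0, arousal=0.1)
excited = idle_actions(5.0, arousal=0.9)
print(len(calm) <= len(excited))  # True
```

Adapting the signal to the interaction context or internal state then reduces to changing `arousal` (or `base_rate`) online, without touching the action-generation loop.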
All the elements presented above form the core of a modular Human-Robot Interaction
architecture that has been integrated into multiple platforms and tested under different conditions.
Doctoral Programme in Electrical, Electronic and Automation Engineering, Universidad Carlos III de Madrid. Committee: President: Fernando Torres Medina; Secretary: Concepción Alicia Monje Micharet; Member: Amirabdollahian Farshi