A Review of Verbal and Non-Verbal Human-Robot Interactive Communication
In this paper, an overview of human-robot interactive communication is
presented, covering verbal as well as non-verbal aspects of human-robot
interaction. Following a historical introduction and motivation towards fluid
human-robot communication, ten desiderata are proposed, which provide an
organizing axis for both recent and future research on human-robot
communication. The ten desiderata are then examined in detail, culminating in
a unifying discussion and a forward-looking conclusion.
Parameterized Action Representation and Natural Language Instructions for Dynamic Behavior Modification of Embodied Agents
We introduce a prototype for building a strategy game. A player can control and modify the behavior of all the characters in a game, and introduce new strategies, through the powerful medium of natural language instructions. We describe a Parameterized Action Representation (PAR) designed to bridge the gap between natural language instructions and the virtual agents who are to carry them out. We illustrate PAR through an interactive demonstration of a multi-agent strategy game.
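To make the idea concrete, here is a minimal sketch of what a parameterized action representation might look like as a data structure. The field names (agent, objects, manner, precondition, subactions) are illustrative assumptions, not the authors' actual PAR schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PAR:
    """Sketch of a Parameterized Action Representation.

    Field names are assumptions for illustration, not the published schema.
    """
    action: str                                        # verb from the instruction
    agent: str                                         # character carrying it out
    objects: List[str] = field(default_factory=list)   # entities acted upon
    manner: Optional[str] = None                       # adverbial modifier, e.g. "quickly"
    precondition: Optional[str] = None                 # state required before execution
    subactions: List["PAR"] = field(default_factory=list)  # decomposition into steps

# An instruction such as "Guard the gate" might map to:
guard = PAR(action="guard", agent="soldier_1", objects=["gate"],
            precondition="at(soldier_1, gate)")
```

A structure like this gives the game engine slots to fill from a parsed instruction, while subactions allow a high-level command to expand into executable primitives.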
Explorations in engagement for humans and robots
This paper explores the concept of engagement, the process by which
individuals in an interaction start, maintain and end their perceived
connection to one another. The paper reports on one aspect of engagement among
human interactors--the effect of tracking faces during an interaction. It also
describes the architecture of a robot that can participate in conversational,
collaborative interactions with engagement gestures. Finally, the paper reports
on findings of experiments with human participants who interacted with a robot
when it either performed or did not perform engagement gestures. Results of the
human-robot studies indicate that people become engaged with robots: they
direct their attention to the robot more often in interactions where engagement
gestures are present, and they find interactions more appropriate when
engagement gestures are present than when they are not.

Comment: 31 pages, 5 figures, 3 tables
A Planning Pipeline for Large Multi-Agent Missions
In complex multi-agent applications, human operators are often tasked with planning and managing large heterogeneous teams of humans and autonomous vehicles. Although the use of these autonomous vehicles broadens the scope of meaningful applications, many of their systems remain unintuitive and difficult to master for human operators whose expertise lies in the application domain and not at the platform level. Current research focuses on developing the individual capabilities necessary to plan multi-agent missions of this scope, placing little emphasis on integrating these components into a full pipeline. This paper presents a complete and user-agnostic planning pipeline for large multi-agent missions known as the HOLII GRAILLE. The system takes a holistic approach to mission planning by integrating capabilities in human-machine interaction, flight path generation, and validation and verification. Component modules of the pipeline are explored on an individual level, as well as their integration into a whole system. Lastly, implications for future mission planning are discussed.
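The staged structure described above (interaction, path generation, then validation and verification) can be sketched as a sequence of composable stages. The stage names and data shapes below are assumptions for illustration, not the actual HOLII GRAILLE interfaces.

```python
# Hypothetical sketch of a staged mission-planning pipeline; stage names and
# the mission dict layout are invented for illustration.
from typing import Callable, List

Stage = Callable[[dict], dict]

def elicit_goals(mission: dict) -> dict:
    # Human-machine interaction stage: the operator states domain-level goals.
    mission["goals"] = mission.get("goals", ["survey_area"])
    return mission

def generate_paths(mission: dict) -> dict:
    # Flight-path generation stage: placeholder waypoint list per goal.
    mission["paths"] = {g: [(0, 0), (1, 1)] for g in mission["goals"]}
    return mission

def verify(mission: dict) -> dict:
    # Validation and verification stage: confirm every goal has a usable path.
    mission["verified"] = all(len(p) > 0 for p in mission["paths"].values())
    return mission

def run_pipeline(mission: dict, stages: List[Stage]) -> dict:
    # Each stage consumes and enriches the mission description in order.
    for stage in stages:
        mission = stage(mission)
    return mission

plan = run_pipeline({}, [elicit_goals, generate_paths, verify])
```

The design point is that the operator only touches the first stage, at the application level; path generation and verification run downstream without platform-level input.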
A High-Fidelity Open Embodied Avatar with Lip Syncing and Expression Capabilities
Embodied avatars as virtual agents have many applications and provide
benefits over disembodied agents, allowing non-verbal social and interactional
cues to be leveraged, in a similar manner to how humans interact with each
other. We present an open embodied avatar built upon the Unreal Engine that can
be controlled via a simple python programming interface. The avatar has lip
syncing (phoneme control), head gesture and facial expression (using either
facial action units or cardinal emotion categories) capabilities. We release
code and models to illustrate how the avatar can be controlled like a puppet or
used to create a simple conversational agent using public application
programming interfaces (APIs). GitHub link:
https://github.com/danmcduff/AvatarSim

Comment: International Conference on Multimodal Interaction (ICMI 2019)
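A puppet-style control interface of the kind described might look like the following sketch. The class and method names are assumptions for illustration, not the actual AvatarSim API (see the linked repository for that).

```python
# Illustrative sketch of puppet-style avatar control; names are hypothetical.
class Avatar:
    def __init__(self):
        self.state = {"phoneme": None, "aus": {}, "head": (0.0, 0.0, 0.0)}

    def set_phoneme(self, phoneme: str, weight: float = 1.0):
        # Drive lip syncing by activating a single phoneme/viseme target.
        self.state["phoneme"] = (phoneme, weight)

    def set_action_unit(self, au: int, intensity: float):
        # Facial expression via FACS action units, e.g. AU12 = lip corner puller.
        self.state["aus"][au] = max(0.0, min(1.0, intensity))

    def nod(self, pitch: float = 0.2):
        # Simple head gesture: pitch the head forward.
        self.state["head"] = (pitch, 0.0, 0.0)

avatar = Avatar()
avatar.set_phoneme("AA")         # open-mouth vowel for lip sync
avatar.set_action_unit(12, 0.8)  # smile
avatar.nod()
```

Exposing the avatar through commands this small is what lets a conversational agent built on public APIs drive it: a speech synthesizer's phoneme stream maps onto `set_phoneme` calls, and sentiment onto action units.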
The threnoscope: a musical work for live coding performance
This paper introduces a new direction in the field of artistic live coding, where musical works are presented as pieces in the form of a live coding system. The system itself and its code affordances become equivalent to a score system in an open musical work for strong improvisation.
Towards hybrid primary intersubjectivity: a neural robotics library for human science
Human-robot interaction is becoming an interesting area of research in
cognitive science, notably, for the study of social cognition. Interaction
theorists consider primary intersubjectivity a non-mentalist, pre-theoretical,
non-conceptual sort of process that grounds a certain level of communication
and understanding, and supports higher-level cognitive skills. We argue that
this sort of low-level cognitive interaction, where control is shared in
dyadic encounters, is amenable to study with neural robots. Hence, in this
work we pursue three main objectives. Firstly, from the concept of active
inference, we study primary intersubjectivity as a second-person-perspective
experience characterized by predictive engagement, where perception, cognition,
and action are accounted for by a hermeneutic circle in dyadic interaction.
Secondly, we propose an open-source methodology named the neural robotics
library (NRL) for experimental human-robot interaction, together with a
demonstration program for interacting in real time with a virtual Cartesian
robot (VCBot). Lastly, through a case study, we discuss some ways human-robot
(hybrid) intersubjectivity can contribute to human science research, such as
in the fields of developmental psychology, educational technology, and
cognitive rehabilitation.
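The notion of predictive engagement can be illustrated with a toy dyadic loop in which the robot updates an internal estimate of its partner's behavior to reduce prediction error, in the spirit of active inference. All dynamics below are invented for illustration; this is not the NRL/VCBot implementation.

```python
# Toy predictive-engagement loop: belief moves toward observations in
# proportion to the prediction error. Invented dynamics, for illustration only.
def predictive_engagement(observations, lr=0.5):
    belief = 0.0
    errors = []
    for obs in observations:
        error = obs - belief     # prediction error on the partner's behavior
        belief += lr * error     # belief update (perception adjusts the model)
        errors.append(abs(error))
    return belief, errors

# A steady partner signal: prediction errors shrink as the robot converges.
belief, errors = predictive_engagement([1.0, 1.0, 1.0, 1.0])
```

The shrinking error trace is the minimal signature of engagement in this framing: each agent's predictions become progressively better coupled to the other's actions.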