Arena: A General Evaluation Platform and Building Toolkit for Multi-Agent Intelligence
Learning agents that are capable not only of taking tests but also of innovating is becoming a hot topic in AI. One of the most promising paths
towards this vision is multi-agent learning, where agents act as the
environment for each other, and improving each agent means proposing new
problems for others. However, existing evaluation platforms are either not
compatible with multi-agent settings, or limited to a specific game. That is,
there is not yet a general evaluation platform for research on multi-agent
intelligence. To this end, we introduce Arena, a general evaluation platform
for multi-agent intelligence with 35 games of diverse logics and
representations. Furthermore, multi-agent intelligence is still at the stage
where many problems remain unexplored. Therefore, we provide a building toolkit
for researchers to easily invent and build novel multi-agent problems from the
provided game set based on a GUI-configurable social tree and five basic
multi-agent reward schemes. Finally, we provide Python implementations of five
state-of-the-art deep multi-agent reinforcement learning baselines. Along with
the baseline implementations, we release a set of 100 best agents/teams trained with different training schemes for each game, as a basis for evaluating agents by population performance. As such, the research community
can perform comparisons under a stable and uniform standard. All the
implementations and accompanying tutorials have been open-sourced for the
community at https://sites.google.com/view/arena-unity/
Speech-Gesture Mapping and Engagement Evaluation in Human Robot Interaction
A robot needs contextual awareness, effective speech production and
complementary non-verbal gestures for successful communication in society. In
this paper, we present our end-to-end system that tries to enhance the
effectiveness of non-verbal gestures. To achieve this, we identified prominently used gestures in performances by TED speakers, mapped them to their corresponding speech context, and modulated speech based on the attention of the listener. The proposed method used the Convolutional Pose Machine [4] to detect human gestures. Dominant gestures of TED speakers were used to learn the gesture-to-speech mapping, and their speeches were used to train the model. We also evaluated the engagement of the robot with
people by conducting a social survey. The robot monitored the effectiveness of its performance and improvised its speech pattern based on the attention level of the audience, which was calculated using visual feedback from the camera. The effectiveness of the interaction, as well as the decisions made during improvisation, was further evaluated based on head-pose detection and an interaction survey.
Comment: 8 pages, 9 figures, under review at IRC 201
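As a rough illustration of the attention-driven modulation described above, the toy sketch below scores listener attention from head-pose angles and picks a speech strategy from that score. The thresholds, the yaw/pitch heuristic, and the function names are assumptions for illustration, not the paper's implementation.

def attention_score(yaw_deg, pitch_deg):
    """Rough engagement proxy: faces turned toward the robot score higher."""
    yaw_term = max(0.0, 1.0 - abs(yaw_deg) / 45.0)    # zero beyond 45 deg off-axis
    pitch_term = max(0.0, 1.0 - abs(pitch_deg) / 30.0)
    return yaw_term * pitch_term                       # value in [0, 1]

def modulate_speech(text, score):
    """Pick a speech strategy from the estimated attention level."""
    if score > 0.7:
        return {"text": text, "rate": 1.0}             # normal delivery
    if score > 0.4:
        return {"text": text, "rate": 0.85}            # slow down slightly
    return {"text": "Let me repeat that. " + text, "rate": 0.8}

print(modulate_speech("Gestures reinforce meaning.", attention_score(10, 5)))
print(modulate_speech("Gestures reinforce meaning.", attention_score(60, 0)))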
Evaluation of Using Semi-Autonomy Features in Mobile Robotic Telepresence Systems
Mobile robotic telepresence systems used for social interaction scenarios require that users steer robots in a remote environment. As a consequence, a heavy workload can be placed on users if they are unfamiliar with operating robotic telepresence units. One way to lessen this workload is to automate certain operations performed during a telepresence session in order to assist remote drivers in navigating the robot in new environments. Such operations include autonomous robot localization, navigation to certain points in the home, and automatic docking of the robot to the charging station. In this paper we describe the implementation of such autonomous features along with a user evaluation study. The evaluation scenario focuses on novice users' first experience of using the system. Importantly, the scenario taken in this study assumed that participants had as little prior information about the system as possible. Four different use-cases were identified from the user behaviour analysis.
Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. Plan Nacional de Investigación, proyecto DPI2011-25483.
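The semi-autonomy features mentioned above (navigation to predefined points and automatic docking) could be exposed to a remote driver roughly as in the sketch below. The Navigator class, the waypoint names, and the coordinates are hypothetical; they only illustrate replacing manual steering with one-click operations.

class Navigator:
    def __init__(self, waypoints):
        self.waypoints = waypoints   # waypoint name -> (x, y) map coordinates
        self.pose = (0.0, 0.0)

    def go_to(self, name):
        """Autonomous navigation to a predefined point in the home."""
        if name not in self.waypoints:
            raise ValueError(f"unknown waypoint: {name}")
        self.pose = self.waypoints[name]
        return self.pose

    def dock(self):
        """Automatic docking: navigate to the charging station."""
        return self.go_to("charging_station")

nav = Navigator({"kitchen": (3.2, 1.5), "living_room": (6.0, 2.1),
                 "charging_station": (0.5, 0.2)})
nav.go_to("kitchen")   # the driver selects a point instead of steering manually
nav.dock()             # one-button docking at the end of the session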
Analyzing Input and Output Representations for Speech-Driven Gesture Generation
This paper presents a novel framework for automatic speech-driven gesture
generation, applicable to human-agent interaction including both virtual agents
and robots. Specifically, we extend recent deep-learning-based, data-driven
methods for speech-driven gesture generation by incorporating representation
learning. Our model takes speech as input and produces gestures as output, in
the form of a sequence of 3D coordinates. Our approach consists of two steps.
First, we learn a lower-dimensional representation of human motion using a
denoising autoencoder neural network, consisting of a motion encoder MotionE
and a motion decoder MotionD. The learned representation preserves the most
important aspects of the human pose variation while removing less relevant
variation. Second, we train a novel encoder network SpeechE to map from speech
to a corresponding motion representation with reduced dimensionality. At test
time, the speech encoder and the motion decoder networks are combined: SpeechE
predicts motion representations based on a given speech signal and MotionD then
decodes these representations to produce motion sequences. We evaluate
different representation sizes in order to find the most effective
dimensionality for the representation. We also evaluate the effects of using
different speech features as input to the model. We find that mel-frequency
cepstral coefficients (MFCCs), alone or combined with prosodic features,
perform the best. The results of a subsequent user study confirm the benefits
of the representation learning.
Comment: Accepted at IVA '19. Shorter version published at AAMAS '19. The code is available at https://github.com/GestureGeneration/Speech_driven_gesture_generation_with_autoencode
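A minimal sketch of the two-step approach described above follows: a motion autoencoder (MotionE/MotionD) learns a reduced pose representation, a speech encoder (SpeechE) maps speech features into that space, and at test time SpeechE and MotionD are chained. The use of plain MLP layers, the layer sizes, and the feature dimensions are assumptions for illustration, not the paper's architecture, and the denoising training procedure is omitted.

import torch
import torch.nn as nn

POSE_DIM = 45   # assumed: 15 joints x 3D coordinates per frame
REPR_DIM = 32   # assumed reduced motion-representation size
MFCC_DIM = 26   # assumed MFCC feature size per frame

class MotionE(nn.Module):
    """Encodes a pose frame into a lower-dimensional motion representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(POSE_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, REPR_DIM))
    def forward(self, pose):
        return self.net(pose)

class MotionD(nn.Module):
    """Decodes a motion representation back into 3D pose coordinates."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(REPR_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, POSE_DIM))
    def forward(self, z):
        return self.net(z)

class SpeechE(nn.Module):
    """Maps speech features (e.g. MFCCs) to the motion representation space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(MFCC_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, REPR_DIM))
    def forward(self, mfcc):
        return self.net(mfcc)

# At test time, SpeechE and MotionD are chained: speech -> representation -> pose.
speech_encoder, motion_decoder = SpeechE(), MotionD()
mfcc_frames = torch.randn(100, MFCC_DIM)                  # 100 frames of speech features
predicted_poses = motion_decoder(speech_encoder(mfcc_frames))
print(predicted_poses.shape)                              # torch.Size([100, 45])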
Tangible user interfaces: past, present and future directions
In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs including perspectives from the cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.