An information assistant system for the prevention of tunnel vision in crisis management
In crisis management, tunnel vision is a set of biases in decision makers' cognitive processes that often leads to an incorrect understanding of the real crisis situation, biased perception of information, and improper decisions. The tunnel vision phenomenon is a consequence of both the challenges inherent in the task and the natural limitations of human cognition. An information assistant system is proposed with the purpose of preventing tunnel vision. The system serves as a platform for monitoring the ongoing crisis event: all information passes through the system before it arrives at the user. The system enhances data quality, reduces data quantity, and presents the crisis information in a manner that prevents or relieves the user's cognitive overload. While working with such a system, the users (crisis managers) are expected to be more likely to stay aware of the actual situation, remain open-minded to possibilities, and make proper decisions.
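The abstract leaves the mediation mechanism unspecified. As a minimal sketch, assuming incoming reports already carry a relevance score from an upstream scorer, such a layer might deduplicate reports, drop low-relevance items, and cap how many are shown at once; all names here are hypothetical, not the system's actual design:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical report structure; field names are illustrative only.
@dataclass
class CrisisReport:
    source: str
    timestamp: datetime
    text: str
    relevance: float  # 0..1, assumed to come from an upstream scorer

def filter_reports(reports, min_relevance=0.5, max_items=10):
    """Reduce quantity and surface the most relevant reports.

    Deduplicates identical texts, drops low-relevance items, and caps
    the number of items presented at once to limit cognitive load.
    """
    seen, unique = set(), []
    for r in sorted(reports, key=lambda r: r.relevance, reverse=True):
        if r.text not in seen:
            seen.add(r.text)
            unique.append(r)
    return [r for r in unique if r.relevance >= min_relevance][:max_items]
```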
Understanding of Object Manipulation Actions Using Human Multi-Modal Sensory Data
Object manipulation actions represent an important share of the Activities of
Daily Living (ADLs). In this work, we study how to enable service robots to use
human multi-modal data to understand object manipulation actions, and how they
can recognize such actions when humans perform them during human-robot
collaboration tasks. The multi-modal data in this study consists of videos,
hand motion data, applied forces as represented by the pressure patterns on the
hand, and measurements of the bending of the fingers, collected as human
subjects performed manipulation actions. We investigate two different
approaches. In the first, we show that the multi-modal signal (motion,
finger bending and hand pressure) generated by the action can be decomposed
into a set of primitives that can be seen as its building blocks. These
primitives are used to define 24 multi-modal primitive features, which in
turn serve as an abstract representation of the multi-modal signal and can
be employed for action recognition. In the second approach, visual features
are extracted from the data using a pre-trained image-classification deep
convolutional neural network and are subsequently used to train the
classifier. We also investigate whether adding data from other modalities
produces a statistically significant improvement in classifier performance.
We show that the two approaches achieve comparable performance, which
implies that image-based methods can successfully recognize human actions
during human-robot collaboration. On the other hand, to provide training
data from which the robot can learn how to perform object manipulation
actions, multi-modal data is the better alternative.
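As an illustration of the second approach, here is a minimal Python sketch that uses a pre-trained CNN as a fixed feature extractor; the abstract does not name the network, so ResNet-18 via torchvision is an illustrative stand-in, not the paper's actual setup:

```python
import torch
import torchvision.models as models
from torchvision import transforms
from PIL import Image

# Load a pre-trained image-classification CNN and drop its final
# classification head so it outputs feature vectors instead of labels.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

# Standard ImageNet preprocessing for the pre-trained weights.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(frame: Image.Image) -> torch.Tensor:
    """Return a 512-dim feature vector for one video frame."""
    with torch.no_grad():
        return model(preprocess(frame).unsqueeze(0)).squeeze(0)
```

The resulting per-frame vectors, pooled over a clip, would then be fed to any standard classifier (an SVM, for example) to recognize the action.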
End-user programming of a social robot by dialog
One of the main challenges faced by social robots is how to provide intuitive, natural and enjoyable usability for the end-user. In our everyday environment, social robots could be important tools for education and entertainment (edutainment) in a variety of ways. This paper presents a Natural Programming System (NPS) geared to non-expert users. The main goal of such a system is to provide an enjoyable interactive platform with which users can build different programs for their social robot. The end-user can build a complex net of actions and conditions (a sequence) in a social robot via mixed-initiative dialogs and multimodal interaction. The system has been implemented and tested in Maggie, a real social robot with multiple skills, conceived as a general HRI research platform. The robot's internal features (skills) have been implemented to be verbally accessible to the end-user, who can combine them into more complex ones following a bottom-up model. The built sequence is internally implemented as a Sequence Function Chart (SFC), which allows parallel execution, modularity and re-use. A multimodal Dialog Manager System (DMS) takes charge of keeping the interaction coherent. This work is intended to bring social robots closer to non-expert users, who can play the game of "teaching how to do things" with the robot.
The research leading to these results has received funding from the RoboCity2030-II-CM project (S2009/DPI-1559), funded by Programas de Actividades I+D en la Comunidad de Madrid and co-funded by Structural Funds of the EU.
The authors also gratefully acknowledge the funds provided by the Spanish Ministry of Science and Innovation
(MICINN) through the project named “A New Approach to Social Robots” (AROS) DPI2008-01109
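The abstract above describes the built sequence as a Sequence Function Chart (SFC) with parallel execution, modularity and re-use. A toy Python sketch of such a structure follows; the names are hypothetical illustrations of steps, guarded transitions and forking, not the system's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Step:
    action: Callable[[], None]           # skill to execute, e.g. a "say" skill
    transitions: List["Transition"] = field(default_factory=list)

@dataclass
class Transition:
    condition: Callable[[], bool]        # guard, e.g. "user answered yes"
    targets: List[Step]                  # more than one target = parallel fork

def run(active: List[Step]) -> None:
    """Fire each active step, then follow every enabled transition.

    Simplified SFC-style execution: steps whose guards hold activate
    their targets, and multiple targets run as parallel branches.
    """
    while active:
        next_steps: List[Step] = []
        for step in active:
            step.action()
            for t in step.transitions:
                if t.condition():
                    next_steps.extend(t.targets)
        active = next_steps
```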
Multi-modal task instructions to robots by naive users
This thesis presents a theoretical framework for the design of user-programmable
robots. The objective of the work is to investigate multi-modal unconstrained natural
instructions given to robots in order to design a learning robot. A corpus-centred
approach is used to design an agent that can reason, learn and interact with a human in a
natural unconstrained way. The corpus-centred design approach is formalised and
developed in detail. It requires the developer to record a human during interaction and
analyse the recordings to find instruction primitives. These are then implemented in a
robot. The focus of this work has been on how to combine speech and gesture using
rules extracted from the analysis of a corpus. A multi-modal integration algorithm is
presented that uses timing and semantics to group, match and unify gesture and
language. The algorithm always achieves correct pairings on the corpus and initiates
questions to the user in cases of ambiguity or missing information. The domain of card
games has been investigated because it offers a variety of games that are rich in rules and
contain sequences. A further focus of the work is on the translation of rule-based
instructions. Most multi-modal interfaces to date have only considered sequential
instructions. The combination of frame-based reasoning, a knowledge base organised as
an ontology, and a problem-solver engine is used to store these rules. Understanding
rule instructions, which contain conditional and hypothetical situations, requires an
agent with complex reasoning capabilities. A test system of the agent implementation is
also described. Tests that confirm the implementation by playing back the corpus are
presented. Furthermore, deployment test results with the implemented agent and human
subjects are presented and discussed. The tests showed that the rate of errors due to
sentences not being defined in the grammar does not decrease at an acceptable rate
when new grammar rules are introduced. This was particularly the case for complex
verbal rule instructions, which can be expressed in a large variety of ways.
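To make the timing-and-semantics integration concrete, here is a minimal Python sketch. The thesis's actual rules are corpus-derived, so the time threshold and the type-compatibility table below are assumptions for illustration only:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Utterance:
    text: str
    slot_type: str      # e.g. "object", "location" (illustrative types)
    time: float         # seconds, midpoint of the spoken phrase

@dataclass
class Gesture:
    kind: str           # e.g. "point_at_object", "point_at_location"
    time: float

# Assumed semantic compatibility between speech slots and gesture kinds.
COMPATIBLE = {"object": "point_at_object", "location": "point_at_location"}

def unify(utt: Utterance, gestures: list[Gesture],
          max_gap: float = 1.5) -> Optional[Gesture]:
    """Pair an utterance with the nearest semantically compatible gesture.

    Returns None when no gesture qualifies or the pairing is ambiguous,
    signalling that the agent should ask the user a clarifying question.
    """
    candidates = sorted(
        (g for g in gestures
         if g.kind == COMPATIBLE.get(utt.slot_type)
         and abs(g.time - utt.time) <= max_gap),
        key=lambda g: abs(g.time - utt.time))
    if not candidates:
        return None      # missing information: ask the user
    if len(candidates) > 1 and (abs(candidates[1].time - utt.time)
                                - abs(candidates[0].time - utt.time)) < 0.2:
        return None      # two near-equidistant gestures: ambiguous, ask
    return candidates[0]
```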
A Review of Verbal and Non-Verbal Human-Robot Interactive Communication
In this paper, an overview of human-robot interactive communication is
presented, covering verbal as well as non-verbal aspects of human-robot
interaction. Following a historical introduction, and motivation towards fluid
human-robot communication, ten desiderata are proposed, which provide an
organizational axis both of recent as well as of future research on human-robot
communication. Then, the ten desiderata are examined in detail, culminating
in a unifying discussion and a forward-looking conclusion.