Neural Network Based Reinforcement Learning for Audio-Visual Gaze Control in Human-Robot Interaction
This paper introduces a novel neural network-based reinforcement learning
approach for robot gaze control. Our approach enables a robot to learn and
adapt its gaze control strategy for human-robot interaction without the use
of external sensors or human supervision. The robot learns to focus its
attention on groups of people from its own audio-visual experiences,
independently of the number of people, their positions, and their physical
appearances. In particular, we use a recurrent neural network architecture in
combination with Q-learning to find an optimal action-selection policy; we
pre-train the network using a simulated environment that mimics realistic
scenarios involving speaking and silent participants, thus avoiding the need for
tedious sessions of a robot interacting with people. Our experimental
evaluation suggests that the proposed method is robust to parameter
estimation, i.e., the parameter values yielded by the method do not have a
decisive impact on performance. The best results are obtained when both
audio and visual information is jointly used. Experiments with the Nao robot
indicate that our framework is a step forward towards the autonomous learning
of socially acceptable gaze behavior.
Comment: Paper submitted to Pattern Recognition Letters
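The learning scheme the abstract describes can be illustrated with a deliberately simplified sketch. The paper itself uses a recurrent Q-network trained in a simulated audio-visual environment; the toy version below replaces both with tabular Q-learning on a one-step problem, where a noisy-free "audio cue" indicates which of three directions a speaker is in and the agent is rewarded for directing its gaze there. The environment, states, and reward are invented for illustration only.

```python
import random

# Toy stand-in for the paper's setup: state = speaker direction (0..2),
# action = gaze direction, reward = 1.0 when gaze matches the speaker.
ACTIONS = ["look_left", "look_center", "look_right"]

def step(speaker_dir, action_idx):
    """Reward 1.0 when the gaze action matches the speaker direction."""
    return 1.0 if action_idx == speaker_dir else 0.0

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(3)]  # Q[state][action]
    for _ in range(episodes):
        state = rng.randrange(3)        # speaker appears in a random direction
        if rng.random() < epsilon:      # epsilon-greedy exploration
            action = rng.randrange(len(ACTIONS))
        else:
            action = max(range(len(ACTIONS)), key=lambda a: q[state][a])
        reward = step(state, action)
        # one-step Q-learning update (each episode ends after a single step,
        # so there is no bootstrapped next-state term)
        q[state][action] += alpha * (reward - q[state][action])
    return q

q = train()
# The greedy policy should now point the gaze at the speaker in each state.
policy = [max(range(len(ACTIONS)), key=lambda a: q[s][a]) for s in range(3)]
```

In the paper, the tabular Q-function is replaced by a recurrent network so the policy can integrate audio-visual observations over time and handle a variable number of people; the update rule and epsilon-greedy exploration carry over unchanged.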
Teaching robots parametrized executable plans through spoken interaction
While operating in domestic environments, robots will necessarily
face difficulties not envisioned by their developers at programming
time. Moreover, the tasks to be performed by a robot will often
have to be specialized and/or adapted to the needs of specific users
and specific environments. Hence, learning how to operate by interacting
with the user seems to be a key enabling feature to support the
introduction of robots into everyday environments.
In this paper we contribute a novel approach for learning, through
interaction with the user, task descriptions that are defined as a
combination of primitive actions. The proposed approach makes
a significant step forward by making task descriptions parametric
with respect to domain specific semantic categories. Moreover, by
mapping the task representation into a task representation language,
we are able to express complex execution paradigms and to revise
the learned tasks in a high-level fashion. The approach is evaluated
in multiple practical applications with a service robot.
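The central idea of this abstract, task descriptions that are parametric over domain-specific semantic categories, can be sketched as follows. The representation, the primitive names, and the `?Category` slot notation below are invented for illustration and do not reproduce the paper's actual task representation language.

```python
from dataclasses import dataclass

# A task is a named sequence of primitive actions whose arguments may be
# semantic-category slots (written "?Category") rather than concrete
# objects, so one learned plan can be grounded for different users and
# environments.

@dataclass(frozen=True)
class Primitive:
    name: str
    args: tuple  # each arg is a concrete value or a "?Category" slot

@dataclass(frozen=True)
class Task:
    name: str
    params: tuple  # semantic categories the task is generic over
    body: tuple    # sequence of Primitive actions

def ground(task, bindings):
    """Instantiate a parametric task by substituting concrete objects."""
    def subst(arg):
        return bindings.get(arg, arg)
    return [Primitive(p.name, tuple(subst(a) for a in p.args)) for p in task.body]

# A "bring" task learned once, reusable for any drinkable object and person.
bring = Task(
    name="bring",
    params=("?Drinkable", "?Person"),
    body=(
        Primitive("goto", ("kitchen",)),
        Primitive("grasp", ("?Drinkable",)),
        Primitive("goto", ("?Person",)),
        Primitive("handover", ("?Drinkable", "?Person")),
    ),
)

plan = ground(bring, {"?Drinkable": "coke", "?Person": "alice"})
```

Because the body is plain structured data, a learned task can also be revised at a high level (e.g. inserting or replacing primitives) without re-teaching it from scratch, which is the kind of revision the abstract alludes to.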
On the design, development and experimentation of the ASTRO assistive robot integrated in smart environments
This paper presents the full experience of
designing, developing and testing ASTROMOBILE, a system
composed of an enhanced robotic platform integrated in an
Ambient Intelligence (AmI) infrastructure that was conceived to
support independent living, improved quality of life
and efficient care for senior citizens. The design and
implementation of the ASTRO robot was carried out by a
multidisciplinary team in which technology developers,
designers and end-user representatives collaborated using a
user-centred design approach. The key point of this work is to
demonstrate the general feasibility and scientific/technical
effectiveness of a mobile robotic platform integrated in a smart
environment and conceived to provide useful services to humans
and in particular to elderly people in domestic environments.
The main aspects addressed in this paper are the design of
ASTRO's appearance and functionalities through a
substantial analysis of users' requirements, the improvement of
ASTRO's behaviour by means of a smart sensor network
able to share information with the robot (Ubiquitous Robotics),
and the development of advanced human-robot interfaces based
on natural language.
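The "Ubiquitous Robotics" integration the abstract mentions, environmental sensors sharing information with the robot, can be sketched as a minimal publish/subscribe loop. The event bus, topic names, and payload fields below are hypothetical and chosen only to illustrate the pattern, not ASTRO's actual middleware.

```python
# Environmental sensors publish events to a shared bus; the robot subscribes
# so it can refine its behaviour with information it cannot sense on board
# (e.g. in which room the user currently is).

class EventBus:
    def __init__(self):
        self._subs = {}

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        for cb in self._subs.get(topic, []):
            cb(payload)

class Robot:
    def __init__(self, bus):
        self.last_user_location = None
        # Listen to a (hypothetical) presence sensor in the smart home.
        bus.subscribe("presence/livingroom", self.on_presence)

    def on_presence(self, payload):
        # Use the environment's sensor reading to localise the user.
        self.last_user_location = payload["room"]

bus = EventBus()
robot = Robot(bus)
bus.publish("presence/livingroom", {"room": "livingroom", "user": "user_1"})
```

The design point is that the robot and the AmI infrastructure stay loosely coupled: sensors need not know which robot (if any) consumes their events, and the robot can exploit whatever sensors the environment happens to provide.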