Neural Network Based Reinforcement Learning for Audio-Visual Gaze Control in Human-Robot Interaction
This paper introduces a novel neural-network-based reinforcement learning approach for robot gaze control. Our approach enables a robot to learn and adapt its gaze-control strategy for human-robot interaction without the use of external sensors or human supervision. The robot learns to focus its attention on groups of people from its own audio-visual experiences, independently of the number of people, their positions, and their physical appearance. In particular, we use a recurrent neural network architecture in combination with Q-learning to find an optimal action-selection policy; we pre-train the network in a simulated environment that mimics realistic scenarios involving speaking and silent participants, thus avoiding the need for tedious sessions of a robot interacting with people. Our experimental evaluation suggests that the proposed method is robust with respect to parameter estimation, i.e. the parameter values yielded by the method do not have a decisive impact on performance. The best results are obtained when audio and visual information are used jointly. Experiments with the Nao robot indicate that our framework is a step towards the autonomous learning of socially acceptable gaze behaviour.
Comment: Paper submitted to Pattern Recognition Letters
Overcoming Bandwidth Limitations in Wireless Sensor Networks by Exploitation of Cyclic Signal Patterns: An Event-triggered Learning Approach
Wireless sensor networks are used in a wide range of applications, many of which require real-time transmission of the measurements. Bandwidth limitations result in limitations on the sampling frequency and the number of sensors. This problem can be addressed by reducing the communication load via data compression and event-based communication approaches. The present paper focuses on the class of applications in which the signals exhibit unknown and potentially time-varying cyclic patterns. We review recently proposed event-triggered learning (ETL) methods that identify and exploit these cyclic patterns, we show how these methods can be applied to the nonlinear multivariable dynamics of three-dimensional orientation data, and we propose a novel approach that uses Gaussian process models. In contrast to other approaches, all three ETL methods work in real time and assure a small upper bound on the reconstruction error. The proposed methods are compared to several conventional approaches on experimental data from human subjects walking with a wearable inertial sensor network. They are found to reduce the communication load by 60–70%, which implies that two to three times more sensor nodes could be used at the same bandwidth.
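The event-triggered idea above can be sketched in a few lines: sender and receiver share the same predictive model of the cyclic signal, and a sample is transmitted only when the prediction error exceeds a threshold, which bounds the reconstruction error by that threshold. The predictor below is a naive previous-cycle model with an assumed known period; the paper uses Gaussian process models, so this is an illustrative stand-in for the trigger logic only.

```python
import math

# Event-triggered transmission sketch. `history` plays the role of the
# model state shared by sender and receiver: the last transmitted value
# for each phase of the cycle. Transmit only when the shared prediction
# misses by more than `threshold`; otherwise the receiver reconstructs
# the sample from the prediction, so its error never exceeds `threshold`.

PERIOD = 50        # assumed known cycle length, in samples (illustrative)
threshold = 0.05

def signal(t):
    # toy cyclic signal whose pattern drifts after t = 500
    return math.sin(2 * math.pi * t / PERIOD) + (0.2 if t >= 500 else 0.0)

history = {}       # shared model state: phase -> last transmitted value
transmitted = 0
max_error = 0.0

for t in range(1000):
    x = signal(t)
    phase = t % PERIOD
    prediction = history.get(phase, 0.0)
    if abs(x - prediction) > threshold:
        history[phase] = x            # event: transmit the exact sample
        transmitted += 1
        error = 0.0
    else:
        error = abs(x - prediction)   # receiver uses the prediction
    max_error = max(max_error, error)

print(f"transmitted {transmitted}/1000 samples, max error {max_error:.3f}")
```

Once the first cycle has been learned, communication stops almost entirely until the pattern drifts at t = 500, at which point the trigger fires again; this is the mechanism behind the 60–70% load reduction reported in the abstract.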
Towards responsive Sensitive Artificial Listeners
This paper describes work in the recently started project SEMAINE, which aims to build a set of Sensitive Artificial Listeners – conversational agents designed to sustain an interaction with a human user despite limited verbal skills, through robust recognition and generation of non-verbal behaviour in real time, both when the agent is speaking and when it is listening. We report on data collection and on the design of a system architecture in view of real-time responsiveness.
Explorations in engagement for humans and robots
This paper explores the concept of engagement, the process by which
individuals in an interaction start, maintain and end their perceived
connection to one another. The paper reports on one aspect of engagement among
human interactors – the effect of tracking faces during an interaction. It also
describes the architecture of a robot that can participate in conversational,
collaborative interactions with engagement gestures. Finally, the paper reports
on findings of experiments with human participants who interacted with a robot
when it either performed or did not perform engagement gestures. Results of the
human-robot studies indicate that people become engaged with robots: they
direct their attention to the robot more often in interactions where engagement
gestures are present, and they find interactions more appropriate when
engagement gestures are present than when they are not.
Comment: 31 pages, 5 figures, 3 tables
Development of a Semi-Autonomous Robotic System to Assist Children with Autism in Developing Visual Perspective Taking Skills
Robot-assisted therapy has been successfully used to help children with Autism Spectrum Condition (ASC) develop their social skills, but very often with the robot being fully controlled remotely by an adult operator. Although this method is reliable and allows the operator to conduct a therapy session in a customised, child-centred manner, it increases the cognitive workload on the human operator, since it requires them to divide their attention between the robot and the child to ensure that the robot is responding appropriately to the child's behaviour. In addition, a remote-controlled robot is not aware of information regarding the interaction with children (e.g., body gestures, head pose, proximity, etc.) and consequently does not have the ability to shape live human-robot interactions (HRIs). Further, a remote-controlled robot typically does not have the capacity to record this information, and additional effort is required to analyse the interaction data. For these reasons, using a remote-controlled robot in robot-assisted therapy may be unsustainable for long-term interactions. To lighten the cognitive burden on the human operator and to provide a consistent therapeutic experience, it is essential to create some degree of autonomy and enable the robot to perform some autonomous behaviours during interactions with children. Our previous research with the Kaspar robot either implemented a fully autonomous scenario involving pairs of children, which lacked the often important input of the supervising adult, or, in most of our research, used a remote control in the hand of the adult or the children to operate the robot. In contrast, this paper provides an overview of the design and implementation of a robotic system called Sense-Think-Act, which converts the remote-controlled scenarios of our humanoid robot into a semi-autonomous social agent with the capacity to play games autonomously (under human supervision) with children in real-world school settings.
The developed system has been implemented on the humanoid robot Kaspar and evaluated in a trial with four children with ASC at a local specialist secondary school in the UK, where data from 11 child-robot interactions (CRIs) were collected. The results from this trial demonstrate that the system successfully provided the robot with appropriate control signals to operate in a semi-autonomous manner without any latency, which supports autonomous CRIs and suggests that the proposed architecture has promising potential for supporting CRIs in real-world applications.
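The Sense-Think-Act pattern named above can be sketched as a minimal control loop. The perception inputs (head pose, proximity), thresholds, and game actions below are hypothetical placeholders, not taken from the Kaspar implementation; the point is the structure: sense the child's state, decide under supervisor veto, then act and log the interaction for later analysis.

```python
from dataclasses import dataclass

# Minimal Sense-Think-Act loop sketch (illustrative; all signal names,
# thresholds, and actions are assumptions, not the paper's implementation).

@dataclass
class Perception:
    child_attending: bool   # e.g., head pose oriented toward the robot
    child_in_range: bool    # e.g., proximity below some threshold

def sense(raw):
    # map raw sensor readings to a structured perception of the child
    return Perception(child_attending=raw["head_pose_deg"] < 30.0,
                      child_in_range=raw["distance_m"] < 1.5)

def think(p, supervisor_approves=True):
    # decide autonomously, but keep the supervising adult in the loop
    if not supervisor_approves:
        return "wait"                     # human supervision can veto
    if p.child_in_range and p.child_attending:
        return "play_turn"
    if p.child_in_range:
        return "prompt_attention"
    return "wait"

def act(action, log):
    log.append(action)                    # record data for later analysis
    return action

log = []
raw = {"head_pose_deg": 10.0, "distance_m": 1.0}
action = act(think(sense(raw)), log)
print(action)  # play_turn
```

The supervisor flag in `think` reflects the abstract's "autonomously (under human supervision)": autonomy handles the routine game flow, while the adult retains a veto, and the log addresses the data-recording gap of a purely remote-controlled robot.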