
    Bioinspired auditory sound localisation for improving the signal to noise ratio of socially interactive robots

    In this paper we describe a bioinspired hybrid architecture for acoustic sound source localisation and tracking to increase the signal-to-noise ratio (SNR) between the speaker and background sources for a socially interactive robot's speech recogniser. The model incorporates Interaural Time Difference for azimuth estimation and Recurrent Neural Networks for trajectory prediction. Results are presented comparing the SNR and the speech recognition rates of localised and non-localised speaker sources. They show that by orientating towards the sound source of interest, the recognition rate for that source can be increased.
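
    The core geometric idea, azimuth from the interaural time difference, can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the microphone spacing, the sample rate, and the use of plain cross-correlation for delay estimation are all assumptions.

    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s in air at ~20 C
    MIC_DISTANCE = 0.2       # metres between the two microphones (assumed)
    SAMPLE_RATE = 16000      # Hz (assumed)

    def estimate_azimuth(left, right):
        """Estimate source azimuth (radians) from the interaural time difference.

        The inter-channel delay is taken at the peak of the cross-correlation;
        azimuth then follows from the far-field model ITD = d * sin(theta) / c.
        """
        corr = np.correlate(left, right, mode="full")
        lag = np.argmax(corr) - (len(right) - 1)   # delay in samples
        itd = lag / SAMPLE_RATE                    # delay in seconds
        # Clamp to the physically possible range before inverting the model.
        sin_theta = np.clip(itd * SPEED_OF_SOUND / MIC_DISTANCE, -1.0, 1.0)
        return float(np.arcsin(sin_theta))

    Turning the robot until the estimated azimuth reaches zero points the microphone array at the speaker, which is the orienting behaviour that raises the SNR in the paper's results.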

    Neural Network Based Reinforcement Learning for Audio-Visual Gaze Control in Human-Robot Interaction

    This paper introduces a novel neural network-based reinforcement learning approach for robot gaze control. Our approach enables a robot to learn and adapt its gaze control strategy for human-robot interaction without the use of external sensors or human supervision. The robot learns to focus its attention on groups of people from its own audio-visual experiences, independently of the number of people, their positions and their physical appearances. In particular, we use a recurrent neural network architecture in combination with Q-learning to find an optimal action-selection policy; we pre-train the network using a simulated environment that mimics realistic scenarios involving speaking and silent participants, thus avoiding the need for tedious sessions of a robot interacting with people. Our experimental evaluation suggests that the proposed method is robust against parameter estimation, i.e. the parameter values yielded by the method do not have a decisive impact on the performance. The best results are obtained when audio and visual information are used jointly. Experiments with the Nao robot indicate that our framework is a step towards the autonomous learning of socially acceptable gaze behavior.
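
    As a rough illustration of the recurrent-network-plus-Q-learning combination described above (a sketch only: the observation size, network widths, and the random stand-in transitions below are placeholders, not the paper's configuration), a GRU that maps a history of audio-visual observations to Q-values, trained with a standard temporal-difference target, might look like this:

    import torch
    import torch.nn as nn

    class RecurrentQNet(nn.Module):
        """GRU over a window of audio-visual observations, plus a Q-value head."""
        def __init__(self, obs_dim, hidden_dim, n_actions):
            super().__init__()
            self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, n_actions)

        def forward(self, obs_seq):            # obs_seq: (batch, time, obs_dim)
            out, _ = self.gru(obs_seq)
            return self.head(out[:, -1])       # Q-values at the last time step

    # One Q-learning update on a batch of (simulated) transitions.
    net = RecurrentQNet(obs_dim=32, hidden_dim=64, n_actions=5)
    target_net = RecurrentQNet(obs_dim=32, hidden_dim=64, n_actions=5)
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    gamma = 0.99

    obs = torch.randn(8, 10, 32)               # stand-in for simulator output
    action = torch.randint(0, 5, (8,))
    reward = torch.randn(8)
    next_obs = torch.randn(8, 10, 32)

    q = net(obs).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = reward + gamma * target_net(next_obs).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    Pre-training against a simulator, as the paper does, amounts to drawing these transitions from the simulated environment before any interaction with real people.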

    Towards modelling group-robot interactions using a qualitative spatial representation

    This paper tackles the problem of finding a suitable qualitative representation for robots to reason about activity spaces where they carry out tasks interacting with a group of people. The Qualitative Spatial model for Group Robot Interaction (QS-GRI) defines Kendon-formations depending on: (i) the relative location of the robot with respect to the other individuals involved in the interaction; (ii) the individuals' orientation; (iii) the shared peri-personal distance; and (iv) the role of the individuals (observer, main character or interactive). The evolution of Kendon-formations is also studied, that is, how one formation is transformed into another. These transformations can depend on the role that the robot has and on the number of people involved.
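
    The abstract does not spell out the QS-GRI rules, but a toy two-agent check in the same spirit (the distance band and angle thresholds below are illustrative guesses, not the model's calibration) shows how relative location, orientation, and peri-personal distance can yield a qualitative formation label:

    import math

    def classify_formation(pos_a, pos_b, heading_a, heading_b):
        """Toy Kendon-style formation label for two agents.

        pos_*: (x, y) positions in metres; heading_*: orientations in radians.
        """
        dist = math.hypot(pos_b[0] - pos_a[0], pos_b[1] - pos_a[1])
        if not 0.45 <= dist <= 2.0:       # outside a shared peri-personal zone
            return "none"
        # Relative heading, wrapped to [0, pi], separates the arrangements.
        rel = abs((heading_a - heading_b + math.pi) % (2 * math.pi) - math.pi)
        if rel > math.radians(150):
            return "vis-a-vis"            # roughly facing each other
        if rel < math.radians(30):
            return "side-by-side"         # roughly facing the same way
        return "L-shape"

    A transformation between formations is then simply a change in this label over time, which is the kind of evolution the paper studies.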

    Development of a Semi-Autonomous Robotic System to Assist Children with Autism in Developing Visual Perspective Taking Skills

    Robot-assisted therapy has been successfully used to help children with Autism Spectrum Condition (ASC) develop their social skills, but very often with the robot being fully controlled remotely by an adult operator. Although this method is reliable and allows the operator to conduct a therapy session in a customised, child-centred manner, it increases the cognitive workload on the human operator, since it requires them to divide their attention between the robot and the child to ensure that the robot responds appropriately to the child's behaviour. In addition, a remote-controlled robot is not aware of information about the interaction with children (e.g., body gestures, head pose, proximity) and consequently cannot shape live HRIs. Furthermore, a remote-controlled robot typically has no capacity to record this information, so additional effort is required to analyse the interaction data. For these reasons, using a remote-controlled robot in robot-assisted therapy may be unsustainable for long-term interactions. To lighten the cognitive burden on the human operator and to provide a consistent therapeutic experience, it is essential to create some degree of autonomy and enable the robot to perform some behaviours autonomously during interactions with children. Our previous research with the Kaspar robot either implemented a fully autonomous scenario involving pairs of children, which lacked the often important input of the supervising adult, or, in most of our research, used a remote control in the hand of the adult or the children to operate the robot. In contrast, this paper provides an overview of the design and implementation of a robotic system called Sense-Think-Act, which converts the remote-controlled scenarios of our humanoid robot into a semi-autonomous social agent with the capacity to play games autonomously (under human supervision) with children in real-world school settings. The developed system has been implemented on the humanoid robot Kaspar and evaluated in a trial with four children with ASC at a local specialist secondary school in the UK, where data from 11 Child-Robot Interactions (CRIs) were collected. The results from this trial demonstrated that the system successfully provided the robot with appropriate control signals to operate in a semi-autonomous manner without any latency, suggesting that the proposed architecture has promising potential for supporting CRIs in real-world applications.
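
    The abstract names the architecture but does not detail its components; the loop structure it implies can be sketched generically as follows, with the perception, game-logic, and actuation hooks as placeholders for the Kaspar-specific modules:

    import time

    class SenseThinkAct:
        """Skeleton of a sense-think-act loop with a human override.

        `sensors`, `policy`, and `actuators` are placeholders for the
        robot-specific components (e.g. head-pose tracking, game logic,
        the robot's motor interface); only the loop structure is sketched.
        """
        def __init__(self, sensors, policy, actuators):
            self.sensors, self.policy, self.actuators = sensors, policy, actuators
            self.supervisor_override = None     # set by the supervising adult

        def step(self):
            percept = self.sensors.read()             # SENSE: gestures, head pose, proximity
            if self.supervisor_override is not None:  # the human stays in the loop
                action = self.supervisor_override
            else:
                action = self.policy.decide(percept)  # THINK: choose the next game move
            self.actuators.execute(action)            # ACT: drive the robot
            return percept, action                    # keep a record for later analysis

        def run(self, hz=10):
            while True:
                self.step()
                time.sleep(1.0 / hz)

    Returning the percept/action pair from each step is what gives such a system the interaction record that a purely remote-controlled robot lacks.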

    Conceptual spatial representations for indoor mobile robots

    We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates a linguistic framework that actively supports the map acquisition process, and which is used for situated dialogue. Finally, we discuss the capabilities of the integrated system
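
    A bare-bones rendering of that layered structure (the layer names follow the abstract's description of maps at increasing abstraction; the concrete fields are assumptions for illustration):

    from dataclasses import dataclass, field

    @dataclass
    class MetricLayer:
        occupancy_grid: list                         # laser-built grid map

    @dataclass
    class TopologicalLayer:
        places: dict = field(default_factory=dict)   # place id -> region of the grid
        edges: set = field(default_factory=set)      # traversable place pairs

    @dataclass
    class ConceptualLayer:
        # place id -> inferred category, e.g. "kitchen" once a stove is recognised
        categories: dict = field(default_factory=dict)
        objects: dict = field(default_factory=dict)  # place id -> recognised objects

    @dataclass
    class ConceptualMap:
        """Maps at several levels of abstraction, from metric to conceptual."""
        metric: MetricLayer
        topological: TopologicalLayer
        conceptual: ConceptualLayer

    Situated dialogue then operates mainly on the top layer, e.g. resolving "the kitchen" to a place id, while navigation uses the lower ones.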

    Evaluation of Using Semi-Autonomy Features in Mobile Robotic Telepresence Systems

    Mobile robotic telepresence systems used for social interaction scenarios require users to steer robots in a remote environment. As a consequence, a heavy workload can be placed on users who are unfamiliar with robotic telepresence units. One way to lessen this workload is to automate certain operations performed during a telepresence session, in order to assist remote drivers in navigating the robot in new environments. Such operations include autonomous robot localization, navigation to certain points in the home, and automatic docking of the robot to the charging station. In this paper we describe the implementation of such autonomous features along with a user evaluation study. The evaluation scenario focuses on novice users' first experience with the system; importantly, the scenario assumed that participants had as little prior information about the system as possible. Four different use-cases were identified from the analysis of user behaviour.
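
    How such assistive operations might sit behind the driver's interface can be sketched as a thin dispatcher (the command names and the `navigator` backend are invented for the illustration; any localization and path-planning stack could stand behind them):

    class TelepresenceAssist:
        """Dispatches high-level assistive commands so the remote driver
        does not have to steer manually."""

        def __init__(self, navigator, known_places, dock_pose):
            self.navigator = navigator          # localization + path planning
            self.known_places = known_places    # e.g. {"kitchen": (x, y, theta)}
            self.dock_pose = dock_pose

        def go_to(self, place):
            # Autonomous navigation to a named point in the home.
            self.navigator.navigate_to(self.known_places[place])

        def dock(self):
            # Automatic docking: approach the station, then fine-align.
            self.navigator.navigate_to(self.dock_pose)
            self.navigator.fine_align(self.dock_pose)

    The driver then issues one command, e.g. `assist.dock()`, instead of tele-steering the whole approach, which is exactly the workload reduction the study evaluates.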