    Human Detection And Tracking For Human-Robot Interaction On The REEM-C Humanoid Robot

    The interaction between humanoid robots and humans is a growing area of research, as frameworks and models are continuously developed to improve the ways in which humanoids may integrate into society. These humanoids often require intelligence beyond what they are originally endowed with in order to handle more complex human-robot interaction scenarios. This intelligence can come from the use of additional sensors, including microphones and cameras, which allow the robot to better perceive its environment. This thesis explores scenarios involving moving conversational partners and the ways in which the REEM-C Humanoid Robot may interact with them. The additional intelligence developed here centres on external microphones deployed on the robot, together with computer vision algorithms built using the camera in the REEM-C's head. The first topic of this thesis explores how binaural acoustic intelligence can be used to estimate the direction of arrival of human speech on the REEM-C Humanoid. This includes the development of audio signal processing techniques, their optimization, and their deployment for real-time use on the REEM-C. The second topic highlights the computer vision approaches that can enable better human-robot interaction for a robotic system. This section describes the relevant algorithms and their development, in a way that is efficient and accurate for real-time robot usage. The third topic explores the natural behaviours of humans in conversation with moving interlocutors. These behaviours are measured via a motion capture study and modelled with mathematical formulations, which are then used on the REEM-C Humanoid Robot. The REEM-C uses this tracking model to follow detected human speakers using the intelligence outlined in the previous sections. The final topic focuses on how the acoustic intelligence, vision algorithms, and tracking model can be used in tandem for human-robot interaction with potentially multiple human subjects. This includes sensor fusion approaches that help correct for limitations in the audio and video algorithms, synchronization, and an evaluation of behaviour in the form of a short user study. Applications of this framework are discussed, and relevant quantitative and qualitative results are presented. A chapter introducing the work done to establish a chatbot conversational system is also included. The final thesis work is an amalgamation of the above topics and presents a complete and robust human-robot interaction framework for the REEM-C based on tracking moving conversational partners with audio and video intelligence.
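
    The abstract does not spell out which direction-of-arrival algorithm the thesis uses, but a standard binaural approach is GCC-PHAT: cross-correlate the two microphone channels with a phase transform, take the peak lag as the inter-aural time difference, and convert it to an angle. The Python sketch below illustrates that idea only; the sample rate, microphone spacing, and function name are illustrative assumptions, not REEM-C specifics.

    ```python
    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s, at room temperature

    def gcc_phat_doa(left, right, fs=16000, mic_distance=0.2):
        """Estimate a speech direction of arrival (degrees) from two mics.

        Minimal GCC-PHAT sketch: the phase transform whitens the cross-power
        spectrum so the correlation peak depends on delay, not amplitude.
        `fs` and `mic_distance` are assumed values, not REEM-C parameters.
        """
        n = len(left) + len(right)                    # pad for linear correlation
        L = np.fft.rfft(left, n=n)
        R = np.fft.rfft(right, n=n)
        cross = L * np.conj(R)
        cross /= np.abs(cross) + 1e-12                # PHAT weighting
        cc = np.fft.irfft(cross, n=n)
        max_lag = int(fs * mic_distance / SPEED_OF_SOUND)
        cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))
        tau = (np.argmax(np.abs(cc)) - max_lag) / fs  # time difference (s)
        # Clamp to the physically valid range before taking the arcsine.
        sin_theta = np.clip(tau * SPEED_OF_SOUND / mic_distance, -1.0, 1.0)
        return np.degrees(np.arcsin(sin_theta))
    ```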

    Assistive technology design and development for acceptable robotics companions for ageing years

    A new stream of research and development responds to changes in life expectancy across the world. It includes technologies which enhance the well-being of individuals, specifically older people. The ACCOMPANY project focuses on home companion technologies and issues surrounding technology development for assistive purposes. The project responds to some overlooked aspects of technology design, divided into multiple areas such as empathic and social human-robot interaction, robot learning and memory visualisation, and monitoring persons' activities at home. To bring these aspects together, a dedicated task is identified to ensure technological integration of these multiple approaches on an existing robotic platform, Care-O-Bot®3, in the context of a smart-home environment utilising a multitude of sensor arrays. Formative and summative evaluation cycles are then used to assess the emerging prototype towards identifying acceptable behaviours and roles for the robot, for example a role as a butler or a trainer, while also comparing user requirements to achieved progress. In a novel approach, the project considers ethical concerns and, by highlighting principles such as autonomy, independence, enablement, safety, and privacy, embarks on providing a discussion medium where user views on these principles, and the tensions that exist between some of them, for example between privacy or autonomy and safety, can be captured and considered in design cycles and throughout project development.

    Translating Videos to Commands for Robotic Manipulation with Deep Recurrent Neural Networks

    We present a new method to translate videos to commands for robotic manipulation using Deep Recurrent Neural Networks (RNNs). Our framework first extracts deep features from the input video frames with a deep Convolutional Neural Network (CNN). Two RNN layers with an encoder-decoder architecture are then used to encode the visual features and sequentially generate the output words as the command. We demonstrate that the translation accuracy can be improved by allowing a smooth transition between the two RNN layers and using a state-of-the-art feature extractor. The experimental results on our new challenging dataset show that our approach outperforms recent methods by a fair margin. Furthermore, we combine the proposed translation module with the vision and planning system to let a robot perform various manipulation tasks. Finally, we demonstrate the effectiveness of our framework on the full-size humanoid robot WALK-MAN.
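
    As a rough illustration of the pipeline described above, the PyTorch sketch below feeds per-frame CNN features into an encoder RNN whose final state seeds a decoder RNN that emits the command one word at a time. The dimensions, vocabulary size, and class name are assumptions for illustration; the authors' exact architecture and training setup are in the paper.

    ```python
    import torch.nn as nn

    class Video2Command(nn.Module):
        """Sketch of a two-RNN encoder-decoder for video-to-command translation.

        Frame features are assumed to come from a pretrained CNN (e.g. a
        2048-d vector per frame); all sizes here are illustrative.
        """

        def __init__(self, feat_dim=2048, hidden=512, vocab_size=1000):
            super().__init__()
            self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.embed = nn.Embedding(vocab_size, hidden)
            self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab_size)

        def forward(self, frame_feats, word_ids):
            # frame_feats: (batch, n_frames, feat_dim) CNN features
            # word_ids:    (batch, n_words) ground-truth command tokens
            _, state = self.encoder(frame_feats)
            # The encoder's final (h, c) state initialises the decoder,
            # handing the video summary to the language model directly.
            dec_out, _ = self.decoder(self.embed(word_ids), state)
            return self.out(dec_out)  # (batch, n_words, vocab_size) logits
    ```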

    Towards Active Event Recognition

    Directing robot attention to recognise activities and to anticipate events like goal-directed actions is a crucial skill for human-robot interaction. Unfortunately, issues like intrinsic time constraints, the spatially distributed nature of the entailed information sources, and the existence of a multitude of unobservable states affecting the system, like latent intentions, have long rendered achievement of such skills a rather elusive goal. The problem tests the limits of current attention control systems. It requires an integrated solution for tracking, exploration, and recognition, which have traditionally been seen as separate problems in active vision. We propose a probabilistic generative framework based on a mixture of Kalman filters and information gain maximisation that uses predictions in both recognition and attention control. This framework can efficiently use the observations of one element in a dynamic environment to provide information on other elements, and consequently enables guided exploration. Interestingly, the sensor-control policy, derived directly from first principles, represents the intuitive trade-off between finding the most discriminative clues and maintaining overall awareness. Experiments on a simulated humanoid robot observing a human executing goal-oriented actions demonstrated improvements in recognition time and precision over baseline systems.
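
    One way to make the information-gain idea concrete: in a linear-Gaussian (Kalman) model, the entropy reduction from observing a tracked element does not depend on the measurement value, so it can be computed ahead of time and attention directed greedily at the most informative element. The sketch below shows that single selection step under assumed per-element (P, H, R) matrices; the paper's full mixture-of-Kalman-filters framework is not reproduced here.

    ```python
    import numpy as np

    def gaussian_entropy(P):
        """Differential entropy of a Gaussian with covariance P, up to a constant."""
        return 0.5 * np.log(np.linalg.det(P))

    def expected_info_gain(P, H, R):
        """Entropy drop from one linear-Gaussian measurement.

        For a Kalman filter the posterior covariance is independent of the
        actual measurement, so the gain is computable before observing.
        """
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        P_post = (np.eye(P.shape[0]) - K @ H) @ P
        return gaussian_entropy(P) - gaussian_entropy(P_post)

    def select_attention_target(elements):
        """Greedy attention control: pick the most informative element.

        `elements` is an assumed list of (P, H, R) tuples, one per tracked
        scene element, standing in for the paper's filter bank.
        """
        return int(np.argmax([expected_info_gain(P, H, R)
                              for P, H, R in elements]))
    ```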

    Development of a Semi-Autonomous Robotic System to Assist Children with Autism in Developing Visual Perspective Taking Skills

    Robot-assisted therapy has been successfully used to help children with Autism Spectrum Condition (ASC) develop their social skills, but very often with the robot being fully controlled remotely by an adult operator. Although this method is reliable and allows the operator to conduct a therapy session in a customised, child-centred manner, it increases the cognitive workload on the human operator, since it requires them to divide their attention between the robot and the child to ensure that the robot is responding appropriately to the child's behaviour. In addition, a remote-controlled robot is not aware of information regarding the interaction with children (e.g., body gestures, head pose, proximity, etc.) and consequently does not have the ability to shape live human-robot interactions (HRIs). Further to this, a remote-controlled robot typically does not have the capacity to record this information, and additional effort is required to analyse the interaction data. For these reasons, using a remote-controlled robot in robot-assisted therapy may be unsustainable for long-term interactions. To lighten the cognitive burden on the human operator and to provide a consistent therapeutic experience, it is essential to create some degree of autonomy and enable the robot to perform some autonomous behaviours during interactions with children. Our previous research with the Kaspar robot either implemented a fully autonomous scenario involving pairs of children, which lacked the often important input of the supervising adult, or, in most cases, used a remote control in the hands of the adult or the children to operate the robot. In contrast, this paper provides an overview of the design and implementation of a robotic system called Sense-Think-Act, which converts the remote-controlled scenarios of our humanoid robot into a semi-autonomous social agent with the capacity to play games autonomously (under human supervision) with children in real-world school settings. The developed system has been implemented on the humanoid robot Kaspar and evaluated in a trial with four children with ASC at a local specialist secondary school in the UK, where data from 11 Child-Robot Interactions (CRIs) were collected. The results from this trial demonstrated that the system was successful in providing the robot with appropriate control signals to operate in a semi-autonomous manner without any latency, which supports autonomous CRIs and suggests that the proposed architecture has promising potential for supporting CRIs in real-world applications.
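
    As a loose sketch of what a supervised Sense-Think-Act cycle can look like, the code below runs a fixed-rate loop in which sensing, decision-making, and actuation are pluggable hooks and a supervisor callback may veto any proposed action. The hooks, class name, and loop rate are placeholders, not Kaspar's actual interfaces.

    ```python
    import time

    class SenseThinkAct:
        """Minimal semi-autonomous control loop with a human veto hook."""

        def __init__(self, sense, think, act, supervisor_ok=lambda action: True):
            self.sense = sense                    # () -> observation (e.g. head pose)
            self.think = think                    # observation -> proposed action
            self.act = act                        # action -> None (drive the robot)
            self.supervisor_ok = supervisor_ok    # human approval / veto callback

        def run(self, hz=10.0, steps=None):
            period = 1.0 / hz
            step = 0
            while steps is None or step < steps:
                observation = self.sense()        # Sense: read interaction cues
                action = self.think(observation)  # Think: choose a game response
                if action is not None and self.supervisor_ok(action):
                    self.act(action)              # Act: only with human approval
                time.sleep(period)
                step += 1
    ```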