    Facial expressions emotional recognition with NAO robot

    Human-robot interaction research is diverse and covers a wide range of topics. All aspects of human factors and robotics fall within the purview of HRI research insofar as they provide insight into developing effective tools, protocols, and systems to enhance HRI. For example, significant research effort is being devoted to designing human-robot interfaces that make it easier for people to interact with robots. HRI is an extremely active research field where new and important work is being published at a fast pace. It is crucial for humanoid robots to understand the emotions of people for efficient human-robot interaction. In this work, the robot first detects the human face with the Viola-Jones technique. Facial distance measurements are then accumulated using a geometry-based facial distance measurement method, and the Facial Action Coding System (FACS) is used to detect movements of the measured facial points. Finally, the measured facial movements are evaluated to obtain the instantaneous emotional state of the human face; the approach has been applied specifically to the NAO humanoid robot.
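    The abstract names the pipeline stages but gives no implementation. Below is a minimal, hedged sketch of the detection front-end only, using OpenCV's Haar-cascade detector (a standard implementation of Viola-Jones); the landmark, distance-measurement, and FACS steps are indicated only as comments and are assumptions about how such a pipeline is typically structured, not the authors' code.

```python
# Sketch of the face-detection front-end using OpenCV's Haar cascade
# (Viola-Jones). Later pipeline stages are only indicated as comments.
import cv2

# Standard frontal-face cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """Return bounding boxes (x, y, w, h) of faces found in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# For each detected face, a full system would next locate facial landmarks,
# compute geometric distances between them, track their frame-to-frame
# movements, and map those movements to FACS action units before
# classifying the instantaneous emotion.
```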

    A robot swarm assisting a human fire-fighter

    Emergencies in industrial warehouses are a major concern for fire-fighters. The large dimensions, together with the development of dense smoke that drastically reduces visibility, represent major challenges. The GUARDIANS robot swarm is designed to assist fire-fighters in searching a large warehouse. In this paper we discuss the technology developed for a swarm of robots assisting fire-fighters. We explain the swarming algorithms that enable the robots to react to and follow humans without requiring any communication. Next we discuss the wireless communication system, a so-called mobile ad-hoc network. The communication network also provides the means to locate the robots and humans; thus, the robot swarm is able to provide guidance information to the humans. Together with the fire-fighters we explored how the robot swarm should feed information back to the human fire-fighter, and we have designed and experimented with interfaces for presenting swarm-based information to human beings.
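    The swarming behaviour is described only at this level of detail. Purely as an illustrative sketch (not the GUARDIANS algorithm), one communication-free following rule combines attraction toward a locally sensed human with repulsion from nearby robots; all names and gain values below are assumptions.

```python
# Illustrative communication-free following rule: each robot steers toward
# the human it senses locally while keeping a separation distance from
# neighbouring robots it can perceive.
import numpy as np

FOLLOW_GAIN = 0.5    # attraction toward the human
SEPARATION = 1.0     # desired minimum spacing between robots (metres)
REPULSE_GAIN = 0.8   # repulsion strength from too-close neighbours

def swarm_step(robot_pos, human_pos, neighbour_positions):
    """Compute one velocity command for a single robot from local sensing only."""
    velocity = FOLLOW_GAIN * (human_pos - robot_pos)
    for n in neighbour_positions:
        offset = robot_pos - n
        dist = np.linalg.norm(offset)
        if 0.0 < dist < SEPARATION:
            # Push away from neighbours closer than the separation distance.
            velocity += REPULSE_GAIN * (offset / dist) * (SEPARATION - dist)
    return velocity
```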

    Temporal-Difference Learning to Assist Human Decision Making during the Control of an Artificial Limb

    In this work we explore the use of reinforcement learning (RL) to help with human decision making, combining state-of-the-art RL algorithms with an application to prosthetics. Managing human-machine interaction is a problem of considerable scope, and the simplification of human-robot interfaces is especially important in the domains of biomedical technology and rehabilitation medicine. For example, amputees who control artificial limbs are often required to quickly switch between a number of control actions or modes of operation in order to operate their devices. We suggest that by learning to anticipate (predict) a user's behaviour, artificial limbs could take on an active role in a human's control decisions so as to reduce the burden on their users. Recently, we showed that RL in the form of general value functions (GVFs) could be used to accurately detect a user's control intent prior to their explicit control choices. In the present work, we explore the use of temporal-difference learning and GVFs to predict when users will switch their control influence between the different motor functions of a robot arm. Experiments were performed using a multi-function robot arm that was controlled by muscle signals from a user's body (similar to conventional artificial limb control). Our approach was able to acquire and maintain forecasts about a user's switching decisions in real time. It also provides an intuitive and reward-free way for users to correct or reinforce the decisions made by the machine learning system. We expect that when a system is certain enough about its predictions, it can begin to take over switching decisions from the user to streamline control and potentially decrease the time and effort needed to complete tasks. This preliminary study therefore suggests a way to naturally integrate human- and machine-based decision-making systems.
    Comment: 5 pages, 4 figures. This version to appear at the 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making, Princeton, NJ, USA, Oct. 25-27, 2013
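    To make the prediction mechanism concrete, here is a hedged sketch of a general value function (GVF) trained with TD(lambda) to anticipate mode-switch events from a feature vector. The class name, feature construction, and parameter values are illustrative assumptions, not the authors' code.

```python
# GVF trained with TD(lambda): the prediction target (cumulant) is 1.0 on
# time steps where the user switches control modes, and 0.0 otherwise.
import numpy as np

class SwitchPredictionGVF:
    def __init__(self, n_features, alpha=0.1, gamma=0.97, lam=0.9):
        self.w = np.zeros(n_features)   # learned weights
        self.z = np.zeros(n_features)   # eligibility trace
        self.alpha, self.gamma, self.lam = alpha, gamma, lam

    def predict(self, phi):
        """Current estimate of the discounted upcoming switch signal."""
        return float(np.dot(self.w, phi))

    def update(self, phi, cumulant, phi_next):
        """One TD(lambda) step; phi and phi_next are feature vectors for the
        current and next time steps, cumulant is the switch indicator."""
        delta = cumulant + self.gamma * np.dot(self.w, phi_next) - np.dot(self.w, phi)
        self.z = self.gamma * self.lam * self.z + phi
        self.w += self.alpha * delta * self.z
```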

    Exploring haptic interfacing with a mobile robot without visual feedback

    Search and rescue scenarios are often complicated by low or no visibility conditions. The lack of visual feedback hampers orientation and causes significant stress for human rescue workers. The Guardians project [1] pioneered a group of autonomous mobile robots assisting a human rescue worker operating within close range. Trials were held with fire-fighters of South Yorkshire Fire and Rescue. It became clear that the subjects were by no means prepared to give up their procedural routines and the feeling of security they provide: they simply ignored instructions that contradicted those routines.

    Mission Specialist Human-Robot Interaction in Micro Unmanned Aerial Systems

    This research investigated the Mission Specialist role in micro unmanned aerial systems (mUAS) and was informed by human-robot interaction (HRI) and technology findings, resulting in the design of an interface that increased the individual performance of 26 untrained CBRN (chemical, biological, radiological, nuclear) responders during two field studies, and yielded formative observations for HRI in mUAS. Findings from the HRI literature suggested a Mission Specialist requires a role-specific interface that shares visual common ground with the Pilot role and allows active control of the unmanned aerial vehicle (UAV) payload camera. Current interaction technology prohibits this as responders view the same interface as the Pilot and give verbal directions for navigation and payload control. A review of interaction principles resulted in a synthesis of five design guidelines and a system architecture that were used to implement a Mission Specialist interface on an Apple iPad. The Shared Roles Model was used to model the mUAS human-robot team using three formal role descriptions synthesized from the literature (Flight Director, Pilot, and Mission Specialist). The Mission Specialist interface was evaluated through two separate field studies involving 26 CBRN experts who did not have mUAS experience. The studies consisted of 52 mission trials to surveil, evaluate, and capture imagery of a chemical train derailment incident staged at Disaster City. Results from the experimental study showed that when a Mission Specialist was able to actively control the UAV payload camera and verbally coordinate with the Pilot, greater role empowerment (confidence, comfort, and perceived best individual and team performance) was reported by a majority of participants for similar tasks; thus, a role-specific interface is preferred and should be used by untrained responders instead of viewing the same interface as the Pilot in mUAS. Formative observations made during this research suggested: i) establishing common ground in mUAS is both verbal and visual, ii) the type of coordination (active or passive) preferred by the Mission Specialist is affected by command-level experience and perceived responsibility for the robot, and iii) a separate Pilot role is necessary regardless of preferred coordination type in mUAS. This research is of importance to HRI and CBRN researchers and practitioners, as well as those in the fields of robotics, human-computer interaction, and artificial intelligence, because it found that a human Pilot role is necessary for assistance and understanding, and that there are hidden dependencies in the human-robot team that affect Mission Specialist performance.

    Incremental Learning for Robot Perception through HRI

    Scene understanding and object recognition are crucial yet difficult skills for robots to achieve. Recently, Convolutional Neural Networks (CNNs) have shown success in this task. However, there is still a gap between their performance on image datasets and in real-world robotics scenarios. We present a novel paradigm for incrementally improving a robot's visual perception through active human interaction. In this paradigm, the user introduces novel objects to the robot by means of pointing and voice commands. Given this information, the robot visually explores the object and adds images of it to re-train the perception module. Our base perception module builds on recent developments in object detection and recognition using deep learning. Our method leverages state-of-the-art CNNs from off-line batch learning, human guidance, robot exploration, and incremental on-line learning.
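    As a hedged illustration of the incremental-update loop (not the authors' implementation), the sketch below uses an online linear classifier over precomputed image features in place of the CNN-based detector; the function, variable names, and class-count bound are assumptions.

```python
# When the user points at a novel object and names it, the robot explores the
# object, extracts features from the collected images, and incrementally
# updates the classifier without retraining from scratch.
import numpy as np
from sklearn.linear_model import SGDClassifier

ALL_LABELS = np.arange(100)        # assumed upper bound on object classes
model = SGDClassifier()            # online stand-in for the CNN perception module

def introduce_object(label_id, feature_batch):
    """Incrementally add a user-introduced object from its image features."""
    labels = np.full(len(feature_batch), label_id)
    model.partial_fit(feature_batch, labels, classes=ALL_LABELS)
```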

    Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges

    Human-swarm interaction (HSI) involves a number of human factors impacting human behaviour throughout the interaction. As the technologies used within HSI advance, it becomes more tempting to increase the level of swarm autonomy within the interaction to reduce the workload on humans. Yet the prospective negative effects of high levels of autonomy on human situational awareness can hinder this process. Flexible autonomy aims at trading off these effects by changing the level of autonomy within the interaction when required, with mixed initiative combining human preferences and the automation's recommendations to select an appropriate level of autonomy at a given point in time. However, the effective implementation of mixed-initiative systems raises fundamental questions about how to combine human preferences and automation recommendations, how to realise the selected level of autonomy, and what the future impacts on the cognitive states of a human are. We explore open challenges that hamper the process of developing effective flexible autonomy. We then highlight the potential benefits of using system modelling techniques in HSI by illustrating how they provide HSI designers with an opportunity to evaluate different strategies for assessing the state of the mission and for adapting the level of autonomy within the interaction to maximise mission success metrics.
    Comment: Author version, accepted at the 2018 IEEE Annual Systems Modelling Conference, Canberra, Australia
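    The abstract raises, but does not answer, the question of how to combine human preferences with automation recommendations. Purely as an illustrative assumption (no such rule is given in the paper), one simple mixed-initiative scheme blends the two, shifting weight toward the automation as operator workload rises.

```python
# Illustrative mixed-initiative rule for choosing a level of autonomy (LOA):
# blend the human's preferred level with the automation's recommendation,
# weighting the automation more heavily when the operator is overloaded.
LOA_LEVELS = ["manual", "shared", "supervised", "autonomous"]

def select_loa(human_pref, automation_rec, workload):
    """human_pref and automation_rec are indices into LOA_LEVELS;
    workload is in [0, 1], where 1 means the operator is overloaded."""
    blended = (1.0 - workload) * human_pref + workload * automation_rec
    index = min(len(LOA_LEVELS) - 1, max(0, round(blended)))
    return LOA_LEVELS[index]

# Example: an overloaded operator (workload 0.8) preferring manual control (0)
# while the automation recommends supervised autonomy (2):
# select_loa(0, 2, 0.8) -> blended 1.6 -> rounds to 2 -> "supervised"
```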