
    Prediction of Human Trajectory Following a Haptic Robotic Guide Using Recurrent Neural Networks

    Social intelligence is an important requirement for enabling robots to collaborate with people. In particular, human path prediction is an essential capability for robots in that it prevents potential collisions with a human and allows the robot to safely make larger movements. In this paper, we present a method for predicting the trajectory of a human who follows a haptic robotic guide without using sight, which is valuable for assistive robots that aid the visually impaired. We apply a deep learning method based on recurrent neural networks using multimodal data: (1) human trajectory, (2) movement of the robotic guide, (3) haptic input data measured from the physical interaction between the human and the robot, and (4) human depth data. We collected actual human trajectory and multimodal response data through indoor experiments. Our model outperformed the baseline even when using only the robot data together with the observed human trajectory, and showed further improvement when the additional haptic and depth data were included.
    Comment: 6 pages, Submitted to IEEE World Haptics Conference 201
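The recurrent prediction described in the abstract can be sketched as follows. This is a minimal, untrained toy model, not the authors' network: the feature sizes, the Elman-style cell, and the linear read-out are all illustrative assumptions about how observed multimodal steps might map to predicted 2-D positions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-step features: 2-D human position, 2-D robot guide
# motion, 3-axis haptic force, 1-D depth summary -> 8 inputs.
IN, HID, OUT = 8, 16, 2

# Randomly initialised Elman-RNN weights (training omitted for brevity).
Wxh = rng.normal(0, 0.1, (HID, IN))
Whh = rng.normal(0, 0.1, (HID, HID))
Why = rng.normal(0, 0.1, (OUT, HID))

def predict_trajectory(seq):
    """Roll the RNN over a (T, IN) multimodal sequence and emit a
    predicted (x, y) position for each time step."""
    h = np.zeros(HID)
    out = []
    for x in seq:
        h = np.tanh(Wxh @ x + Whh @ h)   # recurrent state update
        out.append(Why @ h)              # linear read-out to 2-D position
    return np.array(out)

seq = rng.normal(size=(10, IN))          # 10 observed time steps
pred = predict_trajectory(seq)
print(pred.shape)                        # one 2-D prediction per step
```

In practice the recurrence would be an LSTM or GRU trained on the collected trajectories, but the data flow (multimodal step in, position out) is the same.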

    A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction

    Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands, or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods of disambiguating natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions -- mixed reality, augmented reality, and a monitor as the baseline -- using objective measures such as time and accuracy, and subjective measures like engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but no differences were found in task time. Despite the higher error rates in the mixed reality condition, participants found that modality more engaging than the other two, but overall showed a preference for the augmented reality condition over the monitor and mixed reality conditions.

    Robotic Rabbit Companions: amusing or a nuisance?

    Most studies in human-robot interaction involve controlled experiments in a laboratory, and only a limited number of studies have put robotic companions into people’s homes. Introducing robots into a real-life environment not only poses many technical challenges but also raises several methodological issues. And even though there may be a gain in the ecological validity of the findings, there are other drawbacks that limit the validity of the results. In this paper we reflect on some of these issues based on the experience we gained in the SERA project, where a robotic companion was placed in the homes of a few people for ten days. We try to draw some general lessons from this experience.

    Towards Assistive Feeding with a General-Purpose Mobile Manipulator

    General-purpose mobile manipulators have the potential to serve as a versatile form of assistive technology. However, their complexity creates challenges, including the risk of being too difficult to use. We present a proof-of-concept robotic system for assistive feeding that consists of a Willow Garage PR2, a high-level web-based interface, and specialized autonomous behaviors for scooping and feeding yogurt. As a step towards use by people with disabilities, we evaluated our system with 5 able-bodied participants. All 5 successfully ate yogurt using the system and reported high rates of success for the system's autonomous behaviors. Also, Henry Evans, a person with severe quadriplegia, operated the system remotely to feed an able-bodied person. In general, people who operated the system, including Henry, reported that it was easy to use. The feeding system also incorporates corrective actions designed to be triggered either autonomously or by the user. In an offline evaluation using data collected with the feeding system, a new version of our multimodal anomaly detection system outperformed prior versions.
    Comment: This short 4-page paper was accepted and presented as a poster on May 16, 2016 in the ICRA 2016 workshop on 'Human-Robot Interfaces for Enhanced Physical Interactions' organized by Arash Ajoudani, Barkan Ugurlu, Panagiotis Artemiadis, Jun Morimoto. It was peer reviewed by one reviewer.
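One common way to build the kind of multimodal anomaly detector the abstract mentions is to model nominal executions as a multivariate Gaussian and flag executions that lie far from it. The sketch below is an illustrative stand-in, not the authors' detector; the feature vector (force, sound energy, position error) and the threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nominal feeding executions: each row is a multimodal
# feature vector, e.g. (contact force, sound energy, spoon position error).
nominal = rng.normal([1.0, 0.2, 0.0], 0.1, size=(200, 3))

mu = nominal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(nominal, rowvar=False))

def is_anomalous(x, threshold=16.0):
    """Flag an execution whose squared Mahalanobis distance from the
    nominal distribution exceeds a fixed threshold."""
    d = x - mu
    return float(d @ cov_inv @ d) > threshold

print(is_anomalous(np.array([1.0, 0.2, 0.0])))   # near nominal -> False
print(is_anomalous(np.array([3.0, 2.0, 1.0])))   # far from nominal -> True
```

A detector like this would trigger the system's corrective actions when an execution drifts outside the nominal envelope.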

    Empowering and assisting natural human mobility: The simbiosis walker

    This paper presents the complete development of the Simbiosis Smart Walker. The device is equipped with a set of sensor subsystems to acquire user-machine interaction forces and the temporal evolution of the user's feet during gait. The authors present an adaptive filtering technique used for the identification and separation of different components found in the human-machine interaction forces. This technique made it possible to isolate the components related to navigational commands and to develop a fuzzy logic controller to guide the device. The Smart Walker was clinically validated at the Spinal Cord Injury Hospital of Toledo, Spain, showing good acceptance by spinal cord injury patients and clinical staff.
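The separation of force components and the fuzzy guidance step can be sketched as below. This is a simplified illustration, not the paper's method: a first-order low-pass filter stands in for the adaptive filtering scheme, and the crisp thresholds stand in for a real fuzzy inference system; the 1.5 N boundary is an assumed value.

```python
import math

def separate_components(force, alpha=0.1):
    """Split a lateral force signal into a slow 'intent' component
    (first-order low-pass) and a fast gait-oscillation residual."""
    intent, low = [], force[0]
    for f in force:
        low += alpha * (f - low)   # exponential moving average
        intent.append(low)
    residual = [f - l for f, l in zip(force, intent)]
    return intent, residual

# Synthetic signal: a constant 2 N steering push plus gait oscillation.
signal = [2.0 + math.sin(0.8 * t) for t in range(200)]
intent, gait = separate_components(signal)

def fuzzy_turn_command(lateral_force):
    """Toy rule base mapping the isolated intent force to a turn command."""
    if lateral_force > 1.5:
        return "turn_right"
    if lateral_force < -1.5:
        return "turn_left"
    return "straight"

print(fuzzy_turn_command(intent[-1]))   # steady rightward push -> "turn_right"
```

The key design point is that the controller acts on the filtered intent component only, so the walker does not react to the oscillations induced by each step.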

    Rehabilitation robot cell for multimodal standing-up motion augmentation

    The paper presents a robot cell for multimodal standing-up motion augmentation. The robot cell is aimed at augmenting the standing-up capabilities of impaired or paraplegic subjects. The setup incorporates the rehabilitation robot device, a functional electrical stimulation system, measurement instrumentation and a cognitive feedback system. For controlling the standing-up process, a novel approach was developed that integrates the voluntary activity of the person into the control scheme of the rehabilitation robot. The simulation results demonstrate the possibility of “patient-driven” robot-assisted standing-up training. Moreover, to extend the system capabilities, audio cognitive feedback is used to guide the subject throughout the rising motion. For the feedback generation, a granular synthesis method is utilized to display high-dimensional, dynamic data. The principle of operation and an example sonification in standing-up are presented. In this manner, by integrating the cognitive feedback and “patient-driven” actuation systems, an effective motion augmentation system is proposed in which the motion coordination is under the voluntary control of the user.
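The granular-synthesis feedback can be sketched as follows. This is a minimal illustration under assumed mappings, not the paper's sonification: the choice of trunk angle as the displayed variable, and the specific pitch and grain-density mappings, are hypothetical.

```python
import numpy as np

SR = 8000  # sample rate in Hz

def grain(freq, dur=0.03):
    """One short Hann-windowed sine grain."""
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)

def sonify(trunk_angle_deg):
    """Map a body-state variable (here a hypothetical trunk angle) to
    grain pitch and density, then overlap-add the grains into a buffer."""
    freq = 200 + 10 * trunk_angle_deg            # pitch rises with flexion
    n_grains = max(1, int(trunk_angle_deg / 5))  # more grains when flexed
    buf = np.zeros(SR // 4)                      # 0.25 s output buffer
    g = grain(freq)
    hop = max(1, (buf.size - g.size) // n_grains)
    for i in range(n_grains):
        start = i * hop
        buf[start:start + g.size] += g
    return buf

audio = sonify(30.0)   # e.g. 30 degrees of trunk flexion
print(audio.shape)
```

Because each grain is only tens of milliseconds long, several state variables can be mapped to independent grain parameters (pitch, density, amplitude) at once, which is what makes granular synthesis suited to displaying high-dimensional, dynamic data.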