
    Development of gesture-controlled robotic arm for upper limb hemiplegia therapy

    Human-computer interaction using hand gesture recognition has emerged as a prominent approach in recent rehabilitation studies. Vision-based systems such as the Microsoft Kinect and the Leap Motion sensor (LMS) provide a very informative description of hand pose that can be exploited for tracking applications. Compared to the Kinect depth camera, the LMS delivers less information and a smaller interaction zone, but its output data are more accurate. This study therefore explores the LMS as an effective basis for a gesture-controlled robotic arm for improving upper-extremity motor function therapy. Several engineering challenges are addressed to develop a viable system for the therapy application: real-time and accurate hand movement detection, the limits of the robot workspace and of hand-robot coordination, and the development of a hand motion-based robot positioning algorithm. An EMU HS4 robot arm and controller were retrofitted to allow movement in 3 degrees of freedom (DOF) and to be controlled directly by LMS-based gesture recognition. A series of wrist-revolving rehabilitation exercises was conducted, showing good agreement between hand movement and robot motion. The potential of the proposed system was further illustrated and verified through comprehensive rehabilitation training exercises, with around 90% accuracy for flexion-extension training. In conclusion, these findings have significant implications for the application of hand gesture recognition to robot-based upper limb assistive and rehabilitation procedures.
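    The abstract describes a hand motion-based robot positioning algorithm driven by LMS hand tracking. As an illustration only, the following minimal Python sketch shows one common way such a mapping can be done: clamping a tracked palm position to the sensor's interaction box and rescaling it into the robot's reachable workspace. All numeric limits, the function name hand_to_robot_position, and the coordinate frames are assumptions for illustration, not the paper's actual algorithm or the EMU HS4 controller interface.

    import numpy as np

    # Assumed interaction-box limits of the hand sensor, in millimetres
    # (x: left/right, y: up/down, z: forward/back). Values are illustrative only.
    HAND_BOX_MIN = np.array([-120.0,  80.0, -120.0])
    HAND_BOX_MAX = np.array([ 120.0, 300.0,  120.0])

    # Assumed reachable workspace of a 3-DOF robot arm, in millimetres.
    ROBOT_MIN = np.array([-250.0,   0.0, -250.0])
    ROBOT_MAX = np.array([ 250.0, 400.0,  250.0])

    def hand_to_robot_position(palm_position_mm):
        """Map a palm position from the sensor frame to a robot target position.

        The palm position is clamped to the sensor's interaction box, normalised
        to [0, 1] per axis, and rescaled to the robot workspace, so the robot is
        never commanded outside its reachable volume.
        """
        p = np.clip(np.asarray(palm_position_mm, dtype=float),
                    HAND_BOX_MIN, HAND_BOX_MAX)
        normalised = (p - HAND_BOX_MIN) / (HAND_BOX_MAX - HAND_BOX_MIN)
        return ROBOT_MIN + normalised * (ROBOT_MAX - ROBOT_MIN)

    if __name__ == "__main__":
        # Example: a palm hovering slightly right of centre above the sensor.
        print(hand_to_robot_position([40.0, 190.0, -10.0]))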

    A real-time human-robot interaction system based on gestures for assistive scenarios

    Natural and intuitive human interaction with robotic systems is key to developing robots that assist people in an easy and effective way. In this paper, a Human-Robot Interaction (HRI) system able to recognize gestures usually employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture consisting of pointing at an object is also recognized. The pointed location is then estimated in order to detect candidate objects the user may be referring to. When the pointed object is unclear to the robot, a disambiguation procedure is performed by means of either a verbal or gestural dialogue. This skill allows the robot to pick up an object on behalf of a user who may have difficulty doing so alone. The overall system, composed of a NAO robot, a Wifibot robot, a Kinect v2 sensor, and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests was completed, allowing correct performance to be assessed in terms of recognition rates, ease of use, and response times.
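    The dynamic gestures above are recognized with a Dynamic Time Warping (DTW) approach over gesture-specific features computed from depth maps. The sketch below is a minimal, generic DTW nearest-template classifier in Python, not the authors' pipeline: the feature sequences, the templates dictionary, and the rejection threshold are illustrative assumptions.

    import numpy as np

    def dtw_distance(seq_a, seq_b):
        """Dynamic Time Warping distance between two feature sequences.

        seq_a, seq_b: arrays of shape (T, D) holding per-frame feature vectors
        (e.g. joint positions extracted from depth maps).
        """
        a, b = np.asarray(seq_a, float), np.asarray(seq_b, float)
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]

    def classify_gesture(observed, templates, threshold=10.0):
        """Return the label of the closest template, or None if nothing is close.

        templates: dict mapping a gesture label (e.g. "wave", "nod") to a
        reference feature sequence recorded for that gesture. The threshold
        value is an arbitrary illustrative rejection cutoff.
        """
        best_label, best_dist = None, np.inf
        for label, template in templates.items():
            dist = dtw_distance(observed, template)
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label if best_dist <= threshold else None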

    Multimodal Signal Processing and Learning Aspects of Human-Robot Interaction for an Assistive Bathing Robot

    We explore new aspects of assistive living based on smart human-robot interaction (HRI) that involve automatic recognition and online validation of speech and gestures in a natural interface, providing social features for HRI. We introduce a complete framework and resources for a real-life scenario in which elderly subjects are supported by an assistive bathing robot, addressing health and hygiene care issues. We contribute a new dataset, a suite of tools used for data acquisition, and a state-of-the-art pipeline for multimodal learning within the framework of the I-Support bathing robot, with emphasis on audio and RGB-D visual streams. We consider privacy issues by evaluating the depth visual stream along with the RGB stream, using Kinect sensors. The audio-gestural recognition task on this new dataset yields up to 84.5%, while the online validation of the I-Support system on elderly users reaches up to 84% when the two modalities are fused together. Considering the difficulty of the specific task, the results are promising enough to support further research in the area of multimodal recognition for assistive social HRI. Upon acceptance of the paper, part of the data will be made publicly available.
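    The reported online results combine the audio and gestural modalities. As a hedged illustration of one simple way two recognizers can be combined, the sketch below performs weighted late fusion of per-command scores; the command labels, the fuse_scores function, and the weighting scheme are assumptions for illustration and are not taken from the I-Support system.

    import numpy as np

    def fuse_scores(audio_scores, gesture_scores, audio_weight=0.5):
        """Weighted late fusion of per-command recognition scores.

        audio_scores, gesture_scores: dicts mapping a command label to the
        score assigned by the speech and gesture recognizers respectively.
        Scores are normalised per modality before being combined, so one
        modality cannot dominate only because its scores use a larger scale.
        """
        labels = sorted(set(audio_scores) | set(gesture_scores))
        a = np.array([audio_scores.get(l, 0.0) for l in labels], float)
        g = np.array([gesture_scores.get(l, 0.0) for l in labels], float)
        a = a / a.sum() if a.sum() > 0 else a
        g = g / g.sum() if g.sum() > 0 else g
        fused = audio_weight * a + (1.0 - audio_weight) * g
        return labels[int(np.argmax(fused))], dict(zip(labels, fused))

    if __name__ == "__main__":
        # Command labels below are hypothetical examples, not the dataset's vocabulary.
        audio = {"wash_back": 0.7, "stop": 0.2, "scrub_legs": 0.1}
        gesture = {"wash_back": 0.4, "stop": 0.5, "scrub_legs": 0.1}
        print(fuse_scores(audio, gesture, audio_weight=0.6))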