
    Gesture Recognition for Human-Robot Interaction for Service Robots

    Robots are quickly becoming an intrinsic part of our daily lives, and it is becoming important to provide users with a simple and intuitive way to interact with them. In this thesis, we present a multimodal human-robot interface for an existing service robot, aimed mainly at assisting people with reduced mobility during the shopping process in dynamic and crowded environments (e.g. supermarkets). The interface was created to recognize the "Start", "Stop" and "Pause" commands. The proposed human-robot interface includes two types of interaction: verbal and non-verbal. Regarding verbal interaction, four state-of-the-art implementations (Google Speech Recognition, Houndify, Microsoft Bing Voice Recognition, CMUSphinx) were tested and compared; Houndify proved to be the most suitable for our project. Regarding non-verbal interaction, a novel method for hand gesture recognition based on depth information was implemented and tested. The software was developed for a robot equipped with an RGB-D camera, which captures images in real time from which the robot user's position is obtained. Taking as input the information already processed by the robot, the arm/hand is extracted using a depth-based segmentation approach. Principal component analysis is then applied to each segmented object to compute its center of mass and eigenvectors, from which the hand's tip and orientation are extracted. A Kalman filter is then applied to track the hand and obtain its position over time. Given this information, and based on finite state machines implemented to describe the gestures (start, stop, pause), gesture recognition is performed.
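    The PCA and Kalman-filter steps of this pipeline can be sketched roughly as below. The thesis abstract does not give the exact parameterization, so the function names, the elbow reference point used to orient the principal axis, and the constant-velocity model with its noise values are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def hand_tip_and_orientation(hand_points, elbow_point):
    """Estimate hand tip and orientation from a segmented arm/hand point cloud.

    hand_points : (N, 3) array of 3D points from the depth-based segmentation.
    elbow_point : (3,) reference point nearer the body, used only to orient the axis.
    """
    center = hand_points.mean(axis=0)            # center of mass
    cov = np.cov((hand_points - center).T)       # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    axis = eigvecs[:, -1]                        # principal axis (largest eigenvalue)
    if np.dot(axis, center - elbow_point) < 0:   # point the axis towards the fingertips
        axis = -axis
    projections = (hand_points - center) @ axis
    tip = hand_points[np.argmax(projections)]    # extreme point along the principal axis
    return center, axis, tip

class HandTracker:
    """Constant-velocity Kalman filter over the 3D tip position (illustrative parameters)."""
    def __init__(self, dt=1.0 / 30.0, q=1e-3, r=1e-2):
        self.x = np.zeros(6)                     # state: [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)          # position integrates velocity
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = q * np.eye(6)                   # process noise
        self.R = r * np.eye(3)                   # measurement noise

    def step(self, z):
        """Predict, then update with the measured tip position z (3,)."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]                        # filtered hand position
```

    The filtered positions produced per frame would then feed the start/stop/pause finite state machines described in the abstract.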

    User evaluation of an interactive learning framework for single-arm and dual-arm robots

    The final publication is available at link.springer.com. Social robots are expected to adapt to their users and, like their human counterparts, learn from the interaction. In our previous work, we proposed an interactive learning framework that enables a user to intervene and modify a segment of the robot arm trajectory. The framework uses gesture teleoperation and reinforcement learning to learn new motions. In the current work, we compared the user experience with the proposed framework implemented on single-arm and dual-arm Barrett 7-DOF WAM robots equipped with a Microsoft Kinect camera for user tracking and gesture recognition. User performance and workload were measured in a series of trials with two groups of 6 participants using the two robot settings in different order for counterbalancing. The experimental results showed that, for the same task, users required less time and produced shorter robot trajectories with the single-arm robot than with the dual-arm robot. The results also showed that users who performed the task with the single-arm robot first experienced considerably less workload when later performing the task with the dual-arm robot, while achieving a higher task success rate in a shorter time.
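    The abstract does not describe how an intervened trajectory segment is actually replaced, so the sketch below is only one plausible splice-and-blend step under assumed conventions: the waypoint-matrix representation, the replace_segment name, and the linear blending at the junctions are illustrative choices, not the framework's actual mechanism.

```python
import numpy as np

def replace_segment(trajectory, start, end, correction, blend=5):
    """Splice a user-demonstrated correction into waypoints [start:end] of a trajectory.

    trajectory : (N, D) array of waypoints (joint angles or Cartesian coordinates).
    correction : (M, D) array captured, e.g., via gesture teleoperation.
    blend      : number of waypoints linearly interpolated around each junction.
    """
    trajectory = np.asarray(trajectory, dtype=float)
    correction = np.asarray(correction, dtype=float)
    new_traj = np.vstack([trajectory[:start], correction, trajectory[end:]])

    # Smooth each junction so the spliced motion stays continuous.
    for junction in (start, start + len(correction)):
        lo = max(junction - blend, 0)
        hi = min(junction + blend, len(new_traj) - 1)
        alphas = np.linspace(0.0, 1.0, hi - lo + 1)[:, None]
        new_traj[lo:hi + 1] = (1 - alphas) * new_traj[lo] + alphas * new_traj[hi]
    return new_traj
```

    A metric such as the summed Euclidean length of the resulting waypoint sequence would correspond to the "shorter robot trajectories" reported in the evaluation.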

    A real-time human-robot interaction system based on gestures for assistive scenarios

    Natural and intuitive human interaction with robotic systems is key to developing robots that assist people in an easy and effective way. In this paper, a Human-Robot Interaction (HRI) system able to recognize gestures commonly used in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures, such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture, pointing at an object, is also recognized. The pointed location is then estimated in order to detect candidate objects the user may be referring to. When the pointed object is unclear to the robot, a disambiguation procedure is performed by means of either a verbal or gestural dialogue. This skill allows the robot to pick up an object on behalf of a user who may have difficulty doing so themselves. The overall system, composed of a NAO robot, a Wifibot robot, a Kinect v2 sensor and two laptops, is first evaluated in a structured lab setup. A broad set of user tests was then completed, allowing correct performance to be assessed in terms of recognition rates, ease of use and response times.
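    A minimal sketch of Dynamic Time Warping template matching for such dynamic gestures is given below. The per-frame feature vectors, the template dictionary and the rejection threshold are illustrative assumptions; the paper's gesture-specific features computed from depth maps are not reproduced here.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic Time Warping distance between two feature sequences of possibly different lengths.

    seq_a : (Na, D) array, seq_b : (Nb, D) array of per-frame gesture features.
    """
    na, nb = len(seq_a), len(seq_b)
    D = np.full((na + 1, nb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])   # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[na, nb]

def classify_gesture(observed, templates, threshold=None):
    """Label an observed sequence with the nearest template gesture (e.g. 'wave', 'nod')."""
    best_label, best_dist = None, np.inf
    for label, template in templates.items():
        dist = dtw_distance(observed, template)
        if dist < best_dist:
            best_label, best_dist = label, dist
    if threshold is not None and best_dist > threshold:
        return None, best_dist            # reject motions unlike any known gesture
    return best_label, best_dist
```

    In use, each recognized label would trigger the corresponding robot behaviour, while rejected sequences would leave the interaction state unchanged.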