5 research outputs found

    Qualitative Research in HRI: A Review and Taxonomy

    The field of human–robot interaction (HRI) is young and highly interdisciplinary, and the approaches, standards and methods proper to it are still being negotiated. This paper reviews the use of qualitative methods and approaches in the HRI literature in order to contribute to a foundation of approaches and methodologies for these new research areas. In total, 73 papers that use qualitative methods were systematically reviewed. The review reveals widespread use of qualitative methods in HRI, but very different approaches to reporting on them and high variance in the rigour with which they are applied. We also identify the key qualitative methods used. A major contribution of this paper is a taxonomy categorizing qualitative research in HRI along two dimensions: by 'study type' and by the specific qualitative method used.

    Real-time Target Tracking and Following with UR5 Collaborative Robot Arm

    The rise in camera usage and availability creates opportunities for developing robotics and computer vision applications. In particular, recent developments in depth sensing (e.g., Microsoft Kinect) enable new methods in the human–robot interaction (HRI) field. Moreover, collaborative robots (cobots) are being adopted by the manufacturing industry. This thesis focuses on HRI using the capabilities of the Microsoft Kinect, the Universal Robot 5 (UR5) and the Robot Operating System (ROS). In this study, the movement of a fingertip is perceived and reproduced on the robot side. Seamless cooperation, accurate trajectories and safety during collaboration are the most important aspects of HRI. The study aims to recognize and track the fingertip accurately and to transform its movement into motion of the UR5. It also aims to improve the motion performance of the UR5 and the efficiency of the interaction during collaboration. In the experimental part, a nearest-point approach is applied to the Kinect sensor's depth image (RGB-D). The approach is based on the Euclidean distance, which is robust across different environments. The Point Cloud Library (PCL) and its built-in filters are used to process the depth data. After the depth data provided by the Microsoft Kinect have been processed, the displacement of the nearest point is transmitted to the robot via ROS. On the robot side, the MoveIt! motion planner is used to produce a smooth trajectory. Once the data were processed successfully and the motion code was implemented without bugs, a total accuracy of 84.18% was achieved. After improvements in motion planning and data processing, the total accuracy increased to 94.14%. Finally, the latency was reduced from 3–4 seconds to 0.14 seconds.
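    The nearest-point idea described in the abstract can be sketched briefly: among the 3D points of a depth frame, the fingertip is typically the surface closest to the sensor, so picking the point with the minimum Euclidean distance to the camera origin tracks it. This is an illustrative sketch, not the thesis's actual code; the function name, array shapes, and the example cloud are assumptions.

    ```python
    import numpy as np

    def nearest_point(points: np.ndarray) -> np.ndarray:
        """Return the point with the smallest Euclidean distance to the sensor origin.

        points: (N, 3) array of x, y, z coordinates (camera frame) from a depth image.
        """
        distances = np.linalg.norm(points, axis=1)  # per-point Euclidean distance
        return points[np.argmin(distances)]

    # Hypothetical depth cloud: the fingertip is the closest point to the sensor.
    cloud = np.array([[0.1, 0.2, 0.9],   # background surface
                      [0.0, 0.1, 0.4],   # fingertip (closest)
                      [0.3, 0.0, 1.2]])  # background surface
    tip = nearest_point(cloud)
    ```

    In a full pipeline along the lines the abstract describes, this point would be extracted per frame (after PCL filtering), and the frame-to-frame displacement forwarded over ROS as the robot's motion target.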

    Challenges to grounding in human-robot interaction


    Challenges to grounding in human-robot interaction: Sources of errors and miscommunications in remote exploration robotics

    We report a study of a human-robot system composed of