
    CobotTouch: AR-based Interface with Fingertip-worn Tactile Display for Immersive Operation/Control of Collaborative Robots

    Complex robotic tasks require human collaboration to benefit from human dexterity, but frequent human-robot interaction is mentally demanding and time-consuming. Intuitive, easy-to-use robot control interfaces reduce this burden on workers, especially inexperienced users. In this paper, we present CobotTouch, a novel intuitive robot control interface with fingertip haptic feedback. The proposed interface consists of a graphical user interface projected onto the robotic arm, which controls the position of the robot end-effector via gesture recognition, and a wearable haptic interface that delivers tactile feedback to the user's fingertips. We evaluated users' perception of the designed tactile patterns presented by the haptic interface and the intuitiveness of the proposed system for robot control in a use case. The results revealed a high average recognition rate of 75.25% for the tactile patterns, and the average NASA Task Load Index (TLX) scores indicated low mental and temporal demands, confirming that CobotTouch is highly intuitive for interaction with collaborative robots.
    Comment: 12 pages, 11 figures, accepted paper at AsiaHaptics 202
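
    As a concrete illustration of the workload measure cited above, the sketch below computes raw and weighted NASA-TLX scores; the participant ratings and pairwise-comparison weights are invented for the example and are not data from the paper.

```python
# Illustrative NASA-TLX scoring (not code or data from the paper).
# Raw TLX averages the six subscale ratings; the weighted variant uses
# pairwise-comparison weights that sum to 15.

SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings):
    """Unweighted (raw) TLX: mean of the six 0-100 subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

def weighted_tlx(ratings, weights):
    """Weighted TLX: ratings weighted by pairwise-comparison tallies."""
    assert sum(weights[s] for s in SUBSCALES) == 15
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

# Hypothetical ratings for one participant (0-100 scale):
ratings = {"mental": 25, "physical": 10, "temporal": 20,
           "performance": 15, "effort": 30, "frustration": 10}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 3, "effort": 3, "frustration": 1}
print(raw_tlx(ratings), weighted_tlx(ratings, weights))  # 18.33..., 21.0
```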

    Gestures in Machine Interaction

    Unencumbered gesture interaction (VGI) describes the use of unrestricted gestures in machine interaction. The development of such technology will enable users to interact with machines and virtual environments by performing actions like grasping, pinching, or waving without the need for peripherals. Advances in image processing and pattern recognition make such interaction viable and, in some applications, more practical than current keyboard, mouse, and touch-screen interaction. VGI is emerging as a popular topic among human-computer interaction (HCI), computer vision, and gesture research, and is developing into a topic with the potential to significantly impact the future of computer interaction, robot control, and gaming. This thesis investigates whether an ergonomic model of VGI can be developed and implemented on consumer devices by considering some of the barriers currently preventing such a model from being widely adopted. This research aims to address the development of freehand gesture interfaces and their accompanying syntax. Without detailed consideration of the evolution of this field, the development of unergonomic, inefficient interfaces that place undue strain on users becomes more likely. In the course of this thesis, some novel design and methodological assertions are made. The Gesture in Machine Interaction (GiMI) syntax model and the Gesture-Face Layer (GFL), developed in the course of this research, are designed to facilitate ergonomic gesture interaction. GiMI is an interface syntax model designed to enable cursor control, browser navigation commands, and steering control for remote robots or vehicles. By applying state-of-the-art image processing that enables three-dimensional (3D) recognition of human action, this research investigates how interface syntax can incorporate the broadest range of human actions. By advancing our understanding of ergonomic gesture syntax, this research aims to help future developers evaluate the efficiency of gesture interfaces, lexicons, and syntax.
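
    To make the idea of an interface syntax concrete, the sketch below maps (mode, gesture) pairs to commands for cursor, browser, and steering control; the gesture names, modes, and commands are hypothetical illustrations, not the GiMI lexicon from the thesis.

```python
# Hypothetical gesture-to-command syntax table in the spirit of a GiMI-style
# model; all gesture names, modes, and commands here are illustrative.

GESTURE_SYNTAX: dict[tuple[str, str], str] = {
    # (active mode, recognized gesture) -> command
    ("cursor",   "pinch"):       "click",
    ("cursor",   "open_palm"):   "release",
    ("browser",  "swipe_left"):  "back",
    ("browser",  "swipe_right"): "forward",
    ("steering", "tilt_left"):   "turn_left",
    ("steering", "tilt_right"):  "turn_right",
}

def interpret(mode: str, gesture: str) -> str | None:
    """Resolve a recognized gesture to a command within the active mode,
    returning None for gestures that have no meaning in that mode."""
    return GESTURE_SYNTAX.get((mode, gesture))

print(interpret("browser", "swipe_left"))  # -> back
print(interpret("cursor", "swipe_left"))   # -> None (syntax is mode-dependent)
```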

    A kinect-based gesture recognition approach for a natural human robot interface

    In this paper, we present a gesture recognition system for the development of a human-robot interaction (HRI) interface. Kinect cameras and the OpenNI framework are used to obtain real-time tracking of a human skeleton. Ten different gestures, performed by different persons, are defined. Quaternions of the joint angles are used as robust and significant features. Next, neural network (NN) classifiers are trained to recognize the different gestures. This work deals with several challenging tasks, such as the real-time implementation of a gesture recognition system and the temporal resolution of gestures. The HRI interface developed in this work includes three Kinect cameras placed at different locations in an indoor environment and an autonomous mobile robot that can be remotely controlled by one operator standing in front of one of the Kinects. Moreover, the system is supplied with a people re-identification module which guarantees that only one person at a time has control of the robot. The system's performance is first validated offline, and then online experiments are carried out, proving the real-time operation of the system as required by an HRI interface.
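
    A minimal sketch of the feature-and-classifier pipeline described above, assuming per-frame joint orientations expressed as unit quaternions fed to a small feed-forward network; the data shapes, random stand-in data, and hyperparameters are illustrative, not the paper's values.

```python
# Sketch: quaternion joint-angle features + neural-network gesture classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

N_JOINTS = 15  # OpenNI skeletons expose 15 tracked joints
rng = np.random.default_rng(0)

# Stand-in data: 200 frames, one quaternion (w, x, y, z) per joint.
X = rng.normal(size=(200, N_JOINTS, 4))
X /= np.linalg.norm(X, axis=2, keepdims=True)  # normalize to unit quaternions
X = X.reshape(len(X), -1)                      # flatten to (200, 60) features
y = rng.integers(0, 10, size=200)              # 10 gesture classes

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))  # per-frame gesture labels
```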

    Human-robot coexistence and interaction in open industrial cells

    Recent research results on human-robot interaction and collaborative robotics are leaving behind the traditional paradigm of robots living in a separate space inside safety cages, allowing humans and robots to work together to complete an increasing number of complex industrial tasks. In this context, the safety of the human operator is a main concern. In this paper, we present a framework for ensuring human safety in a robotic cell that allows human-robot coexistence and dependable interaction. The framework is based on a layered control architecture that exploits an effective algorithm for online monitoring of the relative human-robot distance using depth sensors. This method makes it possible to modify the robot behavior in real time depending on the user's position, without limiting the robot's operative workspace in an overly conservative way. In order to guarantee redundancy and diversity at the safety level, additional certified laser scanners monitor human-robot proximity in the cell, and safe communication protocols and logic units are used for smooth integration with industrial software for safe low-level robot control. The implemented concept includes a smart human-machine interface that supports in-process collaborative activities and contactless interaction through gesture recognition of operator commands. Coexistence and interaction are illustrated and tested in an industrial cell, in which a robot moves a tool that measures the quality of a polished metallic part while the operator performs a close evaluation of the same workpiece.
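
    The distance-driven behavior modification described above can be illustrated with a simple speed-and-separation-style scaling law; the thresholds and the linear profile below are assumptions made for the sketch, not the certified parameters of the presented framework.

```python
# Sketch: robot speed scaling from the minimum human-robot distance
# computed over depth-sensor point clouds. Thresholds are illustrative.
import numpy as np

D_STOP = 0.5  # [m] below this separation the robot halts (assumed value)
D_FULL = 2.0  # [m] beyond this separation the robot runs at full speed

def speed_scale(robot_points: np.ndarray, human_points: np.ndarray) -> float:
    """Return a velocity scaling factor in [0, 1]; inputs are (N, 3) and
    (M, 3) arrays of 3D points for the robot body and the tracked human."""
    d = np.linalg.norm(robot_points[:, None, :] - human_points[None, :, :], axis=2)
    d_min = d.min()
    if d_min <= D_STOP:
        return 0.0  # protective stop
    if d_min >= D_FULL:
        return 1.0  # unconstrained motion
    return (d_min - D_STOP) / (D_FULL - D_STOP)  # linear slowdown in between

# Toy example: two robot body points, one human point.
robot_pts = np.array([[0.0, 0.0, 0.5], [0.2, 0.0, 1.0]])
human_pts = np.array([[1.0, 0.5, 1.0]])
print(speed_scale(robot_pts, human_pts))  # ~0.30 at ~0.94 m separation
```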

    Kinect-Based Vision System of Mine Rescue Robot for Low Illuminous Environment

    This paper presents a Kinect-based vision system for a mine rescue robot operating in low-illumination underground environments. The somatosensory capability of the Kinect is used to realize hand gesture recognition covering both static hand gestures and hand actions. A K-curvature-based convexity detection method is proposed to fit the hand contour with a polygon. Hand action recognition is implemented using the NiTE library within the hand gesture recognition framework. The proposed method is compared with a BP neural network and template matching. Furthermore, taking advantage of the depth map information, a hand gesture recognition interface is established for human-machine interaction with the rescue robot. Experimental results verify the effectiveness of the Kinect-based vision system as a feasible alternative technology for the HMI of mine rescue robots.
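
    A minimal sketch of the K-curvature convexity test named above: for each contour point, the angle between the vectors to the points k steps behind and ahead flags sharp, fingertip-like peaks. The value of k, the angle threshold, and the contour source are illustrative assumptions, not the paper's settings.

```python
# Sketch: K-curvature detection of sharp points on a hand contour.
import numpy as np

def k_curvature_peaks(contour: np.ndarray, k: int = 15, max_angle_deg: float = 60.0):
    """Return indices of contour points with a sharp K-curvature angle,
    i.e. fingertip candidates. `contour` is an (N, 2) array of (x, y)."""
    n = len(contour)
    peaks = []
    for i in range(n):
        v1 = contour[(i - k) % n] - contour[i]  # vector to point k behind
        v2 = contour[(i + k) % n] - contour[i]  # vector to point k ahead
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        if angle < max_angle_deg:  # sharp angle -> candidate peak
            peaks.append(i)
    return peaks
    # Note: a full implementation would also test convexity (e.g. the sign
    # of the cross product of v1 and v2) to separate fingertips from the
    # concave valleys between fingers, which pass the angle test too.
```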