
    Kinect-Based Vision System of Mine Rescue Robot for Low Illuminous Environment

    This paper presents a Kinect-based vision system for a mine rescue robot working in a low-illumination underground environment. The somatosensory capability of the Kinect is used to realize hand gesture recognition covering both static hand gestures and hand actions. A K-curvature based convexity detection method is proposed to fit the hand contour with a polygon, and hand action recognition is implemented using the NiTE library within the same gesture recognition framework. The proposed method is compared with a BP neural network and template matching. Furthermore, taking advantage of the depth map information, a hand gesture recognition interface is established for human-machine interaction with the rescue robot. Experimental results verify the effectiveness of the Kinect-based vision system as a feasible alternative technology for the HMI of a mine rescue robot.
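
    As an illustration of the K-curvature idea summarised above, the sketch below scans an ordered hand contour and flags points whose K-curvature angle is sharp, i.e. fingertip candidates. The value of k, the angle threshold and the contour format are illustrative assumptions, not the paper's parameters.

        # Minimal sketch of K-curvature based convexity detection on a hand contour.
        import numpy as np

        def k_curvature_fingertips(contour, k=20, angle_thresh_deg=60.0):
            """Return indices of sharp (convex) points on an (N, 2) contour array."""
            n = len(contour)
            candidates = []
            for i in range(n):
                p = contour[i]
                v1 = contour[(i - k) % n] - p      # vector to the point k steps behind
                v2 = contour[(i + k) % n] - p      # vector to the point k steps ahead
                denom = np.linalg.norm(v1) * np.linalg.norm(v2)
                if denom == 0:
                    continue
                angle = np.degrees(np.arccos(np.clip(np.dot(v1, v2) / denom, -1.0, 1.0)))
                # A small angle means a sharp turn of the contour: a fingertip candidate
                # (a further test, e.g. distance from the palm centre, would separate
                # convex tips from concave valleys).
                if angle < angle_thresh_deg:
                    candidates.append(i)
            return candidates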

    Biologically inspired vision for human-robot interaction

    Human-robot interaction is an interdisciplinary research area that is becoming more and more relevant as robots start to enter our homes, workplaces, schools, etc. In order to navigate safely among us, robots must be able to understand human behavior, to communicate, and to interpret instructions from humans, either by recognizing their speech or by understanding their body movements and gestures. We present a biologically inspired vision system for human-robot interaction which integrates several components: visual saliency, stereo vision, face and hand detection and gesture recognition. Visual saliency is computed using color, motion and disparity. Both the stereo vision and gesture recognition components are based on keypoints coded by means of cortical V1 simple, complex and end-stopped cells. Hand and face detection is achieved by using a linear SVM classifier. The system was tested on a child-sized robot.
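
    As a simple illustration of the saliency-fusion step described above, the sketch below normalises colour, motion and disparity conspicuity maps and combines them into a single saliency map; the equal weights and min-max normalisation are assumptions, and the V1 keypoint coding and SVM detector are not reproduced here.

        # Hedged sketch: fuse colour, motion and disparity maps into one saliency map.
        import numpy as np

        def normalize(m):
            span = m.max() - m.min()
            return (m - m.min()) / span if span > 0 else np.zeros_like(m)

        def fuse_saliency(color_map, motion_map, disparity_map, weights=(1.0, 1.0, 1.0)):
            maps = (color_map, motion_map, disparity_map)
            fused = sum(w * normalize(m) for w, m in zip(weights, maps))
            return normalize(fused)

        # Example with synthetic 240x320 feature maps.
        rng = np.random.default_rng(0)
        saliency = fuse_saliency(rng.random((240, 320)), rng.random((240, 320)),
                                 rng.random((240, 320)))
        print(saliency.shape, float(saliency.max()))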

    Development of gesture-controlled robotic arm for upper limb hemiplegia therapy

    Human-computer interaction using hand gesture recognition has emerged as a common approach in recent rehabilitation studies. The introduction of vision-based systems such as the Microsoft Kinect and the Leap Motion sensor (LMS) provides a very informative description of hand pose that can be exploited for tracking applications. Compared to the Kinect depth camera, the LMS produces a more limited amount of information and a smaller interaction zone, but its output data are more accurate. Thus, this study explores the LMS as an effective method for hand-gesture-controlled robotic arm operation to improve upper-extremity motor function therapy. Several engineering challenges are addressed to develop a viable system for the therapy application: real-time and accurate hand movement detection, limitation of the robot workspace and hand-robot coordination, and development of a hand motion-based robot positioning algorithm. An EMU HS4 robot arm and controller have been retrofitted to allow movement in 3 degrees of freedom (DOF) and to be controlled directly by LMS-based gesture recognition. A series of wrist-revolving rehabilitation exercises was conducted, showing good agreement between hand movement and the resulting robot motion. The potential of the proposed system has been further illustrated and verified through comprehensive rehabilitation training exercises, with around 90% accuracy for flexion-extension training. In conclusion, these findings have significant implications for the application of hand gesture recognition to robot-based upper-limb assistive and rehabilitation procedures.
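
    To make the hand-to-robot mapping concrete, the sketch below clamps a tracked palm position to an assumed interaction zone and rescales it into 3-DOF position commands; the zone limits, robot workspace and the send_position interface are hypothetical placeholders rather than the study's actual implementation.

        # Hedged sketch: map a tracked palm position into 3-DOF robot position commands.
        import numpy as np

        HAND_MIN = np.array([-150.0, 100.0, -150.0])   # assumed tracker zone (mm)
        HAND_MAX = np.array([150.0, 400.0, 150.0])
        ROBOT_MIN = np.array([0.0, 0.0, 0.0])          # assumed robot workspace (mm)
        ROBOT_MAX = np.array([300.0, 300.0, 200.0])

        def hand_to_robot(palm_xyz):
            """Clamp the palm position to the zone and scale it into the workspace."""
            palm = np.clip(np.asarray(palm_xyz, dtype=float), HAND_MIN, HAND_MAX)
            t = (palm - HAND_MIN) / (HAND_MAX - HAND_MIN)
            return ROBOT_MIN + t * (ROBOT_MAX - ROBOT_MIN)

        def send_position(xyz):
            # Placeholder for the robot controller interface (e.g. serial or fieldbus).
            print("commanded robot position (mm):", np.round(xyz, 1))

        # Example: a palm hovering slightly right of centre.
        send_position(hand_to_robot([40.0, 250.0, -20.0]))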

    Hand Gestures Replicating Robot Arm based on MediaPipe

    A robotic arm is a programmable mechanical device designed to manipulate items much like a human arm; it is one of the most beneficial innovations of the 20th century and has quickly become a cornerstone of many industries. It can perform a variety of tasks and duties that may be time-consuming, difficult, or dangerous for humans. A gesture-based control interface offers many opportunities for more natural, configurable, and easy human-machine interaction, and it can expand the capabilities of the GUI and command-line interfaces that we use today with the mouse and keyboard. This work proposes replacing the buttons and joysticks of conventional remote controls for a robotic arm with a more intuitive approach: controlling the arm via the user's hand gestures. The proposed system combines vision-based hand gesture recognition, using image processing, with a robot arm that replicates the user's hand gestures. The system detects and recognizes hand gestures using Python and sends a command to the microcontroller, an Arduino board connected to the robot arm, to replicate the recognized gesture. Five servo motors are connected to the Arduino Nano to control the fingers of the robot arm prototype. It is worth noting that this system was able to replicate the user's hand gestures with an accuracy of up to 96%.
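
    The sketch below illustrates one way such a pipeline can be wired together: MediaPipe Hands detects the landmarks of one hand, a simple heuristic decides which fingers are extended, and the five flags are sent to the Arduino over a serial link. The serial port, baud rate, byte protocol and the crude thumb test are assumptions, not the authors' exact implementation.

        # Hedged sketch: MediaPipe hand landmarks -> five finger flags -> Arduino over serial.
        import cv2
        import mediapipe as mp
        import serial

        FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky tip landmarks
        FINGER_PIPS = [6, 10, 14, 18]   # corresponding PIP joints

        def finger_states(lm):
            """Return five 0/1 flags (thumb..pinky); 1 means the finger looks extended."""
            states = [1 if lm[4].x > lm[3].x else 0]          # crude thumb test
            for tip, pip in zip(FINGER_TIPS, FINGER_PIPS):
                states.append(1 if lm[tip].y < lm[pip].y else 0)
            return states

        ser = serial.Serial("/dev/ttyUSB0", 9600)             # hypothetical Arduino port
        hands = mp.solutions.hands.Hands(max_num_hands=1)
        cap = cv2.VideoCapture(0)

        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_hand_landmarks:
                flags = finger_states(result.multi_hand_landmarks[0].landmark)
                ser.write(bytes(flags))                       # one byte per finger (0 or 1)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break

        cap.release()
        ser.close()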

    A vision-based teleoperation system for robotic systems

    Although advances in robotic perception are increasing autonomous capabilities, human intelligence is still considered a necessity in unstructured or unpredictable environments. Hence, in line with the Industry 4.0 paradigm, humans and robots are encouraged to engage in mutual Human-Robot Interaction (HRI). HRI can be physical (pHRI) or not, depending on the assigned task. For example, when the robot is constrained in a dangerous environment or must handle hazardous materials, pHRI is not recommended; in these cases, robot teleoperation may be necessary. A teleoperation system concerns the exploration and exploitation of spaces where the user's presence is not allowed, so the operator needs to move the robot remotely. Although many human-machine interfaces for teleoperation have been developed around mechanical devices, vision-based interfaces do not require physical contact with external devices. This allows a more natural and intuitive interaction, which is reflected in task performance. The proposed system is a novel robot teleoperation system that exploits RGB cameras, which are easy to use and commonly available on the market at low cost. A ROS-based framework has been developed to provide hand tracking and hand-gesture recognition, exploiting the OpenPose software built on the deep learning framework Caffe. This, combined with the ready availability of RGB cameras, makes the framework strongly open-source-oriented and highly replicable on all ROS-based platforms. It is worth noting that this first version of the system does not include Z-axis control. This is due to the high precision and sensitivity required to robustly control the third axis, which 3D vision systems cannot provide unless very expensive devices are adopted. The aim is to extend the system with third-axis control in a future release.
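
    A minimal sketch of how such a ROS node could look is given below: it subscribes to 2D hand keypoints (e.g. from an OpenPose wrapper) and republishes them as X/Y set-points, leaving the Z axis out as in the first release described above. The topic names, message types and scaling factor are assumptions for illustration.

        # Hedged sketch: ROS node turning tracked hand keypoints into X/Y robot set-points.
        import rospy
        from geometry_msgs.msg import Point, PointStamped

        PIXELS_TO_METERS = 0.001        # assumed image-to-workspace scaling

        class HandTeleop:
            def __init__(self):
                self.pub = rospy.Publisher("/robot/position_cmd", PointStamped, queue_size=1)
                rospy.Subscriber("/hand_tracking/wrist_pixel", Point, self.on_hand)

            def on_hand(self, msg):
                cmd = PointStamped()
                cmd.header.stamp = rospy.Time.now()
                cmd.point.x = msg.x * PIXELS_TO_METERS
                cmd.point.y = msg.y * PIXELS_TO_METERS
                cmd.point.z = 0.0       # Z-axis control intentionally not included
                self.pub.publish(cmd)

        if __name__ == "__main__":
            rospy.init_node("hand_teleop")
            HandTeleop()
            rospy.spin()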

    A real-time human-robot interaction system based on gestures for assistive scenarios

    Natural and intuitive human interaction with robotic systems is a key point in developing robots that assist people in an easy and effective way. In this paper, a Human-Robot Interaction (HRI) system able to recognize gestures usually employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures, such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture consisting of pointing at an object is also recognized. The pointed location is then estimated in order to detect candidate objects the user may be referring to. When the pointed object is unclear to the robot, a disambiguation procedure by means of either a verbal or gestural dialogue is performed. This skill allows the robot to pick up an object on behalf of a user who may have difficulty doing so themselves. The overall system, which is composed of NAO and Wifibot robots, a Kinect v2 sensor and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests was completed, which allows correct performance to be assessed in terms of recognition rates, ease of use and response times.
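
    To illustrate the matching step, the sketch below computes a Dynamic Time Warping distance between two sequences of per-frame gesture features and picks the closest template; the synthetic features stand in for the depth-map features used in the paper.

        # Hedged sketch: DTW distance between feature sequences and nearest-template matching.
        import numpy as np

        def dtw_distance(seq_a, seq_b):
            """DTW distance between two (T, D) feature sequences with Euclidean local cost."""
            n, m = len(seq_a), len(seq_b)
            acc = np.full((n + 1, m + 1), np.inf)
            acc[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                    acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
            return acc[n, m]

        def classify(sequence, templates):
            """Return the label of the template with the smallest DTW distance."""
            return min(templates, key=lambda label: dtw_distance(sequence, templates[label]))

        # Example with synthetic 1-D feature sequences.
        rng = np.random.default_rng(0)
        templates = {"wave": rng.random((30, 1)), "nod": rng.random((25, 1))}
        print(classify(rng.random((28, 1)), templates))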

    Human robot interaction in a crowded environment

    Human Robot Interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered to be laborious, unsafe, or repetitive. Vision-based human robot interaction is a major component of HRI, with which visual information is used to interpret how human interaction takes place. Common tasks of HRI include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3]. Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications, as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting navigation commands. To this end, it is necessary to associate the gesture with the correct person, and automatic reasoning is required to extract the most probable location of the person who initiated the gesture. In this thesis, we have proposed a practical framework for addressing the above issues. It attempts to achieve a coarse-level understanding of a given environment before engaging in active communication. This includes recognizing human-robot interaction, where a person has the intention to communicate with the robot. In this regard, it is necessary to differentiate whether the people present are engaged with each other or with their surrounding environment. The basic task is to detect and reason about the environmental context and different interactions so as to respond accordingly. For example, if individuals are engaged in conversation, the robot should realize it is best not to disturb them; if an individual is receptive to the robot's interaction, it may approach that person. Finally, if the user is moving in the environment, the robot can analyse the situation further to understand whether any help can be offered in assisting this user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine their potential intentions. To improve system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7].
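
    As a toy illustration of the cue-fusion idea, the sketch below combines independent visual cues in a naive Bayesian update of the probability that a person intends to interact with the robot; the priors and likelihoods are illustrative assumptions, not the learned network parameters of the thesis.

        # Hedged sketch: naive Bayesian fusion of visual cues into an interaction probability.
        def posterior_interaction(prior, cue_likelihoods):
            """cue_likelihoods: list of (P(cue | interacting), P(cue | not interacting))."""
            p_yes, p_no = prior, 1.0 - prior
            for l_yes, l_no in cue_likelihoods:
                p_yes *= l_yes
                p_no *= l_no
            return p_yes / (p_yes + p_no)

        # Example: the person faces the robot, has a hand raised, and is roughly stationary.
        cues = [(0.8, 0.3),   # facing the robot
                (0.7, 0.1),   # hand raised
                (0.6, 0.5)]   # roughly stationary
        print(round(posterior_interaction(prior=0.2, cue_likelihoods=cues), 3))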