
    Real time hand gesture recognition including hand segmentation and tracking

    In this paper we present a system that performs automatic gesture recognition. The system consists of two main components: (i) a unified technique for segmentation and tracking of the face and hands, using a skin-detection algorithm together with handling of occlusion between skin objects to keep track of the status of the occluded parts; this is realized by combining three features, namely color, motion and position; and (ii) a static and dynamic gesture recognition system. Static gesture recognition is achieved using robust hand-shape classification, based on PCA subspaces, that is invariant to scale and to small translation and rotation transformations. Combining hand-shape classification with position information and using DHMMs allows us to accomplish dynamic gesture recognition.
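The PCA-subspace classification mentioned above can be sketched as follows: one subspace (mean plus leading eigenvectors) is fitted per gesture class, and a query shape is assigned to the class whose subspace reconstructs it with the lowest error. This is a minimal illustration assuming fixed-size, flattened hand masks; the function names and the number of components are assumptions, not the paper's implementation.

```python
import numpy as np

def fit_pca_subspace(samples, n_components=3):
    """Fit a PCA subspace (mean + top principal directions) to flattened shapes."""
    X = np.asarray(samples, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centred data: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def reconstruction_error(x, mean, basis):
    """Distance from x to the subspace: project, reconstruct, compare."""
    coeffs = basis @ (x - mean)
    recon = mean + basis.T @ coeffs
    return np.linalg.norm(x - recon)

def classify(x, subspaces):
    """Assign x to the class whose subspace reconstructs it best."""
    return min(subspaces, key=lambda k: reconstruction_error(x, *subspaces[k]))
```

Because only reconstruction error is compared, a class can be added by fitting one more subspace, without retraining the others.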

    Toward natural interaction in the real world: real-time gesture recognition

    Using a new hand-tracking technology capable of tracking 3D hand postures in real time, we developed a recognition system for continuous natural gestures. By natural gestures, we mean those encountered in spontaneous interaction, rather than a set of artificial gestures chosen to simplify recognition. To date we have achieved 95.6% accuracy on isolated gesture recognition and a 73% recognition rate on continuous gesture recognition, with data from three users and twelve gesture classes. We connected our gesture recognition system to Google Earth, enabling real-time gestural control of a 3D map. We describe the challenges of signal accuracy and signal interpretation presented by working in a real-world environment, and detail how we overcame them. (Funding: National Science Foundation (U.S.), award IIS-1018055; Pfizer Inc.; Foxconn Technology)

    Hand Tracking and Gesture Recognition for Human-Computer Interaction

    The proposed work is part of a project that aims to control a videogame through hand gesture recognition. This goal implies the restrictions of real-time response and unconstrained environments. In this paper we present a real-time algorithm to track and recognise hand gestures for interacting with the videogame. This algorithm is based on three main steps: hand segmentation, hand tracking and gesture recognition from hand features. For the hand segmentation step we use the colour cue, due to the characteristic colour values of human skin, its invariant properties and its computational simplicity. To prevent errors from hand segmentation we add a second step, hand tracking. Tracking is performed assuming a constant velocity model and using a pixel labeling approach. From the tracking process we extract several hand features that are fed to a finite state classifier which identifies the hand configuration. The hand can be classified into one of four gesture classes or one of four different movement directions. Finally, using the system's performance-evaluation results, we show the usability of the algorithm in a videogame environment.
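The first two steps above, colour-based skin segmentation and constant-velocity tracking, can be sketched roughly as follows. The chromaticity thresholds and the one-step prediction are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def skin_mask(rgb):
    """Crude skin detector in normalised rg-chromaticity space.
    Dividing by the channel sum removes most brightness variation;
    the threshold values here are illustrative, not tuned."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=-1, keepdims=True) + 1e-6
    chrom = rgb / s
    r, g = chrom[..., 0], chrom[..., 1]
    return (r > 0.35) & (r < 0.55) & (g > 0.25) & (g < 0.37)

def predict_next(position, velocity):
    """Constant-velocity model: predict where to search for the hand
    in the next frame, which gates the segmentation errors."""
    return position + velocity
```

The predicted position would restrict where `skin_mask` output is searched, so a spurious skin-coloured region elsewhere in the frame does not hijack the track.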

    Hand Tracking based on Hierarchical Clustering of Range Data

    Fast and robust hand segmentation and tracking is an essential basis for gesture recognition and thus an important component of contact-less human-computer interaction (HCI). Hand gesture recognition based on 2D video data has been intensively investigated. However, in practical scenarios purely intensity-based approaches suffer from uncontrollable environmental conditions like cluttered background colors. In this paper we present a real-time hand segmentation and tracking algorithm using Time-of-Flight (ToF) range cameras and intensity data. The intensity and range information is fused into one pixel value, representing its combined intensity-depth homogeneity. The scene is hierarchically clustered using a GPU-based parallel merging algorithm, allowing a robust identification of both hands even against inhomogeneous backgrounds. After the detection, both hands are tracked on the CPU. Our tracking algorithm can cope with the situation that one hand is temporarily covered by the other hand. (Comment: Technical Report)
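The fusion step can be illustrated as below: both cues are normalised to a common range and combined into a single per-pixel value, so that one homogeneity criterion can drive the hierarchical clustering. The linear blend and the weight `alpha` are assumptions for illustration, not the paper's exact formula.

```python
import numpy as np

def fuse(intensity, depth, alpha=0.5):
    """Fuse normalised intensity and depth images into one per-pixel
    value expressing combined intensity-depth homogeneity.
    alpha weights the two cues (an assumed parameter)."""
    def norm01(x):
        x = x.astype(float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span else np.zeros_like(x)
    return alpha * norm01(intensity) + (1 - alpha) * norm01(depth)
```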

    Performance Improvement of Data Fusion Based Real-Time Hand Gesture Recognition by Using 3-D Convolution Neural Networks With Kinect V2

    Hand gesture recognition is one of the most active areas of research in computer vision. It provides an easy way to interact with a machine without using any extra devices, and hand gestures are a natural and intuitive way for human beings to interact with their environment. In this paper, we propose data-fusion-based real-time hand gesture recognition using 3-D convolutional neural networks and the Kinect V2. The Kinect V2 is used to achieve accurate segmentation and tracking, while the convolutional neural network improves the validity and robustness of the system. Based on the experimental results, the proposed model is accurate and robust, and performs with very low processor utilization. We demonstrate the performance of our proposed system in a real-life application: controlling various devices using the Kinect V2. Keywords: hand gesture recognition, Kinect V2, data fusion, convolutional neural networks. DOI: 10.7176/IKM/9-1-02
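What distinguishes a 3-D convolution from the familiar 2-D case is that the kernel also spans the time axis, so motion patterns across frames become learnable features — the motivation for using 3-D CNNs on dynamic gestures. A naive "valid" 3-D convolution over a (frames, height, width) clip, purely for illustration:

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Naive 'valid' 3-D convolution over a (T, H, W) gesture clip.
    The kernel mixes neighbouring frames as well as neighbouring
    pixels, so temporal structure contributes to each output value."""
    T, H, W = clip.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i+t, j:j+h, k:k+w] * kernel)
    return out
```

A real 3-D CNN stacks many such kernels with learned weights and nonlinearities; this loop only shows the core operation.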

    Vision-based hand gesture interaction using particle filter, principal component analysis and transition network

    Vision-based human-computer interaction is becoming important nowadays. It offers natural interaction with computers and frees users from mechanical interaction devices, which is favourable especially for wearable computers. This paper presents a human-computer interaction system based on a conventional webcam and hand gesture recognition. The interaction system works in real time and enables users to control a computer cursor with hand motions and gestures instead of a mouse. Five hand gestures are designed to represent five mouse operations: moving, left click, left double-click, right click and no action. An algorithm based on a particle filter is used for tracking the hand position. PCA-based feature selection is used for recognising the hand gestures. A transition network is also employed to improve the accuracy and reliability of the interaction system. The interaction system shows good performance in the recognition and interaction tests.
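A bootstrap particle filter of the kind used here for hand tracking can be sketched as one predict-update-resample cycle. The Gaussian motion and measurement models below are generic assumptions, not the paper's exact design.

```python
import numpy as np

def particle_filter_step(particles, weights, measurement, rng,
                         motion_std=2.0, meas_std=5.0):
    """One predict-update-resample cycle of a bootstrap particle filter
    tracking a 2-D hand position (generic sketch, assumed noise models)."""
    # Predict: diffuse particles with Gaussian motion noise.
    particles = particles + rng.normal(0, motion_std, particles.shape)
    # Update: weight each particle by the likelihood of the observation.
    d2 = ((particles - measurement) ** 2).sum(axis=1)
    weights = np.exp(-d2 / (2 * meas_std ** 2))
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1 / len(particles))
```

The state estimate is the particle mean, `particles.mean(axis=0)`; in a full tracker the likelihood would come from image evidence (e.g. skin colour) rather than a given position.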

    Hand gesture extraction by active shape models

    This paper applies an active statistical model to hand gesture extraction and recognition. After the hand contours are found by a real-time segmentation and tracking system, a set of feature points (landmarks) is marked out, automatically and manually, along each contour. The feature vectors are normalized, aligned and then trained with Principal Component Analysis (PCA). The mean shape, eigenvalues and eigenvectors are computed and together compose the active shape model. By continually adjusting the model parameters, various shape contours are generated to match the hand edges extracted from the original images. The gesture is finally recognized once a good match is found.
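The training stage described above — aligned landmark vectors reduced by PCA to a mean shape plus modes of variation — can be sketched as follows. Landmark alignment is assumed to have been done already, and the names are illustrative.

```python
import numpy as np

def train_asm(shapes, n_modes=2):
    """shapes: (N, 2k) array of aligned landmark coordinates
    (x1, y1, ..., xk, yk). PCA on the covariance yields the mean
    shape and the dominant modes of shape variation."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_modes]
    return mean, vecs[:, order].T, vals[order]

def generate(mean, modes, b):
    """New contour from model parameters b: x = mean + P^T b.
    Varying b sweeps out plausible hand shapes for matching."""
    return mean + modes.T @ b
```

During matching, `b` is iteratively adjusted so the generated contour follows the hand edges, while staying within limits derived from the eigenvalues.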

    Vision-based gesture recognition system for human-computer interaction

    Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. It is an area with many different possible applications, giving users a simpler and more natural way to communicate with robots and system interfaces, without the need for extra devices. So, the primary goal of gesture recognition research is to create systems which can identify specific human gestures and use them to convey information or to control devices. This work intends to study and implement a solution, generic enough to interpret user commands composed of a set of dynamic and static gestures, and to use that solution to build an application able to work in real-time human-computer interaction systems. The proposed solution is composed of two modules controlled by an FSM (Finite State Machine): a real-time hand tracking and feature extraction system, supported by an SVM (Support Vector Machine) model for static hand posture classification, and a set of HMMs (Hidden Markov Models) for dynamic single-stroke hand gesture recognition. The experimental results showed that the system works very reliably, being able to recognize the set of defined commands in real time. The SVM model for hand posture classification, trained with the selected hand features, achieved an accuracy of 99.2%. The proposed solution has the advantage of being computationally simple to train and use, and at the same time generic enough to allow its application in any robot or system command interface.
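The role of the coordinating FSM can be illustrated with a deliberately simplified three-state machine that routes frames either to the static-posture classifier or to the dynamic-gesture recognizer. The abstract does not give the actual states or transition conditions, so everything here is an assumption:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()      # no hand in view
    POSTURE = auto()   # hand still: run static classification (SVM in the paper)
    GESTURE = auto()   # hand moving: accumulate a stroke for the HMMs

def step(state, hand_moving, hand_present):
    """Illustrative per-frame transition rule: presence gates everything,
    and motion decides between the static and dynamic modules."""
    if not hand_present:
        return State.IDLE
    if hand_moving:
        return State.GESTURE
    return State.POSTURE
```

Keeping the two recognizers behind an FSM means each module only sees the frames it is designed for, which is what makes mixing static and dynamic gestures in one command set tractable.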

    A novel competitive neural classifier for gesture recognition with small training sets

    Gesture recognition is a major area of interest in human-computer interaction. Recent advances in sensor technology and computing power have allowed us to perform real-time joint tracking with commodity hardware, but robust, adaptable, user-independent, usable hand gesture classification remains an open problem. Since it is desirable that users can record their own gestures to expand their gesture vocabulary, a method that performs well on small training sets is required. We propose a novel competitive neural classifier (CNC) that recognizes Arabic-number hand gestures with a 98% success rate, even when trained with a small sample set (3 gestures per class). The approach uses the direction of movement between gesture sampling points as features and is time-, scale- and translation-invariant. By using a technique borrowed from object and speaker recognition methods, it is also starting-point invariant, a new property we define for closed gestures. We found its performance to be on par with standard classifiers for temporal pattern recognition. (XIV Workshop Agentes y Sistemas Inteligentes; Red de Universidades con Carreras en Informática, RedUNCI)
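The direction-of-movement features and a starting-point normalization for closed gestures can be sketched as follows. The resampling scheme and the lexicographic-minimum rotation rule are assumptions standing in for the paper's exact method:

```python
import numpy as np

def direction_features(points, n_samples=16):
    """Resample a 2-D trajectory to n_samples points and use the angle
    of movement between consecutive samples as features. Angles are
    translation- and (positive) scale-invariant by construction."""
    pts = np.asarray(points, dtype=float)
    # Uniform resampling in index space (a simplification of
    # arc-length resampling).
    t = np.linspace(0, len(pts) - 1, n_samples)
    xs = np.interp(t, np.arange(len(pts)), pts[:, 0])
    ys = np.interp(t, np.arange(len(pts)), pts[:, 1])
    d = np.diff(np.stack([xs, ys], axis=1), axis=0)
    return np.arctan2(d[:, 1], d[:, 0])

def canonical_rotation(features):
    """Starting-point invariance for closed gestures: a closed gesture
    started elsewhere yields a circular shift of the same feature
    sequence, so pick one canonical shift (here: the lexicographic
    minimum, one possible normalization)."""
    shifts = [tuple(np.roll(features, -k)) for k in range(len(features))]
    return np.array(min(shifts))
```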