7 research outputs found

    A hybrid method using kinect depth and color data stream for hand blobs segmentation

    Get PDF
    The recently developed depth sensors such as Kinect have provided new potential for human-computer interaction (HCI), and hand gestures are one of the main topics in recent research. A hand segmentation procedure is performed to acquire the hand gesture from a captured image. In this paper, a method is proposed to segment hand blobs using both depth and color data frames. The method applies body segmentation and image thresholding techniques to the depth data frame using skeleton data, and concurrently uses the SLIC super-pixel segmentation method to extract hand blobs from the color data frame, again with the help of skeleton data. The proposed method has low computation time and shows significant results when its basic assumptions are fulfilled.
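
    As a rough illustration of the depth-threshold step described above (not the authors' code; the frame layout, the skeleton-supplied hand joint input, the window size and the depth band are all assumptions), a minimal NumPy sketch:

```python
import numpy as np

def segment_hand_blob(depth_frame, hand_xy, hand_depth, depth_margin=100, window=80):
    """Illustrative depth-threshold step: keep pixels whose depth lies within
    a band around the tracked hand joint, inside a window centred on it.
    depth_frame: 2-D array of depth values in mm; hand_xy: (row, col) of the
    skeleton hand joint; hand_depth: its depth in mm (hypothetical inputs)."""
    mask = np.zeros(depth_frame.shape, dtype=bool)
    r, c = hand_xy
    r0, r1 = max(r - window, 0), min(r + window, depth_frame.shape[0])
    c0, c1 = max(c - window, 0), min(c + window, depth_frame.shape[1])
    roi = depth_frame[r0:r1, c0:c1]
    # Keep only pixels close in depth to the hand joint
    mask[r0:r1, c0:c1] = np.abs(roi.astype(int) - hand_depth) <= depth_margin
    return mask
```

    In the paper this depth-side mask is combined with SLIC super-pixels from the color frame; the sketch covers only the thresholding half.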

    Cascading Neural Networks for Upper-body Gesture Recognition

    Get PDF
    Abstract - Gesture recognition has many applications ranging from health care to entertainment. However, for it to be a feasible method of human-computer interaction, it is essential that only intentional movements are interpreted and that the system works for a wide variety of users. To date, very few systems have been tested in the real world, where users are inexperienced in gesture performance, resulting in data which is noisier in terms of gesture-starts, gesture motion and gesture-ends. In addition, few systems have taken into consideration the dominant hand used when performing gestures. The work presented in this paper addresses this by first selecting key-frames from a gesture sequence and then cascading neural networks for left- and right-hand gesture classification. The first neural network determines which hand is being used for gesture performance and the second neural network then recognises the gesture. The performance of the system is tested on the VisApp2013 gesture dataset, which consists of four left- and right-hand gestures. This dataset is unique in that the test gesture samples were performed by untrained users to simulate a real-world environment. With key-frame selection and cascading neural networks, the system accuracy improves from 79.8% to 95.6%.
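
    The cascade described above amounts to routing logic: one classifier picks the hand, a hand-specific classifier then names the gesture. A minimal sketch (the model callables, the 0.5 decision threshold and the score format are hypothetical, not the paper's implementation):

```python
import numpy as np

def cascade_classify(features, hand_selector, left_model, right_model):
    """Cascade sketch: the first classifier decides which hand performed the
    gesture, then the features are routed to a hand-specific gesture
    classifier. All three models are hypothetical callables returning a
    probability (hand_selector) or class scores (gesture models)."""
    hand = "left" if hand_selector(features) < 0.5 else "right"
    model = left_model if hand == "left" else right_model
    scores = model(features)
    return hand, int(np.argmax(scores))
```

    Splitting the problem this way lets each gesture classifier specialise on one hand's motion patterns instead of modelling both at once.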

    Gesture recognition using convolutional neural networks, with range images captured by the Leap Motion device

    Full text link
    In this final degree project (TFG), a static hand gesture recognizer is developed, based on convolutional neural networks and using Leap Motion as the imaging device, chosen for its good trade-off between performance and price. In the design of the classifier, after analysing the possible alternatives, different network architectures are proposed, from which the best one is selected. To do this, they are trained and tested on different gesture databases prepared for this purpose. The data corpus comprises 16 gesture classes, from which different variants are derived whose expressiveness is also assessed in the experiments. Finally, to show the results obtained, a demonstrator of the gesture recognizer is presented, which feeds images captured in real time to the selected convolutional network.
    Rodas Lorente, M. (2021). Reconocimiento de gestos mediante redes neuronales convolucionales, utilizando imágenes de rango capturadas con el dispositivo Leap Motion. Universitat Politècnica de València. http://hdl.handle.net/10251/174692
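
    The architectures compared in the thesis are built from standard convolution/activation/pooling stages. As a generic illustration of one such stage in plain NumPy (dummy weights; not the network from the thesis):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D convolution over a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def cnn_stage(range_img, kernel):
    """One conv -> ReLU -> 2x2 max-pool stage, the basic building block of a
    CNN operating on a range image (weights here are placeholders)."""
    fmap = np.maximum(conv2d_valid(range_img, kernel), 0.0)  # ReLU
    h, w = fmap.shape
    h2, w2 = h - h % 2, w - w % 2  # crop to even size before pooling
    pooled = fmap[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2).max(axis=(1, 3))
    return pooled
```

    Stacking several such stages, followed by fully connected layers and a softmax over the 16 gesture classes, gives the general shape of the classifiers compared in the work.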

    Human movements evaluation using depth sensors

    Get PDF
    ABSTRACT: This work explores the use of depth-sensor data to determine whether a human being is executing a movement according to a specification. The sensor chosen to collect data was a Kinect v1. Several different techniques are explored: finite state machines, multi-dimensional dynamic time warping, discrete hidden Markov models and continuous hidden Markov models. A set of activities chosen according to an expert's criteria is used to test the applicability of the different approaches to the task at hand. Results are presented for each technique.
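
    Of the techniques listed, multi-dimensional dynamic time warping is the most compact to sketch. A minimal version, assuming each frame is a feature vector (e.g. stacked joint positions) compared with Euclidean distance; this is an illustration, not the thesis code:

```python
import numpy as np

def md_dtw(seq_a, seq_b):
    """Multi-dimensional DTW sketch: aligns two sequences of per-frame
    feature vectors and returns the cumulative alignment cost.
    seq_a, seq_b: (frames x features) arrays."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)  # cumulative cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

    A low alignment cost against a reference recording would indicate the movement matches the specification; the thresholding and the other techniques (FSMs, HMMs) are not shown here.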

    Gesture recognition with application to human-robot interaction

    Get PDF
    Gestures are a natural form of communication, often transcending language barriers. Recently, much research has focused on achieving natural human-machine interaction using gestures. This dissertation presents the design of a gestural interface that can be used to control a robot. The system consists of two modes: far-mode and near-mode. In far-mode interaction, upper-body gestures are used to control the motion of a robot. Near-mode interaction uses static hand poses to control a graphical user interface. For upper-body gesture recognition, features are extracted from skeletal data. The extracted features consist of joint angles and relative joint positions and are extracted for each frame of the gesture sequence. A novel key-frame selection algorithm is used to align the gesture sequences temporally. A neural network and a hidden Markov model are then used to classify the gestures. The framework was tested on three different datasets: the CMU Military dataset (3 users, 15 gestures, 10 repetitions per gesture), the VisApp2013 dataset (28 users, 8 gestures, 1 repetition per gesture) and a recorded dataset (15 users, 10 gestures, 3 repetitions per gesture). The system is shown to achieve a recognition rate of 100% across the three datasets, using key-frame selection and a neural network for gesture identification. Static hand-gesture recognition is achieved by first retrieving the 24-DOF hand model. The hand is segmented from the image using both depth and colour information. A novel calibration method is then used to automatically obtain the anthropometric measurements of the user's hand. The k-curvature algorithm and depth-based and parallel border-based methods are used to detect fingertips in the image. An average detection accuracy of 88% is achieved. A neural network and a k-means classifier are then used to classify the static hand gestures. The framework was tested on a dataset of 15 users, 12 gestures and 3 repetitions per gesture. A correct classification rate of 75% is achieved using the neural network. It is shown that the proposed system is robust to changes in skin colour and user hand size.
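
    The k-curvature fingertip test mentioned above can be sketched as follows (the contour representation, the value of k and the angle threshold are illustrative assumptions, not the dissertation's parameters): a contour point is a fingertip candidate when the angle between the vectors to the points k steps before and after it is small, i.e. the contour forms a sharp peak.

```python
import numpy as np

def k_curvature_fingertips(contour, k=5, angle_thresh=np.radians(60)):
    """k-curvature sketch: flag contour points where the angle between the
    vectors to the points k steps before and after is below a threshold.
    contour: (N x 2) array of hand-boundary points, treated as closed."""
    tips = []
    n = len(contour)
    for i in range(n):
        p, a, b = contour[i], contour[(i - k) % n], contour[(i + k) % n]
        v1, v2 = a - p, b - p
        denom = np.linalg.norm(v1) * np.linalg.norm(v2)
        if denom == 0:
            continue  # degenerate (duplicate) points
        ang = np.arccos(np.clip(np.dot(v1, v2) / denom, -1.0, 1.0))
        if ang < angle_thresh:
            tips.append(i)
    return tips
```

    In practice this is combined with the depth-based and border-based cues the dissertation describes, since curvature alone also fires on non-fingertip spikes in a noisy contour.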

    A robust gesture recognition based on depth data

    No full text