6 research outputs found

    3D hand posture recognition using MultiCam

    This paper presents hand posture recognition in 3D using the MultiCam, a monocular 2D/3D camera developed by the Center of Sensorsystems (ZESS). The MultiCam is a camera capable of providing high-resolution color data acquired from CMOS sensors and low-resolution distance (or range) data computed with time-of-flight (ToF) technology using Photonic Mixer Device (PMD) sensors. The availability of the distance data allows the hand posture to be recognized along the z-axis without computationally complex algorithms, which enables real-time processing and effective background elimination. The hand posture recognition employs a simple but robust algorithm: it counts the number of fingers detected along a virtual circle centered at the Center of Mass (CoM) of the hand, and thereby assigns the class associated with a particular hand posture. Finally, the paper proposes the intersection between the circle and the fingers as the classification method, exploiting the MultiCam's capabilities. By utilizing the distance data, this technique achieves invariance to orientation, size and distance.
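
    As a rough illustration of the circle-intersection idea, the sketch below counts the contiguous foreground arcs that a virtual circle centred at the hand's Center of Mass crosses in a binary hand mask. The radius, the sampling density and the mask itself are assumptions for illustration; the paper derives the mask from the MultiCam's distance data.

    ```python
    # A minimal sketch of the circle-intersection idea, assuming a
    # depth-thresholded binary hand mask; radius and sampling density
    # are illustrative, not the paper's values.
    import numpy as np

    def count_circle_crossings(mask: np.ndarray, radius: float, samples: int = 360) -> int:
        """Count contiguous foreground arcs on a circle centred at the hand's CoM."""
        ys, xs = np.nonzero(mask)
        cy, cx = ys.mean(), xs.mean()                    # Center of Mass of the hand
        angles = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
        py = np.clip((cy + radius * np.sin(angles)).astype(int), 0, mask.shape[0] - 1)
        px = np.clip((cx + radius * np.cos(angles)).astype(int), 0, mask.shape[1] - 1)
        on_circle = mask[py, px] > 0                     # foreground hits along the circle
        # Count 0 -> 1 transitions; np.roll handles the circular wrap-around.
        transitions = np.logical_and(on_circle, ~np.roll(on_circle, 1)).sum()
        return int(transitions)                          # arcs = fingers (+ wrist) crossed

    # Usage: crossings minus one (the wrist arc) approximates the extended-finger
    # count, which then indexes into a posture-class lookup.
    ```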

    Vision-based Portuguese sign language recognition system

    Vision-based hand gesture recognition is an area of active research in computer vision and machine learning. Being a natural mode of human interaction, it is an area many researchers are working on, with the goal of making human-computer interaction (HCI) easier and more natural, without the need for extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them, for example, to convey information. To that end, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. Hand gestures are a powerful human communication modality with many potential applications, and in this context we have sign language recognition, the communication method of deaf people. Sign languages are not standard and universal, and their grammars differ from country to country. In this paper, a real-time system able to interpret Portuguese Sign Language is presented and described. Experiments showed that the system was able to reliably recognize the vowels in real time, with an accuracy of 99.4% with one dataset of features and an accuracy of 99.6% with a second dataset of features. Although the implemented solution was only trained to recognize the vowels, it is easily extended to the rest of the alphabet, providing a solid foundation for the development of any vision-based sign language recognition user interface system.
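
    The abstract does not detail the two feature datasets, but the comparison it reports can be sketched as cross-validated accuracy of one classifier over two feature matrices. The SVM choice and fold count below are placeholders, not the paper's pipeline.

    ```python
    # A hedged sketch of comparing two feature sets on the same labels;
    # the feature arrays and the classifier are illustrative assumptions.
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def compare_feature_sets(features_a, features_b, labels, folds=10):
        """Return mean cross-validated accuracy for each feature set."""
        clf = SVC(kernel="rbf")                 # any off-the-shelf learner works here
        acc_a = cross_val_score(clf, features_a, labels, cv=folds).mean()
        acc_b = cross_val_score(clf, features_b, labels, cv=folds).mean()
        return acc_a, acc_b
    ```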

    A comparative study of different image features for hand gesture machine learning

    Vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition. Hand gesture recognition for human-computer interaction is an area of active research in computer vision and machine learning. The primary goal of gesture recognition research is to create a system that can identify specific human gestures and use them to convey information or control devices. In this paper we present a comparative study of seven different algorithms for hand feature extraction, for static hand gesture classification, analysed with RapidMiner in order to find the best learner. We defined our own gesture vocabulary of 10 gestures, and recorded videos of 20 persons performing the gestures for later processing. Our goal in the present study is to identify features that, in isolation, respond best in various human-computer interaction situations. Results show that the radial signature and the centroid distance are the features that obtain the best results when used separately, while also being simple in terms of computational complexity.
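
    As a hedged sketch of one of the winning features, the centroid distance signature can be computed as the distances from the contour centroid to N evenly sampled boundary points, normalised for scale invariance. The sampling count is an assumption; the abstract does not fix it.

    ```python
    # A minimal centroid distance signature; n_points is an assumption.
    import numpy as np

    def centroid_distance(contour: np.ndarray, n_points: int = 128) -> np.ndarray:
        """contour: (M, 2) array of ordered (x, y) boundary points."""
        idx = np.linspace(0, len(contour) - 1, n_points).astype(int)
        sampled = contour[idx]                       # evenly sampled boundary points
        centroid = contour.mean(axis=0)
        dists = np.linalg.norm(sampled - centroid, axis=1)
        return dists / dists.max()                   # scale-invariant signature
    ```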

    Hand gesture recognition for human computer interaction: a comparative study of different image features

    Hand gesture recognition, being a natural way of interacting with computers, is an area of active research in computer vision and machine learning. It has many possible applications, giving users a simpler and more natural way to communicate with robots and system interfaces, without the need for extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them to convey information or control devices. To that end, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. In this study we try to identify hand features that, in isolation, respond best in various human-computer interaction situations. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset based on our own gesture vocabulary of 10 gestures, recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that obtain the best results when used separately, with accuracies of 91% and 90.1% respectively, obtained with a Neural Network classifier. These two methods also have the advantage of being simple in terms of computational complexity, which makes them good candidates for real-time hand gesture recognition.
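
    The final classification step can be sketched as training a neural network on precomputed feature vectors such as the radial signature or the centroid distance above. The MLP layout, split and hyperparameters below are illustrative stand-ins for the RapidMiner setup, not the study's actual configuration.

    ```python
    # A hedged sketch: feature vectors -> neural network classifier.
    # Network size, split and iterations are assumptions.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    def train_gesture_classifier(features: np.ndarray, labels: np.ndarray):
        X_tr, X_te, y_tr, y_te = train_test_split(
            features, labels, test_size=0.3, stratify=labels)
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000)
        clf.fit(X_tr, y_tr)
        return clf, clf.score(X_te, y_te)            # held-out accuracy
    ```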

    Hand gesture recognition in uncontrolled environments

    Human-computer interaction has long relied on mechanical devices to feed information into computers with low efficiency. With recent developments in image processing and machine learning methods, the computer vision community is ready to develop the next generation of human-computer interaction methods, including hand gesture recognition. This thesis proposes a comprehensive Hand Gesture Recognition based, semantic-level human-computer interaction framework for uncontrolled environments. The framework contains novel methods for Hand Posture Recognition, Hand Gesture Recognition and Hand Gesture Spotting. The Hand Posture Recognition method in the proposed framework is capable of recognising predefined still hand postures against cluttered backgrounds. Texture features are used in conjunction with Adaptive Boosting to form a novel feature selection scheme, which can effectively detect and select discriminative texture features from the training samples of the posture classes. A novel hand tracking method called Adaptive SURF Tracking is proposed in this thesis. Texture key points are used to track multiple hand candidates in the scene; the method matches texture key points of hand candidates within adjacent frames to calculate the movement directions of the hand candidates. With the gesture trajectories provided by Adaptive SURF Tracking, a novel classifier called the Partition Matrix is introduced to perform gesture classification in uncontrolled environments with multiple hand candidates. The trajectories of all hand candidates, extracted from the original video at different frame rates, are used to analyse the movements of the hand candidates. An alternative gesture classifier based on a Convolutional Neural Network is also proposed; its input images are approximate trajectory images reconstructed from the tracking results of the Adaptive SURF Tracking method. For Hand Gesture Spotting, a forward spotting scheme is introduced to detect the starting and ending points of the predefined gestures in continuously signed gesture videos, and a Non-Sign Model is proposed to simulate meaningless hand movements between the meaningful gestures. The proposed framework performs well with unconstrained scene settings, including frontal occlusions, background distractions and changing lighting conditions. Moreover, it is invariant to the changing scales, speeds and locations of the gesture trajectories.
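
    The core of the Adaptive SURF Tracking step, matching texture key points between adjacent frames and averaging their displacements into a movement direction, can be sketched as follows. ORB stands in for SURF here because SURF requires the non-free opencv-contrib build; the rest of the setup is an illustrative assumption, not the thesis's implementation.

    ```python
    # A minimal sketch of frame-to-frame keypoint matching for motion
    # direction. ORB is a stand-in for SURF (which needs opencv-contrib).
    import cv2
    import numpy as np

    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def movement_direction(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
        kp1, des1 = orb.detectAndCompute(prev_gray, None)
        kp2, des2 = orb.detectAndCompute(curr_gray, None)
        if des1 is None or des2 is None:
            return np.zeros(2)
        matches = matcher.match(des1, des2)
        if not matches:
            return np.zeros(2)
        # Mean displacement of matched keypoints approximates the hand's motion.
        disp = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                         for m in matches])
        return disp.mean(axis=0)                     # (dx, dy) movement direction
    ```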

    Automatic processing of French Sign Language (LSF) videos: modeling and exploiting phonological constraints of movement

    In the field of natural language processing, sign language utterances occupy a special place. Because of the specificities of French Sign Language (LSF), such as the simultaneity of several parameters, the strong role of facial expression, the extensive use of iconic gestural units and the use of signing space to structure utterances, new processing methods must be adapted to this language. We first present a tracking method based on a particle filter that estimates, at any time, the position of the signer's head, hands, elbows and torso in a single-view video. This method has been adapted to LSF to make it more robust to occlusions, to the hands leaving the frame and to inversions of the signer's hands. Then, the analysis of motion-capture data allows us to categorise the motion patterns frequently used in sign production. We propose a parametric model of these patterns, which we use to search for signs in a video from a filmed example of a sign. These motion models are finally reused in two applications: the first assists a user in creating sign pictures, and the second is dedicated to computer-aided segmentation of a video into signs.
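
    One step of the particle-filter tracking loop described above can be sketched for a single 2D body part: predict with a motion model, re-weight by an image likelihood, and resample. The random-walk model and the likelihood callback are placeholders; the thesis's filter jointly tracks the head, hands, elbows and torso with LSF-specific robustness additions.

    ```python
    # A condensed particle-filter step for one 2D position; the motion
    # model and likelihood function are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter_step(particles, weights, likelihood, motion_std=5.0):
        """particles: (N, 2) positions; likelihood(p) scores a position against the frame."""
        # 1. Predict: diffuse particles with a random-walk motion model.
        particles = particles + rng.normal(0.0, motion_std, particles.shape)
        # 2. Update: re-weight each particle by how well it explains the image.
        weights = np.array([likelihood(p) for p in particles]) + 1e-12
        weights = weights / weights.sum()
        # 3. Resample: draw a new particle set proportional to the weights.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        estimate = particles.mean(axis=0)            # tracked position for this frame
        return particles, np.full(len(particles), 1.0 / len(particles)), estimate
    ```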