490 research outputs found

    Computational Models for the Automatic Learning and Recognition of Irish Sign Language

    This thesis presents a framework for the automatic recognition of Sign Language sentences. Previous sign language recognition work has not fully addressed the issues of user-independent recognition, movement epenthesis modelling, and automatic or weakly supervised training within a single recognition framework. This work presents three main contributions to address these issues. The first contribution is a technique for user-independent hand posture recognition. We present a novel eigenspace Size Function feature which is used to perform user-independent recognition of sign language hand postures. The second contribution is a framework for the classification and spotting of spatiotemporal gestures which appear in sign language. We propose a Gesture Threshold Hidden Markov Model (GT-HMM) to classify gestures and to identify movement epenthesis without the need for explicit epenthesis training. The third contribution is a framework to train the hand posture and spatiotemporal models using only the weak supervision of sign language videos and their corresponding text translations. This is achieved through our proposed Multiple Instance Learning Density Matrix algorithm, which automatically extracts isolated signs from full sentences using the weak and noisy supervision of text translations. The automatically extracted isolated samples are then utilised to train our spatiotemporal gesture and hand posture classifiers. The work presented in this thesis is a significant contribution to the area of natural sign language recognition, as we propose a robust framework for training a recognition system without the need for manual labelling.
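The threshold idea behind the GT-HMM can be sketched as follows: a candidate sequence is assigned a gesture class only when that class's model score clearly beats a threshold model built from all gesture models; otherwise it is rejected as movement epenthesis (inter-sign transition). This is a minimal illustrative sketch, with simple template scorers standing in for the HMMs; the vocabulary, scores, and margin are invented for illustration.

```python
def template_score(seq, template):
    """Negative mean absolute distance: higher = better match
    (a crude stand-in for an HMM log-likelihood)."""
    n = min(len(seq), len(template))
    return -sum(abs(a - b) for a, b in zip(seq, template)) / n

def classify(seq, templates, margin=1.0):
    # Score the candidate against every gesture model.
    scores = {label: template_score(seq, t) for label, t in templates.items()}
    best_label = max(scores, key=scores.get)
    # Threshold-model score: here simply the mean of all model scores,
    # standing in for the GT-HMM threshold model assembled from all
    # gesture-model states.
    threshold = sum(scores.values()) / len(scores)
    # Accept only if the best model clearly beats the threshold model;
    # otherwise treat the movement as epenthesis, not a sign.
    if scores[best_label] - threshold > margin:
        return best_label
    return "epenthesis"

templates = {
    "wave": [0, 1, 0, 1, 0],
    "push": [0, 2, 4, 6, 8],
    "pull": [8, 6, 4, 2, 0],
}
print(classify([0, 1, 0, 1, 0], templates))  # -> wave
print(classify([5, 5, 5, 5, 5], templates))  # close to no model -> epenthesis
```

No explicit epenthesis model is trained here: the rejection decision falls out of comparing the best gesture model against the aggregate threshold, which mirrors the motivation stated in the abstract.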

    A random forest approach to segmenting and classifying gestures

    This thesis investigates a gesture segmentation and recognition scheme that employs a random forest classification model. A complete gesture recognition system should localize and classify each gesture from a given gesture vocabulary, within a continuous video stream. Thus, the system must determine the start and end points of each gesture in time, as well as accurately recognize the class label of each gesture. We propose a unified approach that performs the tasks of temporal segmentation and classification simultaneously. Our method trains a random forest classification model to recognize gestures from a given vocabulary, as presented in a training dataset of video plus 3D body joint locations, as well as out-of-vocabulary (non-gesture) instances. Given an input video stream, our trained model is applied to candidate gestures using sliding windows at multiple temporal scales. The class label with the highest classifier confidence is selected, and its corresponding scale is used to determine the segmentation boundaries in time. We evaluated our formulation in segmenting and recognizing gestures from two different benchmark datasets: the NATOPS dataset of 9,600 gesture instances from a vocabulary of 24 aircraft handling signals, and the CHALEARN dataset of 7,754 gesture instances from a vocabulary of 20 Italian communication gestures. The performance of our method compares favorably with state-of-the-art methods that employ Hidden Markov Models or Hidden Conditional Random Fields on the NATOPS dataset. We conclude with a discussion of the advantages of using our model.
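The multi-scale sliding-window step described above can be sketched minimally: a pretrained window classifier (here a toy hypothetical scorer instead of the trained random forest over joint locations) is applied to candidate windows of several temporal lengths, and the most confident in-vocabulary window fixes both the label and the segmentation boundaries.

```python
def score_window(frames):
    """Toy scorer: confidence that a window shows a 'raise' gesture,
    modelled as a monotonically increasing hand height. Stands in for
    a trained classifier's class-confidence output."""
    rises = sum(1 for a, b in zip(frames, frames[1:]) if b > a)
    conf = rises / max(len(frames) - 1, 1)
    return ("raise", conf) if conf > 0.5 else ("non-gesture", conf)

def spot(stream, scales=(3, 5, 7)):
    """Slide windows of several temporal scales over the stream and keep
    the most confident in-vocabulary window as the detected segment."""
    best = ("non-gesture", 0.0, None)  # (label, confidence, (start, end))
    for w in scales:
        for start in range(len(stream) - w + 1):
            label, conf = score_window(stream[start:start + w])
            if label != "non-gesture" and conf > best[1]:
                best = (label, conf, (start, start + w))
    return best

# Flat, then a rise, then flat again: one 'raise' gesture in the stream.
stream = [0, 0, 1, 2, 3, 4, 5, 5, 5]
label, conf, bounds = spot(stream)
print(label, bounds)  # -> raise (1, 4)
```

The winning window's scale directly yields the temporal boundaries, which is the essence of performing segmentation and classification in one pass; the toy scorer does not prefer longer windows the way a trained model with non-gesture examples would.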

    Towards Developing an Effective Hand Gesture Recognition System for Human Computer Interaction: A Literature Survey

    Gesture recognition is the mathematical analysis of the movement of body parts (hand/face) performed with the help of a computing device. It helps computers understand human body language and builds a more powerful link between humans and machines. Many research works have been developed in the field of hand gesture recognition, each achieving different recognition accuracies on different hand gesture datasets; however, most groups lack sufficient insight to carry these achievements over to real-time datasets. Under such circumstances, it is essential to have a complete knowledge of hand gesture recognition methods, their strengths and weaknesses, and the relevant development criteria. Many reports declare their methods superior, but a complete comparative analysis is lacking in these works. In this paper, we provide a study of representative techniques for hand gesture recognition and their recognition methods, along with a brief introduction to hand gesture recognition. The main objective of this work is to survey the state of the various recognition techniques, which can indirectly help in developing new techniques for solving the open issues in hand gesture recognition systems. Moreover, we present a concise description of hand gesture recognition systems and their recognition methods, together with directions for future research.

    Gesture spotting and recognition using salience detection and concatenated hidden Markov models

    We developed a gesture-salience-based hand tracking method, and a gesture spotting and recognition method based on concatenated hidden Markov models. A 3-fold cross-validation on the ChAirGest development data set with 10 users gives an F1 score of 0.907 and an accurate temporal segmentation rate (ATSR) of 0.923; the average final score is 0.9116. Compared with using the hand joint position from the Kinect SDK, our hand tracking method gives a 3.7% absolute increase in the recognition F1 score.
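For reference, a spotting F1 score like the one reported above is derived from segment-level precision and recall. A common convention (assumed here; not necessarily the challenge's exact protocol) matches predicted gesture segments to ground-truth segments by temporal intersection-over-union:

```python
def iou(a, b):
    """Temporal intersection-over-union of two (start, end) segments."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union

def spotting_f1(pred, truth, thr=0.5):
    """Greedily match each ground-truth segment to an unmatched prediction
    with IoU >= thr, then compute F1 from the resulting counts."""
    matched, tp = set(), 0
    for t in truth:
        for i, p in enumerate(pred):
            if i not in matched and iou(p, t) >= thr:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    return 2 * precision * recall / (precision + recall) if tp else 0.0

truth = [(0, 10), (20, 30)]
pred = [(1, 10), (22, 33), (40, 45)]      # two good detections, one false alarm
print(round(spotting_f1(pred, truth), 3))  # precision 2/3, recall 1.0 -> 0.8
```

ATSR, by contrast, scores how accurately the matched segments' start and end points align with the ground truth, which is why the challenge reports both numbers.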

    Hand Gesture Recognition Using Particle Swarm Movement

    We present a gesture recognition method derived from particle swarm movement for free-air hand gesture recognition. Online gesture recognition remains a difficult problem due to uncertainty in vision-based gesture boundary detection methods. We suggest an automated process for segmenting meaningful gesture trajectories based on particle swarm movement. A subgesture detection and reasoning method is incorporated in the proposed recognizer to avoid premature gesture spotting. Evaluation of the proposed method shows promising recognition results: 97.6% on pre-isolated gestures, 94.9% on stream gestures with assistive boundary indicators, and 94.2% for blind gesture spotting on a digit gesture vocabulary. The proposed recognizer requires fewer computational resources and is thus a good candidate for real-time applications.
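The subgesture problem mentioned above arises when one gesture's trajectory is a prefix of a longer gesture's, so reporting the short match immediately would be premature. A minimal sketch of that reasoning over symbolic stroke sequences (the vocabulary and stroke names are invented, and the paper's swarm-based trajectory matching is replaced by exact matching):

```python
# Each digit gesture is a sequence of strokes; "one" is a strict prefix
# of "four", so it must not be reported until "four" is ruled out.
VOCAB = {
    "one":   ["down"],
    "seven": ["right", "down"],
    "four":  ["down", "right", "down"],
}

def spot(stream):
    """Report gestures from a stroke stream, delaying any match that could
    still extend into a longer vocabulary gesture (subgesture reasoning)."""
    out, buf = [], []
    for i, stroke in enumerate(stream):
        buf.append(stroke)
        # Gestures that exactly match the buffered strokes so far.
        exact = [g for g, s in VOCAB.items() if s == buf]
        # Longer gestures the buffer is still a strict prefix of.
        longer = [g for g, s in VOCAB.items()
                  if len(s) > len(buf) and s[:len(buf)] == buf]
        last = i == len(stream) - 1
        # Report only when no longer gesture can still complete.
        if exact and (not longer or last):
            out.append(exact[0])
            buf = []
    return out

print(spot(["down", "right", "down"]))  # "one" is deferred -> ['four']
print(spot(["down"]))                   # stream ends -> ['one']
```

A real recognizer would also back off to the shorter match when the longer candidate fails to complete; that fallback is omitted here for brevity.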