66 research outputs found

    Sign Language Recognition

    This chapter covers the key aspects of sign language recognition (SLR), starting with a brief introduction to the motivations and requirements, followed by a précis of sign linguistics and its impact on the field. The types of data available and their relative merits are explored, allowing examination of the features that can be extracted. Classifying the manual aspects of sign (similar to gestures) is then discussed from tracking and non-tracking viewpoints, before some of the approaches to the non-manual aspects of sign languages are summarised. Methods for combining the sign classification results into full SLR are given, showing the progression towards speech recognition techniques and the further adaptations required for the sign-specific case. Finally, the current frontiers are discussed and the recent research presented: the task of continuous sign recognition, the work towards true signer independence, how to effectively combine the different modalities of sign, making use of current linguistic research, and adapting to larger, noisier data sets.

    Recognition of sign language subwords based on boosted hidden Markov models

    Sign language recognition (SLR) plays an important role in human-computer interaction (HCI), especially for convenient communication between the deaf and hearing communities. How to enhance traditional hidden Markov model (HMM) based SLR is an important issue in the SLR community, as is how to refine classifier boundaries so that they effectively characterize the spread of the training samples. In this paper, a new classification framework is presented that applies an adaptive boosting (AdaBoost) strategy to the continuous HMM (CHMM) training procedure at the subword classification level for SLR. The ensemble of multiple composite CHMMs trained for each subword across the boosting iterations concentrates increasingly on the hard-to-classify samples, and so generates a more complex decision boundary than a single HMM classifier. Experimental results on a vocabulary of frequently used Chinese Sign Language (CSL) subwords show that the proposed boosted CHMM outperforms the conventional CHMM for SLR.
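    Below is a minimal sketch of the general idea, assuming an AdaBoost.M1-style resampling scheme over per-subword continuous HMMs built with the hmmlearn library. It is not the authors' implementation: the names (boost_chmm, n_rounds, n_states) are illustrative, the composite-CHMM details of the paper are not reproduced, and reweighting is done by weighted resampling because hmmlearn's fit does not accept per-sequence weights.

```python
# Sketch: AdaBoost.M1-style boosting over per-subword continuous HMMs.
# Assumes `seqs` is a list of (n_frames, n_features) arrays and `labels`
# holds an integer subword label per sequence. Illustrative only.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_class_hmms(seqs, labels, weights, rng, n_states=5):
    """Resample sequences by weight, then fit one CHMM per subword class."""
    idx = rng.choice(len(seqs), size=len(seqs), p=weights)  # weighted bootstrap
    models = {}
    for c in np.unique(labels):
        class_seqs = [seqs[i] for i in idx if labels[i] == c]
        if not class_seqs:  # keep every class represented in the round
            class_seqs = [seqs[i] for i in range(len(seqs)) if labels[i] == c]
        X = np.vstack(class_seqs)
        lengths = [len(s) for s in class_seqs]
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
        m.fit(X, lengths)
        models[c] = m
    return models

def predict(models, seq):
    """Pick the subword whose CHMM gives the highest log-likelihood."""
    return max(models, key=lambda c: models[c].score(seq))

def boost_chmm(seqs, labels, n_rounds=5):
    """Boosting loop: reweight hard-to-classify sequences each round."""
    labels = np.asarray(labels)
    rng = np.random.default_rng(0)
    w = np.full(len(seqs), 1.0 / len(seqs))
    ensemble = []
    for _ in range(n_rounds):
        models = train_class_hmms(seqs, labels, w, rng)
        preds = np.array([predict(models, s) for s in seqs])
        miss = preds != labels
        err = np.dot(w, miss)
        if err >= 0.5:                       # weak learner no better than chance
            break
        if err == 0:                         # perfect round: keep it and stop
            ensemble.append((1.0, models))
            break
        alpha = np.log((1.0 - err) / err)    # classifier weight (AdaBoost.M1)
        ensemble.append((alpha, models))
        w *= np.exp(alpha * miss)            # emphasise misclassified sequences
        w /= w.sum()
    return ensemble

def ensemble_predict(ensemble, seq, classes):
    """Weighted vote of the boosted CHMM classifiers."""
    votes = {c: 0.0 for c in classes}
    for alpha, models in ensemble:
        votes[predict(models, seq)] += alpha
    return max(votes, key=votes.get)
```

    The weighted resampling step is what makes later rounds concentrate on the hard-to-classify subword samples, which is the effect the abstract attributes to the boosted CHMM ensemble.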

    A comparison of machine learning techniques for hand shape recognition

    Magister Scientiae - MSc. There are five fundamental parameters that characterize any sign language gesture: hand shape, orientation, motion, location, and facial expressions. The SASL group at the University of the Western Cape has created systems to recognize each of these parameters in an input video stream. Most of these systems make use of the Support Vector Machine technique for classification due to its high accuracy. It is, however, unknown how other machine learning techniques compare to Support Vector Machines in the recognition of each of these parameters. This research lays the foundation for determining the optimum machine learning technique for each parameter by comparing Support Vector Machines to Artificial Neural Networks and Random Forests in the context of South African Sign Language hand shape recognition. Li, a previous researcher in the SASL group, created a state-of-the-art hand shape recognition system that uses Support Vector Machines to classify hand shapes. This research re-implements Li's feature extraction procedure but investigates the use of Artificial Neural Networks and Random Forests in place of Support Vector Machines as a comparison. The machine learning techniques are optimized and trained to recognize ten SASL hand shapes and compared in terms of classification accuracy, training time, optimization time and classification time.
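    A minimal sketch of the kind of comparison described, assuming pre-extracted fixed-length feature vectors for the ten hand-shape classes and using scikit-learn's SVC, MLPClassifier and RandomForestClassifier as stand-ins for the thesis' own SVM, ANN and Random Forest setups; the hyperparameters are illustrative, not the optimized values from the study.

```python
# Sketch: train three classifiers on the same hand-shape features and report
# accuracy, training time and classification time. Feature extraction (Li's
# procedure in the thesis) is assumed to have produced X and y already.
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def compare_classifiers(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    classifiers = {
        "SVM": SVC(kernel="rbf", C=1.0),
        "ANN": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
        "Random Forest": RandomForestClassifier(n_estimators=200),
    }
    for name, clf in classifiers.items():
        t0 = time.perf_counter()
        clf.fit(X_tr, y_tr)            # training time
        t1 = time.perf_counter()
        y_pred = clf.predict(X_te)     # classification time
        t2 = time.perf_counter()
        print(f"{name:13s} acc={accuracy_score(y_te, y_pred):.3f} "
              f"train={t1 - t0:.2f}s classify={t2 - t1:.2f}s")

# Example usage with synthetic data standing in for real hand-shape features:
# X, y = np.random.rand(500, 40), np.random.randint(0, 10, 500)
# compare_classifiers(X, y)
```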

    Development in Signer-Independent Sign Language Recognition and the Ideas of Solving Some Key Problems
