
    The impact of geometric and motion features on sign language translators

    A Malaysian Sign Language (MSL) recognition system is one way of augmenting communication between the hearing-impaired and hearing communities in Malaysia. Automatic translators can play an important role as an alternative means for hearing people to understand the hearing-impaired. Automatic translation of natural signing performed with bare hands is a challenge in machine learning. Researchers have used electronic and coloured gloves to address three main issues in the preprocessing steps that precede the sign recognition stage. The first is distinguishing the two hands from other objects, referred to as hand detection. The second is describing the detected hand and its motion trajectory in sufficient detail, referred to as the feature extraction stage. The third is locating the start and end of each sign (the transitions between signs). This paper focuses on the second issue, feature extraction, by studying the impact of the dimensionality of the feature vectors. Signs with similar attributes were chosen to highlight the importance of the feature extraction stage. The study also examines the capability of a Hidden Markov Model (HMM) to differentiate between signs with similar attributes.
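    As a rough illustration of the HMM-based recognition step described above, the sketch below trains one Gaussian HMM per sign on sequences of per-frame feature vectors and classifies a new sequence by maximum log-likelihood. It assumes the hmmlearn library; the feature layout, number of states, and function names are illustrative assumptions rather than the paper's actual setup.

    # Minimal sketch: classify a sign from a sequence of per-frame feature
    # vectors (e.g. geometric hand-shape features concatenated with motion
    # features) using one Gaussian HMM per sign class. Dimensions and class
    # names are hypothetical, not taken from the paper.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def train_sign_models(training_data, n_states=5):
        """training_data: dict mapping sign label -> list of (T_i, D) feature arrays."""
        models = {}
        for label, sequences in training_data.items():
            X = np.vstack(sequences)                    # stack all frames into one array
            lengths = [len(seq) for seq in sequences]   # per-sequence frame counts
            model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            model.fit(X, lengths)
            models[label] = model
        return models

    def classify_sign(models, sequence):
        """Return the sign label whose HMM gives the highest log-likelihood for the (T, D) sequence."""
        return max(models, key=lambda label: models[label].score(sequence))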

    Motion divergence fields for dynamic hand gesture recognition

    Although it is in general difficult to track articulated hand motion, exemplar-based approaches provide a robust solution for hand gesture recognition. Presumably, a rich set of dynamic hand gestures is needed for a meaningful recognition system, and how the visual representation of the motion patterns is built is key to scalable recognition. We propose a novel representation based on the divergence map of the gestural motion field, which transforms motion patterns into spatial patterns. Given the motion divergence maps, we leverage modern image feature detectors to extract salient spatial patterns, such as Maximum Stable Extremal Regions (MSER). A local descriptor is extracted from each region to capture the local motion pattern. The descriptors from gesture exemplars are subsequently indexed using a pre-trained vocabulary tree. New gestures are then matched efficiently against the database gestures with a TF-IDF scheme. Our extensive experiments on a large hand gesture database with 10 categories and 1050 video samples validate the efficacy of the extracted motion patterns for gesture recognition. The proposed approach achieves an overall recognition rate of 97.62%, while the average recognition time is only 34.53 ms.
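    The divergence-map construction lends itself to a short sketch. The code below is an assumption-laden illustration, not the authors' implementation: it computes dense optical flow between two frames with OpenCV, takes the divergence of the flow field as a spatial map, and runs an MSER detector on it. The Farneback parameters and the normalisation step are placeholder choices.

    # Sketch of the motion-divergence idea: dense optical flow between
    # consecutive frames, its divergence as a 2-D spatial pattern, and MSER
    # regions detected on that map.
    import cv2
    import numpy as np

    def motion_divergence_map(prev_gray, curr_gray):
        # Dense optical flow (Farneback); flow[..., 0] = dx, flow[..., 1] = dy
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        dudx = np.gradient(flow[..., 0], axis=1)
        dvdy = np.gradient(flow[..., 1], axis=0)
        return dudx + dvdy          # divergence of the motion field

    def salient_motion_regions(div_map):
        # Normalise to 8-bit so the MSER detector can run on the divergence map
        norm = cv2.normalize(div_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        mser = cv2.MSER_create()
        regions, _ = mser.detectRegions(norm)
        return regions              # each region is an array of (x, y) pixel coordinates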

    Prosody and Kinesics Based Co-analysis Towards Continuous Gesture Recognition

    The aim of this study is to develop a multimodal co-analysis framework for continuous gesture recognition by exploiting prosodic and kinesic manifestations of natural communication. Using this framework, a co-analysis pattern between correlating components is obtained. The co-analysis pattern is clustered using K-means clustering to determine how well the pattern distinguishes the gestures. Features that differentiate the proposed approach from other models are its lower susceptibility to idiosyncrasies, its scalability, and its simplicity. The experiments were performed on the Multimodal Annotated Gesture Corpus (MAGEC), which we created for research on understanding non-verbal communication, particularly gestures.
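    A minimal sketch of the clustering step is given below: it stacks aligned prosodic and kinesic feature vectors into a joint co-analysis pattern and clusters it with K-means using scikit-learn. The feature contents, the number of clusters, and the function name are placeholder assumptions; the MAGEC features themselves are not reproduced here.

    # Illustrative only: cluster a joint prosody/kinesics feature pattern with
    # K-means to see how well it separates gesture classes.
    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_coanalysis_pattern(prosody_feats, kinesic_feats, n_gesture_types=4):
        """prosody_feats, kinesic_feats: (N, Dp) and (N, Dk) arrays aligned per segment."""
        pattern = np.hstack([prosody_feats, kinesic_feats])   # joint co-analysis pattern
        km = KMeans(n_clusters=n_gesture_types, n_init=10, random_state=0)
        labels = km.fit_predict(pattern)                      # cluster index per segment
        return labels, km.cluster_centers_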

    Generating realistic, animated human gestures in order to model, analyse and recognize Irish Sign Language

    The aim of this thesis is to generate a gesture recognition system which can recognize several signs of Irish Sign Language (ISL). The project is divided into three parts. The first part provides background information on ISL; an overview of the ISL structure is a prerequisite to identifying and understanding the difficulties encountered in the development of a recognition system. The second part involves the generation of a data repository of synthetic and real-time video. Initially, the synthetic data are created in a 3D animation package in order to simplify the creation of motion variations of the animated signer. The animation environment in our implementation allows for the generation of different versions of the same gesture with slight variations in the parameters of the motion. Secondly, a database of real-time ISL video was created. This database contains 1400 different signs, including motion variations of each gesture. The third part details, step by step, my novel classification system and the associated prototype recognition system. The classification system is constructed as a decision tree to identify each sign uniquely. The recognition system is based on only one component of the classification system and has been implemented as a Hidden Markov Model (HMM).
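    To illustrate the decision-tree part of such a classification system, the toy sketch below fits a scikit-learn decision tree over a few hypothetical categorical sign attributes (hand shape, location, movement). The attributes, encodings, and sign labels are invented for illustration and do not come from the thesis.

    # Toy sketch: identify a sign from encoded categorical attributes with a
    # decision tree. All values below are placeholders.
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical encoded attributes: [hand_shape_id, location_id, movement_id]
    X_train = [
        [0, 1, 2],   # e.g. flat hand, chest, circular motion
        [0, 1, 0],   # flat hand, chest, static
        [3, 2, 1],   # fist, forehead, downward motion
    ]
    y_train = ["thank_you", "please", "father"]   # placeholder sign labels

    tree = DecisionTreeClassifier()
    tree.fit(X_train, y_train)
    print(tree.predict([[3, 2, 1]]))   # -> ['father'] on this toy data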