
    Computational Models for the Automatic Learning and Recognition of Irish Sign Language

    This thesis presents a framework for the automatic recognition of Sign Language sentences. Previous sign language recognition works have not fully addressed the issues of user-independent recognition, movement epenthesis modeling, and automatic or weakly supervised training within a single recognition framework. This work presents three main contributions to address these issues. The first contribution is a technique for user-independent hand posture recognition. We present a novel eigenspace Size Function feature which is used to perform user-independent recognition of sign language hand postures. The second contribution is a framework for the classification and spotting of spatiotemporal gestures which appear in sign language. We propose a Gesture Threshold Hidden Markov Model (GT-HMM) to classify gestures and to identify movement epenthesis without the need for explicit epenthesis training. The third contribution is a framework to train the hand posture and spatiotemporal models using only the weak supervision of sign language videos and their corresponding text translations. This is achieved through our proposed Multiple Instance Learning Density Matrix algorithm, which automatically extracts isolated signs from full sentences using the weak and noisy supervision of text translations. The automatically extracted isolated samples are then utilised to train our spatiotemporal gesture and hand posture classifiers. The work we present in this thesis is a significant contribution to the area of natural sign language recognition, as we propose a robust framework for training a recognition system without the need for manual labeling.
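    The core idea of a threshold-model spotter like the GT-HMM is that a gesture is accepted only when its model explains the observed movement better than a composite threshold model, so movement epenthesis needs no dedicated training data. A minimal sketch of this spotting rule follows, assuming hmmlearn-style gesture models; the model dictionary and threshold model are illustrative placeholders, not the thesis implementation.

        import numpy as np

        def spot_gesture(obs, gesture_models, threshold_model):
            # obs: (T, D) array of per-frame features for one candidate segment.
            # gesture_models: dict mapping label -> trained hmmlearn GaussianHMM.
            best_label, best_ll = None, -np.inf
            for label, model in gesture_models.items():
                ll = model.score(obs)  # log-likelihood under this gesture model
                if ll > best_ll:
                    best_label, best_ll = label, ll
            # Reject as movement epenthesis when the threshold model explains
            # the segment at least as well as the best gesture model.
            if best_ll <= threshold_model.score(obs):
                return None
            return best_label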

    A novel set of features for continuous hand gesture recognition

    Applications requiring the natural use of the human hand as a human–computer interface motivate research on continuous hand gesture recognition. Gesture recognition depends on gesture segmentation to locate the starting and end points of meaningful gestures while ignoring unintentional movements. Unfortunately, gesture segmentation remains a formidable challenge because of unconstrained spatiotemporal variations in gestures and the coarticulation and movement epenthesis of successive gestures. Furthermore, errors in hand image segmentation cause the estimated hand motion trajectory to deviate from the actual one. This research moves toward addressing these problems. Our approach entails using gesture spotting to distinguish meaningful gestures from unintentional movements. To avoid the effects of variations in a gesture's motion chain code (MCC), we propose instead to use a novel set of features: the (a) orientation and (b) length of an ellipse least-squares fitted to the motion-trajectory points and (c) the position of the hand. The features are designed to support classification using conditional random fields. To evaluate the performance of the system, 10 participants signed 10 gestures several times each, providing a total of 75 instances per gesture. Of these, 50 instances of each gesture served as training data and 25 as testing data. For isolated gestures, the recognition rate using the MCC as a feature vector was only 69.6% but rose to 96.0% using the proposed features, a 26.1% improvement. For continuous gestures, the recognition rate for the proposed features was 88.9%. These results show the efficacy of the proposed method.
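    The three proposed features are straightforward to compute from a window of trajectory points, since OpenCV's cv2.fitEllipse performs a least-squares ellipse fit directly. The sketch below is illustrative only; the choice of the major axis as the length feature and the single-window layout are assumptions, not the paper's exact setup. Per-frame feature vectors of this kind are what the conditional random field then labels.

        import numpy as np
        import cv2

        def trajectory_features(points):
            # points: sequence of (x, y) hand positions along the motion
            # trajectory; cv2.fitEllipse requires at least 5 points.
            pts = np.asarray(points, dtype=np.float32).reshape(-1, 1, 2)
            (cx, cy), (major, minor), angle = cv2.fitEllipse(pts)
            x, y = points[-1]  # (c) current hand position
            # (a) ellipse orientation, (b) ellipse length, (c) hand position.
            return np.array([angle, major, x, y], dtype=np.float32)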

    A Real-time Sign Language Recognition System for Hearing and Speaking Challengers

    Sign language is the primary means of communication for deaf people and other hearing and speaking challengers. There are many varieties of sign language in different challenger communities, much like an ethnic community within society. Unfortunately, few people in daily life know sign language. In general, interpreters can help us communicate with challengers, but they are usually found only in government agencies, hospitals, and similar institutions. Moreover, it is expensive to employ an interpreter personally, and inconvenient when privacy is required. It is therefore very important to develop a robust Human Machine Interface (HMI) system that can support challengers in entering our society. A novel sign language recognition system is proposed, composed of three parts. First, the initial coordinate locations of the hands are obtained using the joint skeleton information of the Kinect. Next, we extract features from the hand joints, which carry depth information, and from the handshapes. We then train a Hidden Markov Model-based threshold model on three feature sets. Finally, we use this threshold model to segment and recognize sign language. Experimental results show that the average recognition rates for signer-dependent and signer-independent tests are 95% and 92%, respectively. We also find that feature sets including handshape achieve better recognition results.
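    As a rough illustration of the first two steps, per-frame features can be built from the Kinect skeleton by expressing the 3D hand-joint positions relative to the torso and normalizing by body scale; sequences of such vectors are what the HMM-based threshold model is trained on. The joint names and normalization below are assumptions for illustration, not the paper's exact feature sets.

        import numpy as np

        def hand_features(skel):
            # skel: dict mapping joint name -> (x, y, z) position in Kinect
            # camera space, taken from the tracked skeleton for one frame.
            torso = np.asarray(skel["spine_mid"])
            scale = np.linalg.norm(np.asarray(skel["shoulder_left"]) -
                                   np.asarray(skel["shoulder_right"]))
            # Torso-relative, scale-normalized hand positions are more
            # comparable across signers of different sizes and distances.
            left = (np.asarray(skel["hand_left"]) - torso) / scale
            right = (np.asarray(skel["hand_right"]) - torso) / scale
            return np.concatenate([left, right])  # 6-D feature for this frame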

    SIGNER-INDEPENDENT SIGN LANGUAGE RECOGNITION BASED ON HMMs AND DEPTH INFORMATION

    In this paper, we use depth information to effectively locate the 3D positions of the hands in a sign language recognition system. However, these positions vary from signer to signer, which degrades recognition. To address this, we use the incremental changes of the three-dimensional coordinates per unit time as the feature set. We use hidden Markov models (HMMs) as a time-varying classifier to recognize the motion of sign language in the time domain. We also introduce a scaling factor into the HMM computations to prevent numerical underflow. Experiments verify that the proposed method is superior to the traditional one.
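    The underflow problem arises because an HMM's forward probabilities shrink multiplicatively with sequence length. The standard remedy that the scaling factor above suggests is Rabiner's scaled forward recursion, in which each step is renormalized and the log-likelihood is recovered from the scale factors. A minimal sketch, not the paper's implementation:

        import numpy as np

        def scaled_forward_loglik(pi, A, B):
            # pi: (N,) initial state probabilities; A: (N, N) transition
            # matrix; B: (T, N) likelihood of each observed frame under
            # each state (precomputed from the emission densities).
            T, N = B.shape
            alpha = pi * B[0]
            c = alpha.sum()                 # scale factor for t = 0
            alpha /= c
            loglik = np.log(c)
            for t in range(1, T):
                alpha = (alpha @ A) * B[t]  # unscaled recursion step
                c = alpha.sum()
                alpha /= c                  # renormalize to avoid underflow
                loglik += np.log(c)         # log P(O) is the sum of log scales
            return loglik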

    Towards Subject Independent Sign Language Recognition: A Segment-Based Probabilistic Approach

    Ph.D. thesis (Doctor of Philosophy).

    Two Hand Gesture Based 3D Navigation in Virtual Environments

    Natural interaction is gaining popularity due to its simple, attractive, and realistic nature, which realizes direct Human Computer Interaction (HCI). In this paper, we present a novel two-hand gesture based interaction technique for three-dimensional (3D) navigation in Virtual Environments (VEs). The system uses computer vision techniques to detect hand gestures (colored thumbs) in the real scene and performs different navigation tasks (forward, backward, up, down, left, and right) in the VE. The proposed technique also allows users to efficiently control speed during navigation. The technique is implemented in a VE for experimental purposes, and forty participants took part in the experimental study. Experiments revealed that the proposed technique is feasible, easy to learn and use, and imposes little cognitive load on users. Finally, gesture recognition engines were used to assess the accuracy and performance of the proposed gestures. kNN achieved higher accuracy (95.7%) than SVM (95.3%), and also outperformed SVM in training time (3.16 secs vs. 6.40 secs) and prediction speed (6600 obs/sec vs. 2900 obs/sec).
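    The reported kNN-versus-SVM comparison can be reproduced in outline with scikit-learn once gesture feature vectors are available; the split and hyperparameters below are placeholders, not the study's exact protocol.

        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC

        def compare_classifiers(X, y):
            # X: (n_samples, n_features) gesture feature vectors; y: labels.
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, test_size=0.25, stratify=y, random_state=0)
            knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
            svm = SVC(kernel="rbf").fit(X_tr, y_tr)
            # Accuracy on the held-out split for each engine.
            return {"kNN": knn.score(X_te, y_te), "SVM": svm.score(X_te, y_te)}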

    GCTW Alignment for isolated gesture recognition

    In recent years, there has been increasing interest in developing automatic Sign Language Recognition (SLR) systems because Sign Language (SL) is the main mode of communication among deaf people all over the world. However, most people outside the deaf community do not understand SL, generating a communication problem between the two communities. Recognizing signs is challenging because manual signing (not taking into account facial gestures) has four components that must be recognized, namely handshape, movement, location, and palm orientation. Even though the appearance and meaning of basic signs are well defined in sign language dictionaries, in practice many variations arise due to factors like gender, age, education, or regional, social, and ethnic background, which can lead to significant variations and make it hard to develop a robust SL recognition system. This project introduces the alignment of videos into isolated SLR, given that this approach has not been studied in depth even though it has great potential for correctly recognizing isolated gestures. We also aim for user-independent recognition, meaning that the system should achieve good recognition accuracy for signers who were not represented in the data set. The main features used for the alignment are the wrist coordinates, which we extract from the videos using OpenPose. These features are aligned using Generalized Canonical Time Warping (GCTW), and the resulting videos are classified with a 3D CNN. Our experimental results show that the proposed method obtains 65.02% accuracy, which places us 5th in the 2017 ChaLearn LAP isolated gesture recognition challenge, only 2.69% away from first place.
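    OpenPose writes one JSON file of keypoints per frame, so the wrist trajectories fed to the GCTW alignment can be pulled out as sketched below. Indices 4 and 7 are the right and left wrists in OpenPose's BODY_25 layout; the file naming and single-person assumption are illustrative, not this project's exact pipeline.

        import json
        from pathlib import Path
        import numpy as np

        def wrist_trajectory(json_dir):
            # Returns a (T, 4) array of [rx, ry, lx, ly] per frame, read from
            # the *_keypoints.json files that OpenPose writes for a video.
            frames = []
            for path in sorted(Path(json_dir).glob("*_keypoints.json")):
                people = json.loads(path.read_text())["people"]
                if not people:
                    continue  # no person detected in this frame
                kp = np.asarray(people[0]["pose_keypoints_2d"]).reshape(-1, 3)
                frames.append([kp[4, 0], kp[4, 1], kp[7, 0], kp[7, 1]])
            return np.asarray(frames, dtype=np.float32)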

    Review of constraints on vision-based gesture recognition for human–computer interaction

    The ability of computers to recognise hand gestures visually is essential for progress in human-computer interaction. Gesture recognition has applications ranging from sign language to medical assistance to virtual reality. However, gesture recognition is extremely challenging not only because of its diverse contexts, multiple interpretations, and spatio-temporal variations but also because of the complex non-rigid properties of the hand. This study surveys major constraints on vision-based gesture recognition occurring in detection and pre-processing, representation and feature extraction, and recognition. Current challenges are explored in detail.