146 research outputs found

    A survey on mouth modeling and analysis for Sign Language recognition

    © 2015 IEEE. Around 70 million Deaf people worldwide use Sign Languages (SLs) as their native languages. At the same time, they often have limited reading/writing skills in the surrounding spoken language. This puts them at a severe disadvantage in many contexts, including education, work, and use of computers and the Internet. Automatic Sign Language Recognition (ASLR) can support the Deaf in many ways, e.g. by enabling the development of systems for Human-Computer Interaction in SL and for translation between signed and spoken language. Research in ASLR usually revolves around automatic understanding of manual signs. Recently, the ASLR research community has started to appreciate the importance of non-manuals, since they are related to the lexical meaning of a sign, to syntax, and to prosody. Non-manuals include body and head pose, movement of the eyebrows and the eyes, as well as blinks and squints. Arguably, the mouth is one of the parts of the face most involved in non-manuals. Mouth actions relevant to ASLR are either mouthings, i.e. visual syllables articulated with the mouth while signing, or non-verbal mouth gestures. Both are very important in ASLR. In this paper, we present the first survey on mouth non-manuals in ASLR. We start by showing why mouth motion is important in SL and reviewing the relevant techniques that exist within ASLR. Since limited research has been conducted on automatic analysis of mouth motion in the context of ASLR, we then survey relevant techniques from the areas of automatic mouth expression analysis and visual speech recognition that can be applied to the task. Finally, we conclude by presenting the challenges and potential of automatic analysis of mouth motion in the context of ASLR.

    SIGNER-INDEPENDENT SIGN LANGUAGE RECOGNITION BASED ON HMMs AND DEPTH INFORMATION

    In this paper, we use depth information to effectively locate the 3D position of the hands in a sign language recognition system. However, these positions vary across signers, which degrades recognition. To address this, we use the per-unit-time increments of the three-dimensional coordinates as the feature set. We use hidden Markov models (HMMs) as a time-varying classifier to recognize the motion of sign language in the time domain, and we add a scaling factor to the HMMs to overcome their numerical underflow problem. Experiments verify that the proposed method is superior to the traditional one. Sponsorship: Chinese Image Processing and Pattern Recognition Society; National Ilan University Library and Information Center. International conference, 18–20 August 2013, Yilan, Taiwan.
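    The scaling factor mentioned in the abstract is the standard remedy for underflow in the HMM forward algorithm: renormalize the forward probabilities at every time step and recover the log-likelihood from the accumulated scaling factors. A minimal generic sketch (not the paper's exact implementation; the model below is illustrative):

    ```python
    import math

    def forward_scaled(A, B, pi, obs):
        """Scaled forward algorithm for a discrete HMM.
        A[i][j]: transition prob from state i to j; B[i][k]: prob of
        emitting symbol k in state i; pi[i]: initial state prob.
        Rescaling the forward vector to sum to 1 at every step avoids
        the underflow caused by multiplying many probabilities < 1.
        Returns log P(obs | model), recovered from the scaling factors."""
        N = len(pi)
        alpha = [pi[i] * B[i][obs[0]] for i in range(N)]
        c = sum(alpha)                      # scaling factor, step 0
        alpha = [a / c for a in alpha]
        log_lik = math.log(c)
        for o in obs[1:]:
            alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[j][o]
                     for j in range(N)]
            c = sum(alpha)                  # scaling factor, step t
            alpha = [a / c for a in alpha]
            log_lik += math.log(c)
        return log_lik
    ```

    For long observation sequences the unscaled forward variables shrink below machine precision, while the scaled version stays numerically stable and yields the identical log-likelihood.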

    Automatic recognition of fingerspelled words in British Sign Language

    We investigate the problem of recognizing words from video, fingerspelled using the British Sign Language (BSL) fingerspelling alphabet. This is a challenging task, since the BSL alphabet involves both hands occluding each other and contains signs that are ambiguous from the observer's viewpoint. The main contributions of our work are: (i) recognition based on hand shape alone, without requiring motion cues; (ii) robust visual features for hand shape recognition; (iii) scalability to large-lexicon recognition with no re-training. We report results on a dataset of 1,000 low-quality webcam videos of 100 words. The proposed method achieves a word recognition accuracy of 98.9%.

    Continuous sign recognition of Brazilian Sign Language in a healthcare setting

    Communication is the basis of human society. The majority of people communicate using spoken language in oral or written form. However, sign language is the primary mode of communication for deaf people. In general, understanding spoken information is a major challenge for the deaf and hard of hearing, and access to basic information and essential services is difficult for these individuals. For example, without translation support, carrying out simple tasks in a healthcare center, such as asking for guidance or consulting with a doctor, can be hopelessly difficult. Computer-based sign language recognition technologies offer an alternative to mitigate the communication barrier faced by the deaf and hard of hearing. Despite much effort, research in this field is still in its infancy, and automatic recognition of continuous signing remains a major challenge. This paper presents an ongoing research project designed to recognize continuous signing of Brazilian Sign Language (Libras) in healthcare settings. Health emergency situations and dialogues inspire the vocabulary of the signs and sentences we are using to contribute to the field. Funding: CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior).

    Sign Spotting using Hierarchical Sequential Patterns with Temporal Intervals

    This paper tackles the problem of spotting a set of signs occurring in videos that contain sequences of signs. To achieve this, we propose to model the spatio-temporal signature of a sign using an extension of sequential patterns that contains temporal intervals, called Sequential Interval Patterns (SIPs). We then propose a novel multi-class classifier that organises different sequential interval patterns in a hierarchical tree structure called a Hierarchical SIP Tree (HSP-Tree). This allows us to exploit any subsequence sharing that exists between SIPs of different classes. Multiple trees are then combined into a forest of HSP-Trees, resulting in a strong classifier that can be used to spot signs. We show how the HSP-Forest can be used to spot sequences of signs that occur in an input video. We have evaluated the method on both concatenated sequences of isolated signs and continuous sign sequences. We also show that the proposed method is superior in robustness and accuracy to a state-of-the-art sign recogniser when applied to spotting a sequence of signs. This work was funded by the UK government.
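    The core idea of a sequential interval pattern is a sequence of events in which each event must follow the previous one within a bounded time gap. A minimal matcher sketching that idea (the symbol names and data layout are hypothetical, and this is a simplified illustration, not the paper's HSP-Tree algorithm):

    ```python
    def matches_sip(pattern, events):
        """Check whether a Sequential Interval Pattern occurs in a
        timestamped event sequence.
        pattern: list of (symbol, (min_gap, max_gap)); each symbol must
        follow the previously matched one within [min_gap, max_gap]
        time units (the first item's interval is ignored).
        events: list of (time, symbol), sorted by time."""
        def search(p_idx, e_idx, last_t):
            if p_idx == len(pattern):
                return True                 # all pattern items matched
            sym, (lo, hi) = pattern[p_idx]
            for i in range(e_idx, len(events)):
                t, s = events[i]
                if s != sym:
                    continue
                if last_t is not None and not (lo <= t - last_t <= hi):
                    if t - last_t > hi:
                        break               # later events only grow the gap
                    continue
                if search(p_idx + 1, i + 1, t):
                    return True
            return False
        return search(0, 0, None)
    ```

    A hierarchical SIP tree then shares common pattern prefixes across classes so that shared subsequences are matched only once; the sketch above checks a single pattern in isolation.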