
    A survey on mouth modeling and analysis for Sign Language recognition

    © 2015 IEEE. Around 70 million Deaf people worldwide use Sign Languages (SLs) as their native languages. At the same time, they have limited reading/writing skills in the spoken language. This puts them at a severe disadvantage in many contexts, including education, work, and the use of computers and the Internet. Automatic Sign Language Recognition (ASLR) can support the Deaf in many ways, e.g. by enabling the development of systems for Human-Computer Interaction in SL and translation between sign and spoken language. Research in ASLR usually revolves around automatic understanding of manual signs. Recently, the ASLR research community has started to appreciate the importance of non-manuals, since they are related to the lexical meaning of a sign, the syntax, and the prosody. Non-manuals include body and head pose, movement of the eyebrows and the eyes, as well as blinks and squints. Arguably, the mouth is one of the most involved parts of the face in non-manuals. Mouth actions related to ASLR can be either mouthings, i.e. visual syllables articulated with the mouth while signing, or non-verbal mouth gestures. Both are very important in ASLR. In this paper, we present the first survey on mouth non-manuals in ASLR. We start by showing why mouth motion is important in SL and reviewing the relevant techniques that exist within ASLR. Since limited research has been conducted on the automatic analysis of mouth motion in the context of ASLR, we proceed by surveying relevant techniques from the areas of automatic mouth expression analysis and visual speech recognition which can be applied to the task. Finally, we conclude by presenting the challenges and potential of automatic analysis of mouth motion in the context of ASLR.
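    The survey above treats mouthings and mouth gestures as visual cues to be extracted from the face region. As a minimal, hedged sketch of one common preprocessing step (not taken from the survey itself), the snippet below isolates lip landmarks per video frame with MediaPipe FaceMesh; the function name and the choice of FaceMesh are illustrative assumptions rather than the survey's prescribed pipeline.

```python
# Illustrative sketch: extract normalized lip landmarks from one video frame
# as a possible input to mouthing / mouth-gesture analysis. Assumes the
# mediapipe and opencv-python packages are installed.
import cv2
import mediapipe as mp
import numpy as np

mp_face_mesh = mp.solutions.face_mesh
# FACEMESH_LIPS is a set of (start, end) index pairs; collect the unique indices.
LIP_IDXS = sorted({i for edge in mp_face_mesh.FACEMESH_LIPS for i in edge})

def mouth_landmarks(frame_bgr):
    """Return an (N, 2) array of normalized lip landmark coordinates,
    or None if no face is found in the frame."""
    with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as mesh:
        results = mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None
    lms = results.multi_face_landmarks[0].landmark
    return np.array([[lms[i].x, lms[i].y] for i in LIP_IDXS], dtype=np.float32)

# A sequence of such mouth shapes over frames could then be fed to a visual
# speech recognition or mouth-gesture classifier.
```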

    On The Linguistic Effects Of Articulatory Ease, With A Focus On Sign Languages

    Spoken language has a well-known drive for ease of articulation, which Kirchner (1998, 2004) analyzes as reduction of the total magnitude of all biomechanical forces involved. We extend Kirchner's insights from vocal articulation to manual articulation, with a focus on joint usage, and we discuss ways that articulatory ease might be realized in sign languages. In particular, moving more joints and/or joints more proximal to the torso results in greater mass being moved, and thus more articulatory force being expended, than moving fewer joints or moving more distal joints. We predict that in casual conversation, where articulatory ease is prized, moving fewer joints should be favored over moving more, and moving distal joints should be favored over moving proximal joints. We report on the results of our study of the casual signing of fluent signers of American Sign Language, which confirm our predictions: in comparison to citation forms of signs, the casual variants produced by the signers in our experiment exhibit an overall decrease in average joint usage, as well as a general preference for more distal articulation than is used in citation form. We conclude that all language, regardless of modality, is shaped by a fundamental drive for ease of articulation. Our work advances a cross-modality approach for considering ease of articulation, develops a potentially important vocabulary for describing variations in signs, and demonstrates that American Sign Language exhibits variation that can be accounted for in terms of ease of articulation. We further suggest that the linguistic drive for ease of articulation is part of a broader tendency for the human body to reduce biomechanical effort in all physical activities.
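    To make the joint-usage comparison concrete, here is a small illustrative Python sketch of the kind of statistics reported above (average joints moved per sign token and the share of distal joints), comparing citation forms with casual variants. The joint inventory and the example data are hypothetical, not the study's actual coding scheme.

```python
# Hypothetical comparison of joint usage between citation and casual variants.
from statistics import mean

PROXIMAL = {"shoulder", "elbow"}               # joints closer to the torso
DISTAL = {"wrist", "knuckles", "finger_joints"}  # joints farther from the torso

def joint_stats(variants):
    """variants: list of sets, each the joints moved in one sign token.
    Returns (average number of joints used, average share of distal joints)."""
    avg_joints = mean(len(v) for v in variants)
    distal_share = mean(len(v & DISTAL) / len(v) for v in variants if v)
    return avg_joints, distal_share

citation = [{"shoulder", "elbow", "wrist", "knuckles"}, {"elbow", "wrist"}]
casual = [{"wrist", "knuckles"}, {"wrist"}]

print(joint_stats(citation))  # (3, 0.5): more joints, half of them distal
print(joint_stats(casual))    # (1.5, 1.0): fewer joints, all distal
```

    The prediction in the abstract corresponds to the casual variants scoring lower on the first statistic and higher on the second.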

    Use of Key Points and Transfer Learning Techniques in Recognition of Handedness Indian Sign Language

    Sign language is the most expressive means of communication for individuals who have difficulty speaking or hearing, yet most hearing people cannot understand it, which creates communication barriers. The majority of people are right-handed; statistics indicate that only about 10% of the world's population is left-handed, using the left hand as the dominant hand. In handwritten text recognition, whether the text is written by a left-handed or a right-handed person poses no problem for either humans or computers. The same is not true for sign language recognition by computer: when detection is performed with computer vision, particularly appearance-based detection, signs may not be recognized correctly. In machine and deep learning, if a model is trained on just one dominant hand, say the right hand, its predictions can go wrong when the same sign is performed by a left-handed person. This paper addresses this issue by accommodating signs performed by any type of signer: left-handed, right-handed, or ambidextrous. The proposed work targets Indian Sign Language (ISL). Two models are trained: Model I on one dominant hand, and Model II on both hands. Model II gives correct predictions regardless of the signer's handedness and recognizes ISL alphabets and numbers. We use key points and transfer learning techniques for implementation; with this approach, the models train quickly and achieve a validation accuracy of 99%.
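    As a rough sketch of how key points can be made handedness-invariant before classification (an illustrative assumption; the paper's exact pipeline may differ), the snippet below extracts 21 hand landmarks with MediaPipe Hands and mirrors left-hand detections into a right-hand canonical form that a single transfer-learned classifier could consume.

```python
# Hypothetical sketch: hand key-point extraction with handedness normalization.
# The mirroring step and feature layout are assumptions for illustration,
# not the paper's published method. Requires mediapipe and opencv-python.
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands

def extract_canonical_keypoints(image_bgr):
    """Return a flat (21*3,) array of hand landmarks mirrored to a
    right-hand canonical form, or None if no hand is detected."""
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    landmarks = results.multi_hand_landmarks[0].landmark
    # NOTE: MediaPipe assigns "Left"/"Right" assuming a mirrored (selfie-view)
    # image; swap the labels if the input is not mirrored.
    label = results.multi_handedness[0].classification[0].label
    points = np.array([[lm.x, lm.y, lm.z] for lm in landmarks], dtype=np.float32)
    if label == "Left":
        points[:, 0] = 1.0 - points[:, 0]  # mirror x so both hands look alike
    return points.flatten()

# Usage: feed the flattened key points to a classifier head, e.g. one
# fine-tuned via transfer learning on ISL alphabets and digits.
# features = extract_canonical_keypoints(cv2.imread("sign_frame.jpg"))
```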

    Development in Signer-Independent Sign Language Recognition and the Ideas of Solving Some Key Problems


    Resemblance-Oriented Communication Strategies: Understanding The Role Of Resemblance In Signed And Spoken Languages

    The goal of this thesis is to propose that resemblance plays an important role in human communication. Saussure proposed a characteristic principle of the linguistic sign: that connections between linguistic codes and the objects they signify are arbitrary; however, I intend to show that resemblance, which I define as the visual or aural similarity between a stimulus, the thought it is intended to activate, and the real world target that utterance is about, is an important part of human communication and should be taken into consideration when defining language and proposing theories of human communication. I have chosen Relevance Theory as the framework for this analysis because it highlights the importance of inferential communication. According to Relevance Theory, human communication is guided by expectations of relevance, a balance between cognitive effects (information the addressee finds worthwhile) and processing effort (the amount of work required to understand that information). Human communication reduces the amount of processing effort through conventionalization; words signify concepts, starting points from which inference can be used to arrive at a communicator's intended meaning. I suggest that the range of human perception and experience acts as common ground between communicators, providing a shared context between communicator and addressee and reducing what must be explicitly communicated. Essentially, resemblance between an utterance and an intended thought performs a similar function to conventionalization, activating concepts from shared context and providing a starting point for inferential communication, guiding addressees to the communicator's intended meaning. My claim that resemblance has a role to play in human communication raises significant questions about the widely held stance that language is inherently arbitrary. I have proposed that signs can meaningfully resemble the things they signify; if this is true, we must consider the implications for modern linguistic analysis and adjust linguistic theory to accurately account for the use of resemblance in human communication.