    A survey on mouth modeling and analysis for Sign Language recognition

    © 2015 IEEE. Around 70 million Deaf people worldwide use Sign Languages (SLs) as their native languages, while often having limited reading/writing skills in the surrounding spoken language. This puts them at a severe disadvantage in many contexts, including education, work, and the use of computers and the Internet. Automatic Sign Language Recognition (ASLR) can support the Deaf in many ways, e.g. by enabling the development of systems for Human-Computer Interaction in SL and for translation between sign and spoken language. Research in ASLR usually revolves around automatic understanding of manual signs. Recently, the ASLR research community has started to appreciate the importance of non-manuals, since they are related to the lexical meaning of a sign, the syntax and the prosody. Non-manuals include body and head pose, movement of the eyebrows and the eyes, as well as blinks and squints. Arguably, the mouth is one of the most involved parts of the face in non-manuals. Mouth actions relevant to ASLR are either mouthings, i.e. visual syllables articulated with the mouth while signing, or non-verbal mouth gestures; both are very important in ASLR. In this paper, we present the first survey on mouth non-manuals in ASLR. We start by showing why mouth motion is important in SL and which relevant techniques already exist within ASLR. Since limited research has been conducted on the automatic analysis of mouth motion in the context of ASLR, we then survey relevant techniques from the areas of automatic mouth expression analysis and visual speech recognition which can be applied to the task. Finally, we conclude by presenting the challenges and potential of automatic analysis of mouth motion in the context of ASLR.

    FlyLimbTracker: An active contour based approach for leg segment tracking in unmarked, freely behaving Drosophila.

    Understanding the biological underpinnings of movement and action requires the development of tools for quantitative measurement of animal behavior. Drosophila melanogaster provides an ideal model for developing such tools: the fly has unparalleled genetic accessibility and depends on a relatively compact nervous system to generate sophisticated limbed behaviors including walking, reaching, grooming, courtship, and boxing. Here we describe a method that uses active contours to semi-automatically track body and leg segments from video image sequences of unmarked, freely behaving D. melanogaster. We show that this approach yields a more than 6-fold reduction in user intervention compared with fully manual annotation and can be used to annotate videos with low spatial or temporal resolution for a variety of locomotor and grooming behaviors. FlyLimbTracker, the software implementation of this method, is open source and our approach is generalizable. This opens up the possibility of tracking leg movements in other species by modifying the underlying active contour models.
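    The FlyLimbTracker software itself is not reproduced here; the sketch below only illustrates the general active-contour (snake) fitting idea that such contour-based trackers build on, using scikit-image. The synthetic frame, initial contour, and all parameter values are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal active-contour sketch with scikit-image: fit a snake to a bright
    # blob standing in for a body/leg segment in one video frame.
    import numpy as np
    from skimage.filters import gaussian
    from skimage.segmentation import active_contour
    from skimage.draw import disk

    # Synthetic frame: a smoothed bright disk as the tracking target.
    frame = np.zeros((200, 200))
    rr, cc = disk((100, 100), 30)
    frame[rr, cc] = 1.0
    frame = gaussian(frame, sigma=3, preserve_range=True)

    # Initial contour: a circle placed roughly around the target
    # (in a tracker this would typically come from the previous frame).
    theta = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([100 + 45 * np.sin(theta),   # rows
                            100 + 45 * np.cos(theta)])  # cols

    # Fit the snake: alpha and beta control the contour's elasticity and rigidity.
    snake = active_contour(frame, init, alpha=0.015, beta=10, gamma=0.001)
    print(snake.shape)  # (200, 2) array of fitted (row, col) contour points
    ```

    In a tracking setting, the fitted contour from one frame would serve as the initialization for the next, which is what keeps user intervention low compared with fully manual annotation.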