5 research outputs found

    Activity detection in conversational sign language video for mobile telecommunication

    The goal of the MobileASL project is to increase accessibility by making the mobile telecommunications network available to the signing Deaf community. Video cell phones enable Deaf users to communicate in their native language, American Sign Language (ASL). However, encoding and transmitting real-time video on a cell phone is a power-intensive task that can quickly drain the battery. By recognizing activity in the conversational video, we can drop the frame rate during less important segments without significantly harming intelligibility, thus reducing the computational burden. This recognition must take place from video in real time on a cell phone processor, with users who wear no special clothing. In this work, we quantify the power savings from dropping the frame rate during less important segments.
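    The variable-frame-rate idea in this abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the MobileASL implementation: it flags each frame as active or idle using a simple mean pixel-difference measure and subsamples idle runs; the threshold, the subsampling ratio, and the use of frame differencing as the activity detector are all assumptions for illustration.

```python
# Minimal sketch (not the MobileASL method): vary the encoded frame
# rate based on a simple activity measure. Frames whose mean pixel
# difference from the previous frame falls below a threshold are
# treated as "less important" (e.g., the user is watching, not
# signing) and are subsampled to reduce encoding work.
import numpy as np

ACTIVITY_THRESHOLD = 12.0    # hypothetical: mean absolute pixel difference
LOW_ACTIVITY_KEEP_EVERY = 3  # hypothetical: keep 1 of every 3 idle frames

def select_frames(frames):
    """Yield (index, frame) pairs chosen for encoding."""
    prev = None
    idle_run = 0
    for i, frame in enumerate(frames):
        # Collapse color frames to grayscale for the difference measure.
        gray = frame.mean(axis=2) if frame.ndim == 3 else frame
        active = prev is None or np.abs(gray - prev).mean() > ACTIVITY_THRESHOLD
        prev = gray
        if active:
            idle_run = 0
            yield i, frame           # full frame rate while signing
        else:
            idle_run += 1
            if idle_run % LOW_ACTIVITY_KEEP_EVERY == 0:
                yield i, frame       # reduced frame rate while idle
```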

    Recognition of Local Features for Camera-based Sign Language Recognition System

    15th International Conference on Pattern Recognition, 3-7 Sept. 2000, Spain.
    A sign language recognition system is required to use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. In this paper, we present an adequate local feature recognizer for a sign language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols that correspond to clusters obtained by a clustering technique. The clusters are created from a training set of extracted hand images so that images with a similar appearance are classified into the same cluster in an eigenspace. The experimental results indicate that our system can recognize a sign language word even in two-handed and hand-to-hand contact cases.

    Recognition of Local Features for Camera-based Sign-Language Recognition System

    Special Section: Human Interface.
    A sign-language recognition system should use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. We designed a system that first selects possible words by using the detected global features, then narrows the choices down to one by using the detected local features. In this paper, we describe an adequate local feature recognizer for a sign-language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols corresponding to clusters by using a clustering technique. The clusters are created from a training set of extracted hand images so that images with a similar appearance can be classified into the same cluster in an eigenspace. Experimental results showed that our system can recognize a signed word even in two-handed and hand-to-hand contact cases.
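    As a rough illustration of the approach this abstract describes (project hand images into an eigenspace, cluster them, and use cluster indices as symbols), the sketch below builds the eigenspace with PCA and clusters the projections with k-means. The dimensionality, cluster count, and the choice of k-means as the clustering technique are assumptions; the abstract does not specify them.

```python
# Minimal sketch of the abstract's basic approach, under assumed details:
# flattened hand images are projected into an eigenspace (PCA over a
# training set) and clustered so that similar-looking hand shapes map
# to the same cluster; each hand image is then represented by its
# cluster index (a discrete symbol).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

N_COMPONENTS = 20   # hypothetical eigenspace dimensionality
N_CLUSTERS = 64     # hypothetical number of hand-shape symbols

def build_symbol_model(train_images):
    """train_images: (n_samples, height*width) flattened hand images."""
    pca = PCA(n_components=N_COMPONENTS).fit(train_images)
    coords = pca.transform(train_images)            # eigenspace coordinates
    kmeans = KMeans(n_clusters=N_CLUSTERS, n_init=10).fit(coords)
    return pca, kmeans

def hand_image_to_symbol(pca, kmeans, image):
    """Map one flattened hand image to its cluster index (symbol)."""
    coords = pca.transform(image.reshape(1, -1))
    return int(kmeans.predict(coords)[0])
```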

    Human Interface. Recognition of Local Features for Camera-based Sign-Language Recognition System.
