
    Design and Construction of an American Sign Language (ASL) Translation Device Using Flex Sensors and an MPU-6050, Based on an ATmega2560 Microcontroller

    Deaf and hard-of-hearing people use American Sign Language (ASL) to communicate with others. Unfortunately, most hearing people never learn a sign language and therefore cannot understand those who depend on it. The rapid development of science and technology, however, can make it easier to translate hand and body formations. This research began with a literature study surveying the sensors needed in a glove. It employs five flex sensors together with an accelerometer and gyroscope (MPU-6050) to distinguish ASL letters with similar finger formations. An Arduino Mega 2560 board serves as the central controller, reading the flex sensors' output and processing the information. Through a 1Sheeld module, the interpreter's output is presented on a smartphone as both text and voice. The result is a flex-glove system that translates ASL hand formations into output that can be seen and heard. Limitations were found when translating the letters N and M, for which accuracy reached only 60%; overall, the system recognizes the letters A to Z with 96.9% accuracy.
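    The glove pipeline the abstract describes (read five bend values, match them against per-letter finger formations) can be sketched as a nearest-neighbor classifier. This is not the authors' code: the template values, letter set, and 10-bit ADC range are illustrative assumptions.

    ```python
    LETTER_TEMPLATES = {
        # [thumb, index, middle, ring, pinky] bend readings,
        # 0 (straight) to 1023 (fully bent) on a 10-bit ADC -- assumed scale.
        "A": [300, 900, 900, 900, 900],  # fist, thumb alongside
        "B": [800, 100, 100, 100, 100],  # flat hand, thumb tucked
        "L": [100, 100, 900, 900, 900],  # thumb and index extended
    }

    def classify(reading):
        """Return the template letter closest to the reading (squared Euclidean distance)."""
        def dist(template):
            return sum((r - t) ** 2 for r, t in zip(reading, template))
        return min(LETTER_TEMPLATES, key=lambda letter: dist(LETTER_TEMPLATES[letter]))

    print(classify([310, 880, 910, 905, 895]))  # -> A
    ```

    Letters like N and M, which the paper reports at 60% accuracy, differ mainly in thumb position under the fingers; in a scheme like this their templates sit close together, which is where the MPU-6050 orientation data would help disambiguate.
    
    
    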

    American Sign Language Interpreters and their Influence on the Hearing World

    This honors thesis discusses the hearing community's perception of American Sign Language and, by association, its perception of the Deaf community. For most hearing people, the only interaction with American Sign Language is watching an interpreter at work; they have no personal, physical interaction with the language. Even so, having never used the language or attempted to interact with the Deaf community, they draw their own conclusions about both, and those conclusions tend to be incorrect. These misunderstandings are encountered early in the field of interpreting. The small size of the Deaf community makes such false perceptions hard to dismantle, because the Deaf community and the hearing people who hold the misconceptions rarely intersect. This thesis delves into the extent of these misconceptions and how much of the hearing world's perspective they influence. To understand the potential hazard of the interpreter language model, it is important first to understand a brief history of American Sign Language and Deaf culture. The paper will then apply these principles to the Deaf community, the interpreter, and the hearing community. The end of the paper dispels many of the false perceptions the hearing community holds of Deaf culture; this section is included to show that the misconceptions exist.

    DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation

    There is an undeniable communication barrier between deaf people and people with normal hearing. Although innovations in sign language translation technology aim to tear down this barrier, most existing sign language translation systems are either intrusive or constrained by resolution or ambient lighting conditions. Moreover, they can only perform single-sign ASL translation rather than sentence-level translation, making them much less useful in daily communication. In this work, we fill this critical gap by presenting DeepASL, a transformative deep learning-based sign language translation technology that enables ubiquitous and non-intrusive American Sign Language (ASL) translation at both word and sentence levels. DeepASL uses infrared light as its sensing mechanism to non-intrusively capture ASL signs. It incorporates a novel hierarchical bidirectional deep recurrent neural network (HB-RNN) for word-level translation and a probabilistic framework based on Connectionist Temporal Classification (CTC) for sentence-level translation. To evaluate its performance, we collected 7,306 samples from 11 participants, covering 56 commonly used ASL words and 100 ASL sentences. DeepASL achieves an average 94.5% word-level translation accuracy and an average 8.2% word error rate when translating unseen ASL sentences. Given this promising performance, we believe DeepASL represents a significant step towards breaking the communication barrier between deaf people and the hearing majority, and thus has significant potential to fundamentally change deaf people's lives.
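    The CTC framework the abstract names turns per-frame word probabilities into a sentence by collapsing repeated labels and dropping a special blank symbol. A minimal greedy-decoding sketch of that collapse rule follows; it is not DeepASL's implementation, and the vocabulary and frame probabilities are invented for illustration.

    ```python
    BLANK = "-"

    def ctc_greedy_decode(frame_probs, labels):
        """Pick the most likely label per frame, collapse repeats, drop blanks."""
        best = [labels[max(range(len(p)), key=p.__getitem__)] for p in frame_probs]
        collapsed = [lab for i, lab in enumerate(best) if i == 0 or lab != best[i - 1]]
        return [lab for lab in collapsed if lab != BLANK]

    labels = [BLANK, "HELLO", "THANK", "YOU"]
    # Each row: probabilities over (blank, HELLO, THANK, YOU) for one frame.
    frames = [
        [0.1, 0.7, 0.1, 0.1],    # HELLO
        [0.2, 0.6, 0.1, 0.1],    # HELLO again -- collapsed as a repeat
        [0.8, 0.1, 0.05, 0.05],  # blank separates signs
        [0.1, 0.1, 0.1, 0.7],    # YOU
    ]
    print(ctc_greedy_decode(frames, labels))  # -> ['HELLO', 'YOU']
    ```

    The blank symbol is what lets CTC handle signs of varying duration without frame-level alignment labels, which is why it suits continuous sentence-level signing.
    
    
    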

    Promising Practices: Reaching Out to Rhode Island's Deaf and Hard of Hearing Community

    Protection and Advocacy for Beneficiaries of Social Security (PABSS) staff are responsible for providing legal services to Social Security recipients who face barriers in their efforts to return to work. Benefits specialists are responsible for reaching out to all recipient communities within their territory to provide information and planning services when a recipient is considering a work effort. In September 2004, the Rhode Island BPA&O project noted that deaf and hard of hearing individuals were not utilizing benefits counseling services. A work group was created to address this situation and develop a strategy, which included outreach to community groups and agencies serving deaf and hard of hearing individuals, along with aggressive referrals of deaf and hard of hearing individuals to benefits planners by the state's vocational rehabilitation workers. In preparation for this work, benefits planners received training in using a TTY, placing calls through the Rhode Island Relay Service, and effectively utilizing sign language interpreters.

    American Sign Language Interpreting for Deaf Individuals with Disabilities

    Undergraduate Theoretical Proposal