
    Spanish generation from Spanish Sign Language using a phrase-based translation system

    This paper describes the development of a spoken Spanish generator from Spanish Sign Language (LSE – Lengua de Signos Española) in a specific domain: the renewal of the Identity Document and Driver's license. The system is composed of three modules. The first is an interface where a deaf person can specify a sign sequence in sign-writing. The second is a language translator that converts the sign sequence into a word sequence. The last module is a text-to-speech converter. The paper also describes the generation of a parallel corpus for system development, composed of more than 4,000 Spanish sentences and their LSE translations in the application domain. The paper focuses on the translation module, which uses a statistical strategy with a phrase-based translation model, and analyses the effect of the alignment configuration used when generating the word-based translation model. The best configuration gives a 3.90% mWER and a 0.9645 BLEU.
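    The mWER figure reported above is a word error rate: the edit distance between the generated word sequence and a reference, normalized by reference length. A minimal Python sketch of that computation (the Spanish sentences are illustrative, not from the paper's corpus):

    ```python
    def word_error_rate(reference, hypothesis):
        """WER = word-level Levenshtein distance / reference length."""
        ref, hyp = reference.split(), hypothesis.split()
        # Dynamic-programming edit-distance table
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,          # deletion
                              d[i][j - 1] + 1,          # insertion
                              d[i - 1][j - 1] + cost)   # substitution
        return d[len(ref)][len(hyp)] / len(ref)

    # One substitution out of five reference words -> 0.2
    print(word_error_rate("renuevo mi carnet de conducir",
                          "renuevo el carnet de conducir"))  # 0.2
    ```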

    Multimedia Dictionary and Synthesis of Sign Language

    We developed a multimedia dictionary of the Slovenian Sign Language (SSL) which consists of words, illustrations and video clips. We describe the structure of the dictionary and give examples of its user interface. Based on our sign language dictionary, we developed a method of synthesizing the sign language by intelligently joining video clips, which makes possible the translation of written texts or, in connection with a speech recognition system, of spoken words into sign language.

    Signaling for Signing for the Deaf and Hard of Hearing

    The invention defines mechanisms and fields that describe the sign language carried or embedded in a video stream. The invention relates to describing a video asset as having a signing language such as American Sign Language or British Sign Language. Generally, only audio or captions have a language, not video. Discoverable attributes will allow the user interface to announce the availability and configurability of the service to the viewer.

    Advanced Speech Communication System for Deaf People

    This paper describes the development of an Advanced Speech Communication System for Deaf People and its field evaluation in a real application domain: the renewal of the Driver's License. The system is composed of two modules. The first is a Spanish into Spanish Sign Language (LSE: Lengua de Signos Española) translation module made up of a speech recognizer, a natural language translator (for converting a word sequence into a sequence of signs), and a 3D avatar animation module (for playing back the signs). The second module is a spoken Spanish generator from sign-writing composed of a visual interface (for specifying a sequence of signs), a language translator (for generating the sequence of words in Spanish), and finally, a text-to-speech converter. For language translation, the system integrates three technologies: an example-based strategy, a rule-based translation method and a statistical translator. This paper also includes a detailed description of the evaluation carried out in the Local Traffic Office in the city of Toledo (Spain) involving real government employees and deaf people. This evaluation includes objective measurements from the system and subjective information from questionnaires.

    An on-line system adding subtitles and sign language to Spanish audio-visual content

    Deaf people cannot properly access the speech information stored in any kind of recording format (audio, video, etc.). We present a system that provides subtitling and Spanish Sign Language representation capabilities so that the Spanish Deaf population can access such speech content. The system is composed of a speech recognition module, a machine translation module from Spanish to Spanish Sign Language, and a Spanish Sign Language synthesis module. On the deaf person's side, a user-friendly interface with subtitle and avatar components allows him/her to access the speech information.
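    The three modules described above form a linear pipeline: recognized speech feeds both the subtitle track and the translation/synthesis path. A minimal sketch of that composition, with stand-in stage functions and a toy two-word glossary (all names and data here are hypothetical, not from the actual system):

    ```python
    def recognize_speech(audio):
        """Stand-in ASR stage: audio -> Spanish word sequence."""
        return audio["transcript"]

    def translate_to_lse(words):
        """Stand-in MT stage: Spanish words -> LSE gloss sequence."""
        glossary = {"hola": "HOLA", "gracias": "GRACIAS"}
        return [glossary.get(w, w.upper()) for w in words]

    def synthesize_signs(glosses):
        """Stand-in synthesis stage: glosses -> avatar playback script."""
        return " ".join(glosses)

    def subtitle_pipeline(audio):
        words = recognize_speech(audio).split()
        subtitles = " ".join(words)                      # subtitle track
        signs = synthesize_signs(translate_to_lse(words))  # avatar track
        return subtitles, signs

    print(subtitle_pipeline({"transcript": "hola gracias"}))
    # ('hola gracias', 'HOLA GRACIAS')
    ```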

    GyGSLA: A portable glove system for learning sign language alphabet

    Communication between people with normal hearing and those with a hearing or speech impairment is difficult. Learning a new alphabet is not always easy, especially when it is a sign language alphabet, which requires both hand skills and practice. This paper presents the GyGSLA system, a completely portable setup created to help inexperienced people learn a new sign language alphabet. To achieve this, a computer/mobile game interface and a hardware device, a wearable glove, were developed. When interacting with the computer or mobile device using the wearable glove, the user is asked to represent alphabet letters and digits by replicating the hand and finger positions shown on a screen. The glove then sends the hand and finger positions to the computer/mobile device over a wireless interface, which interprets the letter or digit being made by the user and gives it a corresponding score. The system was tested with three completely inexperienced sign language subjects, achieving a 76% average recognition ratio for the Portuguese sign language alphabet.
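    One simple way a host device can interpret glove readings like those above is nearest-neighbour matching against per-letter calibration templates. A sketch under assumed conventions (flex values in [0, 1] per digit, 0 = straight, 1 = fully bent; the template vectors are invented for illustration, not GyGSLA's actual calibration):

    ```python
    import math

    # Hypothetical calibration: mean flex-sensor readings per letter,
    # one value per digit (thumb, index, middle, ring, little).
    TEMPLATES = {
        "A": (0.1, 0.9, 0.9, 0.9, 0.9),   # thumb out, four fingers bent
        "B": (0.8, 0.1, 0.1, 0.1, 0.1),   # thumb folded, fingers straight
        "E": (0.9, 0.8, 0.8, 0.8, 0.8),   # all digits bent
    }

    def classify(reading):
        """Return the template letter closest to the reading (Euclidean)."""
        return min(TEMPLATES, key=lambda t: math.dist(reading, TEMPLATES[t]))

    print(classify((0.15, 0.85, 0.95, 0.90, 0.88)))  # nearest to "A"
    ```

    A real system would also threshold the distance to reject hand shapes far from every template, which is how a per-attempt score could be derived.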

    The ASL-CDI 2.0: an updated, normed adaptation of the MacArthur Bates Communicative Development Inventory for American Sign Language

    Vocabulary is a critical early marker of language development. The MacArthur Bates Communicative Development Inventory has been adapted to dozens of languages, and provides a bird's-eye view of children's early vocabularies which can be informative for both research and clinical purposes. We present an update to the American Sign Language Communicative Development Inventory (the ASL-CDI 2.0, https://www.aslcdi.org), a normed assessment of early ASL vocabulary that can be widely administered online by individuals with no formal training in sign language linguistics. The ASL-CDI 2.0 includes receptive and expressive vocabulary, and a Gestures and Phrases section; it also introduces an online interface that presents ASL signs as videos. We validated the ASL-CDI 2.0 with expressive and receptive in-person tasks administered to a subset of participants. The norming sample presented here consists of 120 deaf children (ages 9 to 73 months) with deaf parents. We present an analysis of the measurement properties of the ASL-CDI 2.0. Vocabulary increases with age, as expected. We see an early noun bias that shifts with age, and a lag between receptive and expressive vocabulary. We present these findings with indications for how the ASL-CDI 2.0 may be used in a range of clinical and research settings.

    Hast Mudra: Hand Sign Gesture Recognition Using LSTM

    Although sign language is the most natural way of communication for deaf and mute (D&M) people, they find it challenging to socialize. A language barrier is erected between regular people and D&M individuals because the structure of sign language is distinct from text. They therefore converse using vision-based communication. Gestures can be easily understood by others if there is a standard interface that transforms sign language into visible text. As a result, research has been done on a vision-based interface system that allows D&M persons to communicate without understanding one another's languages. In this project, data was first gathered to build a dataset, after which useful features were extracted from the images. After verification, the model was trained using the LSTM algorithm with TensorFlow and Keras technology, and gestures were classified by alphabet letter. Using our own dataset, the system achieved an accuracy of around 86.75% in an experimental test. The system uses the LSTM algorithm to process images and data.
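    An LSTM classifies a gesture from a *sequence* of frames, so the per-frame features extracted from the images must be sliced into fixed-length windows of shape (batch, timesteps, features) before training. A sketch of that preprocessing step in NumPy (the 21-landmark hand-keypoint layout is an assumption for illustration; the paper does not specify its feature format):

    ```python
    import numpy as np

    def make_windows(frames, seq_len):
        """Slice a stream of per-frame feature vectors into overlapping
        fixed-length sequences: (num_windows, seq_len, features)."""
        windows = [frames[i:i + seq_len]
                   for i in range(len(frames) - seq_len + 1)]
        return np.stack(windows)

    # 10 frames of 63 features (e.g. 21 hand landmarks x 3 coordinates)
    stream = np.random.rand(10, 63)
    batch = make_windows(stream, seq_len=5)
    print(batch.shape)  # (6, 5, 63)
    ```

    Each window in `batch` can then be fed to an LSTM layer whose final hidden state drives a softmax over the alphabet classes.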