
    Joining hands: developing a sign language machine translation system with and for the deaf community

    This paper discusses the development of an automatic machine translation (MT) system for translating spoken language text into signed languages (SLs). The motivation for our work is to improve the accessibility of airport information announcements for D/deaf and hard of hearing people. This paper demonstrates the involvement of Deaf colleagues and members of the D/deaf community in Ireland in three areas of our research: the choice of a domain for automatic translation that has a practical use for the D/deaf community; the human translation of English text into Irish Sign Language (ISL), as well as advice on ISL grammar and linguistics; and the importance of native ISL signers as manual evaluators of our translated output.

    Hand in hand: automatic sign language to English translation

    In this paper, we describe the first data-driven automatic sign-language-to-speech translation system. While both sign language (SL) recognition and translation techniques exist, both use an intermediate notation system that is not directly intelligible to untrained users. We combine an SL recognition framework with a state-of-the-art phrase-based machine translation (MT) system, using corpora of both American Sign Language and Irish Sign Language data. In a set of experiments we show the overall results and also illustrate the importance of including a vision-based knowledge source in the development of a complete SL translation system.
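    The core idea of coupling SL recognition with phrase-based MT can be sketched as below. This is a minimal illustration only: the gloss notation, the phrase table, and the greedy longest-match decoding are assumptions for the example, not the actual MaTrEx or RWTH components the abstract refers to.

```python
# Sketch of phrase-based translation over recognized sign glosses.
# Glosses and phrase table entries are illustrative, not from the paper.
PHRASE_TABLE = {
    ("IX-1p", "GO", "STORE"): "I am going to the store",
    ("IX-1p",): "I",
    ("GO",): "go",
    ("STORE",): "the store",
}

def translate(glosses):
    """Greedy longest-match decoding: cover the recognized gloss
    sequence with the longest phrases found in the table."""
    out, i = [], 0
    while i < len(glosses):
        # try the longest span starting at position i first
        for j in range(len(glosses), i, -1):
            span = tuple(glosses[i:j])
            if span in PHRASE_TABLE:
                out.append(PHRASE_TABLE[span])
                i = j
                break
        else:
            out.append(glosses[i].lower())  # pass through unknown gloss
            i += 1
    return " ".join(out)

print(translate(["IX-1p", "GO", "STORE"]))  # → I am going to the store
```

    A real system would replace the table lookup with a statistically learned phrase table and a log-linear decoder, but the covering step shown here is the same in spirit.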

    Combining data-driven MT systems for improved sign language translation

    In this paper, we investigate the feasibility of combining two data-driven machine translation (MT) systems for the translation of sign languages (SLs). We take the MT systems of two prominent data-driven research groups, the MaTrEx system developed at DCU and the Statistical Machine Translation (SMT) system developed at RWTH Aachen University, and apply their respective approaches to the task of translating Irish Sign Language and German Sign Language into English and German. In a set of experiments supported by automatic evaluation results, we show that there is definite value in the prospective merging of MaTrEx’s Example-Based MT chunks and distortion limit increase with RWTH’s constraint reordering.

    Deaf epistemologies as a critique and alternative to the practice of science: an anthropological perspective

    In the last decade, and responding to the criticism of orientalism, anthropology has engaged in a self-critical practice, working toward a postcolonial perspective on science and an epistemological stance of partial and situated knowledge (Pinxten, 2006; Pinxten & Note, 2005). In deaf studies, anthropological and sociological studies employing qualitative and ethnographic methods have introduced a paradigm shift. Concepts of deaf culture and deaf identity have been employed as political tools, contributing to the emancipation process of deaf people. However, recent anthropological studies in diverse local contexts indicate the cultural construction of these notions. From this viewpoint, deaf studies faces a challenge to reflect on the notions of culture, emancipation, and education from a nonexclusive, noncolonial perspective. Deaf studies research in a global context needs to deal with cultural and linguistic diversity in human beings and academia. This calls for epistemological reflection and new research methods.

    Wireless data gloves Malay sign language recognition system

    This paper describes the structure and algorithm of the whole Wireless Bluetooth Data Gloves Sign Language Recognition System, which is defined as a Human-Computer Interaction (HCI) system. This project is based on the need to develop an electronic device that can translate sign language into speech (sound), in order to make communication between the deaf and mute community and the general public possible. Hence, the main objective of this project is to develop a system that can convert sign language into speech so that deaf people are able to communicate efficiently with hearing people. This Human-Computer Interaction system is able to recognize 25 common signed words in Bahasa Isyarat Malaysia (BIM) using Hidden Markov Model (HMM) methods. Both hands are involved in performing the BIM signs, with all sensors connected wirelessly to the PC via a Bluetooth module. In the future, the system can be shrunk into a stand-alone system without any interaction with a PC.
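    The HMM-based recognition step described above can be sketched as follows: score an observation sequence from the gloves against one HMM per vocabulary word and pick the highest-likelihood model. The two-state models, quantized glove observations, and BIM words below are illustrative assumptions, not the paper's trained parameters.

```python
import math

# Toy discrete HMMs: one model per BIM word; all parameters are
# illustrative. Observations are quantized glove readings
# (e.g. 0 = open hand, 1 = closed fist).
MODELS = {
    "terima kasih": {  # "thank you" (hypothetical entry)
        "init":  [0.9, 0.1],
        "trans": [[0.7, 0.3], [0.2, 0.8]],
        "emit":  [[0.8, 0.2], [0.1, 0.9]],
    },
    "tolong": {        # "help" (hypothetical entry)
        "init":  [0.5, 0.5],
        "trans": [[0.5, 0.5], [0.5, 0.5]],
        "emit":  [[0.2, 0.8], [0.7, 0.3]],
    },
}

def forward_log_likelihood(model, obs):
    """Forward algorithm: log P(obs | model) for a 2-state discrete HMM."""
    alpha = [model["init"][s] * model["emit"][s][obs[0]] for s in range(2)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[p] * model["trans"][p][s] for p in range(2))
            * model["emit"][s][o]
            for s in range(2)
        ]
    return math.log(sum(alpha))

def recognize(obs):
    """Return the word whose HMM assigns the glove sequence the
    highest likelihood."""
    return max(MODELS, key=lambda w: forward_log_likelihood(MODELS[w], obs))

print(recognize([0, 0, 1, 1]))
```

    In the real system the models would be trained on recorded glove data and the observations would be vector-quantized sensor frames, but the scoring-and-argmax structure is the standard HMM recognition recipe.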

    “Don’t, never no”: Negotiating meaning in ESL among hearing/speaking-impaired netizens

    Negotiating meaning can be difficult for deaf-mute people in the hearing and speaking world. Social media offers a platform where the deaf and the mute can engage in meaningful conversations among themselves and with people with hearing and speaking abilities. This paper determined the paralinguistic signals that deaf-mute students employed in their Facebook posts. Using a descriptive-qualitative research design, the study analyzed the lexico-semantic features of their language and how both paralinguistic and linguistic aspects contribute to the negotiation of conceptual meaning. The results revealed that paralinguistic signals are found in emojis, punctuation mark repeats, onomatopoeic spelling, accent stylization, intensification, hashtags, and combinations of these. These signals function to give emphasis or intensify intonation. The emoji is the predominant paralinguistic signal used to compensate for the lack of words to express feelings. In addition, distinct lexico-semantic features observed in the data include the incorrect position of words, incorrect lexical choice, redundancy, and the insertion of prepositions or the lack thereof. These features do not carry a specific function in negotiating meaning, because understanding the semantic content of a message is possible either with or without comprehension of the syntax. Semantic comprehension is not expected to help in the acquisition of the syntactic system because it may be accomplished through the recognition of isolated lexical items and the interpretation of non-linguistic cues. Finally, paralinguistic signals and computer-mediated communication for the deaf-mute across generations and races can be considered as future directions of the study, and appropriate technological tools may be designed to correct errors found in the social media posts of the deaf-mute.

    Hast Mudra: Hand Sign Gesture Recognition Using LSTM

    Even though sign language is their most natural way of communicating, deaf and mute people find it challenging to socialize. A language barrier is erected between regular people and D&M individuals because the structure of sign language is distinct from text; they therefore converse using vision-based communication. The gestures can be easily understood by others if there is a standard interface that transforms sign language into visible text. As a result, R&D has been done on a vision-based interface system that will allow D&M persons to communicate without understanding one another's languages. In this project, we first gathered and acquired data and created a dataset, after which we extracted useful features from the images. After verification, we trained the data and model using the LSTM algorithm with TensorFlow and Keras, and classified the gestures by alphabet. Using our own dataset, this system achieved an accuracy of around 86.75% in an experimental test.
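    At the heart of the LSTM classifier described above is a recurrent cell that carries hand-motion state across frames. The single-step cell below, with scalar states and made-up weights, is a minimal sketch of those gate equations; a real model like the paper's would use learned weight matrices over whole keypoint vectors, stacked in Keras layers.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM cell step for scalar input and state, showing the four
    gates that let the network accumulate motion over a gesture
    sequence. W maps each gate to (input weight, recurrent weight,
    bias); the values used below are illustrative, not trained."""
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])    # input gate
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])    # forget gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])    # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])  # candidate
    c = f * c_prev + i * g   # new cell state
    h = o * math.tanh(c)     # new hidden state
    return h, c

W = {gate: (0.5, 0.5, 0.0) for gate in "ifog"}
h = c = 0.0
for x in [0.2, 0.8, 0.5]:   # toy sequence of hand-keypoint features
    h, c = lstm_step(x, h, c, W)
print(h)
```

    The final hidden state `h` is what a classification head (a softmax over the alphabet, in the system above) would consume.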

    The Legal Capacity of Deaf Persons in the Decisions of the Imperial Court of Justice between 1880 and 1900

    The inclusion of deaf persons in a judicial setting raised questions about their ability to bear witness, be convicted, conclude a marriage, and make a will, and, of course, about the ability of the court to communicate with them. In their decisions, the judges of the Imperial Court of Justice in Leipzig shed light on their interpretation of the capacity of deaf persons to participate in the legal realm. In the reasoning of their judgments, they drew comparisons with different categories of citizens to compensate for incomplete laws. They also took into account developments in the education of deaf persons regarding their communication skills and mental capacity. The decisions illustrate that legal and scientific knowledge were closely linked, to the effect that deaf persons were granted full legal capacity.

    American Sign Language Assistant

    We implemented a prototype computer vision system to help the deaf and mute communicate in a shopping setting. Our system uses live video feeds to recognize American Sign Language (ASL) gestures and notify shop clerks of deaf and mute patrons’ intents. It generates a video dataset in the Unity Game Engine of 3D humanoid models performing ASL signs in a shop setting. Our system uses OpenPose to detect and recognize the bone points of the human body from the live feed. The system then represents the motion sequences as high-dimensional skeleton joint point trajectories, followed by a time-warping technique to generate a temporal RGB image using the Seq2Im technique. This image is then fed to the image classification algorithms that classify the gesture performed for the shop clerk. We carried out experiments to analyze the performance of this methodology on the Leap Motion Controller dataset and the NTU RGB+D dataset using the SVM and LeNet-5 models. We also compared 3D vs. 2D bone point dataset performance and found 90% accuracy for the 2D skeleton dataset.