
    Joining hands: developing a sign language machine translation system with and for the deaf community

    This paper discusses the development of an automatic machine translation (MT) system for translating spoken language text into signed languages (SLs). The motivation for our work is the improvement of accessibility to airport information announcements for D/deaf and hard of hearing people. This paper demonstrates the involvement of Deaf colleagues and members of the D/deaf community in Ireland in three areas of our research: the choice of a domain for automatic translation that has a practical use for the D/deaf community; the human translation of English text into Irish Sign Language (ISL) as well as advice on ISL grammar and linguistics; and the importance of native ISL signers as manual evaluators of our translated output.

    Hand in hand: automatic sign language to English translation

    In this paper, we describe the first data-driven automatic sign-language-to-speech translation system. While both sign language (SL) recognition and translation techniques exist, both use an intermediate notation system not directly intelligible to untrained users. We combine an SL recognition framework with a state-of-the-art phrase-based machine translation (MT) system, using corpora of both American Sign Language and Irish Sign Language data. In a set of experiments we show the overall results and also illustrate the importance of including a vision-based knowledge source in the development of a complete SL translation system.
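    The abstract describes a two-stage pipeline: a recognition front end produces sign glosses, which a phrase-based translator then renders into spoken-language text. The sketch below is only an illustration of that shape, not the authors' MaTrEx-based system; the gloss vocabulary, phrase table, and greedy lookup are all invented stand-ins for a real recognizer and decoder.

```python
# Toy sketch of a recognition -> phrase-based translation pipeline.
# All glosses and phrase-table entries are hypothetical.
from typing import Dict, List, Tuple

def recognize_signs(video_frames: List[str]) -> List[str]:
    # A real recognizer would classify pose/appearance features per frame;
    # here we simply return a fixed gloss sequence for demonstration.
    return ["FLIGHT", "EI123", "GATE", "CHANGE"]

# Hypothetical phrase table mapping gloss n-grams to English phrases.
PHRASE_TABLE: Dict[Tuple[str, ...], str] = {
    ("FLIGHT",): "flight",
    ("EI123",): "EI123",
    ("GATE", "CHANGE"): "has changed gate",
}

def translate_glosses(glosses: List[str]) -> str:
    """Greedy longest-match phrase lookup, a stand-in for a real decoder."""
    out, i = [], 0
    while i < len(glosses):
        for span in range(len(glosses) - i, 0, -1):   # longest match first
            key = tuple(glosses[i:i + span])
            if key in PHRASE_TABLE:
                out.append(PHRASE_TABLE[key])
                i += span
                break
        else:                                         # unknown gloss: copy through
            out.append(glosses[i].lower())
            i += 1
    return " ".join(out)

if __name__ == "__main__":
    glosses = recognize_signs(video_frames=[])
    print(translate_glosses(glosses))   # -> "flight EI123 has changed gate"
```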

    Combining data-driven MT systems for improved sign language translation

    In this paper, we investigate the feasibility of combining two data-driven machine translation (MT) systems for the translation of sign languages (SLs). We take the MT systems of two prominent data-driven research groups, the MaTrEx system developed at DCU and the Statistical Machine Translation (SMT) system developed at RWTH Aachen University, and apply their respective approaches to the task of translating Irish Sign Language and German Sign Language into English and German. In a set of experiments supported by automatic evaluation results, we show that there is a definite value to the prospective merging of MaTrEx’s Example-Based MT chunks and distortion limit increase with RWTH’s constraint reordering.

    Protecting Deaf Suspects’ Right to Understand Criminal Proceedings


    A Machine Learning Based Full Duplex System Supporting Multiple Sign Languages for the Deaf and Mute

    This manuscript presents a full duplex communication system for the Deaf and Mute (D-M) based on Machine Learning (ML). These individuals, who generally communicate through sign language, are an integral part of our society, and their contribution is vital. They face communication difficulties mainly because others, who generally do not know sign language, are unable to communicate with them. The work presents a solution to this problem through a system enabling the non-deaf and mute (ND-M) to communicate with D-M individuals without the need to learn sign language. The system is low-cost, reliable, easy to use, and based on a commercial-off-the-shelf (COTS) Leap Motion Device (LMD). The hand gesture data of D-M individuals is acquired using the LMD and processed using a Convolutional Neural Network (CNN) algorithm. A supervised ML algorithm completes the processing and converts the hand gesture data into speech. A new dataset for the ML-based algorithm is created and presented in this manuscript. This dataset includes three sign language datasets, i.e., American Sign Language (ASL), Pakistani Sign Language (PSL), and Spanish Sign Language (SSL). The proposed system automatically detects the sign language and converts it into an audio message for the ND-M. Similarities between the three sign languages are also explored, and further research can be carried out in order to help create more datasets, which can be a combination of multiple sign languages. The ND-M can communicate by recording their speech, which is then converted into text and hand gesture images. The system can be upgraded in the future to support more sign language datasets. The system also provides a training mode that can help D-M individuals improve their hand gestures and also understand how accurately the system is detecting these gestures. The proposed system has been validated through a series of experiments resulting in hand gesture detection accuracy exceeding 95%.
    Funding for open access charge: Universidad de Málaga.
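    The abstract outlines a pipeline in which captured hand-gesture data is classified by a CNN before being converted to speech. The sketch below shows only the classification stage as a small PyTorch model; the layer sizes, class count, and 64x64 grayscale input format are assumptions, and the Leap Motion capture and text-to-speech stages are omitted. It is not the published system.

```python
# Illustrative CNN for classifying hand-gesture frames into sign classes.
# Architecture and input shape are assumptions, not the paper's model.
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    def __init__(self, num_classes: int = 26):       # e.g. one class per letter sign
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)   # for 64x64 inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                  # (B, 32, 16, 16)
        return self.classifier(x.flatten(1))  # (B, num_classes)

if __name__ == "__main__":
    model = GestureCNN()
    frames = torch.randn(4, 1, 64, 64)        # batch of 4 grayscale gesture frames
    logits = model(frames)
    print(logits.argmax(dim=1))               # predicted sign class per frame
```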

    Hybrid paradigm for Spanish Sign Language synthesis

    The final publication is available at Springer via http://dx.doi.org/10.1007/s10209-011-0245-9

    This work presents a hybrid approach to sign language synthesis. The approach allows hand-tuning of the phonetic description of the signs, focusing on the timing of each sign. It therefore retains the capacity to perform morpho-phonological operations, as notation-based approaches do, while improving synthetic signing performance, as hand-tuned animation approaches do. The proposed approach simplifies the input message description using a new high-level notation and stores sign phonetic descriptions in a relational database. This relational database allows for more flexible sign phonetic descriptions and for describing sign timing and the synchronization between sign phonemes. The new notation, named HLSML, is a gloss-based notation focused on message description. HLSML introduces several tags that allow modification of the signs in the message, defining dialect and mood variations (both stored in the relational database) as well as message timing, including transition durations and pauses. A new avatar design is also proposed that simplifies the development of the synthesizer and avoids interference with the independence of the sign language phonemes during animation. The results showed an increase in the sign recognition rate compared with other approaches. This improvement stems from the active role that sign language experts took in describing the signs, made possible by the flexibility of the sign storage approach. The approach will simplify the description of synthesizable signed messages, thus facilitating the creation of multimedia signed contents.
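    The abstract's core design is a relational store of per-sign phonetic and timing data, queried to turn a gloss-based message into a timed animation plan. The sketch below illustrates that idea only in spirit; the table layout, glosses, handshape labels, and durations are invented, and it does not reproduce the actual HLSML notation or the paper's synthesizer.

```python
# Rough sketch: per-sign timing data in a relational table, assembled into a
# simple animation timeline with transition pauses. All data is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sign (
    gloss TEXT PRIMARY KEY,
    handshape TEXT,
    duration_ms INTEGER)""")
conn.executemany(
    "INSERT INTO sign VALUES (?, ?, ?)",
    [("HELLO", "B-flat", 600), ("HOUSE", "B-curved", 700), ("BIG", "5-spread", 500)],
)

def plan_message(glosses, transition_ms=150):
    """Look up each gloss and build a timed plan for the avatar to animate."""
    timeline, t = [], 0
    for gloss in glosses:
        row = conn.execute(
            "SELECT handshape, duration_ms FROM sign WHERE gloss = ?", (gloss,)
        ).fetchone()
        if row is None:
            raise KeyError(f"no phonetic description stored for {gloss!r}")
        handshape, duration = row
        timeline.append({"gloss": gloss, "handshape": handshape,
                         "start_ms": t, "end_ms": t + duration})
        t += duration + transition_ms          # pause between signs
    return timeline

for item in plan_message(["HELLO", "BIG", "HOUSE"]):
    print(item)
```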

    Confronting Silence: The Constitution, Deaf Criminal Defendants, and the Right to Interpretation During Trial

    For most deaf people, interactions with the hearing community in the absence of interpretation or technological assistance consist of communications that are, at most, only partly comprehensible. Criminal proceedings, with the defendant’s liberty interest directly at stake, are occasions in which the need for deaf people to have a full understanding of what is said and done around them is most urgent. Ironically, the legal “right to interpretation” has not been clearly defined in either statutory or case law. Although the federal and state constitutions do not provide a separate or lesser set of rights for deaf defendants, their situation remains unique. The complete reliance on spoken and written English in the criminal justice process systematically excludes full participation of almost all deaf people. This factor, compounded by the general ignorance about deafness among hearing people, places deaf defendants at a serious disadvantage. The federal government and most state legislatures recognize the injustice that results from a defendant’s inability to understand the proceedings that may result in punishment. These bodies have passed legislation allowing or requiring interpretation for defendants who cannot hear or understand English. This Comment argues, however, that the right to interpretation at criminal proceedings is already embodied in protections afforded all defendants through the Sixth and Fourteenth Amendments of the United States Constitution. The rights to effective assistance of counsel, to confront witnesses against the defense, to be present at trial and assist in the defense, and to understand the nature and cause of the charges, impose a duty on the government to provide a defendant the means to understand the proceedings. Although many courts have referred to the right to interpretation as having a basis in the Constitution, they nonetheless fail to treat it as such. By expecting defendants to secure trial rights for themselves and by granting the trial court judges broad discretion in ensuring these rights, appellate courts have allowed deaf defendants’ rights to fall below the constitutionally guaranteed minimum. The increasing amount of legislation addressing the need for interpretation has led many modern courts to focus on the statutory, rather than the constitutional, requirements of the right to interpretation. This approach usually results in less protection for deaf defendants. These courts analyze the need for interpretation as a “special right” for deaf people rather than viewing the statutes as legislated procedures to ensure and protect, but not supplant, the constitutional protection. The distinction between statutory and constitutional rights is significant. For example, habeas corpus relief for state court prisoners, requirements for waiver of a constitutional right, and the standard of review in appellate courts all depend on the characterization of the right as constitutional or statutory. The need of deaf defendants for interpretation provides an unfortunate example of how the failure to recognize the constitutional basis of the right to interpretation has resulted in disparate treatment of defendants in the courts. A recent Maine Supreme Judicial Court case, State v. Green, reveals many of the problems deaf defendants face in trial and appellate courts.

    A late 19th-Century British perspective on modern foreign language learning, teaching, and reform: the legacy of Prendergast’s “Mastery System”

    The late 19th century saw a great rise in private foreign language learning and increasing provision of modern foreign language teaching in schools. Evidence is presented to document the uptake of innovations in Thomas Prendergast’s (1807–1886) “Mastery System” by both individual language learners and educationalists. Although it has previously been suggested that Prendergast’s method failed to have much impact, this study clearly demonstrates the major influence he had on approaches to language learning and teaching in Britain and around the world, both with his contemporaries and long after his death. This detailed case study illuminates the landscape of modern language pedagogy in Victorian Britain.

    Deaf students and spoken languages

    This monograph aims to enable the teacher of spoken languages to understand the world of the deaf student and the basic techniques for entering their universe and thus fulfilling their mission. To this end, the anatomy and physiology of the ear are examined, following the book by Peter Alberti (1995), in order to understand what deafness is. The role of the ear in the formation of thought is also considered, and therefore how deafness influences the constitution of a parallel culture within our society. The bases for this study are the earliest definitions of thought, which were not found treated more deeply in authors later than John Locke (1690). The means of communication of deaf people are also analysed, with their own language, Sign Language, and its particularities. This analysis draws both on the foundations of linguistics according to Ferdinand de Saussure (1916) and on the earliest studies of this language, ranging from the Abbé de l'Épée (1776) to William Stokoe (1960) with his linguistic study of the usual method of communication of non-hearing people. To address the pedagogical applications, which concern what to teach and with what means, the monograph considers the psychology of the deaf person, their affinity with or rejection of the oral world, and the possible solutions currently available for the teaching of the deaf.