12 research outputs found

    Joining hands: developing a sign language machine translation system with and for the deaf community

    This paper discusses the development of an automatic machine translation (MT) system for translating spoken language text into signed languages (SLs). The motivation for our work is to improve the accessibility of airport information announcements for D/deaf and hard of hearing people. This paper demonstrates the involvement of Deaf colleagues and members of the D/deaf community in Ireland in three areas of our research: the choice of a domain for automatic translation that has a practical use for the D/deaf community; the human translation of English text into Irish Sign Language (ISL), along with advice on ISL grammar and linguistics; and the importance of native ISL signers as manual evaluators of our translated output.

    The ATIS sign language corpus

    Systems that automatically process sign language rely on appropriate data. We therefore present the ATIS sign language corpus, which is based on the domain of air travel information. It is available in five languages: English, German, Irish Sign Language, German Sign Language, and South African Sign Language. The corpus can be used for different tasks, such as automatic statistical translation and automatic sign language recognition, and it allows the specific modelling of spatial references in signing space.

    Hand in hand: automatic sign language to English translation

    In this paper, we describe the first data-driven automatic sign-language-to-speech translation system. While both sign language (SL) recognition and translation techniques exist, both use an intermediate notation system that is not directly intelligible to untrained users. We combine an SL recognition framework with a state-of-the-art phrase-based machine translation (MT) system, using corpora of both American Sign Language and Irish Sign Language data. In a set of experiments we show the overall results and also illustrate the importance of including a vision-based knowledge source in the development of a complete SL translation system.
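
    The pipeline described above (recognizer output fed into a phrase-based translation step) can be illustrated with a minimal Python sketch. Everything below is an assumption made for the illustration: the gloss sequence, the toy phrase table and the greedy decoder stand in for, and are far simpler than, the paper's actual recognizer and MT system.

    # Minimal sketch of a sign-recognition-then-translation pipeline (illustrative only).

    # Hypothetical output of a sign language recognizer: a sequence of ISL glosses.
    recognized_glosses = ["FLIGHT", "DUBLIN", "TIME", "WHAT"]

    # Toy phrase table mapping gloss phrases to scored English phrases.
    phrase_table = {
        ("FLIGHT", "DUBLIN"): [("the flight to Dublin", 0.8)],
        ("TIME", "WHAT"): [("what time is", 0.7)],
        ("WHAT",): [("what", 0.9)],
        ("TIME",): [("time", 0.9)],
    }

    def greedy_phrase_translate(glosses):
        """Greedily cover the gloss sequence with the longest matching phrases."""
        output, i = [], 0
        while i < len(glosses):
            for span in range(len(glosses) - i, 0, -1):  # try the longest match first
                key = tuple(glosses[i:i + span])
                if key in phrase_table:
                    best = max(phrase_table[key], key=lambda p: p[1])
                    output.append(best[0])
                    i += span
                    break
            else:
                output.append(glosses[i].lower())  # pass unknown glosses through
                i += 1
        return " ".join(output)

    print(greedy_phrase_translate(recognized_glosses))
    # -> "the flight to Dublin what time is"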

    A Study on Techniques and Challenges in Sign Language Translation

    Sign Language Translation (SLT) plays a pivotal role in enabling effective communication for the Deaf and Hard of Hearing (DHH) community. This review delves into the state-of-the-art techniques and methodologies in SLT, focusing on its significance, challenges, and recent advancements. The review provides a comprehensive analysis of various SLT approaches, ranging from rule-based systems to deep learning models, highlighting their strengths and limitations. Datasets specifically tailored for SLT research are explored, shedding light on the diversity and complexity of Sign Languages across the globe. The review also addresses critical issues in SLT, such as the expressiveness of generated signs, facial expressions, and non-manual signals. Furthermore, it discusses the integration of SLT into assistive technologies and educational tools, emphasizing their transformative potential in enhancing accessibility and inclusivity. Finally, the review outlines future directions, including the incorporation of multimodal inputs and the imperative need for co-creation with the Deaf community, paving the way for more accurate, expressive, and culturally sensitive Sign Language Generation systems.

    Beyond lexical meaning: probabilistic models for sign language recognition

    Ph.D. (Doctor of Philosophy)

    Data mining and modelling for sign language

    Sign languages have received significantly less attention than spoken languages in research areas such as corpus analysis, machine translation, recognition, synthesis and social signal processing. This is mainly due to signers being in a clear minority and to a strong prior belief that sign languages are simply arbitrary gestures. To date, this manifests in the insufficiency of sign language resources available for computational modelling and analysis, with no agreed standards and relatively stagnant progress compared to spoken language interaction research. Fortunately, the machine learning community has developed methods, such as transfer learning, for dealing with sparse resources, while data mining techniques, such as clustering, can provide insights into the data. The work described here uses such transfer learning techniques to apply neural language models to signed utterances and to compare sign language phonemes, which allows similar signs to be clustered and leads to automated annotation of sign language resources. This thesis promotes the idea that sign language research in computing should rely less on hand-annotated data, thus opening up the prospect of using readily available online data (e.g. signed song videos) through the computational modelling and automated annotation techniques presented in this thesis.
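
    As a hedged illustration of the clustering idea above, the sketch below groups toy per-sign feature vectors with k-means so that similar signs fall into the same cluster, the kind of grouping that can seed automated annotation. The signs, feature values and cluster count are invented for this example and stand in for the phoneme-level comparisons described in the thesis.

    # Illustrative only: cluster toy sign feature vectors to group similar signs.
    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical per-sign features (e.g. handshape, location, movement codes).
    signs = ["MOTHER", "FATHER", "SIT", "CHAIR", "EAT", "FOOD"]
    features = np.array([
        [0.9, 0.1, 0.2],
        [0.8, 0.2, 0.1],
        [0.1, 0.9, 0.8],
        [0.2, 0.8, 0.9],
        [0.5, 0.5, 0.1],
        [0.4, 0.6, 0.2],
    ])

    # Group signs by articulation similarity; cluster IDs can then propose
    # candidate labels when annotating unlabelled sign language video.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
    for sign, cluster in zip(signs, kmeans.labels_):
        print(f"{sign}: cluster {cluster}")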

    Acoustical measurements on stages of nine U.S. concert halls
