11,202 research outputs found

    Eye Movements of Deaf and Hard of Hearing Viewers of Automatic Captions

    To compare methods of displaying speech-recognition confidence of automatic captions, we analyzed eye-tracking and response data from deaf or hard of hearing participants viewing videos

    The listening talker: A review of human and algorithmic context-induced modifications of speech

    Speech output technology is finding widespread application, including in scenarios where intelligibility might be compromised - at least for some listeners - by adverse conditions. Unlike most current algorithms, talkers continually adapt their speech patterns as a response to the immediate context of spoken communication, where the type of interlocutor and the environment are the dominant situational factors influencing speech production. Observations of talker behaviour can motivate the design of more robust speech output algorithms. Starting with a listener-oriented categorisation of possible goals for speech modification, this review article summarises the extensive set of behavioural findings related to human speech modification, identifies which factors appear to be beneficial, and goes on to examine previous computational attempts to improve intelligibility in noise. The review concludes by tabulating 46 speech modifications, many of which have yet to be perceptually or algorithmically evaluated. Consequently, the review provides a roadmap for future work in improving the robustness of speech output

    The Electronic interpreter for the deaf

    None provided

    Artificial Intelligence for Sign Language Translation – A Design Science Research Study

    Although our digitalized society is able to foster social inclusion and integration, there are still numerous communities with unequal opportunities. This is also the case for deaf people. About 750,000 deaf people in the European Union alone, and over 4 million in the United States, face daily challenges in communication and participation, such as in leisure activities and, more importantly, in emergencies. To provide equal environments and allow people with hearing impairments to communicate in their native language, this paper presents an AI-based sign language translator. We adopted a transformer neural network capable of analyzing over 500 data points from a person’s gestures and face to translate sign language into text. We have designed a machine learning pipeline that enables the translator to evolve, build new datasets, and train sign language recognition models. As a proof of concept, we instantiated a sign language interpreter for an emergency call with over 200 phrases. The overall goal is to support people with hearing disabilities by enabling them to participate in economic, social, political, and cultural life
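    The recognition step this abstract describes can be sketched in miniature: per-frame gesture and face keypoints are passed through a self-attention layer, pooled over time, and mapped to phrase classes. This is an illustrative toy, not the paper's actual model; the tensor shapes, random weights, and single attention head are all assumptions, and a real system would be trained on labeled signing data.

```python
# Toy sketch of transformer-style sign recognition over keypoint frames.
# All shapes and weights are illustrative assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over the time axis."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (T, T) frame-to-frame weights
    return softmax(scores, axis=-1) @ V

def classify_sign(frames, params):
    """frames: (T, D) array of flattened hand/face keypoints per video frame."""
    H = self_attention(frames, params["Wq"], params["Wk"], params["Wv"])
    pooled = H.mean(axis=0)                   # mean-pool over time
    return softmax(pooled @ params["Wout"])   # probabilities over phrases

# 500 keypoint values per frame (per the abstract); 4 hypothetical phrases.
T, D, d, n_classes = 30, 500, 16, 4
params = {
    "Wq": rng.normal(0, 0.1, (D, d)),
    "Wk": rng.normal(0, 0.1, (D, d)),
    "Wv": rng.normal(0, 0.1, (D, d)),
    "Wout": rng.normal(0, 0.1, (d, n_classes)),
}
probs = classify_sign(rng.normal(size=(T, D)), params)
```

    In a trained system the pooled representation would feed a decoder over a phrase vocabulary (the paper's emergency-call prototype covers over 200 phrases); the untrained weights here only demonstrate the data flow.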

    Waiting to Be Heard: Fairness, Legal Rights, and Injustices the Deaf Community Faces in Our Modern, Technological World

    This note will examine the existing access to legal aid, employment, recourse, and education in various deaf cultures and societies. The goal is a comparative study of how DHH communities are accepted, valued, and prioritized in different countries, and how that translates into legal infrastructure in the form of governmentally mandated statutes, regulations, public accommodations, and legal education. This will consist of a brief history of the recognition, labeling, and acceptance of deaf citizens in ancient and modern cultures, the path to a society’s awareness and eventual recognition of deaf citizens, and how the various levels of awareness differ among regions and countries. The glimpse into varying cultures will also reveal the differences in legal systems, the effects those systems have on deaf culture, and how accessible those legal remedies are for deaf citizens. This note will focus on analyzing existing judicial infrastructures, potential barriers to justice, and the basic legal rights of a deaf person in our modern, technological, and digital world

    Improving Deaf Accessibility to Web-based Multimedia

    Internet technologies have expanded rapidly over the past two decades, making information of all sorts more readily available. Not only are they more cost-effective than traditional media, but these new media have also contributed to quality and convenience. However, the proliferation of video and audio media on the Internet creates an inadvertent disadvantage for deaf Internet users. Despite technological and legislative milestones in recent decades in making television and movies more accessible, there has been little progress with online access. A major obstacle to providing captions for Internet media is the high cost of captioning and transcribing services. To respond to this problem, a possible solution lies in automatic speech recognition (ASR). This research investigates possible solutions to Web accessibility through utilization of ASR technologies. It surveys previous studies that employ visualization and ASR to determine their effectiveness in the context of deaf accessibility. Since there was no existing literature indicating the area of greatest need, a preliminary study identified an application that would serve as a case study for applying and evaluating speech visualization technology. A total of 20 deaf and hard-of-hearing participants were interviewed via video phone, and their responses in American Sign Language were transcribed to English. The most common theme was concern over a lack of accessibility for online news. The second study evaluated different presentation strategies for making online news videos more accessible. A total of 95 participants viewed four different caption styles. Each style was presented on different news stories with control for content level and delivery. In addition to pre-test and post-test questionnaires, both performance and preference measures were conducted. Results from the study offer emphatic support for the hypothesis that captioning online videos makes the Internet more accessible to deaf users. Furthermore, the findings lend strong evidence to the idea of utilizing automatic captions to make videos comprehensible to deaf viewers at a fraction of the cost. The color-coded captions that used highlighting to reflect the accuracy ratings were found to be neither beneficial nor detrimental; however, when asked directly about the benefit of color-coding, there was support for the concept. Further development and research will be necessary to find the appropriate solution
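    The color-coded caption idea evaluated here can be illustrated with a small sketch: each word in an ASR hypothesis carries a confidence score, and words below a threshold are rendered with a highlight so viewers can discount likely misrecognitions. The thresholds, colours, and HTML rendering below are hypothetical choices for illustration, not the study's actual design.

```python
# Hypothetical confidence-based caption highlighting; thresholds and
# colours are illustrative assumptions, not those used in the study.
LOW, MID = 0.5, 0.8   # assumed confidence cut-offs

def colour_for(conf):
    """Map an ASR word confidence to a highlight colour, or None."""
    if conf < LOW:
        return "#f4cccc"   # red-ish: likely misrecognised
    if conf < MID:
        return "#fff2cc"   # yellow-ish: uncertain
    return None            # confident: render without highlight

def render_caption(words):
    """words: list of (word, confidence) pairs from an ASR hypothesis."""
    out = []
    for word, conf in words:
        c = colour_for(conf)
        out.append(f'<span style="background:{c}">{word}</span>' if c else word)
    return " ".join(out)

# "wether" stands in for a low-confidence misrecognition of "weather".
caption = render_caption([("the", 0.95), ("wether", 0.42), ("today", 0.91)])
# → 'the <span style="background:#f4cccc">wether</span> today'
```

    The same mapping could drive any presentation channel (background colour, opacity, underline); the study's finding that colour-coding was neither beneficial nor detrimental suggests the choice of channel, not just the idea, deserves further testing.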

    User Experiences When Testing a Messaging App for Communication Between Individuals who are Hearing and Deaf or Hard of Hearing

    This study investigated user experiences of participants testing a prototype messaging app with automatic speech recognition (ASR). Twelve pairs of participants, in which one individual was deaf or hard of hearing (DHH) and the other was hearing, used the app, with the hearing individual using speech and ASR and the DHH individual typing. Participants completed a standardized decision-making task to test the app. Regardless of the participants' hearing status or the type of device used, participants were generally satisfied with the app. These findings indicate that ASR has the potential to facilitate communication between DHH and hearing individuals in small groups and that the technology merits further investigation