
    Automatic Speech Recognition Services: Deaf and Hard-of-Hearing Usability

    Speech is becoming a common, if not standard, interface to technology: voice is increasingly used to control programs, appliances, and personal devices in homes, cars, workplaces, and public spaces through smartphones and home assistant devices running Amazon's Alexa, Google's Assistant, Apple's Siri, and other proliferating technologies. However, most speech interfaces are not accessible to Deaf and Hard-of-Hearing (DHH) people. In this paper, the performance of current Automatic Speech Recognition (ASR) services is evaluated on the voices of DHH speakers. ASR has improved over the years and can reach Word Error Rates (WER) as low as 5-6% [1][2][3], with the help of cloud computing and machine learning algorithms that accept custom vocabulary models. This paper uses a custom vocabulary model and evaluates how significantly it improves recognition of DHH speech.
    Comment: 6 pages, 4 figures
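    The paper's headline metric, Word Error Rate (WER), is the word-level Levenshtein (edit) distance between a reference transcript and the ASR hypothesis, normalized by the reference length. Below is a minimal Python sketch of that computation; the example sentences are illustrative, not from the paper.

        def wer(reference: str, hypothesis: str) -> float:
            """Word Error Rate: word-level edit distance / reference length."""
            ref, hyp = reference.split(), hypothesis.split()
            # d[i][j] = edit distance between the first i reference words
            # and the first j hypothesis words.
            d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                d[i][0] = i
            for j in range(len(hyp) + 1):
                d[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    sub = 0 if ref[i - 1] == hyp[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,        # deletion
                                  d[i][j - 1] + 1,        # insertion
                                  d[i - 1][j - 1] + sub)  # substitution or match
            return d[len(ref)][len(hyp)] / len(ref)

        print(wer("turn on the kitchen lights", "turn of the kitchen light"))  # 0.4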

    Developing a Prototype to Translate Pakistan Sign Language into Text and Speech While Using Convolutional Neural Networking

    The purpose of this study is to review the work done on sign language in Pakistan and worldwide, and to present the framework of an already developed prototype that translates Pakistani sign language into speech and text using a convolutional neural network (CNN), helping unimpaired teachers bridge the communication gap with deaf learners. Because sign language is rarely taught to them, unimpaired teachers find it difficult to communicate with impaired learners; this translation tool can help fill that gap. Prior research produced a prototype that translates English text into sign language, and highlighted the need for a tool that translates signs into English text. The current study provides an architectural framework of the Pakistani-sign-language-to-English-text translation tool, showing how different technologies (deep learning, convolutional neural networks, Python, TensorFlow, NumPy, InceptionV3 with transfer learning, and eSpeak text-to-speech) contribute to the development of the prototype.
    Keywords: Pakistan sign language (PSL), sign language (SL), translation, deaf, unimpaired, convolutional neural networking (CNN).
    DOI: 10.7176/JEP/10-15-18
    Publication date: May 31st 201
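    As a rough illustration of the transfer-learning component the abstract names, the sketch below loads ImageNet-pretrained InceptionV3 in TensorFlow/Keras, freezes its convolutional base, and adds a new classification head for PSL signs. The class count and head size are assumed placeholders, not details from the paper.

        import tensorflow as tf
        from tensorflow.keras import layers, models

        NUM_SIGNS = 36  # assumed number of PSL sign classes; adjust to the dataset

        # InceptionV3 pretrained on ImageNet, without its original classifier.
        base = tf.keras.applications.InceptionV3(
            weights="imagenet", include_top=False, input_shape=(299, 299, 3))
        base.trainable = False  # freeze the pretrained convolutional features

        model = models.Sequential([
            base,
            layers.GlobalAveragePooling2D(),
            layers.Dense(256, activation="relu"),
            layers.Dense(NUM_SIGNS, activation="softmax"),  # one unit per sign
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

    A predicted label could then be voiced with the eSpeak command-line tool, e.g. subprocess.run(["espeak", label]).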

    Enhancing socio-emotional communication and quality of life in young cochlear implant recipients: Perspectives from parameter-specific morphing and caricaturing

    The use of digitally modified stimuli with enhanced diagnostic information to improve verbal communication in children with sensory or central handicaps was pioneered by Tallal and colleagues in 1996, who targeted speech comprehension in language-learning impaired children. Today, researchers are aware that successful communication cannot be reduced to linguistic information: it depends strongly on the quality of communication, including non-verbal socio-emotional communication. In children with cochlear implants (CIs), quality of life (QoL) is affected, but this may be related to the ability to recognize emotions in a voice rather than to speech comprehension alone. In this manuscript, we describe a family of new methods, termed parameter-specific facial and vocal morphing. We propose that these offer novel perspectives for assessing the sensory determinants of human communication, and also for enhancing socio-emotional communication and QoL in the context of sensory handicaps, via training with digitally enhanced, caricatured stimuli. Based on promising initial results with various target groups, including people with age-related macular degeneration, people with low face-recognition abilities, older people, and adult CI users, we discuss opportunities and challenges for perceptual training interventions for young CI users based on enhanced auditory stimuli, as well as perspectives for CI sound-processing technology.

    Assessing Virtual Assistant Capabilities with Italian Dysarthric Speech

    The usage of smartphone-based virtual assistants (e.g., Siri or Google Assistant) is growing. Their spread has been made possible largely by advances in natural language processing, and they generally have a positive impact on device accessibility, e.g., for people with disabilities. However, people with dysarthria or other speech impairments may be unable to use these virtual assistants proficiently. This paper investigates to what extent people with ALS-induced dysarthria can be understood by, and receive consistent answers from, three widely used smartphone-based assistants, namely Siri, Google Assistant, and Cortana. In particular, we focus on the recognition of Italian dysarthric speech, studying the behavior of the virtual assistants with this specific population, for which no relevant studies are available. We collected and recorded suitable speech samples from people with dysarthria in a dedicated center of the Molinette hospital in Turin, Italy. Starting from those recordings, the differences between the assistants, in terms of speech recognition and consistency of answers, are investigated and discussed. Results highlight different performance among the virtual assistants. For speech recognition, Google Assistant is the most promising, with a word error rate of around 25% per sentence. For consistency of answers, Siri and Google Assistant provide coherent answers around 60% of the time.
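    A minimal sketch of the per-sentence scoring the study reports, using the open-source jiwer package; the Italian prompts and the assistant transcripts are invented placeholders, not data from the study.

        import jiwer

        prompts = ["accendi la luce", "che ore sono"]      # reference sentences
        transcripts = ["accendi la voce", "che ore sono"]  # assistant output

        # WER for each prompt/transcript pair, then the mean across sentences.
        per_sentence = [jiwer.wer(ref, hyp) for ref, hyp in zip(prompts, transcripts)]
        print(sum(per_sentence) / len(per_sentence))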

    Using Information Communications Technologies to Implement Universal Design for Learning

    The purpose of this paper is to assist Ministries of Education, their donors and partners, Disabled Persons Organizations (DPOs), and the practitioner community funded by and working with USAID to select, pilot, and (as appropriate) scale up ICT4E solutions that facilitate the implementation of Universal Design for Learning (UDL), with a particular emphasis on supporting students with disabilities to acquire literacy and numeracy skills. The paper focuses primarily on how technology can support foundational skills acquisition for students with disabilities, while also explaining when, why, and how technologies that assist students with disabilities can, in some applications, have positive impacts on all students' basic skills development. In 2018, USAID released the Toolkit for Universal Design for Learning to Help All Children Read, section 3.1 of which provides basic information on the role of technologies in supporting UDL principles and classroom learning. This paper expands upon that work and offers more extensive advice on using ICT4E to advance equitable access to high-quality learning. Like the UDL toolkit, this guide is aimed mainly at Ministries of Education and development agencies working in education, but it can also be helpful for DPOs and non-governmental organizations (NGOs) wishing to pilot or spearhead ICT initiatives. Content for this paper was informed by expert interviews and reviews of field reports during 2018, including programs associated with the United Nations, the Zero Project, the World Innovation Summit, the UNESCO Mobile Learning Awards, and USAID's All Children Reading: A Grand Challenge for Development. Relevant case studies of selected education programs that integrate technology to improve learning outcomes for students with disabilities are summarized in this document.

    The Journal of Early Hearing Detection and Intervention: Volume 4, Issue 3, pages 1-118


    On the Impact of Dysarthric Speech on Contemporary ASR Cloud Platforms

    The spread of voice-driven devices has a positive impact for people with disabilities in smart environments, since such devices allow them to perform daily activities that were difficult or impossible before; as a result, their quality of life and autonomy increase. However, the speech recognition technology employed in these devices performs poorly for people with communication disorders such as dysarthria. People with dysarthria may be unable to control their smart environments, at least not with the needed proficiency, and this problem may negatively affect the perceived reliability of the entire environment. Using the TORGO database of speech samples pronounced by people with dysarthria, this paper compares the accuracy of dysarthric speech recognition achieved by three speech recognition cloud platforms, namely IBM Watson Speech-to-Text, Google Cloud Speech, and Microsoft Azure Bing Speech. These services are used in many virtual assistants deployed in smart environments, such as Google Home. The goal is to investigate whether such cloud platforms are usable to recognize dysarthric speech, and to understand which of them is the most suitable for people with dysarthria. Results suggest that the three platforms have comparable performance in recognizing dysarthric speech, and that recognition accuracy is related to the speech intelligibility of the person. Overall, the platforms are limited when dysarthric speech intelligibility is low (word error rates of 80-90%), while they improve to word error rates of 15-25% for speakers without abnormality in speech intelligibility.
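    For context, a transcription request to one of the evaluated platforms, Google Cloud Speech, looks roughly like the sketch below using the official Python client; the file name, sample rate, and language code are assumptions, not the paper's actual settings.

        from google.cloud import speech

        def transcribe(wav_path: str) -> str:
            """Send a local WAV file to Google Cloud Speech-to-Text, return the transcript."""
            client = speech.SpeechClient()
            with open(wav_path, "rb") as f:
                audio = speech.RecognitionAudio(content=f.read())
            config = speech.RecognitionConfig(
                encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
                sample_rate_hertz=16000,
                language_code="en-US",
            )
            response = client.recognize(config=config, audio=audio)
            # Join the top alternative of each recognized segment.
            return " ".join(r.alternatives[0].transcript for r in response.results)

        hypothesis = transcribe("torgo_sample.wav")  # hypothetical file name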