6,385 research outputs found

    Assistive technologies for severe and profound hearing loss: beyond hearing aids and implants

    Assistive technologies offer capabilities that were previously inaccessible to individuals with severe and profound hearing loss who have no or limited access to hearing aids and implants. This literature review explores existing assistive technologies and identifies what still needs to be done. It finds a lack of focus on the overall objectives of assistive technologies. Several other issues are identified: only a very small number of assistive technologies developed within a research context have led to commercial devices, there is a predisposition to use the latest expensive technologies, and there is a tendency to avoid designing products universally. Finally, further development of plug-ins that translate the text content of a website into various sign languages is needed to make information on the internet more accessible.

    DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation

    There is an undeniable communication barrier between deaf people and people with normal hearing ability. Although innovations in sign language translation technology aim to tear down this communication barrier, the majority of existing sign language translation systems are either intrusive or constrained by resolution or ambient lighting conditions. Moreover, these existing systems can only perform single-sign ASL translation rather than sentence-level translation, making them much less useful in daily-life communication scenarios. In this work, we fill this critical gap by presenting DeepASL, a transformative deep learning-based sign language translation technology that enables ubiquitous and non-intrusive American Sign Language (ASL) translation at both word and sentence levels. DeepASL uses infrared light as its sensing mechanism to non-intrusively capture the ASL signs. It incorporates a novel hierarchical bidirectional deep recurrent neural network (HB-RNN) and a probabilistic framework based on Connectionist Temporal Classification (CTC) for word-level and sentence-level ASL translation, respectively. To evaluate its performance, we have collected 7,306 samples from 11 participants, covering 56 commonly used ASL words and 100 ASL sentences. DeepASL achieves an average 94.5% word-level translation accuracy and an average 8.2% word error rate when translating unseen ASL sentences. Given its promising performance, we believe DeepASL represents a significant step towards breaking the communication barrier between deaf people and the hearing majority, and thus has the potential to fundamentally change deaf people's lives.
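    To make the sentence-level pipeline concrete, here is a minimal sketch of a bidirectional recurrent network trained with CTC loss, the general technique the abstract names. It is an illustration only, not the authors' HB-RNN: the feature dimension, vocabulary size, and all hyperparameters are assumptions.

        import torch
        import torch.nn as nn

        class BiRNNCTC(nn.Module):
            """Bidirectional LSTM emitting per-frame word logits for CTC."""
            def __init__(self, feat_dim=60, hidden=128, vocab=56):
                super().__init__()
                self.rnn = nn.LSTM(feat_dim, hidden, num_layers=2,
                                   bidirectional=True, batch_first=True)
                self.fc = nn.Linear(2 * hidden, vocab + 1)  # +1 for CTC blank

            def forward(self, x):            # x: (batch, time, feat_dim)
                out, _ = self.rnn(x)
                return self.fc(out)          # (batch, time, vocab + 1)

        model = BiRNNCTC()
        ctc = nn.CTCLoss(blank=0)
        x = torch.randn(4, 100, 60)          # dummy skeletal-feature frames
        log_probs = model(x).log_softmax(-1).permute(1, 0, 2)  # (T, N, C)
        targets = torch.randint(1, 57, (4, 10))  # dummy word-label sequences
        loss = ctc(log_probs, targets,
                   input_lengths=torch.full((4,), 100, dtype=torch.long),
                   target_lengths=torch.full((4,), 10, dtype=torch.long))
        loss.backward()

    At inference time, a CTC decoder (greedy or beam search) collapses the per-frame outputs into a word sequence.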

    A Novel Machine Learning Based Two-Way Communication System for Deaf and Mute

    Muhammad Imran Saleem, Atif Siddiqui, Shaheena Noor, Miguel-Angel Luque-Nieto and Pablo Otero. Appl. Sci. 2023, 13(1), 453; https://doi.org/10.3390/app13010453. Received: 12 November 2022 / Revised: 22 December 2022 / Accepted: 26 December 2022 / Published: 29 December 2022.
    Deaf and mute people are an integral part of society, and it is particularly important to provide them with a platform through which they can communicate without the need for any training or learning. These people rely on sign language, but effective communication requires that others understand sign language as well. Learning sign language is a challenge for those with no impairment. Another challenge is building a system that supports the hand gestures of different languages. In this manuscript, a system is presented that provides communication between deaf and mute (DnM) and non-deaf and mute (NDnM) people. The hand gestures of DnM people are acquired and processed using deep learning, and multiple-language support is achieved using supervised machine learning. NDnM people are provided with an audio interface in which hand gestures are converted into speech and played through the sound card of the computer. Speech from NDnM people is acquired using a microphone and converted into text. The system is easy to use and low cost. (...) This research has been partially funded by Universidad de Málaga, Málaga, Spain.
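    The two directions of the interface can be sketched with off-the-shelf components. The snippet below is a hedged illustration assuming the SpeechRecognition and pyttsx3 Python packages; it is not the authors' implementation, and the classifier that produces the gesture label is out of scope here.

        import speech_recognition as sr
        import pyttsx3

        def speech_to_text():
            """NDnM -> DnM direction: capture speech and return it as text."""
            recognizer = sr.Recognizer()
            with sr.Microphone() as source:
                audio = recognizer.listen(source)
            return recognizer.recognize_google(audio)  # Google Web Speech API

        def gesture_label_to_speech(label):
            """DnM -> NDnM direction: voice the word predicted from a gesture."""
            engine = pyttsx3.init()
            engine.say(label)
            engine.runAndWait()

        # Example: a hypothetical gesture classifier predicted "hello".
        gesture_label_to_speech("hello")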

    Interactive Emirate Sign Language e-Dictionary Based on Deep Learning Recognition Models

    According to the Ministry of Community Development database in the United Arab Emirates (UAE), about 3,065 people with disabilities are hearing disabled (Emirates News Agency - Ministry of Community Development). Hearing-impaired people find it difficult to communicate with the rest of society. They usually need Sign Language (SL) interpreters, but as the number of hearing-impaired individuals grows, the number of SL interpreters remains almost non-existent. In addition, specialized schools lack a unified SL dictionary, which can be linked to the diglossic nature of the Arabic language, in which many dialects co-exist. Moreover, there is insufficient research work on Arabic SL in general, which can be linked to the lack of unification in Arabic Sign Language. Hence, we present an Emirate Sign Language (ESL) electronic dictionary (e-Dictionary), consisting of four features, namely Dictation, Alpha Webcam, Vocabulary, and Spell, and two datasets (letters and vocabulary/sentences) to help the community explore and unify ESL. The vocabulary/sentences dataset was recorded with an Azure Kinect and includes 127 signs and 50 sentences, making a total of 708 clips, performed by 4 Emirati signers with hearing loss. All the signs were reviewed by the head of the Community Development Authority in the UAE for compliance. The ESL e-Dictionary integrates state-of-the-art methods, i.e., the Automatic Speech Recognition API by Google, a YOLOv8 model trained on our dataset, and an algorithm inspired by the bag-of-words model. Experimental results proved the usability of the e-Dictionary in real time on laptops. The vocabulary/sentences dataset will be made publicly available in the near future for research purposes.
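    Since the abstract names a YOLOv8 model for webcam letter recognition, a minimal inference sketch with the ultralytics package looks roughly as follows. The weight file name esl_letters.pt is hypothetical, standing in for a model trained on the letters dataset described above.

        import cv2
        from ultralytics import YOLO

        model = YOLO("esl_letters.pt")      # hypothetical ESL letter weights
        cap = cv2.VideoCapture(0)           # default webcam
        ok, frame = cap.read()
        if ok:
            results = model(frame)          # run detection on one frame
            for box in results[0].boxes:
                cls_id = int(box.cls[0])    # predicted letter class
                print(results[0].names[cls_id], float(box.conf[0]))
        cap.release()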

    Utilization of Avatar-Based Technology in the Area of Sign Language... A Review

    Information and communication technology (ICT) has progressed rapidly in recent years, and it is becoming necessary for everybody, including deaf people. This paper gives an overview of the use of avatar-based technology in the area of sign language, which is the natural language of deaf people worldwide, although it differs from one country to another. The paper covers the basic concepts related to signing avatars and the efforts to apply them to different sign languages worldwide, especially Arabic Sign Language (ArSL).

    A Machine Learning Based Full Duplex System Supporting Multiple Sign Languages for the Deaf and Mute

    This manuscript presents a full duplex communication system for the Deaf and Mute (D-M) based on Machine Learning (ML). These individuals, who generally communicate through sign language, are an integral part of our society, and their contribution is vital. They face communication difficulties mainly because others, who generally do not know sign language, are unable to communicate with them. The work presents a solution to this problem through a system enabling the non-deaf and mute (ND-M) to communicate with D-M individuals without the need to learn sign language. The system is low-cost, reliable, easy to use, and based on a commercial-off-the-shelf (COTS) Leap Motion Device (LMD). The hand gesture data of D-M individuals is acquired using an LMD and processed using a Convolutional Neural Network (CNN) algorithm. A supervised ML algorithm completes the processing and converts the hand gesture data into speech. A new dataset for the ML-based algorithm is created and presented in this manuscript. This dataset includes three sign language datasets, i.e., American Sign Language (ASL), Pakistani Sign Language (PSL), and Spanish Sign Language (SSL). The proposed system automatically detects the sign language and converts it into an audio message for the ND-M. Similarities between the three sign languages are also explored, and further research can be carried out to help create more datasets, which may combine multiple sign languages. The ND-M can communicate by recording their speech, which is then converted into text and hand gesture images. The system can be upgraded in the future to support more sign language datasets. The system also provides a training mode that can help D-M individuals improve their hand gestures and understand how accurately the system is detecting them. The proposed system has been validated through a series of experiments, with hand gesture detection accuracy exceeding 95%. Funding for open access charge: Universidad de Málaga.
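    One plausible reading of the CNN stage is a small image classifier over rendered hand-gesture frames. The sketch below is a generic illustration of that idea, not the paper's architecture; the input size and class count are assumptions.

        import torch
        import torch.nn as nn

        class GestureCNN(nn.Module):
            """Small CNN mapping a 64x64 gesture image to a class score."""
            def __init__(self, n_classes=26):   # e.g. one class per letter
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * 16 * 16, n_classes)

            def forward(self, x):               # x: (batch, 1, 64, 64)
                return self.classifier(self.features(x).flatten(1))

        model = GestureCNN()
        probs = model(torch.randn(1, 1, 64, 64)).softmax(-1)
        print(int(probs.argmax()))              # predicted gesture class

    A second supervised classifier could then route the predicted gesture stream to the appropriate sign-language vocabulary (ASL, PSL, or SSL) before speech synthesis.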

    TectoMT – a deep-linguistic core of the combined Chimera MT system

    Chimera is a machine translation system that combines the TectoMT deep-linguistic core with the phrase-based MT system Moses. For the English–Czech pair it also uses the Depfix post-correction system. All components run on the Unix/Linux platform and are open source (available from the Perl repository CPAN and the LINDAT/CLARIN repository). The main website is https://ufal.mff.cuni.cz/tectomt. Development is currently supported by the QTLeap 7th FP project (http://qtleap.eu).