DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation
There is an undeniable communication barrier between deaf people and people
with normal hearing ability. Although innovations in sign language translation
technology aim to tear down this communication barrier, the majority of
existing sign language translation systems are either intrusive or constrained
by resolution or ambient lighting conditions. Moreover, these existing systems
can only perform single-sign ASL translation rather than sentence-level
translation, making them much less useful in daily-life communication
scenarios. In this work, we fill this critical gap by presenting DeepASL, a
transformative deep learning-based sign language translation technology that
enables ubiquitous and non-intrusive American Sign Language (ASL) translation
at both word and sentence levels. DeepASL uses infrared light as its sensing
mechanism to non-intrusively capture the ASL signs. It incorporates a novel
hierarchical bidirectional deep recurrent neural network (HB-RNN) and a
probabilistic framework based on Connectionist Temporal Classification (CTC)
for word-level and sentence-level ASL translation, respectively. To evaluate its
performance, we collected 7,306 samples from 11 participants, covering 56
commonly used ASL words and 100 ASL sentences. DeepASL achieves an average
word-level translation accuracy of 94.5% and an average word error rate of 8.2%
on unseen ASL sentences. Given this promising performance, we believe DeepASL
represents a significant step towards breaking down the communication barrier
between deaf people and the hearing majority, and thus has the potential to
fundamentally change deaf people's lives.
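The paper's HB-RNN architecture is not detailed in this abstract, but the sentence-level pipeline it describes (a bidirectional recurrent network whose per-frame outputs are decoded with CTC) can be sketched roughly as follows. A plain bidirectional LSTM stands in for the hierarchical HB-RNN, and all layer sizes and names are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

# Rough sketch of bidirectional-RNN + CTC sentence translation in the
# spirit of DeepASL. A plain 2-layer BiLSTM stands in for the paper's
# hierarchical HB-RNN; all dimensions here are illustrative assumptions.
class BiRNNCTC(nn.Module):
    def __init__(self, feat_dim=60, hidden=128, vocab=57):  # 56 words + CTC blank
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, vocab)

    def forward(self, x):                   # x: (batch, frames, feat_dim)
        h, _ = self.rnn(x)                  # (batch, frames, 2*hidden)
        return self.fc(h).log_softmax(-1)   # per-frame log-probs over the vocabulary

model = BiRNNCTC()
ctc = nn.CTCLoss(blank=0)                   # index 0 reserved for the CTC blank

frames = torch.randn(4, 120, 60)            # 4 sequences of 120 skeletal-feature frames
labels = torch.randint(1, 57, (4, 5))       # 5-word target sentences (0 is the blank)
log_probs = model(frames).transpose(0, 1)   # CTCLoss expects (frames, batch, vocab)
loss = ctc(log_probs, labels,
           input_lengths=torch.full((4,), 120),
           target_lengths=torch.full((4,), 5))
loss.backward()
```

The appeal of CTC here, as the abstract implies, is that it marginalizes over all frame-to-word alignments, so sentences can be trained and decoded without per-frame labels or explicit sign segmentation.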
A Novel Machine Learning Based Two-Way Communication System for Deaf and Mute
Open Access Article
by Muhammad Imran Saleem 1,2,*, Atif Siddiqui 3, Shaheena Noor 4, Miguel-Angel Luque-Nieto 1,2 and Pablo Otero 1,2
1 Telecommunications Engineering School, University of Malaga, 29010 Malaga, Spain
2 Institute of Oceanic Engineering Research, University of Malaga, 29010 Malaga, Spain
3 Airbus Defence and Space, UK
4 Department of Computer Engineering, Faculty of Engineering, Sir Syed University of Engineering and Technology, Karachi 75300, Pakistan
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(1), 453; https://doi.org/10.3390/app13010453
Received: 12 November 2022 / Revised: 22 December 2022 / Accepted: 26 December 2022 / Published: 29 December 2022
Abstract
Deaf and mute people are an integral part of society, and it is particularly important to provide them with a platform for communication that requires no special training or learning. These people rely on sign language, but effective communication also requires that others understand sign language, and learning it is a challenge for those with no impairment. A further challenge is building a system that supports the hand gestures of different sign languages. In this manuscript, a system is presented that provides communication between deaf and mute (DnM) and non-deaf and mute (NDnM) people. The hand gestures of DnM people are acquired and processed using deep learning, and multiple-language support is achieved using supervised machine learning. NDnM people are provided with an audio interface in which the hand gestures are converted into speech and played through the sound card interface of the computer. Speech from NDnM people is acquired through microphone input and converted into text. The system is easy to use and low-cost. (...) This research has been partially funded by Universidad de Málaga, Málaga, Spain.
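The implementation details are elided above, but the two directions of the described pipeline can be sketched with off-the-shelf Python libraries (pyttsx3 for text-to-speech, SpeechRecognition for microphone transcription). The gesture classifier is deliberately left as a hypothetical stand-in; nothing below is the authors' code:

```python
import pyttsx3                     # text-to-speech through the sound card
import speech_recognition as sr    # microphone capture + speech-to-text

def sign_to_speech(gesture_label: str) -> None:
    """DnM -> NDnM direction: speak a recognized gesture label aloud.
    A deep-learning classifier (not shown) is assumed to have already
    mapped the captured hand-gesture frames to `gesture_label`."""
    engine = pyttsx3.init()
    engine.say(gesture_label)
    engine.runAndWait()

def speech_to_text() -> str:
    """NDnM -> DnM direction: transcribe microphone input to text."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)  # cloud ASR; needs a network connection

if __name__ == "__main__":
    sign_to_speech("hello")   # e.g. a label emitted by the gesture model
    print(speech_to_text())
```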
- …