
    IoT Based Sign Language Recognition System

    Sign language is the key communication medium that deaf and mute people use in their day-to-day lives. Communication with them is difficult because a non-mute person cannot understand their hand gestures, and in many instances mute people are also hearing impaired. Like Sinhala, Tamil, English, or any other language, sign language also tends to differ by region. This paper is an attempt to help deaf and mute people establish an effective communication mechanism with non-mute people. The end product of this project combines a mobile application that translates sign language into digital voice with an IoT-enabled, lightweight wearable glove capable of recognizing the twenty-six letters of the English alphabet, the numbers 0-9, and words. A voice-to-text feature in the mobile application provides a better user experience and further reduces the communication gap between the mute and non-mute communities. Research findings and results from the current system indicate that its output can be improved by up to 25-35% with an enhanced pattern recognition mechanism.
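
    The paper does not include code, but the glove-side recognition step it describes can be illustrated with a small sketch: a frame of flex-sensor readings is matched against calibrated gesture templates before the result is handed to the mobile application for speech output. The sensor ordering, template values, and distance threshold below are assumptions for illustration, not the authors' parameters.

    ```python
    # Hypothetical sketch of the glove-side recognition step: five flex-sensor
    # readings are matched against stored gesture templates by nearest-neighbour
    # distance. Sensor order, templates, and the threshold are illustrative only.
    import math

    # Calibrated mean flex values (0 = straight, 1 = fully bent) per gesture.
    GESTURE_TEMPLATES = {
        "A": [0.1, 0.9, 0.9, 0.9, 0.9],   # thumb extended, other fingers bent
        "B": [0.9, 0.1, 0.1, 0.1, 0.1],   # thumb folded, other fingers straight
        "5": [0.1, 0.1, 0.1, 0.1, 0.1],   # open hand
    }

    def classify(reading, max_distance=0.6):
        """Return the closest gesture label, or None if nothing is close enough."""
        best_label, best_dist = None, float("inf")
        for label, template in GESTURE_TEMPLATES.items():
            dist = math.dist(reading, template)
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label if best_dist <= max_distance else None

    if __name__ == "__main__":
        sample = [0.12, 0.88, 0.91, 0.85, 0.93]   # simulated flex-sensor frame
        print(classify(sample))                    # -> "A"
    ```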

    Communicating with Humans and Robots: A Motion Tracking Data Glove for Enhanced Support of Deafblind

    In this work, we discuss the design and development of a communication system for enhanced support of the deafblind. The system is based on an advanced motion tracking Data Glove that allows for high fidelity determination of finger postures with consequent identification of the basic Malossi alphabet signs. A natural, easy-to-master alphabet extension that supports single-hand signing without touch surface sensing is described, and different scenarios for its use are discussed. The focus is on using the extended Malossi alphabet as a communication medium in a Data Glove-based interface for remote messaging and interactive control of mobile robots. This may be of particular interest to the deafblind community, where remote communication and robotized support and services are on the rise. The designed Data Glove-based communication interface requires minimal adjustments to the Malossi alphabet and can be mastered after a short training period. The natural interaction style supported by the Data Glove and the popularity of the Malossi alphabet among the deafblind should greatly facilitate the wider adoption of the developed interface.
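
    As a rough illustration of how such an interface might route recognized signs to either messaging or robot control, the sketch below accumulates letters reported by the glove and, at each word break, dispatches the completed word as a chat fragment or as a command. The command vocabulary, class, and method names are hypothetical and not taken from the paper.

    ```python
    # Illustrative sketch (not the authors' code) of how recognized extended-Malossi
    # letters from the Data Glove might drive both messaging and simple robot
    # commands. The command vocabulary and callback names are assumptions.
    COMMANDS = {"GO": "move_forward", "STOP": "halt", "LEFT": "turn_left"}

    class GloveSession:
        def __init__(self):
            self.buffer = []          # letters signed since the last word break

        def on_letter(self, letter):
            """Called by the glove driver each time a sign is recognized."""
            self.buffer.append(letter)

        def on_word_break(self):
            """A pause (word delimiter) ends the current word."""
            word = "".join(self.buffer)
            self.buffer.clear()
            if word in COMMANDS:
                return ("robot", COMMANDS[word])    # dispatch as a robot command
            return ("message", word)                # otherwise append to a chat message

    if __name__ == "__main__":
        session = GloveSession()
        for ch in "GO":
            session.on_letter(ch)
        print(session.on_word_break())   # -> ('robot', 'move_forward')
    ```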

    Building Intelligent Communication Systems for Handicapped Aphasiacs

    This paper presents an intelligent system allowing handicapped aphasiacs to perform basic communication tasks. It has the following three key features: (1) A 6-sensor data glove measures the finger gestures of a patient in terms of the bending degrees of the fingers. (2) A finger language recognition subsystem recognizes language components from the finger gestures. It employs multiple regression analysis to automatically extract proper finger features so that the recognition model can be constructed quickly and correctly by a radial basis function neural network. (3) A coordinate-indexed virtual keyboard allows users to directly access the letters on the keyboard at a practical speed. The system serves as a viable tool for natural and affordable communication for handicapped aphasiacs through continuous finger language input.
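
    The recognition subsystem is built around a radial basis function network; the sketch below shows a minimal RBF classifier of that kind, with Gaussian activations over training-sample centres and output weights fitted by least squares. The toy six-sensor readings, centre choice, and kernel width are assumptions, and the paper's regression-based feature selection step is omitted.

    ```python
    # A minimal RBF-network classifier sketch in the spirit of the described
    # recognition subsystem. Centres, width, and the toy data are illustrative,
    # not the authors' parameters.
    import numpy as np

    def rbf_features(X, centres, width):
        """Gaussian RBF activations of each sample w.r.t. each centre."""
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))

    # Toy training set: 6-sensor bend readings for two finger-spelled letters.
    X_train = np.array([[0.10, 0.90, 0.90, 0.90, 0.90, 0.20],
                        [0.90, 0.10, 0.10, 0.10, 0.10, 0.80],
                        [0.15, 0.85, 0.92, 0.88, 0.90, 0.25],
                        [0.88, 0.12, 0.08, 0.12, 0.15, 0.78]])
    y_train = np.array([0, 1, 0, 1])                 # class indices ("A", "B", ...)

    centres = X_train                                 # one centre per training sample
    H = rbf_features(X_train, centres, width=0.5)
    T = np.eye(2)[y_train]                            # one-hot targets
    W, *_ = np.linalg.lstsq(H, T, rcond=None)         # output weights by least squares

    def predict(x):
        h = rbf_features(np.atleast_2d(x), centres, width=0.5)
        return int(np.argmax(h @ W, axis=1)[0])

    print(predict([0.12, 0.90, 0.88, 0.90, 0.87, 0.22]))   # -> 0
    ```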

    Assistive technologies for severe and profound hearing loss: beyond hearing aids and implants

    Assistive technologies offer capabilities that were previously inaccessible to individuals with severe and profound hearing loss who have no or limited access to hearing aids and implants. This literature review aims to explore existing assistive technologies and identify what still needs to be done. It is found that there is a lack of focus on the overall objectives of assistive technologies. In addition, several other issues are identified: only a very small number of assistive technologies developed within a research context have led to commercial devices; there is a predisposition to use the latest, expensive technologies; and there is a tendency to avoid designing products universally. Finally, further development of plug-ins that translate the text content of a website into various sign languages is needed to make information on the internet more accessible.

    A Cross-Lingual Mobile Medical Communication System Prototype for Foreigners and Subjects with Speech, Hearing, and Mental Disabilities Based on Pictograms

    People with speech, hearing, or mental impairment require special communication assistance, especially for medical purposes. Automatic solutions for speech recognition and voice synthesis from text are poor fits for communication in the medical domain because they are dependent on error-prone statistical models. Systems dependent on manual text input are insufficient. Recently introduced systems for automatic sign language recognition are dependent on statistical models as well as on image and gesture quality. Such systems remain in early development and are based mostly on minimal hand gestures unsuitable for medical purposes. Furthermore, solutions that rely on the Internet cannot be used after disasters that require humanitarian aid. We propose a high-speed, intuitive, Internet-free, voice-free, and text-free tool suited for emergency medical communication. Our solution is a pictogram-based application that provides easy communication for individuals who have speech or hearing impairment or mental health issues that impair communication, as well as foreigners who do not speak the local language. It provides support and clarification in communication by using intuitive icons and interactive symbols that are easy to use on a mobile device. Such pictogram-based communication can be quite effective and ultimately make people’s lives happier, easier, and safer.
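
    A hedged sketch of the core idea, an offline lookup that maps a tapped pictogram and a target language to a ready-made phrase, is shown below; no network, speech, or text entry is involved. The pictogram identifiers, languages, and phrases are invented for illustration and are not from the described prototype.

    ```python
    # Illustrative offline phrasebook: a chosen pictogram plus a target language
    # yields a prepared sentence. Entries below are made up for this sketch.
    PHRASEBOOK = {
        "pain_chest": {"en": "I have chest pain.",
                       "pl": "Boli mnie w klatce piersiowej."},
        "allergy_penicillin": {"en": "I am allergic to penicillin.",
                               "pl": "Mam uczulenie na penicylinę."},
    }

    def render(pictogram_id, language="en"):
        """Return the phrase for a tapped pictogram; no network or speech needed."""
        entry = PHRASEBOOK.get(pictogram_id, {})
        return entry.get(language) or entry.get("en", "")

    print(render("pain_chest", "pl"))   # -> "Boli mnie w klatce piersiowej."
    ```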

    Is Innovative Auditory Technology Here to Stay?

    Innovation is often seen as a necessary constant throughout history, as it is our survival mechanism, evolving from humanity’s inherent ignorance and flaws. However, as society pushes for innovation at an ever-increasing rate, should we be concerned about its implications for the deaf and hard of hearing (DHH) in the United States? The DHH community has been quite sensitive to this surplus of innovation, with an increasing number of ‘auditory aid’-related technologies being marketed every day. What does this mean for deaf culture? More importantly, how will these innovations shape the future of deaf behavior and linguistics?

    Indian Sign Language Recognition Using Deep Learning Techniques

    By automatically translating Indian Sign Language into English speech, a portable multimedia Indian Sign Language translation program can help deaf and/or mute speakers connect with hearing people. It could act as a translator for those who do not understand sign language, eliminating the need for a mediator and allowing communication to take place in the speaker's native language. Without such support, Deaf-Dumb people are denied regular educational opportunities, and uneducated Deaf-Dumb people have a difficult time communicating with members of their own culture. We provide an integrated Android application to help uneducated Deaf-Dumb people fit into society and connect with others. The application includes a straightforward keyboard translator that can convert any term from Indian Sign Language to English. The proposed system is an interactive application program for mobile phones. The phone's camera is used to photograph Indian Sign Language gestures, the device performs the vision processing tasks, and its audio output produces the speech, limiting the need for extra devices and costs. The perceived latency between a hand signal and its translation is reduced by parallel processing, which allows for very quick translation of finger and hand motions. The system is capable of recognizing one-handed sign representations of the numbers 0 through 9. The findings show that the results are highly reproducible, consistent, and accurate.
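
    The abstract credits parallel processing for hiding the latency between a hand signal and its translation; the sketch below mimics that pipeline with a capture thread feeding frames into a queue while a second thread classifies them. The frame source and the random "classifier" are stand-ins, since the paper's model and APIs are not given.

    ```python
    # Sketch of a parallel capture/translate pipeline: one worker keeps grabbing
    # frames while another classifies the previous one, hiding recognition latency.
    # The capture source and classifier are placeholders, not the paper's code.
    import queue, threading, time, random

    frames = queue.Queue(maxsize=4)

    def capture_loop(n_frames=10):
        """Stand-in for the phone camera: pushes synthetic 'frames'."""
        for i in range(n_frames):
            frames.put(f"frame-{i}")
            time.sleep(0.01)
        frames.put(None)                       # sentinel: capture finished

    def translate_loop():
        """Stand-in for the vision model that maps a hand sign to a digit 0-9."""
        while True:
            frame = frames.get()
            if frame is None:
                break
            digit = random.randint(0, 9)       # placeholder for the real classifier
            print(f"{frame} -> recognized digit {digit}")

    producer = threading.Thread(target=capture_loop)
    consumer = threading.Thread(target=translate_loop)
    producer.start(); consumer.start()
    producer.join(); consumer.join()
    ```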

    Use of Key Points and Transfer Learning Techniques in Recognition of Handedness Indian Sign Language

    Sign language is the most expressive way of communication for individuals who have trouble speaking or hearing, yet most hearing people cannot comprehend it, which creates communication barriers. The majority of people are right-handed; statistically, only about 10% of the world's population is left-handed, using the left hand as the dominant hand. In handwritten text recognition, whether the text was written by a left-handed or a right-handed person poses no problem for either a human or a computer. The same is not true for computer-based sign language detection: when detection is performed using computer vision and relies on appearance, a sign may not be detected correctly. Likewise, in machine and deep learning, if a model is trained on just one dominant hand, say the right hand, its predictions can go wrong when the same sign is performed by a left-handed person. This paper addresses this issue by taking into account signs performed by any type of signer: left-handed, right-handed, or ambidextrous. The proposed work is on Indian Sign Language (ISL). Two models are trained: Model I on one dominant hand and Model II on both hands. Model II gives correct predictions regardless of the type of signer and recognizes ISL alphabets and numbers. We used key points and transfer learning techniques for the implementation. With this approach, the models train quickly, and we achieved a validation accuracy of 99%.
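
    One way to make a key-point-based model handedness-agnostic, in the spirit of the approach above, is to mirror left-hand landmarks about the vertical axis (or, equivalently, to augment the training data with such flips so both hands are covered, as Model II does by training on both hands). The landmark count, normalization, and values below are assumptions for illustration, not the paper's data.

    ```python
    # Illustrative handedness trick: 2-D hand key points (e.g. 21 normalized
    # landmarks, as produced by tools such as MediaPipe) from a left-handed signer
    # are mirrored about the vertical axis so that a model trained on right-hand
    # signs still applies. Landmark values below are toy numbers.
    import numpy as np

    def mirror_keypoints(landmarks):
        """Flip x-coordinates of (N, 2) normalized landmarks in [0, 1]."""
        flipped = landmarks.copy()
        flipped[:, 0] = 1.0 - flipped[:, 0]
        return flipped

    right_hand = np.array([[0.70, 0.80],    # wrist
                           [0.62, 0.55],    # index fingertip
                           [0.75, 0.50]])   # thumb tip (toy values)

    left_hand = mirror_keypoints(right_hand)
    print(left_hand)          # same pose expressed as the opposite hand
    ```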

    Proceedings of the 3rd IUI Workshop on Interacting with Smart Objects

    These are the Proceedings of the 3rd IUI Workshop on Interacting with Smart Objects. Objects that we use in our everyday life are outgrowing their restricted interaction capabilities and now provide functionality that goes far beyond their original purpose. They feature computing capabilities and are thus able to capture information, process and store it, and interact with their environments, which turns them into smart objects.