25 research outputs found

    Multi-Communication System for Physically Disabled People Using Raspberry Pi

    To lead our lives today, we need to keep pace with a fast-moving, materialistic world. We express our thoughts by communicating with different people, in different languages, and in different ways. However, it is difficult for physically disabled people, such as the mute, the deaf, the blind, and the paralyzed, to express their thoughts and ideas, so there is a need for a concrete solution that gives physically disabled people a better communication medium. The designed system resolves these problems: (1) mute people can express their thoughts by pressing keyboard buttons; (2) when a blind or deaf person speaks, the Raspberry Pi based device converts the speech into text and displays it on screen; (3) paralyzed people, by wearing a flex-sensor glove, can express their thoughts through simple finger movements. All these activities are possible with a single embedded system built around the Raspberry Pi, and disabled people can lead their lives peacefully and independently through smooth communication of their ideas and thoughts with their family members, friends, and society.
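
    The abstract gives no implementation details, but the flex-sensor glove path can be pictured with a short sketch. The code below assumes the sensors are read through an MCP3008 ADC via the gpiozero library; the channel wiring, bend thresholds, and phrases are all illustrative assumptions, not the authors' design.

        # Hypothetical sketch: read a flex-sensor glove on a Raspberry Pi and
        # map finger-bend patterns to canned phrases. Channels, thresholds,
        # and phrases are illustrative assumptions, not the authors' design.
        from time import sleep
        from gpiozero import MCP3008  # SPI ADC; flex sensors are analog

        # One ADC channel per finger (assumed wiring).
        fingers = {name: MCP3008(channel=ch)
                   for ch, name in enumerate(["thumb", "index", "middle", "ring", "pinky"])}

        # A bend pattern (True = bent) mapped to a phrase; values are made up.
        GESTURES = {
            (True, False, False, False, False): "Yes",
            (False, True, True, False, False): "No",
            (True, True, True, True, True): "I need help",
        }

        BEND_THRESHOLD = 0.6  # normalized reading above which a finger counts as bent

        while True:
            pattern = tuple(f.value > BEND_THRESHOLD for f in fingers.values())
            phrase = GESTURES.get(pattern)
            if phrase:
                print(phrase)  # the real device would display or speak this
            sleep(0.2)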

    A Novel Machine Learning Based Two-Way Communication System for Deaf and Mute

    Muhammad Imran Saleem, Atif Siddiqui, Shaheena Noor, Miguel-Angel Luque-Nieto and Pablo Otero. Appl. Sci. 2023, 13(1), 453; https://doi.org/10.3390/app13010453. Received 12 November 2022 / Revised 22 December 2022 / Accepted 26 December 2022 / Published 29 December 2022.
    Deaf and mute people are an integral part of society, and it is particularly important to provide them with a platform to be able to communicate without the need for any training or learning. These people rely on sign language, but for effective communication, it is expected that others can understand sign language. Learning sign language is a challenge for those with no impairment. Another challenge is to have a system in which hand gestures of different languages are supported. In this manuscript, a system is presented that provides communication between deaf and mute (DnM) and non-deaf and mute (NDnM) people. The hand gestures of DnM people are acquired and processed using deep learning, and multiple-language support is achieved using supervised machine learning. The NDnM people are provided with an audio interface in which the hand gestures are converted into speech and generated through the sound card interface of the computer. Speech from NDnM people is acquired using microphone input and converted into text. The system is easy to use and low cost. (...) This research has been partially funded by Universidad de Málaga, Málaga, Spain.
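
    The abstract describes the hearing-side (NDnM) path as speech captured by microphone and converted to text, with replies rendered as audio through the sound card. As a minimal sketch of that path only, assuming the off-the-shelf SpeechRecognition and pyttsx3 Python packages rather than whatever engines the paper actually uses:

        # Hypothetical sketch of the NDnM half of a two-way system:
        # microphone speech -> text, reply text -> audio via the sound card.
        # Library choices are assumptions, not the paper's implementation.
        import speech_recognition as sr
        import pyttsx3

        recognizer = sr.Recognizer()
        tts = pyttsx3.init()

        def speech_to_text() -> str:
            with sr.Microphone() as source:
                recognizer.adjust_for_ambient_noise(source)
                audio = recognizer.listen(source)
            # recognize_google sends the audio to a free web API
            return recognizer.recognize_google(audio)

        def text_to_speech(text: str) -> None:
            tts.say(text)      # queue the utterance
            tts.runAndWait()   # play it through the default sound card

        if __name__ == "__main__":
            print("You said:", speech_to_text())
            text_to_speech("Message received")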

    Kotha Bondhu

    Sign languages are the only communication medium for mute and deaf people all over the world, but most people are not familiar with them. It is really tough for these people to make others understand them. In the context of Bangladesh, we have studied this community, specifically people from the Tablighi Jamaat community, who use a distinct sign language to communicate with people. By studying them, we have gathered the local signs, along with globally standard sign languages, in an application named Kotha Bondhu. It is a very simple application that is understandable to people of any community and is able to translate the sign languages into a Bengali voice.
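
    The last step the abstract names, voicing the translation in Bengali, could be pictured with the sketch below. It assumes the gTTS package (which supports Bengali via lang="bn") purely as a stand-in; the app's actual text-to-speech backend is not stated.

        # Hypothetical sketch: speak a recognized sign's Bengali translation.
        # gTTS is an assumed stand-in; the app's real TTS backend is unknown.
        from gtts import gTTS

        def speak_bengali(text: str, out_file: str = "output.mp3") -> None:
            # lang="bn" selects Bengali; gTTS uses Google Translate's TTS service
            gTTS(text=text, lang="bn").save(out_file)

        speak_bengali("আপনাকে ধন্যবাদ")  # "Thank you"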

    Landscape of sign language research based on smartphone apps: coherent literature analysis, motivations, open challenges, recommendations and future directions for app assessment

    Numerous nations have prioritised the inclusion of citizens with disabilities, such as hearing loss, in all aspects of social life. Sign language is used by this population, yet they still have trouble communicating with others. Many sign language apps are being created to help bridge the communication gap as a result of technological advances enabled by the widespread use of smartphones. These apps are widely used because they are accessible and inexpensive. The services and capabilities they offer and the quality of their content, however, differ greatly. Evaluation of the quality of the content provided by these applications is necessary if they are to have any real effect, and a thorough evaluation will push developers to work hard on new apps, leading to better software development and experience overall. This research used the systematic literature review (SLR) method, which is recognised for giving a broad understanding of a field of study whilst offering additional information for future investigations. SLR was adopted in this research on smartphone-based sign language apps to understand the area and the main aspects used in their assessment. The studies were reviewed on the basis of related-work analysis, main issues, discussions, and methodological aspects. Results revealed that evaluation of sign language mobile apps is scarce; thus, we propose a future direction for the quality assessment of these apps. The findings will benefit both hearing and hearing-impaired users, and will pave the way for future collaboration between researchers, academicians, and app developers on sign language mobile apps.

    Exploring Former & Modern Views: A Catch-All to Assistive Technology Applications

    In life, everyone faces conditions such as ageing, disease, and impairments in hearing, vision, or mobility. In addition, some individuals are born with disabilities that can limit their participation in various areas of life, including work, education, and daily activities. Assistive technology (AT) is a field that aims to provide tools and resources to meet the needs of individuals with disabilities or impairments. This article reviews the latest advances in AT, focusing on the use of Internet of Things (IoT) technologies to provide innovative solutions. The article discusses the deployment of assistive devices in various areas, such as building access, information access, and participation in work and education. The goal of this research is to highlight the potential of AT to improve the lives of individuals with disabilities and to provide an overview of the current state of the field. The article also discusses the use of IoT-based solutions in assistive technology and identifies promising areas for future development and deployment. By providing a comprehensive review of the latest advancements in AT, this research aims to contribute to the ongoing efforts to enhance functional capacities and improve the quality of life for individuals with disabilities.

    A Machine Learning Based Full Duplex System Supporting Multiple Sign Languages for the Deaf and Mute

    This manuscript presents a full duplex communication system for the Deaf and Mute (D-M) based on Machine Learning (ML). These individuals, who generally communicate through sign language, are an integral part of our society, and their contribution is vital. They face communication difficulties mainly because others, who generally do not know sign language, are unable to communicate with them. The work presents a solution to this problem through a system enabling the non-deaf and mute (ND-M) to communicate with D-M individuals without the need to learn sign language. The system is low-cost, reliable, easy to use, and based on a commercial off-the-shelf (COTS) Leap Motion Device (LMD). The hand gesture data of D-M individuals is acquired using the LMD and processed using a Convolutional Neural Network (CNN) algorithm. A supervised ML algorithm completes the processing and converts the hand gesture data into speech. A new dataset for the ML-based algorithm is created and presented in this manuscript. This dataset includes three sign language datasets, i.e., American Sign Language (ASL), Pakistani Sign Language (PSL), and Spanish Sign Language (SSL). The proposed system automatically detects the sign language and converts it into an audio message for the ND-M. Similarities between the three sign languages are also explored, and further research can be carried out to help create more datasets, which can be a combination of multiple sign languages. The ND-M can communicate by recording their speech, which is then converted into text and hand gesture images. The system can be upgraded in the future to support more sign language datasets. The system also provides a training mode that can help D-M individuals improve their hand gestures and understand how accurately the system is detecting these gestures. The proposed system has been validated through a series of experiments, resulting in hand gesture detection accuracy exceeding 95%. Funding for open access charge: Universidad de Málaga.
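
    The abstract names the processing chain (gesture data through a CNN, then a supervised classifier routing between ASL, PSL, and SSL) but not its architecture. The sketch below shows only the generic shape of such a CNN classifier in Keras; the input size, layer widths, and class count are all assumptions, not the paper's published design.

        # Hypothetical sketch of a CNN gesture classifier of the kind described.
        # Input shape, layer sizes, and the class count are assumptions.
        import tensorflow as tf

        NUM_CLASSES = 26  # e.g., one class per letter; assumed

        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(64, 64, 1)),          # grayscale gesture frame
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # model.fit(train_frames, train_labels, epochs=10)  # dataset not reproduced here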

    signWithMe: Intelligent Learning System for Inclusive Learning of the Ghanaian Sign Language

    Applied project submitted to the Department of Computer Science, Ashesi University, in partial fulfillment of the Bachelor of Science degree in Computer Science, April 2022. Though Machine Learning has been around for a while, it is still considered a new tool for economists, including in its application to predicting economic growth. Studies that apply machine learning to predicting economic growth have found that the Random Forest algorithm is currently the best-performing machine learning algorithm for predicting economic recessions and economic growth. However, besides studies evaluating the various machine learning algorithms, there is limited literature on the application of these techniques to help economists and policymakers solve problems. Developing African countries, like Zimbabwe, with their unique economic growth challenges, can harness the predictive qualities of this technology in development planning and in setting and achieving growth targets. In this thesis, I apply the random forest algorithm to make income predictions for Zimbabwe, in light of the country's hope of attaining upper-middle-income status by 2030. Ashesi University.
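
    To make the named technique concrete, a random forest income predictor could be set up as in the sketch below; the dataset file, feature and target names, and hyperparameters are invented for illustration and are not taken from the thesis.

        # Hypothetical sketch of random-forest income prediction in the spirit
        # of the thesis above. File name, columns, and hyperparameters are
        # illustrative assumptions only.
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_absolute_error
        from sklearn.model_selection import train_test_split

        df = pd.read_csv("zimbabwe_indicators.csv")      # assumed dataset
        X = df.drop(columns=["gni_per_capita"])          # assumed target column
        y = df["gni_per_capita"]

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = RandomForestRegressor(n_estimators=500, random_state=0)
        model.fit(X_train, y_train)
        print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))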

    Artificial Intelligence of Things Applied to Assistive Technology: A Systematic Literature Review

    According to the World Health Organization, about 15% of the world’s population has some form of disability. Assistive Technology, in this context, contributes directly to overcoming the difficulties encountered by people with disabilities in their daily lives, allowing them to receive an education and become part of the labor market and society in a worthy manner. Assistive Technology has made great advances in its integration with Artificial Intelligence of Things (AIoT) devices. AIoT processes and analyzes the large amount of data generated by Internet of Things (IoT) devices and applies Artificial Intelligence models, specifically machine learning, to discover patterns for generating insights and assisting in decision making. Based on a systematic literature review, this article aims to identify the machine-learning models used across different research on Artificial Intelligence of Things applied to Assistive Technology. The survey of the topics approached in this article also highlights the context of such research, their application, the IoT devices used, and gaps and opportunities for further development. The survey results show that 50% of the analyzed studies address visual impairment and, for this reason, most of the topics cover issues related to computer vision. Portable devices, wearables, and smartphones constitute the majority of the IoT devices. Deep neural networks represent 81% of the machine-learning models applied in the reviewed research.

    Design of Mobile Application for Assisting Color Blind People to Identify Information on Sign Boards

    Color blindness is a condition in which a person cannot distinguish colors of similar contrast. This paper reports an attempt to develop a mobile phone application that can run on any Android or Windows smartphone. The developed application/software tool is able to assist color blind people by converting an image with low contrast into an image with high contrast. The objective of the proposed work was to develop a program on the LabVIEW platform to i) acquire the image whose information should be processed, ii) apply an algorithm that displays a high-contrast, crisp version of the original dull image, and iii) identify the colors and characters present in the dull image and message them to the user's phone. The work was implemented on the LabVIEW platform, making use of various image processing tools to identify the color and text on a sign board that otherwise cannot be identified by color blind persons. The implementation was tested with several inputs to validate the performance of the proposed method, and it produced accurate results for more than 97.3% of the test inputs.
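
    The paper's pipeline is implemented with LabVIEW image processing tools, which are not reproduced here. As a rough Python/OpenCV analogue of the contrast-boosting step alone, one standard choice is CLAHE applied to the lightness channel, sketched below; this is an assumed stand-in, not the authors' algorithm.

        # Hypothetical Python/OpenCV analogue of the contrast-enhancement step
        # (the paper itself uses LabVIEW, not this code).
        import cv2

        def enhance_contrast(path):
            bgr = cv2.imread(path)
            # Equalize lightness only, so hues are not distorted.
            lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
            l, a, b = cv2.split(lab)
            clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
            return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

        cv2.imwrite("signboard_enhanced.png", enhance_contrast("signboard.png"))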

    ROBOTIC FINGERSPELLING HAND FOR THE DEAF-BLIND

    Communication has always been difficult for people who are deaf-blind. The Smith-Kettlewell Eye Research Institute (SKERI), in conjunction with the California Polytechnic State University Mechanical Engineering department, has therefore commissioned the design, construction, testing, and programming of a robotic hand capable of performing basic fingerspelling to help bridge the communication gap. The hand parts were modeled in SolidWorks and fabricated on an Objet rapid prototyper. Its fingers are actuated by 11 Maxon motors, and its wrist is actuated by 2 Hitec servo motors. The motors are controlled by Texas Instruments L293D motor driver chips, ATtiny2313 slave microcontroller chips programmed to act as motor controllers, and a master ATmega644p microcontroller. The master controller communicates with a computer over a USB cable to receive sentences typed by a sighted user. The master controller then translates each letter into its corresponding hand gesture in the American Manual Alphabet and instructs each motor controller to move each finger joint into the proper position.
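
    On the computer side of that USB link, a minimal host sketch might send one letter at a time to the master controller. The sketch below assumes a pyserial connection and a one-byte-per-letter protocol; the port name, baud rate, pacing, and protocol are all assumptions, as the abstract does not describe them.

        # Hypothetical host-side sketch: stream a typed sentence to the hand's
        # master microcontroller one letter at a time. Port, baud rate, and the
        # one-byte protocol are assumptions; the real protocol is not given.
        import time
        import serial  # pyserial

        PORT, BAUD = "/dev/ttyUSB0", 9600  # assumed

        with serial.Serial(PORT, BAUD, timeout=1) as link:
            sentence = input("Sentence to fingerspell: ")
            for letter in sentence.lower():
                if letter.isalpha():
                    link.write(letter.encode("ascii"))  # one gesture per letter
                    time.sleep(0.8)  # give the fingers time to move (assumed pacing)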