
    Developing a Prototype to Translate Pakistan Sign Language into Text and Speech While Using Convolutional Neural Networking

    The purpose of this study is to provide a literature review of the work done on sign language in Pakistan and worldwide. The study also presents the framework of an already developed prototype that translates Pakistani sign language into speech and text using a convolutional neural network (CNN), helping unimpaired teachers bridge the communication gap with deaf learners. Owing to a lack of sign language training, unimpaired teachers find it difficult to communicate with impaired learners; this translation tool can help fill that gap. Prior research produced a prototype that translates English text into sign language and highlighted the need for a tool that translates signs into English text. The current study provides an architectural framework for the Pakistani-sign-language-to-English-text translation tool, showing how components such as deep learning, convolutional neural networks, Python, TensorFlow, NumPy, InceptionV3 with transfer learning, and eSpeak text-to-speech contribute to the development of the translation tool prototype.
    Keywords: Pakistan sign language (PSL), sign language (SL), translation, deaf, unimpaired, convolutional neural networking (CNN). DOI: 10.7176/JEP/10-15-18. Publication date: May 31st, 2019
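
    As an illustration of the pipeline the abstract names, here is a minimal Python sketch, using TensorFlow/Keras, of transfer learning on an ImageNet-pretrained InceptionV3: the convolutional base is frozen and only a new classification head is trained on sign images. The directory name "psl_signs/", the class count, and the training settings are illustrative assumptions, not the authors' code.

        import tensorflow as tf

        NUM_SIGNS = 36  # hypothetical number of PSL sign classes

        # Load InceptionV3 pre-trained on ImageNet, without its original classifier.
        base = tf.keras.applications.InceptionV3(
            weights="imagenet", include_top=False, input_shape=(299, 299, 3))
        base.trainable = False  # freeze the convolutional feature extractor

        model = tf.keras.Sequential([
            tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # InceptionV3 expects [-1, 1]
            base,
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(NUM_SIGNS, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

        # Train on labelled sign images (hypothetical dataset layout: one folder per sign).
        train_ds = tf.keras.utils.image_dataset_from_directory(
            "psl_signs/", image_size=(299, 299), batch_size=32)
        model.fit(train_ds, epochs=5)

        # A predicted sign word could then be voiced with the eSpeak command line, e.g.
        #   subprocess.run(["espeak", predicted_word])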

    Gesture and sign language recognition with temporal residual networks


    Sign language recognition using wearable electronics: Implementing K-nearest neighbors with dynamic time warping and convolutional neural network algorithms

    We propose a sign language recognition system based on wearable electronics and two different classification algorithms. The wearable electronics consisted of a sensory glove and inertial measurement units that capture finger, wrist, and arm/forearm movements. The classifiers were k-Nearest Neighbors with Dynamic Time Warping (a non-parametric method) and Convolutional Neural Networks (a parametric method). Ten sign-words from Italian Sign Language were considered: cose ("things"), grazie ("thank you"), and maestra ("teacher"), together with words of international meaning such as google, internet, jogging, pizza, television, twitter, and ciao. Each sign was repeated one hundred times by seven people, five males and two females, aged 29–54 years (SD 10.34). The classifiers achieved an accuracy of 96.6% ± 3.4 (SD) for k-Nearest Neighbors plus Dynamic Time Warping and 98.0% ± 2.0 (SD) for the Convolutional Neural Networks. Our wearable electronics were among the most complete reported, and the classifiers performed at the top of the range compared with other relevant works in the literature.
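
    For the non-parametric branch, the sketch below shows a minimal NumPy implementation of k-Nearest Neighbors with Dynamic Time Warping, assuming each glove recording is an array of shape (time_steps, n_sensors); the names and data layout are illustrative, not the authors' code.

        import numpy as np

        def dtw_distance(a, b):
            """Dynamic Time Warping distance between two multivariate sequences."""
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame distance
                    cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                         cost[i, j - 1],      # deletion
                                         cost[i - 1, j - 1])  # match
            return cost[n, m]

        def knn_dtw_predict(query, train_seqs, train_labels, k=1):
            """Label a query by the majority label of its k DTW-nearest training sequences."""
            dists = [dtw_distance(query, s) for s in train_seqs]
            nearest = np.argsort(dists)[:k]
            labels = [train_labels[i] for i in nearest]
            return max(set(labels), key=labels.count)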

    A Sign-to-Speech Translation System

    This thesis describes sign-to-speech translation using neural networks. Sign language translation is an interesting but difficult problem for which neural network techniques seem promising because of their ability to adapt to the user's hand movements, which most other techniques cannot do. However, even with neural networks and artificial sign languages, translation is hard, and the best-known system, that of Fels & Hinton (1993), can translate only 66 root words and 203 words including their conjugations. This research improves those results to 790 root signs and 2,718 words including their conjugations while preserving high accuracy (over 93%) in translation. Matcher neural networks (Revesz 1989, 1990) and asymmetric Hamming distances are the key sources of improvement. This research aims at providing a means of communication for deaf people. Adviser: Peter Z. Revesz
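
    The abstract credits asymmetric Hamming distances for part of the gain but does not define them; the Python sketch below shows one plausible form, in which a bit set in the stored template but missing from the observation costs differently from a spurious observed bit. The weights are illustrative assumptions, not the thesis's definition.

        def asymmetric_hamming(template, observed, w_missing=1.0, w_spurious=0.5):
            """Weighted Hamming distance over two equal-length binary sequences."""
            assert len(template) == len(observed)
            dist = 0.0
            for t, o in zip(template, observed):
                if t == 1 and o == 0:
                    dist += w_missing   # template bit absent from the observation
                elif t == 0 and o == 1:
                    dist += w_spurious  # extra bit present only in the observation
            return dist

        # Example: one missing and one spurious bit give 1.0 + 0.5 = 1.5.
        print(asymmetric_hamming([1, 1, 0, 0], [1, 0, 0, 1]))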

    ASL Recognition Quality Analysis Based on Sensory Gloves and MLP Neural Network

    A simulated human hand model was built using a virtual reality program that converts printed letters into a human hand figure representing American Sign Language (ASL); the program was built using forward and inverse kinematics equations of the human hand. The inputs to the simulation program are ordinary letters and the outputs are the hand figures representing the corresponding ASL letters. In this research, a hardware system was designed to recognize the ASL manual alphabet, using a glove sensor design and an artificial neural network to enhance the recognition process and convert the ASL manual alphabet into printed letters. The hardware system uses flex sensors positioned on gloves to obtain finger joint angle data as each ASL letter is shown. In addition, the system uses a DAQ 6212 to interface the sensors with the PC. We trained and tested the hardware system on recognition of ASL manual-alphabet words and names, achieving an accuracy of 90.19%, while the software system for converting printed English names and words into ASL achieved 100% accuracy.
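
    As a sketch of the recognition stage, the Python snippet below trains a multilayer perceptron on vectors of flex-sensor joint angles, with scikit-learn's MLPClassifier standing in for the paper's network; the synthetic data, sensor count, and layer size are assumptions for illustration only.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        # Synthetic stand-in data: 200 glove readings of 10 joint angles each,
        # labelled with one of 26 letters (real data would come from the DAQ).
        X = rng.uniform(0.0, 90.0, size=(200, 10))
        y = rng.integers(0, 26, size=200)

        mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
        mlp.fit(X, y)

        # Classify a new glove reading into a printed letter (0 -> 'A', and so on).
        print(chr(ord("A") + int(mlp.predict(X[:1])[0])))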

    Measurement of the Flexible Bending Force of the Index and Middle Fingers for Virtual Interaction

    In this paper, the development of a new low-cost dataglove, based on fingertip bending tracking techniques for measuring finger bending during various virtual interaction activities, is presented as a means of enhancing rehabilitation services and improving quality of life, especially for disabled persons. The purpose of the research is to design a flexible controller for measuring the virtual interaction of the index and middle fingers, which are important in a variety of contexts, following a deterministic approach. The system analyses finger flexing using a flexible bend sensor as the key intermediary for tracking fingertip positions and orientations. The main purpose of the low-cost dataglove is to provide natural input control for interaction in virtual, multimodal, and tele-presence environments, since such input devices can monitor the dexterity and flexibility of human hand motion. Preliminary experimental results have shown that the dataglove is capable of measuring several human degrees of freedom (DoF), "translating" them into commands for interaction in the virtual world.
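
    The measurement idea reduces to reading the flexible bend sensor through an ADC and mapping the reading onto a bend angle; the Python sketch below shows such a two-point (flat versus fully bent) calibration, with every constant an illustrative assumption rather than a value from the paper.

        def adc_to_bend_angle(adc_value, flat_reading=320, bent_reading=720, max_angle=90.0):
            """Map a raw ADC reading to an approximate finger bend angle in degrees,
            using a two-point (flat vs. fully bent) calibration."""
            fraction = (adc_value - flat_reading) / (bent_reading - flat_reading)
            fraction = min(max(fraction, 0.0), 1.0)  # clamp outside the calibrated range
            return fraction * max_angle

        # Example: a mid-range reading maps to roughly half the full bend.
        print(adc_to_bend_angle(520))  # ~45.0 degrees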

    GyGSLA: A portable glove system for learning sign language alphabet

    Communication between people with normal hearing and those with hearing or speech impairments is difficult. Learning a new alphabet is not always easy, especially a sign language alphabet, which requires both hand skills and practice. This paper presents the GyGSLA system, a completely portable setup created to help inexperienced people learn a new sign language alphabet. To achieve this, a computer/mobile game interface and a hardware device, a wearable glove, were developed. When interacting with the computer or mobile device using the wearable glove, the user is asked to represent alphabet letters and digits by replicating the hand and finger positions shown on the screen. The glove sends the hand and finger positions to the computer/mobile device over a wireless interface, which interprets the letter or digit the user is forming and assigns it a corresponding score. The system was tested with three completely inexperienced sign language subjects, achieving a 76% average recognition ratio for the Portuguese sign language alphabet.
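
    A minimal sketch of the scoring step described above: the received finger positions are compared against a stored template for the requested letter, and the residual error is turned into a 0-100 score. The templates, position encoding, and tolerance below are invented for illustration, not the GyGSLA internals.

        import numpy as np

        # Hypothetical per-letter templates: one vector of normalized finger flexions each.
        TEMPLATES = {
            "A": np.array([0.9, 0.9, 0.9, 0.9, 0.2]),
            "B": np.array([0.1, 0.1, 0.1, 0.1, 0.8]),
        }

        def score_gesture(letter, glove_reading, tolerance=1.0):
            """Return a 0-100 score for how closely a glove reading matches
            the stored template of the requested letter."""
            error = np.linalg.norm(TEMPLATES[letter] - glove_reading)
            return max(0.0, 100.0 * (1.0 - error / tolerance))

        # Example: a near-perfect "A" scores around 91.
        print(score_gesture("A", np.array([0.85, 0.9, 0.95, 0.9, 0.25])))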