
    Simulation and Analysis of Hand Gesture Recognition for Indian Sign Language using CNN

    Sign language recognition systems are devices or programs that help deaf and mute people. Communication has always been difficult for people with verbal and hearing disabilities, and such a system reduces the communication gap by letting them interact easily with people who do not know sign language. Roughly 15-20% of the world's population is deaf or mute, which is a clear indication of the need for sign language recognition systems. Different methods have been used to recognise sign language, but many are impractical for economic and commercial reasons, so this work adopts a cheap and affordable approach. Recognition systems based on image processing and neural networks are preferred over gadget-based solutions because they are more accurate and easier to implement. This paper aims to create an easy-to-use and accurate sign language recognition system, trained with a convolutional neural network, that produces text and speech output.
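
    The paper's exact architecture is not given here, but a small convolutional network of the kind described (stacked convolution, pooling, and fully connected layers over static sign images) can be sketched as follows. The input size (64x64 grayscale) and the number of gesture classes (35) are illustrative assumptions, not values taken from the paper.

```python
# A minimal sketch of a CNN classifier for static sign images (not the authors'
# exact network). Assumes 64x64 grayscale inputs and 35 gesture classes.
import torch
import torch.nn as nn

class SignCNN(nn.Module):
    def __init__(self, num_classes=35):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example forward pass on a dummy batch of 64x64 grayscale frames.
model = SignCNN()
logits = model(torch.randn(8, 1, 64, 64))   # -> shape (8, 35)
```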

    Handshape recognition for Argentinian Sign Language using ProbSom

    Automatic sign language recognition is an important topic within the areas of human-computer interaction and machine learning. On the one hand, it poses a complex challenge that requires the intervention of various knowledge areas, such as video processing, image processing, intelligent systems and linguistics. On the other hand, robust recognition of sign language could assist in the translation process and the integration of hearing-impaired people. This paper offers two main contributions: first, the creation of a database of handshapes for Argentinian Sign Language (LSA), a topic that has barely been discussed so far; and second, a technique for image processing, descriptor extraction and subsequent handshape classification using a supervised adaptation of self-organizing maps called ProbSom. This technique is compared to others in the state of the art, such as Support Vector Machines (SVM), Random Forests, and Neural Networks. The database that was built contains 800 images of 16 LSA handshapes, and is a first step towards building a comprehensive database of Argentinian signs. The ProbSom-based neural classifier, using the proposed descriptor, achieved an accuracy rate above 90%.
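
    ProbSom itself is the authors' method; the sketch below only illustrates the general idea of a supervised self-organizing map: train a SOM on handshape descriptors, attach a class histogram to each neuron, and classify a new descriptor by the class distribution of its winning neuron. It assumes the third-party `minisom` package and uses synthetic descriptors in place of the real LSA data.

```python
# Illustration of a supervised SOM classifier (not the paper's ProbSom).
import numpy as np
from collections import defaultdict, Counter
from minisom import MiniSom

rng = np.random.default_rng(0)
X = rng.random((800, 32))                 # 800 descriptors of length 32 (illustrative)
y = rng.integers(0, 16, size=800)         # 16 handshape classes, as in the LSA database

som = MiniSom(10, 10, input_len=32, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(X, 5000)

# Build a per-neuron class histogram from the training data.
hist = defaultdict(Counter)
for xi, yi in zip(X, y):
    hist[som.winner(xi)][int(yi)] += 1

def predict(x):
    """Return the most frequent class of the winning neuron (-1 if the neuron is empty)."""
    counts = hist.get(som.winner(x))
    return counts.most_common(1)[0][0] if counts else -1

print(predict(X[0]))
```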

    Indian Sign Language Recognition System for Differently-able People

    Sign languages commonly develop in deaf communities, which can include interpreters, friends and families of deaf people, as well as people who are deaf or hard of hearing themselves. Sign language recognition is one of the fastest growing fields of research today, and many new techniques have been developed in this area recently. In this paper we propose a system for converting Indian Sign Language to text using OpenCV. OpenCV can generate motion template images that are used to rapidly determine where motion occurred, how it occurred, and in which direction. It also supports static gesture recognition, which can locate the hand in an image, determine its orientation (right or left), and create a hand mask image. We use digital image processing, in which the captured image is processed by computer, to enhance picture quality before recognition. Our aim is to design a human-computer interface system that can accurately recognise the language of the deaf and dumb.
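
    As a rough illustration of the static-gesture step mentioned above (locating the hand and creating a hand mask), the following OpenCV sketch segments skin-coloured pixels and keeps the largest contour as the hand region. The HSV threshold values are illustrative and normally need tuning per camera and lighting setup.

```python
# Hand-mask sketch with skin-colour thresholding; values are assumptions.
import cv2
import numpy as np

def hand_mask(frame_bgr):
    """Return a binary skin mask and the bounding box of the largest blob."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 30, 60], dtype=np.uint8)    # rough skin range (assumption)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # OpenCV 4.x: findContours returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None
    hand = max(contours, key=cv2.contourArea)        # assume the hand is the largest blob
    return mask, cv2.boundingRect(hand)              # (x, y, w, h)

# Example: grab one frame from the default camera and locate the hand.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    mask, box = hand_mask(frame)
    print("hand bounding box:", box)
cap.release()
```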

    Voice to Sign Language Translator System

    The process of learning sign language may be cumbersome for some, so this project proposes a voice (English language) to sign language translator system built on speech and image processing. Speech processing includes speech recognition, the study of recognising the words being spoken regardless of who the speaker is. This project uses template-based recognition as its main approach. In this approach, the computer is first trained with speech patterns based on a generic set of spectral parameters. These spectral parameter sets are then stored as templates in a database. The system performs recognition by matching the parameter set of the input speech against the stored templates. Template matching is used here because it offers flexibility in storage and matching, and its implementation is easier than that of other methods. This paper discusses the solution to the problem stated above, as well as the methodologies used to develop the system.
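
    A minimal sketch of template-based recognition as described: each vocabulary word is stored as a spectral-parameter template (here, MFCCs), and an input utterance is matched against the stored templates with dynamic time warping. The file names and the four-word vocabulary are assumptions for illustration, not details from the project.

```python
# Template matching of MFCC sequences with DTW (illustrative vocabulary/files).
import librosa
import numpy as np

def mfcc_of(path, sr=16000, n_mfcc=13):
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

# "Training": store one MFCC template per vocabulary word (hypothetical files).
templates = {word: mfcc_of(f"{word}.wav") for word in ("hello", "thanks", "yes", "no")}

def recognise(path):
    """Return the vocabulary word whose template aligns most cheaply with the input."""
    query = mfcc_of(path)
    costs = {}
    for word, ref in templates.items():
        D, _ = librosa.sequence.dtw(X=query, Y=ref)   # accumulated-cost matrix
        costs[word] = D[-1, -1]                       # total alignment cost
    return min(costs, key=costs.get)

print(recognise("input.wav"))
```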

    Multi-Modal Deep Hand Sign Language Recognition in Still Images Using Restricted Boltzmann Machine

    In this paper, a deep learning approach, the Restricted Boltzmann Machine (RBM), is used to perform automatic hand sign language recognition from visual data. We evaluate how the RBM, as a deep generative model, is capable of generating the distribution of the input data for enhanced recognition of unseen data. Two modalities, RGB and depth, are considered as model input in three forms: original image, cropped image, and noisy cropped image. Five crops of the input image are used, and the hands in these crops are detected using a Convolutional Neural Network (CNN). After that, three types of detected hand images are generated for each modality and input to RBMs. The outputs of the RBMs for the two modalities are fused in another RBM in order to recognise the sign label of the input image. The proposed multi-modal model is trained on all and part of the American alphabet and digits of four publicly available datasets. We also evaluate the robustness of the proposal against noise. Experimental results show that the proposed multi-modal model, using crops and the RBM fusion methodology, achieves state-of-the-art results on the Massey University Gesture Dataset 2012, the American Sign Language (ASL) Fingerspelling Dataset from the University of Surrey's Center for Vision, Speech and Signal Processing, the NYU dataset, and the ASL Fingerspelling A dataset.
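
    The following sketch illustrates the fusion scheme at a schematic level: one RBM per modality, concatenation of their hidden activations, a third RBM for fusion, and a classifier on top. It uses scikit-learn's BernoulliRBM on synthetic features and is not the paper's exact model or training procedure.

```python
# Schematic two-modality RBM fusion on synthetic data (not the paper's model).
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
rgb    = rng.random((n, 256))             # placeholder RGB hand-crop features in [0, 1]
depth  = rng.random((n, 256))             # placeholder depth hand-crop features
labels = rng.integers(0, 26, size=n)      # e.g. ASL alphabet labels (illustrative)

# One RBM per modality learns a hidden representation of its features.
rbm_rgb   = BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0).fit(rgb)
rbm_depth = BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0).fit(depth)

# Fuse the two hidden codes in a third RBM, then classify the fused code.
fused_in = np.hstack([rbm_rgb.transform(rgb), rbm_depth.transform(depth)])
rbm_fuse = BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0).fit(fused_in)

clf = LogisticRegression(max_iter=1000).fit(rbm_fuse.transform(fused_in), labels)
print("training accuracy:", clf.score(rbm_fuse.transform(fused_in), labels))
```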

    The Efficacy of the Eigenvector Approach to South African Sign Language Identification

    Masters of Science
    The communication barriers between deaf and hearing society mean that interaction between these communities is kept to a minimum. The South African Sign Language research group, Integration of Signed and Verbal Communication: South African Sign Language Recognition and Animation (SASL), at the University of the Western Cape aims to create technologies to bridge the communication gap. In this thesis we address the subject of whole hand gesture recognition. We demonstrate a method to identify South African Sign Language classifiers using an eigenvector approach. The classifiers researched within this thesis are based on those outlined by the Thibologa Sign Language Institute for SASL. Gesture recognition is achieved in real time. Utilising a pre-processing method for image registration, we are able to increase the recognition rates for the eigenvector approach.
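
    The eigenvector approach is essentially an eigen-image (PCA) projection followed by classification in the reduced subspace. The sketch below shows that pipeline with scikit-learn on synthetic data; the image size, number of components, and number of classifier handshapes are illustrative assumptions rather than values from the thesis.

```python
# Eigen-image (PCA) projection plus nearest-neighbour classification (illustrative).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
train  = rng.random((200, 64 * 64))       # 200 flattened 64x64 registered hand images (placeholder)
labels = rng.integers(0, 10, size=200)    # 10 classifier handshapes (illustrative)

pca = PCA(n_components=40).fit(train)     # the "eigen-hands" subspace
knn = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(train), labels)

test = rng.random((5, 64 * 64))
print(knn.predict(pca.transform(test)))
```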
