
    Sign Language Recognition Using Convolutional Neural Networks

    Abstract: Sign language is the lingua franca of the speech- and hearing-impaired community. It is hard for most people unfamiliar with sign language to communicate without an interpreter. Sign language recognition involves tracking and recognizing the meaningful gestures a person makes with the fingers, hands, head, arms, face, etc. The technique proposed in this work transcribes gestures from a sign language into a spoken language that hearing people readily understand. The gestures translated include alphabets and words, taken from static images. This matters most when people who rely entirely on gestural sign language for communication try to communicate with someone who does not understand sign language. We aim to learn feature representations with convolutional neural networks (CNNs), which contain four types of layers: convolution layers, pooling/subsampling layers, non-linear layers, and fully connected layers. The learned representation is expected to capture various image features and complex non-linear feature interactions. A softmax layer is used to recognize the signs.
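    As a rough sketch of the layer arrangement described in this abstract (not the authors' actual network; the input resolution, filter counts, and number of sign classes below are illustrative assumptions), a minimal CNN for static sign images might look like this in PyTorch:

        import torch
        import torch.nn as nn

        class SignCNN(nn.Module):
            """Convolution -> non-linear -> pooling blocks, then a fully connected layer and softmax."""
            def __init__(self, num_classes=26):  # assumption: one class per alphabet letter
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution layer
                    nn.ReLU(),                                    # non-linear layer
                    nn.MaxPool2d(2),                              # pooling/subsampling layer
                    nn.Conv2d(16, 32, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # fully connected layer

            def forward(self, x):  # x: (batch, 1, 64, 64) grayscale sign image
                x = self.features(x)
                x = torch.flatten(x, 1)
                logits = self.classifier(x)
                return torch.softmax(logits, dim=1)  # softmax layer over sign classes

        model = SignCNN()
        probs = model(torch.randn(1, 1, 64, 64))  # dummy 64x64 image; returns class probabilities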

    Template Matching Based Sign Language Recognition System for Android Devices

    An Android-based sign language recognition system for selected English vocabulary was developed with the explicit objective of examining the specific characteristics responsible for gesture recognition. A recognition model for the process was designed, implemented, and evaluated on 230 samples of hand gestures. The collected samples were pre-processed and rescaled from 3024 × 4032 pixels to 245 × 350 pixels. The samples were examined for the specific characteristics using Oriented FAST and Rotated BRIEF (ORB), and Principal Component Analysis was used for feature extraction. The model was implemented in Android Studio using a template matching algorithm as its classifier. The performance of the system was evaluated using precision, recall, and accuracy as metrics. The system obtained an average classification rate of 87%, an average precision of 88%, and an average recall of 91% on the test data of hand gestures. The study therefore successfully classified hand gestures for selected English vocabulary. The developed system will enhance communication between hearing and hearing-impaired people and aid their teaching and learning processes. Future work includes exploring state-of-the-art machine learning techniques such as Generative Adversarial Networks (GANs) on larger datasets to improve accuracy. Keywords: Feature extraction; Gesture recognition; Sign language; Vocabulary; Android device
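    A simplified desktop sketch of that pipeline (in Python/OpenCV rather than the Android Studio implementation described in the paper; the template file names and the use of a brute-force Hamming matcher as the template-matching step are assumptions, and the PCA stage is omitted):

        import cv2

        def orb_descriptors(path, size=(245, 350)):
            """Load a hand-gesture image, rescale it as in the paper, and extract ORB features."""
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            img = cv2.resize(img, size)            # samples rescaled to 245 x 350 pixels
            orb = cv2.ORB_create()                 # Oriented FAST and Rotated BRIEF
            _, descriptors = orb.detectAndCompute(img, None)
            return descriptors

        def classify(query_path, templates):
            """Match the query against each template; the label with the most matches wins."""
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            query = orb_descriptors(query_path)
            scores = {label: len(matcher.match(query, orb_descriptors(path)))
                      for label, path in templates.items()}
            return max(scores, key=scores.get)

        # Hypothetical template set: one reference image per vocabulary word.
        templates = {"hello": "templates/hello.jpg", "thanks": "templates/thanks.jpg"}
        print(classify("query_gesture.jpg", templates))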

    Informative Function in the Contents of Preachers’ Sermons in Jayapura Churches

    The informative function serves to represent one's understanding of facts and knowledge, to communicate information, and to deliver messages. In their sermons, preachers in the GKI churches of Jayapura show that language in a theological context can be elaborated with local education and wisdom in Sentani. This study tries to identify the micro functions of the informative function used by preachers in educating their congregations. The method used was descriptive qualitative, with recording as the data-collection technique. The data were transcribed and then reduced, and the reduced data were indexed into a table for display. The data were analysed on the basis of the elements and features of language used in the informative function. The results show that the micro functions of the informative function used by the preachers are to give advice, to lecture, to announce, and to give opinions. These micro functions are framed in true values, facts, and historical traces. Further action is also facilitated by the level of acceptance or understanding among the congregations. Acceptance is evidenced by four elements: language use, context, point of discussion, and attitude. All four elements show a positive response from the congregations.

    Fuzzy rule-based hand gesture recognition

    This paper introduces a fuzzy rule-based method for the recognition of hand gestures acquired from a data glove, with an application to the recognition of some sample hand gestures of LIBRAS, the Brazilian Sign Language. The method uses the set of finger-joint angles to classify hand configurations, and classifications of segments of hand gestures to recognize gestures. The segmentation of gestures is based on the concept of monotonic gesture segments: sequences of hand configurations in which the variations of the finger-joint angles have the same sign (non-increasing or non-decreasing). Each gesture is characterized by its list of monotonic segments. The set of all lists of segments of a given set of gestures determines a set of finite automata that are able to recognize every such gesture.
    IFIP International Conference on Artificial Intelligence in Theory and Practice - Speech and Natural Language. Red de Universidades con Carreras en Informática (RedUNCI).
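    The segmentation idea can be illustrated with a toy sketch (assuming a single finger-joint angle sampled over time; the fuzzy hand-configuration rules and the finite automata built from the segment lists are not shown):

        def monotonic_segments(angles):
            """Split a sequence of joint angles into maximal segments whose variation
            keeps the same sign (non-increasing or non-decreasing)."""
            segments, start = [], 0
            sign = 0  # 0 = undecided, +1 = non-decreasing, -1 = non-increasing
            for i in range(1, len(angles)):
                step = angles[i] - angles[i - 1]
                step_sign = (step > 0) - (step < 0)
                if sign == 0:
                    sign = step_sign
                elif step_sign != 0 and step_sign != sign:
                    segments.append((start, i - 1, sign))  # close the current monotonic segment
                    start, sign = i - 1, step_sign
            segments.append((start, len(angles) - 1, sign))
            return segments

        # Example: the angle rises, then falls -> two monotonic segments.
        print(monotonic_segments([10, 12, 15, 15, 14, 9]))  # [(0, 3, 1), (3, 5, -1)]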

    Classification of Mexican Sign Language with SVM by Generating Artificial Data

    The development of tools that ease communication for deaf people is a very important current research challenge. One line of research is the development of vision systems with strong generalization power. Obtaining good generalization accuracy requires a very large dataset during training, and enlarging the data often only adds repetitive, unrepresentative information. In this article we describe the development of a sign language recognition system. The proposed system identifies Mexican Sign Language with high accuracy by introducing highly representative, artificially generated data, which improve the generalization capability of the classifier. The results obtained show that, with the proposed methodology, the algorithm improves the generalization accuracy of SVMs.
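    The abstract does not spell out how the artificial samples are generated, so the sketch below only illustrates the general idea of training an SVM on real feature vectors plus synthetically perturbed copies; the Gaussian jitter used here is a placeholder for the paper's generation method, and the data are random dummy vectors:

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        # Dummy stand-in for real sign feature vectors: 100 samples, 20 features, 5 sign classes.
        X_real = rng.normal(size=(100, 20))
        y_real = rng.integers(0, 5, size=100)

        # Artificial samples: perturbed copies of the real ones (placeholder generation strategy).
        X_art = X_real + rng.normal(scale=0.05, size=X_real.shape)
        y_art = y_real

        X_train = np.vstack([X_real, X_art])
        y_train = np.concatenate([y_real, y_art])

        clf = SVC(kernel="rbf").fit(X_train, y_train)  # SVM classifier
        print(clf.score(X_real, y_real))               # accuracy on the original samples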

    A Study of a Sign Language Recognition Method Using a Similarity Measure Including Non-Manual (NM) Expressions


    Sign Language Recognition

    This chapter covers the key aspects of sign-language recognition (SLR), starting with a brief introduction to the motivations and requirements, followed by a précis of sign linguistics and its impact on the field. The types of data available and their relative merits are explored, allowing examination of the features which can be extracted. Classifying the manual aspects of sign (similar to gestures) is then discussed from tracking and non-tracking viewpoints, before summarising some of the approaches to the non-manual aspects of sign languages. Methods for combining the sign classification results into full SLR are given, showing the progression towards speech recognition techniques and the further adaptations required for the sign-specific case. Finally, the current frontiers are discussed and recent research presented, covering the task of continuous sign recognition, the work towards true signer independence, how to effectively combine the different modalities of sign, making use of current linguistic research, and adapting to larger, noisier data sets.