
    Development in Signer-Independent Sign Language Recognition and the Ideas of Solving Some Key Problems


    SIGNER-INDEPENDENT SIGN LANGUAGE RECOGNITION BASED ON HMMs AND DEPTH INFORMATION

    In this paper, we use depth information to effectively locate the 3D position of the hands in a sign language recognition system. However, these positions vary from signer to signer, which degrades recognition. To address this, we use the frame-to-frame increments of the three-dimensional coordinates per unit time as the feature set. We use hidden Markov models (HMMs) as a time-varying classifier to recognize the motion of signs in the time domain, and we incorporate a scaling factor into the HMMs to avoid numerical underflow. Experiments verify that the proposed method is superior to the traditional one. (Sponsorship: Image Processing and Pattern Recognition Society of the Republic of China; National Ilan University Library and Information Center. International conference, 2013-08-18 to 2013-08-20, Yilan, Taiwan.)
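    Both ingredients of this abstract are standard and can be sketched concretely. Below is a minimal numpy illustration, not the paper's code: `delta_features` builds the per-unit-time coordinate increments that reduce signer dependence, and `scaled_forward_loglik` is the classic rescaled forward recursion in which the log-likelihood is recovered from the accumulated scale factors, which is what "HMMs with scaling factor" refers to. All names and the choice of library are our own assumptions.

    ```python
    import numpy as np

    def delta_features(xyz):
        # xyz: (T, 3) hand positions from the depth camera.
        # Per-unit-time increments are less signer-dependent than
        # absolute positions (body size, standing spot cancel out).
        return np.diff(xyz, axis=0)          # (T-1, 3) feature vectors

    def scaled_forward_loglik(b, A, pi):
        # b:  (T, N) observation likelihoods b_j(o_t) per state
        # A:  (N, N) state transition matrix
        # pi: (N,)   initial state distribution
        # Rescaling alpha at every frame keeps the recursion in range;
        # log P(O | model) is the sum of the log scale factors.
        T = b.shape[0]
        alpha = pi * b[0]
        c = alpha.sum()                      # scale factor c_1
        alpha = alpha / c
        loglik = np.log(c)
        for t in range(1, T):
            alpha = (alpha @ A) * b[t]
            c = alpha.sum()                  # scale factor c_t
            alpha = alpha / c
            loglik += np.log(c)
        return loglik
    ```

    Recognition would then score the delta-coordinate sequence against one trained HMM per sign and pick the sign with the largest log-likelihood.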

    Design of a Sensor-Based Glove for Recognition of the Indonesian Sign Language System (Sistem Isyarat Bahasa Indonesia, SIBI)

    This research aims to develop a sensor-equipped glove (an embedded system) for use in a recognition system for SIBI (Sistem Isyarat Bahasa Indonesia), the Indonesian Sign Language System. With this sensor-data-driven approach, the SIBI recognition system is expected to achieve better accuracy, using flex sensors for finger-bending movements and a combined accelerometer-gyroscope to determine the tilt/orientation of the hand. The research is still at the glove design stage; the circuit design, PCB design, PCB fabrication, flex sensor mounting, and microcontroller program design have been completed.
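    The sensor-to-feature mapping such a glove needs is easy to sketch. The snippet below is illustrative only: the component values, the voltage-divider wiring, and all function names are our assumptions, not the paper's design. It converts a raw ADC reading from one flex sensor into a normalized finger-bend value and derives hand roll/pitch from accelerometer gravity components.

    ```python
    import math

    # Illustrative-only component values for one flex sensor wired as the
    # upper leg of a voltage divider; the actual design's parts may differ.
    ADC_MAX = 1023        # 10-bit ADC reading range
    V_REF   = 3.3         # supply / reference voltage (volts)
    R_FIXED = 47_000.0    # fixed divider resistor to ground (ohms)
    R_FLAT  = 25_000.0    # flex sensor resistance, finger straight
    R_BENT  = 100_000.0   # flex sensor resistance, finger fully bent

    def flex_to_bend(adc_reading):
        """Map one raw ADC reading to a 0..1 finger-bend estimate."""
        v = max(adc_reading / ADC_MAX * V_REF, 1e-6)  # divider midpoint voltage
        r_flex = R_FIXED * (V_REF - v) / v            # solve divider for sensor R
        frac = (r_flex - R_FLAT) / (R_BENT - R_FLAT)
        return min(max(frac, 0.0), 1.0)               # clamp to valid range

    def hand_tilt(ax, ay, az):
        """Roll and pitch in degrees from accelerometer gravity components."""
        roll  = math.degrees(math.atan2(ay, az))
        pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
        return roll, pitch

    def glove_features(adc_readings, accel):
        """One glove frame: five bend values plus hand orientation."""
        bends = [flex_to_bend(a) for a in adc_readings]
        return bends + list(hand_tilt(*accel))
    ```

    In a real device the gyroscope would additionally be fused with the accelerometer (e.g. a complementary filter) to stabilize the orientation estimate during fast hand movements.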

    Arabic sign language recognition using an instrumented glove


    Leveraging Graph-based Cross-modal Information Fusion for Neural Sign Language Translation

    Sign Language (SL), as the mother tongue of the deaf community, is a special visual language that most hearing people cannot understand. In recent years, neural Sign Language Translation (SLT), as a possible way of bridging the communication gap between deaf and hearing people, has attracted widespread academic attention. We found that the current mainstream end-to-end neural SLT models, which try to learn language knowledge in a weakly supervised manner, cannot mine enough semantic information under low-resource data conditions. We therefore propose to introduce additional word-level semantic knowledge from sign language linguistics to help improve current end-to-end neural SLT models. Concretely, we propose a novel neural SLT model with multi-modal feature fusion based on a dynamic graph, in which the cross-modal information, i.e. text and video, is first assembled into a dynamic graph according to its correlation, and the graph is then processed by a multi-modal graph encoder to generate multi-modal embeddings for use in the subsequent neural translation model. To the best of our knowledge, we are the first to introduce graph neural networks, for fusing multi-modal information, into neural sign language translation models. Moreover, we conducted experiments on the publicly available SLT dataset RWTH-PHOENIX-Weather-2014T, and the quantitative results show that our method can improve the model…
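    To make the "dynamic graph" idea concrete, here is a generic numpy sketch of cross-modal graph fusion under our own assumptions; it does not reproduce the paper's actual encoder. Edges between video-frame nodes and word-level gloss nodes are chosen per sample from cosine correlation (the "dynamic" part), and one GCN-style message-passing layer then produces fused multi-modal embeddings.

    ```python
    import numpy as np

    def build_dynamic_graph(video_feats, text_feats, top_k=3):
        """Assemble frame and gloss nodes into one graph whose edges are
        chosen per sample from cross-modal cosine correlation."""
        v = video_feats / np.linalg.norm(video_feats, axis=1, keepdims=True)
        g = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
        sim = v @ g.T                              # (T, G) correlation matrix
        n = len(v) + len(g)
        adj = np.eye(n)                            # self-loops keep own features
        for t in range(len(v)):
            for j in np.argsort(sim[t])[-top_k:]:  # strongest gloss partners
                w = max(float(sim[t, j]), 0.0)     # keep positive correlations
                adj[t, len(v) + j] = adj[len(v) + j, t] = w
        nodes = np.vstack([video_feats, text_feats])
        return adj, nodes

    def graph_encode(adj, nodes, W):
        """One GCN-style layer: average neighbour features, project, ReLU."""
        deg = adj.sum(axis=1, keepdims=True)       # >= 1 thanks to self-loops
        return np.maximum((adj / deg) @ nodes @ W, 0.0)
    ```

    The fused node embeddings would then augment or replace the video features fed to the downstream translation decoder.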