
    Sign Language Recognition Using Convolutional Neural Networks

    Sign Language Recognition (SLR) aims to interpret sign language as text or speech, so as to facilitate communication between deaf-mute people and hearing people. This task has broad social impact, but remains very challenging due to the complexity of and large variation in hand actions. Existing methods for SLR use hand-crafted features to describe sign language motion and build classification models on those features. However, it is difficult to design reliable features that adapt to the large variation in hand gestures. To address this problem, we propose a novel convolutional neural network (CNN) that automatically extracts discriminative spatio-temporal features from the raw video stream without any prior knowledge, avoiding hand-crafted feature design. To boost performance, multiple channels of video input, including color information, depth cues, and body-joint positions, are fed to the CNN in order to integrate color, depth, and trajectory information. We validate the proposed model on a real dataset collected with Microsoft Kinect and demonstrate its effectiveness over traditional approaches based on hand-crafted features.
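
    As an illustration of the multi-channel idea described above, the following is a minimal sketch (not the authors' code; the layer sizes, the 20-joint skeleton, and the class count are all assumptions) of a 3D CNN that fuses color, depth, and body-joint streams:

```python
import torch
import torch.nn as nn

class MultiChannelSLR(nn.Module):
    """Hypothetical multi-stream spatio-temporal CNN for sign classification."""
    def __init__(self, num_classes=25):  # class count is an assumption
        super().__init__()
        # One small spatio-temporal feature extractor per video modality.
        def stream(in_ch):
            return nn.Sequential(
                nn.Conv3d(in_ch, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
                nn.ReLU(),
                nn.MaxPool3d((1, 2, 2)),
                nn.Conv3d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
        self.color = stream(3)   # RGB frames
        self.depth = stream(1)   # Kinect depth maps
        # Assumed 20 skeleton joints with (x, y, z) coordinates per frame.
        self.joints = nn.Sequential(nn.Linear(20 * 3, 64), nn.ReLU())
        self.classifier = nn.Linear(32 + 32 + 64, num_classes)

    def forward(self, rgb, depth, joints):
        # rgb: (B, 3, T, H, W), depth: (B, 1, T, H, W), joints: (B, T, 60)
        c = self.color(rgb).flatten(1)
        d = self.depth(depth).flatten(1)
        j = self.joints(joints).mean(dim=1)  # average joint features over time
        return self.classifier(torch.cat([c, d, j], dim=1))
```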

    Sign Language Recognition Using Convolutional Neural Networks

    Abstract: Sign language is the lingua franca of the deaf and hearing-impaired community. It is hard for most people who are unfamiliar with sign language to communicate without an interpreter. Sign language recognition concerns tracking and recognizing the meaningful gestures humans make with their fingers, hands, head, arms, face, etc. The technique proposed in this work transcribes gestures from a sign language into a spoken language that is easily understood by the hearing. The translated gestures include alphabets and words from static images. This matters most when people who rely entirely on gestural sign language for communication try to communicate with someone who does not understand sign language. We aim at learning feature representations with convolutional neural networks (CNNs), which contain four types of layers: convolution layers, pooling/subsampling layers, non-linear layers, and fully connected layers. The new representation is expected to capture various image features and complex non-linear feature interactions. A softmax layer is used to recognize signs.
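
    The four layer types named in this abstract can be shown in a small sketch; the 64x64 grayscale input, the channel counts, and the 26-class (alphabet) output are our assumptions, not details from the paper:

```python
import torch.nn as nn

# Illustrative static-sign classifier built from the four named layer types.
static_sign_cnn = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),   # convolution layer
    nn.ReLU(),                                    # non-linear layer
    nn.MaxPool2d(2),                              # pooling/subsampling layer
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 128),                 # fully connected layer
    nn.ReLU(),
    nn.Linear(128, 26),                           # one logit per alphabet sign
    nn.Softmax(dim=1),                            # softmax recognition layer
)
# Note: when training with nn.CrossEntropyLoss, the final Softmax is
# normally omitted, since that loss applies log-softmax internally.
```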

    Sign language recognition using convolutional neural networks

    There is an undeniable communication problem between the Deaf community and the hearing majority. Innovations in automatic sign language recognition try to tear down this communication barrier. Our contribution considers a recognition system using the Microsoft Kinect, convolutional neural networks (CNNs) and GPU acceleration. Instead of constructing complex handcrafted features, CNNs are able to automate the process of feature construction. We are able to recognize 20 Italian gestures with high accuracy. The predictive model is able to generalize on users and surroundings not occurring during training with a cross-validation accuracy of 91.7%. Our model achieves a mean Jaccard Index of 0.789 in the ChaLearn 2014 Looking at People gesture spotting competition.
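
    The Jaccard Index reported above is a standard overlap measure for gesture spotting; a small sketch of the frame-level version used in the ChaLearn-style evaluation (variable names are ours):

```python
import numpy as np

def jaccard_index(pred_frames: np.ndarray, true_frames: np.ndarray) -> float:
    """Both inputs are boolean masks over video frames, True where the
    gesture is predicted (resp. annotated) to occur."""
    intersection = np.logical_and(pred_frames, true_frames).sum()
    union = np.logical_or(pred_frames, true_frames).sum()
    return intersection / union if union > 0 else 0.0

# Example: a prediction overlapping 8 of 10 annotated frames, with 2 extra.
pred = np.zeros(100, dtype=bool); pred[40:50] = True
true = np.zeros(100, dtype=bool); true[42:52] = True
print(jaccard_index(pred, true))  # 8 / 12 = 0.667
```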

    Sign language recognition with transformer networks

    Sign languages are complex languages. Research into them is ongoing, supported by large video corpora of which only small parts are annotated. Sign language recognition can be used to speed up the annotation process of these corpora, in order to aid research into sign languages and sign language recognition. Previous research has approached sign language recognition in various ways, using feature extraction techniques or end-to-end deep learning. In this work, we apply a combination of feature extraction using OpenPose for human keypoint estimation and end-to-end feature learning with Convolutional Neural Networks. The proven multi-head attention mechanism used in transformers is applied to recognize isolated signs in the Flemish Sign Language corpus. Our proposed method significantly outperforms the previous state of the art of sign language recognition on the Flemish Sign Language corpus: we obtain an accuracy of 74.7% on a vocabulary of 100 classes. Our results will be implemented as a suggestion system for sign language corpus annotation.
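
    A minimal sketch of the described pipeline, classifying isolated signs from per-frame keypoints with a multi-head-attention encoder; the keypoint count (67 body and hand points), the model sizes, and the mean-pooling choice are assumptions rather than the paper's exact configuration:

```python
import torch.nn as nn

class PoseTransformer(nn.Module):
    """Hypothetical transformer over OpenPose keypoint sequences."""
    def __init__(self, num_keypoints=67, d_model=128, num_classes=100):
        super().__init__()
        # Each frame contributes (x, y) coordinates per keypoint.
        self.embed = nn.Linear(num_keypoints * 2, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, poses):
        # poses: (B, T, 134) -- flattened (x, y) keypoints per frame
        x = self.embed(poses)
        x = self.encoder(x)              # self-attention across frames
        return self.head(x.mean(dim=1))  # pool over time, then classify
```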

    Real Time Bangladeshi Sign Language Detection using Faster R-CNN

    Bangladeshi Sign Language (BdSL) is a commonly used medium of communication for hearing-impaired people in Bangladesh. Developing a real-time system to detect these signs from images is a great challenge. In this paper, we present a technique to detect BdSL from images that performs in real time. Our method uses a Convolutional Neural Network based object-detection technique to detect the presence of signs in the image region and to recognize their class. For this purpose, we adopted the Faster Region-based Convolutional Network approach and developed a dataset, BdSLImset, to train our system. Previous research on detecting BdSL generally depends on external devices, while most other vision-based techniques do not perform efficiently in real time. Our approach, however, is free from such limitations, and the experimental results demonstrate that the proposed method successfully identifies and recognizes Bangladeshi signs in real time.

    Comment: 6 pages, accepted at the International Conference on Innovation in Engineering and Technology (ICIET), 27-29 December 2018, Dhaka, Bangladesh
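
    A hedged sketch of how an off-the-shelf Faster R-CNN detector can be adapted to a custom sign-language dataset using torchvision; the class count here is hypothetical, and the paper's actual training setup may differ:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_sign_classes = 36  # assumption; not taken from the paper

# Load a detector pretrained on COCO (older torchvision uses pretrained=True).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box-classification head with one sized for the sign classes
# (+1 for the background class the detector requires).
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(
    in_features, num_sign_classes + 1)
```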

    Big Data Analytics for Early Detection of Drug Safety Signals in Postmarketing Surveillance

    The increasing availability of vast amounts of healthcare data and the advancements in analytics techniques have opened new avenues for drug safety surveillance. This study investigates the application of big data analytics in this domain, focusing on data aggregation, signal detection, real-time monitoring, signal validation and prioritization, comparative effectiveness studies, and data integration and collaboration. It explores how diverse data sources, such as electronic health records (EHRs), insurance claims databases, social media, and patient forums, can be integrated to obtain a comprehensive view of drug usage patterns and potential safety issues. Advanced analytics techniques, including data mining, machine learning, and natural language processing, are examined for their ability to automatically detect potential drug safety signals. The study emphasizes the significance of real-time monitoring for the rapid identification of emerging drug safety issues and the role of signal validation and prioritization in focusing resources on critical signals. Furthermore, it explores how big data analytics enables comparative effectiveness studies to assess the safety profiles of different interventions. The research also highlights the importance of data integration and collaboration in enhancing the understanding of drug safety signals and promoting collective decision-making among stakeholders.
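
    As one concrete example of the signal-detection techniques mentioned above, the following sketch computes the Proportional Reporting Ratio (PRR), a classical disproportionality statistic in pharmacovigilance; the counts are invented for illustration:

```python
def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    """PRR from a 2x2 contingency table of spontaneous reports.
       a: reports with drug X and event Y
       b: reports with drug X, other events
       c: reports with other drugs and event Y
       d: reports with other drugs, other events"""
    return (a / (a + b)) / (c / (c + d))

# Example: PRR > 2 is a common rule-of-thumb threshold for a potential signal.
print(proportional_reporting_ratio(a=30, b=970, c=200, d=49800))  # 7.5
```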

    Japanese sign language classification based on gathered images and neural networks

    This paper proposes a method to classify words in Japanese Sign Language (JSL). The approach combines a gathered-image generation technique with neural networks that have convolutional and pooling layers (CNNs). Gathered-image generation builds images from mean images: for each block, the maximum difference value between the mean image and the JSL motion images is computed, and the gathered image comprises the blocks with these maximum difference values. CNNs extract features from the gathered images, while a support vector machine for multi-class classification and a multilayer perceptron are employed to classify 20 JSL words. Experimental results show a mean recognition accuracy of 94.1% for the proposed method. These results suggest that the proposed method can obtain the information needed to classify the sample words.
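
    A rough sketch of the gathered-image generation step as we read the abstract: for each spatial block, keep the frame block that differs most from the motion's mean image. The block size and array shapes are assumptions, not the paper's specification:

```python
import numpy as np

def gathered_image(frames: np.ndarray, block: int = 16) -> np.ndarray:
    """frames: (T, H, W) grayscale video of one JSL word."""
    mean_img = frames.mean(axis=0)
    out = np.zeros_like(mean_img)
    H, W = mean_img.shape
    for y in range(0, H, block):
        for x in range(0, W, block):
            patch = frames[:, y:y + block, x:x + block]
            ref = mean_img[y:y + block, x:x + block]
            # Pick the frame whose block differs most from the mean block.
            diffs = np.abs(patch - ref).sum(axis=(1, 2))
            out[y:y + block, x:x + block] = patch[diffs.argmax()]
    return out
```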