
    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL systems. Comment: 64 pages, 411 references. To appear in the Journal of Applied Remote Sensing.

    Review on Classification Methods used in Image based Sign Language Recognition System

    Sign language is the way of communication among deaf and mute people, expressed through signs. This paper presents a review of sign language recognition systems that aim to provide a means of communication for deaf and mute people. It covers image-based sign language recognition, in which signs take the form of hand gestures identified from images as well as videos. Gestures are identified and classified according to features of the gesture image, such as shape, rotation, angle, pixels, and hand movement. These features are obtained by various feature extraction methods and classified by various machine learning methods. The main purpose of this paper is to review the classification methods used in image-based hand gesture recognition systems. The paper also compares various systems on the basis of classification method and accuracy rate.

    Enhancing Sign Language Recognition through Fusion of CNN Models

    This study introduces a hybrid model for the recognition of sign language, with a specific focus on American Sign Language (ASL) and Indian Sign Language (ISL). Departing from traditional machine learning methods, the model blends hand-crafted techniques with deep learning approaches to overcome the limitations of each. Notably, the hybrid model achieves an accuracy of 96% for ASL and 97% for ISL, surpassing the typical 90-93% accuracy rates of previous models. This result underscores the efficacy of combining predefined features and rules with neural networks. What sets the hybrid model apart is its versatility in recognizing both ASL and ISL signs, addressing the global variation in sign languages. The elevated accuracy makes it a practical and accessible tool for the hearing-impaired community, with significant implications for real-world applications in education, healthcare, and other contexts where improved communication between hearing-impaired individuals and others is paramount. The study represents a noteworthy stride in sign language recognition, presenting a hybrid model that accurately identifies ASL and ISL signs, thereby contributing to the advancement of communication and inclusivity.
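The abstract does not describe how the two models' outputs are combined. One common way such a hybrid can merge a hand-crafted pipeline with a CNN is probability-level (late) fusion; the sketch below assumes weighted averaging of per-class probabilities, and all scores and class counts shown are made up for illustration, not taken from the paper:

```python
import numpy as np

def fuse_predictions(probs_a, probs_b, weight_a=0.5):
    """Late fusion: weighted average of two models' class probabilities,
    renormalised so the fused scores sum to 1."""
    probs_a = np.asarray(probs_a, dtype=float)
    probs_b = np.asarray(probs_b, dtype=float)
    fused = weight_a * probs_a + (1.0 - weight_a) * probs_b
    return fused / fused.sum(axis=-1, keepdims=True)

# Hypothetical per-class scores for one sign image from two classifiers
model_a = [0.70, 0.20, 0.10]   # e.g. hand-crafted feature pipeline
model_b = [0.40, 0.50, 0.10]   # e.g. CNN
fused = fuse_predictions(model_a, model_b)
predicted_class = int(np.argmax(fused))
```

With equal weights the fused vector is [0.55, 0.35, 0.10], so the confident vote of the first model wins; in practice the weight would be tuned on a validation set.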

    Multimodal Based Audio-Visual Speech Recognition for Hard-of-Hearing: State of the Art Techniques and Challenges

    Multimodal Integration (MI) is the study of merging knowledge acquired by the nervous system through sensory modalities such as speech, vision, touch, and gesture. Applications of MI span Audio-Visual Speech Recognition (AVSR), Sign Language Recognition (SLR), Emotion Recognition (ER), Biometric Applications (BMA), Affect Recognition (AR), Multimedia Retrieval (MR), etc. Fusions of modalities such as hand gesture with face, or lips with hand position, are the sensory combinations most used in developing multimodal systems for the hearing impaired. This paper presents an overview of the multimodal systems reported in the literature for hearing-impaired studies, and also discusses studies related to hearing-impaired acoustic analysis. It is observed that far fewer algorithms have been developed for hearing-impaired AVSR than for normal hearing. Audio-visual speech recognition systems are therefore in high demand for hearing-impaired people trying to communicate in natively spoken languages. The paper also highlights the state-of-the-art techniques in AVSR and the challenges researchers face in developing AVSR systems.

    Indian Sign Language Recognition Using Deep Learning Techniques

    By automatically translating Indian Sign Language into English speech, a portable multimedia Indian Sign Language translation program can help deaf and/or mute people communicate with hearing people. It can act as a translator for those who do not understand sign language, eliminating the need for a mediator and allowing communication to take place in the speaker's native language. Without such tools, deaf and mute people are often denied regular educational opportunities, and those without schooling have a difficult time communicating even within their own community. We provide an integrated Android application to help unschooled deaf and mute people fit into society and connect with others. The application includes a straightforward keyboard translator that can convert any term from Indian Sign Language to English. The proposed system is an interactive application program for mobile phones. The phone's camera photographs Indian Sign Language gestures, the device performs the vision-processing tasks, and the built-in audio hardware outputs speech, limiting the need for extra devices and costs. Parallel processing reduces the perceived latency between the hand signal and its translation, allowing very quick translation of finger and hand motions. The system is capable of recognizing one-handed sign representations of the numbers 0 through 9. The findings show that the results are highly reproducible, consistent, and accurate.

    Support Vector Machine based Image Classification for Deaf and Mute People

    A hand gesture recognition system provides a natural, innovative and modern way of nonverbal communication, with a wide range of applications in human-computer interaction and sign language. The whole system consists of three components: hand detection, gesture recognition, and human-computer interaction (HCI) based on the recognized gesture. The existing technique uses ANFIS (adaptive neuro-fuzzy inference system) to recognize gestures, making it possible to identify relatively complex gestures, but its complexity is high and its performance is low. To achieve high accuracy and high performance with less complexity, a gray illumination technique is introduced in the proposed hand gesture recognition system. Live video is converted into frames and each frame is resized; a gray illumination algorithm is then applied for color balancing in order to segment the skin region. Morphological feature extraction is then carried out, after which a support vector machine (SVM) is trained and tested for gesture recognition. Finally, the recognized character's sound is played as audio output.
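The abstract describes the SVM stage only at a high level. As a minimal sketch of that stage, the snippet below trains a hand-rolled linear SVM (hinge loss with L2 penalty, full-batch subgradient descent) on made-up two-dimensional "morphological" feature vectors; the feature values, labels, and hyperparameters are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM: hinge loss + L2 penalty, full-batch
    subgradient descent. Labels y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                      # samples violating the margin
        grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, X):
    return np.where(X @ w + b >= 0, 1, -1)

# Toy stand-ins for morphological features (e.g. skin-region area, aspect ratio)
X = np.array([[0.2, 0.1], [0.3, 0.2], [0.1, 0.3],   # gesture class -1
              [0.9, 0.8], [0.8, 0.9], [1.0, 0.7]])  # gesture class +1
y = np.array([-1, -1, -1, 1, 1, 1])
Xc = X - X.mean(axis=0)   # centre features so the bias starts near its optimum
w, b = train_linear_svm(Xc, y)
```

In practice one would use a tuned library implementation rather than this toy solver; the point is only to show where the trained weights come from and how prediction reduces to the sign of `X @ w + b`.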

    Analysis of the Efficacy of Real-Time Hand Gesture Detection with HOG and Haar-Like Features Using SVM Classification

    The field of hand gesture recognition has recently reached new heights thanks to its widespread use in domains like remote sensing, robotic control, and smart home appliances, among others. Even so, identifying gestures is difficult because of the highly variable appearance of the human hand, which makes gesture codes hard to decode and compare. Differentiating regional patterns is the job of pattern recognition, and pattern recognition is at the heart of sign language: people who are deaf or mute can engage with the spoken language of the rest of the world by learning sign language, in which signs may be formed with any part of the body. The suggested system employs a gesture recognition pipeline trained on Indian Sign Language. This work discusses preprocessing, hand segmentation, feature extraction, gesture identification, and classification as they pertain to hand gesture sign language. A hybrid approach is used to extract the features, combining Haar-like features with the Histogram of Oriented Gradients (HOG). The extracted features are then fed to an SVM classifier to produce an accurate classification. The system achieves a hand gesture detection accuracy of 93.5% with a false rejection error rate of 8%.
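The core of the HOG half of this hybrid descriptor is an orientation histogram of image gradients computed per cell. The sketch below is a generic illustration of that gradient-histogram step in numpy, not the paper's implementation; the 8x8 cell size and 9 unsigned-orientation bins are the conventional HOG defaults, assumed here:

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Magnitude-weighted histogram of unsigned gradient orientations
    for one cell -- the core computation of the HOG descriptor."""
    patch = patch.astype(float)
    gy, gx = np.gradient(patch)                  # per-pixel image gradients
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees, as in standard HOG
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((orientation / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())  # accumulate magnitudes
    return hist

# Synthetic 8x8 cell with a purely horizontal intensity ramp:
# every gradient points along x, so all energy lands in the 0-degree bin
cell = np.tile(np.arange(8, dtype=float), (8, 1))
hist = hog_cell_histogram(cell)
```

A full descriptor would tile the image into cells, normalize histograms over overlapping blocks, and concatenate them before handing the vector to the SVM.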