
    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL systems. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.

    A Novel Approach for Optimization of Convolution Neural Network with Particle Swarm Optimization and Genetic Algorithm for Face Recognition

    Convolutional neural networks (CNNs) are contemporary deep learning models employed for a wide range of applications. In general, the filter size, number of filters, number of convolutional layers, number of fully connected layers, activation function and learning rate are some of the hyperparameters that significantly determine how well a CNN performs. These hyperparameters are usually selected manually and varied for each CNN model depending on the application and dataset. During optimization, a CNN can get stuck in local minima. To overcome this, metaheuristic algorithms are used for optimization. In this work, the CNN structure is first constructed with randomly chosen hyperparameters, and these parameters are then optimized using the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). A CNN with optimized hyperparameters is used for face recognition. CNNs optimized with these algorithms use the RMSprop optimizer instead of stochastic gradient descent, which helps the CNN reach the global minimum more quickly. It has been observed that optimizing with GA and PSO improves the performance of CNNs and reduces the time the CNN takes to reach the global minimum.
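    The abstract does not include the optimization code; below is a minimal sketch of how PSO can search a CNN hyperparameter space (number of filters, filter size, number of convolutional layers, learning rate). The fitness function is a placeholder for "build and train the CNN with RMSprop, then return validation accuracy"; the bounds, swarm settings and variable names are illustrative assumptions rather than the authors' implementation.

```python
# Minimal PSO sketch over CNN hyperparameters (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Search space: [num_filters, filter_size, num_conv_layers, log10(learning_rate)]
lower = np.array([8.0, 3.0, 1.0, -4.0])
upper = np.array([64.0, 7.0, 4.0, -2.0])

def fitness(params):
    """Placeholder: build/train a CNN with these hyperparameters (e.g. with
    RMSprop) and return validation accuracy. A synthetic score is used here."""
    return -np.sum((params - (lower + upper) / 2) ** 2)

n_particles, n_iters = 10, 20
pos = rng.uniform(lower, upper, size=(n_particles, lower.size))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)]

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lower, upper)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)]

print("best hyperparameters found:", gbest)
```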

    Enhancing Sign Language Recognition through Fusion of CNN Models

    This study introduces a hybrid model for sign language recognition, with a specific focus on American Sign Language (ASL) and Indian Sign Language (ISL). Departing from traditional machine learning methods, the model blends hand-crafted techniques with deep learning approaches to overcome their respective limitations. The hybrid model achieves an accuracy of 96% for ASL and 97% for ISL, surpassing the typical 90-93% accuracy of previous models and underscoring the efficacy of combining predefined features and rules with neural networks. What sets the hybrid model apart is its versatility in recognizing both ASL and ISL signs, addressing global variation in sign languages. The elevated accuracy makes it a practical and accessible tool for the hearing-impaired community, with significant implications for real-world applications, particularly in education, healthcare, and other contexts where improved communication between hearing-impaired individuals and others is paramount. The study thus presents a hybrid model that accurately identifies ASL and ISL signs, contributing to the advancement of communication and inclusivity.
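    The abstract does not specify how the hand-crafted and learned features are combined; the sketch below shows one common fusion pattern, concatenating a hand-crafted descriptor vector with CNN features before a shared classifier. The input shapes, layer sizes and 26-class output are illustrative assumptions, not the paper's architecture.

```python
# Illustrative fusion of a CNN image branch with a hand-crafted feature
# vector (e.g. hand-shape descriptors); all details are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

image_in = tf.keras.Input(shape=(64, 64, 3), name="hand_image")
x = layers.Conv2D(32, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

feats_in = tf.keras.Input(shape=(32,), name="handcrafted_features")
f = layers.Dense(32, activation="relu")(feats_in)

merged = layers.Concatenate()([x, f])                  # fuse learned + hand-crafted cues
out = layers.Dense(26, activation="softmax")(merged)   # e.g. 26 static alphabet signs

model = tf.keras.Model([image_in, feats_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```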

    Developing a Prototype to Translate Pakistan Sign Language into Text and Speech While Using Convolutional Neural Networking

    The purpose of the study is to provide a literature review of the work done on sign language in Pakistan and worldwide. The study also provides the framework of an already developed prototype that translates Pakistani Sign Language into speech and text using a convolutional neural network (CNN), helping unimpaired teachers bridge the communication gap with deaf learners. Owing to the lack of sign language training, unimpaired teachers face difficulty in communicating with impaired learners; this communication gap can be filled with the help of such a translation tool. Prior research indicates that a prototype has already been developed to translate English text into sign language, and highlights the need for a tool that translates signs into English text. The current study provides an architectural framework of the Pakistani Sign Language to English text translation tool, showing how different technologies, such as deep learning, convolutional neural networks, Python, TensorFlow, NumPy, InceptionV3 with transfer learning, and eSpeak text-to-speech, contribute to the development of the translation tool prototype. Keywords: Pakistan Sign Language (PSL), sign language (SL), translation, deaf, unimpaired, convolutional neural network (CNN). DOI: 10.7176/JEP/10-15-18. Publication date: May 31st 2019
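    As a rough illustration of the pipeline the abstract names (InceptionV3 with transfer learning for sign classification, eSpeak for speech output), the sketch below freezes ImageNet-pretrained InceptionV3 features and adds a small classifier head. The PSL label list, layer sizes and the sign_to_speech helper are hypothetical placeholders, not the prototype's actual code; the espeak call assumes the eSpeak command-line tool is installed.

```python
# Hedged sketch: InceptionV3 transfer learning + eSpeak speech output.
import subprocess
import tensorflow as tf
from tensorflow.keras import layers

PSL_LABELS = ["salaam", "shukriya", "paani"]   # placeholder PSL sign labels

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                         # transfer learning: freeze pretrained features

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(128, activation="relu")(x)
out = layers.Dense(len(PSL_LABELS), activation="softmax")(x)
model = tf.keras.Model(base.input, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

def sign_to_speech(image_batch):
    """Predict the sign for a preprocessed image batch and speak the label."""
    probs = model.predict(image_batch)
    text = PSL_LABELS[int(probs[0].argmax())]
    subprocess.run(["espeak", text])           # assumes the eSpeak CLI is installed
    return text
```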

    Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech

    We describe a statistical approach for modeling dialogue acts in conversational speech, i.e., speech-act-like units such as Statement, Question, Backchannel, Agreement, Disagreement, and Apology. Our model detects and predicts dialogue acts based on lexical, collocational, and prosodic cues, as well as on the discourse coherence of the dialogue act sequence. The dialogue model is based on treating the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. Constraints on the likely sequence of dialogue acts are modeled via a dialogue act n-gram. The statistical dialogue grammar is combined with word n-grams, decision trees, and neural networks modeling the idiosyncratic lexical and prosodic manifestations of each dialogue act. We develop a probabilistic integration of speech recognition with dialogue modeling, to improve both speech recognition and dialogue act classification accuracy. Models are trained and evaluated using a large hand-labeled database of 1,155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. We achieved good dialogue act labeling accuracy (65% based on errorful, automatically recognized words and prosody, and 71% based on word transcripts, compared to a chance baseline accuracy of 35% and human accuracy of 84%) and a small reduction in word recognition error. Comment: 35 pages, 5 figures. Changes in copy editing (note title spelling changed).
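    To make the HMM formulation concrete, here is a minimal Viterbi sketch in which the hidden states are dialogue acts, transitions come from a dialogue-act bigram, and the emission scores stand in for the lexical/prosodic likelihood models. The act set and all probabilities are toy values, not the trained Switchboard models.

```python
# Toy Viterbi decoding of dialogue acts under an HMM with a bigram dialogue grammar.
import numpy as np

ACTS = ["Statement", "Question", "Backchannel"]
log_trans = np.log(np.array([            # P(act_t | act_{t-1}), rows sum to 1
    [0.6, 0.2, 0.2],
    [0.3, 0.2, 0.5],
    [0.7, 0.2, 0.1],
]))
log_init = np.log(np.array([0.6, 0.3, 0.1]))

def viterbi(log_emit):
    """log_emit[t, k] = log P(utterance_t's words/prosody | act k)."""
    T, K = log_emit.shape
    delta = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    delta[0] = log_init + log_emit[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans + log_emit[t]
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0)
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):        # backtrack to recover the best act sequence
        path.append(int(back[t][path[-1]]))
    return [ACTS[k] for k in reversed(path)]

# Toy emission scores for a three-utterance conversation
log_emit = np.log(np.array([[0.7, 0.2, 0.1],
                            [0.2, 0.7, 0.1],
                            [0.1, 0.2, 0.7]]))
print(viterbi(log_emit))
```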