
    ARTMAP Neural Network Classification of Land Use Change

    The ability to detect and monitor changes in land use is essential for assessment of the sustainability of development. In the next decade, NASA will gather high-resolution multi-spectral and multi-temporal data, which could be used for detecting and monitoring long-term changes. Existing methods are insufficient for detecting subtle long-term changes from high-dimensional data. This project employs neural network architectures as alternatives to conventional systems for classifying changes in the status of agricultural lands from a sequence of satellite images. Landsat TM imagery of the Nile River delta provides a testbed for these land use change classification methods. A sequence of ten images was taken, at various times of year, from 1984 to 1993. Field data were collected during the summer of 1993 at 88 sites in the Nile Delta and surrounding desert areas. Ground truth data for 231 additional sites were determined by expert site assessment at the Boston University Center for Remote Sensing. The field observations are grouped into classes including urban, reduced productivity agriculture, agriculture in delta, desert/coast reclamation, wetland reclamation, and agriculture in desert/coast. Reclamation classes represent land use changes. A particular challenge posed by this database is the unequal representation of the various land use categories: urban and agriculture in delta pixels comprise the vast majority of the ground truth data available in the database. A new, two-step training data selection method was introduced to enable unbiased training of neural network systems on sites with unequal numbers of pixels. Data were successfully classified by using multi-date feature vectors, containing data from all of the available satellite images, as inputs to the neural network system. National Science Foundation Graduate Fellowship; National Science Foundation (SBR 95-13889); Office of Naval Research (N00014-95-I-409, N00014-95-0657); Air Force Office of Scientific Research (F49620-01-1-0397)
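    The two-step selection procedure itself is not spelled out in the abstract, so the sketch below only illustrates the general idea under stated assumptions: stack all bands from every acquisition date into one multi-date feature vector per pixel, then draw an equal number of training pixels from each land-use class so that over-represented classes (urban, agriculture in delta) do not bias the network. All function names and the sampling rule are hypothetical.

        import numpy as np

        def build_multidate_features(image_stack):
            """Stack bands from all acquisition dates into one feature vector per pixel.
            image_stack: array of shape (dates, bands, rows, cols);
            returns an array of shape (rows*cols, dates*bands)."""
            dates, bands, rows, cols = image_stack.shape
            return image_stack.reshape(dates * bands, rows * cols).T

        def balanced_sample(features, labels, per_class, seed=0):
            """Draw at most per_class training pixels from every land-use class
            (an assumed simplification of the paper's unbiased two-step selection)."""
            rng = np.random.default_rng(seed)
            chosen = []
            for c in np.unique(labels):
                pool = np.flatnonzero(labels == c)
                chosen.append(rng.choice(pool, size=min(per_class, pool.size), replace=False))
            idx = np.concatenate(chosen)
            return features[idx], labels[idx]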

    An Energy-Efficient Spiking CNN Implementation for Cross-Patient Epileptic Seizure Detection

    This research aims to develop a data-driven, computationally efficient strategy for automatic cross-patient seizure detection using spatio-temporal features learned from multichannel electroencephalogram (EEG) time-series data. In this approach, we utilize an algorithm that seeks to capture spectral, temporal, and spatial information in order to achieve high generalization. This algorithm's initial step is to convert EEG signals into a series of temporal and multi-spectral images. The produced images are then fed into a convolutional neural network (CNN) as inputs. Our convolutional neural network, as a deep learning method, learns a general spatially irreducible representation of a seizure and achieves sensitivity, specificity, and accuracy comparable to state-of-the-art results. In this work, in order to avoid the inherent high computational cost of CNNs while benefiting from their superior classification performance, a neuromorphic computing strategy for seizure prediction called a spiking CNN is developed from the traditional CNN method, motivated by the energy-efficient spiking neural networks (SNNs) of the human brain.
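    The abstract describes the front end (EEG window to multi-band image to CNN) but not its exact parameters, so the following is a minimal sketch under assumed band definitions, window length, and CNN layout; the spiking (SNN) conversion step described in the paper is not shown, and all names are illustrative.

        import numpy as np
        from scipy.signal import welch
        import torch.nn as nn

        BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

        def window_to_band_powers(eeg_window, fs=256):
            """eeg_window: (channels, samples). Returns (n_bands, channels) band powers;
            projecting these onto a 2-D scalp grid to form images is not shown here."""
            freqs, psd = welch(eeg_window, fs=fs, nperseg=fs)
            return np.stack([psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                             for lo, hi in BANDS.values()])

        class SeizureCNN(nn.Module):
            """Small CNN over multi-band images; layer sizes are assumptions."""
            def __init__(self, n_bands=3, n_classes=2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(n_bands, 16, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1))
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, x):  # x: (batch, n_bands, height, width)
                return self.classifier(self.features(x).flatten(1))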

    High resolution urban monitoring using neural network and transform algorithms

    The advent of new high spatial resolution optical satellite imagery has greatly increased our ability to monitor land cover from space. Satellite observations are carried out regularly and continuously and provide a great deal of information on land cover over large areas. High spatial resolution imagery makes it possible to overcome the “mixed-pixel” problem inherent in more moderate resolution satellite sensors. At the same time, high-resolution images present a new challenge over other satellite systems, since a relatively large amount of data must be analyzed, processed, and classified in order to characterize land cover features and to produce classification maps. Indeed, in spite of the great potential of remote sensing as a source of information on land cover and the long history of research devoted to the extraction of land cover information from remotely sensed imagery, many problems have been encountered, and the accuracy of land cover maps derived from remotely sensed imagery has often been viewed as too low for operational users. This study focuses on high resolution urban monitoring using Neural Network (NN) analyses for land cover classification and change detection, and Fast Fourier Transform (FFT) evaluations of wavenumber spectra to characterize the spatial scales of land cover features. The contributions of the present work include: classification and change detection for urban areas using NN algorithms and multi-temporal very high resolution multi-spectral images (QuickBird, Digital Globe Co.); development and implementation of neural networks capable of classifying a variety of multi-spectral images of cities located anywhere in the world; use of different wavenumber spectra produced by two-dimensional FFTs to understand the origin of significant features in the images of different urban environments subject to the subsequent classification; and optimization of the neural net topology to classify urban environments, to produce thematic maps, and to analyze urbanization processes. This work can be considered a first step in demonstrating how NN and FFT algorithms can contribute to the development of Image Information Mining (IIM) in Earth Observation.
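    The FFT part of the analysis reduces to a familiar computation: a two-dimensional power spectrum of an image band, radially averaged so that each wavenumber bin summarizes one spatial scale. The sketch below is an illustrative implementation under that reading, not the authors' code; names are placeholders.

        import numpy as np

        def radial_power_spectrum(band):
            """band: 2-D array (single spectral band). Returns (wavenumber, power)."""
            power = np.abs(np.fft.fftshift(np.fft.fft2(band))) ** 2
            rows, cols = band.shape
            y, x = np.indices((rows, cols))
            r = np.hypot(y - rows // 2, x - cols // 2).astype(int)
            # Mean power in each integer wavenumber bin (guard against empty bins)
            counts = np.maximum(np.bincount(r.ravel()), 1)
            spectrum = np.bincount(r.ravel(), weights=power.ravel()) / counts
            return np.arange(spectrum.size), spectrum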

    Learning Spectral-Spatial-Temporal Features via a Recurrent Convolutional Neural Network for Change Detection in Multispectral Imagery

    Change detection is one of the central problems in earth observation and has been extensively investigated over recent decades. In this paper, we propose a novel recurrent convolutional neural network (ReCNN) architecture, which is trained to learn a joint spectral-spatial-temporal feature representation in a unified framework for change detection in multispectral images. To this end, we bring together a convolutional neural network (CNN) and a recurrent neural network (RNN) into one end-to-end network. The former is able to generate rich spectral-spatial feature representations, while the latter effectively analyzes temporal dependency in bi-temporal images. In comparison with previous approaches to change detection, the proposed network architecture possesses three distinctive properties: 1) it is end-to-end trainable, in contrast to most existing methods whose components are separately trained or computed; 2) it naturally harnesses spatial information, which has been proven to be beneficial to the change detection task; 3) it is capable of adaptively learning the temporal dependency between multitemporal images, unlike most algorithms, which use fairly simple operations such as image differencing or stacking. As far as we know, this is the first time that a recurrent convolutional network architecture has been proposed for multitemporal remote sensing image analysis. The proposed network is validated on real multispectral data sets. Both visual and quantitative analyses of the experimental results demonstrate the competitive performance of the proposed model.
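    The abstract fixes the overall wiring (a CNN for spectral-spatial features, an RNN for the temporal dependency between the two dates, trained end to end) but not the layer sizes or exact formulation, so the following PyTorch sketch is only a patch-based illustration of that wiring; every dimension and name is an assumption rather than the paper's design.

        import torch
        import torch.nn as nn

        class ReCNNSketch(nn.Module):
            def __init__(self, in_bands=4, hidden=64, n_classes=2):
                super().__init__()
                # Shared CNN encoder applied to the patch from each date
                self.cnn = nn.Sequential(
                    nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())        # -> (batch, 64)
                # RNN over the length-2 temporal sequence of CNN features
                self.rnn = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_classes)          # change / no-change

            def forward(self, patch_t1, patch_t2):
                seq = torch.stack([self.cnn(patch_t1), self.cnn(patch_t2)], dim=1)
                _, (h, _) = self.rnn(seq)
                return self.head(h[-1])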

    Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks

    One of the challenges in modeling cognitive events from electroencephalogram (EEG) data is finding representations that are invariant to inter- and intra-subject differences, as well as to the inherent noise associated with such data. Herein, we propose a novel approach for learning such representations from multi-channel EEG time-series, and demonstrate its advantages in the context of a mental load classification task. First, we transform EEG activities into a sequence of topology-preserving multi-spectral images, as opposed to standard EEG analysis techniques that ignore such spatial information. Next, we train a deep recurrent-convolutional network inspired by state-of-the-art video classification to learn robust representations from the sequence of images. The proposed approach is designed to preserve the spatial, spectral, and temporal structure of EEG, which leads to finding features that are less sensitive to variations and distortions within each dimension. Empirical evaluation on the cognitive load classification task demonstrated significant improvements in classification accuracy over current state-of-the-art approaches in this field. Comment: To be published as a conference paper at ICLR 2016.
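    The distinctive preprocessing step here is the topology-preserving projection: per-band electrode powers are interpolated onto a 2-D grid using the electrodes' projected scalp coordinates, giving one multi-channel image per time window. The sketch below illustrates that step only; the grid size, interpolation method, and input shapes are assumptions, not the paper's exact settings.

        import numpy as np
        from scipy.interpolate import griddata

        def band_powers_to_image(powers, electrode_xy, grid_size=32):
            """powers: (n_bands, n_electrodes); electrode_xy: (n_electrodes, 2) in [0, 1].
            Returns a topology-preserving image of shape (n_bands, grid_size, grid_size)."""
            gx, gy = np.mgrid[0:1:grid_size * 1j, 0:1:grid_size * 1j]
            return np.stack([
                griddata(electrode_xy, band, (gx, gy), method="cubic", fill_value=0.0)
                for band in powers])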

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.