
    GETNET: A General End-to-end Two-dimensional CNN Framework for Hyperspectral Image Change Detection

    Change detection (CD) is an important application of remote sensing that provides timely change information about the large-scale Earth surface. With the emergence of hyperspectral imagery, CD technology has advanced considerably, as hyperspectral data with high spectral resolution can detect finer changes than traditional multispectral imagery. Nevertheless, the high dimensionality of hyperspectral data makes it difficult to apply traditional CD algorithms, and endmember abundance information at the subpixel level is often not fully utilized. To better handle the high-dimensionality problem and exploit abundance information, this paper presents a General End-to-end Two-dimensional CNN (GETNET) framework for hyperspectral image change detection (HSI-CD). The main contributions of this work are threefold: 1) a mixed-affinity matrix that integrates subpixel representation is introduced to mine more cross-channel gradient features and fuse multi-source information; 2) a 2-D CNN is designed to learn discriminative features effectively from multi-source data at a higher level and to enhance the generalization ability of the proposed CD algorithm; 3) a new HSI-CD data set is designed for the objective comparison of different methods. Experimental results on real hyperspectral data sets demonstrate that the proposed method outperforms most state-of-the-art approaches.
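    The mixed-affinity-matrix plus 2-D CNN design described above can be illustrated with a small sketch. The following is a hypothetical, simplified example: it assumes the per-pixel affinity matrix is formed as an outer product of the two temporal spectral (or spectral-plus-abundance) vectors and uses arbitrary layer sizes, so it should be read as an illustration of the idea rather than the authors' GETNET implementation.

```python
# Hypothetical sketch of the GETNET idea: per-pixel mixed-affinity matrices
# classified by a small 2-D CNN. The affinity construction (outer product of
# the two temporal spectra) and the layer sizes are assumptions for this demo.
import torch
import torch.nn as nn

def mixed_affinity(x_t1: torch.Tensor, x_t2: torch.Tensor) -> torch.Tensor:
    """x_t1, x_t2: (N, B) spectral (+abundance) vectors for N pixels.
    Returns (N, 1, B, B) affinity matrices."""
    return (x_t1.unsqueeze(2) * x_t2.unsqueeze(1)).unsqueeze(1)

class TinyGETNet(nn.Module):
    """A small 2-D CNN mapping each affinity matrix to change / no-change."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)

    def forward(self, a: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(a).flatten(1))

# Example: 198-band pixels from two acquisition dates.
x1, x2 = torch.rand(8, 198), torch.rand(8, 198)
logits = TinyGETNet()(mixed_affinity(x1, x2))   # (8, 2) change scores
```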

    Crop identification technology assessment for remote sensing (CITARS). Volume 10: Interpretation of results

    The CITARS experiment was designed to quantitatively evaluate crop identification performance for corn and soybeans in various environments using a well-defined set of automatic data processing (ADP) techniques. Each technique was applied to the acquired data to recognize corn and soybeans and to estimate their proportions. The CITARS documentation summarizes, interprets, and discusses the crop identification performances obtained using (1) different ADP procedures; (2) a linear versus a quadratic classifier; (3) prior probability information derived from historical data; (4) local versus nonlocal recognition training statistics and the associated use of preprocessing; (5) multitemporal data; (6) classification bias and mixed pixels in proportion estimation; and (7) data with different site characteristics, including crop, soil, atmospheric effects, and stages of crop maturity.
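    As a rough illustration of the linear-versus-quadratic comparison and the use of prior probabilities mentioned above, the following sketch trains scikit-learn's linear and quadratic discriminant classifiers on synthetic pixel spectra with hypothetical acreage-based priors. It is not CITARS code; the band values, priors, and class structure are invented for the example.

```python
# Illustrative sketch (not CITARS code): comparing a linear and a quadratic
# classifier on multispectral pixel samples, with optional class priors
# derived from (hypothetical) historical crop-area data.
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)

rng = np.random.default_rng(0)
# Synthetic 4-band pixel spectra for two classes (corn = 0, soybeans = 1).
X = np.vstack([rng.normal(0.3, 0.05, (200, 4)),
               rng.normal(0.5, 0.08, (200, 4))])
y = np.repeat([0, 1], 200)
priors = [0.6, 0.4]  # hypothetical historical acreage proportions

for clf in (LinearDiscriminantAnalysis(priors=priors),
            QuadraticDiscriminantAnalysis(priors=priors)):
    clf.fit(X, y)
    print(type(clf).__name__, "training accuracy:", clf.score(X, y))
```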

    The contribution of multitemporal information from multispectral satellite images for automatic land cover classification at the national scale

    Thesis submitted to the Instituto Superior de Estatística e Gestão de Informação da Universidade Nova de Lisboa in partial fulfillment of the requirements for the Degree of Doctor of Philosophy in Information Management – Geographic Information Systems. Imaging and sensing technologies are constantly evolving, so that the latest generations of satellites now commonly provide Earth's surface snapshots at very short sampling periods (i.e., daily images). It is unquestionable that this tendency towards continuous-time observation will broaden the scope of remote sensing activities. Inevitably, such an increasing amount of information will prompt methodological approaches that combine digital image processing techniques with time series analysis for the characterization of land cover distribution and the monitoring of its dynamics on a frequent basis. Nonetheless, quantitative analyses that convey the proficiency of three-dimensional satellite image data sets (i.e., spatial, spectral and temporal) for the automatic mapping of land cover and its time evolution have not been thoroughly explored. In this dissertation, we investigate the usefulness of multispectral time series of medium spatial resolution satellite images for regular land cover characterization at the national scale. The study is carried out on the territory of Continental Portugal and exploits satellite images acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) and the MEdium Resolution Imaging Spectrometer (MERIS). In detail, we first analyse the contribution of multitemporal information from multispectral satellite images to the automatic discrimination of land cover classes. The outcomes show that multispectral information contributes more significantly than multitemporal information to the automatic classification of land cover types. Subsequently, we review some of the most important steps that constitute a standard protocol for automatic land cover mapping from satellite images, and we delineate a methodological approach for the production and assessment of land cover maps from multitemporal satellite images that guides the production of a land cover map with high thematic accuracy for the study area. Finally, we develop a nonlinear harmonic model for fitting multispectral reflectance and vegetation index time series from satellite images for numerous land cover classes. The simplified multitemporal information retrieved with the model proves adequate for describing the main characteristics of the land cover classes and for predicting the time evolution of individual land cover classes.
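    The nonlinear harmonic modelling of reflectance and vegetation-index time series mentioned in the abstract can be sketched as follows. This is an assumed, generic formulation (a mean term plus annual and semi-annual cosine terms fitted with scipy's curve_fit); the dissertation's actual model may use a different functional form.

```python
# Hedged sketch of the harmonic-model idea: fitting an annual + semi-annual
# harmonic curve to a vegetation-index time series. Only illustrates the
# general approach of compressing a time series into a few harmonic terms.
import numpy as np
from scipy.optimize import curve_fit

def harmonic(t, mean, a1, p1, a2, p2):
    """Mean plus annual and semi-annual cosine terms; t is in days."""
    w = 2 * np.pi / 365.25
    return mean + a1 * np.cos(w * t + p1) + a2 * np.cos(2 * w * t + p2)

# Synthetic NDVI observations every 8 days over two years.
t = np.arange(0, 730, 8, dtype=float)
ndvi = (0.5 + 0.2 * np.cos(2 * np.pi * t / 365.25 + 1.0)
        + 0.02 * np.random.randn(t.size))

params, _ = curve_fit(harmonic, t, ndvi, p0=[0.5, 0.1, 0.0, 0.05, 0.0])
print("fitted mean, amplitudes and phases:", np.round(params, 3))
```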

    A Multiple Cascade-Classifier System for a Robust and Partially Unsupervised Updating of Land-Cover Maps

    A system for the regular updating of land-cover maps is proposed, based on the use of multitemporal remote-sensing images. The system is able to address the updating problem under the realistic but critical constraint that no ground truth information is available for the image to be classified (i.e., the most recent of the considered multitemporal data set). The system is composed of an ensemble of partially unsupervised classifiers integrated in a multiple-classifier architecture. Each classifier of the ensemble exhibits two novel characteristics: i) it is developed in the framework of the cascade-classification approach, to exploit the temporal correlation existing between images acquired at different times over the considered area; ii) it is based on a partially unsupervised methodology capable of accomplishing the classification process under the aforementioned critical constraint. Both a parametric maximum-likelihood classification approach and a non-parametric radial basis function (RBF) neural-network classification approach are used as basic methods for the development of partially unsupervised cascade classifiers. In addition, in order to generate an effective ensemble of classification algorithms, hybrid maximum-likelihood and RBF neural-network cascade classifiers are defined by exploiting the peculiarities of the cascade-classification methodology. The results yielded by the different classifiers are combined using standard unsupervised combination strategies. This allows the definition of a robust and accurate partially unsupervised classification system capable of analyzing a wide variety of remote-sensing data (e.g., images acquired by passive sensors, SAR images, multisensor and multisource data). Experimental results obtained on a real multitemporal and multisource data set confirm the effectiveness of the proposed system.
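    One of the standard unsupervised combination strategies referred to above is a simple per-pixel majority vote over the label maps produced by the ensemble members. The sketch below assumes integer-coded label maps and is only meant to illustrate that combination step, not the cascade classifiers themselves.

```python
# Illustrative sketch (assumptions, not the paper's code): combining the
# per-pixel label maps of several partially unsupervised cascade classifiers
# with a simple unsupervised majority-vote rule.
import numpy as np

def majority_vote(label_maps: np.ndarray) -> np.ndarray:
    """label_maps: (n_classifiers, H, W) integer class labels.
    Returns the per-pixel most frequent label."""
    n_classes = int(label_maps.max()) + 1
    # Count votes per class at every pixel, then take the arg-max.
    votes = np.stack([(label_maps == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# Example: three classifiers disagreeing on a 2x2 map with 3 classes.
ensemble = np.array([[[0, 1], [2, 2]],
                     [[0, 1], [1, 2]],
                     [[0, 0], [2, 2]]])
print(majority_vote(ensemble))   # -> [[0 1] [2 2]]
```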

    Learning Spectral-Spatial-Temporal Features via a Recurrent Convolutional Neural Network for Change Detection in Multispectral Imagery

    Change detection is one of the central problems in Earth observation and has been extensively investigated over recent decades. In this paper, we propose a novel recurrent convolutional neural network (ReCNN) architecture that is trained to learn a joint spectral-spatial-temporal feature representation in a unified framework for change detection in multispectral images. To this end, we bring together a convolutional neural network (CNN) and a recurrent neural network (RNN) into one end-to-end network. The former generates rich spectral-spatial feature representations, while the latter effectively analyzes the temporal dependency in bi-temporal images. In comparison with previous approaches to change detection, the proposed network architecture possesses three distinctive properties: 1) it is end-to-end trainable, in contrast to most existing methods whose components are trained or computed separately; 2) it naturally harnesses spatial information, which has been proven beneficial to the change detection task; 3) it is capable of adaptively learning the temporal dependency between multitemporal images, unlike most algorithms that rely on fairly simple operations such as image differencing or stacking. As far as we know, this is the first time that a recurrent convolutional network architecture has been proposed for multitemporal remote sensing image analysis. The proposed network is validated on real multispectral data sets; both visual and quantitative analyses of the experimental results demonstrate the competitive performance of the proposed model.
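    A minimal sketch of the CNN-plus-RNN coupling described above is given below. The layer sizes, patch dimensions, and two-class output are assumptions made for illustration; the published ReCNN architecture differs in its details.

```python
# Hedged sketch of the ReCNN idea under assumed layer sizes: a shared CNN
# extracts spectral-spatial features from each date's image patch, an LSTM
# models the temporal dependency across the two dates, and a linear layer
# predicts change / no-change.
import torch
import torch.nn as nn

class TinyReCNN(nn.Module):
    def __init__(self, bands: int, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(            # spectral-spatial feature extractor
            nn.Conv2d(bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.LSTM(64, hidden, batch_first=True)   # temporal dependency
        self.head = nn.Linear(hidden, 2)

    def forward(self, patch_t1, patch_t2):
        f = torch.stack([self.cnn(patch_t1), self.cnn(patch_t2)], dim=1)  # (N, 2, 64)
        out, _ = self.rnn(f)
        return self.head(out[:, -1])          # logits from the last time step

# Example: 6-band 16x16 patches from two acquisition dates.
p1, p2 = torch.rand(4, 6, 16, 16), torch.rand(4, 6, 16, 16)
print(TinyReCNN(bands=6)(p1, p2).shape)       # torch.Size([4, 2])
```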

    A framework of rapid regional tsunami damage recognition from post-event TerraSAR-X imagery using deep neural networks

    Near real-time building damage mapping is an indispensable prerequisite for governments to make decisions for disaster relief. With high-resolution synthetic aperture radar (SAR) systems such as TerraSAR-X, the provision of such products in a fast and effective way becomes possible. In this letter, a deep learning-based framework for rapid regional tsunami damage recognition using post-event SAR imagery is proposed. To perform such rapid damage mapping, a tile-based image splitting analysis is employed to generate the data set. Next, a selection algorithm based on the SqueezeNet network is developed to swiftly distinguish between built-up (BU) and non-built-up regions. Finally, a recognition algorithm based on a modified wide residual network is developed to classify the BU regions into washed-away, collapsed, and slightly damaged regions. Experiments performed on TerraSAR-X data from the 2011 Tohoku earthquake and tsunami in Japan show a BU-region extraction accuracy of 80.4% and a damage-level recognition accuracy of 74.8%. The framework takes around 2 h to train on a new region and only several minutes for prediction. This work was supported in part by JST CREST, Japan, under Grant JPMJCR1411 and in part by the China Scholarship Council.
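    The two-stage, tile-based pipeline can be sketched as follows, with the SqueezeNet selector and the modified wide residual network abstracted as stand-in callables; the tile size, class coding, and dummy decision rules are assumptions for illustration only.

```python
# Sketch of the two-stage, tile-based pipeline: stage 1 keeps only built-up
# tiles, stage 2 assigns a damage class. The two trained networks are
# abstracted as callables here, not reproduced.
import numpy as np

def split_into_tiles(image: np.ndarray, tile: int = 128):
    """Yield (row, col, crop) tiles from a single-band SAR scene."""
    h, w = image.shape
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            yield r, c, image[r:r + tile, c:c + tile]

def map_damage(image, is_built_up, damage_level, tile=128):
    """Stage 1: built-up selection; stage 2: damage-level recognition."""
    results = {}
    for r, c, patch in split_into_tiles(image, tile):
        if is_built_up(patch):                    # SqueezeNet-style selector
            results[(r, c)] = damage_level(patch) # wide-ResNet-style classifier
    return results

# Toy run with dummy decision rules in place of the trained networks.
scene = np.random.rand(512, 512).astype(np.float32)
out = map_damage(scene,
                 is_built_up=lambda p: p.mean() > 0.5,
                 damage_level=lambda p: int(p.std() * 10) % 3)
print(len(out), "built-up tiles classified")
```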

    Results from the Crop Identification Technology Assessment for Remote Sensing (CITARS) project

    The author has identified the following significant results. Several factors were found to have a significant effect on crop identification performance: (1) crop maturity and site characteristics, (2) which of several different single-date automatic data processing procedures was used for local recognition, (3) nonlocal recognition, both with and without preprocessing for the extension of recognition signatures, and (4) the use of multidate data. It was also found that classification accuracy for field-center pixels was not a reliable indicator of proportion estimation performance for whole areas, that bias was present in the proportion estimates, and that training data and procedures strongly influenced crop identification performance.

    Classification software technique assessment

    A catalog of software options is presented to help local user communities obtain software for analyzing remotely sensed multispectral imagery. The resources required to utilize a particular software program are described, along with how each program analyzes data and how it performs on an application and data set provided by the user. An effort is made to establish a statistical performance baseline for the various software programs across different data sets and analysis applications, in order to determine the status of the state of the art.