
    Kernel-Based Framework for Multitemporal and Multisource Remote Sensing Data Classification and Change Detection

    The multitemporal classification of remote sensing images is a challenging problem, in which the efficient combination of different sources of information (e.g., temporal, contextual, or multisensor) can improve the results. In this paper, we present a general framework based on kernel methods for the integration of heterogeneous sources of information. Using the theoretical principles of this framework, three main contributions are presented. First, a novel family of kernel-based methods for multitemporal classification of remote sensing images is presented. The second contribution is the development of nonlinear kernel classifiers for the well-known difference and ratioing change detection methods by formulating them in an adequate high-dimensional feature space. Finally, the presented methodology allows the integration of contextual information and multisensor images with different levels of nonlinear sophistication. The binary support vector (SV) classifier and the one-class SV domain description classifier are evaluated by using both linear and nonlinear kernel functions. Good performance on synthetic and real multitemporal classification scenarios illustrates the generalization of the framework and the capabilities of the proposed algorithms.
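The kernelized differencing idea described above can be sketched in a few lines: the squared distance between the two dates' pixel spectra in the kernel-induced feature space expands into three kernel evaluations, so no explicit mapping is ever computed. A minimal illustration, assuming an RBF kernel; the gamma value and toy spectra are illustrative choices, not values from the paper:

```python
import math

def rbf_kernel(x, y, gamma=2.0):
    """Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

def kernel_change_score(x_t1, x_t2, gamma=2.0):
    """Squared distance between phi(x_t1) and phi(x_t2) in the RKHS:
    ||phi(a) - phi(b)||^2 = k(a, a) + k(b, b) - 2 * k(a, b)."""
    return (rbf_kernel(x_t1, x_t1, gamma)
            + rbf_kernel(x_t2, x_t2, gamma)
            - 2.0 * rbf_kernel(x_t1, x_t2, gamma))

# Unchanged pixel: nearly identical spectra at both dates -> score near 0.
unchanged = kernel_change_score([0.30, 0.55, 0.20], [0.31, 0.54, 0.20])
# Changed pixel: large spectral shift -> score approaching the RBF
# saturation value of 2.
changed = kernel_change_score([0.30, 0.55, 0.20], [0.90, 0.10, 0.75])
```

Thresholding this score over all pixels yields a change map; with a linear kernel the score reduces to the classical squared image difference.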

    Learning Spectral-Spatial-Temporal Features via a Recurrent Convolutional Neural Network for Change Detection in Multispectral Imagery

    Change detection is one of the central problems in earth observation and has been extensively investigated over recent decades. In this paper, we propose a novel recurrent convolutional neural network (ReCNN) architecture, which is trained to learn a joint spectral-spatial-temporal feature representation in a unified framework for change detection in multispectral images. To this end, we bring together a convolutional neural network (CNN) and a recurrent neural network (RNN) into one end-to-end network. The former is able to generate rich spectral-spatial feature representations, while the latter effectively analyzes the temporal dependency in bi-temporal images. In comparison with previous approaches to change detection, the proposed network architecture possesses three distinctive properties: 1) it is end-to-end trainable, in contrast to most existing methods whose components are separately trained or computed; 2) it naturally harnesses spatial information, which has been proven to be beneficial to the change detection task; 3) it is capable of adaptively learning the temporal dependency between multitemporal images, unlike most algorithms, which use fairly simple operations such as image differencing or stacking. As far as we know, this is the first time that a recurrent convolutional network architecture has been proposed for multitemporal remote sensing image analysis. The proposed network is validated on real multispectral data sets. Both visual and quantitative analyses of the experimental results demonstrate the competitive performance of the proposed model.
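The CNN-then-RNN pipeline described above can be caricatured for a single pixel's spectral vector: a convolutional stage extracts per-date features, and a recurrent stage folds the two dates in order. This is only a structural sketch with untrained placeholder weights (the kernel, w_x, and w_h are arbitrary); a real ReCNN learns all of these end-to-end and operates on spatial patches, not single pixels:

```python
import math

def conv1d(signal, kernel):
    """Valid 1-D convolution: slide the kernel across the spectral bands."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(v):
    return [max(0.0, x) for x in v]

def rnn_step(x, h, w_x=0.8, w_h=0.5):
    """Elementwise vanilla RNN update: h' = tanh(w_x * x + w_h * h)."""
    return [math.tanh(w_x * xi + w_h * hi) for xi, hi in zip(x, h)]

def recnn_change_score(bands_t1, bands_t2, kernel=(0.5, -1.0, 0.5)):
    """CNN stage extracts per-date spectral features; the RNN stage folds
    the two dates in sequence; the score is the mean absolute final state."""
    h = [0.0] * (len(bands_t1) - len(kernel) + 1)
    for bands in (bands_t1, bands_t2):
        feats = relu(conv1d(bands, list(kernel)))
        h = rnn_step(feats, h)
    return sum(abs(v) for v in h) / len(h)

# Two toy 5-band observations of the same pixel at dates t1 and t2.
score = recnn_change_score([0.2, 0.4, 0.6, 0.5, 0.3],
                           [0.2, 0.9, 0.1, 0.8, 0.3])
```

After training, a final classification layer on the recurrent state would separate changed from unchanged pixels; here the score merely shows how temporal order flows through the shared state h.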

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors combined with their considerably heterogeneous natures poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets in a joint manner to further improve the performance of the processing approaches with respect to the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the integration of the temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data is possible and helps to move from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as challenges for the information extraction algorithms. There are a huge number of research works dedicated to multisource and multitemporal data fusion, but the methods for the fusion of different modalities have expanded in different paths according to each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches with respect to different research communities and provides a thorough and discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to conduct novel investigations on this challenging topic by supplying sufficient detail and references.

    Classifying Multisensor Remote Sensing Data: Concepts, Algorithms and Applications

    Today, a large proportion of the Earth's land surface has been affected by human-induced land-cover changes. Detailed knowledge of the land cover is essential for several decision support and monitoring systems. Earth-observation (EO) systems have the potential to frequently provide information on land cover. Thus, many land-cover classifications are performed based on remotely sensed EO data. In this context, it has been shown that the performance of remote sensing applications is further improved by multisensor data sets, such as combinations of synthetic aperture radar (SAR) and multispectral imagery. The two systems operate in different wavelength domains and therefore provide different yet complementary information on land cover. Considering the shorter revisit times and better spatial resolutions of recent and upcoming systems like TerraSAR-X (11 days; up to 1 m), Radarsat-2 (24 days; up to 3 m), or the RapidEye constellation (up to 1 day; 5 m), multisensor approaches become even more promising. However, these data sets with high spatial and temporal resolution can become very large and complex. Commonly used statistical pattern recognition methods are usually not appropriate for the classification of multisensor data sets. Hence, one of the greatest challenges in remote sensing might be the development of adequate concepts for classifying multisensor imagery. The presented study aims at an adequate classification of multisensor data sets, including SAR data and multispectral images. Different conventional classifiers and recent developments are used, such as support vector machines (SVM) and random forests (RF), which are well known in the field of machine learning and pattern recognition. Furthermore, the impact of image segmentation on the classification accuracy is investigated and the value of a multilevel concept is discussed. 
To increase the performance of the algorithms in terms of classification accuracy, the concept of SVM is modified and combined with RF for optimized decision making. The results clearly demonstrate that the use of multisensor imagery is worthwhile. Irrespective of the classification method used, classification accuracies increase when SAR and multispectral imagery are combined. Nevertheless, SVM and RF are more adequate for classifying multisensor data sets and significantly outperform conventional classifier algorithms in terms of accuracy. The finally introduced multisensor-multilevel classification strategy, which is based on the sequential use of SVM and RF, outperforms all other approaches. The proposed concept achieves an accuracy of 84.9%. This is significantly higher than all single-source results and also better than those achieved on any other combination of data. Both aspects, i.e., the fusion of SAR and multispectral data as well as the integration of multiple segmentation scales, improve the results. In contrast to the high accuracy achieved by the proposed concept, the pixel-based classification of single-source data sets achieves a maximum accuracy of 65% (SAR) and 69.8% (multispectral), respectively. The findings and good performance of the presented strategy are underlined by the successful application of the approach to data sets from a second year. Based on the results of this work, it can be concluded that the suggested strategy is particularly interesting with regard to recent and future satellite missions.
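The multisensor idea at the core of the study, stacking SAR and multispectral features into one vector before classification, can be sketched compactly. A nearest-centroid rule stands in here for the SVM/RF stages, which are not reproduced; the two-class toy samples and band values are invented for illustration:

```python
def stack(sar, optical):
    """Concatenate SAR backscatter features with multispectral bands."""
    return list(sar) + list(optical)

def centroid(samples):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def nearest_centroid(x, centroids):
    """Label of the closest class centroid (squared Euclidean distance)."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda lab: d2(x, centroids[lab]))

# Toy training data: (SAR, multispectral) pairs for two land-cover classes.
train = {
    "forest": [stack([0.8, 0.7], [0.2, 0.6, 0.3]),
               stack([0.9, 0.6], [0.3, 0.7, 0.2])],
    "water":  [stack([0.1, 0.2], [0.1, 0.2, 0.8]),
               stack([0.2, 0.1], [0.2, 0.1, 0.9])],
}
cents = {label: centroid(samples) for label, samples in train.items()}

# Classify an unseen pixel from its fused SAR + multispectral features.
pred = nearest_centroid(stack([0.85, 0.65], [0.25, 0.65, 0.25]), cents)
```

Because the stacked vector carries both wavelength domains, classes that overlap in one sensor's features can still separate in the fused space, which is the effect the accuracy figures above quantify.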

    Multi-Classifiers And Decision Fusion For Robust Statistical Pattern Recognition With Applications To Hyperspectral Classification

    In this dissertation, a multi-classifier, decision fusion framework is proposed for robust classification of high-dimensional data under small-sample-size conditions. Such datasets present two key challenges. (1) The high-dimensional feature spaces compromise the classifiers’ generalization ability, in that the classifier tends to overfit decision boundaries to the training data. This phenomenon is commonly known as the Hughes phenomenon in the pattern classification community. (2) The small sample size of the training data results in ill-conditioned estimates of its statistics. Most classifiers rely on accurate estimation of these statistics for modeling training data and labeling test data, and hence ill-conditioned statistical estimates result in poorer classification performance. This dissertation tests the efficacy of the proposed algorithms on primarily remotely sensed hyperspectral data and secondarily diagnostic digital mammograms, since these applications naturally result in very high-dimensional feature spaces and often do not have sufficiently large training datasets to support the dimensionality of the feature space. Conventional approaches, such as Stepwise LDA (S-LDA), are sub-optimal in that they utilize only a small subset of the rich spectral information provided by hyperspectral data for classification. In contrast, the approach proposed in this dissertation utilizes the entire high-dimensional feature space for classification by identifying a suitable partition of this space, employing a bank of classifiers to perform “local” classification over this partition, and then merging these local decisions using an appropriate decision fusion mechanism. Adaptive classifier weight assignment and nonlinear pre-processing (in kernel-induced spaces) are also proposed within this framework to improve its robustness over a wide range of fidelity conditions. 
Experimental results demonstrate that the proposed framework yields significant improvements in classification accuracies (as high as a 12% increase) over conventional approaches.
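The decision fusion step described above can be sketched as a linear opinion pool: each local classifier in the bank emits a class-posterior vector for its subspace, and the vectors are averaged with weights that could, for example, be set from each classifier's validation accuracy (one plausible form of the adaptive weight assignment; the specific posteriors and weights below are invented):

```python
def fuse_decisions(posteriors, weights):
    """Linear-opinion-pool fusion: weighted average of each local
    classifier's class-posterior vector, then argmax over classes."""
    total = sum(weights)
    w = [wi / total for wi in weights]          # normalize the weights
    n_classes = len(posteriors[0])
    fused = [sum(w[k] * posteriors[k][c] for k in range(len(posteriors)))
             for c in range(n_classes)]
    return fused.index(max(fused)), fused

# Three local classifiers, each trained on a different band subset,
# report two-class posteriors for the same test pixel.
local_posteriors = [[0.6, 0.4], [0.2, 0.8], [0.55, 0.45]]
# Hypothetical per-classifier validation accuracies used as weights.
acc_weights = [0.9, 0.5, 0.7]
label, fused = fuse_decisions(local_posteriors, acc_weights)
```

Note that the fused vector remains a valid distribution (it sums to 1), so downstream thresholding or rejection rules still apply after fusion.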

    Advances in Hyperspectral Image Classification Methods for Vegetation and Agricultural Cropland Studies

    Hyperspectral data are becoming more widely available via sensors on airborne and unmanned aerial vehicle (UAV) platforms, as well as proximal platforms. While space-based hyperspectral data continue to be limited in availability, multiple spaceborne Earth-observing missions on traditional platforms are scheduled for launch, and companies are experimenting with small satellites for constellations to observe the Earth, as well as for planetary missions. Land cover mapping via classification is one of the most important applications of hyperspectral remote sensing and will increase in significance as time series of imagery become more readily available. However, while the narrow bands of hyperspectral data provide new opportunities for chemistry-based modeling and mapping, challenges remain. Hyperspectral data are high dimensional, and many bands are highly correlated or irrelevant for a given classification problem. For supervised classification methods, the quantity of training data is typically limited relative to the dimension of the input space. The resulting Hughes phenomenon, often referred to as the curse of dimensionality, increases the potential for unstable parameter estimates, overfitting, and poor generalization of classifiers. This is particularly problematic for parametric approaches such as Gaussian maximum likelihood-based classifiers that have been the backbone of pixel-based multispectral classification methods. This issue has motivated investigation of alternatives, including regularization of the class covariance matrices, ensembles of weak classifiers, development of feature selection and extraction methods, adoption of nonparametric classifiers, and exploration of methods to exploit unlabeled samples via semi-supervised and active learning. Data sets are also quite large, motivating computationally efficient algorithms and implementations. 
This chapter provides an overview of the recent advances in classification methods for mapping vegetation using hyperspectral data. Three data sets that are used in the hyperspectral classification literature (e.g., Botswana Hyperion satellite data and AVIRIS airborne data over both Kennedy Space Center and Indian Pines) are described in Section 3.2 and used to illustrate methods described in the chapter. An additional high-resolution hyperspectral data set acquired by a SpecTIR sensor on an airborne platform over the Indian Pines area is included to exemplify the use of new deep learning approaches, and a multiplatform example of airborne hyperspectral data is provided to demonstrate transfer learning in hyperspectral image classification. Classical approaches for supervised and unsupervised feature selection and extraction are reviewed in Section 3.3. In particular, nonlinearities exhibited in hyperspectral imagery have motivated development of nonlinear feature extraction methods in manifold learning, which are outlined in Section 3.3.1.4. Spatial context is also important in classification of both natural vegetation with complex textural patterns and large agricultural fields with significant local variability within fields. Approaches to exploit spatial features at both the pixel level (e.g., co-occurrence-based texture and extended morphological attribute profiles [EMAPs]) and integration of segmentation approaches (e.g., HSeg) are discussed in this context in Section 3.3.2. Recently, classification methods that leverage nonparametric methods originating in the machine learning community have grown in popularity. An overview of both widely used and newly emerging approaches, including support vector machines (SVMs), Gaussian mixture models, and deep learning based on convolutional neural networks, is provided in Section 3.4. 
Strategies to exploit unlabeled samples, including active learning and metric learning, which combine feature extraction and augmentation of the pool of training samples in an active learning framework, are outlined in Section 3.5. Integration of image segmentation with classification to accommodate spatial coherence typically observed in vegetation is also explored, including as an integrated active learning system. Exploitation of multisensor strategies for augmenting the pool of training samples is investigated via a transfer learning framework in Section 3.5.1.2. Finally, we look to the future, considering opportunities soon to be provided by new paradigms, as hyperspectral sensing is becoming common at multiple scales from ground-based and airborne autonomous vehicles to manned aircraft and space-based platforms.
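One of the Hughes-phenomenon remedies listed above, regularization of the class covariance matrices, can be illustrated by shrinking the sample covariance toward its diagonal. The two-pixel, three-band example and the shrinkage weight of 0.3 are invented for illustration; the point is that with fewer samples than bands the raw maximum-likelihood covariance is singular, while the shrunk estimate is invertible:

```python
def sample_cov(X):
    """Maximum-likelihood sample covariance of the rows of X."""
    n, d = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(d)]
    return [[sum((row[i] - mean[i]) * (row[j] - mean[j]) for row in X) / n
             for j in range(d)] for i in range(d)]

def shrink(S, alpha):
    """Shrinkage toward the diagonal: (1 - alpha) * S + alpha * diag(S)."""
    d = len(S)
    return [[(1 - alpha) * S[i][j] + (alpha * S[i][i] if i == j else 0.0)
             for j in range(d)] for i in range(d)]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    a, b, c = M[0]
    p, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (p * i - f * g) + c * (p * h - e * g)

# Two training pixels in a 3-band space: the raw ML covariance has rank
# at most 1, so a Gaussian ML classifier cannot invert it.
X = [[0.2, 0.5, 0.9], [0.4, 0.1, 0.7]]
S = sample_cov(X)
det_raw = det3(S)                 # singular: determinant ~ 0
det_reg = det3(shrink(S, 0.3))    # regularized: strictly positive
```

Shrinking only the off-diagonal terms keeps per-band variances intact while restoring invertibility, which is exactly what a small-sample Gaussian classifier needs.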

    An Adaptive Semi-Parametric and Context-Based Approach to Unsupervised Change Detection in Multitemporal Remote-Sensing Images

    In this paper, a novel automatic approach to the unsupervised identification of changes in multitemporal remote-sensing images is proposed. This approach, unlike classical ones, is based on the formulation of the unsupervised change-detection problem in terms of Bayesian decision theory. In this context, an adaptive semi-parametric technique for the unsupervised estimation of the statistical terms associated with the gray levels of changed and unchanged pixels in a difference image is presented. Such a technique exploits the effectiveness of two theoretically well-founded estimation procedures: the reduced Parzen estimate (RPE) procedure and the expectation-maximization (EM) algorithm. Then, thanks to the resulting estimates and to a Markov Random Field (MRF) approach used to model the spatial-contextual information contained in the multitemporal images considered, a change detection map is generated. The adaptive semi-parametric nature of the proposed technique allows its application to different kinds of remote-sensing images. Experimental results, obtained on two sets of multitemporal remote-sensing images acquired by two different sensors, confirm the validity of the proposed approach.
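The estimation stage described above can be sketched in stripped-down form: plain Gaussian components stand in for the reduced Parzen estimate, the MRF spatial prior is omitted, and the synthetic difference-image gray levels (unchanged pixels near level 10, changed near level 60, with an 80/20 split) are invented for the example. EM then recovers the two class densities without any labels:

```python
import math
import random

def normal_pdf(x, mu, var):
    """Gaussian density with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_two_gaussians(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture on difference-image
    gray levels: component 0 ~ unchanged pixels, component 1 ~ changed."""
    mu = [min(data), max(data)]   # crude initialization at the extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component per sample.
        resp = []
        for x in data:
            p = [pi[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate priors, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(1e-6,
                         sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk)
    return mu, var, pi

random.seed(0)
# Synthetic difference image: 80% unchanged pixels, 20% changed.
data = ([random.gauss(10, 2) for _ in range(400)]
        + [random.gauss(60, 5) for _ in range(100)])
mu, var, pi = em_two_gaussians(data)
```

In the full approach, these estimated densities feed the Bayesian decision rule, and the MRF then smooths the per-pixel decisions using the spatial context of neighboring labels.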