9 research outputs found

    A FRAMEWORK OF CHANGE DETECTION BASED ON COMBINED MORPHOLOGICAL FEATURES AND MULTI-INDEX CLASSIFICATION

    Remote sensing images are particularly well suited to the analysis of land cover change. In this paper, we present a new framework for detecting land cover change using satellite imagery. Morphological features and multiple indexes are used to extract typical objects from the imagery, including vegetation, water, bare land, buildings, and roads. Our method, based on connected domains, differs from traditional methods: image segmentation is used to extract morphological features, the enhanced vegetation index (EVI) and the normalized difference water index (NDWI) are used to extract vegetation and water, and a fragmentation index is used to correct the water extraction results. HSV transformation and threshold segmentation are used to extract shadows and remove their effects on the extraction results. Change detection is then performed on these results. One advantage of the proposed framework is that semantic information is extracted automatically using low-level morphological features and indexes. Another advantage is that the proposed method detects specific types of change without any training samples. A test on ZY-3 images demonstrates that our framework has a promising capability to detect change.
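As a rough illustration of the index-based extraction step described above, the sketch below computes EVI and NDWI from co-registered reflectance bands and thresholds them into vegetation and water masks. The band layout, reflectance scaling, and threshold values are assumptions for illustration, not the parameters used in the paper.

```python
# Minimal sketch (not the authors' implementation): spectral indexes
# mentioned in the abstract, thresholded into rough vegetation/water masks.
import numpy as np

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index from surface-reflectance bands."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters formulation)."""
    return (green - nir) / (green + nir + 1e-12)

# toy reflectance arrays standing in for co-registered ZY-3 bands
blue, green, red, nir = np.random.rand(4, 256, 256) * 0.5

vegetation_mask = evi(nir, red, blue) > 0.3   # illustrative threshold
water_mask = ndwi(green, nir) > 0.1           # illustrative threshold
```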

    Can I Trust My One-Class Classification?

    Contrary to binary and multi-class classifiers, the purpose of a one-class classifier for remote sensing applications is to map only one specific land use/land cover class of interest. Training these classifiers requires reference data only for the class of interest, while training data for other classes are not required. Thus, the acquisition of reference data can be significantly reduced. However, one-class classification is fraught with uncertainty and full automation is difficult, due to the limited reference information available for classifier training. Thus, a user-oriented one-class classification strategy is proposed, which is based, among other elements, on the visualization and interpretation of the one-class classifier outcomes during data processing. Careful interpretation of the diagnostic plots fosters understanding of the classification outcome, e.g., the class separability and the suitability of a particular threshold. In the absence of complete and representative validation data, which is the norm in real one-class classification applications, such information is valuable for evaluating and improving the classification. The potential of the proposed strategy is demonstrated by classifying different crop types with hyperspectral data from Hyperion.
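To make the strategy concrete, the following sketch trains a one-class classifier (scikit-learn's OneClassSVM, used here only as a stand-in model) on samples of the class of interest and plots the distribution of decision scores over the scene, the kind of diagnostic output a user could inspect to judge separability and pick a threshold. Data shapes and parameter values are illustrative assumptions.

```python
# Minimal sketch: one-class classification with a user-facing diagnostic plot.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(200, 50))   # spectra of the class of interest
X_scene = rng.normal(0.5, 1.5, size=(5000, 50))  # all image pixels (unlabeled)

ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(X_train)
scores = ocsvm.decision_function(X_scene)

# Diagnostic plot: the score distribution helps to judge class separability
# and to pick a threshold when no complete validation data are available.
plt.hist(scores, bins=100)
plt.axvline(0.0, color="r", label="default threshold")
plt.xlabel("one-class decision score")
plt.legend()
plt.show()

class_of_interest_mask = scores > 0.0  # user-adjusted threshold goes here
```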

    Semi-Supervised Novelty Detection using SVM entire solution path

    Very often, the only reliable information available to perform change detection is the description of some unchanged regions. Since these regions sometimes do not contain all the information relevant to identifying their counterpart (the changes), we consider the use of unlabeled data to perform Semi-Supervised Novelty Detection (SSND). SSND can be seen as an unbalanced classification problem solved using the Cost-Sensitive Support Vector Machine (CS-SVM), but this requires a heavy parameter search. We propose here to use entire solution path algorithms for the CS-SVM in order to facilitate and accelerate the parameter selection for SSND. Two algorithms are considered and evaluated. The first is an extension of the CS-SVM algorithm that returns the entire solution path in a single optimization; this way, optimizing a separate model for each hyperparameter set is avoided. The second forces the solution to be coherent along the solution path, thus producing classification boundaries that are nested (included in each other). We also present a low-density criterion for selecting the optimal classification boundaries, thus avoiding recourse to cross-validation, which usually requires information about the "change" class. Experiments are performed on two multitemporal change detection datasets (flood and fire detection). Both solution-path algorithms provide performance similar to the standard CS-SVM while being significantly faster. The proposed low-density criterion achieves results close to those obtained by cross-validation, but without using information about the changes.
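A minimal sketch of the SSND setup as a cost-sensitive classification problem is given below, assuming scikit-learn's SVC with asymmetric class weights as the CS-SVM and a naive grid over the cost parameter; the paper's contribution, tracing the entire solution path instead of retraining one model per cost value, is not reproduced here. All data and parameter values are illustrative.

```python
# Minimal sketch: SSND posed as cost-sensitive SVM, labeled "unchanged" pixels
# as the positive class and unlabeled pixels as provisional negatives.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_unchanged = rng.normal(0.0, 1.0, size=(150, 10))   # labeled unchanged pixels
X_unlabeled = rng.normal(0.3, 1.2, size=(600, 10))   # unlabeled pixels (may contain changes)

X = np.vstack([X_unchanged, X_unlabeled])
y = np.hstack([np.ones(len(X_unchanged)), -np.ones(len(X_unlabeled))])

models = []
for w_pos in [1, 2, 5, 10, 20]:                      # naive asymmetric-cost grid
    clf = SVC(kernel="rbf", C=1.0, gamma="scale",
              class_weight={1: w_pos, -1: 1}).fit(X, y)
    models.append((w_pos, clf))

# Unlabeled pixels still predicted as -1 are candidate "changes" (novelties).
change_candidates = models[-1][1].predict(X_unlabeled) == -1
```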

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of processing approaches with respect to the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the integration of temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data is possible and helps to move from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as new challenges for information extraction algorithms. There is a huge number of research works dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have evolved along different paths within each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across these research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to conduct novel investigations of this challenging topic by supplying sufficient detail and references.

    Quantitative Spatial Upscaling of Categorical Data in the Context of Landscape Ecology: A New Scaling Algorithm

    Spatially explicit ecological models rely on spatially exhaustive data layers whose scales are appropriate to the ecological processes of interest. Such data layers are often categorical raster maps derived from high-resolution, remotely sensed data that must be scaled to a lower spatial resolution to make them compatible with the scale of ecological analysis. Statistical functions commonly used to aggregate categorical data are the majority, nearest-neighbor, and random rules. For heterogeneous landscapes and large scaling factors, however, using these functions raises two critical issues: (1) ignoring large portions of the information present in the high-resolution grid cells leads to high and uncontrolled loss of information in the scaled dataset; and (2) retaining the classes of the high-resolution dataset at the lower spatial resolution assumes that the classification scheme is valid at the low-resolution scale, failing to represent recurring mixes of heterogeneous classes present in the low-resolution grid cells. The proposed new scaling algorithm resolves these issues, aggregating categorical data while simultaneously controlling for information loss by generating a non-hierarchical, representative classification system valid at the aggregated scale. Scaling parameters that control class-label precision effectively reduced the information loss of scaled landscapes as class-label precision increased. In a neutral-landscape simulation study, the algorithm consistently preserved information at a significantly higher level than the other commonly used algorithms. When applied to maps of real landscapes, the same increase in information retention was observed, and the scaled classes were detectable from lower-resolution, remotely sensed, multispectral reflectance data with high accuracy. The framework developed in this research facilitates scaling-parameter selection to address trade-offs among information retention, label fidelity, and spectral detectability of scaled classes. When generating high-spatial-resolution land-cover maps, quantifying the effects of sampling intensity, feature-space dimensionality, and classifier method on overall accuracy, confidence estimates, and classifier efficiency allowed optimization of the mapping method. Increasing sampling intensity boosted accuracies in a reasonably predictable fashion. However, adding a second image, acquired when ground conditions and vegetation phenology differed from those of the first image, had a much greater impact, increasing classification accuracy even at low sampling intensities to levels not reached with a single-season image.
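For context, the sketch below implements the majority-rule aggregation that the abstract uses as a baseline; the proposed algorithm, which additionally constructs a new representative class legend at the coarse scale, is not reproduced. Map size, class count, and scaling factor are illustrative.

```python
# Minimal sketch: majority-rule upscaling of a categorical raster.
import numpy as np

def majority_upscale(classes: np.ndarray, factor: int) -> np.ndarray:
    """Assign each coarse cell the most frequent fine-resolution class it contains."""
    h, w = classes.shape
    assert h % factor == 0 and w % factor == 0
    blocks = classes.reshape(h // factor, factor, w // factor, factor)
    blocks = blocks.swapaxes(1, 2).reshape(h // factor, w // factor, factor * factor)
    out = np.empty(blocks.shape[:2], dtype=classes.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            vals, counts = np.unique(blocks[i, j], return_counts=True)
            out[i, j] = vals[np.argmax(counts)]  # minority classes are discarded entirely
    return out

fine_map = np.random.randint(0, 6, size=(300, 300))   # 6 land-cover classes
coarse_map = majority_upscale(fine_map, factor=30)    # 10 x 10 coarse grid
```

The per-cell discarding of minority classes in the last line of the loop is exactly the uncontrolled information loss the abstract criticizes for heterogeneous landscapes and large scaling factors.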

    Anomalous change detection in multi-temporal hyperspectral images

    In recent years, the possibility of exploiting the large amount of spectral information has made hyperspectral remote sensing a very promising approach to detecting changes in multi-temporal images. Detecting changes in images of the same area collected at different times is of crucial interest in military and civilian applications, spanning from wide-area surveillance and damage assessment to geology and land cover. In military operations, the interest is in rapid location and tracking of objects of interest: people, vehicles, or equipment that pose a potential threat. In civilian contexts, changes of interest may include different types of natural or man-made threats, such as the path of an impending storm or the source of a hazardous material spill. In this PhD thesis, the focus is on Anomalous Change Detection (ACD) in airborne hyperspectral images. The goal is the detection of small changes between two images of the same scene, i.e., changes with a size comparable to the sensor ground resolution. The objects of interest typically occupy a few pixels of the image, and change detection must be accomplished in a pixel-wise fashion. Moreover, since the images are in general not radiometrically comparable, because illumination, atmospheric, and environmental conditions change from one acquisition to the other, pervasive and uninteresting changes must be accounted for in developing ACD strategies. The ACD process can be divided into two main phases: a pre-processing step, which includes radiometric correction, image co-registration, and noise filtering, and a detection step, where the pre-processed images are compared according to a defined criterion in order to derive a statistical ACD map highlighting the anomalous changes that occurred in the scene. In the literature, ACD has been widely investigated, providing valuable methods to cope with these problems. In this work, a general overview of ACD methods is given, reviewing the best-known pre-processing and detection methods proposed in the literature. The analysis has been conducted by unifying different techniques in a common framework based on binary decision theory, where one has to test the two competing hypotheses H0 (change absent) and H1 (change present) on the basis of an observation vector derived from the radiance measured at each pixel of the two images. Particular emphasis has been placed on statistical approaches, where ACD is derived in the framework of Neyman-Pearson theory and the decision rule is based on the statistical properties assumed for the distributions under the two hypotheses, the observation vector space, and the secondary data exploited for the estimation of the unknown parameters. Typically, ACD techniques assume that the observation represents the realization of a jointly Gaussian, spatially stationary random process. Though this assumption is adopted because of its mathematical tractability, it may be too simplistic to model the multimodality usually found in real data. A more appropriate model is the one adopted to derive the well-known RX anomaly detector, which assumes local Gaussianity of the hyperspectral data. In this framework, a new statistical ACD method has been proposed that considers the local Gaussianity of the hyperspectral data. The assumption of local stationarity for the observations under the two hypotheses is taken into account by considering two different models, leading to two different detectors.
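As an illustration of the statistical framing, the sketch below computes a basic Gaussian ACD statistic: the Mahalanobis distance of the stacked pixel vector of the two acquisitions from a globally estimated background distribution. This is an assumption-level simplification on synthetic data; the thesis replaces the global model with locally Gaussian models and derives two distinct detectors.

```python
# Minimal sketch: global Gaussian ACD statistic on stacked pixel vectors.
import numpy as np

def gaussian_acd_statistic(img_t1: np.ndarray, img_t2: np.ndarray) -> np.ndarray:
    """img_t1, img_t2: (rows, cols, bands) co-registered images.
    Returns a per-pixel anomalous-change score (higher = more anomalous)."""
    r, c, b = img_t1.shape
    stacked = np.concatenate([img_t1, img_t2], axis=2).reshape(-1, 2 * b)
    mu = stacked.mean(axis=0)
    cov = np.cov(stacked, rowvar=False) + 1e-6 * np.eye(2 * b)  # regularized covariance
    centered = stacked - mu
    # squared Mahalanobis distance of each stacked pixel vector
    scores = np.einsum("ij,jk,ik->i", centered, np.linalg.inv(cov), centered)
    return scores.reshape(r, c)

t1 = np.random.randn(64, 64, 30)
t2 = t1 + 0.05 * np.random.randn(64, 64, 30)
t2[10:12, 20:22, :] += 2.0                    # implant a small anomalous change
score_map = gaussian_acd_statistic(t1, t2)
```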
    In addition, when data are collected by airborne platforms, perfect co-registration between images is very difficult to achieve. As a consequence, a residual misregistration (RMR) error should be taken into account in developing ACD techniques. Different techniques have been proposed to cope with the performance degradation caused by the RMR, embedding a priori knowledge of the statistical properties of the RMR in the change detection scheme. In this context, a new method has been proposed for the estimation of the first- and second-order statistics of the RMR. The technique is based on a sequential strategy that exploits the Scale Invariant Feature Transform (SIFT) algorithm cascaded with the Minimum Covariance Determinant algorithm. The proposed method adapts the SIFT procedure to hyperspectral images and improves the robustness of the outlier filtering by means of a highly robust estimator of multivariate location. Attention has then been focused on noise filtering techniques aimed at enforcing the consistency of the ACD process. To this purpose, a new method has been proposed to mitigate the negative effects of random noise. In particular, this is achieved by means of a band selection technique aimed at discarding spectral channels whose useful signal content is low compared with the noise contribution. Band selection is performed on a per-pixel basis by exploiting estimates of the noise variance that also account for the presence of the signal-dependent noise component. Finally, the effectiveness of the proposed techniques has been extensively evaluated on different real hyperspectral datasets containing anomalous changes, collected under different acquisition conditions and over different scenarios, highlighting the advantages and drawbacks of each method. In summary, the main issues related to ACD in multi-temporal hyperspectral images have been examined in this PhD thesis. With reference to the pre-processing step, two original contributions have been offered: i) an unsupervised technique for the estimation of the RMR affecting hyperspectral images, and ii) an adaptive approach for ACD that mitigates the negative effects of random noise. As to the detection step, a survey of existing techniques has been carried out, highlighting their major drawbacks and disadvantages, and a novel contribution has been offered by presenting a new statistical ACD method that considers the local Gaussianity of the hyperspectral data.
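The following sketch illustrates, under stated assumptions, the RMR-statistics idea mentioned above: SIFT matches between one band of the two acquisitions give candidate displacement vectors, and a Minimum Covariance Determinant fit provides robust first- and second-order statistics while down-weighting outlier matches. It is not the thesis' algorithm (which adapts SIFT to hyperspectral data more thoroughly), and the helper name rmr_statistics is hypothetical.

```python
# Minimal sketch: robust estimation of residual misregistration statistics
# from SIFT keypoint matches, filtered with a Minimum Covariance Determinant fit.
import cv2
import numpy as np
from sklearn.covariance import MinCovDet

def rmr_statistics(band_t1: np.ndarray, band_t2: np.ndarray):
    """band_t1, band_t2: roughly co-registered single bands rescaled to uint8.
    Returns robust mean and covariance of the displacement vectors (in pixels)."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(band_t1, None)
    k2, d2 = sift.detectAndCompute(band_t2, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    shifts = np.array([np.subtract(k2[m.trainIdx].pt, k1[m.queryIdx].pt)
                       for m in good])                                # (N, 2) displacements
    mcd = MinCovDet().fit(shifts)          # robust location/scatter, outliers down-weighted
    return mcd.location_, mcd.covariance_  # first- and second-order RMR statistics
```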