
    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally. This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
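
    A minimal sketch of the linear mixing model underlying most of the surveyed methods, assuming a small synthetic endmember matrix; the abundance estimate uses non-negative least squares with a crude sum-to-one renormalisation rather than any particular algorithm from the overview.

```python
import numpy as np
from scipy.optimize import nnls

# Linear mixing model: y = E @ a + noise, with abundances a >= 0 and sum(a) = 1.
rng = np.random.default_rng(0)
n_bands, n_endmembers = 50, 3
E = rng.uniform(0.0, 1.0, size=(n_bands, n_endmembers))  # synthetic endmember signatures (columns)
a_true = np.array([0.6, 0.3, 0.1])                        # true abundances for one pixel
y = E @ a_true + 0.01 * rng.standard_normal(n_bands)      # observed mixed spectrum

# Non-negativity via NNLS; the sum-to-one constraint is approximated by renormalising.
a_hat, _ = nnls(E, y)
a_hat /= a_hat.sum()
print("estimated abundances:", np.round(a_hat, 3))
```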

    GETNET: A General End-to-end Two-dimensional CNN Framework for Hyperspectral Image Change Detection

    Change detection (CD) is an important application of remote sensing, as it provides timely change information about the large-scale Earth surface. With the emergence of hyperspectral imagery, CD technology has been greatly advanced, as hyperspectral data with high spectral resolution can detect finer changes than traditional multispectral imagery. Nevertheless, the high dimensionality of hyperspectral data makes it difficult to implement traditional CD algorithms. Besides, endmember abundance information at the subpixel level is often not fully utilized. In order to better handle the high-dimensionality problem and exploit abundance information, this paper presents a General End-to-end Two-dimensional CNN (GETNET) framework for hyperspectral image change detection (HSI-CD). The main contributions of this work are threefold: 1) a mixed-affinity matrix that integrates subpixel representation is introduced to mine more cross-channel gradient features and fuse multi-source information; 2) a 2-D CNN is designed to learn discriminative features effectively from multi-source data at a higher level and enhance the generalization ability of the proposed CD algorithm; 3) a new HSI-CD data set is designed for the objective comparison of different methods. Experimental results on real hyperspectral data sets demonstrate that the proposed method outperforms most state-of-the-art approaches.
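
    A hedged sketch of the mixed-affinity idea for a single pixel: the spectrum and the unmixed abundance vector from each acquisition date are concatenated, and an outer product of the two stacked vectors yields a 2-D matrix that a 2-D CNN can consume. The exact construction used in GETNET may differ; the vector sizes and the outer-product form below are illustrative assumptions.

```python
import numpy as np

def mixed_affinity(spectrum_t1, abund_t1, spectrum_t2, abund_t2):
    """Build a per-pixel 2-D affinity matrix from spectra and sub-pixel abundances.

    Each date's reflectance spectrum is concatenated with its abundance vector;
    the outer product of the two stacked vectors exposes cross-channel
    interactions as a 2-D pattern suitable for a 2-D CNN.
    """
    v1 = np.concatenate([spectrum_t1, abund_t1])
    v2 = np.concatenate([spectrum_t2, abund_t2])
    return np.outer(v1, v2)

# Toy example: 100 spectral bands and 4 endmember abundances per date.
rng = np.random.default_rng(1)
A = mixed_affinity(rng.random(100), rng.random(4), rng.random(100), rng.random(4))
print(A.shape)  # (104, 104): one small "image" per pixel fed to the CNN
```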

    Context dependent spectral unmixing.

    A hyperspectral unmixing algorithm that finds multiple sets of endmembers is proposed. The algorithm, called Context Dependent Spectral Unmixing (CDSU), is a local approach that adapts the unmixing to different regions of the spectral space. It is based on a novel objective function that combines context identification and unmixing. This joint objective function models contexts as compact clusters and uses the linear mixing model as the basis for unmixing. Several variations of the CDSU that provide additional desirable features are also proposed. First, Context Dependent Spectral Unmixing using the Mahalanobis Distance (CDSUM) offers the advantage of identifying non-spherical clusters in the high-dimensional spectral space. Second, the Cluster and Proportion Constrained Multi-Model Unmixing (CC-MMU and PC-MMU) algorithms use partial supervision information, in the form of cluster or proportion constraints, to guide the search process and narrow the space of possible solutions. The supervision information could be provided by an expert, generated by analyzing the consensus of multiple unmixing algorithms, or extracted from co-located data from a different sensor. Third, the Robust Context Dependent Spectral Unmixing (RCDSU) algorithm introduces possibilistic memberships into the objective function to reduce the effect of noise and outliers in the data. Finally, the Unsupervised Robust Context Dependent Spectral Unmixing (U-RCDSU) algorithm learns the optimal number of contexts in an unsupervised way. The performance of each algorithm is evaluated using synthetic and real data. We show that the proposed methods can identify meaningful and coherent contexts, and appropriate endmembers within each context. The second main contribution of this thesis is consensus unmixing. This approach exploits the diversity and similarity of the large number of existing unmixing algorithms to identify an accurate and consistent set of endmembers in the data. We run multiple unmixing algorithms with different parameters and combine the resulting unmixing ensemble using consensus analysis. The extracted endmembers are those on which the multiple runs reach a consensus. The third main contribution consists of developing subpixel target detectors that rely on the proposed CDSU algorithms to adapt target detection algorithms to different contexts. A local detection statistic is computed for each context, and all scores are then combined to yield a final detection score. The context-dependent unmixing provides a better background description and limits target leakage, two essential properties for target detection algorithms.
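
    A loose sketch of the context-dependent idea under the linear mixing model: pixels are grouped into compact contexts and each context receives its own endmembers and abundances. This toy version simply clusters spectra with k-means and runs an NMF-based unmixing per cluster, rather than optimising the joint CDSU objective; all parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

def context_dependent_unmix(X, n_contexts=3, n_endmembers=4, seed=0):
    """Toy context-dependent unmixing: cluster pixels, then unmix each cluster.

    X: (n_pixels, n_bands) non-negative reflectance matrix.
    Returns context labels, per-context endmembers, and per-pixel abundances.
    """
    labels = KMeans(n_clusters=n_contexts, n_init=10, random_state=seed).fit_predict(X)
    endmembers = {}
    abundances = np.zeros((X.shape[0], n_endmembers))
    for c in range(n_contexts):
        idx = np.where(labels == c)[0]
        nmf = NMF(n_components=n_endmembers, init="nndsvda", max_iter=500, random_state=seed)
        A = nmf.fit_transform(X[idx])           # unnormalised abundances for this context
        endmembers[c] = nmf.components_         # (n_endmembers, n_bands) signatures
        abundances[idx] = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)
    return labels, endmembers, abundances

# Usage with synthetic non-negative data:
X = np.random.default_rng(0).random((500, 60))
labels, E, A = context_dependent_unmix(X)
```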

    A Multi Views Approach for Remote Sensing Fusion Based on Spectral, Spatial and Temporal Information

    The objectives of this chapter are to contribute to the understanding of image fusion approaches, including the definition of concepts, the techniques involved, and the assessment of results. It is structured in five sections. Following this introduction, a definition of image fusion introduces the fundamental concepts involved, and we explain cases in which image fusion may be useful. Most existing techniques and architectures are reviewed and classified in the third section. In the fourth section, we focus on algorithms based on a multi-view approach, comparing and analysing the process models and algorithms, including the advantages, limitations, and applicability of each view. The last part of the chapter summarizes the benefits and limitations of a multi-view approach to image fusion and gives some recommendations on the effectiveness and performance of these methods. These recommendations, based on a comprehensive study and meaningful quantitative metrics, evaluate the various proposed views by applying them to different environmental applications with remotely sensed images coming from different sensors. In the concluding section, we close the chapter with a summary and recommendations for future research.

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL. 64 pages, 411 references. To appear in the Journal of Applied Remote Sensing.

    A Quantitative Assessment of Forest Cover Change in the Moulouya River Watershed (Morocco) by the Integration of a Subpixel-Based and Object-Based Analysis of Landsat Data

    A quantitative assessment of forest cover change in the Moulouya River watershed (Morocco) was carried out by means of an innovative approach based on atmospherically corrected reflectance Landsat images corresponding to 1984 (Landsat 5 Thematic Mapper) and 2013 (Landsat 8 Operational Land Imager). An object-based image analysis (OBIA) was undertaken to classify segmented objects as forested or non-forested within the 2013 Landsat orthomosaic. A Random Forest classifier was applied to a set of training data based on a feature vector composed of different types of object features, such as vegetation indices, mean spectral values, and pixel-based fractional cover derived from probabilistic spectral mixture analysis. The very high spatial resolution image data of Google Earth 2013 were employed to train and validate the Random Forest classifier, ranking the NDVI vegetation index and the corresponding pixel-based percentages of photosynthetic vegetation and bare soil as the most statistically significant object features for extracting forested and non-forested areas. Regarding classification accuracy, an overall accuracy of 92.34% was achieved. The previously developed classification scheme was applied to the 1984 Landsat data to extract the forest cover change between 1984 and 2013, showing a slight net increase of 5.3% (ca. 8800 ha) in forested areas for the whole region.
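
    A minimal sketch of the object-based Random Forest step described above, assuming a table of per-object features (mean NDVI, photosynthetic-vegetation fraction, bare-soil fraction) and binary forest/non-forest labels; the feature values and labels below are synthetic stand-ins, not data from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_objects = 1000
# Illustrative per-object features: mean NDVI, photosynthetic-vegetation and bare-soil fractions.
X = np.column_stack([
    rng.uniform(-0.1, 0.9, n_objects),  # mean NDVI
    rng.uniform(0.0, 1.0, n_objects),   # PV fraction from spectral mixture analysis
    rng.uniform(0.0, 1.0, n_objects),   # bare-soil fraction
])
y = (X[:, 0] > 0.4).astype(int)         # surrogate forest / non-forest labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("feature importances:", clf.feature_importances_)
```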

    Comparison of land-cover classification methods in the Brazilian Amazon Basin.

    Numerous classifiers have been developed, and different classifiers have their own characteristics; contradictory results often occur depending on the landscape complexity of the study area and the data used. Therefore, this paper aims to find a suitable classifier for tropical land cover classification. Five classifiers – the minimum distance classifier (MDC), maximum likelihood classifier (MLC), Fisher linear discriminant (FLD), extraction and classification of homogeneous objects (ECHO), and linear spectral mixture analysis (LSMA) – were tested using Landsat Thematic Mapper (TM) data in the Amazon basin with the same training sample data sets. Seven land cover classes – mature forest, advanced succession forest, initial secondary succession forest, pasture, agricultural lands, bare lands, and water – were classified. Overall classification accuracy and kappa analysis were calculated. The results indicate that the LSMA and ECHO classifiers provided better classification accuracies than the MDC, MLC, and FLD in the moist tropical region. The overall accuracy of the LSMA approach reaches 86%, with a kappa coefficient of 0.82.
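
    The comparison above rests on overall accuracy and the kappa coefficient; a small sketch of how both are computed from a confusion matrix (the matrix here is made up for illustration).

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Compute overall accuracy and Cohen's kappa from a confusion matrix.

    cm[i, j] = number of samples with reference class i assigned to class j.
    """
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                # observed agreement = overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # agreement expected by chance
    return po, (po - pe) / (1.0 - pe)

# Illustrative 3-class confusion matrix (rows: reference, columns: classified).
cm = [[80, 5, 2], [6, 70, 9], [3, 4, 75]]
oa, kappa = overall_accuracy_and_kappa(cm)
print(f"overall accuracy = {oa:.2%}, kappa = {kappa:.3f}")
```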

    Evaluating the use of an object-based approach to lithological mapping in vegetated terrain

    Remote sensing-based approaches to lithological mapping are traditionally pixel-oriented, with classification performed on either a per-pixel or sub-pixel basis with complete disregard for contextual information about neighbouring pixels. However, intra-class variability due to heterogeneous surface cover (i.e., vegetation and soil) or regional variations in mineralogy and chemical composition can result in the generation of unrealistic, generalised lithological maps that exhibit the “salt-and-pepper” artefact of spurious pixel classifications, as well as poorly defined contacts. In this study, an object-based image analysis (OBIA) approach to lithological mapping is evaluated with respect to its ability to overcome these issues by instead classifying groups of contiguous pixels (i.e., objects). Due to significant vegetation cover in the study area, the OBIA approach incorporates airborne multispectral and LiDAR data to indirectly map lithologies by exploiting associations with both topography and vegetation type. The resulting lithological maps were assessed both in terms of their thematic accuracy and ability to accurately delineate lithological contacts. The OBIA approach is found to be capable of generating maps with an overall accuracy of 73.5% through integrating spectral and topographic input variables. When compared to equivalent per-pixel classifications, the OBIA approach achieved thematic accuracy increases of up to 13.1%, whilst also reducing the “salt-and-pepper” artefact to produce more realistic maps. Furthermore, the OBIA approach was also generally capable of mapping lithological contacts more accurately. The importance of optimising the segmentation stage of the OBIA approach is also highlighted. Overall, this study clearly demonstrates the potential of OBIA for lithological mapping applications, particularly in significantly vegetated and heterogeneous terrain
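
    A rough sketch of the generic OBIA workflow the abstract describes: contiguous pixels are grouped into objects by a segmentation algorithm, and spectral plus topographic layers are aggregated per object before classification. SLIC segmentation here merely stands in for whatever segmentation was actually used; the band count, the LiDAR-derived layer, and all parameters are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import slic

rng = np.random.default_rng(0)
# Illustrative inputs: a 4-band multispectral image and a LiDAR-derived elevation layer.
multispectral = rng.random((200, 200, 4))
elevation = rng.random((200, 200))

# Group contiguous pixels into objects (algorithm and parameters are assumptions, not the study's).
objects = slic(multispectral, n_segments=300, compactness=10, start_label=1, channel_axis=-1)

# Aggregate per-object mean features: spectral bands plus topography.
labels = np.unique(objects)
features = np.column_stack(
    [ndimage.mean(multispectral[..., b], labels=objects, index=labels) for b in range(4)]
    + [ndimage.mean(elevation, labels=objects, index=labels)]
)
print(features.shape)  # (n_objects, 5): one feature vector per object for the classifier
```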

    Sub-pixel change detection for urban land-cover analysis via multi-temporal remote sensing images

    Conventional change detection approaches are mainly based on per-pixel processing, which ignores the sub-pixel spectral variation resulting from spectral mixture. Especially for the medium-resolution remote sensing images used in urban land-cover change monitoring, land use/cover components within a single pixel are usually complicated and heterogeneous due to the limited spatial resolution. Thus, traditional hard detection methods based on the pure-pixel assumption may inevitably lead to a high level of omission and commission errors, degrading the overall accuracy of change detection. In order to address this issue and find a possible way to exploit spectral variation at the sub-pixel level, a novel change detection scheme is designed based on spectral mixture analysis and decision-level fusion. A nonlinear spectral mixture model is selected for spectral unmixing, and change detection is implemented at the sub-pixel level by investigating subtle inner-pixel changes and combining multiple compositi..
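
    A simplified sketch of abundance-level change detection under a linear mixing model (the paper itself adopts a nonlinear model): both dates are unmixed against shared endmembers and change is flagged where the abundance difference exceeds a threshold. The endmembers, threshold, and unmixing routine below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixels, E):
    """NNLS abundances for each pixel (row) against endmember matrix E (bands x endmembers)."""
    A = np.array([nnls(E, y)[0] for y in pixels])
    return A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)  # approximate sum-to-one

rng = np.random.default_rng(0)
n_pixels, n_bands, n_endmembers = 1000, 30, 3
E = rng.uniform(0.0, 1.0, (n_bands, n_endmembers))            # shared endmember signatures
t1 = rng.dirichlet(np.ones(n_endmembers), n_pixels)           # abundances at date 1
t2 = t1.copy()
t2[:100] = rng.dirichlet(np.ones(n_endmembers), 100)          # simulate change in 100 pixels
Y1, Y2 = t1 @ E.T, t2 @ E.T                                   # observed spectra per date

delta = np.abs(unmix(Y1, E) - unmix(Y2, E)).max(axis=1)       # largest abundance change per pixel
change_map = delta > 0.2                                       # threshold is an assumption
print("pixels flagged as changed:", int(change_map.sum()))
```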