124 research outputs found

    Fast and Robust Recursive Algorithms for Separable Nonnegative Matrix Factorization

    In this paper, we study the nonnegative matrix factorization problem under the separability assumption (that is, there exists a cone spanned by a small subset of the columns of the input nonnegative data matrix that contains all the columns), which is equivalent to the hyperspectral unmixing problem under the linear mixing model and the pure-pixel assumption. We present a family of fast recursive algorithms and prove that they are robust under any small perturbation of the input data matrix. This family generalizes several existing hyperspectral unmixing algorithms and hence provides for the first time a theoretical justification of their better practical performance. Comment: 30 pages, 2 figures, 7 tables. Main change: improvement of the bound of the main theorem (Th. 3), replacing r with sqrt(r).
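
    A minimal numpy sketch of a successive-projection-style recursive selection step, in the spirit of the algorithm family described above (the function name, the residual-norm selection criterion, and a fixed number r of extracted columns are assumptions, not the paper's exact algorithm):

```python
import numpy as np

def spa_like_selection(X, r):
    """Recursively pick r columns of a nonnegative data matrix X (bands x pixels)
    whose conic hull approximately contains all columns, in the spirit of
    successive-projection-type separable NMF algorithms."""
    R = X.astype(float).copy()                      # residual matrix
    selected = []
    for _ in range(r):
        j = int(np.argmax(np.sum(R ** 2, axis=0)))  # column with largest residual energy
        selected.append(j)
        u = R[:, j:j + 1]
        u = u / np.linalg.norm(u)
        R = R - u @ (u.T @ R)                       # project residual onto the orthogonal complement
    return selected
```

    Each iteration extracts the column with the largest residual energy and projects it out, which is what makes the selection both recursive and cheap.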

    Regularization approaches to hyperspectral unmixing

    We consider a few different approaches to hyperspectral unmixing of remotely sensed imagery which exploit and extend recent advances in sparse statistical regularization, handling of constraints and dictionary reduction. Hyperspectral unmixing methods often use a conventional least-squares based lasso, which assumes that the data follow a Gaussian distribution; we use this as a starting point. In addition, we consider a robust approach to sparse spectral unmixing of remotely sensed imagery which reduces the sensitivity of the estimator to outliers. Due to water absorption and atmospheric effects that affect data collection, hyperspectral images are prone to large outliers. The framework comprises several well-principled penalties. A non-convex, hyper-Laplacian prior is incorporated to induce sparsity in the number of active pure spectral components, and a total variation regularizer is included to exploit the spatial-contextual information of hyperspectral images. Enforcing the sum-to-one and non-negativity constraints on the model parameters is essential for obtaining realistic estimates. We consider two approaches to account for this: an iterative heuristic renormalization and projection onto the positive orthant, and a reparametrization of the coefficients which gives rise to a theoretically founded method. Since the large size of modern spectral libraries can not only present computational challenges but also introduce collinearities between regressors, we introduce a library reduction step. This uses the multiple signal classification (MUSIC) array processing algorithm, which both speeds up unmixing and yields superior results in scenarios where the library size is extensive. We show that although these problems are non-convex, they can be solved by a properly defined algorithm based on either trust region optimization or iteratively reweighted least squares. The performance of the different approaches is validated in several simulated and real hyperspectral data experiments.
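
    A hedged sketch of the lasso starting point plus constraint handling described above, using the iterative heuristic of projection onto the positive orthant followed by sum-to-one renormalization (the step-size rule, the plain l1 penalty in place of the hyper-Laplacian prior, and the omission of the total variation term are simplifications):

```python
import numpy as np

def sparse_unmix(y, D, lam=1e-3, n_iter=500):
    """Heuristic sparse unmixing of one pixel y against a spectral library D
    (bands x members): gradient step on the least-squares term, soft-thresholding
    for the l1 penalty, then projection onto the positive orthant and
    renormalization to enforce the sum-to-one constraint."""
    m = D.shape[1]
    a = np.full(m, 1.0 / m)
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)        # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)
        a = a - step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft-threshold (l1 proximal step)
        a = np.maximum(a, 0.0)                      # projection onto the positive orthant
        s = a.sum()
        if s > 0:
            a = a / s                               # sum-to-one renormalization heuristic
    return a
```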

    Nonlinear unmixing of hyperspectral images: Models and algorithms

    When considering the problem of unmixing hyperspectral images, most of the literature in the geoscience and image processing areas relies on the widely used linear mixing model (LMM). However, the LMM may not be valid, and other nonlinear models need to be considered, for instance, when there are multiscattering effects or intimate interactions. Consequently, over the last few years, several significant contributions have been proposed to overcome the limitations inherent in the LMM. In this article, we present an overview of recent advances in nonlinear unmixing modeling.
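
    To make the distinction concrete, here is a toy comparison of the LMM with a generic bilinear-type mixture that adds pairwise endmember interaction terms (the interaction coefficient gamma and this particular bilinear form are illustrative assumptions, not a specific model from the article):

```python
import numpy as np

def lmm(M, a):
    """Linear mixing model: pixel = endmember matrix (bands x R) times abundances."""
    return M @ a

def bilinear_mixture(M, a, gamma=0.5):
    """Toy bilinear model: LMM plus pairwise endmember interaction terms,
    one common way multiscattering effects are introduced."""
    y = M @ a
    R = M.shape[1]
    for i in range(R):
        for j in range(i + 1, R):
            y = y + gamma * a[i] * a[j] * (M[:, i] * M[:, j])
    return y
```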

    Exploitation of Intra-Spectral Band Correlation for Rapid Feature Selection and Target Identification in Hyperspectral Imagery

    This research extends the work produced by Capt. Robert Johnson for detecting target pixels within hyperspectral imagery (HSI). The methodology replaces Principal Components Analysis for dimensionality reduction with a clustering algorithm which seeks to associate spectral rather than spatial dimensions. By seeking similar spectral dimensions, the assumption of no a priori knowledge of the relationship between clustered members can be eliminated, and clusters are formed by seeking highly correlated adjacent spectral bands. Following dimensionality reduction, Independent Components Analysis (ICA) is used to perform feature extraction. Kurtosis and Potential Target Fraction are added to Maximum Component Score and Potential Target Signal to Noise Ratio as mechanisms for discriminating between target and non-target maps. A new methodology exploiting Johnson’s Maximum Distance Secant Line method replaces the first zero bin method for identifying the breakpoint between signal and noise. A parameter known as Left Partial Kurtosis is defined and applied to determine when target pixels are likely to be found in the left tail of each signal histogram. A variable control over the number of iterations of Adaptive Iterative Noise filtering is introduced. Results of this modified algorithm are compared to those of Johnson’s AutoGAD [2007].
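
    A simplified sketch of the band-grouping-then-ICA pipeline outlined above: adjacent bands are merged while their correlation stays above a threshold, and scikit-learn's FastICA then extracts component maps (the threshold, the mean-per-group reduction, and the FastICA settings are assumptions; the target/non-target discrimination metrics are not shown):

```python
import numpy as np
from sklearn.decomposition import FastICA

def correlation_band_groups(cube, threshold=0.95):
    """Group adjacent spectral bands whose correlation exceeds a threshold.
    cube: (rows, cols, bands) hyperspectral image."""
    X = cube.reshape(-1, cube.shape[-1])            # pixels x bands
    groups, current = [], [0]
    for b in range(1, X.shape[1]):
        r = np.corrcoef(X[:, b - 1], X[:, b])[0, 1]
        if r >= threshold:
            current.append(b)
        else:
            groups.append(current)
            current = [b]
    groups.append(current)
    return groups

def reduce_and_extract(cube, threshold=0.95, n_components=10):
    """Replace each correlated band group by its mean band, then run ICA."""
    X = cube.reshape(-1, cube.shape[-1])
    groups = correlation_band_groups(cube, threshold)
    reduced = np.column_stack([X[:, g].mean(axis=1) for g in groups])
    ica = FastICA(n_components=min(n_components, reduced.shape[1]), random_state=0)
    return ica.fit_transform(reduced)               # independent component maps (pixels x components)
```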

    Manifold learning based spectral unmixing of hyperspectral remote sensing data

    Nonlinear mixing effects inherent in hyperspectral data are not properly represented in linear spectral unmixing models. Although direct nonlinear unmixing models provide the capability to capture nonlinear phenomena, they are difficult to formulate and the results are not always generalizable. Manifold learning based spectral unmixing accommodates nonlinearity in the data in the feature extraction stage, followed by linear mixing, thereby incorporating some characteristics of nonlinearity while retaining the advantages of linear unmixing approaches. Since endmember selection is critical to successful spectral unmixing, it is important to select proper endmembers from the manifold space. However, excessive computational burden hinders the development of manifolds for large-scale remote sensing datasets. This dissertation addresses issues related to the high computational overhead of manifold learning when developing representative manifolds for the spectral unmixing task. Manifold approximations using landmarks are popular for mitigating the computational complexity of manifold learning. A new, computationally effective landmark selection method that exploits spatial redundancy in the imagery is proposed. A robust, less costly landmark set with low spectral and spatial redundancy is successfully incorporated with a hybrid manifold which shares properties of both global and local manifolds. While landmark methods reduce computational demand, the resulting manifolds may not represent subtle features of the manifold adequately. Active learning heuristics are introduced to increase the number of landmarks, with the goal of developing more representative manifolds for spectral unmixing. By communicating between the landmark set and the query criteria relative to spectral unmixing, more representative and stable manifolds with less spectrally and spatially redundant landmarks are developed. A new ranking method, based on pixels with locally high spectral variability within image subsets and on convex geometry, finds a solution more quickly and precisely. Experiments were conducted to evaluate the proposed methods using the AVIRIS Cuprite hyperspectral reference dataset. A case study of manifold learning based spectral unmixing in agricultural areas is included in the dissertation.

    Remotely sensed data collected by airborne or spaceborne sensors are utilized to quantify crop residue cover over an extensive area. Although remote sensing indices are popular for characterizing residue amounts, they are not effective with noisy Hyperion data because the effect of residual striping artifacts is amplified in ratios involving band differences. In this case study, spectral unmixing techniques are investigated for estimating crop residue as an alternative to empirical models developed using band-based indices. The spectral unmixing techniques, and especially the manifold learning approaches, provide more robust, lower-RMSE estimates of crop residue cover than the hyperspectral index based method for Hyperion data.
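
    A toy illustration of landmark selection with low spatial and spectral redundancy, as motivated above: candidates come from a coarse spatial grid and spectrally near-duplicate candidates are dropped (the grid step, the distance tolerance, and the greedy rule are assumptions; the active learning heuristics and hybrid manifold construction are not reproduced):

```python
import numpy as np

def select_landmarks(cube, spatial_step=5, spectral_tol=0.05):
    """Pick landmark pixels on a coarse spatial grid (exploiting spatial redundancy),
    then greedily drop candidates that are spectrally too close to an already
    selected landmark. cube: (rows, cols, bands) with reflectance-like values."""
    rows, cols, bands = cube.shape
    candidates = cube[::spatial_step, ::spatial_step, :].reshape(-1, bands)
    landmarks = []
    for x in candidates:
        if all(np.linalg.norm(x - l) > spectral_tol for l in landmarks):
            landmarks.append(x)
    return np.array(landmarks)
```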

    Estimating the Intrinsic Dimension of Hyperspectral Images Using a Noise-Whitened Eigengap Approach

    Linear mixture models are commonly used to represent a hyperspectral data cube as linear combinations of endmember spectra. However, determining the number of endmembers for images embedded in noise is a crucial task. This paper proposes a fully automatic approach for estimating the number of endmembers in hyperspectral images. The estimation is based on recent results of random matrix theory related to the so-called spiked population model. More precisely, we study the gap between successive eigenvalues of the sample covariance matrix constructed from high-dimensional noisy samples. The resulting estimation strategy is fully automatic and robust to correlated noise owing to the consideration of a noise-whitening step. This strategy is validated on both synthetic and real images. The experimental results are very promising and show the accuracy of this algorithm with respect to state-of-the-art algorithms.
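
    A bare-bones sketch of the noise-whitened eigengap idea: whiten with a noise covariance estimate, compute the eigenvalues of the sample covariance, and take the position of the largest gap as the number of endmembers (using the raw largest gap is a simplification of the spiked-population-model criterion; the noise covariance estimate is assumed to be given and positive definite):

```python
import numpy as np

def eigengap_dimension(X, noise_cov):
    """Estimate the number of endmembers from high-dimensional noisy samples.
    X: (pixels, bands) data; noise_cov: (bands, bands) noise covariance estimate."""
    # noise whitening with the Cholesky factor of the noise covariance
    W = np.linalg.inv(np.linalg.cholesky(noise_cov))
    Xw = (X - X.mean(axis=0)) @ W.T
    # eigenvalues of the sample covariance, sorted in decreasing order
    cov = np.cov(Xw, rowvar=False)
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]
    gaps = eig[:-1] - eig[1:]
    return int(np.argmax(gaps)) + 1                 # largest gap marks the signal subspace size
```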

    Tensor-based Hyperspectral Image Processing Methodology and its Applications in Impervious Surface and Land Cover Mapping

    The emergence of hyperspectral imaging provides a new perspective for Earth observation, in addition to previously available orthophoto and multispectral imagery. This thesis focused on both the new data and new methodology in the field of hyperspectral imaging. First, the application of the future hyperspectral satellite EnMAP in impervious surface area (ISA) mapping was studied. During the search for an appropriate ISA mapping procedure for the new data, subpixel classification based on nonnegative matrix factorization (NMF) achieved the best success. The simulated EnMAP image shows great potential in urban ISA mapping, with over 85% accuracy. Unfortunately, the NMF, being based on linear algebra, only considers the spectral information and neglects the spatial information in the original image. The recent wide interest in applying multilinear algebra in computer vision sheds light on this problem and raises the idea of nonnegative tensor factorization (NTF). This thesis found that the NTF has advantages over the NMF when working with medium- rather than high-spatial-resolution hyperspectral images. Furthermore, this thesis proposed to equip the NTF-based subpixel classification methods with variations adopted from the NMF; by adopting these variations, the urban ISA mapping results from the NTF were improved by ~2%. Lastly, the problem known as the curse of dimensionality is an obstacle in hyperspectral image applications. The majority of current dimension reduction (DR) methods are restricted to using only the spectral information, while the spatial information is neglected. To overcome this defect, two spectral-spatial methods, patch-based and tensor-patch-based, were thoroughly studied and compared in this thesis. To date, these two solutions have mainly been used in computer vision studies, and their applications in hyperspectral DR are limited. The patch-based and tensor-patch-based variations greatly improved the quality of dimension-reduced hyperspectral images, which in turn improved the land cover mapping results derived from them. In addition, this thesis proposed to use an improved method to produce an important intermediate result in the patch-based and tensor-patch-based DR process, which further improved the land cover mapping results.
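
    As a point of reference for the NMF-based subpixel classification mentioned above, a minimal scikit-learn sketch that produces abundance-like fraction maps (the post-hoc row normalization, the initialization, and the number of endmembers are assumptions; the NTF extensions and their variations are not shown):

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_abundances(cube, n_endmembers=4):
    """Unsupervised NMF unmixing sketch: factor the (pixels x bands) matrix into
    abundance-like scores W and endmember-like spectra H, then normalize each
    pixel's scores so its fractions sum to one (a post-hoc heuristic)."""
    X = cube.reshape(-1, cube.shape[-1])                    # pixels x bands, nonnegative
    model = NMF(n_components=n_endmembers, init='nndsvda', max_iter=500, random_state=0)
    W = model.fit_transform(X)                              # pixels x endmembers
    H = model.components_                                   # endmembers x bands
    W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    return W.reshape(cube.shape[0], cube.shape[1], n_endmembers), H
```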

    A Study on Denoising and Unmixing of Hyperspectral Images Considering the Linearity of Spectra

    This study aims to generalize the color line to an M-dimensional spectral line feature (M>3) and introduces methods for denoising and unmixing of hyperspectral images based on the spectral linearity. For denoising, we propose a local spectral component decomposition method based on the spectral line. We first calculate the spectral line of an M-channel image; then, using the line, we decompose the image into three components: a single M-channel image and two gray-scale images. By virtue of the decomposition, the noise is concentrated in the two gray-scale images, so the algorithm needs to denoise only two gray-scale images, regardless of the number of channels. For unmixing, we propose an algorithm that exploits the low-rank structure of local abundances by applying the nuclear norm to the abundance matrix for local regions of the spatial and abundance domains. In the optimization problem, the local abundance regularizer is combined with the L2,1 norm and the total variation.
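
    The low-rank local abundance idea can be illustrated by the proximal operator of the nuclear norm (singular value thresholding) applied to one local abundance matrix; the coupling with the L2,1 norm and the total variation in the full optimization problem is not reproduced here:

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm,
    applied here to a local abundance matrix (pixels-in-patch x endmembers)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s = np.maximum(s - tau, 0.0)                    # soft-threshold the singular values
    return (U * s) @ Vt                             # reassemble the low-rank estimate
```

    Applying svt to each local abundance block encourages the abundances within a small spatial region to share a few dominant mixing patterns, which is the low-rank prior the abstract refers to.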

    Modeling spatial and temporal variabilities in hyperspectral image unmixing

    Acquired in hundreds of contiguous spectral bands, hyperspectral (HS) images have received increasing interest due to the significant spectral information they convey about the materials present in a given scene. However, the limited spatial resolution of hyperspectral sensors implies that the observations are mixtures of multiple signatures corresponding to distinct materials. Hyperspectral unmixing aims at identifying the reference spectral signatures composing the data, referred to as endmembers, and their relative proportion in each pixel according to a predefined mixture model. In this context, a given material is commonly assumed to be represented by a single spectral signature. This assumption reveals a first limitation, since endmembers may vary locally within a single image, or from one image to another, due to varying acquisition conditions, such as declivity and possibly complex interactions between the incident light and the observed materials. Unless properly accounted for, spectral variability can have a significant impact on the shape and the amplitude of the acquired signatures, thus inducing possibly significant estimation errors during the unmixing process. A second limitation results from the significant size of HS data, which may preclude the use of batch estimation procedures commonly used in the literature, i.e., techniques exploiting all the available data at once. Such computational considerations notably become prominent when characterizing endmember variability in multi-temporal HS (MTHS) images, i.e., sequences of HS images acquired over the same area at different time instants. The main objective of this thesis is to introduce new models and unmixing procedures that account for spatial and temporal endmember variability. Endmember variability is addressed by considering an explicit variability model reminiscent of the total least squares problem, later extended to account for time-varying signatures. The variability is first estimated using an unsupervised deterministic optimization procedure based on the Alternating Direction Method of Multipliers (ADMM). Given the sensitivity of this approach to abrupt spectral variations, a robust model formulated within a Bayesian framework is introduced. This formulation enables smooth spectral variations to be described in terms of spectral variability, and abrupt changes in terms of outliers. Finally, the computational restrictions induced by the size of the data are tackled by an online estimation algorithm. This work further investigates an asynchronous distributed estimation procedure to estimate the parameters of the proposed models.
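
    A toy alternating scheme for an explicit variability model of the form Y ≈ (M0 + dM) A, as a rough illustration of the perturbed mixing idea described above (a single shared perturbation dM, the ridge penalty mu, and the per-pixel NNLS abundance step are assumptions; the ADMM, Bayesian, online and distributed algorithms of the thesis are not reproduced):

```python
import numpy as np
from scipy.optimize import nnls

def unmix_with_variability(Y, M0, n_iter=20, mu=1.0):
    """Alternating sketch for a perturbed linear mixing model Y[:, n] ~ (M0 + dM) @ A[:, n]:
    nonnegative least squares for the abundances, then a ridge-regularized
    least-squares update of a shared endmember perturbation dM."""
    L, N = Y.shape
    R = M0.shape[1]
    A = np.zeros((R, N))
    dM = np.zeros((L, R))
    for _ in range(n_iter):
        M = M0 + dM
        for n in range(N):                          # abundance step: per-pixel NNLS
            A[:, n], _ = nnls(M, Y[:, n])
        E = Y - M0 @ A                              # residual to be explained by the perturbation
        dM = E @ A.T @ np.linalg.inv(A @ A.T + mu * np.eye(R))   # ridge update of dM
    return A, dM
```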