
    Distributed Unmixing of Hyperspectral Data With Sparsity Constraint

    Spectral unmixing (SU) is a data processing problem in hyperspectral remote sensing. The main challenge in SU is to identify endmembers and their weights accurately. Nonnegative matrix factorization (NMF) and its extensions are widely used to estimate the signature and fractional abundance matrices in the blind SU problem. One constraint added to NMF is a sparsity constraint regularized by the L1/2 norm. In this paper, a new algorithm based on distributed optimization is proposed for spectral unmixing. In the proposed algorithm, a network of single-node clusters is employed, with each pixel of the hyperspectral image treated as a node in this network. The distributed unmixing problem with the sparsity constraint is optimized with a diffusion LMS strategy, and the update equations for the fractional abundance and signature matrices are derived. Simulation results based on defined performance metrics illustrate the advantage of the proposed algorithm in spectral unmixing of hyperspectral data compared with other methods. The results show that the AAD and SAD of the proposed approach improve by about 6 and 27 percent, respectively, compared with distributed unmixing at SNR = 25 dB. Comment: 6 pages, conference paper
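
    To make the sparsity-regularized NMF formulation above concrete, the following is a minimal centralized sketch of L1/2-regularized NMF unmixing with multiplicative updates, not the paper's distributed diffusion-LMS algorithm; the function name, hyperparameters, and update rules follow the commonly cited L1/2-NMF scheme and are assumptions, not the authors' code.

```python
import numpy as np

def l12_sparse_nmf(Y, P, lam=0.1, n_iter=200, eps=1e-9, seed=0):
    """Sketch of L1/2-sparsity NMF unmixing: Y (bands x pixels) ~= A @ S.

    A holds the endmember signatures (bands x P) and S the fractional
    abundances (P x pixels); lam weights the L1/2 sparsity term on S.
    """
    rng = np.random.default_rng(seed)
    L, N = Y.shape
    A = rng.random((L, P)) + eps
    S = rng.random((P, N)) + eps
    for _ in range(n_iter):
        # Multiplicative update for the signature matrix (plain NMF step).
        A *= (Y @ S.T) / (A @ S @ S.T + eps)
        # Multiplicative update for the abundances with the L1/2 penalty term.
        S *= (A.T @ Y) / (A.T @ A @ S + 0.5 * lam * S ** -0.5 + eps)
    return A, S
```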

    Semi-supervised linear spectral unmixing using a hierarchical Bayesian model for hyperspectral imagery

    This paper proposes a hierarchical Bayesian model that can be used for semi-supervised hyperspectral image unmixing. The model assumes that the pixel reflectances result from linear combinations of pure component spectra contaminated by additive Gaussian noise. The abundance parameters appearing in this model satisfy positivity and additivity constraints. These constraints are naturally expressed in a Bayesian context by using appropriate abundance prior distributions. The posterior distributions of the unknown model parameters are then derived. A Gibbs sampler allows one to draw samples distributed according to the posteriors of interest and to estimate the unknown abundances. Finally, an extension of the algorithm is studied for mixtures with an unknown number of spectral components belonging to a known library. The performance of the different unmixing strategies is evaluated via simulations conducted on synthetic and real data.
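
    For reference, here is a small deterministic sketch of abundance estimation under the same positivity and sum-to-one constraints, using the common trick of appending a heavily weighted row of ones to an NNLS solve; it is a stand-in for, not a reproduction of, the paper's Gibbs-sampling Bayesian estimator, and the function name and weighting are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(y, M, delta=1e3):
    """Fully constrained least-squares abundances for a single pixel.

    y: observed reflectance (bands,); M: endmember library (bands x R).
    Positivity comes from NNLS; the sum-to-one (additivity) constraint is
    enforced approximately by appending a heavily weighted row of ones.
    """
    M_aug = np.vstack([M, delta * np.ones((1, M.shape[1]))])
    y_aug = np.append(y, delta)
    abundances, _ = nnls(M_aug, y_aug)
    return abundances
```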

    A Quantitative Assessment of Forest Cover Change in the Moulouya River Watershed (Morocco) by the Integration of a Subpixel-Based and Object-Based Analysis of Landsat Data

    A quantitative assessment of forest cover change in the Moulouya River watershed (Morocco) was carried out by means of an innovative approach based on atmospherically corrected reflectance Landsat images from 1984 (Landsat 5 Thematic Mapper) and 2013 (Landsat 8 Operational Land Imager). An object-based image analysis (OBIA) was undertaken to classify segmented objects as forested or non-forested within the 2013 Landsat orthomosaic. A Random Forest classifier was applied to a set of training data based on a feature vector composed of different types of object features, such as vegetation indices, mean spectral values and pixel-based fractional cover derived from probabilistic spectral mixture analysis. The very high spatial resolution image data of Google Earth 2013 were employed to train/validate the Random Forest classifier, ranking the NDVI vegetation index and the corresponding pixel-based percentages of photosynthetic vegetation and bare soil as the most statistically significant object features for extracting forested and non-forested areas. An overall classification accuracy of 92.34% was achieved. The previously developed classification scheme was then applied to the 1984 Landsat data to extract the forest cover change between 1984 and 2013, showing a slight net increase of 5.3% (ca. 8800 ha) in forested areas for the whole region.
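
    A minimal sketch of the object-based classification step described above, assuming a precomputed per-object feature table (NDVI, mean spectral values, subpixel fractions) and binary forest labels; the file names, feature layout, and hyperparameters are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical per-object feature table: NDVI, mean spectral values and
# subpixel fractions of photosynthetic vegetation / bare soil per object.
X = np.load("object_features.npy")   # shape (n_objects, n_features), assumed file
y = np.load("object_labels.npy")     # 1 = forested, 0 = non-forested, assumed file

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("feature importances:", clf.feature_importances_)
```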

    Hyperspectral Endmember Extraction Techniques

    Hyperspectral data processing and analysis plays a vital role in the detection, identification, discrimination and estimation of earth surface materials. It involves atmospheric correction, dimensionality reduction, endmember extraction, spectral unmixing and classification phases. One of the ultimate aims of hyperspectral data processing and analysis is to achieve high classification accuracy. The classification accuracy of hyperspectral data depends largely upon image-derived endmembers. Ideally, an endmember is defined as a spectrally unique, idealized and pure signature of a surface material. Extraction of consistent and suitable endmembers is one of the important criteria for achieving high accuracy in hyperspectral data classification and spectral unmixing. Several methods, strategies and algorithms have been proposed by various researchers to extract endmembers from hyperspectral imagery. Most of these techniques and algorithms depend significantly on user-defined input parameters, and this is problematic because there is no standard specification for these parameters, which leads to inconsistencies in the overall endmember extraction. To resolve these problems, a systematic, generic, robust and automated mechanism of endmember extraction is required. This chapter presents a generic approach to endmember extraction and highlights the limitations and challenges of popular algorithms.
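
    As an illustration of the parameter-sensitivity issue raised above, here is a sketch of one classical endmember extraction algorithm, the Pixel Purity Index, whose output depends directly on user-chosen parameters such as the number of random skewers; it is a generic textbook example, not an algorithm specific to this chapter.

```python
import numpy as np

def ppi_candidates(X, n_skewers=1000, n_endmembers=5, seed=0):
    """Pixel Purity Index sketch; X is (n_pixels, n_bands).

    Pixels are projected onto random unit vectors ("skewers") and scored by
    how often they fall at an extreme; the highest-scoring pixels are
    endmember candidates. Note that n_skewers and n_endmembers are exactly
    the kind of user-defined parameters discussed above.
    """
    rng = np.random.default_rng(seed)
    skewers = rng.normal(size=(X.shape[1], n_skewers))
    skewers /= np.linalg.norm(skewers, axis=0)
    proj = X @ skewers                      # (n_pixels, n_skewers)
    counts = np.zeros(X.shape[0], dtype=int)
    for j in range(n_skewers):
        counts[np.argmax(proj[:, j])] += 1
        counts[np.argmin(proj[:, j])] += 1
    return np.argsort(counts)[::-1][:n_endmembers]
```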

    Smoothed Separable Nonnegative Matrix Factorization

    Given a set of data points belonging to the convex hull of a set of vertices, a key problem in data analysis and machine learning is to estimate these vertices in the presence of noise. Many algorithms have been developed under the assumption that there is at least one data point near each vertex; two of the most widely used ones are vertex component analysis (VCA) and the successive projection algorithm (SPA). This assumption is known as the pure-pixel assumption in blind hyperspectral unmixing, and as the separability assumption in nonnegative matrix factorization. More recently, Bhattacharyya and Kannan (ACM-SIAM Symposium on Discrete Algorithms, 2020) proposed an algorithm for learning a latent simplex (ALLS) that relies on the assumption that there is more than one data point near each vertex. In that scenario, ALLS is probabilistically more robust to noise than algorithms based on the separability assumption. In this paper, inspired by ALLS, we propose smoothed VCA (SVCA) and smoothed SPA (SSPA), which generalize VCA and SPA by assuming the presence of several data points near each vertex. We illustrate the effectiveness of SVCA and SSPA over VCA, SPA and ALLS on synthetic data sets and on the unmixing of hyperspectral images. Comment: 27 pages, 11 figures
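
    For context, here is a minimal sketch of the plain successive projection algorithm (SPA) that SSPA generalizes; the smoothing step that aggregates several nearby data points per vertex is not reproduced here.

```python
import numpy as np

def spa(X, r):
    """Successive projection algorithm sketch; the columns of X are data points.

    Under the pure-pixel / separability assumption, returns the indices of r
    columns that approximate the vertices: repeatedly pick the column with the
    largest residual norm and project the data onto its orthogonal complement.
    """
    R = X.astype(float).copy()
    selected = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        selected.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)             # remove the selected direction
    return selected
```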

    A Method for Finding Structured Sparse Solutions to Non-negative Least Squares Problems with Applications

    Demixing problems in many areas such as hyperspectral imaging and differential optical absorption spectroscopy (DOAS) often require finding sparse nonnegative linear combinations of dictionary elements that match observed data. We show how aspects of these problems, such as misalignment of DOAS references and uncertainty in hyperspectral endmembers, can be modeled by expanding the dictionary with grouped elements and imposing a structured sparsity assumption that the combinations within each group should be sparse or even 1-sparse. If the dictionary is highly coherent, it is difficult to obtain good solutions using convex or greedy methods, such as non-negative least squares (NNLS) or orthogonal matching pursuit. We use penalties related to the Hoyer measure, which is the ratio of the l1 and l2 norms, as sparsity penalties to be added to the objective in NNLS-type models. For solving the resulting nonconvex models, we propose a scaled gradient projection algorithm that requires solving a sequence of strongly convex quadratic programs. We discuss its close connections to convex splitting methods and difference-of-convex programming. We also present promising numerical results for example DOAS analysis and hyperspectral demixing problems. Comment: 38 pages, 14 figures
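
    A simplified sketch of the kind of model described above: nonnegative least squares with an l1/l2 (Hoyer-type) ratio penalty, solved here by plain projected gradient rather than the paper's scaled gradient projection over convex quadratic programs; the step size, penalty weight, and iteration count are assumptions.

```python
import numpy as np

def l1_over_l2_nnls(A, b, lam=0.1, n_iter=500, eps=1e-12):
    """Projected-gradient sketch for NNLS with an l1/l2 (Hoyer-type) ratio penalty.

    Approximately minimizes 0.5*||A x - b||^2 + lam * ||x||_1 / ||x||_2
    subject to x >= 0. The problem is nonconvex, so this only seeks a
    stationary point.
    """
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)     # 1 / Lipschitz constant of the fit term
    x = np.maximum(A.T @ b, 0.0) + eps
    for _ in range(n_iter):
        l1, l2 = x.sum(), np.linalg.norm(x) + eps
        grad_ratio = 1.0 / l2 - (l1 / l2 ** 3) * x   # gradient of ||x||_1/||x||_2 for x >= 0
        grad = A.T @ (A @ x - b) + lam * grad_ratio
        x = np.maximum(x - step * grad, 0.0)
    return x
```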

    Macroscale multimodal imaging reveals ancient painting production technology and the vogue in Greco-Roman Egypt.

    Macroscale multimodal chemical imaging combining hyperspectral diffuse reflectance (400-2500 nm), luminescence (400-1000 nm), and X-ray fluorescence (XRF, 2 to 25 keV) data is uniquely equipped for noninvasive characterization of heterogeneous complex systems such as paintings. Here we present the first application of multimodal chemical imaging to analyze the production technology of an 1,800-year-old painting and one of the oldest surviving encaustic ("burned in") paintings in the world. Co-registration of the data cubes from these three hyperspectral imaging modalities enabled the comparison of reflectance, luminescence, and XRF spectra at each pixel in the image for the entire painting. By comparing the molecular and elemental spectral signatures at each pixel, this fusion of the data allowed for a more thorough identification and mapping of the painting's constituent organic and inorganic materials, revealing key information on the selection of raw materials, the production sequence, and the fashion aesthetics and chemical arts practiced in Egypt in the second century AD.
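
    A minimal sketch of the per-pixel data fusion described above, assuming the three modalities have already been co-registered onto the same spatial grid; the file names and array layout are hypothetical.

```python
import numpy as np

# Hypothetical co-registered data cubes sharing the same H x W spatial grid:
# diffuse reflectance (400-2500 nm), luminescence (400-1000 nm), XRF (2-25 keV).
refl = np.load("reflectance_cube.npy")    # (H, W, n_reflectance_bands), assumed file
lum = np.load("luminescence_cube.npy")    # (H, W, n_luminescence_bands), assumed file
xrf = np.load("xrf_cube.npy")             # (H, W, n_xrf_channels), assumed file

# Stack the modalities along the spectral axis so that every pixel carries one
# combined molecular + elemental signature for comparison or mapping.
fused = np.concatenate([refl, lum, xrf], axis=-1)
print("fused cube shape:", fused.shape)
```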