
    A Method for Finding Structured Sparse Solutions to Non-negative Least Squares Problems with Applications

    Full text link
    Demixing problems in many areas such as hyperspectral imaging and differential optical absorption spectroscopy (DOAS) often require finding sparse nonnegative linear combinations of dictionary elements that match observed data. We show how aspects of these problems, such as misalignment of DOAS references and uncertainty in hyperspectral endmembers, can be modeled by expanding the dictionary with grouped elements and imposing a structured sparsity assumption that the combinations within each group should be sparse or even 1-sparse. If the dictionary is highly coherent, it is difficult to obtain good solutions using convex or greedy methods, such as non-negative least squares (NNLS) or orthogonal matching pursuit. We use penalties related to the Hoyer measure, which is the ratio of the $\ell_1$ and $\ell_2$ norms, as sparsity penalties to be added to the objective in NNLS-type models. For solving the resulting nonconvex models, we propose a scaled gradient projection algorithm that requires solving a sequence of strongly convex quadratic programs. We discuss its close connections to convex splitting methods and difference of convex programming. We also present promising numerical results for example DOAS analysis and hyperspectral demixing problems. Comment: 38 pages, 14 figures
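
    A minimal Python sketch of the underlying idea: projected (sub)gradient descent on an NNLS objective with the $\ell_1/\ell_2$ ratio penalty. This is an illustrative stand-in, not the paper's scaled gradient projection method (which instead solves a sequence of strongly convex quadratic programs); the function name, fixed step size, and iteration count are assumptions.

        import numpy as np

        def nnls_l1_over_l2(A, b, lam=0.1, step=1e-3, iters=5000, eps=1e-12):
            # min_{x >= 0}  0.5*||Ax - b||^2 + lam * ||x||_1 / ||x||_2
            x = np.maximum(A.T @ b, 0.0)                 # nonnegative warm start
            for _ in range(iters):
                grad_fit = A.T @ (A @ x - b)
                n1, n2 = x.sum(), np.linalg.norm(x) + eps
                # gradient of the ratio penalty (x >= 0, so ||x||_1 = sum(x))
                grad_pen = lam * (1.0 / n2 - n1 * x / n2**3)
                x = np.maximum(x - step * (grad_fit + grad_pen), 0.0)
            return x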

    A Homotopy-based Algorithm for Sparse Multiple Right-hand Sides Nonnegative Least Squares

    Full text link
    Nonnegative least squares (NNLS) problems arise in models that rely on additive linear combinations. In particular, they are at the core of nonnegative matrix factorization (NMF) algorithms. The nonnegativity constraint is known to naturally favor sparsity, that is, solutions with few non-zero entries. However, it is often useful to further enhance this sparsity, as it improves the interpretability of the results and helps reduce noise. While the $\ell_0$-"norm", equal to the number of non-zero entries in a vector, is a natural sparsity measure, its combinatorial nature makes it difficult to use in practical optimization schemes. Most existing approaches thus rely either on its convex surrogate, the $\ell_1$-norm, or on heuristics such as greedy algorithms. In the case of multiple right-hand sides NNLS (MNNLS), which arises within NMF algorithms, sparsity is often enforced column- or row-wise, and the fact that the solution is a matrix is not exploited. In this paper, we first introduce a novel formulation for sparse MNNLS, with a matrix-wise $\ell_0$ sparsity constraint. Then, we present a two-step algorithm to tackle this problem. The first step uses a homotopy algorithm to produce the whole regularization path for all the $\ell_1$-penalized NNLS problems arising in MNNLS, that is, to produce a set of solutions representing different tradeoffs between reconstruction error and sparsity. The second step selects solutions among these paths in order to build a sparsity-constrained matrix that minimizes the reconstruction error. We illustrate the advantages of our proposed algorithm for the unmixing of facial and hyperspectral images. Comment: 20 pages + 7 pages supplementary material
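
    The first step can be mimicked in Python with a simple lambda grid in place of the exact homotopy path. A hedged sketch under that assumption (for x >= 0 the $\ell_1$ norm is just the sum of the entries, so the penalty enters the gradient as a constant); the function names, grid, and thresholds are illustrative:

        import numpy as np

        def l1_nnls(A, b, lam, iters=3000):
            # projected gradient for min_{x>=0} 0.5*||Ax - b||^2 + lam*sum(x)
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                x = np.maximum(x - (A.T @ (A @ x - b) + lam) / L, 0.0)
            return x

        def tradeoff_path(A, b, lams=np.logspace(-3, 1, 25)):
            # sweep lambda and record (non-zeros, residual, solution) triples:
            # a grid stand-in for the paper's exact homotopy path
            return [(int(np.count_nonzero(x > 1e-8)),
                     float(np.linalg.norm(A @ x - b)), x)
                    for x in (l1_nnls(A, b, lam) for lam in lams)]

    The paper's second step would then pick one point per column across all such paths so that the assembled matrix meets the matrix-wise $\ell_0$ budget.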

    Total Variation Spatial Regularization for Sparse Hyperspectral Unmixing

    Full text link

    Image Processing and Machine Learning for Hyperspectral Unmixing: An Overview and the HySUPP Python Package

    Full text link
    Spectral pixels are often a mixture of the pure spectra of the materials, called endmembers, due to the low spatial resolution of hyperspectral sensors, double scattering, and intimate mixtures of materials in the scenes. Unmixing estimates the fractional abundances of the endmembers within each pixel. Depending on the prior knowledge of endmembers, linear unmixing can be divided into three main groups: supervised, semi-supervised, and unsupervised (blind) linear unmixing. Advances in image processing and machine learning have substantially affected unmixing. This paper provides an overview of advanced and conventional unmixing approaches. Additionally, we draw a critical comparison between advanced and conventional techniques from the three categories. We compare the performance of the unmixing techniques on three simulated and two real datasets. The experimental results reveal the advantages of different unmixing categories for different unmixing scenarios. Moreover, we provide an open-source Python-based package, available at https://github.com/BehnoodRasti/HySUPP, to reproduce the results.
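
    In the supervised setting the abstract describes, linear unmixing reduces to one NNLS problem per pixel. A minimal SciPy sketch, assuming known endmember signatures; the function `unmix`, its shapes, and the final rescaling (a crude substitute for a fully constrained sum-to-one solver) are illustrative, not the HySUPP API:

        import numpy as np
        from scipy.optimize import nnls

        def unmix(E, Y):
            # E: (bands x p) endmember signatures, Y: (bands x n) pixels
            # returns (p x n) abundances with Y ~ E @ A, one NNLS per pixel
            A = np.column_stack([nnls(E, Y[:, j])[0] for j in range(Y.shape[1])])
            return A / np.maximum(A.sum(axis=0, keepdims=True), 1e-12)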

    A Study on Denoising and Unmixing of Hyperspectral Images Considering Spectral Linearity

    Get PDF
    This study aims to generalize the color line to an $M$-dimensional spectral line feature ($M>3$) and introduces methods for denoising and unmixing of hyperspectral images based on the spectral linearity. For denoising, we propose a local spectral component decomposition method based on the spectral line. We first calculate the spectral line of an $M$-channel image; then, using the line, we decompose the image into three components: a single $M$-channel image and two gray-scale images. By virtue of the decomposition, the noise is concentrated in the two gray-scale images, so the algorithm needs to denoise only two gray-scale images, regardless of the number of channels. For unmixing, we propose an algorithm that exploits the low-rank local abundance by applying the nuclear norm to the abundance matrix for local regions of the spatial and abundance domains. In the optimization problem, the local abundance regularizer is combined with the $\ell_{2,1}$ norm and the total variation. (The University of Kitakyushu)
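
    The low-rank local abundance regularizer rests on the proximal operator of the nuclear norm, i.e. singular value thresholding. A self-contained Python sketch of that one building block (the full method also involves the $\ell_{2,1}$ norm and total variation terms, which are omitted here):

        import numpy as np

        def svt(M, tau):
            # proximal operator of tau*||.||_* : soft-threshold the singular values
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt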

    A convex model for non-negative matrix factorization and dimensionality reduction on physical space

    Full text link
    A collaborative convex framework for factoring a data matrix $X$ into a non-negative product $AS$, with a sparse coefficient matrix $S$, is proposed. We restrict the columns of the dictionary matrix $A$ to coincide with certain columns of the data matrix $X$, thereby guaranteeing a physically meaningful dictionary and dimensionality reduction. We use $\ell_{1,\infty}$ regularization to select the dictionary from the data and show this leads to an exact convex relaxation of $\ell_0$ in the case of distinct noise-free data. We also show how to relax the restriction-to-$X$ constraint by initializing an alternating minimization approach with the solution of the convex model, obtaining a dictionary close to but not necessarily in $X$. We focus on applications of the proposed framework to hyperspectral endmember and abundance identification and also show an application to blind source separation of NMR data. Comment: 14 pages, 9 figures. EE and JX were supported by NSF grants DMS-0911277 and PRISM-0948247, MM by the German Academic Exchange Service (DAAD), SO and MM by NSF grants DMS-0835863, DMS-0914561, DMS-0914856 and ONR grant N00014-08-1119, and GS was supported by NSF, NGA, ONR, ARO, DARPA, and NSSEFF.
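
    The restriction of the dictionary $A$ to columns of the data $X$ (a "self-dictionary") can be illustrated with the greedy Successive Projection Algorithm, named plainly here as a deliberately simpler stand-in for the paper's convex $\ell_{1,\infty}$ selection model:

        import numpy as np

        def spa(X, r):
            # greedily pick r columns of X to serve as the dictionary A = X[:, idx]
            R = X.astype(float).copy()
            idx = []
            for _ in range(r):
                j = int(np.argmax((R ** 2).sum(axis=0)))   # largest residual column
                u = R[:, j] / np.linalg.norm(R[:, j])
                R -= np.outer(u, u @ R)                    # project out that direction
                idx.append(j)
            return idx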

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    Get PDF
    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th till Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building within walking distance of both hotels and town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1