
    Source Separation in Chemical Analysis: Recent Achievements and Perspectives

    Source separation is one of the most relevant estimation problems in chemistry, since dealing with mixtures is paramount in many kinds of chemical analysis. For instance, there are cases where the analyte is a chemical mixture of different components, e.g., in the analysis of rocks and heterogeneous materials through spectroscopy. Moreover, a mixing process can take place even when the components are not chemically mixed: in ionic analysis of liquid samples, the ions are not chemically bound, but, due to the lack of selectivity of the chemical sensors, the acquired responses may be influenced by ions other than the desired ones. Finally, there are situations where the pure components cannot be isolated chemically, since they appear only in the presence of other components. In this case, blind source separation (BSS) may provide components that cannot be retrieved otherwise. In this paper, our aim is to shed some light on the use of BSS in chemical analysis. We first provide a brief overview of source separation (Section II), with particular attention to the classes of linear and nonlinear mixing models (Sections III and IV, respectively). Then, in Section V, we give some conclusions and focus on challenging aspects found in chemical analysis. Although it deals with a relatively new field of applications, this article is not an exhaustive survey of source separation methods and algorithms, since there are solutions originating in closely related domains (e.g., remote sensing and hyperspectral imaging) that suit several problems found in chemical analysis well. Moreover, we do not discuss supervised source separation methods, which are essentially the multivariate regression techniques found in chemometrics.
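    As a concrete illustration of the linear mixing model of Section III, the sketch below draws synthetic nonnegative sources, mixes them as X = A S, and separates them with FastICA from scikit-learn. The data shapes and the choice of FastICA as separator are illustrative assumptions, not the survey's prescribed method.

```python
# Minimal sketch of the linear mixing model X = A @ S behind most BSS
# formulations; synthetic data, FastICA as one possible separator.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_sources, n_sensors, n_samples = 3, 5, 1000

S = rng.exponential(scale=1.0, size=(n_sources, n_samples))  # hidden nonnegative sources
A = rng.random((n_sensors, n_sources))                       # unknown mixing matrix
X = A @ S                                                    # observed sensor responses

ica = FastICA(n_components=n_sources, random_state=0)
S_hat = ica.fit_transform(X.T).T   # recovered sources, up to scale and permutation
A_hat = ica.mixing_                # estimated mixing matrix, shape (n_sensors, n_sources)
```

    The recovered sources are only determined up to scaling and permutation, which is the standard indeterminacy of BSS.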

    Distributed Unmixing of Hyperspectral Data With Sparsity Constraint

    Spectral unmixing (SU) is a data processing problem in hyperspectral remote sensing. The main challenge in SU is to identify the endmembers and their weights accurately. For blind estimation of the signature and fractional abundance matrices, nonnegative matrix factorization (NMF) and its extensions are widely used in SU. One constraint that has been added to NMF is a sparsity constraint regularized by the L_{1/2} norm. In this paper, a new algorithm based on distributed optimization is applied to spectral unmixing. In the proposed algorithm, a network of single-node clusters is employed, and each pixel of the hyperspectral image is treated as a node in this network. The distributed unmixing problem with the sparsity constraint is optimized with a diffusion least-mean-squares (LMS) strategy, and the update equations for the fractional abundance and signature matrices are then obtained. Simulation results based on defined performance metrics illustrate the advantage of the proposed algorithm in spectral unmixing of hyperspectral data compared with other methods. The results show that the abundance angle distance (AAD) and spectral angle distance (SAD) of the proposed approach improve by about 6 and 27 percent, respectively, relative to distributed unmixing at SNR = 25 dB.
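    A centralized sketch of L_{1/2}-regularized NMF unmixing may make the sparsity constraint concrete; the paper's distributed diffusion-LMS machinery is deliberately not reproduced here, and all shapes, step counts, and the multiplicative-update form are illustrative assumptions.

```python
# Hedged sketch: NMF unmixing with an L_{1/2} sparsity penalty on abundances.
# X: (bands, pixels) data; A: endmember signatures; S: fractional abundances.
import numpy as np

def l_half_nmf(X, n_end, lam=0.1, n_iter=200, eps=1e-9):
    rng = np.random.default_rng(0)
    bands, pixels = X.shape
    A = rng.random((bands, n_end))
    S = rng.random((n_end, pixels))
    for _ in range(n_iter):
        # multiplicative abundance update; the extra denominator term is the
        # gradient of lam * sum(S**0.5), i.e. the L_{1/2} sparsity penalty
        S *= (A.T @ X) / (A.T @ A @ S + 0.5 * lam * np.maximum(S, eps) ** -0.5 + eps)
        # standard multiplicative update for the signature matrix
        A *= (X @ S.T) / (A @ S @ S.T + eps)
    return A, S

A_hat, S_hat = l_half_nmf(np.random.default_rng(1).random((50, 400)), n_end=4)
```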

    Hyperspectral Endmember Extraction Techniques

    Hyperspectral data processing and analysis plays a vital role in the detection, identification, discrimination, and estimation of earth-surface materials. It involves atmospheric correction, dimensionality reduction, endmember extraction, spectral unmixing, and classification phases. One of the ultimate aims of hyperspectral data processing and analysis is to achieve high classification accuracy, and that accuracy depends strongly on image-derived endmembers. Ideally, an endmember is defined as a spectrally unique, idealized, and pure signature of a surface material. Extracting consistent and appropriate endmembers is an important prerequisite for accurate hyperspectral data classification and spectral unmixing. Several methods, strategies, and algorithms have been proposed by various researchers to extract endmembers from hyperspectral imagery. Most of these techniques depend significantly on user-defined input parameters, and because there is no standard for setting these parameters, the choice is subjective. This leads to inconsistencies in overall endmember extraction. To resolve these problems, a systematic, generic, robust, and automated endmember-extraction mechanism is required. This chapter presents the generic approach to endmember extraction and highlights the limitations and challenges of popular algorithms.
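    One classical family of extraction schemes can be sketched compactly: successive-projection-style extraction picks, at each step, the pixel with the largest component orthogonal to the endmembers already found. This is a generic illustration of the idea, not any specific algorithm from the chapter.

```python
# Sketch of successive-projection endmember extraction (SPA/ATGP-style).
import numpy as np

def successive_projection(X, n_end):
    """X: (bands, pixels) nonnegative hyperspectral matrix."""
    R = X.astype(float).copy()
    idx = []
    for _ in range(n_end):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))  # most extreme remaining pixel
        idx.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)      # deflate: remove this direction from all pixels
    return X[:, idx], idx            # endmember signatures and their pixel indices
```

    Note that even such greedy schemes take the number of endmembers n_end as a user-defined input, which is exactly the kind of parameter dependence the chapter criticizes.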

    A convex model for non-negative matrix factorization and dimensionality reduction on physical space

    A collaborative convex framework for factoring a data matrix X into a non-negative product AS, with a sparse coefficient matrix S, is proposed. We restrict the columns of the dictionary matrix A to coincide with certain columns of the data matrix X, thereby guaranteeing a physically meaningful dictionary and dimensionality reduction. We use l_{1,∞} regularization to select the dictionary from the data and show that this leads to an exact convex relaxation of l_0 in the case of distinct noise-free data. We also show how to relax the restriction-to-X constraint by initializing an alternating minimization approach with the solution of the convex model, obtaining a dictionary close to but not necessarily in X. We focus on applications of the proposed framework to hyperspectral endmember and abundance identification and also show an application to blind source separation of NMR data.
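    The self-expressive selection step can be written directly as a convex program; the sketch below uses cvxpy and, since the coefficient matrix is constrained nonnegative, expresses each row's l_∞ norm as its row maximum. The problem sizes, penalty weight, and thresholding of selected rows are all illustrative assumptions.

```python
# Hedged sketch of the convex model: X ≈ X @ W with nonnegative, row-sparse W;
# the few rows of W that stay active pick dictionary columns out of X itself.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((30, 80))      # synthetic data: 30 bands, 80 pixels
lam = 0.5

W = cp.Variable((80, 80), nonneg=True)
row_linf = cp.max(W, axis=1)  # for nonnegative W, the row max is its l_inf norm
objective = cp.Minimize(0.5 * cp.sum_squares(X - X @ W) + lam * cp.sum(row_linf))
cp.Problem(objective).solve()

selected = np.flatnonzero(np.max(W.value, axis=1) > 1e-3)  # chosen dictionary columns
print("dictionary columns:", selected)
```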

    Cell Detection by Functional Inverse Diffusion and Non-Negative Group Sparsity - Part II: Proximal Optimization and Performance Evaluation

    In this two-part paper, we present a novel framework and methodology to analyze data from certain image-based biochemical assays, e.g., ELISPOT and Fluorospot assays. In this second part, we focus on our algorithmic contributions. We provide an algorithm for functional inverse diffusion that solves the variational problem posed in Part I. As part of the derivation of this algorithm, we present the proximal operator for the non-negative group-sparsity regularizer, a novel result that is of interest in itself, also in comparison to previous results on the proximal operator of a sum of functions. We then present a discretized, approximate implementation of our algorithm and evaluate it both in terms of operational cell-detection metrics and in terms of distributional optimal-transport metrics.
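    For a single group, the proximal operator the abstract refers to can plausibly be sketched as nonnegative projection followed by group soft-thresholding; treat the closed form below as an assumption consistent with the result the abstract describes, not a quotation of the paper's derivation.

```python
# Assumed closed form: prox of f(x) = lam * ||x||_2 + indicator(x >= 0),
# evaluated on one group; projection first, then group shrinkage.
import numpy as np

def prox_nonneg_group(v, lam):
    p = np.maximum(v, 0.0)          # project onto the nonnegative orthant
    n = np.linalg.norm(p)
    if n <= lam:
        return np.zeros_like(p)     # the whole group is thresholded away
    return (1.0 - lam / n) * p      # group soft-thresholding (shrinkage)
```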

    Non-Negative Blind Source Separation Algorithm Based on Minimum Aperture Simplicial Cone

    We address the problem of Blind Source Separation (BSS) when the hidden sources are nonnegative (N-BSS). In this case, the scatter plot of the mixed data is contained within the simplicial cone generated by the columns of the mixing matrix. The proposed method, termed SCSA-UNS for Simplicial Cone Shrinking Algorithm for Unmixing Non-negative Sources, aims at estimating the mixing matrix and the sources by fitting a Minimum Aperture Simplicial Cone (MASC) to the cloud of mixed data points. SCSA-UNS is evaluated on both independent and correlated synthetic data and compared to other N-BSS methods. Simulations are also performed on real Liquid Chromatography-Mass Spectrometry (LC-MS) data for the metabolomic analysis of a chemical sample, and on real dynamic Positron Emission Tomography (PET) images, in order to study the pharmacokinetics of the [18F]-FDG (FluoroDeoxyGlucose) tracer in the brain.
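    The geometry behind MASC fitting is easy to state in code: for a square invertible candidate mixing matrix, the data lie inside its simplicial cone exactly when the implied sources are nonnegative, and among all admissible cones one prefers the least open. Both helpers below, including the aperture proxy, are illustrative assumptions rather than SCSA-UNS itself.

```python
# Geometric sketch behind simplicial-cone fitting for N-BSS.
import numpy as np

def in_simplicial_cone(A, X, tol=1e-9):
    """True if every column of X lies in cone{columns of A}; A square, invertible."""
    S = np.linalg.solve(A, X)       # implied sources under candidate A
    return bool(np.all(S >= -tol))

def aperture_proxy(A):
    """Crude openness measure: mean pairwise angle between the cone's edges."""
    U = A / np.linalg.norm(A, axis=0)          # unit edge directions
    G = np.clip(U.T @ U, -1.0, 1.0)
    angles = np.arccos(G[np.triu_indices_from(G, k=1)])
    return float(angles.mean())
```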

    Band Selection for Hyperspectral Images Using Non-Negativity Constraints

    This paper presents a new factorization technique for hyperspectral signal processing based on a constrained singular value decomposition (SVD) approach. Hyperspectral images typically have a large number of contiguous bands that are highly correlated. Likewise, the field of view typically contains a limited number of materials, so the spectra are also correlated. Only a selected number of bands, the extreme bands that include the dominant materials' spectral signatures, are needed to express the data. Factorization can provide a means for interpretation and compression of the spectral data. Hyperspectral images are represented as non-negative matrices by graphic concatenation, with the pixels arranged into columns and each row corresponding to a spectral band. SVD and principal component analysis enjoy a broad range of applications, including rank estimation, noise reduction, classification, and compression, with the resulting singular vectors forming orthogonal basis sets for subspace projection techniques. A key property of non-negative matrices is that their columns/rows form non-negative cones, with any non-negative linear combination of the columns/rows belonging to the cone. Data sets of spectral images and time series reside in the non-negative orthant, and while the subspaces spanned by SVD extend over all orthants, SVD projections can be constrained to the non-negative orthant. In this paper we utilize constraint sets that confine projections of SVD singular vectors to lie within the cones formed by the spectral data. The extreme vectors of the cone are found, and these vectors form a basis for the factorization of the data. The approach is illustrated in an application to hyperspectral data of a mining area collected by an airborne sensor.
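    A crude stand-in for the extreme-vector search may help fix the geometry: after scaling each band onto the unit simplex, the extreme rays of the band cone become vertices of a convex hull, which can be located in an SVD-reduced coordinate system. The normalization, the reduced dimension, and the hull-based selection are all assumptions of this sketch, not the paper's constrained-SVD algorithm.

```python
# Sketch: find "extreme" bands as convex-hull vertices in SVD coordinates.
import numpy as np
from scipy.spatial import ConvexHull

def extreme_bands(X, n_dims=3):
    """X: (bands, pixels) nonnegative matrix; returns indices of extreme bands."""
    P = X / (X.sum(axis=1, keepdims=True) + 1e-12)  # scale each band onto the simplex
    P = P - P.mean(axis=0)                          # center the band cloud
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    Y = U[:, :n_dims] * s[:n_dims]                  # low-dimensional band coordinates
    return np.unique(ConvexHull(Y).vertices)        # hull vertices = extreme bands
```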

    Generative-Discriminative Basis Learning for Medical Imaging

    This paper presents a novel dimensionality reduction method for classification in medical imaging. The goal is to transform very high-dimensional input (typically, millions of voxels) into a low-dimensional representation (a small number of constructed features) that preserves the discriminative signal and is clinically interpretable. We formulate the task as a constrained optimization problem that combines generative and discriminative objectives and show how to extend it to the semi-supervised learning (SSL) setting. We propose a novel large-scale algorithm to solve the resulting optimization problem. In the fully supervised case, we demonstrate accuracy rates that are better than or comparable to state-of-the-art algorithms on several datasets, while producing a representation of the group difference that is consistent with prior clinical reports. The effectiveness of the proposed algorithm for SSL is evaluated on both benchmark and medical imaging datasets. On the benchmark datasets, the results are better than or comparable to the state-of-the-art methods for SSL. To evaluate the SSL setting on medical datasets, we use images of subjects with Mild Cognitive Impairment (MCI), which is believed to be a precursor to Alzheimer’s disease (AD), as unlabeled data. AD subjects and Normal Control (NC) subjects are used as labeled data, and we try to predict conversion from MCI to AD on follow-up. The semi-supervised extension of this method not only slightly improves the generalization accuracy for the labeled data (AD/NC) but is also able to predict which subjects are likely to convert to AD.
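    The generative-discriminative trade-off can be caricatured in a few lines: reconstruct the data with a low-rank basis while keeping the coefficients linearly predictive of the label, with one weight gamma balancing the two terms. The synthetic shapes, plain gradient steps, and absence of the paper's constraints make this a rough sketch of the objective only.

```python
# Rough sketch: minimize 0.5*||B @ C - X||^2 + gamma * logistic_loss(y, w @ C)
# over basis B, coefficients C, and classifier w. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_subj, k = 500, 40, 5
X = rng.random((n_vox, n_subj))            # voxels x subjects
y = rng.integers(0, 2, n_subj) * 2 - 1     # labels in {-1, +1}

B = rng.random((n_vox, k)) * 0.1           # generative basis (interpretable parts)
C = rng.random((k, n_subj)) * 0.1          # per-subject low-dimensional features
w = np.zeros(k)                            # discriminative weights on the features
gamma, lr = 1.0, 1e-3

for _ in range(500):
    R = B @ C - X                          # generative (reconstruction) residual
    margin = y * (w @ C)
    sig = 1.0 / (1.0 + np.exp(margin))     # logistic-loss slope, sigma(-margin)
    gB = R @ C.T                           # gradient of the reconstruction term in B
    gC = B.T @ R + gamma * np.outer(w, -y * sig)  # both terms contribute to C
    gw = gamma * (C @ (-y * sig))          # gradient of the discriminative term in w
    B -= lr * gB
    C -= lr * gC
    w -= lr * gw
```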