    Data Mining by NonNegative Tensor Approximation

    Inferring multilinear dependencies within multi-way data can be done with tensor decompositions. Because of noise and modeling errors, the problem in practice calls for an approximation of lower rank. We concentrate on real 3-way data arrays with nonnegative values and propose an unconstrained algorithm that relies on a hyperspherical parameterization, implemented in a novel way, together with a global line search. To illustrate the contribution, we report computer experiments in which toxic molecules in a solvent are detected and identified from fluorescence spectroscopy measurements.
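    A minimal sketch of the unconstrained-reparameterization idea behind such algorithms: nonnegativity of the CP factors is enforced by writing each factor as an elementwise square of a free matrix, and the resulting unconstrained problem is minimized by gradient descent with a simple backtracking line search. This is an illustration only, not the paper's hyperspherical parameterization or its global line search; all names below are hypothetical.

```python
import numpy as np

def khatri_rao(X, Y):
    # column-wise Kronecker product; row (i, j) in C order maps to i * Y.shape[0] + j
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

def loss_and_grads(T, P):
    # CP model with factors A_n = P_n**2 (elementwise), so nonnegativity is automatic;
    # returns the squared error and gradients w.r.t. the unconstrained parameters P_n
    A, B, C = (p ** 2 for p in P)
    I, J, K = T.shape
    unf = [T.reshape(I, -1),                      # mode-1 unfolding
           T.transpose(1, 0, 2).reshape(J, -1),   # mode-2
           T.transpose(2, 0, 1).reshape(K, -1)]   # mode-3
    f, grads = None, []
    for n, (F, others) in enumerate(zip([A, B, C], [(B, C), (A, C), (A, B)])):
        Z = khatri_rao(*others)
        if f is None:
            f = ((unf[0] - A @ Z.T) ** 2).sum()   # same total error from any unfolding
        G = 2 * (F @ (Z.T @ Z) - unf[n] @ Z)      # gradient w.r.t. the factor A_n
        grads.append(2 * P[n] * G)                # chain rule through A_n = P_n**2
    return f, grads

def nonneg_cp(T, R, steps=300):
    rng = np.random.default_rng(0)
    P = [rng.standard_normal((d, R)) * 0.3 for d in T.shape]
    f, g = loss_and_grads(T, P)
    for _ in range(steps):
        step = 1.0                                # backtracking line search
        Pn = [p - step * gi for p, gi in zip(P, g)]
        fn, gn = loss_and_grads(T, Pn)
        while fn >= f and step > 1e-12:
            step *= 0.5
            Pn = [p - step * gi for p, gi in zip(P, g)]
            fn, gn = loss_and_grads(T, Pn)
        if fn >= f:
            break                                 # no further descent found
        P, f, g = Pn, fn, gn
    return [p ** 2 for p in P], f

# toy usage: approximate a random nonnegative rank-3 tensor
A0, B0, C0 = (np.random.rand(d, 3) for d in (10, 12, 8))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
(Ah, Bh, Ch), err = nonneg_cp(T, 3)
print(err)
```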

    Using Underapproximations for Sparse Nonnegative Matrix Factorization

    Nonnegative matrix factorization (NMF) consists of (approximately) factorizing a nonnegative data matrix into the product of two low-rank nonnegative matrices. It has been successfully applied as a data-analysis technique in numerous domains, e.g., text mining, image processing, microarray data analysis, and collaborative filtering. We introduce a novel approach to solving NMF problems, based on an underapproximation technique, and show its effectiveness at obtaining sparse solutions. This approach, based on Lagrangian relaxation, allows NMF problems to be solved in a recursive fashion. We also prove that the underapproximation problem is NP-hard for any fixed factorization rank, via a reduction from the maximum edge biclique problem in bipartite graphs. We test two variants of our underapproximation approach on several standard image datasets and show that they provide sparse part-based representations with low reconstruction error. Our results are comparable and sometimes superior to those obtained by two standard sparse NMF techniques.
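    For intuition, here is a minimal sketch of the recursive underapproximation idea: each rank-one underapproximation subproblem (x yᵀ ≤ M elementwise) is solved by exact alternating minimization, the rank-one term is subtracted, and the residual, which stays nonnegative by construction, is factored in turn. This is not the authors' Lagrangian-relaxation algorithm; the function names and the initialization heuristic are hypothetical.

```python
import numpy as np

def update(M, y, eps=1e-12):
    # exact minimizer of ||M - x y^T||_F over x >= 0 subject to x_i y_j <= M_ij:
    # the unconstrained least-squares solution, clipped to the feasibility bound
    pos = y > eps
    ls = (M @ y) / (y @ y + eps)
    bound = (M[:, pos] / y[pos]).min(axis=1) if pos.any() else np.full(M.shape[0], np.inf)
    return np.minimum(np.maximum(ls, 0.0), bound)

def rank_one_underapprox(M, n_iter=100):
    # alternating minimization for the rank-one underapproximation subproblem
    y = M[np.argmax(M.sum(axis=1))].copy()     # heuristic init: heaviest row of M
    y /= np.linalg.norm(y) + 1e-12
    x = np.zeros(M.shape[0])
    for _ in range(n_iter):
        x = update(M, y)
        y = update(M.T, x)
    return x, y

def nmu(M, r, n_iter=100):
    # recursive scheme: peel off rank-one underapproximations one at a time
    m, n = M.shape
    U, V = np.zeros((m, r)), np.zeros((r, n))
    R = M.astype(float).copy()
    for k in range(r):
        x, y = rank_one_underapprox(R, n_iter)
        U[:, k], V[k] = x, y
        R -= np.outer(x, y)                    # >= 0 by the underapproximation constraint
        R[R < 0] = 0.0                         # guard against rounding error
    return U, V
```

    Because each extracted rank-one term must fit underneath the current residual, factors tend to cover disjoint parts of the data, which is the mechanism behind the sparse part-based representations the abstract describes.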

    Renormalization group flows of Hamiltonians using tensor networks

    A renormalization group flow of Hamiltonians for two-dimensional classical partition functions is constructed using tensor networks. Similar to tensor network renormalization ([G. Evenbly and G. Vidal, Phys. Rev. Lett. 115, 180405 (2015)], [S. Yang, Z.-C. Gu, and X.-G. Wen, Phys. Rev. Lett. 118, 110504 (2017)]), we obtain approximate fixed-point tensor networks at criticality. Our formalism, however, preserves positivity of the tensors at every step and hence yields an interpretation in terms of Hamiltonian flows. We emphasize that the key difference between tensor network approaches and Kadanoff's spin-blocking method can be understood in terms of a change of local basis at every decimation step, a property which is crucial for overcoming the area law of mutual information. We derive algebraic relations for fixed-point tensors, calculate critical exponents, and benchmark our method on the Ising model and the six-vertex model.
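    As a concrete baseline for what tensor-network coarse-graining of a 2D classical partition function looks like, here is a minimal Levin-Nave-style TRG sketch for the square-lattice Ising model. It shows the decimation step with a change of local basis (the truncated SVD) that the abstract contrasts with Kadanoff blocking, but none of the positivity-preserving, Hamiltonian-flow machinery of the paper; treat it as an assumed toy implementation.

```python
import numpy as np

def ising_free_energy_trg(beta, chi=16, nsteps=20):
    """Free energy per site of the square-lattice Ising model via a
    Levin-Nave style TRG coarse-graining (toy sketch, not the paper's method)."""
    c, s = np.sqrt(np.cosh(beta)), np.sqrt(np.sinh(beta))
    W = np.array([[c, s], [c, -s]])                  # e^{beta s s'} = sum_k W[s,k] W[s',k]
    T = np.einsum('au,al,ad,ar->uldr', W, W, W, W)   # initial site tensor T[u,l,d,r]
    lnz = 0.0
    for step in range(1, nsteps + 1):
        D0, D1, D2, D3 = T.shape
        # split 1 (one sublattice): group (u,l) vs (d,r), truncate to chi singular values
        U, S, Vh = np.linalg.svd(T.reshape(D0 * D1, D2 * D3), full_matrices=False)
        k = min(chi, S.size)
        S1 = (U[:, :k] * np.sqrt(S[:k])).reshape(D0, D1, k)              # S1[u,l,a]
        S2 = (np.sqrt(S[:k])[:, None] * Vh[:k]).reshape(k, D2, D3)       # S2[a,d,r]
        # split 2 (other sublattice): group (l,d) vs (r,u)
        U, S, Vh = np.linalg.svd(T.transpose(1, 2, 3, 0).reshape(D1 * D2, D3 * D0),
                                 full_matrices=False)
        k = min(chi, S.size)
        S3 = (U[:, :k] * np.sqrt(S[:k])).reshape(D1, D2, k)              # S3[l,d,b]
        S4 = (np.sqrt(S[:k])[:, None] * Vh[:k]).reshape(k, D3, D0)       # S4[b,r,u]
        # contract the four triangles around a plaquette into the coarse tensor
        T = np.einsum('alt,trp,rmq,eml->aeqp', S2, S3, S1, S4)
        norm = np.abs(T).max()
        T /= norm
        lnz += np.log(norm) / 2 ** step              # each tensor now covers 2**step sites
    lnz += np.log(np.einsum('ulul->', T)) / 2 ** nsteps   # close the network on a torus
    return -lnz / beta

beta_c = 0.5 * np.log(1 + np.sqrt(2))                # Onsager's critical point
print(ising_free_energy_trg(beta_c))                 # approaches the exact -2.1097... as chi grows
```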