Descent methods for Nonnegative Matrix Factorization
In this paper, we present several descent methods that can be applied to
nonnegative matrix factorization and we analyze a recently developed fast
block coordinate method called Rank-one Residue Iteration (RRI). We also give a
comparison of these different methods and show that the new block coordinate
method has better properties in terms of approximation error and complexity. By
interpreting this method as a rank-one approximation of the residue matrix, we
prove that it \emph{converges} and also extend it to the nonnegative tensor
factorization and introduce some variants of the method by imposing some
additional controllable constraints such as: sparsity, discreteness and
smoothness.
Comment: 47 pages. New convergence proof using a damped version of RRI.
Accepted; to appear in Numerical Linear Algebra in Signals, Systems and
Control. Illustrative Matlab code is included in the source bundle.
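The block coordinate scheme described above can be sketched in a few lines: each rank-one term u_t v_t^T is updated in closed form against the residue obtained by removing that term from the current approximation. This is an illustrative NumPy sketch of the RRI idea, not the authors' Matlab code; the function name and the small damping guard on the denominators are our own additions.

```python
import numpy as np

def rri_nmf(V, r, n_iter=100, seed=0):
    """Sketch of Rank-one Residue Iteration (RRI) for NMF: V ~ sum_t u_t v_t^T.

    Cyclically updates each rank-one term with the closed-form nonnegative
    least-squares solution against the residue of the other terms.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    U = rng.random((m, r))
    W = rng.random((n, r))
    for _ in range(n_iter):
        for t in range(r):
            # Residue matrix with the t-th rank-one term removed
            R = V - U @ W.T + np.outer(U[:, t], W[:, t])
            # Closed-form nonnegative updates for v_t, then u_t
            # (tiny floor on the denominator guards a collapsed column)
            W[:, t] = np.maximum(R.T @ U[:, t], 0) / max(U[:, t] @ U[:, t], 1e-12)
            U[:, t] = np.maximum(R @ W[:, t], 0) / max(W[:, t] @ W[:, t], 1e-12)
    return U, W
```

Because every subproblem is solved exactly and the objective never increases, the sweep is cheap (one residue update per term) and monotone, which is the property the comparison in the paper highlights.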
Dictionary-based Tensor Canonical Polyadic Decomposition
To ensure interpretability of extracted sources in tensor decomposition, we
introduce in this paper a dictionary-based tensor canonical polyadic
decomposition which enforces one factor to belong exactly to a known
dictionary. A new formulation of sparse coding is proposed which enables
dictionary-based canonical polyadic decomposition of high-dimensional tensors. The
benefits of using a dictionary in tensor decomposition models are explored both
in terms of parameter identifiability and estimation accuracy. Performances of
the proposed algorithms are evaluated on the decomposition of simulated data
and the unmixing of hyperspectral images.
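The constraint that one factor belongs exactly to a known dictionary can be illustrated with a simple alternating least-squares loop in which the dictionary-constrained factor is snapped to its best-matching atoms after each unconstrained update. This is a hypothetical matching-pursuit-style sketch, not the sparse-coding formulation of the paper, and it does not prevent two columns from selecting the same atom.

```python
import numpy as np

def khatri_rao(B, C):
    # Column-wise Kronecker product of B (J x r) and C (K x r) -> (J*K) x r
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def dictionary_cpd(T, D, r, n_iter=50, seed=0):
    """Illustrative ALS for a 3-way CPD T ~ [[A, B, C]] where every column
    of A is forced to be an atom of the known dictionary D."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    B = rng.standard_normal((J, r))
    C = rng.standard_normal((K, r))
    T1 = T.reshape(I, -1)                    # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, -1) # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, -1) # mode-3 unfolding
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    for _ in range(n_iter):
        # Unconstrained least-squares update of A, then snap each column
        # to the most correlated dictionary atom
        A_ls = T1 @ np.linalg.pinv(khatri_rao(B, C)).T
        idx = np.argmax(np.abs(Dn.T @ A_ls), axis=0)
        A = D[:, idx]
        B = T2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = T3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C, idx
```

Snapping to atoms is what makes the recovered sources directly interpretable: each column of A is an exact dictionary entry rather than an arbitrary mixture.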
Model Selection for Nonnegative Matrix Factorization by Support Union Recovery
Nonnegative matrix factorization (NMF) has been widely used in machine
learning and signal processing because of its non-subtractive, part-based
property which enhances interpretability. It is often assumed that the latent
dimensionality (or the number of components) is given. Despite the large number
of algorithms designed for NMF, there is little literature on automatic
model selection for NMF with theoretical guarantees. In this paper, we propose
an algorithm that first calculates an empirical second-order moment from the
empirical fourth-order cumulant tensor, and then estimates the latent
dimensionality by recovering the support union (the index set of non-zero rows)
of a matrix related to the empirical second-order moment. By assuming a
generative model of the data with additional mild conditions, our algorithm
provably detects the true latent dimensionality. We show on synthetic examples
that our proposed algorithm is able to find an approximately correct number of
components.
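The final step of the procedure above, reading the latent dimensionality off a support union, reduces to counting the nonzero rows of a matrix. The following toy sketch shows only that counting step; the cumulant-based construction of the matrix from data is the paper's contribution and is not reproduced here, so the matrix is taken as given.

```python
import numpy as np

def estimate_rank_by_support(M, tol=1e-8):
    """Toy illustration of support-union recovery: the latent dimensionality
    is the number of nonzero rows of M (in the paper, M is derived from the
    empirical fourth-order cumulant tensor; here it is supplied directly)."""
    row_norms = np.linalg.norm(M, axis=1)
    if row_norms.max() == 0:
        return 0
    # A row counts as "in the support" if its norm clears a relative threshold
    return int(np.sum(row_norms > tol * row_norms.max()))
```

In practice the threshold absorbs estimation noise in the empirical moments, which is why the synthetic experiments report an approximately, rather than exactly, correct component count.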
Uniqueness of Nonnegative Tensor Approximations
We show that for a nonnegative tensor, a best nonnegative rank-r
approximation is almost always unique, its best rank-one approximation may
always be chosen to be a best nonnegative rank-one approximation, and that the
set of nonnegative tensors with non-unique best rank-one approximations forms an
algebraic hypersurface. We show that the last part holds true more generally
for real tensors and thereby determine a polynomial equation so that a real or
nonnegative tensor which does not satisfy this equation is guaranteed to have a
unique best rank-one approximation. We also establish an analogue for real or
nonnegative symmetric tensors. In addition, we prove a singular vector variant
of the Perron--Frobenius Theorem for positive tensors and apply it to show that
a best nonnegative rank-r approximation of a positive tensor can never be
obtained by deflation. As an aside, we verify that the Euclidean distance (ED)
discriminants of the Segre variety and the Veronese variety are hypersurfaces
and give defining equations of these ED discriminants.
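The singular-vector Perron–Frobenius property can be seen numerically: for an entrywise positive tensor, higher-order power iteration started from positive vectors stays positive and converges to a positive singular vector triple. This is a standard sketch of that iteration under our own naming, not the paper's proof technique.

```python
import numpy as np

def best_rank_one_positive(T, n_iter=200):
    """Higher-order power iteration toward a best rank-one approximation
    sigma * (x o y o z) of a 3-way tensor T. For entrywise positive T the
    iterates remain entrywise positive."""
    I, J, K = T.shape
    x = np.ones(I) / np.sqrt(I)
    y = np.ones(J) / np.sqrt(J)
    z = np.ones(K) / np.sqrt(K)
    for _ in range(n_iter):
        # Contract T against the other two vectors, then renormalize
        x = np.einsum('ijk,j,k->i', T, y, z); x /= np.linalg.norm(x)
        y = np.einsum('ijk,i,k->j', T, x, z); y /= np.linalg.norm(y)
        z = np.einsum('ijk,i,j->k', T, x, y); z /= np.linalg.norm(z)
    sigma = np.einsum('ijk,i,j,k->', T, x, y, z)
    return sigma, x, y, z
```

The positivity of the limiting triple is exactly what rules out deflation: subtracting the positive rank-one term destroys nonnegativity of the residue.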
Overview of Constrained PARAFAC Models
In this paper, we present an overview of constrained PARAFAC models where the
constraints model linear dependencies among columns of the factor matrices of
the tensor decomposition, or alternatively, the pattern of interactions between
different modes of the tensor which are captured by the equivalent core tensor.
Some tensor prerequisites, with a particular emphasis on mode combination using
Kronecker products of canonical vectors that simplifies matricization
operations, are first introduced. This Kronecker product based approach is also
formulated in terms of the index notation, which provides an original and
concise formalism for both matricizing tensors and writing tensor models. Then,
after a brief reminder of PARAFAC and Tucker models, two families of
constrained tensor models, the so-called PARALIND/CONFAC and PARATUCK models,
are described in a unified framework for Nth-order tensors. New tensor
models, called nested Tucker models and block PARALIND/CONFAC models, are also
introduced. A link between PARATUCK models and constrained PARAFAC models is
then established. Finally, new uniqueness properties of PARATUCK models are
deduced from sufficient conditions for essential uniqueness of their associated
constrained PARAFAC models.
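The kind of constraint these models encode is easy to make concrete: in a PARALIND/CONFAC-type model, a factor matrix is written as A = A_bar @ Phi, where the constraint matrix Phi imposes linear dependencies among the columns of A. The small NumPy sketch below (dimensions and the particular Phi are chosen arbitrarily for illustration) builds such a constrained third-order PARAFAC tensor.

```python
import numpy as np

# Sketch of a PARALIND-type constrained PARAFAC model: the first-mode
# factor is A = A_bar @ Phi, where Phi makes the third column of A a
# duplicate of the first, i.e. a linear dependency among columns of A.
rng = np.random.default_rng(0)
I, J, K = 4, 5, 6
A_bar = rng.standard_normal((I, 2))    # 2 independent loading vectors
Phi = np.array([[1., 0., 1.],          # constraint matrix: col 3 = col 1
                [0., 1., 0.]])
A = A_bar @ Phi                        # 3 columns but only rank 2
B = rng.standard_normal((J, 3))
C = rng.standard_normal((K, 3))
# Constrained PARAFAC tensor: T[i,j,k] = sum_r A[i,r] B[j,r] C[k,r]
T = np.einsum('ir,jr,kr->ijk', A, B, C)
```

The interaction pattern captured by Phi is what the equivalent core tensor expresses in the unified framework described above.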