Parallel Algorithms for Constrained Tensor Factorization via the Alternating Direction Method of Multipliers
Tensor factorization has proven useful in a wide range of applications, from
sensor array processing to communications, speech and audio signal processing,
and machine learning. With few recent exceptions, all tensor factorization
algorithms were originally developed for centralized, in-memory computation on
a single machine; and the few that break away from this mold do not easily
incorporate practically important constraints, such as nonnegativity. A new
constrained tensor factorization framework is proposed in this paper, building
upon the Alternating Direction Method of Multipliers (ADMoM). It is shown that
this simplifies computations, bypassing the need to solve constrained
optimization problems in each iteration; and it naturally leads to distributed
algorithms suitable for parallel implementation on regular high-performance
computing (e.g., mesh) architectures. This opens the door for many emerging big
data-enabled applications. The methodology is exemplified using nonnegativity
as a baseline constraint, but the proposed framework can more-or-less readily
incorporate many other types of constraints. Numerical experiments are very
encouraging, indicating that the ADMoM-based nonnegative tensor factorization
(NTF) has high potential as an alternative to state-of-the-art approaches.
Comment: Submitted to the IEEE Transactions on Signal Processing
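The abstract's key computational point — that an ADMM-style splitting replaces constrained subproblems with an unconstrained solve plus a closed-form projection — can be illustrated on the simplest building block, a nonnegative least-squares subproblem. This is a hedged sketch of that general idea, not the paper's tensor algorithm; the function name and parameters are assumptions.

```python
import numpy as np

def admm_nnls(A, b, rho=1.0, iters=200):
    """Solve min ||Ax - b||^2 subject to x >= 0 via an ADMM splitting x = z.

    The x-update is an unconstrained linear solve and the z-update is a
    closed-form projection onto the nonnegative orthant, so no constrained
    optimization problem is solved inside the loop -- the simplification
    the abstract refers to.
    """
    n = A.shape[1]
    z = np.zeros(n)                    # constrained copy of the variable
    u = np.zeros(n)                    # scaled dual variable for x = z
    AtA = A.T @ A + rho * np.eye(n)    # could be factored once and reused
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))  # unconstrained solve
        z = np.maximum(0.0, x + u)                     # closed-form projection
        u += x - z                                     # dual update
    return z
```

In the tensor setting, each ADMoM factor update keeps this same unconstrained-solve-plus-projection structure, which is also what makes the per-factor updates easy to distribute across a mesh of processors.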
Non-negative Matrix Factorization: A Survey
A convex model for non-negative matrix factorization and dimensionality reduction on physical space
A collaborative convex framework for factoring a data matrix X into a
non-negative product XT, with a sparse coefficient matrix T, is proposed.
We restrict the columns of the dictionary matrix to coincide with certain
columns of the data matrix X, thereby guaranteeing a physically meaningful
dictionary and dimensionality reduction. We use l_{1,∞} regularization
to select the dictionary from the data and show that this leads to an exact
convex relaxation of l_0 in the case of distinct, noise-free data. We also
show how to relax the restriction-to-X constraint by initializing an
alternating minimization approach with the solution of the convex model,
obtaining a dictionary close to but not necessarily in X. We focus on
applications of the proposed framework to hyperspectral endmember and
abundance identification and also show an application to blind source
separation of NMR data.
Comment: 14 pages, 9 figures. EE and JX were supported by NSF grants
DMS-0911277, PRISM-0948247, MM by the German Academic Exchange Service
(DAAD), SO and MM by NSF grants DMS-0835863, DMS-0914561, DMS-0914856
and ONR grant N00014-08-1119, and GS was supported by NSF, NGA, ONR, ARO,
DARPA, and NSSEFF.
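The core of the convex model — row-sparsity-inducing l_{1,∞} regularization on a self-expressive factorization X ≈ XT, so that only a few columns of X are selected as dictionary atoms — can be sketched with a plain proximal-gradient loop. This is a minimal illustration under assumed names and parameters; for brevity it omits the nonnegativity constraint and the alternating-minimization refinement described in the abstract.

```python
import numpy as np

def proj_l1_ball(v, r):
    """Euclidean projection of v onto the l1 ball of radius r."""
    if np.abs(v).sum() <= r:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]           # magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - r)[0][-1]
    theta = (css[k] - r) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def convex_dictionary_selection(X, lam=0.5, iters=300):
    """Sketch of  min_T  0.5 ||X - X T||_F^2 + lam * sum_j ||T[j,:]||_inf
    by proximal gradient. A zeroed-out row j of T means column j of X is
    not used in the dictionary, which is how l_{1,inf} selects atoms.
    """
    n = X.shape[1]
    T = np.zeros((n, n))
    G = X.T @ X
    step = 1.0 / np.linalg.norm(G, 2)      # 1/L, L = Lipschitz const of grad
    for _ in range(iters):
        grad = G @ T - G                   # gradient of 0.5||X - XT||_F^2
        V = T - step * grad
        # row-wise prox of the l_inf norm, via the Moreau decomposition:
        # prox_{c*||.||_inf}(v) = v - (projection of v onto the l1 ball of radius c)
        T = np.array([v - proj_l1_ball(v, step * lam) for v in V])
    return T
```

Selecting dictionary atoms directly from the data columns is what keeps the dictionary "physical": in the hyperspectral application, each atom is an observed pixel spectrum rather than an arbitrary learned vector.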