Multiarray Signal Processing: Tensor decomposition meets compressed sensing
We discuss how recently discovered techniques and tools from compressed
sensing can be used in tensor decompositions, with a view towards modeling
signals from multiple arrays of multiple sensors. We show that with appropriate
bounds on a measure of separation between radiating sources called coherence,
one could always guarantee the existence and uniqueness of a best rank-r
approximation of the tensor representing the signal. We also deduce a
computationally feasible variant of Kruskal's uniqueness condition, where the
coherence appears as a proxy for k-rank. Problems of sparsest recovery with an
infinite continuous dictionary, lowest-rank tensor representation, and blind
source separation are treated in a uniform fashion. The decomposition of the
measurement tensor leads to simultaneous localization and extraction of
radiating sources, in an entirely deterministic manner. Comment: 10 pages, 1 figure
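The abstract's key quantity is coherence, a measure of separation between radiating sources. A minimal sketch of the underlying notion, mutual coherence of a matrix whose columns would (in this setting) be source steering vectors, can be written as follows; the function name and test matrices are illustrative, not from the paper:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct unit-normalized columns.

    Small coherence means the columns (here standing in for steering vectors
    of radiating sources) are well separated; this is the kind of quantity the
    abstract's rank-r existence/uniqueness bounds are stated in terms of.
    """
    A = A / np.linalg.norm(A, axis=0, keepdims=True)  # normalize each column
    G = np.abs(A.conj().T @ A)                        # Gram-matrix magnitudes
    np.fill_diagonal(G, 0.0)                          # ignore self inner products
    return G.max()

# Orthonormal columns are perfectly separated (coherence 0);
# duplicated columns are maximally coherent (coherence 1).
print(mutual_coherence(np.eye(4)[:, :3]))  # 0.0
```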
Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big
datasets have highlighted the limitations of standard flat-view matrix models
and the necessity to move towards more versatile data analysis tools. We show
that higher-order tensors (i.e., multiway arrays) enable such a fundamental
paradigm shift towards models that are essentially polynomial and whose
uniqueness, unlike the matrix methods, is guaranteed under very mild and natural
conditions. Benefiting from the power of multilinear algebra as their mathematical
backbone, data analysis techniques using tensor decompositions are shown to
have great flexibility in the choice of constraints that match data properties,
and to find more general latent components in the data than matrix-based
methods. A comprehensive introduction to tensor decompositions is provided from
a signal processing perspective, starting from the algebraic foundations, via
basic Canonical Polyadic and Tucker models, through to advanced cause-effect
and multi-view data analysis schemes. We show that tensor decompositions enable
natural generalizations of some commonly used signal processing paradigms, such
as canonical correlation and subspace techniques, signal separation, linear
regression, feature extraction and classification. We also cover computational
aspects, and point out how ideas from compressed sensing and scientific
computing may be used for addressing the otherwise unmanageable storage and
manipulation problems associated with big datasets. The concepts are supported
by illustrative real world case studies illuminating the benefits of the tensor
framework, as efficient and promising tools for modern signal processing, data
analysis and machine learning applications; these benefits also extend to
vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker
decomposition, HOSVD, tensor networks, Tensor Train
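Among the models this tutorial covers, the HOSVD admits a particularly compact sketch: take the left singular vectors of each mode unfolding as factor matrices, then contract them against the tensor to get the core. The following is an illustrative NumPy implementation of that standard construction, not the authors' code:

```python
import numpy as np

def hosvd(X):
    """Higher-Order SVD (Tucker model): one orthonormal factor matrix per
    mode, obtained from the left singular vectors of that mode's unfolding,
    plus the core tensor from contracting the factors against X."""
    factors = []
    for mode in range(X.ndim):
        # Mode-n unfolding: bring mode n to the front, flatten the rest.
        unfolding = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U)
    core = X
    for mode, U in enumerate(factors):
        # Multiply U^T into mode `mode` of the (partially transformed) core.
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors
```

Because each factor here is orthogonal, multiplying the factors back into the core reconstructs X exactly; truncating the factors' columns yields the usual low-multilinear-rank approximation.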
Parallel Algorithms for Constrained Tensor Factorization via the Alternating Direction Method of Multipliers
Tensor factorization has proven useful in a wide range of applications, from
sensor array processing to communications, speech and audio signal processing,
and machine learning. With few recent exceptions, all tensor factorization
algorithms were originally developed for centralized, in-memory computation on
a single machine; and the few that break away from this mold do not easily
incorporate practically important constraints, such as nonnegativity. A new
constrained tensor factorization framework is proposed in this paper, building
upon the Alternating Direction Method of Multipliers (ADMoM). It is shown that
this simplifies computations, bypassing the need to solve constrained
optimization problems in each iteration; and it naturally leads to distributed
algorithms suitable for parallel implementation on regular high-performance
computing (e.g., mesh) architectures. This opens the door for many emerging big
data-enabled applications. The methodology is exemplified using nonnegativity
as a baseline constraint, but the proposed framework can more-or-less readily
incorporate many other types of constraints. Numerical experiments are very
encouraging, indicating that the ADMoM-based nonnegative tensor factorization
(NTF) has high potential as an alternative to state-of-the-art approaches. Comment: Submitted to the IEEE Transactions on Signal Processing
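The core ADMM mechanism the abstract appeals to, replacing a constrained subproblem by an unconstrained solve plus a cheap projection, can be illustrated on the nonnegative least-squares subproblem that arises per factor in NTF. This is a generic ADMM sketch under that reading of the framework, not the paper's implementation; `rho` and the iteration count are arbitrary choices:

```python
import numpy as np

def admm_nnls(A, B, rho=1.0, iters=200):
    """Solve min_X ||A X - B||_F^2 subject to X >= 0 via ADMM, splitting
    X = Z with Z constrained to the nonnegative orthant."""
    n = A.shape[1]
    AtB = A.T @ B
    # The quadratic X-update reuses one cached Cholesky factorization,
    # which is what makes the per-iteration cost low.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Z = np.zeros((n, B.shape[1]))
    U = np.zeros_like(Z)
    for _ in range(iters):
        # X-update: unconstrained least squares with a proximal term.
        X = np.linalg.solve(L.T, np.linalg.solve(L, AtB + rho * (Z - U)))
        Z = np.maximum(X + U, 0.0)  # Z-update: project onto X >= 0
        U = U + X - Z               # scaled dual-variable update
    return Z
```

Note how the nonnegativity constraint never enters the linear solve; it is enforced entirely by the elementwise projection, which is the "bypassing constrained optimization in each iteration" advantage the abstract describes.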
Multi-way Graph Signal Processing on Tensors: Integrative analysis of irregular geometries
Graph signal processing (GSP) is an important methodology for studying data
residing on irregular structures. As acquired data is increasingly taking the
form of multi-way tensors, new signal processing tools are needed to maximally
utilize the multi-way structure within the data. In this paper, we review
modern signal processing frameworks generalizing GSP to multi-way data,
starting from graph signals coupled to familiar regular axes such as time in
sensor networks, and then extending to general graphs across all tensor modes.
This widely applicable paradigm motivates reformulating and improving upon
classical problems and approaches to creatively address the challenges in
tensor-based data. We synthesize common themes arising from current efforts to
combine GSP with tensor analysis and highlight future directions in extending
GSP to the multi-way paradigm. Comment: In review for IEEE Signal Processing Magazine
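A basic building block of the GSP machinery this review generalizes to tensors is the graph Fourier transform: projecting a graph signal onto the eigenbasis of the graph Laplacian. A minimal sketch, assuming a symmetric weight matrix `W` and the combinatorial Laplacian L = D − W (helper name is hypothetical):

```python
import numpy as np

def graph_fourier(signal, W):
    """Graph Fourier transform of a signal living on the nodes of a graph
    with symmetric weight matrix W: eigendecompose the combinatorial
    Laplacian and project the signal onto its eigenvectors."""
    L = np.diag(W.sum(axis=1)) - W          # combinatorial Laplacian D - W
    eigvals, eigvecs = np.linalg.eigh(L)    # eigenvalues act as graph frequencies
    return eigvals, eigvecs.T @ signal      # spectral coefficients

# On a connected graph, a constant signal is pure "DC": all of its energy
# sits on the zero-eigenvalue mode.
W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])               # triangle graph
freqs, coeffs = graph_fourier(np.ones(3), W)
```

In the multi-way setting surveyed above, a transform of this kind is applied per tensor mode, with regular modes such as time using their familiar (e.g., Fourier) bases.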