Stable, Robust and Super Fast Reconstruction of Tensors Using Multi-Way Projections
In the framework of multidimensional Compressed Sensing (CS), we introduce an
analytical reconstruction formula that allows one to recover an Nth-order data tensor
from a reduced set of multi-way compressive measurements by exploiting its low
multilinear-rank structure. Moreover, we show that an interesting property of
multi-way measurements allows us to build the reconstruction from
compressive linear measurements taken in only two selected modes, independently
of the tensor order N. In addition, it is proved that, in the matrix case and
in a particular case with 3rd-order tensors where the same 2D sensing operator
is applied to all mode-3 slices, the proposed reconstruction
is stable in the sense that the approximation
error is comparable to the one provided by the best low-multilinear-rank
approximation, with a threshold parameter that controls the
approximation error. Through the analysis of the upper bound of the
approximation error we show that, in the 2D case, an optimal value for the
threshold parameter exists, which is confirmed by our
simulation results. On the other hand, our experiments on 3D datasets show that
very good reconstructions are obtained with a single default value, which means that this
parameter does not need to be tuned. Our extensive simulation results
demonstrate the stability and robustness of the method when it is applied to
real-world 2D and 3D signals. A comparison with state-of-the-art sparsity-based
CS methods specialized for multidimensional signals is also included. A
very attractive characteristic of the proposed method is that it provides a
direct computation, i.e. it is non-iterative, in contrast to all existing
sparsity-based CS algorithms, thus providing super fast computations even for
large datasets.
Comment: Submitted to IEEE Transactions on Signal Processing
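The flavor of this direct (non-iterative) reconstruction can be sketched in the matrix case. The sketch below is our own simplification, not the paper's general algorithm: it assumes noiseless measurements and an exactly rank-r matrix, compresses rows and columns with Gaussian operators, and recovers the matrix in closed form through the pseudoinverse of the doubly compressed core.

```python
import numpy as np

rng = np.random.default_rng(0)
I, r, m = 50, 4, 10          # ambient size, rank, measurements per mode

# Ground-truth rank-r matrix (the 2D instance of a low-multilinear-rank tensor)
X = rng.standard_normal((I, r)) @ rng.standard_normal((r, I))

# One random Gaussian sensing operator per mode
P1 = rng.standard_normal((m, I))
P2 = rng.standard_normal((m, I))

# Multi-way compressive measurements: two modes plus the doubly compressed core
Y1 = P1 @ X            # rows compressed
Y2 = X @ P2.T          # columns compressed
W = P1 @ X @ P2.T      # both modes compressed

# Direct reconstruction: a single pseudoinverse, no iterations
X_hat = Y2 @ np.linalg.pinv(W) @ Y1
```

With m >= r and generic Gaussian operators, W factors as (P1 A)(B^T P2^T) for any rank factorization X = A B^T, and the pseudoinverse cancels both factors, so the recovery is exact; the stability analysis in the noisy, approximately low-rank setting is what the abstract's threshold parameter governs.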
Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big
datasets has highlighted the limitations of standard flat-view matrix models
and the necessity to move towards more versatile data analysis tools. We show
that higher-order tensors (i.e., multiway arrays) enable such a fundamental
paradigm shift towards models that are essentially polynomial and whose
uniqueness, unlike the matrix methods, is guaranteed under very mild and natural
conditions. Benefiting from the power of multilinear algebra as their mathematical
backbone, data analysis techniques using tensor decompositions are shown to
backbone, data analysis techniques using tensor decompositions are shown to
have great flexibility in the choice of constraints that match data properties,
and to find more general latent components in the data than matrix-based
methods. A comprehensive introduction to tensor decompositions is provided from
a signal processing perspective, starting from the algebraic foundations, via
basic Canonical Polyadic and Tucker models, through to advanced cause-effect
and multi-view data analysis schemes. We show that tensor decompositions enable
natural generalizations of some commonly used signal processing paradigms, such
as canonical correlation and subspace techniques, signal separation, linear
regression, feature extraction and classification. We also cover computational
aspects, and point out how ideas from compressed sensing and scientific
computing may be used for addressing the otherwise unmanageable storage and
manipulation problems associated with big datasets. The concepts are supported
by illustrative real-world case studies illuminating the benefits of the tensor
framework as an efficient and promising tool for modern signal processing, data
analysis and machine learning applications; these benefits also extend to
vector/matrix data through tensorization.
Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
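As a concrete instance of the Tucker model and HOSVD named in the keywords, the following minimal NumPy sketch computes factor matrices from the SVDs of the mode-n unfoldings and a core by multilinear projection; the helper names `unfold` and `mode_mult` are ours, not from any particular library. Without truncation the decomposition is exact.

```python
import numpy as np

def unfold(T, n):
    """Mode-n unfolding: mode n becomes the rows of a matrix."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def mode_mult(T, M, n):
    """Mode-n product T x_n M: multiply mode n of T by the matrix M."""
    out = np.tensordot(M, T, axes=(1, n))
    return np.moveaxis(out, 0, n)

rng = np.random.default_rng(0)
T = rng.standard_normal((5, 6, 7))

# Factor matrices: left singular vectors of each mode-n unfolding
U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]

# Core tensor: project every mode onto its singular subspace
G = T
for n, Un in enumerate(U):
    G = mode_mult(G, Un.T, n)

# Reconstruction: T == G x_1 U1 x_2 U2 x_3 U3 (exact when nothing is truncated)
T_hat = G
for n, Un in enumerate(U):
    T_hat = mode_mult(T_hat, Un, n)
```

Truncating each factor matrix to its leading columns yields the (quasi-optimal) low-multilinear-rank approximation that the compressed-sensing abstract above compares against.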
Tensor Computation: A New Framework for High-Dimensional Problems in EDA
Many critical EDA problems suffer from the curse of dimensionality, i.e. the
very fast-scaling computational burden produced by a large number of parameters
and/or unknown variables. This phenomenon may be caused by multiple spatial or
temporal factors (e.g. 3-D field-solver discretizations and multi-rate circuit
simulation), nonlinearity of devices and circuits, large number of design or
optimization parameters (e.g. full-chip routing/placement and circuit sizing),
or extensive process variations (e.g. variability/reliability analysis and
design for manufacturability). The computational challenges generated by such
high-dimensional problems are generally hard to handle efficiently with
traditional EDA core algorithms that are based on matrix and vector
computation. This paper presents "tensor computation" as an alternative general
framework for the development of efficient EDA algorithms and tools. A tensor
is a high-dimensional generalization of a matrix and a vector, and is a natural
choice for efficiently storing and solving high-dimensional EDA problems.
This paper gives a basic tutorial on tensors, demonstrates some recent examples
of EDA applications (e.g., nonlinear circuit modeling and high-dimensional
uncertainty quantification), and suggests further open EDA problems where the
use of tensor computation could be of advantage.
Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and Systems
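To make the storage argument concrete, here is a hedged sketch of the tensor-train (TT) decomposition via sequential truncated SVDs; the helper names are ours and do not come from any EDA tool. When the TT ranks are low, a d-way array shrinks from storage exponential in d to storage linear in d, which is the kind of compression that makes high-dimensional uncertainty quantification tractable.

```python
import numpy as np

def tt_svd(A, eps=1e-10):
    """Split a d-way array into tensor-train cores by sequential truncated SVDs."""
    shape = A.shape
    cores, r_prev = [], 1
    C = A.reshape(r_prev * shape[0], -1)
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(S > eps * S[0])))   # numerical TT rank at this bond
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = (S[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full array."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=(-1, 0))  # contract the shared bond index
    return out.reshape([c.shape[1] for c in cores])

# A rank-1 four-way array: all TT ranks equal 1, so storage drops from
# prod(n_k) entries to roughly sum(n_k) entries.
rng = np.random.default_rng(0)
vs = [rng.standard_normal(n) for n in (4, 5, 6, 7)]
A = np.einsum('i,j,k,l->ijkl', *vs)

cores = tt_svd(A)
A_hat = tt_reconstruct(cores)
full_size = A.size
tt_size = sum(c.size for c in cores)
```

For this toy array the TT format stores 22 numbers instead of 840; real EDA tensors are rarely exactly rank-1, but the same sequential-SVD scheme with a looser `eps` trades a controlled approximation error for the same kind of compression.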