Stable, Robust and Super Fast Reconstruction of Tensors Using Multi-Way Projections
In the framework of multidimensional Compressed Sensing (CS), we introduce an analytical reconstruction formula that allows one to recover an N-th-order data tensor from a reduced set of multi-way compressive measurements by exploiting its low multilinear-rank structure. Moreover, we show that an interesting property of multi-way measurements allows us to build the reconstruction from compressive linear measurements taken in only two selected modes, independently of the tensor order N. In addition, it is proved that, in the matrix case and in a particular case with 3rd-order tensors where the same 2D sensing operator is applied to all mode-3 slices, the proposed reconstruction, which depends on a threshold parameter τ that controls the approximation error, is stable in the sense that the approximation error is comparable to the one provided by the best low-multilinear-rank approximation. Through the analysis of the upper bound of the approximation error we show that, in the 2D case, an optimal value for the threshold parameter exists, which is confirmed by our simulation results. On the other hand, our experiments on 3D datasets show that very good reconstructions are obtained using τ = 0, which means that this parameter does not need to be tuned. Our extensive simulation results demonstrate the stability and robustness of the method when it is applied to real-world 2D and 3D signals. A comparison with state-of-the-art sparsity-based CS methods specialized for multidimensional signals is also included. A very attractive characteristic of the proposed method is that it provides a direct computation, i.e., it is non-iterative, in contrast to all existing sparsity-based CS algorithms, thus providing super fast computations even for large datasets.
Comment: Submitted to IEEE Transactions on Signal Processing
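To make the idea concrete, here is a minimal NumPy sketch of the 2D (matrix) case: a low-rank matrix is sensed by a Gaussian operator in each of its two modes and recovered in closed form from the compressed measurements. The names A, B, Y1, Y2, W and the pseudoinverse formula below illustrate the underlying low-rank identity under exact-rank assumptions; they are not necessarily the paper's estimator or its τ-thresholded stable variant.

import numpy as np

rng = np.random.default_rng(0)

# Ground truth: an n x n matrix of exact rank r (the low-rank assumption).
n, r, m = 100, 5, 20                      # m measurements per mode, m >= r
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Hypothetical mode-wise Gaussian sensing operators.
A = rng.standard_normal((m, n))           # compresses mode 1 (rows)
B = rng.standard_normal((m, n))           # compresses mode 2 (columns)

# Multi-way compressive measurements.
Y1 = X @ B.T                              # mode 2 compressed: n x m
Y2 = A @ X                                # mode 1 compressed: m x n
W = A @ X @ B.T                           # both modes compressed: m x m

# Direct, non-iterative reconstruction: X_hat = Y1 pinv(W) Y2, which is
# exact whenever rank(X) <= m and A, B are generic (e.g. Gaussian).
X_hat = Y1 @ np.linalg.pinv(W) @ Y2

print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))

Because the formula involves only matrix products and one m x m pseudoinverse, the cost is independent of any iteration count, which is the "super fast" aspect the abstract emphasizes.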
A dual framework for low-rank tensor completion
One of the popular approaches for low-rank tensor completion is to use the latent trace norm regularization. However, most existing works in this direction learn only a sparse combination of tensors. In this work, we fill this gap by proposing a variant of the latent trace norm that helps in learning a non-sparse combination of tensors. We develop a dual framework for solving the low-rank tensor completion problem. We first show a novel characterization of the dual solution space with an interesting factorization of the optimal solution. Overall, the optimal solution is shown to lie on a Cartesian product of Riemannian manifolds. Furthermore, we exploit the versatile Riemannian optimization framework to propose a computationally efficient trust-region algorithm. The experiments illustrate the efficacy of the proposed algorithm on several real-world datasets across applications.
Comment: Accepted to appear in Advances in Neural Information Processing Systems (NIPS), 2018. A shorter version appeared in the NIPS workshop on Synergies in Geometric Data Analysis 201
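The dual trust-region method itself is beyond a few lines, but the primal model it targets is easy to sketch. Below is a hedged NumPy illustration of latent-trace-norm tensor completion via proximal gradient descent: the variable is a sum of K component tensors, and each proximal step applies singular value thresholding to one mode unfolding. The hyperparameters lam and eta, the iteration count, and the synthetic data are illustrative; this is a generic baseline, not the paper's Riemannian algorithm.

import numpy as np

rng = np.random.default_rng(0)

def unfold(X, k):
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def fold(M, k, shape):
    moved = (shape[k],) + shape[:k] + shape[k + 1:]
    return np.moveaxis(M.reshape(moved), 0, k)

def svt(M, tau):
    """Singular value thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Synthetic low-multilinear-rank tensor with half its entries observed.
shape = (15, 16, 17)
core = rng.standard_normal((2, 2, 2))
U = [rng.standard_normal((d, 2)) for d in shape]
T = np.einsum('abc,ia,jb,kc->ijk', core, *U)
mask = rng.random(shape) < 0.5

# Latent decomposition: X = X_0 + X_1 + X_2, with X_k low-rank in mode k.
K, lam, eta = T.ndim, 0.5, 1.0 / T.ndim   # illustrative hyperparameters
Xs = [np.zeros(shape) for _ in range(K)]
for _ in range(500):
    R = mask * (sum(Xs) - T)              # gradient of the data-fit term
    for k in range(K):
        Xs[k] = fold(svt(unfold(Xs[k] - eta * R, k), eta * lam), k, shape)

X_hat = sum(Xs)
err = np.linalg.norm((X_hat - T)[~mask]) / np.linalg.norm(T[~mask])
print("relative error on held-out entries:", err)

The non-sparse variant studied in the paper changes how the K components are regularized so that all of them contribute, rather than the penalty driving most components to zero.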
A Joint Tensor Completion and Prediction Scheme for Multi-Dimensional Spectrum Map Construction
Spectrum data, which are usually characterized by many dimensions, such as location, frequency, time, and signal strength, present formidable challenges in terms of acquisition, processing, and visualization. In practice, a portion of the spectrum data entries may be unavailable due to interference during the acquisition process or compression during the sensing process. Nevertheless, completion of multi-dimensional spectrum data has drawn little attention from researchers working in the field. In this paper, we first put forward the concept of the spectrum tensor to depict multi-dimensional spectrum data. Then, we develop a joint tensor completion and prediction scheme, which combines an improved tensor completion algorithm with prediction models to retrieve the incomplete measurements. Moreover, we build an experimental platform using the Universal Software Radio Peripheral to collect real-world spectrum tensor data. Experimental results demonstrate that the proposed joint tensor processing scheme is more effective than relying on the completion or prediction scheme alone.
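The abstract leaves the improved completion algorithm unspecified, so the sketch below shows only a generic baseline consistent with the setup: a spectrum tensor with hypothetical location x frequency x time axes is completed by iteratively re-imputing missing entries from a truncated higher-order SVD of fixed multilinear rank. All dimensions, ranks, and the synthetic data are illustrative, not taken from the paper.

import numpy as np

rng = np.random.default_rng(1)

def hosvd_project(X, ranks):
    """Approximate X by a tensor of multilinear rank <= ranks, via a
    truncated higher-order SVD (mode-wise truncated SVDs)."""
    Us = []
    for k, r in enumerate(ranks):
        Mk = np.moveaxis(X, k, 0).reshape(X.shape[k], -1)
        U, _, _ = np.linalg.svd(Mk, full_matrices=False)
        Us.append(U[:, :r])
    G = X
    for k, U in enumerate(Us):            # core: contract each mode with U^T
        G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, k, 0), axes=1), 0, k)
    for k, U in enumerate(Us):            # expand the core back with U
        G = np.moveaxis(np.tensordot(U, np.moveaxis(G, k, 0), axes=1), 0, k)
    return G

# Hypothetical spectrum tensor: location x frequency x time, with a
# synthetic low-multilinear-rank ground truth and 40% missing entries.
dims, ranks = (10, 12, 14), (3, 3, 3)
core = rng.standard_normal(ranks)
factors = [rng.standard_normal((d, r)) for d, r in zip(dims, ranks)]
T = np.einsum('abc,ia,jb,kc->ijk', core, *factors)
mask = rng.random(dims) < 0.6             # True where an entry was measured

# Iterative hard-impute: keep observed entries, refill the rest from the
# current low-multilinear-rank approximation.
X = np.where(mask, T, 0.0)
for _ in range(100):
    X = np.where(mask, T, hosvd_project(X, ranks))

err = np.linalg.norm((X - T)[~mask]) / np.linalg.norm(T[~mask])
print("relative error on missing entries:", err)

A prediction model, as in the paper's joint scheme, would supply estimates along the time axis for entries the completion step cannot recover; that component is not sketched here.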
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
Comment: 232 pages
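Since the monograph centers on the tensor train (TT) format, a minimal NumPy sketch of the standard TT-SVD construction (sequential reshapes plus truncated SVDs) may help fix ideas. The toy tensor and the rank cap rmax are chosen only for illustration; the monograph covers far more general networks and contraction schemes.

import numpy as np

def tt_svd(X, rmax):
    """Tensor-train decomposition via sequential truncated SVDs (TT-SVD).
    Returns cores G[k] of shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1."""
    shape, cores, r = X.shape, [], 1
    C = X
    for k in range(X.ndim - 1):
        C = C.reshape(r * shape[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rk = min(rmax, len(s))            # cap the TT-rank at rmax
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        C = s[:rk, None] * Vt[:rk]        # carry the remainder forward
        r = rk
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract the TT cores back into the full tensor."""
    X = cores[0]
    for G in cores[1:]:
        X = np.tensordot(X, G, axes=1)
    return X.reshape(X.shape[1:-1])

# A toy 4-way tensor with exact TT-ranks of 2: X[i,j,k,l] = i + j + k + l.
g = [np.arange(n, dtype=float) for n in (8, 9, 10, 11)]
X = (g[0][:, None, None, None] + g[1][None, :, None, None]
     + g[2][None, None, :, None] + g[3][None, None, None, :])

cores = tt_svd(X, rmax=3)
X_hat = tt_full(cores)
n_params = sum(G.size for G in cores)
print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
print("parameters: %d in TT form vs %d in the full tensor" % (n_params, X.size))

The parameter count grows linearly in the number of modes for fixed TT-ranks, which is the sense in which such networks alleviate the curse of dimensionality.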