Low-rank Tensor Recovery
Low-rank tensor recovery is an interesting subject from both the theoretical and the application point of view. On the one hand, it is a natural extension of the sparse vector and low-rank matrix recovery problems. On the other hand, estimating a low-rank tensor has applications in many different areas such as machine learning, video compression, and seismic data interpolation. In this thesis, two approaches are introduced. The first is a convex optimization approach and can be considered a tractable extension of ℓ1-minimization for sparse vector recovery and nuclear norm minimization for matrix recovery to the tensor scenario. It is based on theta bodies, a recently introduced tool from real algebraic geometry. In particular, the theta bodies of an appropriately defined polynomial ideal correspond to unit-theta-norm balls. These unit-theta-norm balls are relaxations of the unit tensor nuclear norm ball. Thus, in this case, we consider the canonical tensor format. The method requires computing the reduced Groebner basis (with respect to the graded reverse lexicographic ordering) of the appropriately defined polynomial ideal. Numerical results for third-order tensor recovery via the theta norm are provided. The second approach is a generalization of the iterative hard thresholding algorithm for sparse vector and low-rank matrix recovery to the tensor scenario (tensor IHT, or TIHT, algorithm). Here, we consider the Tucker format, the tensor train decomposition, and the hierarchical Tucker decomposition. The analysis of the algorithm is based on a version of the restricted isometry property (tensor RIP or TRIP) adapted to the tensor decomposition at hand. We show that subgaussian measurement ensembles satisfy TRIP with high probability under an almost optimal condition on the number of measurements. Additionally, we show that partial Fourier maps combined with random sign flips of the tensor entries satisfy TRIP with high probability.
Under the assumption that the linear operator satisfies TRIP, and under an additional assumption on the thresholding operator, we provide a linear convergence result for the TIHT algorithm. Finally, we present numerical results on the recovery of low-Tucker-rank third-order tensors from partial Fourier maps combined with random sign flips of the tensor entries, tensor completion, and Gaussian measurement ensembles.
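The Groebner-basis step of the first approach can be reproduced with standard computer-algebra tools. As an illustrative sketch only — the generators below are placeholders, not the ideal constructed in the thesis — SymPy computes reduced Groebner bases in the graded reverse lexicographic ("grevlex") ordering:

```python
from sympy import groebner, symbols

x, y, z = symbols("x y z")

# placeholder generators -- the thesis uses an ideal tailored to the theta norm
gens = [x**2 + y, x*y - z]

# reduced Groebner basis w.r.t. the graded reverse lexicographic ordering
G = groebner(gens, x, y, z, order="grevlex")

# ideal membership can then be decided by reduction modulo the basis
in_ideal = G.contains(x**2 + y)   # the generators themselves lie in the ideal
```

SymPy's `groebner` returns the reduced basis by default, so the ordering is the only option that needs to be set explicitly.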
Tensor Computation: A New Framework for High-Dimensional Problems in EDA
Many critical EDA problems suffer from the curse of dimensionality, i.e. the
very fast-scaling computational burden produced by a large number of
parameters and/or unknown variables. This phenomenon may be caused by multiple
spatial or temporal factors (e.g. 3-D field-solver discretizations and
multi-rate circuit simulation), nonlinearity of devices and circuits, a large
number of design or optimization parameters (e.g. full-chip routing/placement
and circuit sizing), or extensive process variations (e.g.
variability/reliability analysis and design for manufacturability). The
computational challenges generated by such high-dimensional problems are
generally hard to handle efficiently with traditional EDA core algorithms
based on matrix and vector computation. This paper presents "tensor
computation" as an alternative general framework for the development of
efficient EDA algorithms and tools. A tensor is a high-dimensional
generalization of a matrix and a vector, and is a natural choice for
efficiently storing and solving high-dimensional EDA problems. This paper
gives a basic tutorial on tensors, demonstrates some recent examples of EDA
applications (e.g., nonlinear circuit modeling and high-dimensional
uncertainty quantification), and suggests further open EDA problems where the
use of tensor computation could be of advantage.
Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and
Systems.
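As a minimal illustration of the central object (not code from the paper), here is a third-order array in NumPy together with its mode-k matricizations, whose matrix ranks form the multilinear (Tucker) rank:

```python
import numpy as np

# a small 3rd-order tensor, e.g. a quantity indexed by two spatial axes
# and one parameter axis
T = np.arange(24, dtype=float).reshape(2, 3, 4)

def unfold(T, mode):
    """Mode-k matricization: the mode-k fibers become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# the multilinear (Tucker) rank is the tuple of ranks of the unfoldings
mlrank = tuple(int(np.linalg.matrix_rank(unfold(T, k))) for k in range(T.ndim))
```

A matrix is recovered as the order-2 special case, where both unfoldings are the matrix and its transpose and the two ranks coincide.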
Tensor completion in hierarchical tensor representations
Compressed sensing extends from the recovery of sparse vectors from
undersampled measurements via efficient algorithms to the recovery of matrices
of low rank from incomplete information. Here we consider a further extension
to the reconstruction of tensors of low multi-linear rank in recently
introduced hierarchical tensor formats from a small number of measurements.
Hierarchical tensors are a flexible generalization of the well-known Tucker
representation, which have the advantage that the number of degrees of freedom
of a low rank tensor does not scale exponentially with the order of the tensor.
While corresponding tensor decompositions can be computed efficiently via
successive applications of (matrix) singular value decompositions, some
important properties of the singular value decomposition do not extend from the
matrix to the tensor case. This results in major computational and theoretical
difficulties in designing and analyzing algorithms for low rank tensor
recovery. For instance, a canonical analogue of the tensor nuclear norm is
NP-hard to compute in general, which is in stark contrast to the matrix case.
In this book chapter we consider versions of iterative hard thresholding
schemes adapted to hierarchical tensor formats. One variant builds on methods
from Riemannian optimization and uses a retraction mapping from the tangent
space of the manifold of low rank tensors back to this manifold. We provide
first partial convergence results based on a tensor version of the restricted
isometry property (TRIP) of the measurement map. Moreover, an estimate of the
number of measurements is provided that ensures the TRIP of a given tensor rank
with high probability for Gaussian measurement maps.
Comment: revised version, to be published in Compressed Sensing and Its
Applications (edited by H. Boche, R. Calderbank, G. Kutyniok, J. Vybiral).
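The "successive applications of (matrix) singular value decompositions" mentioned above can be made concrete for the tensor train format, a special case of the hierarchical formats considered. A minimal TT-SVD sketch in NumPy (an illustration of the standard construction, not the chapter's exact algorithm):

```python
import numpy as np

def tt_svd(X, eps=1e-10):
    """Decompose a tensor into tensor train (TT) cores by successive SVDs:
    reshape the remainder into a matrix, take an SVD, keep the left singular
    vectors as a core, and push S @ Vt on to the next mode."""
    dims = X.shape
    cores, r_prev = [], 1
    C = X.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))   # drop negligible singular values
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into a full tensor."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=(T.ndim - 1, 0))
    return T.squeeze(axis=(0, T.ndim - 1))
```

Without truncation the round trip is exact, which is the matrix-SVD property that does carry over; what fails in the tensor case is, e.g., the best-low-rank-approximation property of simple truncation.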
Low rank tensor recovery via iterative hard thresholding
We study extensions of compressive sensing and low rank matrix recovery
(matrix completion) to the recovery of low rank tensors of higher order from a
small number of linear measurements. While the theoretical understanding of low
rank matrix recovery is already well-developed, only a few contributions on the
low rank tensor recovery problem are available so far. In this paper, we
introduce versions of the iterative hard thresholding algorithm for several
tensor decompositions, namely the higher order singular value decomposition
(HOSVD), the tensor train format (TT), and the general hierarchical Tucker
decomposition (HT). We provide a partial convergence result for these
algorithms which is based on a variant of the restricted isometry property of
the measurement operator adapted to the tensor decomposition at hand that
induces a corresponding notion of tensor rank. We show that subgaussian
measurement ensembles satisfy the tensor restricted isometry property with high
probability under a certain almost optimal bound on the number of measurements
which depends on the corresponding tensor format. These bounds are extended to
partial Fourier maps combined with random sign flips of the tensor entries.
Finally, we illustrate the performance of iterative hard thresholding methods
for tensor recovery via numerical experiments where we consider recovery from
Gaussian random measurements, tensor completion (recovery of missing entries),
and Fourier measurements for third-order tensors.
Comment: 34 pages.
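The iterative hard thresholding scheme described in this abstract can be sketched for the HOSVD format. The following is a simplified toy implementation (fixed conservative step size, plain Gaussian measurements; not the authors' exact algorithm):

```python
import numpy as np

def hosvd_truncate(X, ranks):
    """Hard thresholding for the HOSVD format: project each mode-k unfolding
    onto its top-r left singular vectors and rebuild the tensor."""
    Us = []
    for mode, r in enumerate(ranks):
        Xk = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(Xk, full_matrices=False)
        Us.append(U[:, :r])
    Y = X
    for mode, U in enumerate(Us):
        # multiply mode `mode` by the orthogonal projector U @ U.T
        Y = np.moveaxis(np.tensordot(U @ U.T, np.moveaxis(Y, mode, 0), axes=1),
                        0, mode)
    return Y

def tiht(A, y, shape, ranks, n_iter=500):
    """Tensor IHT: a gradient step on 0.5 * ||A vec(X) - y||^2, followed by
    hard thresholding back to the prescribed multilinear rank."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative fixed step size
    X = np.zeros(shape)
    for _ in range(n_iter):
        grad = A.T @ (A @ X.ravel() - y)
        X = hosvd_truncate(X - step * grad.reshape(shape), ranks)
    return X
```

The thresholding step is the only tensor-specific ingredient; swapping it for a TT or HT truncation yields the other variants discussed in the paper.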