Low rank tensor recovery via iterative hard thresholding
We study extensions of compressive sensing and low rank matrix recovery
(matrix completion) to the recovery of low rank tensors of higher order from a
small number of linear measurements. While the theoretical understanding of low
rank matrix recovery is already well developed, only a few contributions on the
low rank tensor recovery problem are available so far. In this paper, we
introduce versions of the iterative hard thresholding algorithm for several
tensor decompositions, namely the higher order singular value decomposition
(HOSVD), the tensor train format (TT), and the general hierarchical Tucker
decomposition (HT). We provide a partial convergence result for these
algorithms, based on a variant of the restricted isometry property of the
measurement operator that is adapted to the tensor decomposition at hand and
to the notion of tensor rank it induces. We show that subgaussian
measurement ensembles satisfy the tensor restricted isometry property with high
probability under a certain almost optimal bound on the number of measurements
which depends on the corresponding tensor format. These bounds are extended to
partial Fourier maps combined with random sign flips of the tensor entries.
Finally, we illustrate the performance of iterative hard thresholding methods
for tensor recovery via numerical experiments where we consider recovery from
Gaussian random measurements, tensor completion (recovery of missing entries),
and Fourier measurements for third order tensors.
Comment: 34 pages
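As a minimal sketch of the algorithm named in the title, the snippet below implements iterative hard thresholding for the simplest (order-two, i.e. matrix) case, where the rank truncation is an ordinary truncated SVD; the higher-order variants in the paper replace this projection with HOSVD/TT/HT truncations. The function name and step-size choice are illustrative, not the paper's.

```python
import numpy as np

def iht_low_rank(A, y, shape, rank, steps=500, step_size=None):
    """Iterative hard thresholding for low-rank matrix recovery (sketch).

    A : (m, n1*n2) measurement matrix acting on the vectorized matrix,
    y : (m,) measurements, shape = (n1, n2). The rank-r projection via
    truncated SVD is the matrix analogue of the HOSVD truncation used
    for higher-order tensors in the paper."""
    n1, n2 = shape
    x = np.zeros(n1 * n2)
    if step_size is None:
        # conservative step size from the spectral norm of A
        step_size = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(steps):
        # gradient step on the least-squares objective ||A x - y||^2 / 2
        x = x + step_size * A.T @ (y - A @ x)
        # hard thresholding: project onto the set of rank-r matrices
        U, s, Vt = np.linalg.svd(x.reshape(n1, n2), full_matrices=False)
        x = ((U[:, :rank] * s[:rank]) @ Vt[:rank]).ravel()
    return x.reshape(n1, n2)
```

With Gaussian measurements well above the degrees of freedom of a rank-one matrix, this iteration typically recovers the ground truth to high accuracy.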
Trading quantum for classical resources in quantum data compression
We study the visible compression of a source E of pure quantum signal states,
or, more formally, the minimal resources per signal required to represent
arbitrarily long strings of signals with arbitrarily high fidelity, when the
compressor is given the identity of the input state sequence as classical
information. According to the quantum source coding theorem, the optimal
quantum rate is the von Neumann entropy S(E) qubits per signal.
We develop a refinement of this theorem in order to analyze the situation in
which the states are coded into classical and quantum bits that are quantified
separately. This leads to a trade-off curve Q(R), where Q(R) qubits per signal
is the optimal quantum rate for a given classical rate of R bits per signal.
Our main result is an explicit characterization of this trade-off function
by a simple formula in terms of only single-signal, perfect-fidelity encodings
of the source. We give a thorough discussion of many further mathematical
properties of our formula, including an analysis of its behavior for group
covariant sources and a generalization to sources with continuously
parameterized states. We also show that our result leads to a number of
corollaries characterizing the trade-off between information gain and state
disturbance for quantum sources. In addition, we indicate how our techniques
also provide a solution to the so-called remote state preparation problem.
Finally, we develop a probability-free version of our main result which may be
interpreted as an answer to the question: "How many classical bits does a
qubit cost?" This theorem provides a type of dual to Holevo's theorem, insofar
as the latter characterizes the cost of coding classical bits into qubits.
Comment: 51 pages, 7 figures
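For reference, the quantities the abstract relies on can be stated formally; the following is the standard definition of the von Neumann entropy of a pure-state source and the endpoint of the trade-off curve that the quantum source coding theorem fixes (the ensemble notation $\{p_i, |\varphi_i\rangle\}$ is an assumption about how $E$ is specified):

```latex
% von Neumann entropy of the source E = {p_i, |phi_i>}
S(E) = S(\rho) = -\operatorname{Tr}\bigl(\rho \log_2 \rho\bigr),
\qquad
\rho = \sum_i p_i \, |\varphi_i\rangle\langle\varphi_i| .
% Q(R) is the minimal achievable qubit rate given R classical bits per
% signal; quantum source coding gives the endpoint Q(0) = S(E).
```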
On Optimizing Distributed Tucker Decomposition for Dense Tensors
The Tucker decomposition expresses a given tensor as the product of a small
core tensor and a set of factor matrices. Apart from providing data
compression, the construction is useful in performing analysis such as
principal component analysis (PCA) and finds applications in diverse domains
such as signal processing, computer vision and text analytics. Our objective is
to develop an efficient distributed implementation for the case of dense
tensors. The implementation is based on the HOOI (Higher Order Orthogonal
Iteration) procedure, wherein the tensor-times-matrix product forms the core
routine. Prior work has proposed heuristics for reducing the computational
load and communication volume incurred by this routine. We study the two
metrics in a formal and systematic manner, and design strategies that are
optimal under both metrics. Our experimental evaluation on a large benchmark
of tensors shows that the optimal strategies provide significant reduction in
load and volume compared to prior heuristics, and provide up to 7x speed-up in
the overall running time.
Comment: Preliminary version of the paper appears in the proceedings of
IPDPS'1
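The tensor-times-matrix product that the abstract identifies as the core routine of HOOI can be sketched in a few lines of NumPy; `mode_n_product` is an illustrative name, not the paper's API.

```python
import numpy as np

def mode_n_product(T, M, n):
    """Mode-n tensor-times-matrix product: contracts mode n of tensor T
    with the columns of matrix M. Repeated mode products of the tensor
    with the factor matrices form the core routine in HOOI."""
    # bring mode n to the front, flatten the remaining modes,
    # apply M, then restore the original mode ordering
    Tn = np.moveaxis(T, n, 0)
    shape = Tn.shape
    out = M @ Tn.reshape(shape[0], -1)
    return np.moveaxis(out.reshape((M.shape[0],) + shape[1:]), 0, n)
```

Equivalently, for a third-order tensor the mode-1 product satisfies `R[i, a, k] = sum_j T[i, j, k] * M[a, j]`, which is easy to check against `np.einsum`.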
MARS: Masked Automatic Ranks Selection in Tensor Decompositions
Tensor decomposition methods are known to be efficient for compressing and
accelerating neural networks. However, the problem of determining the optimal
decomposition structure, while quite important, is still not well studied.
Specifically, the decomposition ranks are the crucial parameters controlling the
compression-accuracy trade-off. In this paper, we introduce MARS -- a new
efficient method for the automatic selection of ranks in general tensor
decompositions. During training, the procedure learns binary masks over
decomposition cores that "select" the optimal tensor structure. The learning is
performed via relaxed maximum a posteriori (MAP) estimation in a specific
Bayesian model. The proposed method achieves better results compared to
previous works in various tasks
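The masking idea above can be sketched as follows for a CP decomposition: a binary mask over the rank dimension zeroes out components, shrinking the effective rank. This is a simplified, forward-only stand-in, not the authors' relaxed MAP training procedure, and `masked_cp_forward` is a hypothetical name.

```python
import numpy as np

def masked_cp_forward(factors, logits, hard=False):
    """Apply a (relaxed) binary mask over the rank dimension of a CP
    decomposition -- a simplified stand-in for MARS's learned masks over
    decomposition cores. `logits` parameterize the mask; in MARS they
    would be learned via relaxed MAP estimation in a Bayesian model."""
    if hard:
        mask = (logits > 0).astype(float)       # discrete 0/1 mask
    else:
        mask = 1.0 / (1.0 + np.exp(-logits))    # sigmoid relaxation
    A, B, C = factors
    Am = A * mask  # mask broadcasts over the rank (last) axis
    # reconstruct the tensor from the masked rank-one components
    return np.einsum('ir,jr,kr->ijk', Am, B, C)
```

With a hard mask, components whose logits are negative are removed, so the result equals the CP reconstruction restricted to the surviving rank components.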