Spectral Methods from Tensor Networks
A tensor network is a diagram that specifies a way to "multiply" a collection
of tensors together to produce another tensor (or matrix). Many existing
algorithms for tensor problems (such as tensor decomposition and tensor PCA),
although they are not presented this way, can be viewed as spectral methods on
matrices built from simple tensor networks. In this work we leverage the full
power of this abstraction to design new algorithms for certain continuous
tensor decomposition problems.
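To make "matrices built from simple tensor networks" concrete, here is a minimal sketch on a synthetic low-rank symmetric tensor (illustrative dimensions; this is not the paper's algorithm): the simplest such network just fuses two legs of an order-3 tensor into a flattening matrix, and a spectral step on that matrix already recovers the span of the rank-1 components.

```python
# Minimal sketch, assuming a synthetic low-rank symmetric tensor (not the
# paper's algorithm). The simplest tensor network fuses two legs of T into an
# n x n^2 "flattening" matrix; its top singular space recovers span{a_i}.
import numpy as np

n, r = 50, 5
rng = np.random.default_rng(0)
A = rng.standard_normal((n, r))                  # columns a_1, ..., a_r
T = np.einsum('ia,ja,ka->ijk', A, A, A)          # T = sum_i a_i (x) a_i (x) a_i

M = T.reshape(n, n * n)                          # fuse the last two legs into one
U, s, _ = np.linalg.svd(M, full_matrices=False)  # spectral step on the network's matrix

P = U[:, :r] @ U[:, :r].T                        # projector onto the top-r singular space
print(np.allclose(P @ A, A, atol=1e-6))          # True: the span of the a_i is recovered
```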
An important and challenging family of tensor problems comes from orbit
recovery, a class of inference problems involving group actions (inspired by
applications such as cryo-electron microscopy). Orbit recovery problems over
finite groups can often be solved via standard tensor methods. However, for
infinite groups, no general algorithms are known. We give a new spectral
algorithm based on tensor networks for one such problem: continuous
multi-reference alignment over the infinite group SO(2). Our algorithm extends
to the more general heterogeneous case.

Comment: 30 pages, 8 figures
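For context, here is a hedged sketch of the continuous multi-reference alignment observation model over SO(2) described above (the observation model only, not the recovery algorithm); the bandwidth, sample count, noise level, and variable names below are illustrative choices. Each sample is the unknown signal rotated by a random angle plus Gaussian noise, and in Fourier coordinates a rotation by $\theta$ multiplies the $k$-th coefficient by $e^{ik\theta}$.

```python
# Hedged illustration of the continuous multi-reference alignment observation
# model over SO(2) (model only, not the recovery algorithm); bandwidth K,
# sample count m, and noise level sigma are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
K, m, sigma = 8, 1000, 0.5
freqs = np.arange(-K, K + 1)
x_hat = rng.standard_normal(2 * K + 1) + 1j * rng.standard_normal(2 * K + 1)

thetas = rng.uniform(0.0, 2.0 * np.pi, size=m)            # unknown random rotations
noise = sigma * (rng.standard_normal((m, 2 * K + 1))
                 + 1j * rng.standard_normal((m, 2 * K + 1)))
# Rotating the signal by theta multiplies its k-th Fourier coefficient by e^{ik theta}.
y_hat = np.exp(1j * np.outer(thetas, freqs)) * x_hat + noise

# Task: estimate x_hat (up to a global rotation) from the samples y_hat alone.
```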
Lower Bounds for the Convergence of Tensor Power Iteration on Random Overcomplete Models
Tensor decomposition serves as a powerful primitive in statistics and machine
learning. In this paper, we focus on using power iteration to decompose an
overcomplete random tensor. Past work studying the properties of tensor power
iteration either requires a non-trivial data-independent initialization, or is
restricted to the undercomplete regime. Moreover, several papers implicitly
suggest that logarithmically many iterations (in terms of the input dimension)
are sufficient for the power method to recover one of the tensor components. In
this paper, we analyze the dynamics of tensor power iteration from random
initialization in the overcomplete regime. Surprisingly, we show that
polynomially many steps are necessary for convergence of tensor power iteration
to any of the true components, which refutes the previous conjecture. On the
other hand, our numerical experiments suggest that tensor power iteration
successfully recovers tensor components for a broad range of parameters,
even though it takes at least polynomially many steps to converge. To further
complement our empirical evidence, we prove that a popular objective function
for tensor decomposition is strictly increasing along the power iteration path.
Our proof is based on the Gaussian conditioning technique, which has been
applied to analyze the approximate message passing (AMP) algorithm. The major
ingredient of our argument is a conditioning lemma that allows us to generalize
AMP-type analysis to the non-proportional limit and to polynomially many iterations of
the power method.

Comment: 40 pages, 3 figures
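As a reference point for the dynamics discussed above, here is a minimal numpy sketch of tensor power iteration on a synthetic overcomplete model (parameters are illustrative and this is not the paper's experimental setup): one power step contracts the tensor with the current iterate along two modes and renormalizes.

```python
# Minimal sketch of tensor power iteration on a random overcomplete model
# (illustrative parameters; not the paper's experimental setup).
import numpy as np

n, r, iters = 30, 90, 200                     # overcomplete: r > n
rng = np.random.default_rng(2)
A = rng.standard_normal((n, r)) / np.sqrt(n)  # components with roughly unit norm
T = np.einsum('ia,ja,ka->ijk', A, A, A)       # T = sum_i a_i (x) a_i (x) a_i

x = rng.standard_normal(n)
x /= np.linalg.norm(x)                        # random initialization
for _ in range(iters):
    x = np.einsum('ijk,j,k->i', T, x, x)      # one power step: x <- T(x, x, .)
    x /= np.linalg.norm(x)

# Cosine similarity with the best-matching component (1 would mean exact recovery).
print(np.max(np.abs(A.T @ x) / np.linalg.norm(A, axis=0)))
```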
Average-Case Complexity of Tensor Decomposition for Low-Degree Polynomials
Suppose we are given an $n$-dimensional order-3 symmetric tensor that is the sum of $r$ random rank-1 terms. The
problem of recovering the rank-1 components is possible in principle when $r \lesssim n^2$, but polynomial-time algorithms are only known in the regime $r \lesssim n^{3/2}$. Similar "statistical-computational gaps" occur in many
high-dimensional inference tasks, and in recent years there has been a flurry
of work on explaining the apparent computational hardness in these problems by
proving lower bounds against restricted (yet powerful) models of computation
such as statistical queries (SQ), sum-of-squares (SoS), and low-degree
polynomials (LDP). However, no such prior work exists for tensor decomposition,
largely because its hardness does not appear to be explained by a "planted
versus null" testing problem.
We consider a model for random order-3 tensor decomposition where one
component is slightly larger in norm than the rest (to break symmetry), and the
components are drawn uniformly from the hypercube. We resolve the computational
complexity in the LDP model: $O(\log n)$-degree polynomial functions of the
tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$. This provides rigorous
evidence suggesting that the best known algorithms for tensor decomposition
cannot be improved, at least by known approaches. A natural extension of the
result holds for tensors of any fixed order $k \geq 3$, in which case the LDP
threshold is $r \sim n^{k/2}$.

Comment: 42 pages; STOC 2023
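In symbols, the model and threshold described above read roughly as follows; the notation (in particular the scaling factor $\lambda$ for the slightly larger planted component) is chosen here for illustration:

\[
T \;=\; \lambda\, a_1^{\otimes 3} + \sum_{i=2}^{r} a_i^{\otimes 3},
\qquad a_i \sim \mathrm{Unif}\{\pm 1\}^{n},
\qquad \lambda \text{ slightly larger than } 1,
\]
\[
\text{degree-}O(\log n)\ \text{polynomials estimate } a_1 \text{ when } r \ll n^{3/2}
\text{ and fail when } r \gg n^{3/2};
\qquad \text{for order } k,\ r \sim n^{k/2}.
\]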
- …