Near-optimal bounds for phase synchronization
The problem of phase synchronization is to estimate the phases (angles) of a
complex unit-modulus vector z from their noisy pairwise relative measurements
C = zz* + σW, where W is a complex-valued Gaussian random matrix.
The maximum likelihood estimator (MLE) is a solution to a unit-modulus-constrained
quadratic programming problem, which is nonconvex. Existing works
have proposed polynomial-time algorithms such as a semidefinite relaxation
(SDP) approach or the generalized power method (GPM) to solve it. Numerical
experiments suggest both of these methods succeed with high probability for
σ up to Õ(√n), yet existing analyses only
confirm this observation for σ up to O(n^{1/4}). In this
paper, we bridge the gap by proving the SDP is tight for σ = O(√(n/log n)), and GPM converges to the global optimum under
the same regime. Moreover, we establish a linear convergence rate for GPM, and
derive a tighter ℓ∞ bound for the MLE. A novel technique we develop
in this paper is to track (theoretically) closely related sequences of
iterates, in addition to the sequence of iterates GPM actually produces. As a
by-product, we obtain an ℓ∞ perturbation bound for leading
eigenvectors. Our result also confirms intuitions that use techniques from
statistical mechanics.
Comment: 34 pages, 1 figure
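The GPM iteration described in the abstract admits a compact sketch: spectral initialization at the leading eigenvector of C, then repeated multiplication by C with entrywise projection onto the unit-modulus constraint. A minimal NumPy illustration under these assumptions (the paper's exact variant and constants may differ; all names here are ours):

```python
import numpy as np

def generalized_power_method(C, iters=100):
    """Sketch of GPM: spectral initialization, then repeated
    multiplication by C with entrywise phase projection."""
    _, vecs = np.linalg.eigh(C)
    z = vecs[:, -1]           # leading eigenvector of C
    z = z / np.abs(z)         # project onto the unit-modulus constraint
    for _ in range(iters):
        w = C @ z
        z = w / np.abs(w)     # entrywise normalization
    return z

# Synthetic instance C = z z* + sigma W with Hermitian Gaussian noise
rng = np.random.default_rng(0)
n, sigma = 200, 1.0
z_true = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W = (A + A.conj().T) / 2
C = np.outer(z_true, z_true.conj()) + sigma * W
z_hat = generalized_power_method(C)
# Alignment with the truth up to a global phase; close to 1 means recovery
corr = np.abs(np.vdot(z_hat, z_true)) / n
```

Plotting `corr` against a growing noise level σ reproduces the empirical success threshold the abstract refers to.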
Computational Hardness of Certifying Bounds on Constrained PCA Problems
Given a random n×n symmetric matrix W drawn from the Gaussian orthogonal ensemble (GOE), we consider the problem of certifying an upper bound on the maximum value of the quadratic form x⊤Wx over all vectors x in a constraint set S⊂Rⁿ. For a certain class of normalized constraint sets S we show that, conditional on certain complexity-theoretic assumptions, there is no polynomial-time algorithm certifying a better upper bound than the largest eigenvalue of W. A notable special case included in our results is the hypercube S = {±1/√n}ⁿ, which corresponds to the problem of certifying bounds on the Hamiltonian of the Sherrington-Kirkpatrick spin glass model from statistical physics.
Our proof proceeds in two steps. First, we give a reduction from the detection problem in the negatively-spiked Wishart model to the above certification problem. We then give evidence that this Wishart detection problem is computationally hard below the classical spectral threshold, by showing that no low-degree polynomial can (in expectation) distinguish the spiked and unspiked models. This method for identifying computational thresholds was proposed in a sequence of recent works on the sum-of-squares hierarchy, and is believed to be correct for a large class of problems. Our proof can be seen as constructing a distribution over symmetric matrices that appears computationally indistinguishable from the GOE, yet is supported on matrices whose maximum quadratic form over x∈S is much larger than that of a GOE matrix.
ISSN: 1868-896
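For intuition, the spectral certificate the theorem says cannot be beaten is trivial to compute: λ_max(W)·‖x‖² ≥ x⊤Wx for every x, and for GOE normalization (off-diagonal variance 1/n) λ_max concentrates near 2, even though the true hypercube maximum is known from the spin-glass literature to be strictly smaller (≈1.526). A small NumPy sketch of the certificate (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# GOE normalization: off-diagonal variance 1/n, so lambda_max -> 2
G = rng.standard_normal((n, n)) / np.sqrt(n)
W = (G + G.T) / np.sqrt(2)
lam_max = np.linalg.eigvalsh(W).max()  # the spectral certificate, ~2

# The certificate bounds x^T W x for every unit-norm x, in particular
# at any point of the normalized hypercube {+-1/sqrt(n)}^n:
x = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)
val = x @ W @ x
```

The hardness result says no polynomial-time certifier provably improves on `lam_max`, despite the gap between 2 and the ≈1.526 ground-truth maximum.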
Spectral Method for Multiplexed Phase Retrieval and Application in Optical Imaging in Complex Media
We introduce a generalized version of phase retrieval called multiplexed
phase retrieval. We want to recover the phase of amplitude-only measurements
from linear combinations of them. This corresponds to the case in which
multiple incoherent sources are sampled jointly, and one would like to recover
their individual contributions. We show that a recent spectral method developed
for phase retrieval can be generalized to this setting, and that its
performance follows a phase transition behavior. We apply this new technique to
light focusing at depth in a complex medium. Experimentally, although we only
have access to the sum of the intensities on multiple targets, we are able to
separately focus on each one, thus opening potential applications in deep
fluorescence imaging and light delivery.
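For reference, the classic single-source spectral method that the paper generalizes builds an intensity-weighted covariance matrix and takes its leading eigenvector; the multiplexed variant itself is not reproduced here. A minimal NumPy sketch under Gaussian sensing vectors (all names are ours):

```python
import numpy as np

def spectral_init(A, y):
    """Classic spectral method for phase retrieval: leading eigenvector
    of the intensity-weighted covariance (1/m) sum_i y_i b_i b_i^*."""
    m = A.shape[0]
    D = (A.conj().T * y) @ A / m   # Hermitian; expectation ~ I + x x^*
    _, vecs = np.linalg.eigh(D)
    return vecs[:, -1]

rng = np.random.default_rng(2)
n, m = 30, 3000
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
y = np.abs(A @ x) ** 2            # amplitude-only (intensity) measurements
x_hat = spectral_init(A, y)
# Recovery is only defined up to a global phase, so measure |<x_hat, x>|
corr = np.abs(np.vdot(x_hat, x))
```

The phase-transition behavior mentioned in the abstract shows up as `corr` dropping sharply once the number of measurements m falls below a critical multiple of n.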
Subexponential-Time Algorithms for Sparse PCA
We study the computational cost of recovering a unit-norm sparse principal
component x planted in a random matrix, in either the Wigner
or Wishart spiked model (observing either W + λxx⊤ with W drawn
from the Gaussian orthogonal ensemble, or N independent samples from
N(0, I + βxx⊤), respectively). Prior work has shown that
when the signal-to-noise ratio (λ or β√(N/n), respectively)
is a small constant and the fraction of nonzero entries in the planted vector
is ρ = k/n, it is possible to recover x in polynomial time if
ρ ≲ 1/√n. While it is possible to recover x in exponential
time under the weaker condition ρ ≪ 1, it is believed that
polynomial-time recovery is impossible unless ρ ≲ 1/√n. We
investigate the precise amount of time required for recovery in the "possible
but hard" regime 1/√n ≪ ρ ≪ 1 by exploring the power of
subexponential-time algorithms, i.e., algorithms running in time exp(n^δ)
for some constant δ ∈ (0,1). For any 1/√n ≪ ρ ≪ 1, we give a recovery algorithm with runtime roughly exp(ρ²n), demonstrating a smooth tradeoff between sparsity and runtime. Our family
of algorithms interpolates smoothly between two existing algorithms: the
polynomial-time diagonal thresholding algorithm and the exp(ρn)-time
exhaustive search algorithm. Furthermore, by analyzing the low-degree
likelihood ratio, we give rigorous evidence suggesting that the tradeoff
achieved by our algorithms is optimal.
Comment: 44 pages
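The polynomial-time endpoint of this tradeoff, diagonal thresholding, is simple to sketch in the Wishart model: on-support coordinates inflate the diagonal of the sample covariance by βx_i², so one keeps the k largest diagonal entries and diagonalizes the restricted block. A NumPy illustration of that endpoint only (the paper's interpolating family is not reproduced; all names are ours):

```python
import numpy as np

def diagonal_thresholding(samples, k):
    """Diagonal-thresholding sketch for the spiked Wishart model:
    keep the k coordinates with the largest empirical variance, then
    take the leading eigenvector of the restricted sample covariance."""
    N, n = samples.shape
    S = samples.T @ samples / N            # sample covariance
    support = np.argsort(np.diag(S))[-k:]  # inflated diagonal entries
    sub = S[np.ix_(support, support)]
    _, vecs = np.linalg.eigh(sub)
    x_hat = np.zeros(n)
    x_hat[support] = vecs[:, -1]
    return x_hat

rng = np.random.default_rng(3)
n, k, N, beta = 400, 10, 5000, 2.0
x = np.zeros(n)
supp = rng.choice(n, size=k, replace=False)
x[supp] = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)  # k-sparse unit vector
# N samples from N(0, I + beta x x^T), drawn without forming the covariance
samples = rng.standard_normal((N, n)) + np.sqrt(beta) * rng.standard_normal((N, 1)) * x
x_hat = diagonal_thresholding(samples, k)
corr = abs(x_hat @ x)  # recovery up to sign
```

Here ρ = k/n = 0.025 is below 1/√n ≈ 0.05, the regime where the abstract says this polynomial-time method succeeds; for larger ρ the diagonal signal drowns in fluctuations and subexponential search becomes necessary.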