
    Near-optimal bounds for phase synchronization

    The problem of phase synchronization is to estimate the phases (angles) of a complex unit-modulus vector $z$ from noisy pairwise relative measurements $C = zz^* + \sigma W$, where $W$ is a complex-valued Gaussian random matrix. The maximum likelihood estimator (MLE) is a solution to a unit-modulus-constrained quadratic programming problem, which is nonconvex. Existing works have proposed polynomial-time algorithms such as a semidefinite relaxation (SDP) approach or the generalized power method (GPM) to solve it. Numerical experiments suggest both of these methods succeed with high probability for $\sigma$ up to $\tilde{\mathcal{O}}(n^{1/2})$, yet existing analyses only confirm this observation for $\sigma$ up to $\mathcal{O}(n^{1/4})$. In this paper, we bridge the gap by proving the SDP is tight for $\sigma = \mathcal{O}(\sqrt{n/\log n})$, and GPM converges to the global optimum under the same regime. Moreover, we establish a linear convergence rate for GPM, and derive a tighter $\ell_\infty$ bound for the MLE. A novel technique we develop in this paper is to track (theoretically) $n$ closely related sequences of iterates, in addition to the sequence of iterates GPM actually produces. As a by-product, we obtain an $\ell_\infty$ perturbation bound for leading eigenvectors. Our result also confirms intuitions that use techniques from statistical mechanics. Comment: 34 pages, 1 figure
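    As a rough illustration of the generalized power method described above, the sketch below (with illustrative choices of $n$ and $\sigma$, not taken from the paper) initializes from the entrywise-normalized leading eigenvector of $C$, then repeatedly applies $C$ and projects each coordinate back onto the unit circle:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 200, 1.0  # illustrative values, well below the sqrt(n/log n) regime

# Ground-truth phases and noisy pairwise measurements C = z z* + sigma * W
z = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
W = (A + A.conj().T) / np.sqrt(2)  # Hermitian complex Gaussian noise
C = np.outer(z, z.conj()) + sigma * W

# Spectral initialization: entrywise-normalized leading eigenvector of C
x = np.linalg.eigh(C)[1][:, -1]
x = x / np.abs(x)

# GPM iteration: multiply by C, then project entries back to unit modulus
for _ in range(100):
    y = C @ x
    x = y / np.abs(y)

# Correlation with the truth, up to a global phase; close to 1 on success
corr = np.abs(np.vdot(x, z)) / n
```

At this noise level the correlation $|\langle x, z\rangle|/n$ comes out close to 1; the paper's analysis concerns how large $\sigma$ can be before this behavior breaks down.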

    Computational Hardness of Certifying Bounds on Constrained PCA Problems

    Given a random n×n symmetric matrix W drawn from the Gaussian orthogonal ensemble (GOE), we consider the problem of certifying an upper bound on the maximum value of the quadratic form x⊤Wx over all vectors x in a constraint set S⊂Rn. For a certain class of normalized constraint sets S we show that, conditional on certain complexity-theoretic assumptions, there is no polynomial-time algorithm certifying a better upper bound than the largest eigenvalue of W. A notable special case included in our results is the hypercube S = {±1/√n}^n, which corresponds to the problem of certifying bounds on the Hamiltonian of the Sherrington-Kirkpatrick spin glass model from statistical physics. Our proof proceeds in two steps. First, we give a reduction from the detection problem in the negatively-spiked Wishart model to the above certification problem. We then give evidence that this Wishart detection problem is computationally hard below the classical spectral threshold, by showing that no low-degree polynomial can (in expectation) distinguish the spiked and unspiked models. This method for identifying computational thresholds was proposed in a sequence of recent works on the sum-of-squares hierarchy, and is believed to be correct for a large class of problems. Our proof can be seen as constructing a distribution over symmetric matrices that appears computationally indistinguishable from the GOE, yet is supported on matrices whose maximum quadratic form over x∈S is much larger than that of a GOE matrix. ISSN:1868-896
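    To see the gap concretely, one can compare the spectral certificate λ_max(W) (about 2 for a GOE matrix, in the normalization used here) against the value a simple heuristic actually attains on the hypercube S = {±1/√n}^n. The greedy coordinate-flip search below is only an illustrative sketch, not a method from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300  # illustrative dimension

# GOE matrix: symmetric, off-diagonal variance 1/n, so lambda_max -> 2
G = rng.standard_normal((n, n)) / np.sqrt(n)
W = (G + G.T) / np.sqrt(2)

# Spectral certificate: x^T W x <= lambda_max(W) for any unit-norm x
lam_max = np.linalg.eigvalsh(W)[-1]

# Lower bound on the hypercube maximum via greedy single-coordinate flips
s = np.where(rng.standard_normal(n) > 0, 1.0, -1.0)
improved = True
while improved:
    improved = False
    for i in range(n):
        # Flipping s[i] changes s^T W s by -4 * s[i] * sum_{j != i} W[i,j] s[j]
        delta = -4 * s[i] * (W[i] @ s - W[i, i] * s[i])
        if delta > 1e-12:
            s[i] = -s[i]
            improved = True

hypercube_val = s @ W @ s / n  # value of x^T W x at x = s / sqrt(n)
```

The achieved hypercube value sits strictly below λ_max ≈ 2 (the true hypercube maximum is around 1.526 for the SK model), yet the abstract's hardness result says no polynomial-time certifier can close this gap.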

    Spectral Method for Multiplexed Phase Retrieval and Application in Optical Imaging in Complex Media

    We introduce a generalized version of phase retrieval called multiplexed phase retrieval: we want to recover phases from amplitude-only measurements of linear combinations of signals. This corresponds to the case in which multiple incoherent sources are sampled jointly, and one would like to recover their individual contributions. We show that a recent spectral method developed for phase retrieval can be generalized to this setting, and that its performance follows a phase transition behavior. We apply this new technique to light focusing at depth in a complex medium. Experimentally, although we only have access to the sum of the intensities on multiple targets, we are able to separately focus on each one, thus opening potential applications in deep fluorescence imaging and light delivery.
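    The single-source spectral method that this paper generalizes can be sketched as follows: form a weighted covariance of the sensing vectors, with weights given by the (centered) intensity measurements, and take its leading eigenvector. All parameter choices below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 1000  # signal dimension and number of measurements (m >> n)

# Hidden unit-norm signal and Gaussian sensing vectors; only |<a_i, x>|^2 observed
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
y = np.abs(A @ x) ** 2

# Spectral method: leading eigenvector of the intensity-weighted covariance.
# Centering the weights removes the identity part, leaving x x* in expectation.
T = (A.conj().T * (y - y.mean())) @ A / m
x_hat = np.linalg.eigh(T)[1][:, -1]

corr = np.abs(np.vdot(x_hat, x))  # close to 1 up to a global phase
```

In the multiplexed setting of the abstract, the observed intensities are sums over several sources, and the paper shows a similar spectral construction still separates the individual contributions.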

    Subexponential-Time Algorithms for Sparse PCA

    We study the computational cost of recovering a unit-norm sparse principal component $x \in \mathbb{R}^n$ planted in a random matrix, in either the Wigner or Wishart spiked model (observing either $W + \lambda xx^\top$ with $W$ drawn from the Gaussian orthogonal ensemble, or $N$ independent samples from $\mathcal{N}(0, I_n + \beta xx^\top)$, respectively). Prior work has shown that when the signal-to-noise ratio ($\lambda$ or $\beta\sqrt{N/n}$, respectively) is a small constant and the fraction of nonzero entries in the planted vector is $\|x\|_0 / n = \rho$, it is possible to recover $x$ in polynomial time if $\rho \lesssim 1/\sqrt{n}$. While it is possible to recover $x$ in exponential time under the weaker condition $\rho \ll 1$, it is believed that polynomial-time recovery is impossible unless $\rho \lesssim 1/\sqrt{n}$. We investigate the precise amount of time required for recovery in the "possible but hard" regime $1/\sqrt{n} \ll \rho \ll 1$ by exploring the power of subexponential-time algorithms, i.e., algorithms running in time $\exp(n^\delta)$ for some constant $\delta \in (0,1)$. For any $1/\sqrt{n} \ll \rho \ll 1$, we give a recovery algorithm with runtime roughly $\exp(\rho^2 n)$, demonstrating a smooth tradeoff between sparsity and runtime. Our family of algorithms interpolates smoothly between two existing algorithms: the polynomial-time diagonal thresholding algorithm and the $\exp(\rho n)$-time exhaustive search algorithm. Furthermore, by analyzing the low-degree likelihood ratio, we give rigorous evidence suggesting that the tradeoff achieved by our algorithms is optimal. Comment: 44 pages
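    The polynomial-time endpoint of this tradeoff, diagonal thresholding, is easy to sketch in the Wishart model: support coordinates of $x$ inflate the corresponding diagonal entries of the sample covariance from $1$ to $1 + \beta/k$, so one keeps the $k$ largest diagonal entries and diagonalizes the restricted block. Parameter values below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n, N, k, beta = 500, 2000, 10, 2.0  # dimension, samples, sparsity, SNR

# Planted k-sparse unit vector and Wishart-model samples N(0, I + beta*x*x^T)
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = 1 / np.sqrt(k)
cov = np.eye(n) + beta * np.outer(x, x)
samples = rng.multivariate_normal(np.zeros(n), cov, size=N)

# Diagonal thresholding: support coordinates have variance 1 + beta/k
# instead of 1, so keep the k largest diagonal entries of the covariance
S = samples.T @ samples / N
picked = np.argsort(np.diag(S))[-k:]

# Estimate x by the leading eigenvector of the k x k restricted submatrix
v = np.linalg.eigh(S[np.ix_(picked, picked)])[1][:, -1]
x_hat = np.zeros(n)
x_hat[picked] = v

corr = abs(x_hat @ x)  # close to 1 when the support is recovered
```

Here $\rho = k/n = 0.02 \lesssim 1/\sqrt{n}$, so this is inside the easy regime; the paper's $\exp(\rho^2 n)$-time algorithms interpolate between this procedure and exhaustive search over supports as $\rho$ grows.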