102 research outputs found

    Optimal Rates of Convergence for Noisy Sparse Phase Retrieval via Thresholded Wirtinger Flow

    This paper considers the noisy sparse phase retrieval problem: recovering a sparse signal $x \in \mathbb{R}^p$ from noisy quadratic measurements $y_j = (a_j' x)^2 + \epsilon_j$, $j = 1, \ldots, m$, with independent sub-exponential noise $\epsilon_j$. The goals are to understand the effect of the sparsity of $x$ on the estimation precision and to construct a computationally feasible estimator to achieve the optimal rates. Inspired by the Wirtinger Flow [12] proposed for noiseless and non-sparse phase retrieval, a novel thresholded gradient descent algorithm is proposed and it is shown to adaptively achieve the minimax optimal rates of convergence over a wide range of sparsity levels when the $a_j$'s are independent standard Gaussian random vectors, provided that the sample size is sufficiently large compared to the sparsity of $x$.
    Comment: 28 pages, 4 figures
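
    As a hedged illustration of the kind of update this abstract describes, the following NumPy sketch alternates a Wirtinger-flow gradient step with entrywise soft-thresholding. The fixed threshold tau, the step size, and the initialization are illustrative assumptions; the paper's algorithm chooses its thresholds adaptively and initializes with a spectral method.

        import numpy as np

        def soft_threshold(z, tau):
            # Entrywise soft-thresholding keeps the iterates sparse.
            return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

        def thresholded_wf(A, y, x0, step=0.1, tau=0.05, n_iter=500):
            # Thresholded gradient descent for y_j ~ (a_j' x)^2 + eps_j.
            # A  : (m, p) array whose rows are the sensing vectors a_j
            # y  : (m,) noisy quadratic measurements
            # x0 : (p,) initialization (stand-in for the spectral initializer);
            #      the default step size assumes the signal is roughly unit-norm.
            m = A.shape[0]
            z = x0.astype(float).copy()
            for _ in range(n_iter):
                Az = A @ z                                # a_j' z for all j
                grad = A.T @ ((Az ** 2 - y) * Az) / m     # gradient of the quartic loss
                z = soft_threshold(z - step * grad, tau)  # descend, then threshold
            return z                                      # recovers x up to a global sign

    Since the measurements are invariant to the sign of $x$, any error metric for such a sketch should be min(||z - x||, ||z + x||).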

    DOLPHIn - Dictionary Learning for Phase Retrieval

    We propose a new algorithm to learn a dictionary for reconstructing and sparsely encoding signals from measurements without phase. Specifically, we consider the task of estimating a two-dimensional image from squared-magnitude measurements of a complex-valued linear transformation of the original image. Several recent phase retrieval algorithms exploit underlying sparsity of the unknown signal in order to improve recovery performance. In this work, we consider such a sparse signal prior in the context of phase retrieval, when the sparsifying dictionary is not known in advance. Our algorithm jointly reconstructs the unknown signal - possibly corrupted by noise - and learns a dictionary such that each patch of the estimated image can be sparsely represented. Numerical experiments demonstrate that our approach can obtain significantly better reconstructions for phase retrieval problems with noise than methods that cannot exploit such "hidden" sparsity. Moreover, on the theoretical side, we provide a convergence result for our method.
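
    To make the alternation concrete, here is a deliberately reduced, single-patch sketch of a DOLPHIn-style loop: sparse-code the current estimate, take a gradient step on the phase-retrieval misfit pulled toward the sparse approximation, then update the dictionary. All names, step sizes, and the one-patch simplification are assumptions made for illustration; the actual DOLPHIn updates operate patch-wise on the image and are what the convergence result above covers.

        import numpy as np

        def dolphin_like(F, y, z0, D0, n_outer=50, step=0.2, mu=0.5, tau=0.1):
            # F  : (m, n) complex linear operator; measurements are y = |F x|^2
            # z0 : (n,) real initial image estimate (flattened)
            # D0 : (n, K) initial dictionary with unit-norm columns
            z, D = z0.astype(float).copy(), D0.copy()
            m = F.shape[0]
            for _ in range(n_outer):
                # (1) sparse code: least squares followed by soft-thresholding
                a = np.linalg.lstsq(D, z, rcond=None)[0]
                a = np.sign(a) * np.maximum(np.abs(a) - tau, 0.0)
                # (2) signal update: descend the squared-magnitude misfit while
                #     pulling z toward its sparse approximation D @ a
                Fz = F @ z
                grad = (F.conj().T @ ((np.abs(Fz) ** 2 - y) * Fz)).real / m
                z -= step * (grad + mu * (z - D @ a))
                # (3) dictionary update: one gradient step on ||z - D a||^2,
                #     then renormalize the columns
                D += 0.1 * np.outer(z - D @ a, a)
                D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
            return z, D

    The point of the sketch is the three-block alternation between sparse code, signal, and dictionary, not the specific update rules.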

    Solving Quadratic Systems with Full-Rank Matrices Using Sparse or Generative Priors

    The problem of recovering a signal $\boldsymbol{x} \in \mathbb{R}^n$ from a quadratic system $\{y_i = \boldsymbol{x}^\top \boldsymbol{A}_i \boldsymbol{x},\ i = 1, \ldots, m\}$ with full-rank matrices $\boldsymbol{A}_i$ frequently arises in applications such as unassigned distance geometry and sub-wavelength imaging. With i.i.d. standard Gaussian matrices $\boldsymbol{A}_i$, this paper addresses the high-dimensional case where $m \ll n$ by incorporating prior knowledge of $\boldsymbol{x}$. First, we consider a $k$-sparse $\boldsymbol{x}$ and introduce the thresholded Wirtinger flow (TWF) algorithm that does not require the sparsity level $k$. TWF comprises two steps: the spectral initialization that identifies a point sufficiently close to $\boldsymbol{x}$ (up to a sign flip) when $m = O(k^2 \log n)$, and the thresholded gradient descent (with a good initialization) that produces a sequence linearly converging to $\boldsymbol{x}$ with $m = O(k \log n)$ measurements. Second, we explore the generative prior, assuming that $\boldsymbol{x}$ lies in the range of an $L$-Lipschitz continuous generative model with $k$-dimensional inputs in an $\ell_2$-ball of radius $r$. We develop the projected gradient descent (PGD) algorithm that also comprises two steps: the projected power method that provides an initial vector with $O\big(\sqrt{\frac{k \log L}{m}}\big)$ $\ell_2$-error given $m = O(k \log(Lnr))$ measurements, and the projected gradient descent that refines the $\ell_2$-error to $O(\delta)$ at a geometric rate when $m = O(k \log \frac{Lrn}{\delta^2})$. Experimental results corroborate our theoretical findings and show that: (i) our approach for the sparse case notably outperforms the existing provable algorithm sparse power factorization; (ii) leveraging the generative prior allows for precise image recovery in the MNIST dataset from a small number of quadratic measurements.
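
    The sparse branch of this pipeline admits a compact NumPy sketch: a spectral initialization built from $\frac{1}{m} \sum_i y_i (\boldsymbol{A}_i + \boldsymbol{A}_i^\top)/2$, whose expectation is $\boldsymbol{x}\boldsymbol{x}^\top$ for i.i.d. standard Gaussian $\boldsymbol{A}_i$, followed by thresholded gradient descent. The fixed soft threshold below is an assumption made for brevity; the paper's TWF selects its thresholds without knowing the sparsity level $k$.

        import numpy as np

        def twf_quadratic(As, y, step=0.05, tau=0.02, n_iter=300):
            # As : (m, n, n) stack of sensing matrices A_i
            # y  : (m,) measurements y_i = x^T A_i x
            m, n, _ = As.shape
            sym = (As + As.transpose(0, 2, 1)) / 2           # symmetrized A_i
            # Spectral initialization: top eigenpair of (1/m) sum_i y_i sym_i
            Y = np.einsum('i,ijk->jk', y, sym) / m
            w, V = np.linalg.eigh(Y)
            z = V[:, -1] * np.sqrt(max(w[-1], 0.0))
            for _ in range(n_iter):
                r = np.einsum('j,ijk,k->i', z, As, z) - y    # residuals z^T A_i z - y_i
                grad = 2 * np.einsum('i,ijk,k->j', r, sym, z) / m  # grad of (1/2m) sum r_i^2
                z -= step * grad
                z = np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)  # thresholding step
            return z                                         # recovers x up to sign

    The generative-prior branch has the same two-stage structure, with the thresholding replaced by a projection onto the range of the generative model.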