This paper considers the noisy sparse phase retrieval problem: recovering a
sparse signal x ∈ R^p from noisy quadratic measurements y_j = (a_j^T x)^2 + ε_j, j = 1, …, m, with independent sub-exponential
noise ε_j. The goals are to understand the effect of the sparsity of
x on the estimation precision and to construct a computationally feasible
estimator that achieves the optimal rates. Inspired by the Wirtinger Flow [12]
proposed for noiseless and non-sparse phase retrieval, a novel thresholded
gradient descent algorithm is proposed and shown to adaptively achieve
the minimax optimal rates of convergence over a wide range of sparsity levels
when the a_j's are independent standard Gaussian random vectors, provided
that the sample size is sufficiently large compared to the sparsity of x.
Comment: 28 pages, 4 figures
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Publication date: 03/08/2016
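The thresholded gradient descent iteration described above can be sketched as follows. This is an illustrative implementation only: the step size, the threshold level, and the near-truth initialization are hypothetical choices, not the paper's tuned schedules, and the constant factor of the gradient is absorbed into the step size.

```python
import numpy as np

def thresholded_gd_step(x, A, y, step, thresh):
    """One thresholded gradient descent iteration for y_j = (a_j^T x)^2 + eps_j.

    Uses the gradient of the least-squares loss
    f(x) = (1/2m) * sum_j ((a_j^T x)^2 - y_j)^2, up to a constant factor.
    """
    m = len(y)
    Ax = A @ x                              # a_j^T x for every j
    grad = A.T @ ((Ax**2 - y) * Ax) / m     # (1/m) sum_j ((a_j'x)^2 - y_j)(a_j'x) a_j
    x_new = x - step * grad                 # plain gradient step
    x_new[np.abs(x_new) < thresh] = 0.0     # hard-threshold small coordinates
    return x_new

# toy run: 3-sparse signal in R^50, 200 noiseless Gaussian measurements
rng = np.random.default_rng(0)
p, m = 50, 200
x_true = np.zeros(p)
x_true[[3, 17, 40]] = [1.0, -2.0, 1.5]
A = rng.standard_normal((m, p))
y = (A @ x_true) ** 2
x = x_true + 0.1 * rng.standard_normal(p)   # assume a good initialization is available
for _ in range(300):
    x = thresholded_gd_step(x, A, y, step=0.01, thresh=0.05)
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
```

The thresholding step is what exploits sparsity: coordinates whose magnitude stays below the threshold are zeroed out, so the iterates remain sparse while the gradient steps drive down the residual on the support. The `min` over the two signs reflects that x is only identifiable up to a global sign flip.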
We propose a new algorithm to learn a dictionary for reconstructing and
sparsely encoding signals from measurements without phase. Specifically, we
consider the task of estimating a two-dimensional image from squared-magnitude
measurements of a complex-valued linear transformation of the original image.
Several recent phase retrieval algorithms exploit underlying sparsity of the
unknown signal in order to improve recovery performance. In this work, we
consider such a sparse signal prior in the context of phase retrieval, when the
sparsifying dictionary is not known in advance. Our algorithm jointly
reconstructs the unknown signal - possibly corrupted by noise - and learns a
dictionary such that each patch of the estimated image can be sparsely
represented. Numerical experiments demonstrate that our approach can obtain
significantly better reconstructions for phase retrieval problems with noise
than methods that cannot exploit such "hidden" sparsity. Moreover, on the
theoretical side, we provide a convergence result for our method.
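One half of such an alternating scheme — sparse-coding the patches of a current real-valued image estimate and then updating the dictionary — can be sketched as below. This is a generic ISTA/least-squares sketch on synthetic patches under assumed parameters, not the paper's actual algorithm, and it omits the phase retrieval half of the alternation.

```python
import numpy as np

rng = np.random.default_rng(1)

def ista_code(D, X, lam, iters=50):
    """Sparse-code the columns of X in dictionary D via ISTA (soft thresholding)."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    C = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(iters):
        C = C - D.T @ (D @ C - X) / L        # gradient step on 0.5 * ||X - DC||^2
        C = np.sign(C) * np.maximum(np.abs(C) - lam / L, 0.0)   # soft threshold
    return C

def dict_update(X, C):
    """Least-squares dictionary update with columns renormalized to unit norm."""
    D = X @ np.linalg.pinv(C)
    return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)

# toy demo: learn a dictionary for synthetic 8x8 "patches" (64-dim columns)
d, n_atoms, n_patches = 64, 32, 500
D_true = rng.standard_normal((d, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)
C_true = rng.standard_normal((n_atoms, n_patches)) * (rng.random((n_atoms, n_patches)) < 0.1)
X = D_true @ C_true                          # patches with sparse codes

D = rng.standard_normal((d, n_atoms))
D /= np.linalg.norm(D, axis=0)
for _ in range(10):                          # alternate coding and dictionary update
    C = ista_code(D, X, lam=0.1)
    D = dict_update(X, C)
final_err = np.linalg.norm(X - D @ C) / np.linalg.norm(X)
```

In the full method, these two updates would alternate with an image-estimation step that fits the squared-magnitude measurements, so the "hidden" patch sparsity regularizes the phase retrieval problem.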
The problem of recovering a signal x∈Rn from a
quadratic system $\{y_i=\boldsymbol{x}^\top\boldsymbol{A}_i\boldsymbol{x},\
i=1,\ldots,m\}$ with full-rank matrices $\boldsymbol{A}_i$ frequently arises in
applications such as unassigned distance geometry and sub-wavelength imaging.
With i.i.d. standard Gaussian matrices $\boldsymbol{A}_i$, this paper addresses
the high-dimensional case where $m\ll n$ by incorporating prior knowledge of
$\boldsymbol{x}$. First, we consider a $k$-sparse $\boldsymbol{x}$ and introduce
the thresholded Wirtinger flow (TWF) algorithm that does not require the
sparsity level $k$. TWF comprises two steps: the spectral initialization that
identifies a point sufficiently close to $\boldsymbol{x}$ (up to a sign flip)
when $m=O(k^2\log n)$, and the thresholded gradient descent (with a good
initialization) that produces a sequence linearly converging to
$\boldsymbol{x}$ with $m=O(k\log n)$ measurements. Second, we explore the
generative prior, assuming that $\boldsymbol{x}$ lies in the range of an
$L$-Lipschitz continuous generative model with $k$-dimensional inputs in an
$\ell_2$-ball of radius $r$. We develop the projected gradient descent (PGD)
algorithm that also comprises two steps: the projected power method that
provides an initial vector with $O\big(\sqrt{\frac{k\log L}{m}}\big)$
$\ell_2$-error given $m=O(k\log(Lnr))$ measurements, and the projected gradient
descent that refines the $\ell_2$-error to $O(\delta)$ at a geometric rate when
$m=O(k\log\frac{Lrn}{\delta^2})$. Experimental results corroborate our
theoretical findings and show that: (i) our approach for the sparse case
notably outperforms the existing provable algorithm sparse power factorization;
(ii) leveraging the generative prior allows for precise image recovery in the
MNIST dataset from a small number of quadratic measurements.
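The spectral initialization for this quadratic system rests on the fact that, for i.i.d. standard Gaussian $\boldsymbol{A}_i$, $\mathbb{E}[y_i\boldsymbol{A}_i]=\boldsymbol{x}\boldsymbol{x}^\top$, so the top eigenvector of the symmetrized empirical average aligns with $\boldsymbol{x}$ up to a sign flip. A minimal dense-signal sketch (omitting the sparsity-adapted thresholding TWF uses; the problem sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 20, 5000
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)            # unit-norm ground truth

A = rng.standard_normal((m, n, n))          # i.i.d. standard Gaussian matrices
y = np.einsum('i,mij,j->m', x_true, A, x_true)   # y_i = x^T A_i x (noiseless)

# spectral initialization: for Gaussian A_i, E[y_i A_i] = x x^T
Y = np.einsum('m,mij->ij', y, A) / m
Y = (Y + Y.T) / 2                           # symmetrize before eigendecomposition
eigvals, eigvecs = np.linalg.eigh(Y)
x0 = eigvecs[:, -1]                         # top eigenvector: x up to a sign flip

err = min(np.linalg.norm(x0 - x_true), np.linalg.norm(x0 + x_true))
```

For a $k$-sparse signal, the same idea is applied after restricting to coordinates with large diagonal entries of the empirical matrix, which is what brings the sample complexity down from $O(n)$ to $O(k^2\log n)$ as stated in the abstract.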