The Local Landscape of Phase Retrieval Under Limited Samples

Abstract

In this paper, we provide a fine-grained analysis of the local landscape of phase retrieval in the limited-sample regime. Our aim is to determine the minimal sample size necessary to guarantee a benign local landscape around the global minima in high dimensions. Let $n$ and $d$ denote the sample size and the input dimension, respectively. We first study local convexity and establish that when $n = o(d\log d)$, for almost every fixed point in the local ball, the Hessian matrix has negative eigenvalues as long as $d$ is sufficiently large; consequently, the local landscape is highly non-convex. We then consider one-point strong convexity and show that as long as $n = \omega(d)$, with high probability the landscape is one-point strongly convex in the local annulus $\{w \in \mathbb{R}^d : o_d(1) \leqslant \|w - w^*\| \leqslant c\}$, where $w^*$ is the ground truth and $c$ is an absolute constant. This implies that gradient descent initialized from any point in this annulus converges to an $o_d(1)$-loss solution exponentially fast. Furthermore, we show that when $n = o(d\log d)$, there is a radius of $\widetilde\Theta(\sqrt{1/d})$ such that one-point convexity breaks in the corresponding smaller local ball. This rules out establishing convergence of gradient descent to the exact $w^*$ under limited samples by relying solely on one-point convexity.

Comment: 41 pages
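The local convergence behavior described above can be illustrated numerically. The sketch below (not the paper's experiment; the dimensions, step size, and initialization radius are illustrative assumptions) runs gradient descent on the standard intensity loss for phase retrieval with Gaussian measurements, starting from a point at constant distance from the ground truth, in a well-sampled regime with $n \gg d$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 2000                      # illustrative sizes with n >> d (the n = omega(d) regime)
A = rng.standard_normal((n, d))      # Gaussian sensing vectors a_i as rows
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)     # ground truth on the unit sphere
y = (A @ w_star) ** 2                # phaseless measurements y_i = (a_i^T w*)^2

def loss(w):
    # intensity loss: (1/4n) * sum_i ((a_i^T w)^2 - y_i)^2
    return np.mean(((A @ w) ** 2 - y) ** 2) / 4

def grad(w):
    z = A @ w
    return A.T @ ((z ** 2 - y) * z) / n

# initialize inside the local annulus: ||w0 - w*|| = 0.3, an absolute constant
delta = rng.standard_normal(d)
w = w_star + 0.3 * delta / np.linalg.norm(delta)

eta = 0.05                           # illustrative step size
for _ in range(2000):
    w -= eta * grad(w)

# measure distance up to the global sign ambiguity (w* and -w* are both minima)
dist = min(np.linalg.norm(w - w_star), np.linalg.norm(w + w_star))
print(f"final distance to ground truth: {dist:.2e}, loss: {loss(w):.2e}")
```

In this noiseless, well-sampled setting the iterates contract linearly toward the ground truth, consistent with the exponentially fast convergence the one-point strong convexity result predicts; in the limited-sample regime $n = o(d\log d)$, the last result above says this argument cannot certify convergence all the way to $w^*$.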
