
    Phase Retrieval using Lipschitz Continuous Maps

    In this note we prove that reconstruction from magnitudes of frame coefficients (the so-called "phase retrieval problem") can be performed using Lipschitz continuous maps. Specifically, we show that when the nonlinear analysis map $\alpha:\mathcal{H}\rightarrow\mathbb{R}^m$ is injective, with $(\alpha(x))_k=|\langle x,f_k\rangle|^2$, where $\{f_1,\ldots,f_m\}$ is a frame for the Hilbert space $\mathcal{H}$, then there exists a left inverse map $\omega:\mathbb{R}^m\rightarrow\mathcal{H}$ that is Lipschitz continuous. Additionally, we obtain the Lipschitz constant of this inverse map in terms of the lower Lipschitz constant of $\alpha$. Surprisingly, the increase in Lipschitz constant is independent of the space dimension or the frame redundancy. Comment: 12 pages, 1 figure
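
    As a minimal numerical sketch (an illustration, not the paper's construction; the dimension, frame size, and random frame below are assumptions), one can build a random complex frame, apply the squared-magnitude analysis map, and confirm that it is blind to a global phase:

```python
# Illustrative sketch: a random complex frame and the squared-magnitude
# analysis map alpha; alpha cannot distinguish x from e^{i phi} x.
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 12                            # dim(H) and frame size (assumed values)
F = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))   # rows are f_1, ..., f_m

def alpha(x):
    """(alpha(x))_k = |<x, f_k>|^2 for the frame rows f_k."""
    return np.abs(F.conj() @ x) ** 2

x = rng.normal(size=n) + 1j * rng.normal(size=n)
print(np.allclose(alpha(x), alpha(np.exp(0.7j) * x)))   # True: the global phase is lost
```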

    On Lipschitz Analysis and Lipschitz Synthesis for the Phase Retrieval Problem

    In this paper we prove two results regarding reconstruction from magnitudes of frame coefficients (the so-called "phase retrieval problem"). First we show that phase retrievability as an algebraic property implies that the nonlinear maps are bi-Lipschitz with respect to appropriate metrics on the quotient space. Second we prove that reconstruction can be performed using Lipschitz continuous maps. Specifically, we show that when the nonlinear analysis maps $\alpha,\beta:\hat{H}\rightarrow\mathbb{R}^m$ are injective, with $\alpha(x)=(|\langle x,f_k\rangle|)_{k=1}^m$ and $\beta(x)=(|\langle x,f_k\rangle|^2)_{k=1}^m$, where $\{f_1,\ldots,f_m\}$ is a frame for a Hilbert space $H$ and $\hat{H}=H/T^1$, then $\alpha$ is bi-Lipschitz with respect to the class of "natural metrics" $D_p(x,y)=\min_{\varphi}\|x-e^{i\varphi}y\|_p$, whereas $\beta$ is bi-Lipschitz with respect to the class of matrix-norm induced metrics $d_p(x,y)=\|xx^*-yy^*\|_p$. Furthermore, there exist left inverse maps $\omega,\psi:\mathbb{R}^m\rightarrow\hat{H}$ of $\alpha$ and $\beta$, respectively, that are Lipschitz continuous with respect to the appropriate metric. Additionally, we obtain the Lipschitz constants of these inverse maps in terms of the lower Lipschitz constants of $\alpha$ and $\beta$. Surprisingly, the increase in Lipschitz constant is a relatively small factor, independent of the space dimension or the frame redundancy. Comment: 26 pages, 1 figure; presented in part at ICHAA 2015 Conference, N
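
    Both metric families are easy to compute in a finite-dimensional sketch; note that the closed-form optimal phase used below is exact only for $p=2$, and treating it as a surrogate for other $p$ is an assumption of this snippet:

```python
# Sketch of the "natural" metric D_p and the matrix-norm metric d_p on C^n.
import numpy as np

def D_p(x, y, p=2):
    """min_phi || x - e^{i phi} y ||_p; the phase below is optimal for p = 2."""
    phi = np.angle(np.vdot(y, x))       # arg <x, y>, the p = 2 minimizer
    return np.linalg.norm(x - np.exp(1j * phi) * y, ord=p)

def d_p(x, y, p=2):
    """|| x x* - y y* ||_p as a Schatten-p norm of the rank-<=2 difference."""
    M = np.outer(x, x.conj()) - np.outer(y, y.conj())
    return np.linalg.norm(np.linalg.svd(M, compute_uv=False), ord=p)

rng = np.random.default_rng(1)
x = rng.normal(size=5) + 1j * rng.normal(size=5)
print(D_p(x, np.exp(0.3j) * x), d_p(x, np.exp(0.3j) * x))   # both ~ 0 on the quotient
```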

    Frames and Phaseless Reconstruction

    Frame design for phaseless reconstruction is now part of the broader problem of nonlinear reconstruction and is an emerging topic in harmonic analysis. The problem of phaseless reconstruction can be stated simply as follows: given the magnitudes of the coefficients generated by a linear redundant system (frame), we want to reconstruct the unknown input. This problem first occurred in X-ray crystallography in the early 20th century. The same nonlinear reconstruction problem shows up in speech processing, particularly in speech recognition. In this lecture we shall cover existing analysis results as well as stability bounds for signal recovery, including: necessary and sufficient conditions for injectivity, Lipschitz bounds of the nonlinear map and its left inverses, stochastic performance bounds, and algorithms for signal recovery. Comment: Lecture Notes for the 2015 AMS Short Course "Finite Frame Theory: A Complete Introduction to Overcompleteness", Jan. 2015, San Antonio. To appear in Proceedings of Symposia in Applied Mathematics
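
    To make the algorithmic side tangible, here is a bare-bones recovery sketch in the spirit of gradient descent on the squared-magnitude loss (real case; the step size, iteration count, and random initialization are illustrative assumptions rather than the lecture's prescriptions):

```python
# Toy phaseless recovery: gradient descent on f(x) = (1/m) sum ((a_k.x)^2 - b_k)^2.
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 64
A = rng.normal(size=(m, n))             # real frame vectors as rows
x_true = rng.normal(size=n); x_true /= np.linalg.norm(x_true)
b = (A @ x_true) ** 2                   # phaseless (squared-magnitude) measurements

x = 0.1 * rng.normal(size=n)            # naive init (spectral init is the usual choice)
for _ in range(1500):
    Ax = A @ x
    x -= 0.02 * (4.0 / m) * A.T @ ((Ax ** 2 - b) * Ax)   # exact gradient of f

print(min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true)))  # small, up to sign
```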

    Representation and Coding of Signal Geometry

    Approaches to signal representation and coding theory have traditionally focused on how best to represent signals using parsimonious representations that incur the lowest possible distortion. Classical examples include linear and non-linear approximations, sparse representations, and rate-distortion theory. Very often, however, the goal of processing is to extract specific information from the signal, and the distortion should be measured on the extracted information. The corresponding representation should, therefore, represent that information as parsimoniously as possible, without necessarily accurately representing the signal itself. In this paper, we examine the problem of encoding signals such that sufficient information is preserved about their pairwise distances and their inner products. To that end, we consider randomized embeddings as an encoding mechanism and provide a framework to analyze their performance. We also demonstrate that it is possible to design the embedding such that it represents different ranges of distances with different precision. These embeddings also allow the computation of kernel inner products with control on their inner-product-preserving properties. Our results provide a broad framework to design and analyze embeddings, and generalize existing results in this area, such as random Fourier kernels and universal embeddings.
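
    The following sketch shows the two mechanisms at their simplest (the sizes and the one-bit dithered quantizer are assumptions of this snippet, not the paper's exact constructions): a Gaussian projection approximately preserves pairwise distances, while a dithered, periodically quantized version distinguishes nearby points but saturates for distant ones, in line with the range-dependent precision mentioned above.

```python
# Randomized embedding sketch: JL-style projection plus a dithered 1-bit quantizer.
import numpy as np

rng = np.random.default_rng(3)
n, m = 256, 64
A = rng.normal(size=(m, n)) / np.sqrt(m)     # distance-preserving random projection
dither = rng.uniform(0.0, 1.0, size=m)       # dither for the periodic quantizer

def embed(v):
    return np.floor(A @ v + dither) % 2      # universal-embedding-style 1-bit code

x = rng.normal(size=n)
y_near = x + 0.05 * rng.normal(size=n)       # small perturbation of x
y_far = rng.normal(size=n)                   # independent, distant point
print(np.linalg.norm(x - y_far), np.linalg.norm(A @ x - A @ y_far))  # comparable
print(np.mean(embed(x) != embed(y_near)))    # small Hamming distance
print(np.mean(embed(x) != embed(y_far)))     # saturates near 1/2 for distant points
```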

    Stochastic model-based minimization of weakly convex functions

    We consider a family of algorithms that successively sample and minimize simple stochastic models of the objective function. We show that under reasonable conditions on approximation quality and regularity of the models, any such algorithm drives a natural stationarity measure to zero at the rate $O(k^{-1/4})$. As a consequence, we obtain the first complexity guarantees for the stochastic proximal point, proximal subgradient, and regularized Gauss-Newton methods for minimizing compositions of convex functions with smooth maps. The guiding principle underlying the complexity guarantees is that all algorithms under consideration can be interpreted as approximate descent methods on an implicit smoothing of the problem, given by the Moreau envelope. Specializing to classical circumstances, we obtain the long-sought convergence rate of the stochastic projected gradient method, without batching, for minimizing a smooth function on a closed convex set. Comment: 33 pages, 4 figures
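
    For intuition, here is one member of the family, the stochastic proximal point method, on an assumed least-squares toy problem whose sampled (rank-one) model admits a closed-form proximal step:

```python
# Stochastic proximal point sketch:
# x_{k+1} = argmin_z (a.z - b)^2 + ||z - x||^2 / (2 lam).
import numpy as np

rng = np.random.default_rng(4)
n = 20
x_true = rng.normal(size=n)
x = np.zeros(n)

for k in range(1, 5001):
    a = rng.normal(size=n)
    b = a @ x_true                       # one noiseless sample (toy assumption)
    lam = 1.0 / np.sqrt(k)               # O(1/sqrt(k)) proximal parameter
    r = a @ x - b
    # rank-one subproblem solved exactly: z = x - 2 lam (a.z - b) a
    x = x - (2 * lam / (1 + 2 * lam * (a @ a))) * r * a

print(np.linalg.norm(x - x_true))        # near zero
```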

    Nonlinear frames and sparse reconstructions in Banach spaces

    In the first part of this paper, we consider a nonlinear extension of frame theory by introducing bi-Lipschitz maps $F$ between Banach spaces. Our linear model of bi-Lipschitz maps is the analysis operator associated with Hilbert frames, $p$-frames, Banach frames, g-frames and fusion frames. In the general Banach space setting, a stable algorithm to reconstruct a signal $x$ from its noisy measurement $F(x)+\epsilon$ may not exist. In this paper, we establish exponential convergence of two iterative reconstruction algorithms: when $F$ is not too far from some bounded below linear operator with bounded pseudo-inverse, and when $F$ is a well-localized map between two Banach spaces with dense Hilbert subspaces. The crucial step in proving the latter conclusion is a novel fixed point theorem for a well-localized map on a Banach space. In the second part of this paper, we consider stable reconstruction of sparse signals in a union $\mathbf{A}$ of closed linear subspaces of a Hilbert space $\mathbf{H}$ from their nonlinear measurements. We create an optimization framework called a sparse approximation triple $(\mathbf{A},\mathbf{M},\mathbf{H})$, and show that the minimizer $x^*=\mathrm{argmin}_{\hat{x}\in\mathbf{M}\ \mathrm{with}\ \|F(\hat{x})-F(x^0)\|\le\epsilon}\|\hat{x}\|_{\mathbf{M}}$ provides a suboptimal approximation to the original sparse signal $x^0\in\mathbf{A}$ when the measurement map $F$ has the sparse Riesz property and the almost linear property on $\mathbf{A}$. These two new properties are also discussed in this paper for the case where $F$ is not far from a linear measurement operator $T$ having the restricted isometry property.
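
    The flavor of the first reconstruction regime can be conveyed by a toy sketch (the perturbation, problem sizes, and iteration count are assumptions): when $F$ is a small Lipschitz perturbation of a linear $T$ with bounded pseudo-inverse, a Picard iteration driven by the pseudo-inverse contracts to the signal.

```python
# Fixed-point reconstruction sketch for a nonlinear map F close to a linear T.
import numpy as np

rng = np.random.default_rng(5)
n, m = 10, 30
T = rng.normal(size=(m, n))
T_pinv = np.linalg.pinv(T)

def F(x):
    return T @ x + 0.05 * np.tanh(T @ x)    # small Lipschitz perturbation of T

x_true = rng.normal(size=n)
y = F(x_true)

x = np.zeros(n)
for _ in range(50):
    x = x + T_pinv @ (y - F(x))             # contracts since 0.05 * cond(T) < 1 here

print(np.linalg.norm(x - x_true))           # near zero
```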

    The proximal point method revisited

    In this short survey, I revisit the role of the proximal point method in large-scale optimization. I focus on three recent examples: a proximally guided subgradient method for weakly convex stochastic approximation, the prox-linear algorithm for minimizing compositions of convex functions and smooth maps, and Catalyst generic acceleration for regularized empirical risk minimization. Comment: 11 pages, submitted to SIAG/OPT Views and News
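
    As a reminder of the basic object the survey revolves around, here is the proximal point step itself on the assumed toy choice $f=\|\cdot\|_1$, whose proximal map is soft-thresholding:

```python
# Proximal point iteration x_{k+1} = argmin_z f(z) + ||z - x_k||^2 / (2 lam),
# for the toy choice f = ||.||_1.
import numpy as np

def soft_threshold(v, t):
    """Closed-form prox of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.array([3.0, -0.4, 1.5, -2.2])
lam = 0.5
for k in range(5):
    x = soft_threshold(x, lam)       # each step shrinks x toward the minimizer 0
    print(k, x)
```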

    Quasi-Linear Compressed Sensing

    Inspired by significant real-life applications, in particular sparse phase retrieval and sparse pulsation frequency detection in asteroseismology, we investigate a general framework for compressed sensing in which the measurements are quasi-linear. We formulate natural generalizations of the well-known Restricted Isometry Property (RIP) to nonlinear measurements, which allow us to prove both unique identifiability of sparse signals and the convergence of recovery algorithms that compute them efficiently. We show that for certain randomized quasi-linear measurements, including Lipschitz perturbations of classical RIP matrices and phase retrieval from random projections, the proposed restricted isometry properties hold with high probability. We analyze a generalized Orthogonal Least Squares (OLS) under the assumption that the magnitudes of the signal entries to be recovered decay fast. Greed is good again, as we show that this algorithm performs efficiently in phase retrieval and asteroseismology. For situations where the decay assumption on the signal does not necessarily hold, we propose two alternative algorithms, which are natural generalizations of the well-known iterative hard and soft thresholding. While these algorithms are rarely successful for the mentioned applications, we show their strong recovery guarantees for quasi-linear measurements which are Lipschitz perturbations of RIP matrices.
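
    A minimal sketch of the hard-thresholding branch, with an assumed toy quasi-linear map (a small Lipschitz perturbation of a Gaussian RIP-style matrix), might look as follows; all parameters are illustrative rather than the paper's:

```python
# Iterative hard thresholding adapted to a quasi-linear measurement map.
import numpy as np

rng = np.random.default_rng(6)
n, m, s = 100, 64, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)

def F(x):
    return A @ x + 0.01 * np.sin(A @ x)     # quasi-linear: Lipschitz perturbation of A

def hard_threshold(v, s):
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]       # indices of the s largest magnitudes
    out[keep] = v[keep]
    return out

x_true = np.zeros(n); x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = F(x_true)

x = np.zeros(n)
for _ in range(100):
    x = hard_threshold(x + A.T @ (y - F(x)), s)   # gradient step, then s-sparse projection

print(np.linalg.norm(x - x_true))           # small residual error
```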

    Stochastic Methods for Composite and Weakly Convex Optimization Problems

    We consider minimization of stochastic functionals that are compositions of a (potentially) non-smooth convex function $h$ and a smooth function $c$, and, more generally, stochastic weakly convex functionals. We develop a family of stochastic methods, including a stochastic prox-linear algorithm and a stochastic (generalized) subgradient procedure, and prove that, under mild technical conditions, each converges to first-order stationary points of the stochastic objective. We provide experiments further investigating our methods on non-smooth phase retrieval problems; the experiments indicate the practical effectiveness of the procedures.
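
    For the phase retrieval composition $h(c(x))$ with $h=|\cdot|$ and $c(x;a,b)=(a\cdot x)^2-b$, the one-sample prox-linear subproblem has a closed form via scalar soft-thresholding. The sketch below assumes a warm start and a decaying proximal parameter (cold starts can stall), neither of which is prescribed by the paper:

```python
# Stochastic prox-linear sketch for one-sample phase retrieval losses |(a.x)^2 - b|.
import numpy as np

rng = np.random.default_rng(7)
n = 10
x_true = rng.normal(size=n); x_true /= np.linalg.norm(x_true)
x = x_true + 0.3 * rng.normal(size=n)       # warm start (assumed)

for k in range(1, 3001):
    a = rng.normal(size=n)
    b = (a @ x_true) ** 2
    lam = 1.0 / np.sqrt(k)                  # decaying proximal parameter
    c = (a @ x) ** 2 - b                    # sampled composition value
    g = 2 * (a @ x) * a                     # gradient of the inner map
    q = g @ g + 1e-12                       # guard against a.x = 0
    s = np.sign(c) * max(abs(c) - lam * q, 0.0)   # soft-threshold: prox of |.|
    x = x + ((s - c) / q) * g               # closed-form prox-linear step

print(min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true)))  # small, up to sign
```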

    Graphical Convergence of Subgradients in Nonconvex Optimization and Learning

    We investigate the stochastic optimization problem of minimizing population risk, where the loss defining the risk is assumed to be weakly convex. Compositions of Lipschitz convex functions with smooth maps are the primary examples of such losses. We analyze the estimation quality of such nonsmooth and nonconvex problems via their sample average approximations. Our main results establish dimension-dependent rates on subgradient estimation in full generality and dimension-independent rates when the loss is a generalized linear model. As an application of the developed techniques, we analyze the nonsmooth landscape of a robust nonlinear regression problem. Comment: 36 pages
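
    As a quick numerical illustration of the estimation question (on an assumed noiseless Gaussian robust-regression model), the sample average subgradient of $f(x)=\mathbb{E}|a\cdot x-b|$ at a fixed point concentrates around its large-sample limit as the sample size grows:

```python
# Subgradient consistency of the sample average approximation for |a.x - b|.
import numpy as np

rng = np.random.default_rng(8)
n = 5
x_star = rng.normal(size=n)              # planted signal
x = rng.normal(size=n)                   # fixed evaluation point

def saa_subgrad(N):
    A = rng.normal(size=(N, n))
    b = A @ x_star                       # noiseless labels (toy assumption)
    return (np.sign(A @ x - b)[:, None] * A).mean(axis=0)

g_ref = saa_subgrad(10**6)               # large-sample surrogate for the population gradient
for N in (100, 1000, 10000):
    print(N, np.linalg.norm(saa_subgrad(N) - g_ref))   # shrinks roughly like 1/sqrt(N)
```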