Phase Retrieval using Lipschitz Continuous Maps
In this note we prove that reconstruction from magnitudes of frame
coefficients (the so called "phase retrieval problem") can be performed using
Lipschitz continuous maps. Specifically we show that when the nonlinear
analysis map α is injective, with α(x) = (|⟨x, f_k⟩|)_{k=1}^m, where {f_1, ..., f_m} is a frame for the
Hilbert space H, then there exists a left inverse map ω
that is Lipschitz continuous.
Additionally we obtain the Lipschitz constant of this inverse map in terms of
the lower Lipschitz constant of α. Surprisingly the increase in
Lipschitz constant is independent of the space dimension or frame redundancy.
Comment: 12 pages, 1 figure
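As a concrete illustration, the nonlinear analysis map described here can be sketched numerically; this is a minimal example assuming the standard form x ↦ (|⟨x, f_k⟩|)_k for a generic random frame (the frame and dimensions below are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 9                      # space dimension and frame size (m > n: redundant)
F = rng.standard_normal((m, n))  # rows f_k form a generic frame for R^n

def analysis_map(x):
    """Nonlinear analysis map: magnitudes of the frame coefficients |<x, f_k>|."""
    return np.abs(F @ x)

x = rng.standard_normal(n)
# The map is invariant under a global sign change, so any left inverse
# can only recover x up to sign -- the quotient on which injectivity holds.
assert np.allclose(analysis_map(x), analysis_map(-x))
```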
On Lipschitz Analysis and Lipschitz Synthesis for the Phase Retrieval Problem
In this paper we prove two results regarding reconstruction from magnitudes
of frame coefficients (the so called "phase retrieval problem"). First we show
that phase retrievability as an algebraic property implies that nonlinear maps
are bi-Lipschitz with respect to appropriate metrics on the quotient space.
Second we prove that reconstruction can be performed using Lipschitz continuous
maps. Specifically we show that when nonlinear analysis maps α, β : Ĥ → R^m
are injective, with α(x) = (|⟨x, f_k⟩|)_{k=1}^m and β(x) = (|⟨x, f_k⟩|²)_{k=1}^m,
where {f_1, ..., f_m} is a frame for a Hilbert space H and
Ĥ = H/~ is the quotient modulo a global phase factor, then α is bi-Lipschitz with respect to the class of
"natural metrics" D_p(x, y) = min_{|c|=1} ||x - cy||_p, whereas β
is bi-Lipschitz with respect to the class of matrix-norm induced
metrics d_p(x, y) = ||xx* - yy*||_p. Furthermore, there exist left inverse
maps ω and ψ of α and β respectively,
that are Lipschitz continuous with respect to the appropriate metric.
Additionally we obtain the Lipschitz constants of these inverse maps in terms
of the lower Lipschitz constants of α and β. Surprisingly the
increase in Lipschitz constant is a relatively small factor, independent of the
space dimension or the frame redundancy.
Comment: 26 pages, 1 figure; presented in part at ICHAA 2015 Conference, N
Frames and Phaseless Reconstruction
Frame design for phaseless reconstruction is now part of the broader problem
of nonlinear reconstruction and is an emerging topic in harmonic analysis. The
problem of phaseless reconstruction can be simply stated as follows. Given the
magnitudes of the coefficients generated by a linear redundant system (frame),
we want to reconstruct the unknown input. This problem first occurred in X-ray
crystallography starting in the early 20th century. The same nonlinear
reconstruction problem shows up in speech processing, particularly in speech
recognition.
In this lecture we shall cover existing analysis results as well as stability
bounds for signal recovery including: necessary and sufficient conditions for
injectivity, Lipschitz bounds of the nonlinear map and its left inverses,
stochastic performance bounds, and algorithms for signal recovery.
Comment: Lecture Notes for the 2015 AMS Short Course "Finite Frame Theory: A
Complete Introduction to Overcompleteness", Jan. 2015, San Antonio. To appear
in Proceedings of Symposia in Applied Mathematics
Representation and Coding of Signal Geometry
Approaches to signal representation and coding theory have traditionally
focused on how to best represent signals using parsimonious representations
that incur the lowest possible distortion. Classical examples include linear
and non-linear approximations, sparse representations, and rate-distortion
theory. Very often, however, the goal of processing is to extract specific
information from the signal, and the distortion should be measured on the
extracted information. The corresponding representation should, therefore,
represent that information as parsimoniously as possible, without necessarily
accurately representing the signal itself.
In this paper, we examine the problem of encoding signals such that
sufficient information is preserved about their pairwise distances and their
inner products. For that goal, we consider randomized embeddings as an encoding
mechanism and provide a framework to analyze their performance. We also
demonstrate that it is possible to design the embedding such that it represents
different ranges of distances with different precision. These embeddings also
allow the computation of kernel inner products with control on their inner
product-preserving properties. Our results provide a broad framework to design
and analyze embeddings, and generalize existing results in this area, such as
random Fourier kernels and universal embeddings.
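The kind of randomized embedding analyzed here can be illustrated with a small numerical sketch; the Gaussian construction, dimensions, and distortion bound below are illustrative assumptions, not the paper's specific design:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, k, num = 1000, 300, 20          # ambient dimension, embedding dimension, points
X = rng.standard_normal((num, n))  # a cloud of signals

# Randomized linear embedding: a Gaussian matrix scaled so that norms are
# preserved in expectation (a Johnson-Lindenstrauss style construction).
A = rng.standard_normal((k, n)) / np.sqrt(k)
Y = X @ A.T

# Measure the worst relative distortion over all pairwise distances.
worst = 0.0
for i, j in combinations(range(num), 2):
    ratio = np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
    worst = max(worst, abs(ratio - 1.0))

# At these sizes the distortion is small with high probability.
assert worst < 0.3
```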
Stochastic model-based minimization of weakly convex functions
We consider a family of algorithms that successively sample and minimize
simple stochastic models of the objective function. We show that under
reasonable conditions on approximation quality and regularity of the models,
any such algorithm drives a natural stationarity measure to zero at the rate
O(k^{-1/4}). As a consequence, we obtain the first complexity guarantees for
the stochastic proximal point, proximal subgradient, and regularized
Gauss-Newton methods for minimizing compositions of convex functions with
smooth maps. The guiding principle, underlying the complexity guarantees, is
that all algorithms under consideration can be interpreted as approximate
descent methods on an implicit smoothing of the problem, given by the Moreau
envelope. Specializing to classical circumstances, we obtain the long-sought
convergence rate of the stochastic projected gradient method, without batching,
for minimizing a smooth function on a closed convex set.
Comment: 33 pages, 4 figures
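As a toy instance of the projected gradient result mentioned at the end, here is a stochastic projected gradient sketch for least squares over the nonnegative orthant; the problem, step sizes, and iteration count are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 5, 400
A = rng.standard_normal((N, n))
x_true = np.abs(rng.standard_normal(n))  # a feasible solution
b = A @ x_true                           # consistent linear system

def project(x):
    """Euclidean projection onto the closed convex set C = {x : x >= 0}."""
    return np.maximum(x, 0.0)

x = np.zeros(n)
for k in range(20000):
    i = rng.integers(N)                         # sample one term of the objective
    g = (A[i] @ x - b[i]) * A[i]                # stochastic gradient of 0.5*(a_i.x - b_i)^2
    x = project(x - 0.1 / np.sqrt(k + 1) * g)   # projected step with decaying rate

assert np.linalg.norm(A @ x - b) < 0.1 * np.linalg.norm(b)
```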
Nonlinear frames and sparse reconstructions in Banach spaces
In the first part of this paper, we consider nonlinear extension of frame
theory by introducing bi-Lipschitz maps between Banach spaces. Our linear
model of bi-Lipschitz maps is the analysis operator associated with Hilbert
frames, p-frames, Banach frames, g-frames and fusion frames. In the general
Banach space setting, a stable algorithm to reconstruct a signal from its
noisy measurements may not exist. In this paper, we establish
exponential convergence of two iterative reconstruction algorithms when the
measurement map F is not too far from some bounded below linear operator with bounded
pseudo-inverse, and when F is a well-localized map between two Banach spaces
with dense Hilbert subspaces. The crucial step in proving the latter conclusion is
a novel fixed point theorem for a well-localized map on a Banach space.
In the second part of this paper, we consider stable reconstruction of sparse
signals in a union of closed linear subspaces of a Hilbert space
from their nonlinear measurements F(x). We create an optimization
framework called a sparse approximation triple, and
show that the minimizer of the associated optimization problem provides a
suboptimal approximation to the original sparse signal when
the measurement map F has the sparse Riesz property and the almost linear
property on the union of subspaces. These two new properties are also discussed in this
paper when F is not far away from a linear measurement operator
having the restricted isometry property.
The proximal point method revisited
In this short survey, I revisit the role of the proximal point method in
large scale optimization. I focus on three recent examples: a proximally guided
subgradient method for weakly convex stochastic approximation, the prox-linear
algorithm for minimizing compositions of convex functions and smooth maps, and
Catalyst generic acceleration for regularized Empirical Risk Minimization.
Comment: 11 pages, submitted to SIAG/OPT Views and News
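For readers unfamiliar with the method, the proximal point iteration repeatedly applies the proximity operator prox_{tf}(x) = argmin_y { f(y) + (1/2t)||y - x||² }; here is a minimal sketch for the illustrative choice f(y) = |y|, whose prox is soft-thresholding:

```python
import numpy as np

def prox_abs(x, t):
    """Proximity operator of f(y) = |y| with step t: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Proximal point iterations on f(y) = |y| shrink the iterate toward the
# minimizer y = 0 by t per step, until it reaches zero exactly.
x = 5.0
history = [x]
for _ in range(10):
    x = prox_abs(x, 1.0)
    history.append(float(x))

assert history[:7] == [5.0, 4.0, 3.0, 2.0, 1.0, 0.0, 0.0]
```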
Quasi-Linear Compressed Sensing
Inspired by significant real-life applications, in particular, sparse phase
retrieval and sparse pulsation frequency detection in Asteroseismology, we
investigate a general framework for compressed sensing, where the measurements
are quasi-linear. We formulate natural generalizations of the well-known
Restricted Isometry Property (RIP) towards nonlinear measurements, which allow
us to prove both unique identifiability of sparse signals as well as the
convergence of recovery algorithms to compute them efficiently. We show that
for certain randomized quasi-linear measurements, including Lipschitz
perturbations of classical RIP matrices and phase retrieval from random
projections, the proposed restricted isometry properties hold with high
probability. We analyze a generalized Orthogonal Least Squares (OLS) under the
assumption that magnitudes of signal entries to be recovered decay fast. Greed
is good again, as we show that this algorithm performs efficiently in phase
retrieval and asteroseismology. For situations where the decay assumption on
the signal does not necessarily hold, we propose two alternative algorithms,
which are natural generalizations of the well-known iterative hard and
soft-thresholding. While these algorithms are rarely successful for the
mentioned applications, we show their strong recovery guarantees for
quasi-linear measurements which are Lipschitz perturbations of RIP matrices.
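The linear baseline of the iterative hard-thresholding algorithm generalized in this work can be sketched as follows; the Gaussian matrix, dimensions, and unit step size are illustrative assumptions (the paper's quasi-linear setting replaces the product Ax with a nonlinear map):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, s = 60, 40, 3                           # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # RIP-style random matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                                # linear measurements (baseline case)

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

x = np.zeros(n)
for _ in range(300):
    x = hard_threshold(x + A.T @ (y - A @ x), s)  # gradient step + sparse projection

assert np.count_nonzero(x) <= s
assert np.linalg.norm(y - A @ x) < np.linalg.norm(y)
```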
Stochastic Methods for Composite and Weakly Convex Optimization Problems
We consider minimization of stochastic functionals that are compositions of a
(potentially) non-smooth convex function and a smooth function and, more
generally, stochastic weakly-convex functionals. We develop a family of
stochastic methods---including a stochastic prox-linear algorithm and a
stochastic (generalized) sub-gradient procedure---and prove that, under mild
technical conditions, each converges to first-order stationary points of the
stochastic objective. We provide experiments further investigating our methods
on non-smooth phase retrieval problems; the experiments indicate the practical
effectiveness of the procedures.
Graphical Convergence of Subgradients in Nonconvex Optimization and Learning
We investigate the stochastic optimization problem of minimizing population
risk, where the loss defining the risk is assumed to be weakly convex.
Compositions of Lipschitz convex functions with smooth maps are the primary
examples of such losses. We analyze the estimation quality of such nonsmooth
and nonconvex problems by their sample average approximations. Our main results
establish dimension-dependent rates on subgradient estimation in full
generality and dimension-independent rates when the loss is a generalized
linear model. As an application of the developed techniques, we analyze the
nonsmooth landscape of a robust nonlinear regression problem.
Comment: 36 pages