55 research outputs found
Sparse Inverse Problems Over Measures: Equivalence of the Conditional Gradient and Exchange Methods
We study an optimization program over nonnegative Borel measures that
encourages sparsity in its solution. Efficient solvers for this program are in
increasing demand, as it arises when learning from data generated by a
`continuum-of-subspaces' model, a recent trend with applications in signal
processing, machine learning, and high-dimensional statistics. We prove that
the conditional gradient method (CGM) applied to this infinite-dimensional
program, as proposed recently in the literature, is equivalent to the exchange
method (EM) applied to its Lagrangian dual, which is a semi-infinite program.
In doing so, we formally connect such infinite-dimensional programs to the
well-established field of semi-infinite programming.
On the one hand, the equivalence established in this paper allows us to
provide a rate of convergence for EM which is more general than those existing
in the literature. On the other hand, this connection and the resulting
geometric insights might in the future lead to the design of improved variants
of CGM for infinite-dimensional programs, which has been an active research
topic. CGM is also known as the Frank-Wolfe algorithm.
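To make the connection concrete, below is a minimal sketch of CGM on a grid discretisation of the measure space, where the program becomes least squares over the simplex $\{x \ge 0,\ \sum_i x_i \le \tau\}$. The measurement operator, data, mass budget and grid size are illustrative assumptions, not objects from the paper.

```python
# A minimal sketch of the conditional gradient method (CGM / Frank-Wolfe)
# for a sparse inverse problem over nonnegative measures, discretised onto
# a fixed grid. The operator A, data y, mass budget tau and grid size are
# invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_meas = 200, 30
A = rng.standard_normal((n_meas, n_grid))                 # discretised measurement operator
y = A @ np.maximum(rng.standard_normal(n_grid) - 2.2, 0)  # sparse ground truth
tau = 5.0                                                 # bound on the total mass

x = np.zeros(n_grid)                         # start from the zero measure
for k in range(200):
    grad = A.T @ (A @ x - y)                 # gradient of 0.5 * ||Ax - y||^2
    i = int(np.argmin(grad))                 # linear minimisation oracle:
    s = np.zeros(n_grid)                     # best single atom (vertex of the
    if grad[i] < 0:                          # simplex {x >= 0, sum x <= tau}),
        s[i] = tau                           # or the zero measure if no atom helps
    gamma = 2.0 / (k + 2)                    # standard open-loop step size
    x = (1 - gamma) * x + gamma * s          # convex combination keeps feasibility

print("support size:", int((x > 1e-6).sum()), "residual:", np.linalg.norm(A @ x - y))
```

Each iteration activates at most one new atom, which is the mechanism behind the sparsity of CGM iterates; in the dual view taken by the paper, EM instead exchanges which constraints, indexed by the same candidate locations, are kept active.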
Sparse non-negative super-resolution -- simplified and stabilised
The convolution of a discrete measure, $x = \sum_{i=1}^{k} a_i \delta_{t_i}$, with
a local window function, $\phi(s-t)$, is a common model for a measurement
device whose resolution is substantially lower than that of the objects being
observed. Super-resolution concerns localising the point sources
$\{(a_i, t_i)\}_{i=1}^{k}$ with an accuracy beyond the essential support of
$\phi(s-t)$, typically from $m$ samples
$y(s_j) = \sum_{i=1}^{k} a_i \phi(s_j - t_i) + \delta_j$, where $\delta_j$
indicates an inexactness in the sample value. We consider the setting of $x$
being non-negative and seek to characterise all non-negative measures
approximately consistent with the samples. We first show that $x$ is the
unique non-negative measure consistent with the samples provided the samples
are exact, i.e. $\delta_j = 0$, $m \ge 2k+1$ samples are available, and
$\phi(s-t)$ generates a Chebyshev system. This is independent of how close the
sample locations are and {\em does not rely on any regulariser beyond
non-negativity}; as such, it extends and clarifies the work by Schiebinger et
al. and De Castro et al., who achieve the same results but require a total
variation regulariser, which we show is unnecessary. Moreover, we characterise
the non-negative solutions $\hat{x}$ consistent with the samples within the
bound $\sum_{j=1}^{m} \delta_j^2 \le \delta^2$. Any such non-negative measure
is within $\mathcal{O}(\delta^{1/7})$ of the discrete measure $x$ generating
the samples in the generalised Wasserstein distance, converging to one another
as $\delta$ approaches zero. We also show how to make these general results,
for windows that form a Chebyshev system, precise for the case of $\phi(s-t)$
being a Gaussian window. The main innovation of these results is that
non-negativity alone is sufficient to localise point sources beyond the
essential sensor resolution.
Comment: 59 pages, 7 figures
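As a rough numerical companion (not the paper's method, which characterises all consistent measures rather than prescribing a solver): the sketch below discretises the problem onto a grid and fits exact Gaussian-window samples using non-negative least squares, i.e. with non-negativity as the only constraint and no total-variation regulariser. The source locations, amplitudes, window width $\sigma$ and both grids are invented for illustration.

```python
# Recover point sources from exact Gaussian-window samples using only a
# non-negativity constraint. The continuous measure is discretised so that
# scipy's non-negative least squares can stand in for the feasibility
# problem "find a non-negative measure consistent with the samples".
import numpy as np
from scipy.optimize import nnls

t_true = np.array([0.31, 0.47, 0.74])        # source locations (assumed)
a_true = np.array([1.0, 0.6, 0.8])           # source amplitudes (assumed)
sigma = 0.05                                  # Gaussian window width (assumed)
phi = lambda s, t: np.exp(-(s - t) ** 2 / (2 * sigma ** 2))

s_samples = np.linspace(0, 1, 15)             # m sample locations
y = phi(s_samples[:, None], t_true[None, :]) @ a_true  # exact samples, delta_j = 0

grid = np.linspace(0, 1, 400)                 # fine grid of candidate locations
Phi = phi(s_samples[:, None], grid[None, :])  # m x n_grid design matrix
x_hat, _ = nnls(Phi, y)                       # non-negativity is the only constraint

support = grid[x_hat > 1e-3]
print("recovered locations near:", support)
```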
Computing second-order points under equality constraints: revisiting Fletcher's augmented Lagrangian
We address the problem of minimizing a smooth function under smooth equality
constraints. Under regularity assumptions on the feasible set, we consider a
smooth exact penalty function known as Fletcher's augmented Lagrangian. We
propose an algorithm to minimize the penalized cost function which reaches
$\varepsilon$-approximate second-order critical points of the original
optimization problem in at most $\mathcal{O}(\varepsilon^{-3})$ iterations.
This improves on the current best theoretical bounds. Along the way, we show
new properties of Fletcher's augmented Lagrangian, which may be of independent
interest.
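For context, here is a hedged sketch of Fletcher's augmented Lagrangian in one common form, $\varphi_\beta(x) = f(x) + \langle h(x), \lambda(x) \rangle + \tfrac{\beta}{2}\|h(x)\|^2$ with least-squares multiplier estimates $\lambda(x)$; the sphere-constrained quadratic below and the choice of $\beta$ are illustrative, and sign conventions for the multiplier term vary across references.

```python
# A hedged sketch of Fletcher's augmented Lagrangian: the penalty
# phi_beta(x) = f(x) + <h(x), lambda(x)> + (beta/2) * ||h(x)||^2, where
# lambda(x) are least-squares multiplier estimates. The toy problem
# (minimise a quadratic over the unit sphere) and beta are illustrative.
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((5, 5))
C = C + C.T                                   # symmetric matrix for the cost

f = lambda x: 0.5 * x @ C @ x                 # smooth objective
grad_f = lambda x: C @ x
h = lambda x: np.array([x @ x - 1.0])         # equality constraint h(x) = 0
Dh = lambda x: (2 * x)[None, :]               # Jacobian of h, shape (1, n)

def fletcher_penalty(x, beta=10.0):
    # least-squares multipliers: argmin_lam ||grad_f(x) + Dh(x).T @ lam||^2
    lam, *_ = np.linalg.lstsq(Dh(x).T, -grad_f(x), rcond=None)
    return f(x) + h(x) @ lam + 0.5 * beta * (h(x) @ h(x))

x0 = rng.standard_normal(5)
print("penalty at x0:", fletcher_penalty(x0))
```

The appeal of this penalty is that it is smooth and exact: under the paper's regularity assumptions, unconstrained minimisers of $\varphi_\beta$ correspond to solutions of the constrained problem once $\beta$ is sufficiently large.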
- …