Sparse non-negative super-resolution -- simplified and stabilised
The convolution of a discrete measure, $x = \sum_{i=1}^{k} a_i \delta_{t_i}$, with
a local window function, $\psi(s-t)$, is a common model for a measurement
device whose resolution is substantially lower than that of the objects being
observed. Super-resolution concerns localising the point sources $\{a_i, t_i\}_{i=1}^{k}$
with an accuracy beyond the essential support of
$\psi(s-t)$, typically from samples $y(s_j) = \sum_i a_i \psi(s_j - t_i) + \eta_j$, where $\eta_j$ indicates an inexactness in the sample
value. We consider the setting of $x$ being non-negative and seek to
characterise all non-negative measures approximately consistent with the
samples. We first show that $x$ is the unique non-negative measure consistent
with the samples provided the samples are exact, i.e. $\eta_j = 0$, at least $2k+1$
samples are available, and $\psi(s-t)$ generates a Chebyshev system. This holds
independently of how close the sample locations are and {\em does not rely on any
regulariser beyond non-negativity}; as such, it extends and clarifies the work
by Schiebinger et al. and De Castro et al., who achieve the same results but
require a total variation regulariser, which we show is unnecessary.
Moreover, we characterise the non-negative solutions $\hat{x}$ consistent with
the samples within the noise bound $\sum_j \eta_j^2 \le \delta^2$. Any such
non-negative measure is close, in the generalised Wasserstein distance, to the
discrete measure $x$ generating the samples, with the two converging to one
another as $\delta$ approaches zero. We also show how to make
these general results, stated for windows that form a Chebyshev system, precise for
the case of $\psi$ being a Gaussian window. The main innovation of these
results is that non-negativity alone is sufficient to localise point sources
beyond the essential sensor resolution.
Comment: 59 pages, 7 figures
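The claim that non-negativity alone localises point sources can be probed with a minimal numerical sketch (not from the paper): discretise the candidate source locations on a grid, form the Gaussian-window sampling matrix, and fit exact samples with non-negative least squares, with no sparsity or total-variation regulariser. The window width, grid, and source parameters below are illustrative assumptions.

```python
# Sketch: recovering non-negative point sources from Gaussian-window samples
# using only a non-negativity constraint (NNLS), no sparsity regulariser.
# All parameters (window width, grid, sources) are illustrative choices.
import numpy as np
from scipy.optimize import nnls

sigma = 0.05                            # Gaussian window width (assumed)
t_grid = np.linspace(0.0, 1.0, 201)     # candidate source locations
true_t = np.array([0.3, 0.7])           # true source locations (on the grid)
true_a = np.array([1.0, 0.5])           # non-negative amplitudes (k = 2)

s = np.linspace(0.0, 1.0, 41)           # sample locations, m = 41 >= 2k+1
window = lambda d: np.exp(-d**2 / (2 * sigma**2))

# Exact samples y_j = sum_i a_i * psi(s_j - t_i), i.e. eta_j = 0
y = window(s[:, None] - true_t[None, :]) @ true_a

# Sampling matrix over the candidate grid, then the non-negative fit
Phi = window(s[:, None] - t_grid[None, :])
coef, residual = nnls(Phi, y)

# The recovered mass should concentrate near the true sources
mass_near_first = coef[(t_grid > 0.2) & (t_grid < 0.4)].sum()
print(f"total mass {coef.sum():.3f}, mass near t=0.3: {mass_near_first:.3f}")
```

With exact samples the non-negative fit places essentially all of its mass at (or immediately around) the true locations, consistent with the uniqueness result; perturbing the samples by a small $\eta_j$ perturbs the recovered measure gracefully, as the Wasserstein stability statement above suggests.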
Computing second-order points under equality constraints: revisiting Fletcher's augmented Lagrangian
We address the problem of minimizing a smooth function under smooth equality
constraints. Under regularity assumptions on the feasible set, we consider a
smooth exact penalty function known as Fletcher's augmented Lagrangian. We
propose an algorithm to minimize the penalized cost function which reaches
$\varepsilon$-approximate second-order critical points of the original
optimization problem in at most $\mathcal{O}(\varepsilon^{-3})$ iterations.
This improves on the current best theoretical bounds. Along the way, we show new
properties of Fletcher's augmented Lagrangian, which may be of independent
interest.
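As a toy sketch of the penalty itself (an assumption-laden illustration, not the paper's algorithm), Fletcher's augmented Lagrangian replaces the unknown multipliers with the least-squares multipliers $\lambda(x) = \arg\min_\lambda \|\nabla f(x) - \nabla c(x)^T \lambda\|$, giving a smooth unconstrained function $\varphi(x) = f(x) - \lambda(x)^T c(x) + \tfrac{\rho}{2}\|c(x)\|^2$. Below it is minimized by plain gradient descent, with finite differences, on a small equality-constrained quadratic; the problem, penalty parameter, and step size are arbitrary choices.

```python
# Toy sketch of Fletcher's augmented Lagrangian (a smooth exact penalty):
#   phi(x) = f(x) - lambda(x)^T c(x) + (rho/2) ||c(x)||^2,
# with lambda(x) the least-squares multipliers. The test problem, rho,
# and step size are illustrative choices, not the paper's algorithm.
import numpy as np

f = lambda x: x[0]**2 + 2 * x[1]**2          # smooth objective
c = lambda x: np.array([x[0] + x[1] - 1.0])  # equality constraint c(x) = 0
grad_f = lambda x: np.array([2 * x[0], 4 * x[1]])
jac_c = lambda x: np.array([[1.0, 1.0]])     # Jacobian of c
rho = 10.0                                   # penalty parameter (assumed)

def multipliers(x):
    # Least-squares multipliers: argmin_l || grad_f(x) - jac_c(x)^T l ||
    return np.linalg.lstsq(jac_c(x).T, grad_f(x), rcond=None)[0]

def phi(x):
    cx = c(x)
    return f(x) - multipliers(x) @ cx + 0.5 * rho * (cx @ cx)

def num_grad(x, h=1e-6):
    # Central finite differences on phi, to avoid deriving grad(phi) by hand
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (phi(x + e) - phi(x - e)) / (2 * h)
    return g

x = np.array([0.0, 0.0])
for _ in range(2000):                        # plain gradient descent on phi
    x = x - 0.05 * num_grad(x)

# The constrained minimizer of this toy problem is (2/3, 1/3)
print(x, c(x))
```

For $\rho$ large enough, minimizers of $\varphi$ coincide with solutions of the constrained problem, which is what makes the penalty "exact": here the unconstrained descent on $\varphi$ lands on the constrained minimizer $(2/3, 1/3)$ with $c(x) \approx 0$, without ever projecting onto the feasible set.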