
    Sparse non-negative super-resolution -- simplified and stabilised

    The convolution of a discrete measure, $x=\sum_{i=1}^k a_i\delta_{t_i}$, with a local window function, $\phi(s-t)$, is a common model for a measurement device whose resolution is substantially lower than that of the objects being observed. Super-resolution concerns localising the point sources $\{a_i,t_i\}_{i=1}^k$ with an accuracy beyond the essential support of $\phi(s-t)$, typically from $m$ samples $y(s_j)=\sum_{i=1}^k a_i\phi(s_j-t_i)+\eta_j$, where $\eta_j$ indicates an inexactness in the sample value. We consider the setting of $x$ being non-negative and seek to characterise all non-negative measures approximately consistent with the samples. We first show that $x$ is the unique non-negative measure consistent with the samples provided the samples are exact, i.e. $\eta_j=0$, $m\ge 2k+1$ samples are available, and $\phi(s-t)$ generates a Chebyshev system. This holds independently of how close the sample locations are and {\em does not rely on any regulariser beyond non-negativity}; as such, it extends and clarifies the work by Schiebinger et al. and De Castro et al., who achieve the same results but require a total variation regulariser, which we show is unnecessary. Moreover, we characterise non-negative solutions $\hat{x}$ consistent with the samples within the bound $\sum_{j=1}^m\eta_j^2\le\delta^2$. Any such non-negative measure is within $\mathcal{O}(\delta^{1/7})$ of the discrete measure $x$ generating the samples in the generalised Wasserstein distance, and the two converge to one another as $\delta$ approaches zero. We also show how to make these general results, stated for windows that form a Chebyshev system, precise for the case of $\phi(s-t)$ being a Gaussian window. The main innovation of these results is that non-negativity alone is sufficient to localise point sources beyond the essential sensor resolution. Comment: 59 pages, 7 figures
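As an illustrative sketch (not the authors' code), the sampling model above, with a Gaussian window, can be simulated as follows; the amplitudes, source locations, window width, and noise level are all hypothetical choices:

```python
import numpy as np

# Gaussian window phi(s - t); the width sigma is an assumed parameter.
def phi(s, t, sigma=0.1):
    return np.exp(-(s - t) ** 2 / (2 * sigma ** 2))

# Hypothetical discrete measure x = sum_i a_i * delta_{t_i} with k = 3 sources.
a = np.array([1.0, 0.5, 2.0])   # non-negative amplitudes a_i
t = np.array([0.2, 0.45, 0.8])  # source locations t_i

# m >= 2k + 1 = 7 samples y(s_j) = sum_i a_i * phi(s_j - t_i) + eta_j.
m = 7
s = np.linspace(0.0, 1.0, m)
rng = np.random.default_rng(0)
eta = 1e-3 * rng.standard_normal(m)        # sample inexactness eta_j
y = phi(s[:, None], t[None, :]) @ a + eta  # one noisy sample per location s_j

print(y.shape)  # (7,)
```

Recovering $\{a_i, t_i\}$ from `y` is the super-resolution problem; the result quoted above says that with exact samples ($\eta_j = 0$), non-negativity alone pins down the measure.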

    Computing second-order points under equality constraints: revisiting Fletcher's augmented Lagrangian

    We address the problem of minimizing a smooth function under smooth equality constraints. Under regularity assumptions on the feasible set, we consider a smooth exact penalty function known as Fletcher's augmented Lagrangian. We propose an algorithm to minimize the penalized cost function which reaches $\varepsilon$-approximate second-order critical points of the original optimization problem in at most $\mathcal{O}(\varepsilon^{-3})$ iterations. This improves on the current best theoretical bounds. Along the way, we show new properties of Fletcher's augmented Lagrangian, which may be of independent interest.
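For orientation only (this is not the paper's algorithm), Fletcher's augmented Lagrangian evaluates $f(x) - h(x)^\top \lambda(x) + \tfrac{\rho}{2}\|h(x)\|^2$, where $\lambda(x)$ is the least-squares multiplier estimate. A minimal sketch on a hypothetical toy problem:

```python
import numpy as np

# Toy problem (assumed for illustration): minimize f(x) = x0^2 + 2*x1^2
# subject to the single equality constraint h(x) = x0 + x1 - 1 = 0.
def f(x):
    return x[0] ** 2 + 2 * x[1] ** 2

def grad_f(x):
    return np.array([2 * x[0], 4 * x[1]])

def h(x):
    return np.array([x[0] + x[1] - 1.0])

def jac_h(x):
    return np.array([[1.0, 1.0]])  # Jacobian of h, shape (1, 2)

def fletcher_AL(x, rho=10.0):
    """Fletcher's augmented Lagrangian f(x) - h(x)^T lam(x) + (rho/2)||h(x)||^2,
    with lam(x) the least-squares solution of J(x)^T lam = grad f(x)."""
    J = jac_h(x)
    lam, *_ = np.linalg.lstsq(J.T, grad_f(x), rcond=None)
    return f(x) - h(x) @ lam + 0.5 * rho * (h(x) @ h(x))

print(fletcher_AL(np.array([0.5, 0.5])))  # prints 0.75 (h = 0 here, so value = f)
```

Because the penalty is smooth and exact, minimizing `fletcher_AL` with an unconstrained second-order method targets second-order critical points of the constrained problem, which is the setting the complexity bound above addresses.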