188 research outputs found

    A Class of Nonconvex Penalties Preserving Overall Convexity in Optimization-Based Mean Filtering

    $\ell_1$ mean filtering is a conventional, optimization-based method for estimating the positions of jumps in a piecewise-constant signal perturbed by additive noise. In this method, the $\ell_1$ norm of the first-order derivative of the signal is penalized to promote sparsity of that derivative. Theoretical results, however, show that in some situations, which can occur frequently in practice, the conventional method identifies false change points even when the jump amplitudes tend to $\infty$. This issue is referred to as the stair-casing problem and limits the practical value of $\ell_1$ mean filtering. In this paper, sparsity is penalized more tightly than with the $\ell_1$ norm by exploiting a certain class of nonconvex functions, while the strict convexity of the resulting optimization problem is preserved. This leads to higher performance in detecting change points. To theoretically justify the performance improvements over $\ell_1$ mean filtering, deterministic and stochastic sufficient conditions for exact change point recovery are derived. In particular, the theoretical results show that in the stair-casing problem our approach may be able to exclude the false change points, while $\ell_1$ mean filtering may fail. A number of numerical simulations illustrate the superiority of our method over $\ell_1$ mean filtering and another state-of-the-art algorithm that promotes sparsity more strongly than the $\ell_1$ norm. Specifically, it is shown that our approach can consistently detect change points when the jump amplitudes become sufficiently large, while the two other competitors cannot. Comment: Submitted to IEEE Transactions on Signal Processing.
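
    As a point of reference for the formulation above, the following is a minimal sketch of conventional $\ell_1$ mean filtering (one-dimensional total-variation denoising) on a synthetic piecewise-constant signal. It is not the paper's proposed nonconvex-penalty method, and the solver (CVXPY), the test signal, and the weight `lam` are illustrative assumptions.

```python
# Minimal sketch of l1 mean filtering: minimize 0.5*||y - x||^2 + lam*||D x||_1,
# where D is the first-order difference operator. Illustrative only.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(40), 3.0 * np.ones(40), 1.0 * np.ones(40)])
y = clean + 0.3 * rng.standard_normal(clean.size)   # piecewise-constant signal + noise

n = y.size
D = np.diff(np.eye(n), axis=0)                       # (n-1) x n first-difference matrix
lam = 2.0                                            # illustrative penalty weight

x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.sum_squares(y - x) + lam * cp.norm1(D @ x))
cp.Problem(objective).solve()

change_points = np.flatnonzero(np.abs(np.diff(x.value)) > 1e-3)
print(change_points)                                 # indices where the estimate jumps
```

    The change the paper studies amounts to replacing the `cp.norm1(D @ x)` term with a penalty from its class of nonconvex functions, chosen so that the overall objective remains strictly convex.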

    On Solving SAR Imaging Inverse Problems Using Non-Convex Regularization with a Cauchy-based Penalty

    Synthetic aperture radar (SAR) imagery can provide useful information in a multitude of applications, including climate change, environmental monitoring, meteorology, high-dimensional mapping, ship monitoring, and planetary exploration. In this paper, we investigate solutions to a number of inverse problems encountered in SAR imaging. We propose a convex proximal splitting method for the optimization of a cost function that includes a non-convex Cauchy-based penalty. The convergence of the overall cost function optimization is ensured through careful selection of model parameters within a forward-backward (FB) algorithm. The performance of the proposed penalty function is evaluated by solving three standard SAR imaging inverse problems (super-resolution, image formation, and despeckling), as well as ship wake detection for maritime applications. The proposed method is compared to several methods employing classical penalty functions, such as total variation (TV) and $L_1$ norms, and to the generalized minimax-concave (GMC) penalty. We show that the proposed Cauchy-based penalty function leads to better image reconstruction results than the reference penalty functions for all SAR imaging inverse problems considered in this paper. Comment: 18 pages, 7 figures.
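
    To make the forward-backward structure concrete, below is a minimal sketch of an FB iteration with a Cauchy-based penalty of the form p(u) = log(gamma^2 + u^2), applied to a generic sparse linear inverse problem. This is not the authors' implementation; the penalty normalization, the element-wise proximal step obtained from the stationarity cubic, and the parameter choices (step size, mu, gamma) are assumptions made for illustration, whereas the paper selects these parameters jointly to guarantee convergence.

```python
# Minimal forward-backward sketch with a Cauchy-based penalty (illustrative).
import numpy as np

def cauchy_prox(z, mu, gamma):
    """Element-wise prox of u -> mu*log(gamma^2 + u^2): solve the stationarity
    cubic (u - z)*(gamma^2 + u^2) + 2*mu*u = 0 and keep the best real root."""
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        roots = np.roots([1.0, -zi, gamma**2 + 2.0 * mu, -zi * gamma**2])
        real = roots[np.abs(roots.imag) < 1e-8].real
        if real.size == 0:                            # numerical safeguard
            real = np.array([roots[np.argmin(np.abs(roots.imag))].real])
        costs = 0.5 * (real - zi) ** 2 + mu * np.log(gamma**2 + real**2)
        out[i] = real[np.argmin(costs)]
    return out

rng = np.random.default_rng(1)
n, m = 64, 48
A = rng.standard_normal((m, n)) / np.sqrt(m)          # generic measurement operator
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 3.0 * rng.standard_normal(5)
y = A @ x_true + 0.05 * rng.standard_normal(m)

step = 1.0 / np.linalg.norm(A, 2) ** 2                # gradient step for 0.5*||Ax - y||^2
mu, gamma = 0.05, 0.5                                 # illustrative penalty parameters
x = np.zeros(n)
for _ in range(200):
    grad = A.T @ (A @ x - y)                          # forward (gradient) step
    x = cauchy_prox(x - step * grad, mu * step, gamma)  # backward (proximal) step
```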

    Non-Convex Methods for Compressed Sensing and Low-Rank Matrix Problems

    In this thesis we study functionals of the type $\mathcal{K}_{f,A,\mathbf{b}}(\mathbf{x}) = \mathcal{Q}(f)(\mathbf{x}) + \|A\mathbf{x} - \mathbf{b}\|^2$, where $A$ is a linear map, $\mathbf{b}$ a measurement vector, and $\mathcal{Q}$ is a functional transform called the quadratic envelope; this object is a very close relative of the Lasry-Lions envelope, and its use is meant to regularize the functional $f$. Carlsson and Olsson investigated in earlier works the connections between the functionals $\mathcal{K}_{f,A,\mathbf{b}}$ and their unregularized counterparts $f(\mathbf{x}) + \|A\mathbf{x} - \mathbf{b}\|^2$. For certain choices of $f$, the penalty $\mathcal{Q}(f)(\cdot)$ acts as a sparsifying agent, and the minimization of $\mathcal{K}_{f,A,\mathbf{b}}(\mathbf{x})$ delivers sparse solutions to the linear system of equations $A\mathbf{x} = \mathbf{b}$. We prove existence and uniqueness results for the sparse (or low-rank, since the functional $f$ can have any Hilbert space as its domain) global minimizer of $\mathcal{K}_{f,A,\mathbf{b}}(\mathbf{x})$ for some instances of $f$, under Restricted Isometry Property conditions on $A$. The theory is complemented with robustness results and a wide range of numerical experiments, both synthetic and from real-world problems.
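
    For reference, one standard way to define the quadratic envelope (following Carlsson's earlier work on the construction; that the thesis uses exactly this normalization is an assumption here) is as the pointwise supremum of the concave quadratic minorants of $f$ with curvature parameter $\gamma > 0$:

\[
\mathcal{Q}_{\gamma}(f)(\mathbf{x}) \;=\; \sup\Bigl\{\, \alpha - \tfrac{\gamma}{2}\|\mathbf{x} - \mathbf{y}\|^{2} \;:\; \alpha \in \mathbb{R},\ \mathbf{y} \in \mathcal{H},\ \alpha - \tfrac{\gamma}{2}\|\cdot - \mathbf{y}\|^{2} \le f \,\Bigr\}.
\]

    By construction $\mathcal{Q}_{\gamma}(f) \le f$, and $\mathcal{Q}_{\gamma}(f)(\mathbf{x}) + \tfrac{\gamma}{2}\|\mathbf{x}\|^{2}$ is convex, being a supremum of affine functions of $\mathbf{x}$; this bounded nonconvexity is what allows the quadratic data term $\|A\mathbf{x} - \mathbf{b}\|^{2}$ to offset the penalty's nonconvexity for suitably conditioned $A$.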