
A Class of Nonconvex Penalties Preserving Overall Convexity in Optimization-Based Mean Filtering

Abstract

$\ell_1$ mean filtering is a conventional optimization-based method for estimating the positions of jumps in a piecewise constant signal perturbed by additive noise. In this method, the $\ell_1$ norm penalizes the first-order derivative of the signal to promote sparsity. Theoretical results, however, show that in some situations, which can occur frequently in practice, the conventional method identifies false change points even when the jump amplitudes tend to $\infty$. This issue is referred to as the staircasing problem and limits the practical utility of $\ell_1$ mean filtering. In this paper, sparsity is penalized more tightly than by the $\ell_1$ norm through a certain class of nonconvex functions, while the strict convexity of the resulting optimization problem is preserved. This yields higher performance in detecting change points. To theoretically justify the performance improvement over $\ell_1$ mean filtering, deterministic and stochastic sufficient conditions for exact change point recovery are derived. In particular, the theoretical results show that in the staircasing problem, our approach may be able to exclude the false change points where $\ell_1$ mean filtering can fail. Numerical simulations demonstrate the superiority of our method over $\ell_1$ mean filtering and another state-of-the-art algorithm that promotes sparsity more strongly than the $\ell_1$ norm. Specifically, it is shown that our approach can consistently detect change points when the jump amplitudes become sufficiently large, while the two competitors cannot.

Comment: Submitted to IEEE Transactions on Signal Processing
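For context, $\ell_1$ mean filtering of a noisy observation $y \in \mathbb{R}^n$ takes the standard form

$$\hat{x} = \arg\min_{x \in \mathbb{R}^n} \; \frac{1}{2}\sum_{i=1}^{n}(y_i - x_i)^2 + \lambda \sum_{i=1}^{n-1} |x_{i+1} - x_i|,$$

where $\lambda > 0$ controls the trade-off between data fidelity and sparsity of the first-order differences. Based on the abstract's description, a sketch of the nonconvex-penalized variant (notation $\phi$ is an assumption here, not taken from the paper) replaces the absolute value with a nonconvex sparsity-promoting penalty,

$$\hat{x} = \arg\min_{x \in \mathbb{R}^n} \; \frac{1}{2}\sum_{i=1}^{n}(y_i - x_i)^2 + \lambda \sum_{i=1}^{n-1} \phi(x_{i+1} - x_i),$$

with $\phi$ restricted to a class whose concavity is weak enough to be dominated by the strictly convex quadratic data-fidelity term, so that the overall objective remains strictly convex; the precise characterization of this class is given in the paper itself.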
