ℓ1 mean filtering is a conventional, optimization-based method for estimating the positions of jumps in a piecewise constant signal perturbed by additive noise. In this method, the ℓ1 norm promotes sparsity of the first-order derivative of the signal. Theoretical results, however, show that in some situations, which occur frequently in practice, the conventional method identifies false change points even when the jump amplitudes tend to infinity. This issue is referred to as the stair-casing problem, and it restricts the practical importance of ℓ1 mean filtering. In this paper, sparsity is penalized more tightly than with the ℓ1 norm by exploiting a certain class of nonconvex functions, while the strict convexity of the resulting optimization problem is preserved. This yields higher performance in detecting change points. To theoretically justify the performance improvement over ℓ1 mean filtering, deterministic and stochastic sufficient conditions for exact change point recovery are derived. In particular, the theoretical results show that, in the stair-casing problem, our approach might be able to exclude the false change points, while ℓ1 mean filtering may fail. A number of numerical simulations demonstrate the superiority of our method over ℓ1 mean filtering and over another state-of-the-art algorithm that promotes sparsity more tightly than the ℓ1 norm. Specifically, it is shown that our approach consistently detects change points when the jump amplitudes become sufficiently large, while the two competitors cannot.

Comment: Submitted to IEEE Transactions on Signal Processing
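As background, the conventional ℓ1 mean filtering baseline described above coincides with one-dimensional total-variation denoising (the fused-lasso signal approximator): minimize 0.5·||x − y||² + λ·||Dx||₁, where D is the first-order difference operator. Below is a minimal sketch solved via ADMM; the function name, penalty parameter `rho`, regularization weight, and iteration count are illustrative assumptions, not notation or values taken from the paper:

```python
import numpy as np

def l1_mean_filter(y, lam, rho=1.0, n_iter=1000):
    """Sketch of l1 mean filtering (1D total-variation denoising):
    minimize 0.5*||x - y||^2 + lam*||D x||_1 via ADMM, where
    (D x)[i] = x[i+1] - x[i] is the first-order difference."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n difference matrix
    A = np.eye(n) + rho * D.T @ D           # matrix for the x-update solve
    z = np.zeros(n - 1)                     # auxiliary variable for D x
    u = np.zeros(n - 1)                     # scaled dual variable
    for _ in range(n_iter):
        # x-update: quadratic subproblem, solved as a linear system
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        Dx = D @ x
        # z-update: soft-thresholding (prox of the l1 norm)
        v = Dx + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual ascent on the constraint D x = z
        u = u + Dx - z
    return x

# Usage sketch: a noiseless two-level step; the l1 penalty keeps the
# segments flat but slightly shrinks the jump amplitude (by 2*lam/segment
# length here), which hints at why large-amplitude behavior matters.
y = np.concatenate([np.zeros(20), 5.0 * np.ones(20)])
x_hat = l1_mean_filter(y, lam=1.0)
```

Note this sketch uses a dense difference matrix for clarity; for long signals a banded or sparse solver would be the idiomatic choice.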