
    Robust sparse analysis regularization

    This work studies some properties of ℓ1-analysis regularization for the resolution of linear inverse problems. Analysis regularization minimizes the ℓ1 norm of the correlations between the signal and the atoms in the dictionary. The corresponding variational problem includes several well-known regularizations such as the discrete total variation and the fused lasso. We give sufficient conditions under which analysis regularization is robust to noise.

    ANALYSIS VERSUS SYNTHESIS. Variational regularization is a popular way to compute an approximation of x0 ∈ R^N from the measurements y ∈ R^Q defined by the inverse problem y = Φ x0 + w, where w is some additive noise and Φ is a linear operator, for instance a super-resolution or an inpainting operator. A dictionary D ∈ R^{N×P} is used to synthesize a signal x = Dα from coefficients α ∈ R^P; common examples of dictionaries in signal processing include the wavelet transform and the finite-difference operator. Synthesis regularization corresponds to the minimization problem min_{α ∈ R^P} (1/2)||y − Ψα||² + λ||α||_1, where Ψ = ΦD and x = Dα. Properties of the synthesis prior have been studied intensively. Analysis regularization corresponds instead to the minimization problem min_{x ∈ R^N} (1/2)||y − Φx||² + λ||D* x||_1. In the noiseless case, w = 0, one uses the constrained optimization min_{x ∈ R^N} ||D* x||_1 subject to Φx = y. This prior has been studied much less than the synthesis prior.

    UNION OF SUBSPACES MODEL. It is natural to keep track of the support of the correlation vector D* x, as done in the following definition. A signal x such that D* x is sparse lives in a cospace G_J of small dimension, where G_J is defined as follows. Definition 2. Given a dictionary D and a subset J of {1, …, P}, the cospace G_J is defined as G_J = Ker D_J*, where D_J is the subdictionary whose columns are indexed by J. The signal space can thus be decomposed as a union of subspaces of increasing dimensions, where Θ_k gathers the cospaces G_J of dimension k. For the 1-D total variation prior, Θ_k is the set of piecewise constant signals with k − 1 steps.
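
    As a concrete illustration of the analysis prior, the sketch below solves a small 1-D total-variation instance of the analysis problem min_x (1/2)||y − Φx||² + λ||D* x||_1 with a generic convex solver (CVXPY). It is a minimal, hedged example rather than the authors' code: the random measurement operator Φ, the noise level and the value of λ are illustrative choices made here.

```python
# Minimal sketch: 1-D total-variation (analysis l1) regularization with CVXPY.
# Phi, the noise level and lambda are illustrative choices, not from the paper.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, Q = 100, 60

# Piecewise-constant ground truth: its finite differences D* x0 are sparse.
x0 = np.concatenate([np.zeros(40), 2.0 * np.ones(30), -1.0 * np.ones(30)])

# Analysis operator D*: forward finite differences, so ||D* x||_1 is the 1-D total variation.
Dstar = np.diff(np.eye(N), axis=0)                    # shape (N-1, N)

Phi = rng.standard_normal((Q, N)) / np.sqrt(Q)        # random measurement operator (illustrative)
y = Phi @ x0 + 0.01 * rng.standard_normal(Q)          # noisy measurements y = Phi x0 + w

lam = 0.05                                            # illustrative regularization parameter
x = cp.Variable(N)
objective = cp.Minimize(0.5 * cp.sum_squares(y - Phi @ x) + lam * cp.norm1(Dstar @ x))
cp.Problem(objective).solve()

print("relative error:", np.linalg.norm(x.value - x0) / np.linalg.norm(x0))
```

    Swapping the finite-difference matrix for another analysis operator D* leaves the rest of the sketch unchanged, which is precisely the appeal of the analysis formulation.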

    Sparse Support Recovery with Non-smooth Loss Functions

    In this paper, we study the support recovery guarantees of underdetermined sparse regression using the ℓ1 norm as a regularizer and a non-smooth loss function for data fidelity. More precisely, we focus in detail on the cases of the ℓ1 and ℓ∞ losses, and contrast them with the usual ℓ2 loss. While these losses are routinely used to account for either sparse (ℓ1 loss) or uniform (ℓ∞ loss) noise models, a theoretical analysis of their performance is still lacking. In this article, we extend the existing theory from the smooth ℓ2 case to these non-smooth cases. We derive a sharp condition which ensures that the support of the vector to recover is stable to small additive noise in the observations, as long as the loss constraint size is tuned proportionally to the noise level. A distinctive feature of our theory is that it also explains what happens when the support is unstable. While the support is not stable anymore, we identify an "extended support" and show that this extended support is stable to small additive noise. To exemplify the usefulness of our theory, we give a detailed numerical analysis of the support stability/instability of compressed sensing recovery with these different losses. This highlights different parameter regimes, ranging from total support stability to progressively increasing support instability. Comment: in Proc. NIPS 201
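
    A quick way to experiment with the three losses is the constrained form min ||x||_1 subject to ||Φx − y||_α ≤ τ for α ∈ {1, 2, ∞}, which a generic convex solver handles directly. The sketch below only illustrates how the loss is swapped; the dimensions, sparsity level and constraint sizes τ are placeholder values, not the regimes analyzed in the paper.

```python
# Sketch: l1 support recovery with l1 / l2 / l_inf data-fidelity constraints.
# Dimensions, sparsity and the constraint sizes tau are illustrative only.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
Q, N, s = 40, 100, 5                       # measurements, dimension, sparsity

x0 = np.zeros(N)
x0[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)

Phi = rng.standard_normal((Q, N)) / np.sqrt(Q)
y = Phi @ x0 + 0.02 * rng.standard_normal(Q)

# Constraint sizes roughly matched to the noise level measured in each norm.
taus = {1: 0.8, 2: 0.15, "inf": 0.06}
for p, tau in taus.items():
    x = cp.Variable(N)
    problem = cp.Problem(cp.Minimize(cp.norm1(x)),
                         [cp.norm(Phi @ x - y, p) <= tau])
    problem.solve()
    support = np.flatnonzero(np.abs(x.value) > 1e-4)
    print(f"l{p} loss -> recovered support: {support}")
```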

    Model Consistency of Partly Smooth Regularizers

    This paper studies least-squares regression penalized with partly smooth convex regularizers. This class of functions is very large and versatile, and allows one to promote solutions conforming to some notion of low complexity. Indeed, such regularizers force the solutions of variational problems to belong to a low-dimensional manifold (the so-called model), which is stable under small perturbations of the function. This property is crucial to make the underlying low-complexity model robust to small noise. We show that a generalized "irrepresentable condition" implies stable model selection under small noise perturbations in the observations and the design matrix, when the regularization parameter is tuned proportionally to the noise level. This condition is shown to be almost a necessary condition. We then show that this condition implies model consistency of the regularized estimator, that is, with a probability tending to one as the number of measurements increases, the regularized estimator belongs to the correct low-dimensional model manifold. This work unifies and generalizes several previous ones, where model consistency is known to hold for sparse, group sparse, total variation and low-rank regularizations.
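
    For the special case of the Lasso, the irrepresentable condition has the classical explicit form ||X_Jcᵀ X_J (X_Jᵀ X_J)⁻¹ sign(β_J)||_∞ < 1, where J is the support of the vector to recover. The sketch below merely evaluates this ℓ1 special case for a random design; it is not the generalized criterion for arbitrary partly smooth regularizers developed in the paper.

```python
# Sketch: checking the classical Lasso irrepresentable condition for a random design.
# This is the well-known l1 special case, used here only to illustrate the concept.
import numpy as np

rng = np.random.default_rng(2)
n, p, s = 200, 50, 4                       # samples, dimension, support size (illustrative)

X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
J = rng.choice(p, size=s, replace=False)   # true support
beta[J] = rng.choice([-1.0, 1.0], size=s) * (1.0 + rng.random(s))

Jc = np.setdiff1d(np.arange(p), J)
XJ, XJc = X[:, J], X[:, Jc]

# ||X_Jc^T X_J (X_J^T X_J)^{-1} sign(beta_J)||_inf < 1 suggests stable support selection.
value = np.linalg.norm(
    XJc.T @ XJ @ np.linalg.solve(XJ.T @ XJ, np.sign(beta[J])),
    ord=np.inf,
)
print(f"irrepresentable condition value: {value:.3f} (stable support selection if < 1)")
```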

    The degrees of freedom of the Lasso for general design matrix

    In this paper, we investigate the degrees of freedom (DOF) of penalized ℓ1 minimization (also known as the Lasso) for linear regression models. We give a closed-form expression of the DOF of the Lasso response. Namely, we show that for any given Lasso regularization parameter λ and any observed data y belonging to a set of full (Lebesgue) measure, the cardinality of the support of a particular solution of the Lasso problem is an unbiased estimator of the degrees of freedom. This is achieved without requiring uniqueness of the Lasso solution. Thus, our result holds true for both the underdetermined and the overdetermined case, where the latter was originally studied by Zou et al. We also show, by providing a simple counterexample, that although the DOF theorem of Zou et al. is correct, their proof contains a flaw, since their divergence formula holds on a different set of full measure than the one that they claim. An effective estimator of the number of degrees of freedom may have several applications, including an objectively guided choice of the regularization parameter in the Lasso through the SURE (Stein Unbiased Risk Estimate) framework. Our theoretical findings are illustrated through several numerical simulations. Comment: A short version appeared in SPARS'11, June 2011. Previously entitled "The degrees of freedom of penalized l1 minimization".
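
    The result suggests a simple plug-in recipe: fit the Lasso, take the support size of a solution as the DOF estimate, and use it inside SURE to select the regularization parameter. The sketch below follows this recipe with scikit-learn's Lasso; note that scikit-learn scales the data-fidelity term by 1/(2n), so its alpha plays the role of a rescaled λ, and the noise standard deviation is assumed known here purely for illustration.

```python
# Sketch: support-size DOF estimate of the Lasso plugged into SURE to pick the parameter.
# sigma is assumed known; the design and the grid of alphas are illustrative choices.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p, s, sigma = 80, 120, 6, 0.5            # underdetermined design; known noise level

X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
beta[rng.choice(p, size=s, replace=False)] = 3.0
y = X @ beta + sigma * rng.standard_normal(n)

best = None
for alpha in np.logspace(-3, 0, 30):         # scikit-learn's alpha ~ rescaled lambda
    coef = Lasso(alpha=alpha, fit_intercept=False, max_iter=50000).fit(X, y).coef_
    dof = np.count_nonzero(coef)             # support size = unbiased DOF estimate
    residual = y - X @ coef
    sure = residual @ residual - n * sigma**2 + 2 * sigma**2 * dof   # SURE of the response
    if best is None or sure < best[0]:
        best = (sure, alpha, dof)

print(f"SURE-selected alpha: {best[1]:.4f}, estimated DOF: {best[2]}")
```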

    GSplit LBI: Taming the Procedural Bias in Neuroimaging for Disease Prediction

    In voxel-based neuroimage analysis, lesion features have been the main focus in disease prediction due to their interpretability with respect to the related diseases. However, we observe that there exists another type of features introduced during the preprocessing steps, which we call "Procedural Bias". Moreover, such bias can be leveraged to improve classification accuracy. Nevertheless, most existing models either under-fit by ignoring the procedural bias or lose interpretability by not differentiating it from lesion features. In this paper, a novel dual-task algorithm, GSplit LBI, is proposed to resolve this problem. By introducing an augmented variable enforced to be structurally sparse through a variable-splitting term, the estimators for prediction and for selecting lesion features can be optimized separately while mutually monitoring each other in an iterative scheme. Empirical experiments have been evaluated on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The advantage of the proposed model is verified by the improved stability of the selected lesion features and better classification results. Comment: conditionally accepted by MICCAI 201
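
    GSplit LBI builds on the Linearized Bregman Iteration (LBI), which couples a gradient step on the loss with a scaled soft-thresholding step and uses the iteration number as the regularization parameter. The sketch below shows one common form of plain LBI for sparse regression, without the variable splitting and dual-task structure of the paper; the parameters kappa and alpha, the stopping time and the toy data are illustrative choices.

```python
# Sketch: plain Linearized Bregman Iteration (LBI) producing a sparse regularization path.
# This is the base iteration behind (G)Split LBI, without the variable splitting;
# kappa, alpha, the number of iterations and the data are illustrative choices.
import numpy as np

rng = np.random.default_rng(4)
n, p, s = 200, 50, 5                                  # samples, dimension, sparsity (toy values)

X = rng.standard_normal((n, p)) / np.sqrt(n)
beta_true = np.zeros(p)
beta_true[rng.choice(p, size=s, replace=False)] = 2.0
y = X @ beta_true + 0.02 * rng.standard_normal(n)

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

kappa = 10.0                                          # larger kappa -> closer to a pure l1 path
lipschitz = np.linalg.norm(X, 2) ** 2 / n             # Lipschitz constant of the loss gradient
alpha = 1.0 / (kappa * lipschitz)                     # step size chosen for stability

z = np.zeros(p)
beta = np.zeros(p)
for k in range(60):                                   # iteration count acts as the regularization
    grad = X.T @ (X @ beta - y) / n                   # gradient of the least-squares loss
    z = z - alpha * grad                              # Bregman (dual) variable update
    beta = kappa * soft_threshold(z, 1.0)             # scaled soft-thresholding keeps iterates sparse

print("support along the path:", np.flatnonzero(np.abs(beta) > 1e-3))
print("true support:          ", np.flatnonzero(beta_true))
```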

    Stable image reconstruction using total variation minimization

    This article presents near-optimal guarantees for accurate and robust image recovery from under-sampled noisy measurements using total variation minimization. In particular, we show that from O(s log(N)) nonadaptive linear measurements, an image can be reconstructed to within the best s-term approximation of its gradient, up to a logarithmic factor, and this factor can be removed by taking slightly more measurements. Along the way, we prove a strengthened Sobolev inequality for functions lying in the null space of suitably incoherent matrices. Comment: 25 pages
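
    The guarantee concerns total-variation-constrained recovery, min_X TV(X) subject to ||A vec(X) − y||_2 ≤ ε, which for a small image can be prototyped with a generic convex solver. The sketch below uses an anisotropic TV for simplicity; the image size, number of measurements and noise bound are toy values chosen here, not the constants from the theorem.

```python
# Sketch: total-variation minimization from a few random Gaussian measurements.
# Image size, measurement count and the noise bound are toy values; anisotropic TV is used.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
n = 16                                    # image is n x n
m = 180                                   # number of nonadaptive Gaussian measurements

# Piecewise-constant test image, so its discrete gradient is sparse.
img = np.zeros((n, n))
img[4:12, 4:12] = 1.0
img[8:14, 2:6] = -0.5
x_true = img.ravel()                      # row-major flattening of the image

# Discrete gradient operators acting on the flattened image.
D1 = np.diff(np.eye(n), axis=0)           # 1-D finite differences, shape (n-1, n)
Dh = np.kron(np.eye(n), D1)               # differences within each row
Dv = np.kron(D1, np.eye(n))               # differences between rows

A = rng.standard_normal((m, n * n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

x = cp.Variable(n * n)
eps = 0.02 * np.sqrt(m)                   # noise bound for the data-fidelity constraint
tv = cp.norm1(Dh @ x) + cp.norm1(Dv @ x)  # anisotropic total variation of the image
problem = cp.Problem(cp.Minimize(tv), [cp.norm(A @ x - y, 2) <= eps])
problem.solve()

print("relative recovery error:",
      np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```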