
    A quasi-Newton proximal splitting method

    A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piecewise-linear nature of the dual problem. The second part of the paper applies this result to the acceleration of convex minimization problems and leads to an elegant quasi-Newton method. The optimization method compares favorably against state-of-the-art alternatives. The algorithm has extensive applications, including signal processing, sparse recovery, and machine learning and classification.
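    To make the idea concrete, here is a minimal NumPy sketch of a diagonally scaled proximal gradient step for the lasso, where the prox in the scaled norm remains a cheap per-coordinate soft-threshold. The function names and the diagonal curvature estimate are illustrative assumptions, not the paper's algorithm, which handles a broader class of scaled norms.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * |.| (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def scaled_prox_grad_l1(A, b, lam, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with a diagonally
    scaled (quasi-Newton-flavored) proximal gradient step.

    With a diagonal metric D = diag(d), the prox of the l1 norm in
    the D-scaled norm stays separable: coordinate i is soft-thresholded
    with threshold lam / d_i.
    """
    n = A.shape[1]
    # Heuristic diagonal curvature estimate (an assumption; a real
    # quasi-Newton scheme would update and safeguard this metric).
    d = np.sum(A**2, axis=0) + 1e-12
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)            # forward (gradient) step in metric D
        x = soft_threshold(x - grad / d, lam / d)  # backward (prox) step
    return x
```

    With a constant metric d equal to the Lipschitz constant of the gradient, this reduces to plain ISTA; the scaled prox is what lets a variable metric be used without losing the cheap per-coordinate update.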

    The effect of cost of credit on money demand: empirical evidence from Malaysia

    This paper investigates the dynamics of the long-run relationship between the cost of credit and real money balances in Malaysia over a sample period spanning 1978:Q1 through 1997:Q4. Johansen-Juselius (1990) likelihood ratio tests support the importance of the cost of credit in the real broad money demand function. The results provide empirical evidence for a long-run relationship between the cost of credit and broad money balances in Malaysia.
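    As a rough illustration of the kind of test used here, the sketch below runs a Johansen cointegration test with statsmodels; the file name, column names, and lag choices are hypothetical, not the paper's specification.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Hypothetical quarterly data file and column names.
df = pd.read_csv("malaysia_quarterly.csv",
                 usecols=["real_m3", "cost_of_credit", "real_income"])

# det_order=0 includes a constant; k_ar_diff=2 lags of first
# differences (both are illustrative modelling choices).
res = coint_johansen(df, det_order=0, k_ar_diff=2)

# lr1 holds the trace statistics; cvt[:, 1] the 5% critical values.
for r, (stat, cv) in enumerate(zip(res.lr1, res.cvt[:, 1])):
    verdict = "reject" if stat > cv else "cannot reject"
    print(f"H0: cointegration rank <= {r}: {stat:.2f} vs {cv:.2f} ({verdict})")
```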

    Iteration-Complexity of a Generalized Forward Backward Splitting Algorithm

    In this paper, we analyze the iteration complexity of the Generalized Forward-Backward (GFB) splitting algorithm, as proposed in \cite{gfb2011}, for minimizing a large class of composite objectives $f + \sum_{i=1}^n h_i$ on a Hilbert space, where $f$ has a Lipschitz-continuous gradient and the $h_i$ are simple (i.e., their proximity operators are easy to compute). We derive iteration-complexity bounds (pointwise and ergodic) for the inexact version of GFB, yielding an approximate solution based on an easily verifiable termination criterion. Along the way, we prove complexity bounds for relaxed and inexact fixed-point iterations built from compositions of nonexpansive averaged operators. These results apply more generally to GFB when used to find a zero of a sum of $n > 0$ maximal monotone operators and a co-coercive operator on a Hilbert space. The theoretical findings are illustrated with experiments on video processing.
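    The GFB iteration itself is compact; the following NumPy sketch shows an exact, fixed-relaxation variant under simplifying assumptions (all names, weights, and parameter choices here are illustrative, and the paper's analysis covers the relaxed, inexact case).

```python
import numpy as np

def gfb(grad_f, proxes, x0, gamma, n_iter=500, lam=1.0):
    """Generalized Forward-Backward sketch for min f(x) + sum_i h_i(x).

    grad_f : gradient of the smooth term f
    proxes : list of callables p(v, t) computing prox_{t * h_i}(v)
    gamma  : step size; lam : relaxation parameter (placeholders).
    """
    n = len(proxes)
    w = np.full(n, 1.0 / n)               # equal averaging weights
    z = [x0.copy() for _ in range(n)]     # one auxiliary variable per h_i
    x = x0.copy()
    for _ in range(n_iter):
        g = grad_f(x)
        for i in range(n):
            # forward step on f, backward (prox) step on h_i
            z[i] = z[i] + lam * (proxes[i](2 * x - z[i] - gamma * g,
                                           gamma / w[i]) - x)
        x = sum(wi * zi for wi, zi in zip(w, z))
    return x
```

    With n = 1 and lam = 1, this reduces to the standard forward-backward (proximal gradient) iteration.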

    Model Consistency of Partly Smooth Regularizers

    This paper studies least-squares regression penalized with partly smooth convex regularizers. This class of functions is large and versatile, allowing one to promote solutions conforming to some notion of low complexity. Indeed, such regularizers force solutions of variational problems to belong to a low-dimensional manifold (the so-called model), which is stable under small perturbations of the function. This property is crucial for making the underlying low-complexity model robust to small noise. We show that a generalized "irrepresentable condition" implies stable model selection under small noise perturbations in the observations and the design matrix, when the regularization parameter is tuned proportionally to the noise level. This condition is shown to be almost necessary. We then show that it implies model consistency of the regularized estimator: with probability tending to one as the number of measurements increases, the regularized estimator belongs to the correct low-dimensional model manifold. This work unifies and generalizes several previous ones, where model consistency is known to hold for sparse, group-sparse, total variation, and low-rank regularizations.
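    In the sparse ($\ell_1$) case, the generalized irrepresentable condition specializes to the classical lasso one, which is easy to check numerically; the sketch below is our own illustration of that special case, not the paper's code.

```python
import numpy as np

def irrepresentable_value(X, support, signs):
    """Classical lasso irrepresentable quantity (the l1 special case;
    names here are illustrative).

    Returns ||X_Sc^T X_S (X_S^T X_S)^{-1} sign(beta_S)||_inf.
    A value < 1 indicates stable support (model) selection under
    small noise, with a suitably tuned regularization parameter.
    """
    S = np.asarray(support)
    Sc = np.setdiff1d(np.arange(X.shape[1]), S)
    XS, XSc = X[:, S], X[:, Sc]
    v = np.linalg.solve(XS.T @ XS, signs)
    return np.linalg.norm(XSc.T @ XS @ v, ord=np.inf)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
print(irrepresentable_value(X, support=[0, 3, 7],
                            signs=np.array([1.0, -1.0, 1.0])))
```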

    Sparse Support Recovery with Non-smooth Loss Functions

    In this paper, we study the support recovery guarantees of underdetermined sparse regression using the $\ell_1$-norm as a regularizer and a non-smooth loss function for data fidelity. More precisely, we focus in detail on the cases of $\ell_1$ and $\ell_\infty$ losses, and contrast them with the usual $\ell_2$ loss. While these losses are routinely used to account for either sparse ($\ell_1$ loss) or uniform ($\ell_\infty$ loss) noise models, a theoretical analysis of their performance is still lacking. In this article, we extend the existing theory from the smooth $\ell_2$ case to these non-smooth cases. We derive a sharp condition which ensures that the support of the vector to recover is stable under small additive noise in the observations, as long as the loss constraint size is tuned proportionally to the noise level. A distinctive feature of our theory is that it also explains what happens when the support is unstable: while the support itself is no longer stable, we identify an "extended support" and show that this extended support is stable under small additive noise. To exemplify the usefulness of our theory, we give a detailed numerical analysis of the support stability/instability of compressed sensing recovery with these different losses. This highlights different parameter regimes, ranging from total support stability to progressively increasing support instability.
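    For intuition, the $\ell_\infty$-loss variant can be posed as a linear program; the following SciPy sketch is one such formulation, with the function name, epsilon tuning, and solver choice being our own assumptions rather than the authors'.

```python
import numpy as np
from scipy.optimize import linprog

def l1_linf_recovery(A, y, eps):
    """Sketch: min ||x||_1  s.t.  ||A x - y||_inf <= eps, as an LP.

    Variables are (x, t) with |x_i| <= t_i, so minimizing sum(t)
    minimizes the l1 norm of x under the uniform-noise constraint.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])  # objective: sum(t)
    I = np.eye(n)
    # Rows encode: x - t <= 0, -x - t <= 0, Ax <= y + eps, -Ax <= eps - y.
    A_ub = np.block([[I, -I],
                     [-I, -I],
                     [A, np.zeros((m, n))],
                     [-A, np.zeros((m, n))]])
    b_ub = np.concatenate([np.zeros(2 * n), y + eps, eps - y])
    bounds = [(None, None)] * n + [(0, None)] * n  # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n]
```

    Tuning eps proportionally to the noise level mirrors the scaling under which the paper's stability condition applies.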