
    Sparse Prediction with the k-Support Norm

    We derive a novel norm that corresponds to the tightest convex relaxation of sparsity combined with an $\ell_2$ penalty. We show that this new k-support norm provides a tighter relaxation than the elastic net and is thus a good replacement for the Lasso or the elastic net in sparse prediction problems. Through the study of the k-support norm, we also bound the looseness of the elastic net, thus shedding new light on it and providing justification for its use.
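
    As a concrete reference point, the k-support norm has a closed form: sort the absolute values of w in decreasing order, find the unique r in {0, ..., k-1} with |w|_(k-r-1) > (1/(r+1)) Σ_{i≥k-r} |w|_(i) ≥ |w|_(k-r) (taking |w|_(0) = +∞), and return (Σ_{i<k-r} |w|_(i)² + (Σ_{i≥k-r} |w|_(i))²/(r+1))^{1/2}. The following is a minimal NumPy sketch of that formula; the function name and the defensive fallback are illustrative additions, not taken from the paper.

    ```python
    import numpy as np

    def k_support_norm(w, k):
        """k-support norm via its closed form: find the unique r in
        {0, ..., k-1} with
            |w|_(k-r-1) > (1/(r+1)) * sum_{i >= k-r} |w|_(i) >= |w|_(k-r)
        (where |w|_(0) = +inf), then return
            sqrt( sum_{i < k-r} |w|_(i)^2 + (sum_{i >= k-r} |w|_(i))^2 / (r+1) ).
        """
        z = np.sort(np.abs(np.asarray(w, dtype=float)))[::-1]  # decreasing
        tails = np.cumsum(z[::-1])[::-1]                       # tails[i] = sum(z[i:])
        for r in range(k):
            head = z[k - r - 2] if k - r - 2 >= 0 else np.inf
            avg_tail = tails[k - r - 1] / (r + 1)
            if head > avg_tail >= z[k - r - 1]:
                return np.sqrt(np.sum(z[:k - r - 1] ** 2)
                               + tails[k - r - 1] ** 2 / (r + 1))
        return np.sqrt(tails[0] ** 2 / k)  # r = k-1; reached only via rounding

    # Sanity checks: k=1 recovers the l1 norm, k=len(w) recovers the l2 norm,
    # matching the interpolation between Lasso-like and ridge-like behavior.
    w = np.array([3.0, -1.0, 0.5, 0.0])
    assert np.isclose(k_support_norm(w, 1), np.abs(w).sum())
    assert np.isclose(k_support_norm(w, len(w)), np.linalg.norm(w))
    ```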

    Generalized Dantzig Selector: Application to the k-support norm

    We propose a Generalized Dantzig Selector (GDS) for linear models, in which any norm encoding the parameter structure can be leveraged for estimation. We investigate both computational and statistical aspects of the GDS. Based on the conjugate proximal operator, a flexible inexact ADMM framework is designed for solving the GDS, and non-asymptotic high-probability bounds are established on the estimation error; these rely on the Gaussian widths of the unit norm ball and of a suitable set encompassing the estimation error. Further, we consider a non-trivial example of the GDS using the k-support norm. We derive an efficient method to compute the proximal operator for the k-support norm, since existing methods are inapplicable in this setting. For the statistical analysis, we provide upper bounds for the Gaussian widths needed in the GDS analysis, yielding the first statistical recovery guarantee for estimation with the k-support norm. The experimental results confirm our theoretical analysis.
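
    To make the GDS template concrete, here is a small sketch of its best-known special case: with the $\ell_1$ norm (whose dual is $\ell_\infty$), the program min ‖θ‖ s.t. ‖Xᵀ(y − Xθ)‖* ≤ λ reduces to the classic Dantzig selector. The sketch uses the off-the-shelf cvxpy modeling library rather than the paper's inexact ADMM solver, and the toy data and the λ heuristic are illustrative assumptions.

    ```python
    import numpy as np
    import cvxpy as cp

    # Toy problem (illustrative, not from the paper): s-sparse ground truth.
    rng = np.random.default_rng(0)
    n, p, s, sigma = 60, 120, 5, 0.1
    X = rng.standard_normal((n, p))
    theta_star = np.zeros(p)
    theta_star[:s] = 1.0
    y = X @ theta_star + sigma * rng.standard_normal(n)

    # GDS template: minimize the structure-inducing norm subject to a
    # dual-norm bound on the correlation of the residual. With the l1 norm
    # this is exactly the classic Dantzig selector.
    lam = 2.0 * sigma * np.sqrt(n * np.log(p))  # heuristic scaling, an assumption
    theta = cp.Variable(p)
    problem = cp.Problem(cp.Minimize(cp.norm1(theta)),
                         [cp.norm_inf(X.T @ (y - X @ theta)) <= lam])
    problem.solve()
    print("support recovered:", np.flatnonzero(np.abs(theta.value) > 1e-3))
    ```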

    A Note on k-support Norm Regularized Risk Minimization

    The k-support norm has recently been introduced to perform correlated sparsity regularization. Although Argyriou et al. reported experiments only with the squared loss, here we apply the norm to several other commonly used settings, resulting in novel machine learning algorithms with interesting and familiar limit cases. Source code for the algorithms described here is available. A generic sketch of this "same regularizer, different loss" pattern follows.
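
    The pattern is easy to express as a proximal-gradient loop in which the loss gradient is swapped while the regularizer's proximal map stays fixed. The sketch below is an illustration, not the note's actual code: prox_reg is a placeholder for a k-support proximal map such as the one derived in the GDS paper above, and the logistic gradient is shown as one example of a swappable loss.

    ```python
    import numpy as np

    def prox_grad(grad_loss, prox_reg, w0, step, n_iter=500):
        """Generic proximal-gradient loop. Swapping grad_loss changes the
        setting (squared, logistic, ...) while prox_reg stays fixed, e.g. a
        k-support-norm proximal map (left as a placeholder here)."""
        w = np.array(w0, dtype=float)
        for _ in range(n_iter):
            w = prox_reg(w - step * grad_loss(w), step)
        return w

    def logistic_grad(X, y):
        """Gradient of the mean logistic loss, with labels y in {-1, +1}."""
        def grad(w):
            margins = y * (X @ w)
            return -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y)
        return grad

    # Usage sketch: with prox_reg = identity this is plain gradient descent;
    # plugging in a k-support prox yields the regularized variants above.
    identity_prox = lambda v, t: v
    ```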