Sparse Prediction with the k-Support Norm
We derive a novel norm that corresponds to the tightest convex relaxation of
sparsity combined with an $\ell_2$ penalty. We show that this new {\em
$k$-support norm} provides a tighter relaxation than the elastic net and is
thus a good replacement for the Lasso or the elastic net in sparse prediction
problems. Through the study of the $k$-support norm, we also bound the
looseness of the elastic net, thus shedding new light on it and providing
justification for its use.
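The abstract does not state the norm itself; as a reference point (this is the formulation usually quoted in the sparse-estimation literature, so treat the notation as an assumption rather than part of the abstract), the $k$-support norm of $w \in \mathbb{R}^d$ can be written as
\[
\|w\|^{sp}_{k} \;=\; \min\Big\{ \sum_{I \in \mathcal{G}_k} \|v_I\|_2 \;:\; \operatorname{supp}(v_I) \subseteq I,\ \sum_{I \in \mathcal{G}_k} v_I = w \Big\},
\]
where $\mathcal{G}_k$ is the collection of subsets of $\{1,\dots,d\}$ with at most $k$ elements. Its dual norm is the $\ell_2$ norm of the $k$ largest-magnitude entries of a vector, which is how the norm interpolates between the $\ell_1$ norm ($k=1$) and the $\ell_2$ norm ($k=d$).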
Generalized Dantzig Selector: Application to the k-support norm
We propose a Generalized Dantzig Selector (GDS) for linear models, in which
any norm encoding the parameter structure can be leveraged for estimation. We
investigate both computational and statistical aspects of the GDS. Based on
the conjugate proximal operator, a flexible inexact ADMM framework is designed
for solving the GDS, and non-asymptotic high-probability bounds are established
on the estimation error, which rely on the Gaussian width of the unit norm ball
and of a suitable set encompassing the estimation error. Further, we consider a
non-trivial example of the GDS using the $k$-support norm. We derive an efficient
method to compute the proximal operator for the $k$-support norm, since existing
methods are inapplicable in this setting. For statistical analysis, we provide
upper bounds for the Gaussian widths needed in the GDS analysis, yielding the
first statistical recovery guarantee for estimation with the $k$-support norm.
The experimental results confirm our theoretical analysis.
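The abstract mentions an efficient proximal operator for the $k$-support norm but does not reproduce the algorithm. The Python sketch below only evaluates the norm itself (not the paper's proximal method), using the sorted-coordinate expression commonly quoted for this norm; the function name, the NumPy dependency, and the indexing conventions are my own choices for illustration.

```python
import numpy as np

def k_support_norm(w, k):
    """Evaluate the k-support norm of a vector (illustration only).

    Uses the sorted-coordinate expression: with z = |w| sorted in
    non-increasing order, find r in {0, ..., k-1} such that
        z_{k-r-1} > (1/(r+1)) * sum_{i >= k-r} z_i >= z_{k-r}   (1-based),
    with the convention z_0 = +inf; the norm is then
        sqrt( sum_{i < k-r} z_i^2 + (1/(r+1)) * (sum_{i >= k-r} z_i)^2 ).
    """
    z = np.sort(np.abs(np.asarray(w, dtype=float)))[::-1]
    d = z.size
    if not 1 <= k <= d:
        raise ValueError("k must be in {1, ..., len(w)}")
    for r in range(k):
        tail_mean = z[k - r - 1:].sum() / (r + 1)        # averaged tail mass
        upper = np.inf if r == k - 1 else z[k - r - 2]   # z_{k-r-1} in 1-based terms
        if upper > tail_mean >= z[k - r - 1]:
            head = z[: k - r - 1]
            return float(np.sqrt(head @ head + (r + 1) * tail_mean ** 2))
    return float(np.linalg.norm(z))  # not expected to be reached for valid inputs

# Quick sanity check: k = 1 recovers the l1 norm, k = len(w) recovers the l2 norm.
w = np.array([3.0, -1.0, 0.5, 0.0])
print(k_support_norm(w, 1), np.abs(w).sum())
print(k_support_norm(w, w.size), np.linalg.norm(w))
```

The two printed comparisons are the standard sanity check on such an implementation: the $k$-support norm collapses to the $\ell_1$ norm at $k=1$ and to the $\ell_2$ norm at $k=d$.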
A Note on k-support Norm Regularized Risk Minimization
The k-support norm has recently been introduced to perform correlated
sparsity regularization. Although Argyriou et al. only reported experiments
using the squared loss, here we apply it to several other commonly used settings,
resulting in novel machine learning algorithms with interesting and familiar
limit cases. Source code for the algorithms described here is available.
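As an illustration of the setting (the specific losses and penalty form are not spelled out in the abstract, so the display below is an assumption about the generic objective rather than a statement of the paper's algorithms), $k$-support norm regularized risk minimization over data $(x_i, y_i)$, $i = 1, \dots, n$, takes the form
\[
\min_{w \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} \ell\big(y_i, \langle w, x_i \rangle\big) \; + \; \lambda \, \big(\|w\|^{sp}_{k}\big)^2 ,
\]
where $\ell$ is a loss such as the squared, logistic, or hinge loss and $\lambda > 0$ is a regularization parameter; swapping the squared loss used by Argyriou et al. for other losses is what yields the algorithms referred to above.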