We derive a novel norm that corresponds to the tightest convex relaxation of
sparsity combined with an $\ell_2$ penalty. We show that this new {\em
$k$-support norm} provides a tighter relaxation than the elastic net and is
thus a good replacement for the Lasso or the elastic net in sparse prediction
problems. Through the study of the $k$-support norm, we also bound the
looseness of the elastic net, thus shedding new light on it and providing
justification for its use.