Despite widespread adoption in practice, guarantees for the LASSO and Group
LASSO are strikingly lacking in settings beyond statistical problems, and these
algorithms are usually considered to be heuristics in the context of sparse
convex optimization on deterministic inputs. We give the first recovery
guarantees for the Group LASSO for sparse convex optimization with
vector-valued features. We show that if a sufficiently large Group LASSO
regularization is applied when minimizing a strictly convex function l, then
the minimizer is a sparse vector supported on vector-valued features with the
largest $\ell_2$ norm of the gradient. Thus, repeating this procedure selects
the same set of features as the Orthogonal Matching Pursuit algorithm, which
admits recovery guarantees for any function l with restricted strong
convexity and smoothness via weak submodularity arguments.
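To make the regularization statement concrete, here is a minimal sketch in
LaTeX of one round of the procedure, under notation assumed for this
illustration (disjoint feature groups $S_1, \dots, S_m$, a penalty strength
$\lambda$, and the gradient taken at the origin in the first round):

```latex
% Sketch under assumed notation: groups S_1,...,S_m, penalty strength \lambda.
\[
  x^* \in \operatorname*{argmin}_{x \in \mathbb{R}^n}
    \; l(x) + \lambda \sum_{i=1}^{m} \lVert x_{S_i} \rVert_2,
  \qquad
  \operatorname{supp}(x^*) \subseteq S_{i^*}
  \;\text{ for some }\;
  i^* \in \operatorname*{argmax}_{i \in [m]}
    \lVert \nabla l(0)_{S_i} \rVert_2,
\]
% for all sufficiently large \lambda (gradient at 0 in the first round).
```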
This equivalence answers open questions of Tibshirani et al. and Yasuda
et al. Our result is the first to
theoretically explain the empirical success of the Group LASSO for convex
functions on general input instances, assuming only restricted strong
convexity and smoothness. Our result also generalizes provable guarantees for
the Sequential Attention algorithm of Yasuda et al., a feature selection
algorithm inspired by the attention mechanism.
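As an illustration of the repeated procedure and its OMP-style selection
rule, here is a hypothetical Python sketch (not the paper's code); `loss`,
`grad`, the index groups `groups`, and the helper `embed` are all assumptions
introduced for this example:

```python
import numpy as np
from scipy.optimize import minimize

def embed(z, support, n):
    """Place a restricted solution z back into an n-dimensional vector."""
    x = np.zeros(n)
    x[support] = z
    return x

def omp_style_group_selection(loss, grad, groups, n, k):
    """Repeatedly pick the group whose gradient block has the largest l2
    norm -- the same choice a large Group LASSO penalty makes -- then
    re-fit the loss over the coordinates selected so far."""
    selected = []
    x = np.zeros(n)
    for _ in range(k):
        g = grad(x)
        # Score each unselected group by the l2 norm of its gradient block.
        scores = {i: np.linalg.norm(g[groups[i]])
                  for i in range(len(groups)) if i not in selected}
        selected.append(max(scores, key=scores.get))
        # Re-fit: minimize the loss over the coordinates selected so far.
        support = np.concatenate([groups[i] for i in selected])
        res = minimize(lambda z: loss(embed(z, support, n)),
                       np.zeros(len(support)))
        x = embed(res.x, support, n)
    return selected, x
```

Under restricted strong convexity and smoothness of `loss`, this greedy loop
is exactly the setting where the weak submodularity arguments apply.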
As an application of our result, we give new results for the column subset
selection problem, which is well-studied when the loss is the Frobenius norm or
other entrywise matrix losses. We give the first result for this problem for
general loss functions, requiring only restricted strong convexity and
smoothness.
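For concreteness, the column subset selection template can be instantiated as
follows. This hypothetical sketch uses the well-studied Frobenius loss only
because it is easy to verify; the result above is that the same greedy
template extends to general losses with restricted strong convexity and
smoothness:

```python
import numpy as np

def greedy_css(A, k):
    """Column subset selection via the same greedy rule, instantiated with
    l(W) = ||A @ W - A||_F^2; row i of W is the vector-valued feature tied
    to column i of A."""
    n = A.shape[1]
    W = np.zeros((n, n))
    selected = []
    for _ in range(k):
        G = 2 * A.T @ (A @ W - A)          # gradient of l at the current W
        scores = [-np.inf if i in selected else np.linalg.norm(G[i])
                  for i in range(n)]
        selected.append(int(np.argmax(scores)))
        # Re-fit: least-squares approximation of A from the chosen columns.
        X, *_ = np.linalg.lstsq(A[:, selected], A, rcond=None)
        W = np.zeros((n, n))
        W[selected, :] = X
    return selected
```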