Two-Layer Feature Reduction for Sparse-Group Lasso via Decomposition of Convex Sets
Sparse-Group Lasso (SGL) has been shown to be a powerful regression technique
for simultaneously discovering group and within-group sparse patterns by using
a combination of the $\ell_1$ and $\ell_{2,1}$ norms. However, in large-scale
applications, the complexity of the regularizers entails great computational
challenges. In this paper, we propose a novel Two-Layer Feature REduction
method (TLFre) for SGL via a decomposition of its dual feasible set. The
two-layer reduction is able to quickly identify the inactive groups and the
inactive features, respectively, which are guaranteed to be absent from the
sparse representation and can be removed from the optimization. Existing
feature reduction methods are applicable only to sparse models with a single
sparsity-inducing regularizer. To the best of our knowledge, TLFre is the first
method capable of handling multiple sparsity-inducing regularizers.
Moreover, TLFre has a very low computational cost and can be integrated with
any existing solver. We also develop a screening method, called DPC
(DecomPosition of Convex set), for the nonnegative Lasso problem. Experiments
on both synthetic and real data sets show that TLFre and DPC improve the
efficiency of SGL and nonnegative Lasso by several orders of magnitude.
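The decomposition in the title refers to the fact that the SGL dual-norm ball for each group is a Minkowski sum of an $\ell_\infty$ ball and a scaled $\ell_2$ ball, which is what yields one test per layer. Below is a minimal sketch of a two-layer screen in that spirit, assuming the penalty $\lambda(\alpha\|\beta\|_1 + (1-\alpha)\sum_g \sqrt{|g|}\,\|\beta_g\|_2)$. It uses a basic projection (nonexpansiveness) sphere around a known dual point rather than TLFre's tighter decomposition-based region, and the function names and the crude $\lambda_{\max}$ bound are illustrative, not from the paper.

```python
import numpy as np

def soft(z, t):
    """Elementwise soft-thresholding: the l_inf-ball part of the decomposed test."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def two_layer_screen(X, y, groups, lam, lam0, theta0, alpha=0.5):
    """Return (inactive group ids, inactive feature ids) guaranteed zero at lam.

    theta0 must be the exact dual optimum at lam0 >= lam. The dual optimum is the
    projection of y/lam onto the dual feasible set; projections are nonexpansive,
    so theta*(lam) lies in a sphere of radius r around theta0.
    """
    r = np.linalg.norm(y) * abs(1.0 / lam - 1.0 / lam0)
    dead_groups, dead_feats = [], []
    for gi, g in enumerate(groups):
        Xg = X[:, g]
        corr = Xg.T @ theta0
        # Layer 1 (groups): beta_g = 0 iff ||soft(Xg^T theta*, alpha)||_2 is at
        # most (1 - alpha) * sqrt(|g|); theta -> soft(Xg^T theta, alpha) is
        # ||Xg||_2-Lipschitz, which bounds the left side over the sphere.
        group_ub = np.linalg.norm(soft(corr, alpha)) + r * np.linalg.norm(Xg, 2)
        if group_ub < (1 - alpha) * np.sqrt(len(g)):
            dead_groups.append(gi)
            dead_feats.extend(g)
            continue
        # Layer 2 (features): within a surviving group, |x_j^T theta*| < alpha
        # forces beta_j = 0; bound |x_j^T theta*| over the same sphere.
        feat_ub = np.abs(corr) + r * np.linalg.norm(Xg, axis=0)
        dead_feats.extend(np.asarray(g)[feat_ub < alpha])
    return dead_groups, dead_feats

# Demo: any lam0 >= lam_max gives beta = 0 and dual optimum theta0 = y / lam0;
# this crude lam0 suffices for illustration (TLFre derives the exact lam_max).
rng = np.random.default_rng(0)
X, y = rng.standard_normal((200, 500)), rng.standard_normal(200)
groups = [np.arange(i, i + 10) for i in range(0, 500, 10)]
alpha = 0.5
lam0 = max(np.linalg.norm(X[:, g].T @ y) / ((1 - alpha) * np.sqrt(len(g)))
           for g in groups)
dead_g, dead_f = two_layer_screen(X, y, groups, 0.9 * lam0, lam0, y / lam0, alpha)
print(f"removed {len(dead_g)} of {len(groups)} groups, {len(dead_f)} of 500 features")
```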
Safe Screening With Variational Inequalities and Its Application to LASSO
Sparse learning techniques have been routinely used for feature selection as
the resulting model usually has a small number of non-zero entries. Safe
screening, which eliminates the features that are guaranteed to have zero
coefficients for a certain value of the regularization parameter, is a
technique for improving computational efficiency. Safe screening is gaining
increasing attention because 1) solving sparse learning formulations usually has
a high computational cost, especially when the number of features is large, and
2) one needs to try several regularization parameters to select a suitable
model. In this paper, we propose an approach called "Sasvi" (Safe screening
with variational inequalities). Sasvi makes use of the variational inequality
that provides the sufficient and necessary optimality condition for the dual
problem. Several existing approaches for Lasso screening can be cast as
relaxed versions of the proposed Sasvi; thus, Sasvi provides a stronger safe
screening rule. We further study the monotone properties of Sasvi for Lasso,
based on which a sure removal regularization parameter can be identified for
each feature. Experimental results on both synthetic and real data sets are
reported to demonstrate the effectiveness of the proposed Sasvi for Lasso
screening.

Comment: Accepted by International Conference on Machine Learning (ICML) 2014.
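To make the variational-inequality view concrete: the Lasso dual optimum $\theta^*(\lambda)$ is the projection of $y/\lambda$ onto the dual feasible set, so the variational inequality $\langle y/\lambda - \theta^*, \theta_0 - \theta^* \rangle \le 0$ holds for any dual-feasible $\theta_0$, placing $\theta^*$ in the ball with diameter $[\theta_0, y/\lambda]$. The sketch below implements only this ball test, i.e. a relaxed rule of the kind the abstract says Sasvi subsumes; the full Sasvi rule also intersects the half-space from the variational inequality at $\lambda_0$ and solves the resulting maximization in closed form. The function name and demo data are illustrative.

```python
import numpy as np

def ball_screen(X, y, lam, theta0):
    """Features guaranteed inactive at lam, given any dual-feasible theta0
    (here: the exact dual optimum at a larger lam0)."""
    c = (theta0 + y / lam) / 2.0                 # ball center
    r = np.linalg.norm(y / lam - theta0) / 2.0   # ball radius
    # KKT: |x_j^T theta*| < 1 forces beta_j = 0; bound the left side over the ball.
    scores = np.abs(X.T @ c) + r * np.linalg.norm(X, axis=0)
    return np.where(scores < 1.0)[0]

# Demo on random data; at lam_max = max_j |x_j^T y| the dual optimum is y / lam_max.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((100, 500)), rng.standard_normal(100)
lam_max = np.max(np.abs(X.T @ y))
inactive = ball_screen(X, y, 0.8 * lam_max, y / lam_max)
print(f"screened out {inactive.size} of {X.shape[1]} features")
```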