Two-Layer Feature Reduction for Sparse-Group Lasso via Decomposition of Convex Sets
Sparse-Group Lasso (SGL) has been shown to be a powerful regression technique
for simultaneously discovering group and within-group sparse patterns by using
a combination of the L1 and L2,1 norms. However, in large-scale
applications, the complexity of the regularizers entails great computational
challenges. In this paper, we propose a novel Two-Layer Feature REduction
method (TLFre) for SGL via a decomposition of its dual feasible set. The
two-layer reduction is able to quickly identify the inactive groups and the
inactive features, respectively, which are guaranteed to be absent from the
sparse representation and can be removed from the optimization. Existing
feature reduction methods are only applicable for sparse models with one
sparsity-inducing regularizer. To our best knowledge, TLFre is the first one
that is capable of dealing with multiple sparsity-inducing regularizers.
Moreover, TLFre has a very low computational cost and can be integrated with
any existing solvers. We also develop a screening method---called DPC
(DecomPosition of Convex set)---for the nonnegative Lasso problem. Experiments
on both synthetic and real data sets show that TLFre and DPC improve the
efficiency of SGL and nonnegative Lasso by several orders of magnitude.
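The SGL regularizer referenced above combines a plain L1 penalty on all coefficients with an L2,1 (group-wise L2) penalty. A minimal sketch of evaluating that penalty is shown below; the function name, the grouping format (a list of index arrays), and the two weights lam1/lam2 are illustrative choices, not the paper's notation.

```python
import numpy as np

def sgl_penalty(beta, groups, lam1, lam2):
    """Sparse-Group Lasso penalty: lam1 * ||beta||_1 + lam2 * sum_g ||beta_g||_2.

    beta   : coefficient vector
    groups : list of index arrays, one per (non-overlapping) group
    """
    l1_term = lam1 * np.abs(beta).sum()                              # element-wise sparsity
    l21_term = lam2 * sum(np.linalg.norm(beta[g]) for g in groups)   # group-wise sparsity
    return l1_term + l21_term
```

For example, with beta = [3, 4, 0, 0] and groups {0,1} and {2,3}, the L1 term is 7 and the L2,1 term is 5 (the second group contributes 0), so the penalty with unit weights is 12. The L2,1 term is what zeroes out whole groups, while the L1 term additionally sparsifies within the surviving groups.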
Forward-Backward Greedy Algorithms for General Convex Smooth Functions over A Cardinality Constraint
We consider forward-backward greedy algorithms for solving sparse feature
selection problems with general convex smooth functions. A state-of-the-art
greedy method, the Forward-Backward greedy algorithm (FoBa-obj), requires solving
a large number of optimization problems and is therefore not scalable to
large-size problems. The FoBa-gdt algorithm, which uses the gradient
information for feature selection at each forward iteration, significantly
improves the efficiency of FoBa-obj. In this paper, we systematically analyze
the theoretical properties of both forward-backward greedy algorithms. Our main
contributions are: 1) We derive better theoretical bounds than existing
analyses regarding FoBa-obj for general smooth convex functions; 2) We show
that FoBa-gdt achieves the same theoretical performance as FoBa-obj under the
same condition: restricted strong convexity condition. Our new bounds are
consistent with the bounds of a special case (least squares) and fill a
previously existing theoretical gap for general convex smooth functions; 3) We
show that the restricted strong convexity condition is satisfied if the number
of independent samples is more than O(k log d), where k is the
sparsity number and d is the dimension of the variable; 4) We apply FoBa-gdt
(with the conditional random field objective) to the sensor selection problem
for human indoor activity recognition and our results show that FoBa-gdt
outperforms other methods (including the ones based on forward greedy selection
and L1-regularization).
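The forward-backward scheme described in the abstract can be sketched for the least-squares objective as follows. This is an illustrative reconstruction, not the paper's implementation: the forward step uses the gradient magnitude to pick a feature (the FoBa-gdt idea), the backward step drops a selected feature when its removal costs less than a fraction nu of the last forward gain, and the threshold nu and stopping tolerance are our assumptions.

```python
import numpy as np

def loss(X, y, beta):
    """Least-squares objective 0.5 * ||y - X beta||^2 / n."""
    r = y - X @ beta
    return 0.5 * (r @ r) / len(y)

def foba_gdt_ls(X, y, k_max, nu=0.5):
    """Sketch of a gradient-based forward-backward greedy (FoBa-gdt style)
    for least squares. Returns the selected support and coefficients."""
    n, d = X.shape
    support, beta = [], np.zeros(d)

    def refit(S):
        # Re-estimate coefficients restricted to support S.
        b = np.zeros(d)
        if S:
            b[S] = np.linalg.lstsq(X[:, S], y, rcond=None)[0]
        return b

    while len(support) < k_max:
        grad = X.T @ (X @ beta - y) / n      # gradient of the loss
        grad[support] = 0.0                  # consider only unselected features
        j = int(np.argmax(np.abs(grad)))
        if abs(grad[j]) < 1e-10:             # no feature reduces the loss
            break
        old = loss(X, y, beta)
        support.append(j)                    # forward step: add feature j
        beta = refit(support)
        gain = old - loss(X, y, beta)
        # Backward steps: drop a feature if removing it costs < nu * gain.
        while len(support) > 1:
            inc, i = min(
                (loss(X, y, refit([s for s in support if s != i])) - loss(X, y, beta), i)
                for i in support
            )
            if inc >= nu * gain:
                break
            support.remove(i)
            beta = refit(support)
    return support, beta
```

On noiseless data generated from a 2-sparse ground truth, this sketch recovers the true support; the backward step exists to undo early greedy mistakes that a pure forward method (like the forward greedy baselines mentioned above) cannot correct.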