Multi-stage Convex Relaxation for Feature Selection
A number of recent works have studied the effectiveness of feature selection using
Lasso. It is known that, under the restricted isometry property (RIP), Lasso
does not generally lead to exact recovery of the set of nonzero
coefficients, due to the looseness of the convex relaxation. This paper considers
the feature selection property of nonconvex regularization, where the solution
is given by a multi-stage convex relaxation scheme. Under appropriate
conditions, we show that the local solution obtained by this procedure recovers
the set of nonzero coefficients without suffering from the bias of the Lasso
relaxation, which complements the parameter estimation results for this procedure.
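The multi-stage idea can be illustrated with the capped-L1 penalty, a common nonconvex choice in this line of work: each stage solves a weighted Lasso whose weights are zeroed on coefficients that the previous stage estimated above a threshold theta, which removes the Lasso shrinkage bias on the selected set. The following is a minimal numpy sketch of that scheme (our own illustration under these assumptions; the coordinate-descent solver, function names, and parameters are not the paper's code):

```python
import numpy as np

def weighted_lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for 0.5*||Xw - y||^2 + sum_j lam[j]*|w_j|,
    where lam is a per-coefficient penalty vector."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    r = y.copy()                          # residual y - Xw, kept up to date
    for _ in range(n_iter):
        for j in range(d):
            r += X[:, j] * w[j]           # partial residual excluding coord j
            rho = X[:, j] @ r
            # soft-thresholding update for coordinate j
            w[j] = np.sign(rho) * max(abs(rho) - lam[j], 0.0) / col_sq[j]
            r -= X[:, j] * w[j]
    return w

def multi_stage_relaxation(X, y, lam, theta, n_stages=5):
    """Multi-stage convex relaxation for the capped-L1 penalty
    sum_j min(|w_j|, theta): coefficients estimated above theta in the
    previous stage are no longer penalized in the next stage."""
    d = X.shape[1]
    weights = np.full(d, lam)             # stage 1 is the ordinary Lasso
    w = np.zeros(d)
    for _ in range(n_stages):
        w = weighted_lasso_cd(X, y, weights)
        weights = np.where(np.abs(w) >= theta, 0.0, lam)
    return w
```

On noiseless data, coefficients selected in stage 1 become unpenalized in later stages, so the final solution is free of the shrinkage bias that the one-stage Lasso would carry.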
Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization
Hard Thresholding Pursuit (HTP) is an iterative greedy selection procedure
for finding sparse solutions of underdetermined linear systems. This method has
been shown to have strong theoretical guarantees and impressive numerical
performance. In this paper, we generalize HTP from compressive sensing to a
generic problem setup of sparsity-constrained convex optimization. The proposed
algorithm iterates between a standard gradient descent step and a hard
thresholding step, with or without debiasing. We prove that our method enjoys
strong guarantees analogous to those of HTP in terms of rate of convergence and
parameter estimation accuracy. Numerical evidence shows that our method is
superior to state-of-the-art greedy selection methods on sparse logistic
regression and sparse precision matrix estimation tasks.
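The iteration described above, specialized to a least-squares loss, can be sketched as follows. This is a minimal illustration of the gradient-descent-plus-hard-thresholding pattern with debiasing, not the authors' implementation; the function name, fixed step-size choice, and stopping rule are our assumptions:

```python
import numpy as np

def grad_htp(X, y, k, step=None, n_iter=100):
    """Gradient hard thresholding with debiasing for the sparse
    least-squares problem: min 0.5*||Xw - y||^2 s.t. ||w||_0 <= k."""
    n, d = X.shape
    if step is None:
        # conservative step size: inverse of the loss's Lipschitz constant
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)                  # gradient of the smooth loss
        w_tmp = w - step * grad                   # standard gradient step
        support = np.argsort(np.abs(w_tmp))[-k:]  # hard threshold: keep k largest
        w = np.zeros(d)
        # debiasing: re-fit coefficients restricted to the selected support
        w[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    return w
```

For a generic smooth loss the least-squares gradient and the debiasing least-squares solve would be replaced by the loss's gradient and a restricted convex minimization, which is the generalization the abstract describes.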