Individualized Rank Aggregation using Nuclear Norm Regularization
In recent years, rank aggregation has received significant attention from the
machine learning community. The goal of such a problem is to combine the
(partially revealed) preferences over objects of a large population into a
single, relatively consistent ordering of those objects. In many cases,
however, a single ranking is not what we want; instead, we may prefer
individual rankings. We study a version of this problem known as collaborative
ranking, in which we assume that individual users provide us with pairwise
preferences (for example, purchasing one item over another). From those
preferences we wish to obtain rankings of items that the users have not had an
opportunity to explore. The results here have an interesting connection to the
standard matrix completion problem. We provide a theoretical justification for
a nuclear norm regularized optimization procedure, along with high-dimensional
scaling results that show how the error in estimating user preferences behaves
as the number of observations increases.
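As a rough illustration of this style of estimator (not the paper's exact procedure), the sketch below fits a user-by-item score matrix to pairwise preferences with a logistic loss plus a nuclear norm penalty, solved by proximal gradient descent whose proximal step is singular value thresholding. All function names, the step size, and the regularization weight are illustrative assumptions.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: the proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def collaborative_rank(comparisons, n_users, n_items,
                       lam=0.1, step=0.1, n_iters=200):
    # comparisons: list of (u, i, j) meaning user u prefers item i over item j.
    # Returns a score matrix Theta; ranking user u's unexplored items amounts
    # to sorting row Theta[u] in decreasing order.
    Theta = np.zeros((n_users, n_items))
    for _ in range(n_iters):
        grad = np.zeros_like(Theta)
        for u, i, j in comparisons:
            # Gradient of the pairwise logistic loss
            # log(1 + exp(-(Theta[u, i] - Theta[u, j]))).
            p = 1.0 / (1.0 + np.exp(Theta[u, i] - Theta[u, j]))
            grad[u, i] -= p
            grad[u, j] += p
        # Proximal gradient step: descend on the loss, then shrink singular values.
        Theta = svt(Theta - step * grad, step * lam)
    return Theta
```

The low-rank bias induced by the nuclear norm is what connects this setup to matrix completion: each user's scores are estimated jointly with everyone else's rather than in isolation.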
Learning Model-Based Sparsity via Projected Gradient Descent
Several convex formulations have previously been proposed for statistical
estimation with structured sparsity as the prior. These methods often require
a carefully tuned regularization parameter, which can be a cumbersome or
heuristic exercise. Furthermore, the estimate these methods produce might not
belong to the desired sparsity model, even if it accurately approximates the
true parameter. Greedy-type algorithms can therefore be more desirable for
estimating structured-sparse parameters. So far, however, these greedy methods
have mostly focused on linear statistical models. In this paper we study
projected gradient descent with a non-convex structured-sparse parameter model
as the constraint set. Provided the cost function has a Stable
Model-Restricted Hessian, the algorithm produces an approximation of the
desired minimizer. As an example, we elaborate on the application of the main
results to estimation in Generalized Linear Models.
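For concreteness, here is a minimal sketch of such a projected gradient scheme, with plain k-sparsity standing in for the general structured-sparse model and sparse logistic regression (a GLM instance) as the cost function. The names, step size, and iteration count are our own assumptions, not the paper's notation.

```python
import numpy as np

def hard_threshold(x, k):
    # Projection onto the (non-convex) set of k-sparse vectors:
    # keep the k largest-magnitude entries, zero out the rest.
    z = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    z[idx] = x[idx]
    return z

def sparse_logistic_pgd(X, y, k, step=0.1, n_iters=300):
    # Projected gradient descent for sparse logistic regression,
    # a Generalized Linear Model with labels y in {0, 1}.
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(n_iters):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))  # predicted probabilities
        grad = X.T @ (p - y) / n               # logistic-loss gradient
        beta = hard_threshold(beta - step * grad, k)
    return beta
```

Note that, unlike the convex formulations, the iterate is guaranteed to lie in the sparsity model at every step, and no regularization parameter needs tuning; only the model size k is specified.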
Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization
Hard Thresholding Pursuit (HTP) is an iterative greedy selection procedure
for finding sparse solutions of underdetermined linear systems. The method has
been shown to have strong theoretical guarantees and impressive numerical
performance. In this paper, we generalize HTP from compressive sensing to the
generic problem setup of sparsity-constrained convex optimization. The
proposed algorithm alternates between a standard gradient descent step and a
hard thresholding step, with or without debiasing. We prove that our method
enjoys strong guarantees analogous to those of HTP in terms of rate of
convergence and parameter estimation accuracy. Numerical evidence shows that
our method is superior to state-of-the-art greedy selection methods in sparse
logistic regression and sparse precision matrix estimation tasks.
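Below is a minimal sketch of the gradient step / hard threshold / debias loop the abstract describes, written against a generic gradient oracle. The function names and the gradient-descent form of the debiasing step are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def grahtp(grad_f, x0, k, step=0.1, n_iters=100, debias_iters=20):
    # Gradient Hard Thresholding Pursuit (sketch): take a gradient step,
    # keep the k largest-magnitude coordinates, then debias by
    # re-optimizing over the selected support.
    x = x0.copy()
    for _ in range(n_iters):
        z = x - step * grad_f(x)                  # gradient descent step
        S = np.argpartition(np.abs(z), -k)[-k:]   # top-k support
        x = np.zeros_like(z)
        x[S] = z[S]                               # hard thresholding
        for _ in range(debias_iters):             # debiasing, restricted to S
            x[S] -= step * grad_f(x)[S]
    return x

# Example use on sparse logistic regression with synthetic data,
# f(w) = mean_i log(1 + exp(-y_i * x_i . w)) with labels y in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50)
w_true[:5] = 1.0
y = np.sign(X @ w_true)

def grad_f(w):
    m = 1.0 / (1.0 + np.exp(y * (X @ w)))  # per-sample logistic weights
    return -(X.T @ (y * m)) / len(y)

w_hat = grahtp(grad_f, np.zeros(50), k=5)
```

Skipping the inner debiasing loop (debias_iters=0) gives the no-debiasing variant mentioned in the abstract, trading accuracy per iteration for a cheaper update.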