2,336 research outputs found
Fast algorithms for large scale generalized distance weighted discrimination
High dimension low sample size statistical analysis is important in a wide
range of applications. In such situations, the highly appealing discrimination
method, the support vector machine, can be improved to alleviate data piling at the
margin. This leads naturally to the development of distance weighted
discrimination (DWD), which can be modeled as a second-order cone programming
problem and solved by interior-point methods when the scale (in sample size and
feature dimension) of the data is moderate. Here, we design a scalable and
robust algorithm for solving large scale generalized DWD problems. Numerical
experiments on real data sets from the UCI repository demonstrate that our
algorithm is highly efficient in solving large scale problems, and sometimes
even more efficient than the highly optimized LIBLINEAR and LIBSVM for solving
the corresponding SVM problems.
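For reference, the generalized DWD problem mentioned in this abstract takes roughly the following form for binary classification: given training pairs $(x_i, y_i)$ with $y_i \in \{\pm 1\}$, an exponent $q > 0$, and a penalty parameter $C > 0$ (notation assumed here, not taken from the listing), one solves

```latex
\min_{w,\,\beta,\,\xi}\;\; \sum_{i=1}^{n} \frac{1}{r_i^{\,q}} \;+\; C \sum_{i=1}^{n} \xi_i
\quad \text{s.t.} \quad
r_i = y_i\,(x_i^{\top} w + \beta) + \xi_i > 0,\qquad
\xi_i \ge 0,\qquad \lVert w \rVert \le 1 .
```

The reciprocal-distance terms penalize points that sit close to the separating hyperplane, which is what counteracts data piling at the SVM margin; $q = 1$ recovers the original DWD, and the cone-shaped constraints are what allow the problem to be modeled as the second-order cone program the abstract mentions.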
On Quasi-Newton Forward--Backward Splitting: Proximal Calculus and Convergence
We introduce a framework for quasi-Newton forward--backward splitting
algorithms (proximal quasi-Newton methods) with a metric induced by diagonal
$\pm$ rank-$r$ symmetric positive definite matrices. This special type of
metric allows for a highly efficient evaluation of the proximal mapping. The
key to this efficiency is a general proximal calculus in the new metric. By
using duality, formulas are derived that relate the proximal mapping in a
rank-$r$ modified metric to the original metric. We also describe efficient
implementations of the proximity calculation for a large class of functions;
the implementations exploit the piece-wise linear nature of the dual problem.
Then, we apply these results to acceleration of composite convex minimization
problems, which leads to elegant quasi-Newton methods for which we prove
convergence. The algorithm is tested on several numerical examples and compared
to a comprehensive list of alternatives in the literature. Our quasi-Newton
splitting algorithm with the prescribed metric compares favorably against the
state of the art. The algorithm has extensive applications in signal
processing, sparse recovery, machine learning, and classification, to name a few.
Comment: arXiv admin note: text overlap with arXiv:1206.115
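To make the role of the metric concrete, below is a minimal Python sketch of forward-backward splitting for an $\ell_1$-regularized problem under a fixed diagonal metric, the easy special case of the diagonal $\pm$ rank-$r$ metrics the abstract describes. The function names, the lasso test problem, and the choice of metric are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def prox_l1_diag(v, thresh):
    """Prox of lam*||.||_1 in a diagonal metric: because the metric is
    diagonal, the prox separates across coordinates and reduces to
    soft-thresholding with a per-coordinate threshold vector."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def diag_metric_fb(grad_f, x0, lam, d, gamma, iters=500):
    """Forward-backward splitting for min_x f(x) + lam*||x||_1 under a
    fixed diagonal metric D = diag(d), d > 0:
        x+ = prox^{D/gamma}_{lam*||.||_1}( x - gamma * D^{-1} grad_f(x) ).
    (The paper treats the richer diagonal +/- rank-r case via a dual
    proximal calculus; this sketch covers only the diagonal part.)"""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        v = x - gamma * grad_f(x) / d            # forward step scaled by D^{-1}
        x = prox_l1_diag(v, gamma * lam / d)     # backward step in the same metric
    return x

# Illustrative use on a small lasso instance (data and names are made up):
rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 100)), rng.standard_normal(40)
d = np.sum(A * A, axis=0) + 1e-3                 # diag of A^T A as a cheap metric
gamma = 1.0 / np.linalg.norm(A / np.sqrt(d), 2) ** 2   # step <= 1/L in the D-metric
x_hat = diag_metric_fb(lambda x: A.T @ (A @ x - b), np.zeros(100), 0.5, d, gamma)
```

The point of restricting to such structured metrics is visible in `prox_l1_diag`: the prox stays closed-form, so each iteration costs little more than a scaled gradient step; handling the extra rank-$r$ term is exactly what the paper's dual proximal calculus is for.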
Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
We provide theoretical analysis of the statistical and computational
properties of penalized $M$-estimators that can be formulated as the solution
to a possibly nonconvex optimization problem. Many important estimators fall in
this category, including least squares regression with nonconvex
regularization, generalized linear models with nonconvex regularization and
sparse elliptical random design regression. For these problems, it is
intractable to calculate the global solution due to the nonconvex formulation.
In this paper, we propose an approximate regularization path-following method
for solving a variety of learning problems with nonconvex objective functions.
Under a unified analytic framework, we simultaneously provide explicit
statistical and computational rates of convergence for any local solution
attained by the algorithm. Computationally, our algorithm attains a global
geometric rate of convergence for calculating the full regularization path,
which is optimal among all first-order algorithms. Unlike most existing methods
that only attain geometric rates of convergence for one single regularization
parameter, our algorithm calculates the full regularization path with the same
iteration complexity. In particular, we provide a refined iteration complexity
bound to sharply characterize the performance of each stage along the
regularization path. Statistically, we provide sharp sample complexity analysis
for all the approximate local solutions along the regularization path. In
particular, our analysis improves upon existing results by providing a more
refined sample complexity bound as well as an exact support recovery result for
the final estimator. These results show that the final estimator attains an
oracle statistical property due to the use of the nonconvex penalty.
Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by
the Institute of Mathematical Statistics (http://www.imstat.org) at
http://dx.doi.org/10.1214/14-AOS1238
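As a loose illustration of path following with a nonconvex penalty, here is a Python sketch that solves an MCP-penalized least-squares problem over a geometrically decreasing sequence of regularization parameters, warm-starting each stage from the previous solution. This mimics the spirit of a path-following scheme but is not the paper's algorithm, and all names and parameter choices below are assumptions.

```python
import numpy as np

def mcp_prox(z, lam, gamma, t):
    """Closed-form prox (firm thresholding) of the MCP penalty with
    parameters (lam, gamma), taken with step size t; requires gamma > t."""
    a = np.abs(z)
    shrunk = np.sign(z) * (a - t * lam) / (1.0 - t / gamma)
    return np.where(a <= t * lam, 0.0, np.where(a <= gamma * lam, shrunk, z))

def mcp_path(A, b, n_stages=20, gamma=3.0, iters=200):
    """Warm-started proximal gradient over a decreasing lambda sequence for
    min 0.5*||A x - b||^2 + MCP_lam(x)."""
    n = A.shape[1]
    t = 1.0 / np.linalg.norm(A, 2) ** 2      # step <= 1/L, so gamma > t holds
    lam_max = np.max(np.abs(A.T @ b))        # at this lambda, 0 is stationary
    lams = np.geomspace(lam_max, 0.01 * lam_max, n_stages)
    x, path = np.zeros(n), []
    for lam in lams:                         # each stage starts from the
        for _ in range(iters):               # previous stage's solution
            x = mcp_prox(x - t * A.T @ (A @ x - b), lam, gamma, t)
        path.append(x.copy())
    return lams, path
```

Warm starts are what keep the per-stage work small once the path is underway; the paper's contribution, by contrast, is to quantify how accurately each stage must be solved and what statistical guarantees the resulting local solutions inherit.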
Implicit Langevin Algorithms for Sampling From Log-concave Densities
For sampling from a log-concave density, we study implicit integrators
resulting from $\theta$-method discretization of the overdamped Langevin
diffusion stochastic differential equation. Theoretical and algorithmic
properties of the resulting sampling methods for $\theta \in [0,1]$ and a
range of step sizes are established. Our results generalize and extend prior
works in several directions. In particular, for $\theta \ge 1/2$, we prove
geometric ergodicity and stability of the resulting methods for all step sizes.
We show that obtaining subsequent samples amounts to solving a strongly convex
optimization problem, which is readily achievable using one of numerous
existing methods. Numerical examples supporting our theoretical analysis are
also presented.
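To illustrate the implicit update, here is a short Python sketch of one $\theta$-method step: the noise and the explicit part of the drift are computed first, and the implicit part is obtained by solving the strongly convex subproblem the abstract mentions with a generic optimizer. The target U, the step size, and the use of L-BFGS-B are illustrative assumptions, not the paper's prescribed solver.

```python
import numpy as np
from scipy.optimize import minimize

def theta_langevin_step(x, U, grad_U, h, theta, rng):
    """One step of the theta-method discretization of overdamped Langevin,
        x+ = x - h*( theta*grad_U(x+) + (1-theta)*grad_U(x) ) + sqrt(2h)*xi,
    with xi ~ N(0, I). For theta > 0 the update is implicit; as the abstract
    notes, x+ is the minimizer of a strongly convex problem (a prox step on
    h*theta*U), solved here with an off-the-shelf optimizer."""
    v = x - h * (1.0 - theta) * grad_U(x) \
          + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
    obj = lambda z: h * theta * U(z) + 0.5 * np.sum((z - v) ** 2)
    jac = lambda z: h * theta * grad_U(z) + (z - v)
    return minimize(obj, v, jac=jac, method="L-BFGS-B").x

# Illustrative use: sample from a standard Gaussian, U(x) = 0.5*||x||^2.
rng = np.random.default_rng(0)
U = lambda x: 0.5 * np.sum(x ** 2)
grad_U = lambda x: x
x, samples = np.zeros(2), []
for _ in range(1000):
    x = theta_langevin_step(x, U, grad_U, h=0.5, theta=1.0, rng=rng)
    samples.append(x)
```

For $\theta = 1$ this is a fully implicit (proximal) step, which falls in the $\theta \ge 1/2$ regime where the abstract reports geometric ergodicity and stability for all step sizes.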