Proximal boosting and its acceleration
Gradient boosting is a prediction method that iteratively combines weak
learners to produce a complex and accurate model. From an optimization point of
view, the learning procedure of gradient boosting mimics a gradient descent on
a functional variable. This paper proposes to build upon the proximal point
algorithm when the empirical risk to minimize is not differentiable to
introduce a novel boosting approach, called proximal boosting. Besides being
motivated by non-differentiable optimization, the proposed algorithm benefits
from Nesterov's acceleration in the same way as gradient boosting [Biau et al.,
2018]. This leads to a variant, called accelerated proximal boosting.
Advantages of leveraging proximal methods for boosting are illustrated by
numerical experiments on simulated and real-world data. In particular, we
show that it compares favorably to gradient boosting in terms of convergence
rate and prediction accuracy.
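The abstract describes the idea only at a high level, so the following is a minimal Python sketch of how such a scheme could look, assuming (as an illustration, not as the authors' exact algorithm) that the per-round boosting target is the proximal-point residual of a pointwise non-differentiable loss (the absolute loss here), that the weak learner is a shallow regression tree, and that the usual Nesterov momentum recursion is applied to the predicted values.

```python
# Sketch of (accelerated) proximal boosting for the absolute loss.
# Assumptions not taken from the abstract: per-round target is the proximal
# residual prox_{gamma*l}(F(x_i)) - F(x_i), weak learner is a shallow tree,
# and momentum follows the standard Nesterov recursion on predictions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def prox_abs_loss(f, y, gamma):
    """Proximal operator of u |-> |u - y| with step gamma (soft-thresholding)."""
    return y + np.sign(f - y) * np.maximum(np.abs(f - y) - gamma, 0.0)

def accelerated_proximal_boosting(X, y, n_rounds=100, gamma=0.1, lr=0.1):
    F = np.zeros(len(y))          # current ensemble prediction on the training set
    G = F.copy()                  # look-ahead (momentum) prediction
    learners, t = [], 1.0
    for _ in range(n_rounds):
        # Proximal-point residual at the look-ahead point (non-differentiable loss).
        target = prox_abs_loss(G, y, gamma) - G
        h = DecisionTreeRegressor(max_depth=3).fit(X, target)
        learners.append(h)
        F_new = G + lr * h.predict(X)
        # Nesterov momentum applied to the predicted values.
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
        G = F_new + (t - 1.0) / t_new * (F_new - F)
        F, t = F_new, t_new
    return learners

# Usage (illustrative): learners = accelerated_proximal_boosting(X_train, y_train)
```

The choice of the absolute loss is only for concreteness: its proximal operator has a closed form, which makes the non-differentiable case easy to demonstrate.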
Efficient Inexact Proximal Gradient Algorithm for Nonconvex Problems
The proximal gradient algorithm has been popularly used for convex
optimization. Recently, it has also been extended for nonconvex problems, and
the current state-of-the-art is the nonmonotone accelerated proximal gradient
algorithm. However, it typically requires two exact proximal steps in each
iteration, and can be inefficient when the proximal step is expensive. In this
paper, we propose an efficient proximal gradient algorithm that requires only
one inexact (and thus less expensive) proximal step in each iteration.
Convergence to a critical point of the nonconvex problem is still guaranteed,
at a rate matching the best known convergence rate for nonconvex
problems with first-order methods. Experiments on a number of problems
demonstrate that the proposed algorithm performs comparably to the
state-of-the-art, but is much faster.
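The abstract does not spell out the update rule, so the sketch below only illustrates the generic mechanism it refers to: a forward-backward iteration in which the proximal step is solved approximately (here by a small fixed budget of inner subgradient steps) rather than exactly. Function names, the inner solver, and the test functions are illustrative assumptions; the paper's acceleration and non-monotone safeguards are not reproduced.

```python
# Sketch of an inexact proximal gradient step for min_x f(x) + g(x),
# with f smooth (possibly nonconvex) and an expensive prox for g.
# Only one *approximate* proximal step is taken per outer iteration.
import numpy as np

def inexact_prox(g_subgrad, v, eta, inner_iters=5):
    """Approximate prox_{eta*g}(v) = argmin_z g(z) + ||z - v||^2 / (2*eta)
    with a few subgradient steps instead of an exact solve (illustrative)."""
    z = v.copy()
    for k in range(inner_iters):
        step = eta / (k + 1.0)
        z = z - step * (g_subgrad(z) + (z - v) / eta)
    return z

def inexact_proximal_gradient(grad_f, g_subgrad, x0, eta=0.1, n_iters=200):
    """Forward-backward iteration with one inexact proximal step per round."""
    x = x0.copy()
    for _ in range(n_iters):
        x = inexact_prox(g_subgrad, x - eta * grad_f(x), eta)
    return x

# Usage with a nonconvex smooth term f(x) = sum(log(1 + x_i^2))
# and a nonsmooth term g(x) = 0.5 * ||x||_1 (both chosen for illustration):
grad_f = lambda x: 2.0 * x / (1.0 + x**2)
g_subgrad = lambda x: 0.5 * np.sign(x)
x_hat = inexact_proximal_gradient(grad_f, g_subgrad, x0=np.ones(10))
```

Solving the prox subproblem only approximately trades a small amount of accuracy per iteration for a much cheaper step, which is the efficiency argument made in the abstract.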
- …