Proximal boosting and its acceleration
Gradient boosting is a prediction method that iteratively combines weak
learners to produce a complex and accurate model. From an optimization point of
view, the learning procedure of gradient boosting mimics a gradient descent on
a functional variable. This paper builds upon the proximal point
algorithm, which remains applicable when the empirical risk to be minimized
is not differentiable, to introduce a novel boosting approach called
proximal boosting. Besides being
motivated by non-differentiable optimization, the proposed algorithm benefits
from Nesterov's acceleration in the same way as gradient boosting [Biau et al.,
2018]. This leads to a variant, called accelerated proximal boosting.
Advantages of leveraging proximal methods for boosting are illustrated by
numerical experiments on simulated and real-world data. In particular, we
exhibit a favorable comparison with gradient boosting in terms of convergence
rate and prediction accuracy.
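As a rough illustration (this is the standard proximal point update with step size \gamma, not necessarily the paper's exact functional formulation), the non-differentiable risk R is handled by replacing the gradient step mimicked by gradient boosting with

    F_{m+1} = \operatorname{prox}_{\gamma R}(F_m) = \arg\min_{F} \left\{ R(F) + \tfrac{1}{2\gamma} \lVert F - F_m \rVert^2 \right\},

which is well-defined even when R admits no gradient; the resulting update direction is then approximated by a weak learner.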
Estimation and Regularization Techniques for Regression Models with Multidimensional Prediction Functions
Boosting is one of the most important methods for fitting
regression models and building prediction rules from
high-dimensional data. A notable feature of boosting is that the
technique has a built-in mechanism for shrinking coefficient
estimates and variable selection. This regularization mechanism
makes boosting a suitable method for analyzing data characterized by
small sample sizes and large numbers of predictors. We extend the
existing methodology by developing a boosting method for prediction
functions with multiple components. Such multidimensional functions
occur in many types of statistical models, for example in count data
models and in models involving outcome variables with a mixture
distribution. As will be demonstrated, the new algorithm is suitable
for both the estimation of the prediction function and
regularization of the estimates. In addition, nuisance parameters
can be estimated simultaneously with the prediction function.
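For a concrete (illustrative, not necessarily the paper's) example of a prediction function with multiple components, consider a zero-inflated count model with f = (f_1, f_2):

    P(Y = y \mid x) = \pi(x)\,\mathbf{1}\{y = 0\} + \bigl(1 - \pi(x)\bigr)\,\frac{\lambda(x)^{y} e^{-\lambda(x)}}{y!}, \qquad \pi(x) = \frac{1}{1 + e^{-f_1(x)}}, \quad \lambda(x) = e^{f_2(x)}.

Here the zero-inflation probability \pi and the count mean \lambda are driven by separate components that must be estimated jointly.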
An update on statistical boosting in biomedicine
Statistical boosting algorithms have triggered a lot of research during the
last decade. They combine a powerful machine-learning approach with classical
statistical modelling, offering various practical advantages like automated
variable selection and implicit regularization of effect estimates. They are
extremely flexible, as the underlying base-learners (regression functions
defining the type of effect for the explanatory variables) can be combined with
any kind of loss function (target function to be optimized, defining the type
of regression setting). In this review article, we highlight the most recent
methodological developments on statistical boosting regarding variable
selection, functional regression and advanced time-to-event modelling.
Additionally, we provide a short overview of relevant applications of
statistical boosting in biomedicine.
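As a small sketch of this modularity in R (using the mboost package covered in the next entry; the data and settings below are purely illustrative), the same P-spline base-learner can be combined with a squared-error or an absolute-error loss simply by swapping the family:

    library("mboost")
    set.seed(1)
    d <- data.frame(x = runif(200))
    d$y <- sin(2 * pi * d$x) + rnorm(200, sd = 0.3)
    ## same base-learner, two different loss functions (regression settings)
    m_l2 <- gamboost(y ~ bbs(x), data = d, family = Gaussian())  # L2 loss
    m_l1 <- gamboost(y ~ bbs(x), data = d, family = Laplace())   # L1 loss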
Model-based Boosting in R: A Hands-on Tutorial Using the R Package mboost
We provide a detailed hands-on tutorial for the R add-on package mboost. The package implements boosting for optimizing general risk functions, utilizing component-wise (penalized) least squares estimates as base-learners for fitting various kinds of generalized linear and generalized additive models to potentially high-dimensional data. We give a theoretical background and demonstrate how mboost can be used to fit interpretable models of different complexity. As a running example throughout the tutorial, we use mboost to predict body fat based on anthropometric measurements.
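A minimal sketch of that workflow (assuming the mboost and TH.data packages are installed; the tutorial's body fat data ship as bodyfat in TH.data with response DEXfat, and mstop = 100 is an illustrative choice):

    library("mboost")
    data("bodyfat", package = "TH.data")   # anthropometric measurements
    ## boosted linear model with component-wise linear base-learners
    fm <- glmboost(DEXfat ~ ., data = bodyfat,
                   control = boost_control(mstop = 100))
    coef(fm)          # shrunken coefficients; unselected variables are omitted
    cv <- cvrisk(fm)  # cross-validated risk across boosting iterations
    fm[mstop(cv)]     # set the model to the risk-optimal stopping iteration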
- …