
    Generalized Boosting Algorithms for Convex Optimization

    Boosting is a popular way to derive powerful learners from simpler hypothesis classes. Following previous work (Mason et al., 1999; Friedman, 2000) on general boosting frameworks, we analyze gradient-based descent algorithms for boosting with respect to any convex objective and introduce a new measure of weak learner performance into this setting that generalizes existing work. We present weak-to-strong learning guarantees for existing gradient boosting methods on strongly-smooth, strongly-convex objectives under this new measure of performance, and also demonstrate that this approach fails for non-smooth objectives. To address this issue, we present new algorithms that extend this boosting approach to arbitrary convex loss functions and give corresponding weak-to-strong convergence results. In addition, we present experimental results that support our analysis and demonstrate the need for the new algorithms we introduce.
    Comment: Extended version of paper presented at the International Conference on Machine Learning, 2011. 9 pages + appendix with proofs
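
    For orientation, here is a minimal sketch of the generic functional gradient boosting framework (in the spirit of Mason et al. and Friedman) that the abstract builds on, not the paper's new algorithms for non-smooth objectives. The choice of loss, the regression-stump weak learner, and the fixed step size are all illustrative assumptions.

    ```python
    # Sketch: functional gradient boosting for a generic convex loss.
    # The loss gradient, stump weak learner, and step size are assumptions
    # for illustration, not the algorithms introduced in the paper.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor


    def boost(X, y, loss_grad, n_rounds=100, learning_rate=0.1):
        """Fit an additive model F(x) = sum_t eta * h_t(x) by repeatedly
        fitting a weak learner to the negative functional gradient."""
        F = np.zeros(len(y))          # current predictions F_t(x_i)
        ensemble = []
        for _ in range(n_rounds):
            residual = -loss_grad(F, y)              # negative gradient of the loss at F
            h = DecisionTreeRegressor(max_depth=1)   # weak learner: a regression stump
            h.fit(X, residual)                       # L2 projection of the gradient onto the weak class
            F += learning_rate * h.predict(X)        # small step in function space
            ensemble.append(h)
        return ensemble


    if __name__ == "__main__":
        # Example with squared-error loss, whose gradient at F is (F - y).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))
        y = X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)
        model = boost(X, y, loss_grad=lambda F, y: F - y)
        print(len(model), "weak learners fitted")
    ```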

    Model-based boosting in high dimensions

    Summary: The R add-on package mboost implements functional gradient descent algorithms (boosting) for optimizing general loss functions, using componentwise least squares (either of parametric linear form or smoothing splines) or regression trees as base learners, for fitting generalized linear, additive and interaction models to potentially high-dimensional data.
    Availability: Package mboost is available from the Comprehensive R Archive Network (CRAN) under the terms of the General Public Licence (GPL).
    Contact: [email protected]
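
    To illustrate the componentwise least-squares idea that such model-based boosting relies on, here is a small numpy sketch (not mboost's R interface): each round regresses the current residual on every single covariate, keeps the best-fitting one, and updates only that coefficient, which yields implicit variable selection in high dimensions. The L2 loss, fixed step size, and lack of covariate centering are simplifying assumptions.

    ```python
    # Sketch: componentwise least-squares boosting for the L2 loss.
    # Illustrative only; mboost itself is an R package with a richer interface.
    import numpy as np


    def componentwise_l2_boost(X, y, n_rounds=200, step=0.1):
        """Each round: univariate least-squares fit of the residual on every
        column, pick the best column, update only that coefficient."""
        n, p = X.shape
        beta = np.zeros(p)
        offset = y.mean()                               # start from the constant model
        for _ in range(n_rounds):
            residual = y - (offset + X @ beta)              # negative gradient for L2 loss
            coefs = X.T @ residual / np.sum(X**2, axis=0)   # univariate LS coefficient per column
            sse = np.sum((residual[:, None] - X * coefs) ** 2, axis=0)
            j = np.argmin(sse)                              # best single covariate this round
            beta[j] += step * coefs[j]                      # update only one coefficient
        return offset, beta


    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 500))                 # p >> n: high-dimensional setting
        y = 3.0 * X[:, 0] - 2.0 * X[:, 5] + rng.normal(scale=0.5, size=100)
        offset, beta = componentwise_l2_boost(X, y)
        print("nonzero coefficients:", np.flatnonzero(np.abs(beta) > 1e-8))
    ```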