Traditional non-parametric statistical learning techniques are often computationally attractive, but lack the same generalization and model selection abilities as state-of-the-art Bayesian algorithms which, however, are usually computationally prohibitive. This paper makes several important contributions that allow Bayesian learning to scale to more complex, real-world learning scenarios. Firstly, we show that backfitting, a traditional non-parametric yet highly efficient regression tool, can be derived in a novel formulation within an expectation maximization (EM) framework and thus can finally be given a probabilistic interpretation. Secondly, we show that the general framework of sparse Bayesian learning, and in particular the relevance vector machine (RVM), can be derived as a highly efficient algorithm using a Bayesian version of backfitting at its core. As we demonstrate on several regression and classification benchmarks, Bayesian backfitting offers a compelling alternative to current regression methods, especially when the size and dimensionality of the data challenge computational resources.
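For readers unfamiliar with classical backfitting, the abstract's starting point, a minimal sketch follows. It fits an additive model by cycling over coordinates and fitting each component to the partial residuals. This is an illustration with simple linear component smoothers, not the Bayesian EM derivation the paper contributes; the function `backfit` and its parameters are our own naming.

```python
import numpy as np

def backfit(X, y, n_iter=20):
    """Classical backfitting for an additive model y ~ alpha + sum_j f_j(x_j).

    Each f_j here is a univariate linear fit (a stand-in for an arbitrary
    1-D smoother such as a spline); this is an illustrative sketch only.
    """
    n, d = X.shape
    alpha = y.mean()               # intercept absorbs the mean of y
    f = np.zeros((n, d))           # current component fits at the data points
    coefs = np.zeros(d)
    for _ in range(n_iter):
        for j in range(d):
            # partial residual: remove every component except the j-th
            r = y - alpha - f.sum(axis=1) + f[:, j]
            xj = X[:, j] - X[:, j].mean()
            coefs[j] = xj @ r / (xj @ xj)   # least-squares fit of f_j
            f[:, j] = coefs[j] * xj
    return alpha, coefs

# Example: recover the coefficients of exactly additive data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] + 0.5 * X[:, 2]
alpha, coefs = backfit(X, y)
```

With linear smoothers, these sweeps are exactly Gauss-Seidel iterations on the least-squares normal equations, which is why the coordinate-wise updates converge to the joint solution.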