    An optimal aggregation type classifier

    We introduce a nonlinear aggregation-type classifier for functional data defined on a separable and complete metric space. The new rule is built from a collection of M arbitrary training classifiers. If the classifiers are consistent, then so is the aggregation rule. Moreover, the aggregation rule asymptotically behaves as well as the best of the M classifiers. The results of a small simulation study are reported for both high-dimensional and functional data.
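    The abstract above describes aggregating M trained classifiers so that the combined rule asymptotically matches the best of them. As a minimal sketch (not the paper's actual aggregation rule), one simple scheme with this flavor selects the classifier with the best accuracy on a held-out validation set; `ThresholdClassifier`, the data, and the selection scheme are all invented for illustration:

    ```python
    import numpy as np

    class ThresholdClassifier:
        """Toy 1-D classifier: predicts class 1 when x >= threshold."""
        def __init__(self, threshold):
            self.threshold = threshold

        def predict(self, X):
            return (np.asarray(X) >= self.threshold).astype(int)

    def aggregate_predict(classifiers, X_val, y_val, X_new):
        """Selection-type aggregation over M classifiers: score each
        classifier on validation data and predict with the best one.
        (Illustrative only; the paper's rule is nonlinear and different.)"""
        accuracies = [np.mean(c.predict(X_val) == y_val) for c in classifiers]
        best = classifiers[int(np.argmax(accuracies))]
        return best.predict(X_new)

    # Validation data labeled by the rule x >= 0.5.
    X_val = np.array([0.1, 0.3, 0.6, 0.9])
    y_val = np.array([0, 0, 1, 1])
    clfs = [ThresholdClassifier(t) for t in (0.2, 0.5, 0.8)]
    pred = aggregate_predict(clfs, X_val, y_val, np.array([0.4, 0.7]))
    # pred comes from the threshold-0.5 classifier, the best on X_val.
    ```

    Selecting by empirical performance is the simplest way to inherit the "as good as the best of M" property; the paper's nonlinear rule is more refined, but the interface (M black-box classifiers in, one rule out) is the same.
    
    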

    Remembrance of Leo Breiman

    In 1994, I came to Berkeley and was fortunate to stay there for three years, first as a postdoctoral researcher and then as Neyman Visiting Assistant Professor. For me, this period was a unique opportunity to see other aspects of statistics and to learn many more things about the field: the Department of Statistics at Berkeley was much bigger, and hence broader, than my home at ETH Zürich, and I very much enjoyed that the science was perhaps a bit more speculative. As soon as I settled into the department, I tried to get in touch with the local faculty. Leo Breiman started a reading group on topics in machine learning, and I didn't hesitate to participate together with other Ph.D. students. Leo spread a tremendous amount of enthusiasm, telling us about the vast opportunities we now had by taking advantage of computational power. Hearing his views and opinions and listening to his thoughts and ideas was very exciting, stimulating, and entertaining as well. This was my first occasion to get to know Leo. And there was, at least a little, an implication in the other direction: now Leo knew my name and who I was. Whenever we saw each other on the 4th floor of Evans Hall, I got a very gentle smile and "hello" from Leo. And in fact, this happened quite often: I often walked around while thinking about a problem, and it seemed to me that Leo had a similar habit.
    Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/10-AOAS381

    Risk bounds for purely uniformly random forests

    Random forests, introduced by Leo Breiman in 2001, are a very effective statistical method. The complex mechanism of the method makes theoretical analysis difficult. A simplified version of random forests, called purely random forests, has therefore been considered because it can be handled more easily in theory. In this paper we introduce a variant of this kind of random forest, which we call purely uniformly random forests. In the context of regression problems with a one-dimensional predictor space, we show that both random trees and random forests reach the minimax rate of convergence. In addition, we prove that, compared to random trees, random forests improve accuracy by reducing the estimator's variance by a factor of three fourths.
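    The construction described above can be sketched in code: in a purely uniformly random tree on [0, 1], the split points are drawn uniformly at random, independently of the data, and each leaf predicts the local mean of the responses; the forest averages many such trees, which reduces variance. This is an illustrative sketch under those assumptions, not the paper's exact estimator, and the function names, data, and parameters are invented for the example:

    ```python
    import numpy as np

    def purely_uniform_tree(X, y, n_splits, rng):
        """One purely uniformly random tree on [0, 1]: split points are
        drawn uniformly, independently of the data; each leaf predicts
        the mean of y over the training points falling in it."""
        cuts = np.sort(rng.uniform(0.0, 1.0, n_splits))
        edges = np.concatenate(([0.0], cuts, [1.0]))
        leaf_of = np.clip(np.searchsorted(edges, X, side="right") - 1, 0, n_splits)
        # Empty leaves fall back to the global mean.
        means = np.array([y[leaf_of == k].mean() if np.any(leaf_of == k) else y.mean()
                          for k in range(n_splits + 1)])

        def predict(x):
            k = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_splits)
            return means[k]

        return predict

    def purely_uniform_forest(X, y, n_trees, n_splits, rng):
        """Average the predictions of n_trees independent purely
        uniformly random trees (the variance-reducing step)."""
        trees = [purely_uniform_tree(X, y, n_splits, rng) for _ in range(n_trees)]
        return lambda x: np.mean([t(x) for t in trees], axis=0)

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, 200)
    y = np.sin(2 * np.pi * X) + rng.normal(0.0, 0.2, 200)
    forest = purely_uniform_forest(X, y, n_trees=50, n_splits=8, rng=rng)
    pred = forest(np.array([0.25, 0.75]))
    # pred approximates sin(2*pi*x): positive near x=0.25, negative near x=0.75.
    ```

    Because each tree's split points are independent of the data, the trees are identically distributed given the sample, and averaging them lowers the variance of the estimator without changing its bias; the abstract's three-fourths factor quantifies exactly this effect in the one-dimensional setting.
    
    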