An Introduction to Recursive Partitioning: Rationale, Application and Characteristics of Classification and Regression Trees, Bagging and Random Forests
Recursive partitioning methods have become popular and widely used tools for nonparametric regression and classification in many scientific fields. Random forests in particular, which can deal with large numbers of predictor variables even in the presence of complex interactions, have been applied successfully in genetics, clinical medicine and bioinformatics within the past few years.
High-dimensional problems are common not only in genetics, but also in some areas of psychological research, where only a few subjects can be measured due to time or cost constraints, yet a large amount of data is generated for each subject. Random forests have been shown to achieve high prediction accuracy in such applications, and provide descriptive variable importance measures reflecting the impact of each variable through both main effects and interactions.
The aim of this work is to introduce the principles of the standard recursive partitioning methods as well as recent methodological improvements, to illustrate their usage for low- and high-dimensional data exploration, but also to point out limitations of the methods and potential pitfalls in their practical application.
Application of the methods is illustrated using freely available implementations in the R system for statistical computing.
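The abstract's fit-a-forest-then-inspect-variable-importances workflow is carried out in R in the paper; as a rough cross-language sketch of the same idea, here is a Python version using scikit-learn. The dataset and all parameter values below are invented for illustration, mimicking the "few subjects, many predictors" setting the abstract describes:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Simulated "few subjects, many variables" data: 60 samples, 200 predictors,
# of which only 5 carry real signal (all numbers are illustrative).
X, y = make_classification(n_samples=60, n_features=200, n_informative=5,
                           n_redundant=0, random_state=0)

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X, y)

# Impurity-based variable importances; scikit-learn normalizes them to sum to 1.
importances = forest.feature_importances_
top5 = sorted(range(X.shape[1]), key=lambda j: importances[j], reverse=True)[:5]
print("top 5 predictors by importance:", top5)
```

Note that impurity-based importances can be biased toward high-cardinality predictors; the paper's R-based approach (e.g. conditional inference forests) addresses exactly such pitfalls, which a quick sketch like this does not.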
Percolation-like Scaling Exponents for Minimal Paths and Trees in the Stochastic Mean Field Model
In the mean field (or random link) model there are n points and the inter-point distances are independent random variables. For 0 < ℓ < ∞ and in the n → ∞ limit, let δ(ℓ) = n⁻¹ × (maximum number of steps in a path whose average step-length is ≤ ℓ). The function δ(ℓ) is analogous to the percolation function in percolation theory: there is a critical value ℓ_crit at which δ(ℓ) becomes non-zero, and (presumably) a scaling exponent β in the sense δ(ℓ) ≍ (ℓ − ℓ_crit)^β as ℓ ↓ ℓ_crit. Recently developed probabilistic methodology (in some sense a rephrasing of the cavity method of Mézard–Parisi) provides a simple albeit non-rigorous way of writing down such functions in terms of solutions of fixed-point equations for probability distributions. Solving numerically gives convincing evidence that β = 3. A parallel study with trees instead of paths gives scaling exponent 2. The new exponents coincide with those found in a different context (comparing optimal and near-optimal solutions of mean-field TSP and MST) and reinforce the suggestion that these scaling exponents determine universality classes for optimization problems on random points.
Comment: 19 pages