Invaded cluster algorithm for critical properties of periodic and aperiodic planar Ising models
We demonstrate that the invaded cluster algorithm, recently introduced by
Machta et al., is a fast and reliable tool for determining the critical
temperature and the magnetic critical exponent of periodic and aperiodic
ferromagnetic Ising models in two dimensions. The algorithm is shown to
reproduce the known values of the critical temperature on various periodic and
quasiperiodic graphs to an accuracy of better than three significant digits. On
two quasiperiodic graphs which were not investigated in this respect before,
the twelvefold symmetric square-triangle tiling and the tenfold symmetric
T\"ubingen triangle tiling, we determine the critical temperature. Furthermore,
a generalization of the algorithm to non-identical coupling strengths is
presented and applied to a class of Ising models on the Labyrinth tiling. For
generic cases in which the heuristic Harris-Luck criterion predicts deviations
from the Onsager universality class, we find a magnetic critical exponent
different from the Onsager value. However, notable exceptions to the criterion
are also found; these comprise not only the exactly solvable cases, in
agreement with a recent exact result, but also the self-dual ones and possibly
others.

Comment: 15 pages, 5 figures; v2: Fig. 5b replaced, minor changes
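To make the self-tuning idea concrete, here is a minimal Python sketch of an invaded-cluster style update for the plain square-lattice Ising ferromagnet. This is not the paper's code and does not cover the aperiodic tilings; the function names, the open-boundary left-to-right spanning rule, and all parameters are illustrative assumptions. Satisfied bonds are occupied in random order until a cluster spans, clusters are then flipped, and the running average of the occupied-bond fraction f estimates the critical point through p_c = 1 - exp(-2J/k T_c) (up to finite-size and boundary-rule bias).

```python
import numpy as np

def invaded_cluster_step(spins, rng):
    """One invaded-cluster update for the ferromagnetic Ising model on
    an L x L square lattice with open boundaries (hypothetical sketch).
    Satisfied bonds (between aligned neighbours) are occupied in random
    order until one cluster spans from the left to the right edge; then
    every cluster is flipped independently with probability 1/2."""
    L = spins.shape[0]
    n = L * L
    parent = np.arange(n)                    # union-find forest
    left = (np.arange(n) % L == 0)           # cluster touches left edge?
    right = (np.arange(n) % L == L - 1)      # ... right edge?

    def find(i):                             # root with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # list all satisfied bonds and shuffle them (invasion order)
    bonds = []
    for x in range(L):
        for y in range(L):
            i = x * L + y
            if x + 1 < L and spins[x, y] == spins[x + 1, y]:
                bonds.append((i, i + L))
            if y + 1 < L and spins[x, y] == spins[x, y + 1]:
                bonds.append((i, i + 1))
    rng.shuffle(bonds)

    occupied = 0
    for i, j in bonds:                       # invade until spanning
        occupied += 1
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
            left[ri] |= left[rj]
            right[ri] |= right[rj]
        if left[ri] and right[ri]:
            break                            # stopping rule: spanning
    f = occupied / max(len(bonds), 1)

    flip = rng.random(n) < 0.5               # one coin per cluster root
    roots = np.array([find(i) for i in range(n)])
    new = spins.ravel().copy()
    new[flip[roots]] *= -1
    return new.reshape(L, L), f

# self-tuning run: the mean of f estimates p_c = 1 - exp(-2/T_c)
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(32, 32))
fs = []
for _ in range(300):
    spins, f = invaded_cluster_step(spins, rng)
    fs.append(f)
f_bar = np.mean(fs[100:])                    # discard equilibration
print("T_c estimate:", -2.0 / np.log(1.0 - f_bar))
# Onsager: T_c = 2 / ln(1 + sqrt(2)) ~ 2.269 (with J = k_B = 1)
```

The key design point is that no temperature appears anywhere in the update: the spanning rule alone decides when to stop occupying bonds, which is what drives the system to criticality.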
The Theory Behind Overfitting, Cross Validation, Regularization, Bagging, and Boosting: Tutorial
In this tutorial paper, we first define mean squared error, variance,
covariance, and bias of both random variables and classification/predictor
models. Then, we formulate the true and generalization errors of the model for
both training and validation/test instances, where we make use of Stein's
Unbiased Risk Estimator (SURE). We define overfitting, underfitting, and
generalization using the obtained true and generalization errors. We introduce
cross validation and two well-known examples, namely $K$-fold and
leave-one-out cross validation. We briefly introduce generalized cross
validation and then move on to regularization where we use the SURE again. We
work on both $\ell_2$ and $\ell_1$ norm regularizations. Then, we show that
bootstrap aggregating (bagging) reduces the variance of estimation. Boosting,
specifically AdaBoost, is introduced and it is explained as both an additive
model and a maximum margin model, i.e., Support Vector Machine (SVM). The upper
bound on the generalization error of boosting is also provided to show why
boosting prevents overfitting. As examples of regularization, the theory
of ridge and lasso regressions, weight decay, noise injection to input/weights,
and early stopping are explained. Random forest, dropout, histogram of oriented
gradients, and single shot multi-box detector are explained as examples of
bagging in machine learning and computer vision. Finally, boosting tree and SVM
models are mentioned as examples of boosting.

Comment: 23 pages, 9 figures
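As a concrete illustration of two of the ingredients above, the following Python sketch selects the ridge ($\ell_2$) penalty by $K$-fold cross validation from scratch. This is a hypothetical toy, not the tutorial's code: ridge_fit, kfold_cv_mse, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution w = (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def kfold_cv_mse(X, y, lam, K=5, seed=0):
    """Mean validation MSE of ridge regression over K folds."""
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, K)
    errs = []
    for k in range(K):
        val = folds[k]                      # held-out fold
        trn = np.concatenate([folds[j] for j in range(K) if j != k])
        w = ridge_fit(X[trn], y[trn], lam)
        errs.append(np.mean((X[val] @ w - y[val]) ** 2))
    return np.mean(errs)

# toy data: linear signal plus noise, with enough features to overfit
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 20))
y = X @ rng.normal(size=20) + rng.normal(scale=2.0, size=60)

for lam in (0.0, 0.1, 1.0, 10.0, 100.0):   # pick lam with lowest CV error
    print(f"lam = {lam:6.1f}   CV MSE = {kfold_cv_mse(X, y, lam):.3f}")
```

The intermediate penalties typically beat both the unregularized fit (high variance) and the heavily regularized one (high bias), which is the bias-variance trade-off the tutorial formalizes.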
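To see the bagging claim numerically, here is a second small simulation in the same spirit, again a sketch under stated assumptions: a deliberately high-variance 1-nearest-neighbour base model and made-up data. The variance of the bagged predictor across independent training sets should come out clearly smaller than that of the single model.

```python
import numpy as np

rng = np.random.default_rng(2)

def one_nn_predict(x_train, y_train, x0):
    """Prediction of a 1-nearest-neighbour regressor at query x0
    (a deliberately high-variance base model)."""
    return y_train[np.argmin(np.abs(x_train - x0))]

def bagged_predict(x_train, y_train, x0, B=50):
    """Average the base model over B bootstrap resamples (bagging)."""
    n = len(y_train)
    preds = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)    # sample with replacement
        preds.append(one_nn_predict(x_train[idx], y_train[idx], x0))
    return np.mean(preds)

# variance of both predictors at x0 = 0 over many fresh datasets
x0, single, bagged = 0.0, [], []
for _ in range(300):
    x = rng.uniform(-1, 1, size=40)
    y = np.sin(3 * x) + rng.normal(scale=0.5, size=40)
    single.append(one_nn_predict(x, y, x0))
    bagged.append(bagged_predict(x, y, x0))
print("var(single):", np.var(single))       # noticeably larger ...
print("var(bagged):", np.var(bagged))       # ... than the bagged variance
```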