
    Statistical Mechanics of Nonlinear On-line Learning for Ensemble Teachers

    We analyze the generalization performance of a student in a model composed of nonlinear perceptrons: a true teacher, ensemble teachers, and the student. Using statistical mechanics in the framework of on-line learning, we calculate the generalization error of the student analytically or numerically. We treat two well-known learning rules: Hebbian learning and perceptron learning. As a result, we prove that the nonlinear model behaves qualitatively differently from the linear model, and that Hebbian learning and perceptron learning behave qualitatively differently from each other. For Hebbian learning, the solutions can be obtained analytically; in this case the generalization error decreases monotonically, and its steady value is independent of the learning rate. The larger the number of teachers and the more variety the ensemble teachers have, the smaller the generalization error. For perceptron learning, the solutions must be obtained numerically; in this case the dynamics of the generalization error is non-monotonic. The smaller the learning rate, the larger the number of teachers, and the more variety the ensemble teachers have, the smaller the minimum value of the generalization error.
    Comment: 13 pages, 9 figures
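    The setting above can be sketched in a few lines of simulation. This is a minimal illustration, not the paper's analytical framework: the number of teachers, the learning rate, and the choice of ensemble teachers as noisy copies of the true teacher are all illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 1000      # input dimension (the theory works in the large-N limit)
    K = 3         # number of ensemble teachers (hypothetical choice)
    eta = 0.1     # learning rate
    steps = 20000

    # True teacher, ensemble teachers, and student are nonlinear perceptrons sign(w . x).
    B_true = rng.standard_normal(N)
    # Ensemble teachers: noisy copies of the true teacher (their "variety").
    B_ens = [B_true + 0.5 * rng.standard_normal(N) for _ in range(K)]
    J = np.zeros(N)  # student weights

    for _ in range(steps):
        x = rng.standard_normal(N) / np.sqrt(N)  # one fresh example per step (on-line)
        k = rng.integers(K)                      # the student hears one ensemble teacher
        label = np.sign(B_ens[k] @ x)
        J += eta * label * x                     # Hebbian rule: always follow the label
        # Perceptron rule (the paper's second case) would update only on errors:
        # if np.sign(J @ x) != label: J += eta * label * x

    # For sign perceptrons, the generalization error is the angle to the true teacher / pi.
    overlap = (J @ B_true) / (np.linalg.norm(J) * np.linalg.norm(B_true))
    eps_g = np.arccos(overlap) / np.pi
    print(f"generalization error vs true teacher: {eps_g:.3f}")
    ```

    Swapping in the commented-out perceptron rule shows the qualitative difference between the two update rules that the abstract describes.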

    Statistical Mechanics of Linear and Nonlinear Time-Domain Ensemble Learning

    Conventional ensemble learning combines students in the space domain. In this paper, by contrast, we combine students in the time domain and call this time-domain ensemble learning. We analyze, compare, and discuss the generalization performance of time-domain ensemble learning for both a linear model and a nonlinear model. Working in the framework of on-line learning with a statistical-mechanical method, we show qualitatively different behaviors between the two models. In the linear model, the dynamics of the generalization error is monotonic, and we show analytically that time-domain ensemble learning is twice as effective as conventional ensemble learning. In the nonlinear model, the generalization error features non-monotonic dynamics when the learning rate is small; we show numerically that the generalization performance can be improved remarkably by exploiting this phenomenon and the divergence of students in the time domain.
    Comment: 11 pages, 7 figures
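    The time-domain idea for the linear case can be sketched as follows: instead of averaging several students, average one on-line student over its own trajectory. This is a rough illustration under assumed parameters (noisy linear teacher, LMS updates, burn-in length), not the paper's exact model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 50        # input dimension (illustrative; the theory is for large N)
    eta = 0.3     # learning rate
    sigma = 0.5   # output noise of the teacher (assumption for illustration)
    steps, burn = 20000, 2000

    B = rng.standard_normal(N)   # linear teacher
    J = np.zeros(N)              # on-line student
    J_avg = np.zeros(N)          # time-domain ensemble: average of the student over time
    count = 0

    for t in range(steps):
        x = rng.standard_normal(N) / np.sqrt(N)
        y = B @ x + sigma * rng.standard_normal()   # noisy teacher output
        J += eta * (y - J @ x) * x                  # on-line gradient (LMS) step
        if t >= burn:                               # combine the student with its own past
            count += 1
            J_avg += (J - J_avg) / count

    def eps(w):
        # Generalization error of a linear student: mean squared weight error / 2.
        return np.mean((B - w) ** 2) / 2

    print(f"final student: {eps(J):.4f}   time-domain ensemble: {eps(J_avg):.4f}")
    ```

    The time average smooths out the noise-driven fluctuations of the final student around the teacher, which is the mechanism behind combining students across time rather than across space.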

    Statistical mechanics, generalisation and regularisation of neural network models


    Statistical Mechanics of Soft Margin Classifiers

    We study the typical learning properties of the recently introduced Soft Margin Classifiers (SMCs), learning realizable and unrealizable tasks, with the tools of Statistical Mechanics. We derive analytically the behaviour of the learning curves in the regime of very large training sets. We obtain exponential and power laws for the decay of the generalization error towards the asymptotic value, depending on the task and on general characteristics of the distribution of stabilities of the patterns to be learned. The optimal learning curves of the SMCs, which give the minimal generalization error, are obtained by tuning the coefficient controlling the trade-off between the error and the regularization terms in the cost function. If the task is realizable by the SMC, the optimal performance is better than that of a hard margin Support Vector Machine and is very close to that of a Bayesian classifier.
    Comment: 26 pages, 12 figures, submitted to Physical Review
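    The cost function described above, an error term traded off against a regularization term, can be sketched with a hinge loss plus an L2 penalty. This is a generic soft-margin sketch under assumed parameters (teacher-generated realizable task, subgradient descent, the trade-off weight `lam`), not the paper's specific SMC analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, P = 20, 400                          # dimension and training-set size
    B = rng.standard_normal(N)              # teacher: the task is realizable
    X = rng.standard_normal((P, N)) / np.sqrt(N)
    y = np.sign(X @ B)

    def train_smc(lam, epochs=1000, lr=0.05):
        """Subgradient descent on lam/2 |w|^2 + mean_i hinge(y_i w.x_i).

        lam plays the role of the coefficient controlling the trade-off
        between the regularization term and the error term."""
        w = np.zeros(N)
        for _ in range(epochs):
            mask = y * (X @ w) < 1                        # patterns inside the soft margin
            hinge_grad = -(y[mask, None] * X[mask]).sum(axis=0) / P
            w -= lr * (lam * w + hinge_grad)
        return w

    w = train_smc(lam=0.1)

    # Generalization performance estimated on fresh patterns from the same teacher.
    Xt = rng.standard_normal((2000, N)) / np.sqrt(N)
    acc = np.mean(np.sign(Xt @ w) == np.sign(Xt @ B))
    print(f"test accuracy: {acc:.3f}")
    ```

    Sweeping `lam` and picking the value with the lowest generalization error mimics the tuning of the trade-off coefficient that yields the optimal learning curves in the abstract.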

    2004 Graduate Bulletin

    After 2003 the University of Dayton Bulletin went exclusively online. This copy was printed from the web and scanned by the Registrar’s Office. For general information about the university, please see the Undergraduate Bulletin.
    https://ecommons.udayton.edu/bulletin_grad/1000/thumbnail.jp

    Neural networks: from the perceptron to deep nets

    Artificial neural networks have been studied through the prism of statistical mechanics as disordered systems since the 80s, starting from the simple models of Hopfield's associative memory and the single-neuron perceptron classifier. Assuming data are generated by a teacher model, asymptotic generalisation predictions were originally derived using the replica method, and the on-line learning dynamics has been described in the large-system limit. In this chapter, we review the key original ideas of this literature along with their heritage in the ongoing quest to understand the efficiency of modern deep learning algorithms. One goal of current and future research is to characterize the bias of learning algorithms toward well-generalising minima in complex overparametrized loss landscapes with many solutions that perfectly interpolate the training data. Works on perceptrons, two-layer committee machines and kernel-like learning machines shed light on these benefits of overparametrization. Another goal is to understand the advantage of depth, as models now commonly feature tens or hundreds of layers. While replica computations apparently fall short of describing learning in general deep neural networks, studies of simplified linear or untrained models, as well as the derivation of scaling laws, provide the first elements of an answer.
    Comment: Contribution to the book Spin Glass Theory and Far Beyond: Replica Symmetry Breaking after 40 Years; Chap. 2

    Teaching and Learning of Fluid Mechanics, Volume II

    This book is devoted to the teaching and learning of fluid mechanics. Fluid mechanics occupies a privileged position in the sciences; it is taught in various science departments including physics, mathematics, mechanical, chemical and civil engineering, and environmental sciences, each highlighting a different aspect or interpretation of the foundation and applications of fluids. While scholarship in fluid mechanics is vast, expanding into the areas of experimental, theoretical and computational fluid mechanics, there is little discussion among scientists about the different possible ways of teaching this subject. We think there is much to be learned, for teachers and students alike, from an interdisciplinary dialogue about fluids. This volume therefore highlights articles which have bearing on the pedagogical aspects of fluid mechanics at the undergraduate and graduate level.

    University of New Hampshire, The graduate school 1977-78

    Includes Graduate School catalog; title varies.

    Undergraduate and Graduate Course Descriptions, 2013 Summer

    Wright State University undergraduate and graduate course descriptions from Summer 2013.