
    Optimization of the Asymptotic Property of Mutual Learning Involving an Integration Mechanism of Ensemble Learning

    We propose an optimization method for mutual learning that converges to the same state as optimal ensemble learning within the framework of on-line learning, and we analyze its asymptotic behavior using statistical mechanics. The proposed model consists of two learning stages: first, two students independently learn from a teacher; then the students learn from each other through mutual learning. In the mutual-learning stage the generalization error improves even though the teacher no longer takes part. However, when the students start with different overlaps (direction cosines) with the teacher, the student with the larger initial overlap tends to end up with a larger generalization error than it had before the mutual learning. To overcome this problem, the proposed method optimizes the step sizes of the two students so as to minimize the asymptotic generalization error. Consequently, the optimized mutual learning converges to a generalization error identical to that of optimal ensemble learning. In addition, we show the relationship between the optimum step sizes of the mutual learning and the integration mechanism of ensemble learning. Comment: 13 pages, 3 figures, submitted to the Journal of the Physical Society of Japan
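
    The two-stage protocol described above is easy to mimic in a few lines. The numpy sketch below is only an illustration of the setup under its own assumptions (linear students, arbitrary step sizes eta1 and eta2, a Monte Carlo error measure); the paper's contribution is the statistical-mechanical analysis that picks the step sizes so the asymptotic generalization error matches that of optimal ensemble learning.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                             # input dimension
B = rng.standard_normal(N)          # linear teacher
J1 = rng.standard_normal(N)         # student 1
J2 = 0.1 * rng.standard_normal(N)   # student 2 (different initial overlap with B)

def gen_error(J):
    """Mean squared error against the teacher on fresh Gaussian inputs."""
    X = rng.standard_normal((2000, N))
    return np.mean((X @ (J - B)) ** 2) / (2 * N)

# Stage 1: both students learn independently from the teacher.
eta = 0.5
for _ in range(5 * N):
    x = rng.standard_normal(N)
    y = B @ x
    J1 += (eta / N) * (y - J1 @ x) * x
    J2 += (eta / N) * (y - J2 @ x) * x

# Stage 2: mutual learning -- each student moves toward the other's output;
# the teacher no longer takes part.  The paper optimizes eta1 and eta2.
eta1, eta2 = 0.3, 0.7
for _ in range(5 * N):
    x = rng.standard_normal(N)
    v1, v2 = J1 @ x, J2 @ x
    J1 += (eta1 / N) * (v2 - v1) * x
    J2 += (eta2 / N) * (v1 - v2) * x

print(gen_error(J1), gen_error(J2))
```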

    Ensemble learning of linear perceptron; Online learning theory

    Within the framework of on-line learning, we study the generalization error of an ensemble learning machine that learns from a linear teacher perceptron. The generalization error achieved by an ensemble of linear perceptrons with homogeneous or inhomogeneous initial weight vectors is calculated exactly in the thermodynamic limit of a large number of input elements and shows rich behavior. Our main findings are as follows. For learning with homogeneous initial weight vectors, the generalization error of an infinite number of linear student perceptrons is only half that of a single linear perceptron, and for a finite number K of linear perceptrons it converges to the infinite-K value as O(1/K). For learning with inhomogeneous initial weight vectors, it is advantageous to take a weighted average over the outputs of the linear perceptrons, and we show the conditions under which the optimal weights remain constant during the learning process. The optimal weights depend only on the correlations of the initial weight vectors. Comment: 14 pages, 3 figures, submitted to Physical Review
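
    As a rough companion to the homogeneous case above, the sketch below trains K linear students on a common example stream from a linear teacher and compares a single student against their unweighted average; the dimensions, learning rate, and uniform averaging are assumptions, and the weighted averaging discussed in the abstract is what replaces the uniform average in the inhomogeneous case.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, steps, eta = 500, 8, 2500, 0.5
B = rng.standard_normal(N)                      # linear teacher
J = rng.standard_normal((K, N))                 # K students with random initial vectors

for _ in range(steps):
    x = rng.standard_normal(N)
    y = B @ x
    J += (eta / N) * (y - J @ x)[:, None] * x   # each student runs plain gradient descent

X = rng.standard_normal((5000, N))              # fresh test inputs
y_true = X @ B
single = np.mean((X @ J[0] - y_true) ** 2)              # one student
ensemble = np.mean((X @ J.mean(axis=0) - y_true) ** 2)  # unweighted ensemble average
print(single, ensemble)                         # the ensemble error is the smaller one
```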

    Statistical Mechanics of Nonlinear On-line Learning for Ensemble Teachers

    We analyze the generalization performance of a student in a model composed of nonlinear perceptrons: a true teacher, ensemble teachers, and the student. Using statistical mechanics in the framework of on-line learning, we calculate the generalization error of the student analytically or numerically. We treat two well-known learning rules: Hebbian learning and perceptron learning. As a result, the nonlinear model is shown to behave qualitatively differently from the linear model, and Hebbian learning and perceptron learning are shown to behave qualitatively differently from each other. For Hebbian learning, the solutions can be obtained analytically. In this case the generalization error decreases monotonically, and its steady-state value is independent of the learning rate. The larger the number of ensemble teachers and the greater their variety, the smaller the generalization error. For perceptron learning, the solutions must be obtained numerically. In this case the dynamics of the generalization error are non-monotonic. The smaller the learning rate, the larger the number of ensemble teachers, and the greater their variety, the smaller the minimum value of the generalization error. Comment: 13 pages, 9 figures
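
    The two learning rules named in the abstract can be written down directly. The sketch below is a minimal rendering under assumptions of its own: the ensemble teachers are constructed here as noisy copies of the true teacher, each example's label comes from a randomly chosen ensemble teacher, and the generalization error is estimated by Monte Carlo rather than by the statistical-mechanical theory.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, steps, eta = 500, 5, 10000, 1.0
B_true = rng.standard_normal(N)                       # true teacher
B_ens = B_true + 0.5 * rng.standard_normal((K, N))    # ensemble teachers (illustrative)

def gen_error(J):
    """Probability of disagreeing with the true teacher on fresh inputs."""
    X = rng.standard_normal((5000, N))
    return np.mean(np.sign(X @ J) != np.sign(X @ B_true))

J_hebb = rng.standard_normal(N)
J_perc = J_hebb.copy()

for _ in range(steps):
    x = rng.standard_normal(N)
    y = np.sign(B_ens[rng.integers(K)] @ x)           # label from a random ensemble teacher
    J_hebb += (eta / np.sqrt(N)) * y * x              # Hebbian: update on every example
    if np.sign(J_perc @ x) != y:                      # perceptron: update only on mistakes
        J_perc += (eta / np.sqrt(N)) * y * x

print(gen_error(J_hebb), gen_error(J_perc))
```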

    Theoretical study of the (3x2) reconstruction of beta-SiC(001)

    By means of ab initio molecular dynamics and band-structure calculations, as well as calculated STM images, we have singled out, from among several proposed in the literature, one structural model for the (3x2) reconstruction of the Si-terminated (001) surface of cubic SiC. This is an alternate dimer-row model with an excess Si coverage of 1/3, yielding STM images in good accord with recent measurements [F. Semond et al., Phys. Rev. Lett. 77, 2013 (1996)]. Comment: To be published in PRB Rapid Communications

    On balanced complementation for regular t-wise balanced designs

    Vanstone has given a procedure, called r-complementation, for constructing a regular pairwise balanced design from an existing regular pairwise balanced design. In this paper, we give a generalization of r-complementation, called balanced complementation. We present necessary and sufficient conditions under which balanced complementation yields a regular t-wise balanced design from an existing regular t-wise balanced design, and we characterize those aspects of designs that permit balanced complementation. The results obtained here are applied to construct regular t-wise balanced designs that are useful in statistics.
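
    Neither r-complementation nor balanced complementation is defined in the abstract, so no attempt is made to reproduce them here. The sketch below only checks a classical fact in the same spirit, namely that complementing every block of the Fano plane, a 2-(7,3,1) design, yields a 2-(7,4,2) design; it illustrates complementation of designs in general, not the paper's construction.

```python
from itertools import combinations

# Fano plane: a 2-(7,3,1) design on the point set {1,...,7}.
points = set(range(1, 8))
blocks = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
co_blocks = [points - b for b in blocks]          # complement every block

def pair_counts(blks):
    """Number of blocks containing each pair of points."""
    return {p: sum(p[0] in b and p[1] in b for b in blks)
            for p in combinations(sorted(points), 2)}

print(set(pair_counts(blocks).values()))     # {1}: every pair lies in exactly one block
print(set(pair_counts(co_blocks).values()))  # {2}: every pair lies in exactly two blocks
```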

    The scaling limit of the incipient infinite cluster in high-dimensional percolation. II. Integrated super-Brownian excursion

    For independent nearest-neighbour bond percolation on Z^d with d >> 6, we prove that the incipient infinite cluster's two-point and three-point functions converge to those of integrated super-Brownian excursion (ISE) in the scaling limit. The proof is based on an extension of the new expansion for percolation derived in a previous paper, and involves treating the magnetic field as a complex variable. A special case of our result for the two-point function implies that, at the critical point, the probability that the cluster of the origin consists of n sites is given by a multiple of n^{-3/2}, plus an error term of order n^{-3/2-\epsilon} with \epsilon > 0. This is a strong form of the statement that the critical exponent delta is given by delta = 2. Comment: 56 pages, 3 Postscript figures, in AMS-LaTeX, with the graphicx, epic, and xr packages
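
    Under one standard convention for the exponent delta, the last two sentences of the abstract fit together as follows; this is a worked restatement, not an addition to the result.

```latex
\[
P_{p_c}\bigl(|C(0)| = n\bigr) \;\sim\; C\, n^{-1-1/\delta},
\qquad
n^{-1-1/\delta} = n^{-3/2}
\;\Longrightarrow\;
1 + \frac{1}{\delta} = \frac{3}{2}
\;\Longrightarrow\;
\delta = 2 .
\]
```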

    Statistical Mechanics of Time Domain Ensemble Learning

    Conventional ensemble learning combines students in the space domain. In this paper, by contrast, we combine students in the time domain and call this time-domain ensemble learning. We analyze the generalization performance of time-domain ensemble learning in the framework of online learning using a statistical-mechanical method, treating a model in which both the teacher and the student are linear perceptrons with noise. Time-domain ensemble learning turns out to be twice as effective as conventional space-domain ensemble learning. Comment: 10 pages, 10 figures
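
    For a linear student, combining outputs taken at different time steps is the same as averaging the weight vector over time, so the flavour of time-domain ensemble learning can be mimicked by plain iterate averaging. The numpy sketch below does that for a noisy linear teacher; the noise level, learning rate, and the choice to average the second half of the trajectory are assumptions, and the factor-of-two statement above refers to the paper's analytical result, not to this simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
N, steps, eta, sigma = 500, 10000, 0.5, 0.5
B = rng.standard_normal(N)                        # linear teacher
J = rng.standard_normal(N)                        # single on-line student
J_avg = np.zeros(N)                               # time-domain combination of the student
snapshots = 0

for t in range(steps):
    x = rng.standard_normal(N)
    y = B @ x + sigma * rng.standard_normal()     # noisy teacher output
    J += (eta / N) * (y - J @ x) * x
    if t >= steps // 2:                           # average the later part of the trajectory
        J_avg += J
        snapshots += 1
J_avg /= snapshots

X = rng.standard_normal((5000, N))
print(np.mean((X @ (J - B)) ** 2),                # final iterate
      np.mean((X @ (J_avg - B)) ** 2))            # time-averaged student (typically smaller)
```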

    Climate change amplifies plant invasion hotspots in Nepal

    Aim Climate change has increased the risk of biological invasions, particularly by expanding the climatically suitable regions for invasive alien species. The distributions of many native and invasive species have been predicted to change under future climate. We performed species distribution modelling of invasive alien plants (IAPs) to identify invasion hotspots under current and future climate scenarios in Nepal, a country ranked among the most vulnerable in the world to biological invasions and climate change. Location Nepal. Methods We predicted the climatically suitable niches of 24 of the 26 IAPs reported in Nepal under current and future climate (2050 for RCP 6.0) using an ensemble of species distribution models. We also conducted hotspot analysis to highlight the geographic hotspots for IAPs in different climatic zones, land-cover classes, ecoregions, physiographic regions and federal states. Results Under future climate, the climatically suitable regions will expand for 75% of the IAPs and contract for the remaining 25%. A high proportion of the modelled suitable niches of IAPs occurred on agricultural lands, followed by forests. In aggregate, both the extent and the intensity (invasion hotspots) of the climatically suitable regions for IAPs will increase in Nepal under future climate scenarios. The invasion hotspots will expand towards the high-elevation mountainous regions, where land use is being rapidly transformed by infrastructure development and the expansion of tourism and trade. Main conclusions The negative impacts of IAPs on livelihoods, biodiversity and ecosystem services, as well as the economic losses they cause, may be amplified in the future if preventive and control measures are not initiated immediately. Therefore, the management of IAPs in Nepal should account for the risk of climate change-induced biological invasions into new areas, primarily in the mountains.
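
    The modelling pipeline summarized under Methods (an ensemble of correlative models projected onto current and future climate, then thresholded into hotspots) can be caricatured with generic classifiers. The sketch below uses synthetic data and scikit-learn models; the predictor names, the uniform averaging of model outputs, the 0.7 suitability threshold, and the crude "future" shift are all assumptions standing in for the study's dedicated SDM ensemble and its RCP 6.0 projections for 2050.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 1000
X_now = rng.normal(size=(n, 3))      # stand-ins for e.g. temperature, precipitation, elevation
presence = (X_now[:, 0] + 0.5 * X_now[:, 1]
            + rng.normal(scale=0.5, size=n) > 0).astype(int)   # synthetic occurrence records

# A small ensemble of different model families, fitted to the same records.
models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(n_estimators=200, random_state=0),
          GradientBoostingClassifier(random_state=0)]
for m in models:
    m.fit(X_now, presence)

X_future = X_now + np.array([1.0, -0.3, 0.0])    # crude warmer/drier shift, purely illustrative
suit_now = np.mean([m.predict_proba(X_now)[:, 1] for m in models], axis=0)
suit_future = np.mean([m.predict_proba(X_future)[:, 1] for m in models], axis=0)

# "Hotspots" here are simply cells whose ensemble suitability exceeds a threshold.
print((suit_now > 0.7).mean(), (suit_future > 0.7).mean())
```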

    Statistical Mechanics of Linear and Nonlinear Time-Domain Ensemble Learning

    Conventional ensemble learning combines students in the space domain. In this paper, however, we combine students in the time domain and call it time-domain ensemble learning. We analyze, compare, and discuss the generalization performance of time-domain ensemble learning for both a linear model and a nonlinear model. Working in the framework of online learning with a statistical-mechanical method, we show that the two models behave qualitatively differently. In the linear model, the dynamics of the generalization error are monotonic, and we show analytically that time-domain ensemble learning is twice as effective as conventional ensemble learning. In the nonlinear model, by contrast, the generalization error behaves nonmonotonically when the learning rate is small. We show numerically that the generalization performance can be improved remarkably by exploiting this phenomenon and the divergence among students in the time domain. Comment: 11 pages, 7 figures