
    Ensemble learning of linear perceptron; Online learning theory

    Within the framework of on-line learning, we study the generalization error of an ensemble learning machine that learns from a linear teacher perceptron. The generalization error achieved by an ensemble of linear perceptrons with homogeneous or inhomogeneous initial weight vectors is calculated exactly in the thermodynamic limit of a large number of input elements and shows rich behavior. Our main findings are as follows. For learning with homogeneous initial weight vectors, the generalization error of an infinite number of linear student perceptrons is only half that of a single linear perceptron, and that of a finite ensemble of K linear perceptrons converges to the infinite-ensemble value as O(1/K). For learning with inhomogeneous initial weight vectors, it is advantageous to take a weighted average over the outputs of the linear perceptrons, and we show the conditions under which the optimal weights are constant during the learning process. The optimal weights depend only on the correlations of the initial weight vectors.
    Comment: 14 pages, 3 figures, submitted to Physical Review
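
    To make the setting concrete, here is a minimal simulation sketch in Python of on-line ensemble learning from a linear teacher: K linear students with independent random (homogeneous) initial weight vectors see the same example at each step and are combined by simple output averaging. The input scaling, the squared-error update, and the names N, K and eta are illustrative assumptions rather than the paper's notation, and the script only illustrates the setup, not a reproduction of the exact O(1/K) result.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 500           # input dimension (the paper works in the large-N limit)
        K = 10            # number of student perceptrons in the ensemble
        eta = 0.1         # learning rate
        steps = 20 * N    # number of on-line examples

        B = rng.standard_normal(N)          # linear teacher weight vector
        J = rng.standard_normal((K, N))     # K students, homogeneous random initial weights

        def gen_error(w):
            # E_x[(w.x - B.x)^2] / 2 for the scaled inputs below (x_i ~ N(0, 1/N))
            d = w - B
            return 0.5 * (d @ d) / N

        for _ in range(steps):
            x = rng.standard_normal(N) / np.sqrt(N)  # one fresh example, shared by all students
            t = B @ x                                # teacher output
            y = J @ x                                # all student outputs
            J += eta * np.outer(t - y, x)            # independent on-line (LMS-style) updates

        # For linear students, averaging outputs is the same as averaging weight vectors.
        ens_err = gen_error(J.mean(axis=0))
        single_err = float(np.mean([gen_error(J[k]) for k in range(K)]))
        print(f"single student e_g ~ {single_err:.4f}   K={K} ensemble e_g ~ {ens_err:.4f}")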

    Statistical Mechanics of Nonlinear On-line Learning for Ensemble Teachers

    We analyze the generalization performance of a student in a model composed of nonlinear perceptrons: a true teacher, ensemble teachers, and the student. Using statistical mechanics in the framework of on-line learning, we calculate the generalization error of the student analytically or numerically. We treat two well-known learning rules: Hebbian learning and perceptron learning. As a result, it is proven that the nonlinear model behaves qualitatively differently from the linear model. Moreover, it is clarified that Hebbian learning and perceptron learning behave qualitatively differently from each other. For Hebbian learning, the solutions can be obtained analytically. In this case, the generalization error decreases monotonically, and its steady-state value is independent of the learning rate. The larger the number of teachers and the greater the variety of the ensemble teachers, the smaller the generalization error. For perceptron learning, the solutions must be obtained numerically. In this case, the dynamical behavior of the generalization error is non-monotonic. The smaller the learning rate, the larger the number of teachers, and the greater the variety of the ensemble teachers, the smaller the minimum value of the generalization error.
    Comment: 13 pages, 9 figures
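
    As a concrete reference point for the two learning rules, the sketch below uses their standard textbook forms for sign-output perceptrons: Hebbian learning updates on every example, while perceptron learning updates only on mistakes. The ensemble teachers are modeled here as noisy copies of a true teacher and each example is labeled by one randomly chosen ensemble teacher; both choices, like the names N, K and eta, are assumptions made for illustration and may differ from the paper's model.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 500           # input dimension
        K = 3             # number of ensemble teachers
        eta = 0.5         # learning rate
        steps = 20 * N    # number of on-line examples

        A = rng.standard_normal(N)
        A /= np.linalg.norm(A)                     # true teacher direction
        # Ensemble teachers as noisy copies of the true teacher (assumed construction).
        teachers = np.array([A + 0.5 * rng.standard_normal(N) for _ in range(K)])
        teachers /= np.linalg.norm(teachers, axis=1, keepdims=True)

        def gen_error(J):
            # e_g = arccos(R)/pi: probability that the student and the true teacher disagree
            R = (J @ A) / np.linalg.norm(J)
            return np.arccos(np.clip(R, -1.0, 1.0)) / np.pi

        for rule in ("hebbian", "perceptron"):
            J = rng.standard_normal(N)
            for _ in range(steps):
                x = rng.standard_normal(N) / np.sqrt(N)
                t = np.sign(teachers[rng.integers(K)] @ x)  # label from one ensemble teacher
                if rule == "hebbian":
                    J += eta * t * x                        # update on every example
                elif np.sign(J @ x) != t:
                    J += eta * t * x                        # update only on mistakes
            print(f"{rule:10s}: generalization error vs. the true teacher ~ {gen_error(J):.3f}")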

    Dyons in N=4 Supersymmetric Theories and Three-Pronged Strings

    We construct and explore BPS states that preserve 1/4 of the supersymmetry in N=4 Yang-Mills theories. Such states are also realized as three-pronged strings ending on D3-branes. We correct the electric part of the BPS equation and relate its solutions to the unbroken abelian gauge group generators. Generic 1/4-BPS solitons are not spherically symmetric, but consist of two or more dyonic components held apart by a delicate balance between the static electromagnetic force and the scalar Higgs force. The instability previously found in three-pronged string configurations is due to excessive repulsion by one of these static forces. We also present an alternate construction of these 1/4-BPS states from quantum excitations around a magnetic monopole, and build up the supermultiplet for arbitrary (quantized) electric charge. The degeneracy and the highest spin of the supermultiplet increase linearly with the relative electric charge. We conclude with comments.
    Comment: 33 pages, two figures, LaTeX; a footnote added, the figure caption of Fig. 2 expanded, one more reference added

    Statistical Mechanics of Time Domain Ensemble Learning

    Conventional ensemble learning combines students in the space domain. In this paper, by contrast, we combine students in the time domain and call this time domain ensemble learning. We analyze the generalization performance of time domain ensemble learning in the framework of online learning using a statistical mechanical method. We treat a model in which both the teacher and the student are linear perceptrons with noise. We show that time domain ensemble learning is twice as effective as conventional space domain ensemble learning.
    Comment: 10 pages, 10 figures
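
    The idea of combining a single student's states at different time steps can be illustrated with a short sketch: a linear student learns on-line from a noisy linear teacher, and the weight vectors recorded during the second half of training are averaged and compared against the final weights. The noise model, snapshot schedule, and parameter names are illustrative assumptions rather than the paper's setup.

        import numpy as np

        rng = np.random.default_rng(2)
        N = 500            # input dimension
        eta = 0.3          # learning rate
        noise = 0.5        # std of the teacher's output noise
        steps = 20 * N     # number of on-line examples

        B = rng.standard_normal(N)     # linear teacher
        J = rng.standard_normal(N)     # single linear student
        snapshots = []

        def gen_error(w):
            # noise-free generalization error E_x[(w.x - B.x)^2] / 2 for x_i ~ N(0, 1/N)
            d = w - B
            return 0.5 * (d @ d) / N

        for step in range(steps):
            x = rng.standard_normal(N) / np.sqrt(N)
            t = B @ x + noise * rng.standard_normal()   # noisy teacher output
            J += eta * (t - J @ x) * x                  # on-line update of the single student
            if step >= steps // 2 and step % 5 == 0:
                snapshots.append(J.copy())              # record the student at different times

        J_time_avg = np.mean(snapshots, axis=0)         # "time-domain ensemble": average over time
        print(f"final student     e_g ~ {gen_error(J):.4f}")
        print(f"time-domain avg   e_g ~ {gen_error(J_time_avg):.4f}")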

    Statistical Mechanics of Linear and Nonlinear Time-Domain Ensemble Learning

    Conventional ensemble learning combines students in the space domain. In this paper, by contrast, we combine students in the time domain and call this time-domain ensemble learning. We analyze, compare, and discuss the generalization performance of time-domain ensemble learning for both a linear model and a nonlinear model. Working in the framework of online learning with a statistical mechanical method, we show that the two models behave qualitatively differently. In the linear model, the dynamical behavior of the generalization error is monotonic, and we show analytically that time-domain ensemble learning is twice as effective as conventional ensemble learning. In the nonlinear model, the generalization error exhibits nonmonotonic dynamical behavior when the learning rate is small. We show numerically that the generalization performance can be improved remarkably by exploiting this phenomenon and the divergence of the students in the time domain.
    Comment: 11 pages, 7 figures
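
    For a nonlinear student it is the outputs, not the weights, that one would naturally combine, since averaging weight vectors and averaging sign outputs are no longer equivalent. The sketch below takes a perceptron-rule student learning from a noisy sign teacher, records snapshots over time, and compares the final student with a majority vote of the snapshots' outputs on fresh test inputs. The label-flip noise, snapshot schedule, and parameters are assumptions made for illustration only.

        import numpy as np

        rng = np.random.default_rng(3)
        N = 1000           # input dimension
        eta = 0.1          # learning rate
        flip = 0.1         # probability that the teacher's label is flipped (noise)
        steps = 30 * N     # number of on-line examples

        A = rng.standard_normal(N)
        A /= np.linalg.norm(A)         # nonlinear (sign) teacher direction
        J = rng.standard_normal(N)     # single nonlinear student
        snapshots = []

        for step in range(steps):
            x = rng.standard_normal(N) / np.sqrt(N)
            t = np.sign(A @ x)
            if rng.random() < flip:
                t = -t                                  # noisy label
            if np.sign(J @ x) != t:
                J += eta * t * x                        # perceptron rule: update on mistakes only
            if step >= steps // 2 and step % 100 == 0:
                snapshots.append(J.copy())              # the student at different time steps

        # Compare the final student with a majority vote over the time snapshots on test data.
        X_test = rng.standard_normal((5000, N)) / np.sqrt(N)
        y_true = np.sign(X_test @ A)
        y_final = np.sign(X_test @ J)
        y_vote = np.sign(np.sign(X_test @ np.array(snapshots).T).sum(axis=1))
        print("final student test error   :", float(np.mean(y_final != y_true)))
        print("time-domain vote test error:", float(np.mean(y_vote != y_true)))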

    The RNA helicase Dbp7 promotes domain V/VI compaction and stabilization of inter-domain interactions during early 60S assembly

    Early pre-60S ribosomal particles are poorly characterized, highly dynamic complexes that undergo extensive rRNA folding and compaction concomitant with assembly of ribosomal proteins and exchange of assembly factors. Pre-60S particles contain numerous RNA helicases, which are likely regulators of accurate and efficient formation of appropriate rRNA structures. Here we reveal binding of the RNA helicase Dbp7 to domain V/VI of early pre-60S particles in yeast and show that in the absence of this protein, dissociation of the Npa1 scaffolding complex, release of the snR190 folding chaperone, recruitment of the A3 cluster factors and binding of the ribosomal protein uL3 are impaired. uL3 is critical for formation of the peptidyltransferase center (PTC) and is responsible for stabilizing interactions between the 5′ and 3′ ends of the 25S, an essential prerequisite for subsequent pre-60S maturation events. Highlighting the importance of pre-ribosome remodeling by Dbp7, our data suggest that in the absence of Dbp7 or its catalytic activity, early pre-ribosomal particles are targeted for degradation.

    Zero-Mode Dynamics of String Webs

    At sufficiently low energy the dynamics of a string web is dominated by zero modes involving rigid motion of the internal strings. The dimension of the associated moduli space equals the maximal number of internal faces in the web. The generic web moduli space has boundaries and multiple branches, and for webs with three or more faces the geometry is curved. Webs can also be studied in a lift to M-theory, where a string web is replaced by a membrane wrapped on a holomorphic curve in spacetime. In this case the moduli space is complexified and admits a Kaehler metric.
    Comment: LaTeX, 17 pages, 5 eps figures; v2: references added

    Moduli Space Dimensions of Multi-Pronged Strings

    The numbers of bosonic and fermionic zero modes of multi-pronged strings are counted in N=4 super-Yang-Mills theory and compared with those of the IIB string theory. We obtain a nice agreement for the fermionic zero modes, while our result for the bosonic zero modes differs from that obtained in the IIB string theory. The possible origin of the discrepancy is discussed.
    Comment: 15 pages, 2 figures

    Momentum modes of M5-branes in a 2d space

    We study M5-branes by considering selfdual strings parallel to a plane. With the internal oscillation frozen, each selfdual string gives a 5d SYM field. All selfdual strings together give a 6d field with 5 scalars, 3 gauge degrees of freedom and 8 fermionic degrees of freedom in the adjoint representation of U(N). Selfdual strings with the same orientation have the SYM-type interaction. For selfdual strings with different orientations, which can also be taken as non-parallel momentum modes of the 6d field on that plane, or as (p,q) and (r,s) strings on D3 with (p,q) \neq (r,s), the [i,j]+[j,k] \rightarrow [i,k] relation is not valid, so the coupling cannot be written in terms of the standard N \times N matrix multiplication. The 3-string junction, which is the bound state of the non-parallel [i,j] and [j,k] selfdual strings, may play a role here.
    Comment: 37 pages, 5 figures, to appear in JHEP; v2: reference added

    Social Interactions vs Revisions, What is important for Promotion in Wikipedia?

    In epistemic communities, people are said to be selected on their knowledge contributions to the project (articles, code, etc.). However, the socialization process is an important factor for inclusion, sustainability as a contributor, and promotion. So what matters for promotion: being a good contributor, being a good animator, or knowing the boss? We explore this question by looking at the election process for administrators in the English Wikipedia community. We model the candidates according to their revision and/or social attributes, and use these attributes to construct a predictive model of promotion success, based on the candidates' past behavior and computed with a random forest algorithm. Our model, which combines knowledge-contribution variables and social-networking variables, successfully explains 78% of the outcomes, which is better than previous models; it also helps to refine the criteria for election. While the number of knowledge contributions is the most important element, social interactions come a close second in explaining the election. Being connected with one's future peers (the admins) can make the difference between success and failure, making this epistemic community a very social community too.
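
    As an illustration of the kind of predictive model described above, the sketch below trains a scikit-learn random forest on synthetic candidate data with a few contribution-style and social-network-style features, then prints the held-out accuracy together with the feature importances. The feature names, the synthetic outcome, and all numbers are placeholders invented for the example; they are not the paper's dataset, variables, or results.

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)
        n = 2000  # synthetic candidates

        # Hypothetical features: contribution-style counts and social-network-style counts.
        candidates = pd.DataFrame({
            "n_revisions":       rng.poisson(3000, n),   # knowledge-contribution side
            "n_articles":        rng.poisson(150, n),
            "talk_messages":     rng.poisson(400, n),    # social-interaction side
            "distinct_contacts": rng.poisson(80, n),
            "ties_to_admins":    rng.poisson(5, n),
        })

        # Synthetic outcome mixing both kinds of signal, for demonstration only.
        score = (0.4 * np.log1p(candidates["n_revisions"])
                 + 0.3 * np.log1p(candidates["talk_messages"])
                 + 0.3 * np.log1p(candidates["ties_to_admins"]))
        promoted = (score + rng.normal(0, 0.5, n)) > np.median(score)

        X_train, X_test, y_train, y_test = train_test_split(
            candidates, promoted, test_size=0.25, random_state=0)
        clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

        print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
        for name, imp in sorted(zip(candidates.columns, clf.feature_importances_),
                                key=lambda p: -p[1]):
            print(f"{name:17s} importance {imp:.3f}")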