
    Statistical Mechanics of Time Domain Ensemble Learning

    Conventional ensemble learning combines students in the space domain. In this paper, by contrast, we combine students in the time domain and call this time domain ensemble learning. We analyze the generalization performance of time domain ensemble learning in the framework of online learning using a statistical mechanical method. We treat a model in which both the teacher and the student are linear perceptrons with noise. Time domain ensemble learning turns out to be twice as effective as conventional space domain ensemble learning. Comment: 10 pages, 10 figures
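    As a rough illustration of the time-domain idea (not the paper's exact formulation), the sketch below trains a single noisy linear student online on a noisy linear teacher and compares the latest weight vector with the average of weight snapshots taken at several well-separated times; the dimension, learning rate, noise level, snapshot schedule, and the squared-error measure are all assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000           # input dimension (the analysis works in the limit N -> infinity)
eta = 0.3          # learning rate (assumed value)
steps = 20 * N     # number of online examples
noise = 0.3        # std of the output noise added to both teacher and student

B = rng.standard_normal(N)
B *= np.sqrt(N) / np.linalg.norm(B)   # normalize so that |B|^2 = N

def train(snapshot_times):
    """Online gradient-descent learning of a linear student; return weight snapshots."""
    J = np.zeros(N)
    snaps = []
    for t in range(1, steps + 1):
        x = rng.standard_normal(N) / np.sqrt(N)
        y_teacher = B @ x + noise * rng.standard_normal()
        y_student = J @ x + noise * rng.standard_normal()
        J += eta * (y_teacher - y_student) * x
        if t in snapshot_times:
            snaps.append(J.copy())
    return snaps

def gen_error(J):
    """One conventional error measure: half the mean squared output difference."""
    return 0.5 * (np.sum((J - B) ** 2) / N + 2 * noise ** 2)

# Time-domain ensemble: average the *same* student's weights over several time steps.
snaps = train({10 * N, 12 * N, 14 * N, 16 * N, 18 * N, 20 * N})
print("latest student  :", gen_error(snaps[-1]))
print("time-domain avg :", gen_error(np.mean(snaps, axis=0)))
```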

    Statistical Mechanics of Nonlinear On-line Learning for Ensemble Teachers

    We analyze the generalization performance of a student in a model composed of nonlinear perceptrons: a true teacher, ensemble teachers, and the student. We calculate the generalization error of the student analytically or numerically using statistical mechanics in the framework of on-line learning. We treat two well-known learning rules: Hebbian learning and perceptron learning. As a result, it is proven that the nonlinear model shows qualitatively different behaviors from the linear model. Moreover, it is clarified that Hebbian learning and perceptron learning show qualitatively different behaviors from each other. In Hebbian learning, we can obtain the solutions analytically. In this case, the generalization error decreases monotonically, and its steady value is independent of the learning rate. The larger the number of teachers and the more variety the ensemble teachers have, the smaller the generalization error becomes. In perceptron learning, we have to obtain the solutions numerically. In this case, the dynamical behavior of the generalization error is non-monotonic. The smaller the learning rate, the larger the number of teachers, and the more variety the ensemble teachers have, the smaller the minimum value of the generalization error becomes. Comment: 13 pages, 9 figures
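    The two learning rules named in the abstract can be sketched as follows; the protocol in which the student receives, at each step, the label of one randomly chosen ensemble teacher, and the construction of the ensemble teachers as noisy copies of the true teacher, are assumptions made for illustration rather than the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, eta = 1000, 5, 0.1   # input dimension, number of ensemble teachers, learning rate (assumed)

# True teacher and ensemble teachers; each ensemble teacher is a noisy copy of the true
# teacher, so the ensemble has some "variety" (the 0.8/0.6 mixing is an arbitrary choice).
A = rng.standard_normal(N)
teachers = [0.8 * A + 0.6 * rng.standard_normal(N) for _ in range(K)]

def hebbian_step(J, x, v):
    """Hebbian rule: always move toward the teacher's label."""
    return J + eta * v * x

def perceptron_step(J, x, v):
    """Perceptron rule: update only when the student disagrees with the teacher."""
    return J + eta * v * x if np.sign(J @ x) != v else J

def train(step_fn, steps=20 * N):
    J = rng.standard_normal(N) * 1e-3
    for _ in range(steps):
        x = rng.standard_normal(N) / np.sqrt(N)
        v = np.sign(teachers[rng.integers(K)] @ x)   # label from one random ensemble teacher
        J = step_fn(J, x, v)
    return J

def gen_error(J):
    """Disagreement rate with the *true* teacher, estimated on fresh random inputs."""
    X = rng.standard_normal((5000, N)) / np.sqrt(N)
    return np.mean(np.sign(X @ J) != np.sign(X @ A))

print("Hebbian   :", gen_error(train(hebbian_step)))
print("Perceptron:", gen_error(train(perceptron_step)))
```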

    On-line Learning of an Unlearnable True Teacher through Mobile Ensemble Teachers

    On-line learning of a hierarchical learning model is studied by a method from statistical mechanics. In our model, a simple-perceptron student learns not from the true teacher directly, but from ensemble teachers who learn from the true teacher with a perceptron learning rule. Since the true teacher is a non-monotonic perceptron and the ensemble teachers are simple perceptrons, the ensemble teachers go around the unlearnable true teacher, keeping their distance from it fixed, in the asymptotic steady state. The generalization performance of the student is shown to exceed that of the ensemble teachers in a transient state, as was shown in similar ensemble-teacher models. Further, it is found that keeping the ensemble teachers moving even in the steady state, in contrast to fixing them, is efficient for the performance of the student. Comment: 18 pages, 8 figures
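    A minimal sketch of this hierarchical setup, assuming a reversed-wedge-type non-monotonic true teacher, ensemble teachers that keep learning with the perceptron rule (and therefore keep moving, since the target is unlearnable), and a student that learns from a majority vote of the teachers; the vote and all parameter values are assumptions, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, a, eta = 1000, 3, 1.0, 0.05   # dimension, number of teachers, wedge width, learning rate

A = rng.standard_normal(N)

def true_label(x):
    """Non-monotonic ('reversed-wedge') true teacher, unlearnable by a simple perceptron."""
    u = A @ x
    return np.sign(u * (u - a) * (u + a))

B = [rng.standard_normal(N) for _ in range(K)]   # ensemble teachers: simple perceptrons
J = np.zeros(N)                                  # student: a simple perceptron
B_mid = None

for t in range(1, 30 * N + 1):
    x = rng.standard_normal(N) / np.sqrt(N)
    y = true_label(x)
    for k in range(K):                           # teachers keep learning (perceptron rule) and,
        if np.sign(B[k] @ x) != y:               # since the target is unlearnable, keep moving
            B[k] = B[k] + eta * y * x
    v = np.sign(sum(np.sign(b @ x) for b in B))  # majority vote of the ensemble teachers
    if np.sign(J @ x) != v:                      # the student learns only from the teachers
        J = J + eta * v * x
    if t == 20 * N:
        B_mid = [b.copy() for b in B]            # remember the teachers' directions

cos = lambda u, w: u @ w / (np.linalg.norm(u) * np.linalg.norm(w))
print("teacher 0 direction change over the last 10N steps (cos):", cos(B[0], B_mid[0]))
print("student / teacher 0 overlap (cos):", cos(J, B[0]))
```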

    Statistical Mechanics of Linear and Nonlinear Time-Domain Ensemble Learning

    Conventional ensemble learning combines students in the space domain. In this paper, however, we combine students in the time domain and call it time-domain ensemble learning. We analyze and compare the generalization performance of time-domain ensemble learning for both a linear model and a nonlinear model. Working in the framework of online learning with a statistical mechanical method, we show that the two models behave qualitatively differently. In the linear model, the dynamical behavior of the generalization error is monotonic, and we analytically show that time-domain ensemble learning is twice as effective as conventional ensemble learning. In the nonlinear model, by contrast, the generalization error exhibits nonmonotonic dynamics when the learning rate is small. We numerically show that the generalization performance can be improved remarkably by exploiting this phenomenon and the divergence of students in the time domain. Comment: 11 pages, 7 figures
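    For the nonlinear case, a time-domain ensemble can be formed by voting over the student's outputs at several stored time steps rather than by averaging weights. The sketch below uses the perceptron rule with a noisy nonlinear teacher; the noise level, small learning rate, snapshot schedule, and the choice of a majority vote are assumptions made for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
N, eta, sigma = 1000, 0.05, 0.3     # dimension, small learning rate, teacher output noise (assumed)
B = rng.standard_normal(N)          # nonlinear teacher: label = sign(B @ x + noise)

J = rng.standard_normal(N) * 1e-3   # nonlinear student, trained with the perceptron rule
snapshots = []
for t in range(1, 30 * N + 1):
    x = rng.standard_normal(N) / np.sqrt(N)
    y = np.sign(B @ x + sigma * rng.standard_normal())
    if np.sign(J @ x) != y:
        J = J + eta * y * x
    if t > 10 * N and t % (4 * N) == 0:   # keep widely spaced snapshots after a burn-in
        snapshots.append(J.copy())

def gen_error(predict):
    """Disagreement rate with the noise-free teacher on fresh random inputs."""
    X = rng.standard_normal((5000, N)) / np.sqrt(N)
    return np.mean(predict(X) != np.sign(X @ B))

S = np.column_stack(snapshots)      # one column per stored time step (5 snapshots here)
print("latest snapshot :", gen_error(lambda X: np.sign(X @ S[:, -1])))
print("time-domain vote:", gen_error(lambda X: np.sign(np.sign(X @ S).sum(axis=1))))
```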

    A Measurement of Proper Motions of SiO Maser Sources in the Galactic Center with the VLBA

    We report on high-precision astrometric observations of maser sources around the Galactic Center in the SiO J=1--0 v=1 and 2 lines with the VLBA during 2001--2004. With phase-referencing interferometry referred to the radio continuum source Sgr A*, accurate positions of the masers were obtained for three detected objects: IRS 10EE (7 epochs), IRS 15NE (2 epochs), and SiO 6 (1 epoch only). Because the circumstellar masers of these objects were resolved into several components, proper motions for the maser sources were derived with several different methods. Combining our VLBA results with those of previous VLA observations, we obtained an IRS 10EE proper motion of 76+-3 km s^{-1} (at 8 kpc) to the south relative to Sgr A*. The nearly null proper motion of this star in the east--west direction implies a net transverse motion of the infrared reference frame of about 30+-9 km s^{-1} to the west relative to Sgr A*. The proper-motion data also suggest that IRS 10EE is an astrometric binary with an unseen massive companion. Comment: High-resolution figures are available at ftp://ftp.nro.nao.ac.jp/nroreport/no656.pdf.gz . PASJ 60, No. 1 (2008), in press
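    For reference, the standard conversion between angular proper motion and transverse velocity that underlies figures such as the quoted 76 km s^{-1} at 8 kpc is v [km/s] = 4.74 x mu [mas/yr] x d [kpc]. The snippet below only illustrates this arithmetic; the ~2 mas/yr value is a back-of-the-envelope estimate, not a number taken from the paper.

```python
def transverse_velocity(mu_mas_per_yr, distance_kpc):
    """Transverse velocity in km/s for a proper motion mu (mas/yr) at a distance d (kpc)."""
    return 4.74 * mu_mas_per_yr * distance_kpc

# At an assumed Galactic Center distance of 8 kpc, the quoted 76 km/s southward motion
# corresponds to roughly 2 mas/yr of proper motion:
print(76 / (4.74 * 8))              # ~2.0 mas/yr
print(transverse_velocity(2.0, 8))  # ~76 km/s
```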

    Magnetic and Electronic Properties of Li$_x$CoO$_2$ Single Crystals

    Measurements of the electrical resistivity ($\rho$), DC magnetization ($M$), and specific heat ($C$) have been performed on the layered oxide Li$_x$CoO$_2$ ($0.25 \leq x \leq 0.99$) using single-crystal specimens. The $\rho$ versus temperature ($T$) curves for $x = 0.90$ and 0.99 are found to be insulating, while metallic behavior is observed for $0.25 \leq x \leq 0.71$. At $T_{\rm S} \sim 155$ K, a sharp anomaly with thermal hysteresis is observed in the $\rho$-$T$, $M$-$T$, and $C/T$-$T$ curves for $x = 0.66$, indicating the first-order character of the transition. The transition at $T_{\rm S} \sim 155$ K is observed over the wide range $x = 0.46$-$0.71$. It is found that the $M$-$T$ curve measured after rapid cooling differs from that after slow cooling below $T_{\rm F}$, which is $\sim 130$ K for $x = 0.46$-$0.71$. $T_{\rm F}$ agrees with the temperature at which motional narrowing of the $^7$Li NMR line width is observed, indicating that the Li ions stop diffusing and order at the regular sites below $T_{\rm F}$. The ordering of Li ions below $T_{\rm F} \sim 130$ K is likely triggered and stabilized by charge ordering in the CoO$_2$ layers below $T_{\rm S}$. Comment: 8 pages, 7 figures

    An extracellular serine protease produced by Vibrio vulnificus NCIMB 2137, a metalloprotease-gene negative strain isolated from a diseased eel

    Vibrio vulnificus is a ubiquitous estuarine microorganism, but it causes fatal systemic infections in immunocompromised humans and in cultured eels or shrimps. An extracellular metalloprotease, VVP/VvpE, has been reported to be a potential virulence factor of the bacterium; however, a few strains isolated from a diseased eel or shrimp were recently found to produce a serine protease termed VvsA, but not VVP/VvpE. In the present study, we found that these strains had lost the 80 kb genomic region that includes the gene encoding VVP/VvpE. We also purified VvsA from the culture supernatant through ammonium sulfate fractionation, gel filtration, and ion-exchange column chromatography, and the enzyme was demonstrated to be a chymotrypsin-like protease, as are those of some other vibrios. The gene vvsA was shown to constitute an operon with a downstream gene, vvsB, and several Vibrio species were found to have orthologues of vvsAB. These findings indicate that the genes vvp/vvpE and vvsAB might be mobile genetic elements.

    Ensemble learning of linear perceptron; Online learning theory

    Within the framework of on-line learning, we study the generalization error of an ensemble learning machine learning from a linear teacher perceptron. The generalization error achieved by an ensemble of linear perceptrons having homogeneous or inhomogeneous initial weight vectors is calculated exactly in the thermodynamic limit of a large number of input elements and shows rich behavior. Our main findings are as follows. For learning with homogeneous initial weight vectors, the generalization error of an infinite number of linear student perceptrons is only half that of a single linear perceptron, and for a finite number K of linear perceptrons it converges to the infinite-ensemble value as O(1/K). For learning with inhomogeneous initial weight vectors, it is advantageous to use weighted averaging over the outputs of the linear perceptrons, and we show the conditions under which the optimal weights are constant during the learning process. The optimal weights depend only on the correlations of the initial weight vectors. Comment: 14 pages, 3 figures, submitted to Physical Review
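    A minimal numerical sketch of output averaging over an ensemble of K linear students, assuming each student is trained on its own independent example stream with homogeneous (zero) initial weight vectors and uniform averaging weights; it only illustrates the averaging mechanism and is not meant to reproduce the paper's exact factor-of-two or O(1/K) results. Because the students are linear, averaging their outputs is the same as averaging their weight vectors.

```python
import numpy as np

rng = np.random.default_rng(4)
N, eta, K = 1000, 0.3, 8            # input dimension, learning rate, ensemble size (assumed values)
B = rng.standard_normal(N)          # linear teacher

def train_student(steps=5 * N):
    """One linear student trained online on its own independent example stream."""
    J = np.zeros(N)                 # homogeneous initial weight vectors (all zero)
    for _ in range(steps):
        x = rng.standard_normal(N) / np.sqrt(N)
        J += eta * (B @ x - J @ x) * x
    return J

students = [train_student() for _ in range(K)]

def gen_error(J):
    """Mean squared output difference from the teacher on random inputs: |J - B|^2 / N."""
    return np.sum((J - B) ** 2) / N

print("single student  :", gen_error(students[0]))
print("ensemble average:", gen_error(np.mean(students, axis=0)))   # uniform weights
```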