
    On Zone-Based Analysis of Duration Probabilistic Automata

    We propose an extension of zone-based algorithmics for analyzing timed automata to handle systems where timing uncertainty is treated as probabilistic rather than set-theoretic. We study duration probabilistic automata (DPA), which express multiple parallel processes admitting memoryful, continuously distributed durations. For this model we develop an extension of the zone-based forward reachability algorithm whose successor operator is a density transformer, thus providing a solution to verification and performance evaluation problems concerning acyclic DPA (or the bounded-horizon behavior of cyclic DPA). Comment: In Proceedings INFINITY 2010, arXiv:1010.611
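    The density-transformer construction itself does not fit in a short snippet, but the zone-based forward-reachability skeleton it extends can be sketched. The Python sketch below is a generic worklist reachability loop with a pluggable symbolic successor operator; the interval-based zone encoding, the single location, the bounded horizon, and the `toy_successors` function are illustrative assumptions, not the paper's algorithm.

```python
from collections import deque

def forward_reach(initial, successors):
    """Generic zone-based forward reachability via a worklist.

    `initial` is a symbolic state (location, zone) and `successors` maps a
    symbolic state to an iterable of successor symbolic states.  In the DPA
    setting the successor operator would additionally transform a density
    over clock valuations; here it is left abstract.
    """
    visited = set()
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        if state in visited:
            continue
        visited.add(state)
        frontier.extend(successors(state))
    return visited

# Toy instance: one clock, zones encoded as integer intervals (lo, hi), a single
# location 'q', and a bounded horizon (hi <= 3) that keeps exploration finite.
def toy_successors(state):
    loc, (lo, hi) = state
    if hi < 3:
        yield (loc, (lo, hi + 1))   # time elapse: widen the zone's upper bound

print(sorted(forward_reach(('q', (0, 0)), toy_successors)))
```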

    Ensemble learning of linear perceptron; Online learning theory

    Within the framework of on-line learning, we study the generalization error of an ensemble learning machine learning from a linear teacher perceptron. The generalization error achieved by an ensemble of linear perceptrons having homogeneous or inhomogeneous initial weight vectors is calculated exactly in the thermodynamic limit of a large number of input elements, and it shows rich behavior. Our main findings are as follows. For learning with homogeneous initial weight vectors, the generalization error using an infinite number of linear student perceptrons is only half that of a single linear perceptron, and for a finite number K of linear perceptrons it converges to the infinite-ensemble value as O(1/K). For learning with inhomogeneous initial weight vectors, it is advantageous to take a weighted average over the outputs of the linear perceptrons, and we show the conditions under which the optimal weights are constant during the learning process. The optimal weights depend only on the correlation of the initial weight vectors. Comment: 14 pages, 3 figures, submitted to Physical Review
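    As a rough numerical companion to this setting (not the paper's analytical calculation), the sketch below trains an ensemble of K linear students on-line from a linear teacher and compares the generalization error of a single student with that of the simple unweighted ensemble average; the dimension, learning rate, and number of updates are arbitrary placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, steps, eta = 500, 10, 20000, 0.1   # input dimension, ensemble size, updates, learning rate

B = rng.standard_normal(N)               # linear teacher weight vector
J = rng.standard_normal((K, N))          # K students with inhomogeneous initial weights

for _ in range(steps):
    x = rng.standard_normal(N) / np.sqrt(N)   # one fresh example per step (on-line learning)
    t = B @ x                                 # teacher output
    s = J @ x                                 # outputs of all K students
    J += eta * np.outer(t - s, x)             # gradient step for each student

# Estimate generalization errors on fresh test inputs.
X_test = rng.standard_normal((5000, N)) / np.sqrt(N)
t_test = X_test @ B
single = 0.5 * np.mean((X_test @ J[0] - t_test) ** 2)
ensemble = 0.5 * np.mean((X_test @ J.mean(axis=0) - t_test) ** 2)
print(f"single student: {single:.5f}   ensemble of {K}: {ensemble:.5f}")
```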

    Statistical Mechanics of Nonlinear On-line Learning for Ensemble Teachers

    We analyze the generalization performance of a student in a model composed of nonlinear perceptrons: a true teacher, ensemble teachers, and the student. We calculate the generalization error of the student analytically or numerically using statistical mechanics in the framework of on-line learning. We treat two well-known learning rules: Hebbian learning and perceptron learning. As a result, it is proven that the nonlinear model behaves qualitatively differently from the linear model. Moreover, it is clarified that Hebbian learning and perceptron learning behave qualitatively differently from each other. In Hebbian learning, we can obtain the solutions analytically. In this case, the generalization error decreases monotonically, and its steady value is independent of the learning rate. The larger the number of teachers and the more variety the ensemble teachers have, the smaller the generalization error. In perceptron learning, we have to obtain the solutions numerically. In this case, the dynamical behavior of the generalization error is non-monotonic. The smaller the learning rate, the larger the number of teachers, and the more variety the ensemble teachers have, the smaller the minimum value of the generalization error. Comment: 13 pages, 9 figures
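    A minimal Monte Carlo sketch of the two learning rules compared here, assuming ensemble teachers that are noisy copies of a true teacher and using the standard arccos overlap formula for the generalization error of sign perceptrons; all parameter values are placeholders rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, steps = 1000, 5, 50000          # input dimension, number of ensemble teachers, updates

A = rng.standard_normal(N)            # true teacher
A /= np.linalg.norm(A)
# Ensemble teachers: noisy copies of the true teacher (the noise level is their "variety").
B = np.array([A + 0.5 * rng.standard_normal(N) for _ in range(K)])
B /= np.linalg.norm(B, axis=1, keepdims=True)

def run(rule, eta=1.0):
    J = np.zeros(N)
    for step in range(steps):
        x = rng.standard_normal(N)
        k = step % K                              # ensemble teachers are used in turn
        t = np.sign(B[k] @ x)                     # nonlinear (sign) teacher output
        s = np.sign(J @ x) if J.any() else 1.0    # student output
        if rule == "hebbian":
            J = J + (eta / np.sqrt(N)) * t * x    # Hebbian: update on every example
        elif rule == "perceptron" and s != t:
            J = J + (eta / np.sqrt(N)) * t * x    # perceptron: update only on errors
    # Generalization error w.r.t. the *true* teacher: arccos of the overlap, over pi.
    cos = (J @ A) / (np.linalg.norm(J) * np.linalg.norm(A))
    return np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi

print("Hebbian   :", run("hebbian"))
print("Perceptron:", run("perceptron"))
```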

    A hierarchical approach to energy management in data centers

    This paper concerns the management of energy in data centers using a cyber-physical model that supports the coordinated control of both computational and thermal (cooling) resources. Based on the structure of the proposed model and on practical issues related to data center layout and the distribution of information, we propose a hierarchical optimization scheme in which the higher level chooses goals for regulation at the lower level. Linear programming is applied to solve sequences of one-step look-ahead problems both at the top level and in the lower-level controllers. The approach is illustrated with simulation results.
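    To make the one-step look-ahead idea concrete, here is a toy linear program in that spirit, solved with `scipy.optimize.linprog`; the decision variables, per-server power coefficients, and thermal caps are invented for illustration and are not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy one-step look-ahead: allocate a workload across 3 servers so that total demand
# is met, each server stays below a cap handed down by the higher level (e.g. from
# thermal considerations), and total power cost is minimized.  All numbers are made up.
demand = 100.0                                 # total workload to place this step
power_per_unit = np.array([1.0, 1.2, 0.8])     # power cost per unit of work, per server
caps = np.array([60.0, 50.0, 40.0])            # per-server caps chosen by the upper level

res = linprog(
    c=power_per_unit,                          # minimize total power
    A_eq=np.ones((1, 3)), b_eq=[demand],       # meet the demand exactly
    bounds=[(0.0, cap) for cap in caps],       # respect the caps
    method="highs",
)
print("allocation:", res.x, "  total power:", res.fun)
```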

    On-Line Learning with Restricted Training Sets: An Exactly Solvable Case

    We solve the dynamics of on-line Hebbian learning in large perceptrons exactly, for the regime where the size of the training set scales linearly with the number of inputs. We consider both noiseless and noisy teachers. Our calculation cannot be extended to non-Hebbian rules, but the solution provides a convenient and welcome benchmark with which to test more general and advanced theories for solving the dynamics of learning with restricted training sets. Comment: 19 pages, eps figures included, uses epsfig macros
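    A direct simulation of this regime is straightforward and can serve as a numerical sanity check against the exact solution; the sketch below performs on-line Hebbian updates with examples resampled from a fixed training set of size p = αN (noiseless teacher), with the system size, α, and number of updates chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(2)
N, alpha, steps = 1000, 2.0, 50000          # inputs, training-set ratio p/N, updates
p = int(alpha * N)

B = rng.standard_normal(N)                  # noiseless teacher
X = rng.standard_normal((p, N))             # restricted training set of p examples
T = np.sign(X @ B)                          # teacher labels on the training set

J = np.zeros(N)
for _ in range(steps):
    mu = rng.integers(p)                    # on-line: resample from the *fixed* training set
    J += (1.0 / np.sqrt(N)) * T[mu] * X[mu] # Hebbian update

R = (J @ B) / (np.linalg.norm(J) * np.linalg.norm(B))
print("generalization error:", np.arccos(np.clip(R, -1, 1)) / np.pi)
print("training error      :", np.mean(np.sign(X @ J) != T))
```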

    Usual energy and macronutrient intakes in 2-9-year-old European children

    OBJECTIVE: Valid estimates of population intakes are essential for monitoring trends as well as for nutritional interventions, but such data are rare in young children. In particular, the problem of misreporting in dietary data is usually not accounted for. Therefore, this study aims to provide accurate estimates of intake distributions in European children. DESIGN: Cross-sectional setting-based multi-centre study. SUBJECTS: A total of 9560 children aged 2-9 years from eight European countries with at least one 24-h dietary recall (24-HDR). METHODS: The 24-HDRs were classified into three reporting groups based on age- and sex-specific Goldberg cutoffs (underreports, plausible reports, overreports). Only plausible reports were considered in the final analysis (N=8611 children). The National Cancer Institute (NCI) method was applied to estimate population distributions of usual intakes, correcting for the variance inflation in short-term dietary data. RESULTS: The prevalence of underreporting (9.5%) was higher than that of overreporting (3.4%). Exclusion of misreports shifted the energy and absolute macronutrient intake distributions to the right and removed extreme values; that is, mean values and lower percentiles increased, whereas upper percentiles decreased. The distributions of relative macronutrient intakes (% of energy intake from fat/carbohydrates/proteins) remained almost unchanged when misreports were excluded. Application of the NCI method resulted in markedly narrower intake distributions compared with estimates based on single 24-HDRs. Mean percentages of usual energy intake from fat, carbohydrates and proteins were 32.2, 52.1 and 15.7%, respectively, suggesting that the majority of European children comply with common macronutrient intake recommendations. In contrast, total water intake (mean: 1216.7 ml per day) lay below the recommended value for >90% of the children. CONCLUSION: This study provides recent estimates of intake distributions in European children, correcting for misreporting as well as for the daily variation in dietary data. These data may help to assess the adequacy of young children's diets in Europe.
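    As a hedged illustration of the misreport-screening step only (not the NCI usual-intake model), the sketch below classifies recalls into the three reporting groups by comparing the ratio of reported energy intake to an energy-requirement estimate against cutoffs; the column names, the toy data, and the cutoff values 1.0 and 2.0 are placeholders, since the real Goldberg cutoffs are age- and sex-specific.

```python
import numpy as np
import pandas as pd

# Hypothetical recall data: reported energy intake (EI) and an estimated basal
# metabolic rate (BMR) per child, both in kcal/day; cutoffs here are placeholders.
recalls = pd.DataFrame({
    "child_id": [1, 2, 3, 4],
    "ei_kcal": [900.0, 1500.0, 1600.0, 3800.0],
    "bmr_kcal": [1100.0, 1150.0, 1200.0, 1250.0],
    "lower_cutoff": 1.0,
    "upper_cutoff": 2.0,
})

ratio = recalls["ei_kcal"] / recalls["bmr_kcal"]
recalls["report_group"] = np.select(
    [ratio < recalls["lower_cutoff"], ratio > recalls["upper_cutoff"]],
    ["underreport", "overreport"],
    default="plausible",
)
plausible = recalls[recalls["report_group"] == "plausible"]  # only these enter the usual-intake step
print(recalls[["child_id", "report_group"]])
print("plausible recalls retained:", len(plausible))
```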

    Dynamics of Learning with Restricted Training Sets I: General Theory

    We study the dynamics of supervised learning in layered neural networks, in the regime where the size p of the training set is proportional to the number N of inputs. Here the local fields are no longer described by Gaussian probability distributions and the learning dynamics is of a spin-glass nature, with the composition of the training set playing the role of quenched disorder. We show how dynamical replica theory can be used to predict the evolution of macroscopic observables, including the two relevant performance measures (training error and generalization error), incorporating the old formalism developed for complete training sets in the limit α = p/N → ∞ as a special case. For simplicity we restrict ourselves in this paper to single-layer networks and realizable tasks. Comment: 39 pages, LaTeX
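    The replica analysis itself is analytic, but the setting it describes is easy to simulate directly; the sketch below trains a single-layer perceptron on-line on a fixed training set of size p = αN (a realizable task) and tracks the two performance measures named above over time. The perceptron rule, system size, and learning rate are arbitrary illustrative choices, not the paper's theory.

```python
import numpy as np

rng = np.random.default_rng(3)
N, alpha, eta = 500, 1.0, 0.05
p = int(alpha * N)                           # restricted training set: p scales linearly with N

B = rng.standard_normal(N)                   # teacher (realizable task: labels come from B)
X = rng.standard_normal((p, N))
T = np.sign(X @ B)

J = 0.1 * rng.standard_normal(N)             # student
for step in range(1, 20001):
    mu = rng.integers(p)                     # examples drawn from the fixed training set
    if np.sign(X[mu] @ J) != T[mu]:          # perceptron rule: update on errors only
        J += (eta / np.sqrt(N)) * T[mu] * X[mu]
    if step % 5000 == 0:
        # Macroscopic observables: teacher-student overlap and the two errors.
        R = (J @ B) / (np.linalg.norm(J) * np.linalg.norm(B))
        E_t = np.mean(np.sign(X @ J) != T)                     # training error
        E_g = np.arccos(np.clip(R, -1, 1)) / np.pi             # generalization error
        print(f"step {step:6d}  E_t={E_t:.3f}  E_g={E_g:.3f}")
```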

    Statistical Mechanics of Time Domain Ensemble Learning

    Conventional ensemble learning combines students in the space domain. In this paper, by contrast, we combine students in the time domain and call this time domain ensemble learning. We analyze the generalization performance of time domain ensemble learning in the framework of online learning using a statistical mechanical method. We treat a model in which both the teacher and the student are noisy linear perceptrons. Time domain ensemble learning is twice as effective as conventional space domain ensemble learning. Comment: 10 pages, 10 figures
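    A small simulation can illustrate the distinction between the two kinds of ensemble, assuming a noisy linear teacher and noisy linear students as in the abstract: the space-domain ensemble averages the final weights of K independently trained students, while the time-domain ensemble averages K snapshots of a single student taken at different times. The snapshot spacing and all other parameters are ad hoc choices, so the quantitative comparison should not be read as the paper's result.

```python
import numpy as np

rng = np.random.default_rng(4)
N, K, eta, noise = 200, 5, 0.3, 0.5
burn_in, gap = 4000, 1500                        # ad hoc burn-in and snapshot spacing

B = rng.standard_normal(N)                       # linear teacher

def train_one(seed):
    r = np.random.default_rng(seed)
    J = r.standard_normal(N)
    snapshots = []
    for step in range(burn_in + K * gap):
        x = r.standard_normal(N) / np.sqrt(N)
        t = B @ x + noise * r.standard_normal()  # noisy teacher output
        J = J + eta * (t - J @ x) * x            # noisy linear student, gradient step
        if step >= burn_in and (step - burn_in) % gap == 0:
            snapshots.append(J.copy())           # K states of the same student over time
    return J, np.mean(snapshots, axis=0)

def gen_error(W):
    X = rng.standard_normal((4000, N)) / np.sqrt(N)
    return 0.5 * np.mean((X @ W - X @ B) ** 2)

finals, time_avgs = zip(*(train_one(s) for s in range(K)))
print("single student        :", gen_error(finals[0]))
print("space-domain ensemble :", gen_error(np.mean(finals, axis=0)))  # K students, final states
print("time-domain ensemble  :", gen_error(time_avgs[0]))             # 1 student, K time points
```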