
    Field Theoretical Analysis of On-line Learning of Probability Distributions

    On-line learning of probability distributions is analyzed from the field-theoretical point of view. We can obtain an optimal on-line learning algorithm, since the renormalization group enables us to control the number of degrees of freedom of a system according to the number of examples. We do not learn the parameters of a model, but the probability distributions themselves. Therefore, the algorithm requires no a priori knowledge of a model. Comment: 4 pages, 1 figure, RevTeX
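
    As a hedged companion to this abstract (and not the paper's field-theoretic, renormalization-group algorithm), the sketch below illustrates the general idea of model-free on-line learning of a distribution: a kernel density estimate updated one example at a time, whose effective number of degrees of freedom is tied to the number of examples through a shrinking bandwidth. The function name and the bandwidth schedule are illustrative choices.

```python
# Minimal sketch: on-line, model-free density estimation where the estimator's
# resolution (degrees of freedom) is controlled by the number of examples seen.
# This illustrates the general setting, not the paper's algorithm.
import numpy as np

def online_kde(stream, bandwidth_exponent=-0.2):
    """Yield a density estimate after each incoming example."""
    samples = []

    def density(x):
        n = len(samples)
        x = np.atleast_1d(np.asarray(x, dtype=float))
        if n == 0:
            return np.zeros_like(x)
        h = n ** bandwidth_exponent              # bandwidth shrinks with n
        xs = np.asarray(samples)[:, None]
        kernels = np.exp(-0.5 * ((x[None, :] - xs) / h) ** 2)
        return kernels.sum(axis=0) / (n * h * np.sqrt(2 * np.pi))

    for example in stream:
        samples.append(float(example))           # one example per update
        yield density

rng = np.random.default_rng(0)
final_estimate = list(online_kde(rng.normal(size=500)))[-1]
print(final_estimate([0.0, 1.0, 2.0]))
```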

    Retarded Learning: Rigorous Results from Statistical Mechanics

    We study learning of probability distributions characterized by an unknown symmetry direction. Based on an entropic performance measure and the variational method of statistical mechanics, we develop exact upper and lower bounds on the scaled critical number of examples below which learning of the direction is impossible. The asymptotic tightness of the bounds suggests an asymptotically optimal method for learning nonsmooth distributions. Comment: 8 pages, 1 figure
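
    As a small numerical illustration of the setting (not the variational bounds derived in the paper), the sketch below estimates an unknown symmetry direction with a PCA-style estimator and shows how the overlap with the true direction depends on the scaled number of examples alpha = p/N; the signal strength and dimensions are arbitrary choices.

```python
# Illustrative sketch: estimating an unknown symmetry direction B from p
# examples in N dimensions via the leading eigenvector of the sample
# covariance. Below a critical alpha = p/N the overlap stays near zero.
import numpy as np

rng = np.random.default_rng(0)
N = 200
B = rng.normal(size=N)
B /= np.linalg.norm(B)                      # true (hidden) symmetry direction

for alpha in (0.5, 1.0, 2.0, 4.0):
    p = int(alpha * N)
    # isotropic noise plus a component of unit strength along B
    x = rng.normal(size=(p, N)) + rng.normal(size=(p, 1)) * B
    cov = x.T @ x / p
    _, eigvecs = np.linalg.eigh(cov)
    B_hat = eigvecs[:, -1]                  # leading eigenvector as the estimate
    print(f"alpha = {alpha:.1f}, overlap |B_hat . B| = {abs(B_hat @ B):.2f}")
```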

    Efficient statistical inference for stochastic reaction processes

    We address the problem of estimating unknown model parameters and state variables in stochastic reaction processes when only sparse and noisy measurements are available. Using an asymptotic system-size expansion for the backward equation, we derive an efficient approximation for this problem. We demonstrate the validity of our approach on model systems and generalize our method to the case when some state variables are not observed. Comment: 4 pages, 2 figures, 2 tables; typos corrected, remark about Kalman smoother added
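
    A rough, hedged sketch of this kind of workflow is given below. It does not reproduce the paper's expansion of the backward equation; instead it uses the related idea that, for a linear birth-death process, fluctuations around the macroscopic rate equation are approximately Gaussian, so sparse and noisy observations can be handled with a Kalman filter whose likelihood is maximized over an unknown rate parameter. All rates, noise levels and the parameter grid are invented for illustration.

```python
# Sketch: approximate maximum-likelihood estimation of a decay rate gamma in
# dx/dt = k - gamma*x + noise, from sparse, noisy observations, using a
# Kalman filter on an Euler discretization (one step per observation interval).
import numpy as np

def kalman_loglik(y, dt, k, gamma, q, r, x0, p0):
    x, p, ll = x0, p0, 0.0
    a = 1.0 - gamma * dt
    for obs in y:
        x, p = a * x + k * dt, a * a * p + q * dt      # predict
        s = p + r                                      # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * s) + (obs - x) ** 2 / s)
        gain = p / s
        x, p = x + gain * (obs - x), (1 - gain) * p    # update
    return ll

# simulate a trajectory with illustrative "true" parameters, observe sparsely
rng = np.random.default_rng(1)
dt, k, gamma_true, q, r = 0.1, 2.0, 0.5, 0.2, 0.1
x, y = 4.0, []
for t in range(200):
    x += dt * (k - gamma_true * x) + rng.normal(scale=np.sqrt(q * dt))
    if t % 5 == 0:                                     # sparse, noisy measurements
        y.append(x + rng.normal(scale=np.sqrt(r)))

gammas = np.linspace(0.1, 1.5, 29)
best = max(gammas, key=lambda g: kalman_loglik(y, 5 * dt, k, g, q, r, 4.0, 1.0))
print("estimated gamma:", round(best, 2))
```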

    Generalization properties of finite size polynomial Support Vector Machines

    The learning properties of finite-size polynomial Support Vector Machines are analyzed in the case of realizable classification tasks. The normalization of the high-order features acts as a squeezing factor, introducing a strong anisotropy in the distribution of patterns in feature space. As a function of the training set size, the corresponding generalization error presents a crossover, more or less abrupt depending on the distribution's anisotropy and on the task to be learned, between a fast-decreasing and a slowly decreasing regime. This behaviour corresponds to the stepwise decrease found by Dietrich et al. [Phys. Rev. Lett. 82 (1999) 2975-2978] in the thermodynamic limit. The theoretical results are in excellent agreement with the numerical simulations. Comment: 12 pages, 7 figures
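
    A quick numerical experiment in the spirit of this abstract (not the authors' statistical-mechanics calculation) can be set up with a polynomial-kernel SVM on a realizable task, measuring the generalization error as the training set grows; the teacher rule, dimensions and hyperparameters below are arbitrary choices, and scikit-learn is assumed to be available.

```python
# Sketch: generalization error of a degree-2 polynomial SVM on a realizable
# classification task, as a function of the number of training examples p.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_features, n_test = 20, 2000
w = rng.normal(size=n_features)

def teacher(x):
    # a rule realizable by a degree-2 polynomial machine (quadratic form + bias)
    return np.sign((x @ w) ** 2 - n_features)

x_test = rng.normal(size=(n_test, n_features))
y_test = teacher(x_test)

for p in (50, 100, 200, 400, 800, 1600):
    x_train = rng.normal(size=(p, n_features))
    clf = SVC(kernel="poly", degree=2, C=10.0).fit(x_train, teacher(x_train))
    err = np.mean(clf.predict(x_test) != y_test)
    print(f"p = {p:4d}, generalization error = {err:.3f}")
```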

    Customising agent based analysis towards analysis of disaster management knowledge

    © 2016 Dedi Iskandar Inan, Ghassan Beydoun and Simon Opper. In developed countries such as Australia, for recurring disasters (e.g. floods), there are dedicated document repositories of Disaster Management Plans (DISPLANs), and supporting doctrine and processes, that are used to prepare organisations and communities for disasters. They are maintained on an ongoing cyclical basis and form a key information source for community education, engagement and awareness programmes in the preparation for and mitigation of disasters. DISPLANs, generally in a semi-structured text document format, are then accessed and activated during the response to and recovery from incidents to coordinate emergency service and community safety actions. However, accessing the appropriate plan and the specific knowledge within the text document across its conceptual areas in a timely manner, and sharing activities between stakeholders, require intimate domain knowledge of the plan contents and its development. This paper describes progress on an ongoing project with the NSW State Emergency Service (NSW SES) to convert DISPLANs into a collection of knowledge units that can be stored in a unified repository, with the goal of forming the basis of a future knowledge-sharing capability. All Australian emergency services, covering a wide range of hazards, develop DISPLANs of various structure and intent; in general the plans are created as instances of a template, for example those developed centrally under the NSW and Victorian SES State planning policies. In this paper, we illustrate how, by using selected templates as part of an elaborate agent-based process, we can apply agent-oriented analysis more efficiently to convert extant DISPLANs into a centralised repository. The repository is structured as a layered abstraction according to the Meta Object Facility (MOF). The work is illustrated using DISPLANs along the flood-prone Murrumbidgee River in central NSW.

    Gradient descent learning in and out of equilibrium

    Relations between the off-thermal-equilibrium dynamical process of on-line learning and the thermally equilibrated off-line learning are studied for potential gradient descent learning. The approach of Opper for studying on-line Bayesian algorithms is extended to potential-based or maximum-likelihood learning. We look at the on-line learning algorithm that best approximates the off-line algorithm in the sense of least Kullback-Leibler information loss. It works by updating the weights along the gradient of an effective potential different from the parent off-line potential. The interpretation of this off-equilibrium dynamics bears some similarity to the cavity approach of Griniasty. We are able to analyze networks with non-smooth transfer functions and transfer the smoothness requirement to the potential. Comment: 8 pages, submitted to the Journal of Physics
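
    For readers who want a concrete, if very simplified, picture of the on-line versus off-line contrast discussed here, the sketch below compares plain on-line gradient descent (one update per example) with off-line gradient descent on the full empirical potential for a quadratic (linear-regression) potential. It does not implement the paper's effective-potential construction; model sizes and learning rates are arbitrary.

```python
# Sketch: on-line gradient descent along the instantaneous single-example
# potential vs. off-line gradient descent on the full empirical potential.
import numpy as np

rng = np.random.default_rng(0)
n, p, eta = 10, 500, 0.01
w_teacher = rng.normal(size=n)
x = rng.normal(size=(p, n))
y = x @ w_teacher + 0.1 * rng.normal(size=p)

# on-line: one gradient step per example
w_on = np.zeros(n)
for xi, yi in zip(x, y):
    w_on += eta * (yi - xi @ w_on) * xi

# off-line: repeated gradient steps on the potential defined by all examples
w_off = np.zeros(n)
for _ in range(500):
    w_off += eta * x.T @ (y - x @ w_off) / p

print("on-line  distance to teacher:", np.linalg.norm(w_on - w_teacher))
print("off-line distance to teacher:", np.linalg.norm(w_off - w_teacher))
```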

    DM model transformations framework

    Metamodelling produces a 'metamodel' capable of generalizing the domain. A metamodel gathers all domain concepts and their relationships, and enables partitioning a domain problem into sub-problems. Decision makers can then develop a variety of domain solution models by mixing and matching solutions for the sub-problems identified using the metamodel. A repository of domain knowledge structured using the metamodel would allow the transformation of models generated at a higher level to a lower level according to the scope of the problem at hand. In this paper, we reveal how a process of mixing and matching disaster management actions can be accomplished using our Disaster Management Metamodel (DMM). The paper describes DM model transformations underpinned by DMM, and illustrates how they benefit DM users in creating appropriate DM solution models from extant partial solutions.

    Dynamical transitions in the evolution of learning algorithms by selection

    We study the evolution of artificial learning systems by means of selection. Genetic programming is used to generate a sequence of populations of algorithms which can be used by neural networks for supervised learning of a rule that generates examples. Rather than concentrating on final results, which would be the natural aim when designing good learning algorithms, we study the evolution process and pay particular attention to the temporal order of appearance of functional structures responsible for the improvements in the learning process, as measured by the generalization capabilities of the resulting algorithms. The effect of such appearances can be described as dynamical phase transitions. The concepts of phenotypic and genotypic entropies, which describe the distribution of fitness in the population and the distribution of symbols, respectively, are used to monitor the dynamics. In different runs the phase transitions may or may not be present, with the system either finding good solutions or remaining in poor regions of algorithm space. Whenever phase transitions occur, the sequence of appearances is the same. We identify combinations of variables and operators which are useful in measuring experience or performance in rule extraction and can thus implement useful annealing of the learning schedule. Comment: 11 pages, 11 figures, 2 tables
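
    The entropies used for monitoring lend themselves to a compact, hedged illustration (the genetic programming machinery itself is not reproduced): genotypic entropy computed from the distribution of symbols across a population of programs, and phenotypic entropy from a histogram of fitness values. Function names and the toy population are invented for the example.

```python
# Sketch: genotypic entropy (over symbols in the population's programs) and
# phenotypic entropy (over a histogram of fitness values).
import numpy as np
from collections import Counter

def shannon_entropy(counts):
    p = np.asarray(list(counts), dtype=float)
    p = p[p > 0]
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def genotypic_entropy(population):
    """population: list of programs, each a sequence of symbols (tokens)."""
    symbol_counts = Counter(sym for program in population for sym in program)
    return shannon_entropy(symbol_counts.values())

def phenotypic_entropy(fitnesses, bins=20):
    hist, _ = np.histogram(fitnesses, bins=bins)
    return shannon_entropy(hist)

# toy population of token lists and random fitness values
population = [["+", "x", "*", "w", "x"], ["-", "w", "x"], ["+", "x", "x"]]
fitnesses = np.random.default_rng(0).normal(size=100)
print(genotypic_entropy(population), phenotypic_entropy(fitnesses))
```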

    Statistical mechanics of random two-player games

    Using methods from the statistical mechanics of disordered systems, we analyze the properties of bimatrix games with random payoffs in the limit where the number of pure strategies of each player tends to infinity. We analytically calculate quantities such as the number of equilibrium points, the expected payoff, and the fraction of strategies played with non-zero probability as a function of the correlation between the payoff matrices of both players, and compare the results with numerical simulations. Comment: 16 pages, 6 figures; for further information see http://itp.nat.uni-magdeburg.de/~jberg/games.htm
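
    As a small numerical companion (counting only pure-strategy equilibria, whereas the paper also treats mixed ones), the sketch below generates bimatrix games with correlated Gaussian payoffs and counts mutual best responses as a function of the payoff correlation c; the game size and sample counts are arbitrary.

```python
# Sketch: mean number of pure-strategy Nash equilibria in random bimatrix
# games as a function of the correlation c between the two payoff matrices.
import numpy as np

rng = np.random.default_rng(0)
S, trials = 50, 200                                    # strategies per player, samples

for c in (-1.0, -0.5, 0.0, 0.5, 1.0):
    counts = []
    for _ in range(trials):
        z1, z2 = rng.normal(size=(2, S, S))
        a = z1                                         # payoff matrix of player 1
        b = c * z1 + np.sqrt(max(0.0, 1 - c * c)) * z2 # correlated payoff of player 2
        row_best = a == a.max(axis=0, keepdims=True)   # player 1 best responses
        col_best = b == b.max(axis=1, keepdims=True)   # player 2 best responses
        counts.append(int(np.sum(row_best & col_best)))
    print(f"c = {c:+.1f}, mean number of pure equilibria = {np.mean(counts):.2f}")
```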

    Inferring hidden states in Langevin dynamics on large networks: Average case performance

    We present average performance results for dynamical inference problems in large networks, where a set of nodes is hidden while the time trajectories of the others are observed. Examples of this scenario can occur in signal transduction and gene regulation networks. We focus on the linear stochastic dynamics of continuous variables interacting via random Gaussian couplings of generic symmetry. We analyze the inference error, given by the variance of the posterior distribution over hidden paths, in the thermodynamic limit and as a function of the system parameters and the ratio α between the number of hidden and observed nodes. By applying Kalman filter recursions we find that the posterior dynamics is governed by an "effective" drift that incorporates the effect of the observations. We present two approaches for characterizing the posterior variance that allow us to tackle, respectively, equilibrium and nonequilibrium dynamics. The first appeals to Random Matrix Theory and reveals average spectral properties of the inference error and typical posterior relaxation times; the second is based on dynamical functionals and yields the inference error as the solution of an algebraic equation. Comment: 20 pages, 5 figures
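
    A hedged, finite-size sketch of this setting is given below (it does not reproduce the thermodynamic-limit analysis): discretized linear Langevin dynamics with random Gaussian couplings, a subset of observed nodes, and standard Kalman filter recursions whose posterior covariance provides the inference error for the hidden nodes. Network size, noise levels and the hidden/observed split are arbitrary choices.

```python
# Sketch: posterior variance of hidden nodes in discretized linear Langevin
# dynamics on a random network, computed with Kalman filter recursions.
import numpy as np

rng = np.random.default_rng(0)
N, N_hidden, T, dt, noise = 20, 8, 200, 0.05, 0.1
alpha = N_hidden / (N - N_hidden)                 # ratio hidden / observed

J = rng.normal(scale=1 / np.sqrt(N), size=(N, N)) # random Gaussian couplings
A = np.eye(N) + dt * (J - 2.0 * np.eye(N))        # stable discretized drift
Q = noise * dt * np.eye(N)                        # dynamical noise covariance
C = np.eye(N)[N_hidden:]                          # observe the last N - N_hidden nodes
R = 1e-4 * np.eye(N - N_hidden)                   # small observation noise

# simulate a trajectory and record the observed components
x, obs = np.zeros(N), []
for _ in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(N), Q)
    obs.append(C @ x + rng.multivariate_normal(np.zeros(N - N_hidden), R))

# Kalman filter: propagate the posterior mean and covariance of all nodes
m, P = np.zeros(N), np.eye(N)
for y in obs:
    m, P = A @ m, A @ P @ A.T + Q                 # predict
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)                # Kalman gain
    m, P = m + K @ (y - C @ m), P - K @ C @ P     # update

print(f"alpha = {alpha:.2f}, mean posterior variance of hidden nodes = "
      f"{np.mean(np.diag(P)[:N_hidden]):.4f}")
```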