
    Learning by a neural net in a noisy environment - The pseudo-inverse solution revisited

    A recurrent neural net is described that learns a set of patterns in the presence of noise. The learning rule is of Hebbian type and, if noise were absent during the learning process, the final values of the weights would correspond to the pseudo-inverse solution of the fixed point equation in question. For a non-vanishing noise parameter, an explicit expression for the expectation value of the weights is obtained; this result differs from the pseudo-inverse solution. Furthermore, the stability properties of the system are discussed.
    Comment: 16 pages, 3 figures
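
    As an illustration of the noise-free limit the abstract refers to, the sketch below constructs the pseudo-inverse (projection) weights for which every stored pattern is a fixed point of the net; the noisy learning dynamics analysed in the paper are not reproduced, and the network size N and pattern number p are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(0)
        N, p = 100, 20                             # neurons, stored patterns (illustrative sizes)
        xi = rng.choice([-1.0, 1.0], size=(N, p))  # patterns as columns of Xi

        # Pseudo-inverse (projection) rule: W = Xi Xi^+ projects onto the pattern
        # subspace, so every stored pattern solves the fixed point equation W xi = xi.
        W = xi @ np.linalg.pinv(xi)

        print(np.allclose(W @ xi, xi))             # True: all patterns are fixed points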

    Probing the basins of attraction of a recurrent neural network

    A recurrent neural network is considered that can retrieve a collection of patterns, as well as slightly perturbed versions of this `pure' set of patterns, via fixed points of its dynamics. By replacing the set of dynamical constraints, i.e., the fixed point equations, by an extended collection of fixed-point-like equations, analytical expressions are found for the weights w_ij(b) of the net, which depend on a certain parameter b. This so-called basin parameter b is such that for b=0 there are, a priori, no perturbed patterns to be recognized by the net. A numerical study using probing sets shows that a net constructed to recognize perturbed patterns, i.e., with connections w_ij(b) for b unequal to zero, possesses larger basins of attraction than a net built from the pure set of patterns only, i.e., with connections w_ij(b=0). The mathematical results obtained can, in principle, be realized by an actual, biological neural net.
    Comment: 17 pages, LaTeX, 2 figures
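
    The paper derives closed-form weights w_ij(b); the sketch below takes a cruder, purely illustrative route: it assumes one simply adds explicitly perturbed copies of each pattern to the training set of a projection-rule net, then probes both nets with corrupted starting states, in the spirit of the probing sets mentioned above. The sizes, the number of perturbed copies, and the parallel sign dynamics are all assumptions, not the paper's construction.

        import numpy as np

        rng = np.random.default_rng(1)
        N, p, flips = 200, 10, 10                   # neurons, pure patterns, flips per perturbed copy

        def perturb(x, k, rng):
            """Copy of pattern x with k randomly chosen spins flipped."""
            y = x.copy()
            y[rng.choice(len(x), size=k, replace=False)] *= -1
            return y

        def projection_weights(patterns):
            return patterns @ np.linalg.pinv(patterns)

        xi = rng.choice([-1.0, 1.0], size=(N, p))   # pure patterns as columns

        # Net trained on the pure set vs. net trained on pure plus perturbed copies
        W_pure = projection_weights(xi)
        perturbed = [perturb(xi[:, m], flips, rng) for m in range(p) for _ in range(3)]
        W_ext = projection_weights(np.column_stack([xi] + perturbed))

        def retrieved(W, target, start, steps=50):
            s = start.copy()
            for _ in range(steps):                  # parallel sign dynamics
                s = np.sign(W @ s)
                s[s == 0] = 1.0
            return np.array_equal(s, target)

        # Probe both nets with increasingly corrupted versions of pattern 0
        for k in (10, 30, 50):
            probe = perturb(xi[:, 0], k, rng)
            print(k, retrieved(W_pure, xi[:, 0], probe), retrieved(W_ext, xi[:, 0], probe))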

    Conserving Approximations in Time-Dependent Density Functional Theory

    In the present work we propose a theory for obtaining successively better approximations to the linear response functions of time-dependent density or current-density functional theory. The new technique is based on the variational approach to many-body perturbation theory (MBPT) as developed in the sixties and later expanded by us in the mid nineties. Owing to this feature, the resulting response functions obey a large number of conservation laws, such as particle and momentum conservation, as well as sum rules. The quality of the results is governed by the physical processes built in through MBPT, but also by the choice of variational expressions. We present several conserving response functions of varying sophistication to be used in the calculation of the optical response of solids and nano-scale systems.
    Comment: 11 pages, 4 figures, revised version
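
    As one standard example of the kind of constraint a conserving density response function must satisfy, the textbook f-sum rule is quoted here for orientation (it is not taken from the paper; the conventions assume \hbar = 1 and \chi_{nn} the retarded density-density response per unit volume):

        -\int_{-\infty}^{\infty} \frac{d\omega}{\pi}\, \omega\, \mathrm{Im}\,\chi_{nn}(\mathbf{q},\omega) \;=\; \frac{n q^{2}}{m},

    where n is the particle density and m the particle mass.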

    Tumbling of a rigid rod in a shear flow

    The tumbling of a rigid rod in a shear flow is analyzed in the high viscosity limit. Following Burgers, the Master Equation is derived for the probability distribution of the orientation of the rod. The equation contains one dimensionless number, the Weissenberg number, which is the ratio of the shear rate to the orientational diffusion constant. The equation is solved for the stationary state distribution at arbitrary Weissenberg numbers, in particular in the limit of high Weissenberg numbers. The stationary state exhibits an interesting flow pattern for the orientation of the rod, showing the interplay between flow due to the driving shear force and diffusion due to the random thermal forces of the fluid. The average tumbling time and tumbling frequency are calculated as functions of the Weissenberg number, and a simple cross-over function is proposed which covers the whole regime from small to large Weissenberg numbers.
    Comment: 22 pages, 9 figures
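
    A rough numerical counterpart of the quantities discussed above is a Brownian dynamics sketch of a single infinitely thin rod: the Jeffery drift \dot{\varphi} = -\dot{\gamma}\sin^2\varphi plus rotational diffusion, integrated with Euler-Maruyama, from which a mean tumbling time can be estimated as a function of the Weissenberg number. This is an independent illustration, not the Master Equation solution of the paper; the time step and run length are arbitrary.

        import numpy as np

        def mean_tumbling_time(Wi, dt=1e-4, steps=2_000_000, seed=0):
            """Mean time between tumbles (half turns of the rod), in units of 1/D_r.
            Time is measured in units of 1/D_r, so the dimensionless shear rate
            equals the Weissenberg number Wi = shear rate / D_r."""
            rng = np.random.default_rng(seed)
            noise = rng.standard_normal(steps) * np.sqrt(2.0 * dt)
            phi, tumbles = 0.0, 0
            for xi in noise:
                phi += -Wi * np.sin(phi) ** 2 * dt + xi   # Jeffery drift + rotational diffusion
                if abs(phi) > np.pi:                      # the rod has flipped by half a turn
                    phi -= np.sign(phi) * np.pi
                    tumbles += 1
            return steps * dt / max(tumbles, 1)

        for Wi in (1.0, 10.0, 100.0):
            print(Wi, mean_tumbling_time(Wi))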

    Combining Hebbian and reinforcement learning in a minibrain model

    A toy model of a neural network in which both Hebbian learning and reinforcement learning occur is studied. The problem of `path interference', which makes the neural net quickly forget previously learned input-output relations, is tackled by adding a Hebbian term (proportional to the learning rate \eta) to the reinforcement term (proportional to \rho) in the learning rule. It is shown that the number of learning steps is reduced considerably if 1/4 < \eta/\rho < 1/2, i.e., if the Hebbian term is neither too small nor too large compared to the reinforcement term.
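
    To make the structure of such a combined rule concrete, here is a deliberately simplified sketch: a winner-take-all output layer whose weights receive a reward-gated (reinforcement) update proportional to rho plus an unconditional Hebbian update proportional to eta, with eta/rho chosen inside the interval quoted above. The minibrain model itself uses a different architecture and update rule, so treat this only as an illustration of mixing the two terms.

        import numpy as np

        rng = np.random.default_rng(2)
        n_in, n_out = 8, 4
        W = 0.1 * rng.standard_normal((n_out, n_in))
        eta, rho = 0.1, 0.3                       # Hebbian and reinforcement rates, eta/rho = 1/3

        def learning_step(W, x, target):
            """One combined Hebbian + reinforcement step (schematic, not the paper's rule)."""
            y = np.zeros(n_out)
            y[np.argmax(W @ x)] = 1.0             # winner-take-all output activity
            reward = 1.0 if np.argmax(y) == target else -1.0
            W += rho * reward * np.outer(y, x)    # reinforcement term: reward-gated
            W += eta * np.outer(y, x)             # Hebbian term: independent of the reward
            return reward

        # Learn a random input-output mapping and report when a full sweep succeeds
        targets = rng.integers(0, n_out, size=n_in)
        inputs = np.eye(n_in)
        for sweep in range(200):
            rewards = [learning_step(W, inputs[k], targets[k]) for k in range(n_in)]
            if all(r > 0 for r in rewards):
                print("mapping learned after", sweep + 1, "sweeps")
                break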