
    A computational implementation of a Hebbian learning network and its application to configural forms of acquired equivalence

    We describe and report the results of computer simulations of the three-layer Hebbian network informally described by Honey, Close, and Lin (2010): a general account of discrimination shaped by data from configural acquired equivalence experiments that are beyond the scope of alternative models. The simulations implemented a conditional principal components analysis (CPCA) Hebbian learning algorithm and covered four published experimental demonstrations of configural acquired equivalence. The experiments involved training rats on appetitive bi-conditional discriminations in which discrete cues (w and x) signaled food delivery (+) or its absence (-) in four different contexts (A, B, C and D): Aw+ Bw- Cw+ Dw- Ax- Bx+ Cx- Dx+. Contexts A and C acquired equivalence. In three of the experiments, acquired equivalence was evident from subsequent revaluation, compound testing, or whole-/part-reversal training. The fourth experiment added concurrent bi-conditional discriminations with the same contexts but a pair of additional discrete cues (y and z). The congruent form of the discrimination, in which A and C provided the same information about y and z, was solved relatively readily. Parametric variation allowed the network to successfully simulate the results of each of the four experiments.
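    The CPCA rule at the heart of these simulations has a standard form: each receiving unit's incoming weights move toward the current input pattern in proportion to that unit's own activation. A minimal NumPy sketch of one such update, with illustrative shapes and a learning rate that are not taken from the paper:

        import numpy as np

        def cpca_update(w, x, y, epsilon=0.05):
            # w: (n_inputs, n_units) weights, x: (n_inputs,) input pattern,
            # y: (n_units,) receiving-unit activations, epsilon: learning rate.
            # CPCA Hebbian step: delta w_ij = epsilon * y_j * (x_i - w_ij),
            # so each unit's weights drift toward the inputs it is active for.
            return w + epsilon * y * (x[:, None] - w)

    Under this rule a weight w_ij tends toward the conditional probability that input i is active given that unit j is active, which is the kind of property that can pull the hidden-layer representations of A and C together when they signal the same cue-outcome relations.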

    A differential Hebbian framework for biologically-plausible motor control

    In the realm of motor control, artificial agents cannot match the performance of their biological counterparts. We therefore explore a neural control architecture that is both biologically plausible and capable of fully autonomous learning. The architecture consists of feedback controllers that learn to achieve a desired state by selecting the errors that should drive them. This selection happens through a family of differential Hebbian learning rules that, through interaction with the environment, can learn to control systems in which the error responds monotonically to the control signal. We then show that, in the more general case, neural reinforcement learning can be coupled with a feedback controller to reduce errors that arise non-monotonically from the control signal. The use of feedback control reduces the complexity of the reinforcement learning problem, because only a desired value must be learned, with the controller handling the details of how it is reached. This makes the function to be learned simpler, potentially allowing more complex actions to be learned. We discuss how this approach could be extended to hierarchical architectures.
    Comment: 35 pages, 10 figures. Appendix: 9 pages, 2 figures
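    Differential Hebbian rules replace the usual product of pre- and postsynaptic activity with a product involving a temporal derivative, so a connection strengthens when one signal reliably predicts a change in another. A generic one-step sketch of such a rule applied to error selection (this is not the paper's exact family of rules; the names and the discretized derivative are illustrative):

        import numpy as np

        def diff_hebb_step(w, u, err, err_prev, eta=0.01, dt=0.01):
            # w: (n_errors,) weights selecting which error signals drive the
            # controller; u: controller output (scalar); err, err_prev:
            # current and previous error signals, shape (n_errors,).
            # Differential Hebbian update: delta w_k = eta * u * d(err_k)/dt,
            # i.e. correlate the control action with the resulting error change.
            derr = (err - err_prev) / dt
            return w + eta * u * derr

    As the abstract notes, a correlation-based scheme like this can only identify the right error to follow when that error responds monotonically to the control signal; in the non-monotonic case the correlation is uninformative, which is where the coupled reinforcement learner comes in.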

    The computational magic of the ventral stream: sketch of a theory (and why some deep architectures work).

    This paper explores the theoretical consequences of a simple assumption: the computational goal of the feedforward path in the ventral stream, from V1 through V2 and V4 to IT, is to discount image transformations after learning them during development.
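    One way to make the core idea concrete: if the transformations of a set of template images are memorized during development, an approximately invariant signature of a new image can be computed by pooling its dot products with each stored template orbit. A hedged NumPy sketch of this template-orbit pooling (function and variable names are illustrative, not from the paper):

        import numpy as np

        def invariant_signature(x, template_orbits, pool=np.max):
            # x: (dim,) input image flattened to a vector.
            # template_orbits: list of (n_transforms, dim) arrays, each holding
            # one template under all of its stored transformations.
            # Pooling the dot products over an orbit discounts the
            # transformation, so the signature changes little when x itself
            # is transformed in the same way.
            return np.array([pool(orbit @ x) for orbit in template_orbits])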

    Large Deviations of a Spatially-Stationary Network of Interacting Neurons

    In this work we determine a process-level Large Deviation Principle (LDP) for a model of interacting neurons indexed by the lattice $\mathbb{Z}^d$. The neurons are subject to noise, which is modelled as a correlated martingale. The probability law governing the noise is strictly stationary, and we are therefore able to find an LDP for the probability laws $\Pi^n$ governing the stationary empirical measure $\hat{\mu}^n$ generated by the neurons in a cube of side length $(2n+1)$. We use this LDP to determine an LDP for the neural network model. The connection weights between the neurons evolve according to a learning rule (neuronal plasticity), and these results are adaptable to a large variety of neural network models. This LDP is of great use in the mathematical modelling of neural networks, because it allows a quantification of the likelihood of the system deviating from its limit, and also a determination of the direction in which the system is likely to deviate. The work is also of interest because there are nontrivial correlations between the neurons even in the asymptotic limit, so that it presents itself as a generalisation of traditional mean-field models.
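    The quantification mentioned above takes the generic form of an LDP: for well-behaved sets $A$ of measures, the probability of a deviation decays exponentially, governed by a rate function $I$. Schematically (the volume speed $(2n+1)^d$ is the natural scaling for a cube of side $2n+1$ in $\mathbb{Z}^d$, assumed here rather than quoted from the paper):

        % Schematic process-level LDP; I is the rate function.
        \Pi^n(A) = \mathbb{P}\bigl(\hat{\mu}^n \in A\bigr)
                 \asymp \exp\Bigl(-(2n+1)^d \inf_{\mu \in A} I(\mu)\Bigr)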