61 research outputs found

    On the Biological Plausibility of Artificial Metaplasticity

    The training algorithm studied in this paper is inspired by the biological metaplasticity property of neurons. Tested on different multidisciplinary applications, it achieves more efficient training and improves artificial neural network performance. The algorithm has recently been proposed for artificial neural networks in general, although a multilayer perceptron is used here for the purpose of discussing its biological plausibility. During the training phase, the artificial metaplasticity multilayer perceptron can be considered a new probabilistic version of the presynaptic rule, since the algorithm assigns larger weight updates to the less probable activations than to the more probable ones.
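    The core mechanic lends itself to a short sketch. The Python fragment below is a minimal illustration of the idea as summarized above, assuming a Gaussian-style estimate of the input pattern's probability; the function name, the probability estimate, and the constants A and B are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def metaplastic_update(w, grad, x, lr=0.1, A=1.0, B=0.25):
    # Illustrative Gaussian-style estimate of how probable the current
    # input pattern is; A and B are assumed shape parameters.
    p_est = A * np.exp(-B * np.sum(x ** 2))
    # Core idea: scale the step by 1/p_est so improbable activations
    # produce larger weight updates than frequent ones.
    return w - lr * grad / max(p_est, 1e-6)
```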

    A Neural Network Approach for Analyzing the Illusion of Movement in Static Images

    The purpose of this work is to analyze the illusion of movement that appears when viewing certain static images. This analysis is carried out using a biologically plausible neural network that learned, in an unsupervised manner, to identify the movement direction of shifting training patterns. Some of the biological features that characterize this neural network are: intrinsic plasticity to adapt firing probability, metaplasticity to regulate synaptic weights, and firing adaptation of simulated pyramidal networks. After analyzing the results, we hypothesize that the illusion is due to cinematographic perception mechanisms in the brain, by which each visual frame is renewed approximately every 100 ms. Blurring of moving objects in visual frames might be interpreted by the brain as movement, just as when a static blurred object is presented.
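    Of the listed mechanisms, intrinsic plasticity is the easiest to caricature in code: a firing threshold drifts until the neuron's empirical firing probability matches a target rate. Everything below (names, constants, the input model) is an illustrative assumption, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)

def intrinsic_plasticity(inputs, target_rate=0.1, eta=0.01, steps=5000):
    # Toy homeostatic rule: nudge the threshold theta so the neuron's
    # running firing probability approaches target_rate.
    theta, rate = 0.0, 0.0
    for _ in range(steps):
        drive = rng.choice(inputs)
        fired = float(drive > theta)
        rate = 0.99 * rate + 0.01 * fired    # running firing estimate
        theta += eta * (rate - target_rate)  # raise threshold if too active
    return theta, rate
```

    For standard normal input drives and target_rate=0.1, theta settles near the 90th percentile of the drive distribution (about 1.28), i.e., the neuron self-tunes to the requested firing probability.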

    Using Neural Networks to Simulate the Alzheimer's Disease

    Making use of biologically plausible artificial neural networks that implement Grossberg's presynaptic learning rule, we simulate the possible effects of calcium dysregulation on the neuron's activation function, to represent the most widely accepted model of Alzheimer's disease: the calcium dysregulation hypothesis. According to Cudmore and Turrigiano, calcium dysregulation alters the shifting dynamics of the neuron's activation function (intrinsic plasticity). We propose that this alteration might affect the stability of the synaptic weights in which memories are stored. The results of the simulation support the theoretical hypothesis, implying that the emergence of Alzheimer's disease symptoms such as memory loss and learning problems might be correlated with intrinsic neuronal plasticity impairment due to calcium dysregulation.
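    A sketch of how such a simulation might couple a shifting activation threshold with a dysregulation term. The sigmoid form, the homeostatic shift, and the ca_dysregulation parameter are assumptions for illustration, not the paper's equations.

```python
import numpy as np

def activation(v, theta):
    # Sigmoidal activation with an adaptable threshold theta.
    return 1.0 / (1.0 + np.exp(-(v - theta)))

def shift_threshold(theta, recent_activity, target=0.5,
                    eta=0.05, ca_dysregulation=0.0):
    # Intrinsic-plasticity shift of the threshold toward a homeostatic
    # set point. ca_dysregulation is an illustrative bias standing in
    # for impaired calcium signaling: a nonzero value pushes theta away
    # from equilibrium, destabilizing previously learned responses.
    homeostatic = eta * (recent_activity - target)
    return theta + homeostatic + ca_dysregulation
```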

    Bayesian Continual Learning via Spiking Neural Networks

    Among the main features of biological intelligence are energy efficiency, capacity for continual adaptation, and risk management via uncertainty quantification. Neuromorphic engineering has thus far been mostly driven by the goal of implementing energy-efficient machines that take inspiration from the time-based computing paradigm of biological brains. In this paper, we take steps towards the design of neuromorphic systems that are capable of adaptation to changing learning tasks, while producing well-calibrated uncertainty quantification estimates. To this end, we derive online learning rules for spiking neural networks (SNNs) within a Bayesian continual learning framework. In this framework, each synaptic weight is represented by parameters that quantify the current epistemic uncertainty resulting from prior knowledge and observed data. The proposed online rules update the distribution parameters in a streaming fashion as data are observed. We instantiate the proposed approach for both real-valued and binary synaptic weights. Experimental results using Intel's Lava platform show the merits of Bayesian over frequentist learning in terms of capacity for adaptation and uncertainty quantification. (Accepted for publication in Frontiers in Computational Neuroscience.)
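    As a rough sketch of the weights-as-distributions idea, the snippet below keeps a Gaussian belief per synapse and updates it in a streaming fashion. It is a generic mean-field (Bayes-by-backprop-style) step for a real-valued weight, not the SNN rule derived in the paper; all names and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class BayesianSynapse:
    # The weight is a distribution N(mu, sigma^2), not a point estimate;
    # sigma encodes the current epistemic uncertainty.
    def __init__(self, mu=0.0, sigma=1.0):
        self.mu, self.sigma = mu, sigma

    def sample(self):
        return self.mu + self.sigma * rng.standard_normal()

    def update(self, grad_w, lr=0.01, prior_sigma=1.0):
        # Data term moves the mean; the KL-to-prior term keeps the
        # variance from collapsing or exploding. (The data-term gradient
        # on sigma is omitted here for brevity.)
        self.mu -= lr * grad_w
        self.sigma += lr * (1.0 / self.sigma - self.sigma / prior_sigma**2)
        self.sigma = max(self.sigma, 1e-3)
```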

    Reconsidering the Imaging Evidence Used to Implicate Prediction Error as the Driving Force behind Learning.

    In this paper, we review the evidence that learning is driven by the signaling of prediction error (PE) by some neurons. We model associative learning in artificial neural networks using Hebbian (non-PE) learning algorithms to investigate whether the data used to implicate PE in learning can arise without actual PE computation. We conclude that the metabolic demands of synaptic change during Hebbian learning would produce a PE-correlated component in functional magnetic resonance imaging (fMRI), which suggests that the research used to implicate PE in learning is currently inconclusive.
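    The core argument can be made concrete with a toy simulation. Under a saturating Hebbian rule (an assumed, simplified form, not the paper's model), per-trial synaptic change declines over training in lockstep with a Rescorla-Wagner prediction error, so a metabolic correlate of the former would mimic the latter in imaging data.

```python
# Toy comparison: per-trial Hebbian weight change (proxy for metabolic
# demand, hence BOLD signal) versus a Rescorla-Wagner prediction error.
# Both decay identically over trials, so fMRI cannot tell them apart.
eta, w, V = 0.2, 0.0, 0.0
for trial in range(10):
    x, reward = 1.0, 1.0            # CS always present, US always follows
    dw = eta * x * (1.0 - w)        # saturating Hebbian change
    pe = reward - V                 # Rescorla-Wagner prediction error
    print(f"trial {trial}: |dw|={dw:.3f}  PE={pe:.3f}")
    w += dw
    V += eta * pe
```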

    Distinctive properties of biological neural networks and recent advances in bottom-up approaches toward a better biologically plausible neural network

    Although it may appear infeasible and impractical, building artificial intelligence (AI) using a bottom-up approach based on an understanding of neuroscience is straightforward. The lack of a generalized governing principle for biological neural networks (BNNs) forces us to address this problem by converting piecemeal information on the diverse features of neurons, synapses, and neural circuits into AI. In this review, we describe recent attempts to build biologically plausible neural networks, either by following optimization strategies similar to those of biological neural networks or by implanting the outcomes of such optimization, such as the properties of single computational units and the characteristics of the network architecture. In addition, we propose a formalism for the relationship between the set of objectives that neural networks attempt to achieve and neural network classes categorized by how closely their architectural features resemble those of BNNs. This formalism is expected to define the potential roles of top-down and bottom-up approaches in building a biologically plausible neural network, and to offer a map for navigating the gap between neuroscience and AI engineering.

    Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks

    Biological plastic neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. The interplay of these elements leads to the emergence of adaptive behavior and intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed plastic neural networks with a large variety of dynamics, architectures, and plasticity rules: these artificial systems are composed of inputs, outputs, and plastic components that change in response to experiences in an environment. These systems may autonomously discover novel adaptive algorithms and lead to hypotheses on the emergence of biological adaptation. EPANNs have seen considerable progress over the last two decades. Current scientific and technological advances in artificial neural networks are now setting the conditions for radically new approaches and results. In particular, the limitations of hand-designed networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main methods and results are reviewed. Finally, new opportunities and developments are presented.
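    A minimal flavor of the EPANN loop: evolve the coefficients of a generalized Hebbian ("ABCD") plasticity rule against a toy task. The rule form is common in this literature; the task, the fitness function, and the evolutionary settings below are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def plasticity_rule(pre, post, coeffs):
    # Generalized Hebbian form often used in EPANN work:
    # dw = eta * (A*pre*post + B*pre + C*post + D); the coefficients evolve.
    A, B, C, D, eta = coeffs
    return eta * (A * pre * post + B * pre + C * post + D)

def fitness(coeffs, trials=100):
    # Illustrative task: a single plastic synapse should converge to w = 1
    # (echo its input). Fitness is the negative final error.
    w = 0.0
    for _ in range(trials):
        pre = rng.uniform(0.0, 1.0)
        post = w * pre
        w += plasticity_rule(pre, post, coeffs)
    return -abs(w - 1.0)

# Simple truncation-selection evolution over the rule coefficients.
pop = [rng.normal(0.0, 0.5, size=5) for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:5]
    pop = parents + [p + rng.normal(0.0, 0.1, size=5)
                     for p in parents for _ in range(3)]
best = max(pop, key=fitness)
```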

    Learning as filtering: Implications for spike-based plasticity.

    Most normative models in computational neuroscience describe the task of learning as the optimisation of a cost function with respect to a set of parameters. However, learning as optimisation fails to account for a time-varying environment during the learning process, and the resulting point estimate in parameter space does not account for uncertainty. Here, we frame learning as filtering, i.e., a principled method for including time and parameter uncertainty. We derive the filtering-based learning rule for a spiking neuronal network, the Synaptic Filter, and show its computational and biological relevance. For computational relevance, we show that filtering improves weight estimation performance compared with a gradient learning rule with an optimal learning rate. The dynamics of the mean of the Synaptic Filter are consistent with spike-timing-dependent plasticity (STDP), while the dynamics of the variance make novel predictions regarding spike-timing-dependent changes of EPSP variability. Moreover, the Synaptic Filter explains experimentally observed negative correlations between homo- and heterosynaptic plasticity.
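    As a scalar caricature of learning-as-filtering, the step below maintains a Gaussian belief N(mu, var) over one synaptic weight and updates it Kalman-style. The paper's Synaptic Filter is derived for spiking neurons and differs in detail, so treat the names and constants here as assumptions.

```python
def synaptic_filter_step(mu, var, x, error, obs_noise=1.0, drift=1e-3):
    # One Kalman-style update of the weight belief N(mu, var) for a
    # scalar observation y = x * w + noise, with error = y - x * mu.
    var += drift                          # time-varying world: uncertainty grows
    gain = var * x / (var * x**2 + obs_noise)
    mu += gain * error                    # mean update: adaptive-rate learning
    var *= (1.0 - gain * x)               # observing data shrinks uncertainty
    return mu, var
```

    The gain acts as a learning rate that grows with uncertainty, which is one sense in which filtering can outperform gradient learning even at its optimal fixed rate.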