
    Optimal learning rules for discrete synapses

    There is evidence that biological synapses have a limited number of discrete weight states. Memory storage with such synapses behaves quite differently from storage with unbounded, continuous weights, as old memories are automatically overwritten by new ones. Consequently, there has been substantial discussion about how this affects learning and storage capacity. In this paper, we calculate the storage capacity of discrete, bounded synapses in terms of Shannon information. We use this to optimize the learning rules and investigate how the maximum information capacity depends on the number of synapses, the number of synaptic states, and the coding sparseness. Below a certain critical number of synapses per neuron (comparable to numbers found in biology), we find that storage is similar to that of unbounded, continuous synapses. Hence, discrete synapses do not necessarily have lower storage capacity.
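    A minimal Python sketch of the overwriting ("palimpsest") effect the abstract describes, with binary synapses and a simple stochastic update; the rule and all numbers below are illustrative assumptions, not the paper's optimized learning rules:

```python
import numpy as np

# Sketch: N binary synapses store a sequence of patterns. Each synapse
# switches toward the newest pattern with probability q, so the recall
# signal for older memories decays as they are overwritten.
rng = np.random.default_rng(0)
N = 10_000          # number of synapses (assumed)
q = 0.3             # transition probability per update (assumed)
T = 200             # number of patterns stored in sequence

w = rng.integers(0, 2, N)              # binary weights in {0, 1}
patterns = rng.integers(0, 2, (T, N))

for t in range(T):
    flip = rng.random(N) < q
    w = np.where(flip, patterns[t], w)  # move toward the new pattern

# Recall signal: overlap between the weights and a pattern of given age.
for age in (1, 10, 50, 150):
    overlap = np.mean(w == patterns[T - age])
    print(f"age {age:4d}: overlap {overlap:.3f}  (chance = 0.5)")
```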

    Pavlov's dog associative learning demonstrated on synaptic-like organic transistors

    In this letter, we present an original demonstration of an associative-learning neural network inspired by Pavlov's famous dog experiments. A single nanoparticle organic memory field-effect transistor (NOMFET) is used to implement each synapse. We show how the physical properties of this dynamic memristive device can be used to perform low-power write operations for learning and to implement short-term association using temporal coding and spike-timing-dependent-plasticity-based learning. An electronic circuit was built to validate the proposed learning scheme with packaged devices, with good reproducibility despite the complex synaptic-like dynamics of the NOMFET in the pulse regime.
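    A minimal sketch of the pairing protocol in plain Python, using a generic pair-based STDP rule rather than the NOMFET device physics; all constants are illustrative assumptions:

```python
import numpy as np

# Sketch of Pavlovian conditioning: a strong "food" synapse reliably
# drives the output neuron; a weak "bell" synapse fires just before it.
# Pre-before-post pairing potentiates the bell synapse until the bell
# alone crosses threshold.
A_plus, tau = 0.08, 20.0       # STDP gain and time constant (assumed)
threshold = 1.0
w_bell, w_food = 0.2, 1.2      # bell starts sub-threshold (assumed)

def stdp(w, dt):
    """Potentiate for pre-before-post timing dt = t_post - t_pre >= 0."""
    return min(w + A_plus * np.exp(-dt / tau), 1.5)   # hard weight cap

for epoch in range(25):
    t_bell, t_food = 0.0, 5.0  # bell precedes food by 5 ms
    t_post = t_food            # food drive fires the output neuron
    w_bell = stdp(w_bell, t_post - t_bell)

fires_alone = w_bell >= threshold
print(f"w_bell = {w_bell:.2f}; bell alone drives output: {fires_alone}")
```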

    Statistical physics of neural systems with non-additive dendritic coupling

    How neurons process their inputs crucially determines the dynamics of biological and artificial neural networks. In such neural and neural-like systems, synaptic input is typically considered to be merely transmitted linearly or sublinearly by the dendritic compartments. Yet, single-neuron experiments report pronounced supralinear dendritic summation of sufficiently synchronous and spatially nearby inputs. Here, we provide a statistical physics approach to study the impact of such non-additive dendritic processing on single-neuron responses and on the performance of associative memory tasks in artificial neural networks. First, we compute the effect of random input to a neuron incorporating nonlinear dendrites. This approach is independent of the details of the neuronal dynamics. Second, we use those results to study the impact of dendritic nonlinearities on the network dynamics in a paradigmatic model of associative memory, both numerically and analytically. We find that dendritic nonlinearities maintain network convergence and increase the robustness of memory performance against noise. Interestingly, an intermediate number of dendritic branches is optimal for memory functionality.
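    A toy sketch of non-additive dendritic summation (the threshold, boost, and branch counts are illustrative assumptions, not the paper's statistical-physics calculation):

```python
import numpy as np

# Sketch: inputs are distributed across B dendritic branches; a branch
# whose summed input exceeds a threshold fires a dendritic spike that
# adds a fixed boost, making summation supralinear per branch.
rng = np.random.default_rng(1)

def dendritic_response(x, B, theta=2.0, boost=4.0):
    """Split inputs across B branches and sum branch outputs at the soma."""
    out = 0.0
    for b in np.array_split(x, B):
        s = b.sum()
        out += s + (boost if s > theta else 0.0)   # supralinear step
    return out

x = rng.normal(0.5, 1.0, 100)                      # 100 synaptic inputs
linear = x.sum()
for B in (1, 5, 20, 100):
    print(f"B={B:3d}: response {dendritic_response(x, B):7.2f} "
          f"(linear sum = {linear:.2f})")
```

    With these assumed numbers, an intermediate branch count tends to give the largest response, loosely echoing the abstract's observation that an intermediate number of branches is optimal.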

    Adiabatic Quantum Optimization for Associative Memory Recall

    Hopfield networks are a variant of associative memory that recall information stored in the couplings of an Ising model. Stored memories are fixed points of the network dynamics that correspond to energetic minima of the spin state. We formulate the recall of memories stored in a Hopfield network as energy minimization by adiabatic quantum optimization (AQO). Numerical simulations of the quantum dynamics allow us to quantify the AQO recall accuracy with respect to the number of stored memories and the noise in the input key. We also investigate AQO performance with respect to how memories are stored in the Ising model using different learning rules. Our results indicate that AQO performance varies strongly with learning rule due to the changes in the energy landscape. Consequently, learning rules offer indirect methods for investigating changes to the computational complexity of the recall task and the computational efficiency of AQO.
    Comment: 22 pages, 11 figures. Updated for clarity and figures; to appear in Frontiers of Physics.
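    A sketch of the classical setup that AQO would anneal over: memories stored in Ising couplings via the standard Hebb rule, each checked to be a local energy minimum. The sizes and patterns are illustrative assumptions:

```python
import numpy as np

# Sketch: store P random memories in Hebbian Ising couplings and verify
# that each memory is a local minimum of the Ising energy, i.e. the
# target state an adiabatic quantum optimizer would anneal toward.
rng = np.random.default_rng(2)
N, P = 32, 3
memories = rng.choice([-1, 1], size=(P, N))

J = (memories.T @ memories) / N        # Hebb learning rule
np.fill_diagonal(J, 0.0)

def energy(s):
    return -0.5 * s @ J @ s            # Ising energy of spin state s

for mu, m in enumerate(memories):
    # A memory is stable if no single spin flip lowers the energy.
    flips = [energy(m * np.where(np.arange(N) == i, -1, 1))
             for i in range(N)]
    stable = all(e >= energy(m) for e in flips)
    print(f"memory {mu}: energy {energy(m):.2f}, local minimum: {stable}")
```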

    Synapse efficiency diverges due to synaptic pruning following over-growth

    In the development of the brain, it is known that synapses are pruned following over-growth. This pruning following over-growth seems to be a universal phenomenon that occurs in almost all areas -- the visual cortex, the motor area, the association area, and so on. It has been shown numerically that synapse efficiency is increased by systematic deletion. We discuss the synapse efficiency to evaluate the effect of pruning following over-growth, and analytically show that the synapse efficiency diverges as O(log c) in the limit where the connecting rate c becomes extremely small. Under a fixed synapse number criterion, an optimal connecting rate that maximizes memory performance exists.
    Comment: 15 pages, 16 figures.
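    A rough Monte Carlo sketch of the fixed-synapse-number tradeoff (illustrative, not the paper's analytic calculation; the synapse budget, recall criterion, and random rather than systematic deletion are all assumptions):

```python
import numpy as np

# Sketch: a Hopfield network with couplings randomly pruned at
# connecting rate c. Holding the total synapse count c*N^2 fixed,
# smaller c buys a larger network; we probe how many patterns survive
# a one-step recall test at each c.
rng = np.random.default_rng(3)

def recall_ok(N, P, c, thresh=0.95):
    xi = rng.choice([-1, 1], size=(P, N))
    J = (xi.T @ xi) / N                   # Hebb rule
    np.fill_diagonal(J, 0.0)
    J *= rng.random((N, N)) < c           # random synaptic pruning
    s = np.sign(J @ xi[0])                # one parallel update, memory 0
    s[s == 0] = 1
    return np.mean(s == xi[0]) >= thresh

budget = 160_000                          # fixed total synapse number
for c in (1.0, 0.4, 0.1):
    N = int(np.sqrt(budget / c))          # so that c * N^2 = budget
    P = 1
    while recall_ok(N, P + 1, c):         # grow load until recall degrades
        P += 1
    print(f"c={c:.1f}, N={N:4d}: ~{P} patterns, "
          f"{P * N / budget:.4f} stored bits-per-synapse proxy")
```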

    Cortical region interactions and the functional role of apical dendrites

    The basal and distal apical dendrites of pyramidal cells occupy distinct cortical layers and are targeted by axons originating in different cortical regions. Hence, apical and basal dendrites receive information from distinct sources. Physiological evidence suggests that this anatomically observed segregation of input sources may have functional significance. This possibility has been explored in various connectionist models that employ neurons with functionally distinct apical and basal compartments. A neuron in which separate sets of inputs can be integrated independently has the potential to operate in a variety of ways that are not possible for the conventional model of a neuron in which all inputs are treated equally. This article thus considers how functionally distinct apical and basal dendrites can contribute to the information processing capacities of single neurons and, in particular, how information from different cortical regions could have disparate effects on neural activity and learning.
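    A minimal sketch of a two-compartment unit in the spirit of such models (not the article's specific model): basal input determines what the unit responds to, while apical input from another region modulates the gain of that response rather than driving it directly. Weights and inputs are illustrative assumptions:

```python
import numpy as np

# Sketch: a unit with separate basal and apical integration. Basal
# drive sets the feedforward response; apical context scales its gain.
def two_compartment(basal, apical, w_b, w_a, k=2.0):
    drive = w_b @ basal                        # feedforward, basal tuft
    context = w_a @ apical                     # feedback, apical tuft
    gain = 1.0 + k / (1.0 + np.exp(-context))  # apical input sets gain
    return gain * np.maximum(drive, 0.0)       # modulated, not driven

rng = np.random.default_rng(4)
w_b, w_a = rng.random(8), rng.random(8)  # nonnegative weights for clarity
basal = rng.random(8)
for context_strength in (0.0, 1.0, 3.0):
    apical = context_strength * np.ones(8)
    print(f"apical x{context_strength}: rate "
          f"{two_compartment(basal, apical, w_b, w_a):.3f}")
```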