    Electrical Compartmentalization in Neurons

    The dendritic tree of neurons plays an important role in information processing in the brain. While it is thought that dendrites require independent subunits to perform most of their computations, it is still not understood how they compartmentalize into functional subunits. Here, we show how these subunits can be deduced from the properties of dendrites. We devise a formalism that links the dendritic arborization to an impedance-based tree graph and show how the topology of this graph reveals independent subunits. This analysis reveals that cooperativity between synapses decreases slowly with increasing electrical separation, and thus that few independent subunits coexist. We nevertheless find that balanced inputs or shunting inhibition can modify this topology and increase the number and size of the subunits in a context-dependent manner. We also find that this dynamic recompartmentalization can enable branch-specific learning of stimulus features. Analysis of dendritic patch-clamp recording experiments confirmed our theoretical predictions.
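
    As a minimal sketch of the idea, one can represent a passive dendritic tree as a compartmental conductance matrix, invert it to obtain an impedance matrix, and threshold the pairwise electrical coupling to read off independent subunits. The tree layout, conductance values, and threshold below are illustrative assumptions, not the paper's formalism or parameters.

```python
import numpy as np
import networkx as nx

# A minimal compartmental sketch: tree edges and conductances are illustrative.
edges = [(0, 1), (1, 2), (1, 3), (0, 4), (4, 5), (4, 6)]
n = 7
g_axial, g_leak = 5.0, 1.0            # nS, assumed values

G = np.zeros((n, n))
for i, j in edges:                    # axial coupling between neighbouring compartments
    G[i, i] += g_axial; G[j, j] += g_axial
    G[i, j] -= g_axial; G[j, i] -= g_axial
G += g_leak * np.eye(n)               # leak conductance in every compartment

Z = np.linalg.inv(G)                  # steady-state impedance matrix

# Electrical coupling between two sites, normalized by local input impedances.
coupling = Z / np.sqrt(np.outer(np.diag(Z), np.diag(Z)))

# Sites that remain strongly coupled belong to one subunit; connected
# components of the thresholded coupling graph are the independent subunits.
threshold = 0.5                       # illustrative cut-off
A = (coupling > threshold).astype(int)
subunits = list(nx.connected_components(nx.from_numpy_array(A)))
print(f"{len(subunits)} independent subunits: {subunits}")
```

    In this toy picture, adding a shunting conductance to a compartment (a larger diagonal entry of G) lowers the coupling across it and splits the tree into more subunits, in the spirit of the context-dependent recompartmentalization described above.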

    Exploring the information processing capabilities of random dendritic neural nets

    The goal of this research is to investigate to what degree random artificial dendritic nets can differentiate between temporal patterns after modifying the synaptic weights of certain synapses according to a learning algorithm based on the Fourier transform. A dendritic net is organized into subnets, which provide impulse responses that serve as basis functions for the Fourier decomposition of the input pattern. Each subnet is randomly generated. According to the simulations, randomly generated subnets with appropriate parameters are good enough to provide the impulse responses for the Fourier decomposition. The electrical potential pattern across the membrane of the dendrites follows the cable equation. The simulations use a linear synapse model, which is an approximation to biologically realistic synapses. Both excitatory and inhibitory synapses are present in a dendritic net. The simulations show that random dendritic nets with a small number of subnets can be modified to differentiate between electrical current patterns to a high degree when the membrane conductance of the dendrites is high, and they also show that the random structures are highly fault-tolerant. The performance of a random dendritic net does not change much after adding or deleting subnets.
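
    The membrane dynamics mentioned above can be sketched with a finite-difference discretization of the passive cable equation on a single branch; the branch length, time step, and parameter values here are illustrative, and the linear synapse is reduced to a fixed current injection.

```python
import numpy as np

# Passive cable equation, explicit finite differences:
#   tau * dV/dt = lam**2 * d2V/dx2 - V + r_m * i_syn
# All parameters are illustrative; the stability condition
# dt/tau * lam**2/dx**2 <= 0.5 holds for the values below.
n_x, n_t = 100, 2000
dx, dt = 0.01, 1e-4                   # cm, s (assumed discretization)
tau, lam, r_m = 0.02, 0.05, 1.0       # s, cm, input scaling

V = np.zeros(n_x)
i_syn = np.zeros(n_x)
i_syn[30] = 1.0                       # linear synapse reduced to a current injection

for _ in range(n_t):
    lap = np.empty(n_x)
    lap[1:-1] = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dx**2
    lap[0] = (V[1] - V[0]) / dx**2    # sealed (zero-flux) ends
    lap[-1] = (V[-2] - V[-1]) / dx**2
    V += dt / tau * (lam**2 * lap - V + r_m * i_syn)

print(f"peak potential {V.max():.4f} at compartment {V.argmax()}")
```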

    Network Plasticity as Bayesian Inference

    General results from statistical learning theory suggest that not only brain computations, but also brain plasticity, should be understood as probabilistic inference. But a model for that has been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models that propose convergence of parameters to maximum likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information so well to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains from a functional perspective a number of experimental data on stochastic aspects of synaptic plasticity that previously appeared to be quite puzzling.
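
    The core idea, sampling network configurations from a posterior rather than converging to a point estimate, can be sketched as Langevin dynamics on the synaptic weights. The quadratic log-likelihood below is a stand-in for the network's actual objective, and all constants are illustrative.

```python
import numpy as np

# Langevin dynamics on synaptic weights:
#   dw = b * d/dw [log prior(w) + log likelihood(w)] dt + sqrt(2 b dt) * noise
# The stationary distribution is the posterior, so the weights keep sampling
# plausible configurations instead of settling on a maximum likelihood point.
rng = np.random.default_rng(0)
w = rng.normal(size=2)
b, dt, n_steps = 0.1, 1e-2, 20000
w_target, prior_var = np.array([1.0, -0.5]), 4.0  # toy data and prior

samples = []
for _ in range(n_steps):
    grad_log_prior = -w / prior_var               # zero-mean Gaussian prior
    grad_log_lik = -(w - w_target)                # stand-in quadratic likelihood
    drift = b * (grad_log_prior + grad_log_lik) * dt
    w += drift + np.sqrt(2.0 * b * dt) * rng.normal(size=w.shape)
    samples.append(w.copy())

samples = np.array(samples[5000:])                # discard burn-in
print("posterior mean ~", samples.mean(axis=0))   # prior merged with "experience"
```

    Because the noise never vanishes, the weights keep wandering within the posterior, which is one way to read the continuous compensation for disturbances mentioned above.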

    Neuro-memristive Circuits for Edge Computing: A review

    The volume, veracity, variability, and velocity of data produced by the ever-growing network of sensors connected to the Internet pose challenges for the power management, scalability, and sustainability of cloud computing infrastructure. Increasing the data processing capability of edge computing devices at lower power requirements can reduce several overheads for cloud computing solutions. This paper provides a review of neuromorphic CMOS-memristive architectures that can be integrated into edge computing devices. We discuss why neuromorphic architectures are useful for edge devices and show the advantages, drawbacks, and open problems in the field of neuro-memristive circuits for edge computing.
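
    As a sketch of why memristive crossbars suit low-power edge inference: a vector-matrix multiply is computed in a single analog step by Ohm's and Kirchhoff's laws, with signed weights mapped to differential conductance pairs. The device ranges and noise levels below are assumptions, not values taken from the review.

```python
import numpy as np

# Analog vector-matrix multiply on a crossbar: each crosspoint conductance
# stores a weight, input voltages drive the rows, and Kirchhoff's current law
# sums I = V @ G on the columns in one step. Signed weights use a
# differential pair of columns (G_pos - G_neg).
rng = np.random.default_rng(1)
g_min, g_max = 1e-6, 1e-4                         # Siemens, assumed device range

W = rng.uniform(-1, 1, size=(4, 3))               # target signed weights
G_pos = g_min + (g_max - g_min) * np.clip(W, 0, None)
G_neg = g_min + (g_max - g_min) * np.clip(-W, 0, None)

v_in = np.array([0.1, -0.2, 0.05, 0.3])           # row voltages
noise = rng.normal(0, 1e-8, size=(2, 3))          # illustrative read noise
i_out = (v_in @ G_pos + noise[0]) - (v_in @ G_neg + noise[1])

print("crossbar currents :", i_out)
print("ideal scaled value:", (g_max - g_min) * (v_in @ W))
```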

    A Biologically Plausible Learning Rule for Deep Learning in the Brain

    Researchers have proposed that deep learning, which is providing important progress in a wide range of high complexity tasks, might inspire new insights into learning in the brain. However, the methods used for deep learning by artificial neural networks are biologically unrealistic and would need to be replaced by biologically realistic counterparts. Previous biologically plausible reinforcement learning rules, like AGREL and AuGMEnT, showed promising results but focused on shallow networks with three layers. Will these learning rules also generalize to networks with more layers, and can they handle tasks of higher complexity? We demonstrate the learning scheme on classical and hard image-classification benchmarks (MNIST, CIFAR10, and CIFAR100), cast as direct reward tasks, for fully connected, convolutional, and locally connected architectures. We show that our learning rule, Q-AGREL, performs comparably to supervised learning via error backpropagation, with this type of trial-and-error reinforcement learning requiring only 1.5-2.5 times more epochs, even when classifying 100 different classes as in CIFAR100. Our results provide new insights into how deep learning may be implemented in the brain.
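
    A schematic sketch of an AGREL-style trial-and-error update may help: the reward-prediction error of the selected action scales a Hebbian update, and feedback from the chosen output unit gates plasticity in the hidden layer. The network sizes, gating form, and constants below are illustrative; the exact Q-AGREL rule is specified in the paper.

```python
import numpy as np

# Schematic AGREL-style update on a single hidden layer (not the paper's
# exact Q-AGREL rule): reward-prediction error times a feedback-gated
# Hebbian term, applied only along the pathway of the selected action.
rng = np.random.default_rng(2)
n_in, n_hid, n_out = 784, 100, 10                 # illustrative sizes
W1 = rng.normal(0.0, 0.1, (n_in, n_hid))
W2 = rng.normal(0.0, 0.1, (n_hid, n_out))
lr, eps = 0.01, 0.05                              # assumed constants

def trial(x, label, W1, W2):
    h = np.maximum(0.0, x @ W1)                   # ReLU hidden layer
    q = h @ W2                                    # one Q-value per class/action
    a = rng.integers(n_out) if rng.random() < eps else int(q.argmax())
    r = 1.0 if a == label else 0.0                # direct reward task
    delta = r - q[a]                              # reward-prediction error

    gate = W2[:, a] * (h > 0)                     # feedback gate from chosen unit
    W2[:, a] += lr * delta * h                    # delta-modulated Hebbian update
    W1 += lr * delta * np.outer(x, gate)          # gated update, hidden weights
    return r

x = rng.random(n_in)                              # stand-in for an MNIST image
print("reward on one trial:", trial(x, 3, W1, W2))
```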