
    Computing threshold functions using dendrites

    Neurons, modeled as linear threshold units (LTUs), can in theory compute all threshold functions. In practice, however, some of these functions require synaptic weights of arbitrarily large precision. We show here that dendrites can alleviate this requirement. We introduce the non-Linear Threshold Unit (nLTU), which integrates synaptic input sub-linearly within distinct subunits to account for local saturation in dendrites. We systematically search the parameter space of the nLTU and the LTU to compare them. First, this shows that the nLTU can compute all threshold functions with lower-precision weights than the LTU. Second, we show that an nLTU can compute significantly more functions than an LTU when an input can only make a single synapse. This work paves the way for a new generation of networks made of nLTUs with binary synapses. (Comment: 5 pages, 3 figures.)
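    The abstract gives no formal definition of the unit; the sketch below is only a minimal illustration of the idea, with the two-subunit split, unit weights, and thresholds chosen here purely for demonstration.

```python
import numpy as np

def ltu(x, w, theta):
    """Linear threshold unit: fires iff the weighted input sum reaches the threshold."""
    return int(np.dot(w, x) >= theta)

def nltu(x, w, theta, subunits, saturation):
    """Illustrative non-linear threshold unit: each dendritic subunit sums its own
    synapses, the subunit sum is clipped at a saturation level (sub-linear
    integration), and the clipped outputs are compared against the somatic threshold."""
    total = sum(min(np.dot(w[idx], x[idx]), saturation) for idx in subunits)
    return int(total >= theta)

# Four binary inputs split over two subunits, all weights equal to 1 (binary synapses).
x = np.array([1, 1, 0, 1])
w = np.ones(4)
subunits = [np.array([0, 1]), np.array([2, 3])]
print(ltu(x, w, theta=3))                                      # -> 1
print(nltu(x, w, theta=2, subunits=subunits, saturation=1.0))  # -> 1 (each subunit saturates at 1)
```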

    CMOS-Memristor Dendrite Threshold Circuits

    Non-linear neuron models overcome the limitations of linear binary neuron models, which are unable to compute linearly non-separable functions such as XOR. While several biologically plausible models based on dendrite thresholds have been reported in previous studies, the hardware implementation of such non-linear neuron models remains an open problem. In this paper, we propose a circuit design that implements dendritic non-linear responses of the spiking and saturating types. The proposed dendrite cells are used to build an XOR circuit and an intensity-detection circuit, each consisting of different combinations of dendrite cells with saturating and spiking responses. The dendrite cells are designed using a set of memristors, Zener diodes, and CMOS NOT gates. The circuits are designed, analyzed and verified on circuit boards. (Comment: Zhanbossinov, K. Smagulova, A. P. James, CMOS-Memristor Dendrite Threshold Circuits, 2016 IEEE APCCAS, Jeju, Korea, October 25-28, 2016.)
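    The hardware details live in the circuit design itself, but the logical behaviour being targeted can be sketched in software. The model below is an assumed illustration of how a saturating (OR-like) branch and a spiking (AND-like) branch can combine to yield XOR; it is not claimed to match the paper's circuit topology.

```python
def saturating_branch(a, b, ceiling=1):
    """Sub-linear (saturating) dendritic response: the branch sum is clipped at a ceiling."""
    return min(a + b, ceiling)

def spiking_branch(a, b, threshold=2):
    """Supra-linear (spiking) dendritic response: all-or-none above a local threshold."""
    return 1 if a + b >= threshold else 0

def xor_neuron(a, b):
    """Soma combines an excitatory saturating branch (acts like OR) with an
    inhibitory spiking branch (acts like AND): OR minus AND gives XOR."""
    return saturating_branch(a, b) - spiking_branch(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_neuron(a, b))   # prints the XOR truth table
```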

    Dendrites and conformal symmetry

    Progress toward characterizing the structural and biophysical properties of neural dendrites, together with recent findings emphasizing their role in neural computation, has propelled growing interest in refining existing theoretical models of electrical propagation in dendrites while advocating novel analytic tools. In this paper we focus on the cable equation describing electrical propagation in dendrites of different geometries. When the geometry is cylindrical we show that the cable equation is invariant under the Schrödinger group, and by using the dendrite parameters a representation of the Schrödinger algebra is provided. Furthermore, when the geometry profile is parabolic we show that the cable equation is equivalent to the Schrödinger equation for the 1-dimensional free particle, which is invariant under the Schrödinger group. Moreover, we show that there is a family of dendrite geometries for which the cable equation is equivalent to the Schrödinger equation for 1-dimensional conformal quantum mechanics. (Comment: 19 pages.)
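    The abstract does not reproduce the equations; for orientation, the standard passive cable equation and the free-particle Schrödinger equation it is related to are, in their usual textbook forms:

```latex
% Passive cable equation for the membrane potential V(x,t), with length
% constant \lambda and membrane time constant \tau:
\lambda^{2}\,\frac{\partial^{2} V}{\partial x^{2}}
  = \tau\,\frac{\partial V}{\partial t} + V
% Schroedinger equation for the 1-dimensional free particle of mass m:
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2} \psi}{\partial x^{2}}
```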

    Modelling plasticity in dendrites: from single cells to networks

    One of the key questions in neuroscience is how our brain self-organises to process information efficiently. To answer this question, we need to understand the underlying mechanisms of plasticity and their role in shaping synaptic connectivity. Theoretical neuroscience typically investigates plasticity at the level of neural networks. Neural network models often consist of point neurons, completely neglecting neuronal morphology for reasons of simplicity. However, over the past decades it has become increasingly clear that inputs are processed locally in the dendrites before they reach the cell body. Dendritic properties enable local interactions between synapses and location-dependent modulation of inputs, making the position of synapses on dendrites highly important. These insights have changed our view of neurons, such that we now think of them as small networks of nearly independent subunits instead of simple points. Here, we propose that understanding how the brain processes information requires addressing the following questions: which plasticity mechanisms are present in the dendrites, and how do they enable the self-organisation of synapses across the dendritic tree for efficient information processing? Ultimately, dendritic plasticity mechanisms can be studied in networks of neurons with dendrites, possibly uncovering unknown mechanisms that shape the connectivity in our brains.

    Branch-specific plasticity enables self-organization of nonlinear computation in single neurons

    It has been conjectured that nonlinear processing in dendritic branches endows individual neurons with the capability to perform complex computational operations that are needed, for example, to solve the binding problem. However, it is not clear how single neurons could acquire such functionality in a self-organized manner, since most theoretical studies of synaptic plasticity and learning concentrate on neuron models without nonlinear dendritic properties. In the meantime, a complex picture of information processing with dendritic spikes and a variety of plasticity mechanisms in single neurons has emerged from experiments. In particular, new experimental data on dendritic branch strength potentiation in rat hippocampus have not yet been incorporated into such models. In this article, we investigate how experimentally observed plasticity mechanisms, such as depolarization-dependent STDP and branch-strength potentiation, could be integrated to self-organize nonlinear neural computations with dendritic spikes. We provide a mathematical proof that, in a simplified setup, these plasticity mechanisms induce a competition between dendritic branches, a novel concept in the analysis of single-neuron adaptivity. We show via computer simulations that such dendritic competition enables a single neuron to become a member of several neuronal ensembles and to acquire nonlinear computational capabilities, such as the capability to bind multiple input features. Hence our results suggest that nonlinear neural computation may self-organize in single neurons through the interaction of local synaptic and dendritic plasticity mechanisms.
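    The exact plasticity rules are specified in the article itself; the toy update below is only a caricature of branch-strength potentiation with a competition term, with all variable names and constants assumed here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_branches, n_syn = 4, 10
w = rng.uniform(0.1, 0.5, size=(n_branches, n_syn))   # fixed synaptic weights per branch
branch_gain = np.ones(n_branches)                      # branch strengths, initially equal

def step(x, spike_threshold=2.0, soma_threshold=4.0, eta=0.05):
    """One plasticity step: branches whose local drive crosses the dendritic-spike
    threshold are potentiated when the soma also fires, other branches are mildly
    depressed, so branches come to compete for the patterns the neuron responds to."""
    global branch_gain
    local = (w * x).sum(axis=1)                  # per-branch synaptic drive
    dendritic_spike = local >= spike_threshold   # which branches emit a local spike
    soma_fires = (branch_gain * local).sum() >= soma_threshold
    if soma_fires:
        branch_gain[dendritic_spike] += eta            # branch-strength potentiation
        branch_gain[~dendritic_spike] -= 0.2 * eta     # competition term
        branch_gain = np.clip(branch_gain, 0.1, 3.0)

for _ in range(200):
    step(rng.integers(0, 2, size=(n_branches, n_syn)))
print(branch_gain)   # after training the branch strengths are no longer uniform
```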

    Exploring Transfer Function Nonlinearity in Echo State Networks

    Supralinear and sublinear pre-synaptic and dendritic integration is considered to be responsible for the nonlinear computational power of biological neurons, emphasizing the role of nonlinear integration as opposed to nonlinear output thresholding. How, why, and to what degree transfer-function nonlinearity helps biologically inspired neural network models is not fully understood. Here, we study these questions in the context of echo state networks (ESNs). The ESN is a simple neural network architecture in which a fixed recurrent network is driven with an input signal, and the output is generated by a readout layer from measurements of the network states. The ESN architecture enjoys efficient training and good performance on certain signal-processing tasks, such as system identification and time series prediction. ESN performance has been analyzed with respect to the connectivity pattern of the network structure and the input bias. However, the effects of the transfer function in the network have not been studied systematically. Here, we use an approach based on the Taylor expansion of a frequently used transfer function, the hyperbolic tangent (tanh), to systematically study the effect of increasing nonlinearity of the transfer function on the memory, nonlinear capacity, and signal-processing performance of the ESN. Interestingly, we find that a quadratic approximation is enough to capture the computational power of an ESN with the tanh function. The results of this study apply to both software and hardware implementations of ESNs. (Comment: arXiv admin note: text overlap with arXiv:1502.0071.)
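    As a concrete reference point, the sketch below is a minimal NumPy echo state network in which the transfer function is a parameter, so that tanh can be swapped for a truncated polynomial approximation of it; the reservoir size, scaling, and the particular truncation shown are arbitrary choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))    # fixed input weights
W = rng.uniform(-1.0, 1.0, size=(n_res, n_res))      # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # rescale spectral radius below 1

def run_esn(u, f=np.tanh):
    """Drive the fixed reservoir with input u and collect its states; a readout
    (not shown) would be trained by linear regression on these states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = f(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

u = np.sin(np.linspace(0.0, 8.0 * np.pi, 500))        # toy input signal
states_tanh = run_esn(u)                              # usual tanh reservoir
states_poly = run_esn(u, f=lambda a: a - a**3 / 3.0)  # low-order Taylor truncation of tanh
```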

    NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    © 2016 Cheung, Schultz and Luk. NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high-performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, for example in the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current- or conductance-based neuronal models, such as the integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve real-time performance for 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times over an 8-core processor, or 2.83 times over GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
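    PyNN is the configuration interface mentioned above; the snippet below is a generic PyNN model description run against a standard backend (NEST is used here only because the abstract does not name NeuroFlow's own PyNN backend module).

```python
import pyNN.nest as sim   # a NeuroFlow build would import its own PyNN backend module instead

sim.setup(timestep=0.1)

# Two populations of conductance-based integrate-and-fire neurons,
# one of the model families listed above.
exc = sim.Population(800, sim.IF_cond_exp())
inh = sim.Population(200, sim.IF_cond_exp())

# Random sparse connectivity with static synapses; an STDP synapse type could be
# substituted to use the plasticity rule the platform supports.
conn = sim.FixedProbabilityConnector(p_connect=0.1)
sim.Projection(exc, inh, conn, sim.StaticSynapse(weight=0.01, delay=1.0))
sim.Projection(inh, exc, conn, sim.StaticSynapse(weight=0.05, delay=1.0),
               receptor_type='inhibitory')

exc.record('spikes')
sim.run(1000.0)           # simulate one second of biological time
sim.end()
```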

    Signal Perceptron: On the Identifiability of Boolean Function Spaces and Beyond

    In a seminal book, Minsky and Papert define the perceptron as a limited implementation of what they called “parallel machines.” They showed that some binary Boolean functions, including XOR, cannot be represented by a single-layer perceptron because of its limited capacity to learn only linearly separable functions. In this work, we propose a new, more powerful implementation of such parallel machines. This new mathematical tool is defined using analytic sinusoids, instead of linear combinations, to form an analytic signal representation of the function that we want to learn. We show that this reformulated parallel mechanism can learn, with a single layer, any non-linear k-ary Boolean function. Finally, to provide an example of its practical applications, we show that it outperforms the single-hidden-layer multilayer perceptron in both Boolean function learning and image classification tasks, while also being faster and requiring fewer parameters.
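    The paper's actual parameterisation is not given in the abstract; the toy unit below merely illustrates the underlying point that a single sinusoidal response can represent XOR, which no single linear threshold unit can. The function form is chosen here for illustration only.

```python
import numpy as np

def sinusoidal_unit(x, freq=np.pi / 2):
    """A single unit whose response is a sinusoid of the summed input,
    rather than a thresholded linear combination."""
    return np.sin(freq * np.sum(x))

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, round(sinusoidal_unit(np.array(x))))   # 0, 1, 1, 0 -> XOR
```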