
    I, NEURON: the neuron as the collective

    Purpose – In the last half-century, individual sensory neurons have been bestowed with characteristics of the whole human being, such as behavior and its oft-presumed precursor, consciousness. This anthropomorphization is pervasive in the literature. It is also absurd, given what we know about neurons, and it needs to be abolished. This study aims first to understand how it happened, and hence why it persists. Design/methodology/approach – The peer-reviewed sensory-neurophysiology literature extends to hundreds (perhaps thousands) of papers. Here, more than 90 mainstream papers were scrutinized. Findings – Anthropomorphization arose because single neurons were cast as “observers” who “identify”, “categorize”, “recognize”, “distinguish” or “discriminate” the stimuli, using math-based algorithms that reduce (“decode”) the stimulus-evoked spike trains to the particular stimuli inferred to elicit them. Without “decoding”, there is supposedly no perception. However, “decoding” is both unnecessary and unconfirmed. The neuronal “observer” in fact consists of the laboratory staff and the greater society that supports them. In anthropomorphization, the neuron becomes the collective. Research limitations/implications – Anthropomorphization underlies the widespread application of Information Theory and Signal Detection Theory to neurons, making both approaches incorrect. Practical implications – A great deal of time, money and effort has been wasted on anthropomorphic Reductionist approaches to understanding perception and consciousness. Those resources should be diverted into more fruitful approaches. Originality/value – A long-overdue scrutiny of the sensory-neuroscience literature reveals that anthropomorphization, a form of Reductionism that involves the presumption of single-neuron consciousness, has run amok in neuroscience. Consciousness is more likely to be an emergent property of the brain.

    Leader neurons in leaky integrate and fire neural network simulations

    Several experimental studies show the existence of leader neurons in population bursts of 2D living neural networks. A leader neuron is, basically, a neuron which fires at the beginning of a burst (respectively, network spike) more often than would be expected from its overall mean neural activity. This means that leader neurons have some burst-triggering power beyond a simple statistical effect. In this study, we characterize these leader-neuron properties, which naturally leads us to simulate 2D neural networks. To build our simulations, we choose the leaky integrate-and-fire (LIF) neuron model. Our LIF model produces stable leader neurons in the population bursts that we simulate. These leader neurons are excitatory neurons and have a low membrane-potential firing threshold. Beyond these first two properties, the conditions required for a neuron to be a leader neuron are difficult to identify and seem to depend on several parameters involved in the simulations themselves. However, a detailed linear analysis shows a trend in the properties required for a neuron to be a leader neuron. Our main finding is: a leader neuron sends a signal to many excitatory neurons as well as to a few inhibitory neurons, and receives only a few signals from other excitatory neurons. Our linear analysis exhibits five essential properties for leader neurons, with their relative importance. This means that, considering a given neural network with a fixed mean number of connections per neuron, our analysis gives us a way of predicting which neurons can be good leader neurons and which cannot. Our prediction formula gives a good statistical prediction even if, for a single given neuron, the success rate does not reach one hundred percent. Comment: 25 pages, 13 figures, 2 tables
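
    A minimal leaky integrate-and-fire sketch (an illustration of the model class the study simulates, not the authors' code; every parameter value below is an assumption chosen for readability): the membrane potential decays toward rest, integrates the input current, and emits a spike with a reset when it crosses threshold. A neuron with a lower firing threshold, like the leader neurons described above, reaches threshold earlier for the same drive.

        # Illustrative LIF neuron; parameters are assumptions, not the paper's values.
        import numpy as np

        def simulate_lif(i_input, dt=0.1, tau_m=20.0, v_rest=-70.0, v_reset=-70.0,
                         v_thresh=-54.0, r_m=10.0):
            """Integrate dV/dt = (-(V - v_rest) + r_m * I) / tau_m and emit spikes."""
            v = v_rest
            spike_times = []
            for t, i_t in enumerate(i_input):
                v += dt * (-(v - v_rest) + r_m * i_t) / tau_m
                if v >= v_thresh:            # threshold crossing: spike, then reset
                    spike_times.append(t * dt)
                    v = v_reset
            return spike_times

        # Noisy constant drive; lowering v_thresh makes the neuron fire earlier and more often.
        rng = np.random.default_rng(0)
        current = 2.0 + 0.5 * rng.standard_normal(10_000)
        spikes = simulate_lif(current)
        print(f"{len(spikes)} spikes in {len(current) * 0.1:.0f} ms")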

    Timing Control of Single Neuron Spikes with Optogenetic Stimulation

    This paper predicts the ability to externally control the firing times of a cortical neuron whose behavior follows the Izhikevich neuron model. The Izhikevich neuron model provides an efficient and biologically plausible method to track a cortical neuron's membrane potential and its firing times. The external control is a simple optogenetic model represented by a constant current source that can be turned on or off. This paper considers a firing frequency that is sufficiently low for the membrane potential to return to its resting potential after the neuron fires. The times required for the neuron to charge and to recover to the resting potential are fitted to functions of the Izhikevich neuron model parameters. Results show that linear functions of the model parameters can be used to predict the charging times with some accuracy and are sufficient to estimate the highest firing frequency achievable without interspike interference. Comment: 6 pages, 8 figures, 3 tables. To be presented at the 2018 IEEE International Conference on Communications (IEEE ICC 2018) in May 2018
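
    A rough sketch of the setup described above, under assumed parameter values (not the paper's implementation): an Izhikevich neuron with the standard regular-spiking parameters, driven by a constant current that a binary control signal switches on or off, standing in for the optogenetic stimulus.

        # Izhikevich neuron under an on/off constant current; i_on and the pulse
        # timing are assumptions, (a, b, c, d) is the standard regular-spiking set.
        import numpy as np

        def izhikevich(gate, dt=0.1, a=0.02, b=0.2, c=-65.0, d=8.0, i_on=10.0):
            v, u = -65.0, b * -65.0
            spike_times = []
            for t, on in enumerate(gate):
                i = i_on if on else 0.0
                v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i)
                u += dt * a * (b * v - u)
                if v >= 30.0:                # spike cutoff: record and reset
                    spike_times.append(t * dt)
                    v, u = c, u + d
            return spike_times

        # 50 ms pulses separated by 150 ms of rest, long enough for the membrane
        # to relax back toward its resting potential between firings.
        pulses = np.tile(np.r_[np.ones(500), np.zeros(1500)], 5).astype(bool)
        print(izhikevich(pulses))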

    A compact aVLSI conductance-based silicon neuron

    We present an analogue Very Large Scale Integration (aVLSI) implementation that uses first-order low-pass filters to implement a conductance-based silicon neuron for high-speed neuromorphic systems. The aVLSI neuron consists of a soma (cell body) and a single synapse, which is capable of linearly summing both the excitatory and inhibitory postsynaptic potentials (EPSP and IPSP) generated by spikes arriving from different sources. Rather than biasing the silicon neuron with different parameters for different spiking patterns, as is typically done, we provide digital control signals, generated by an FPGA, to the silicon neuron to obtain different spiking behaviours. The proposed neuron occupies only ~26.5 µm² in the IBM 130 nm process and thus can be integrated at very high density. Circuit simulations show that this neuron can emulate different spiking behaviours observed in biological neurons. Comment: BioCAS-201
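
    The circuit itself is analogue, but its synapse behaviour can be sketched numerically: a first-order low-pass filter turns each incoming spike train into a postsynaptic potential, and the excitatory and inhibitory contributions are summed linearly. The time step, time constant, rates and weights below are illustrative assumptions, not values from the paper.

        # First-order low-pass filtering of spike trains into PSPs (behavioural
        # sketch only; all constants are assumptions).
        import numpy as np

        def lowpass_psp(spike_train, dt=1e-4, tau=5e-3, weight=1.0):
            """y[n] = y[n-1] + (dt/tau) * (weight * spike[n] - y[n-1])"""
            y, out = 0.0, np.empty(len(spike_train))
            for n, s in enumerate(spike_train):
                y += (dt / tau) * (weight * s - y)
                out[n] = y
            return out

        rng = np.random.default_rng(1)
        exc = (rng.random(2000) < 0.02).astype(float)   # excitatory input spikes
        inh = (rng.random(2000) < 0.01).astype(float)   # inhibitory input spikes
        # Linear summation of the filtered excitatory and inhibitory contributions:
        psp = lowpass_psp(exc, weight=+1.0) + lowpass_psp(inh, weight=-0.5)
        print(psp.max(), psp.min())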

    Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex

    Neocortical neurons have thousands of excitatory synapses. It is a mystery how neurons integrate the input from so many synapses and what kind of large-scale network behavior this enables. It has previously been proposed that non-linear properties of dendrites enable neurons to recognize multiple patterns. In this paper we extend this idea by showing that a neuron with several thousand synapses arranged along active dendrites can learn to accurately and robustly recognize hundreds of unique patterns of cellular activity, even in the presence of large amounts of noise and pattern variation. We then propose a neuron model where some of the patterns recognized by a neuron lead to action potentials and define the classic receptive field of the neuron, whereas the majority of the patterns recognized by a neuron act as predictions by slightly depolarizing the neuron without immediately generating an action potential. We then present a network model based on neurons with these properties and show that the network learns a robust model of time-based sequences. Given the similarity of excitatory neurons throughout the neocortex and the importance of sequence memory in inference and behavior, we propose that this form of sequence memory is a universal property of neocortical tissue. We further propose that cellular layers in the neocortex implement variations of the same sequence memory algorithm to achieve different aspects of inference and behavior. The neuron and network models we introduce are robust over a wide range of parameters as long as the network uses a sparse distributed code of cellular activations. The sequence capacity of the network scales linearly with the number of synapses on each neuron. Thus neurons need thousands of synapses to learn the many temporal patterns in sensory stimuli and motor sequences. Comment: Submitted for publication
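
    A toy sketch of the recognition mechanism the abstract describes, with sizes and thresholds chosen as assumptions rather than taken from the paper: a dendritic segment stores a small sample of synapses onto a sparse input pattern and is taken to recognize that pattern whenever enough of its synapses are active at once, which keeps recognition robust to noise and subsampling.

        # Threshold-based pattern recognition on one dendritic segment; all sizes
        # and thresholds are assumptions for illustration.
        import numpy as np

        N_INPUTS = 2048              # size of the sparse input space
        SYNAPSES_PER_SEGMENT = 20
        MATCH_THRESHOLD = 12         # active synapses needed for a dendritic spike

        rng = np.random.default_rng(2)

        def make_segment(pattern):
            """Sample a subset of an active pattern as the segment's synapses."""
            return rng.choice(pattern, size=SYNAPSES_PER_SEGMENT, replace=False)

        def segment_active(segment, active_inputs):
            """NMDA-like dendritic spike when the overlap reaches the threshold."""
            return len(np.intersect1d(segment, active_inputs)) >= MATCH_THRESHOLD

        # Store one sparse pattern (~2% of inputs active), then test with noise.
        pattern = rng.choice(N_INPUTS, size=40, replace=False)
        segment = make_segment(pattern)
        noisy = np.union1d(rng.choice(pattern, 30, replace=False),   # 75% of pattern
                           rng.choice(N_INPUTS, 10, replace=False))  # plus noise
        print(segment_active(segment, pattern), segment_active(segment, noisy))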

    CMOS circuit implementations for neuron models

    The mathematical neurons used as basic cells in popular neural network architectures and algorithms are discussed. The most popular neuron models (without training) used in neural network architectures and algorithms (NNA) are considered, focusing on hardware implementation of neuron models used in NNA and in emulation of biological systems. Mathematical descriptions and block-diagram representations are utilized in an independent approach. Non-oscillatory and oscillatory models are discussed
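
    For context, the kind of mathematical neuron cell the survey refers to can be sketched in a few lines; the sigmoid activation and example numbers here are assumptions for illustration, since the paper itself concerns CMOS circuit realizations of such models rather than software.

        # Classic non-oscillatory mathematical neuron: weighted sum plus a
        # saturating activation (the sigmoid is chosen here as an assumption).
        import math

        def neuron_output(inputs, weights, bias=0.0):
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            return 1.0 / (1.0 + math.exp(-activation))

        print(neuron_output([0.5, -1.2, 0.3], [1.0, 0.4, -2.0], bias=0.1))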

    The Optimal Size of Stochastic Hodgkin-Huxley Neuronal Systems for Maximal Energy Efficiency in Coding of Pulse Signals

    The generation and conduction of action potentials represents a fundamental means of communication in the nervous system and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in the process of transferring pulse signals with action potentials. By computer simulation of a stochastic version of the Hodgkin-Huxley model with a detailed description of random ion channel gating, and by analytically solving a bistable neuron model that mimics action potential generation as a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes energy efficiency for a neuron. We also investigate the energy efficiency of a neuron population in which input pulse signals are represented by synchronized spikes and read out with a downstream coincidence-detector neuron. We find an optimal combination of the number of neurons in the population and the number of ion channels in each neuron that maximizes the energy efficiency. The energy efficiency depends on the characteristics of the input signals, e.g., the pulse strength and the inter-pulse intervals. We argue that the trade-off between reliability of signal transmission and energy cost may influence the size of neural systems if energy use is constrained. Comment: 22 pages, 10 figures
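
    A small sketch of the stochastic channel gating that such simulations rest on (a generic two-state open/closed channel with assumed rates, not the full Hodgkin-Huxley Na+/K+ kinetics): the open fraction of N independent channels fluctuates around its steady state with a relative amplitude that shrinks roughly as 1/sqrt(N), which is why the channel count governs spiking reliability and, together with the metabolic cost of more channels, shapes the energy efficiency.

        # Two-state Markov channels: closed->open at rate alpha, open->closed at
        # rate beta; rates, dt and channel counts are assumptions for illustration.
        import numpy as np

        def open_fraction_trace(n_channels, alpha=0.5, beta=0.5, dt=0.01,
                                steps=20_000, rng=np.random.default_rng(3)):
            n_open = n_channels // 2
            trace = np.empty(steps)
            for t in range(steps):
                opening = rng.binomial(n_channels - n_open, alpha * dt)
                closing = rng.binomial(n_open, beta * dt)
                n_open += opening - closing
                trace[t] = n_open / n_channels
            return trace

        # Fluctuations around the steady-state open fraction shrink as N grows.
        for n in (10, 100, 1_000, 10_000):
            frac = open_fraction_trace(n)
            print(n, round(frac.mean(), 3), round(frac.std(), 4))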

    An Adaptive Locally Connected Neuron Model: Focusing Neuron

    This paper presents a new artificial neuron model capable of learning its receptive field in the topological domain of its inputs. The model provides adaptive and differentiable local connectivity (plasticity) applicable to any domain. It requires no tool other than the backpropagation algorithm to learn its parameters, which control the receptive field locations and apertures. This research explores whether this ability makes the neuron focus on informative inputs and whether it yields any advantage over fully connected neurons. The experiments include tests of focusing-neuron networks of one or two hidden layers on synthetic and well-known image recognition data sets. The results demonstrate that focusing neurons can move their receptive fields towards more informative inputs. In simple two-hidden-layer networks, the focusing layers outperformed the dense layers in the classification of the 2D spatial data sets. Moreover, the focusing networks performed better than the dense networks even when 70% of the weights were pruned. Tests on convolutional networks revealed that using focusing layers instead of dense layers for the classification of convolutional features may work better on some data sets. Comment: 45 pages; a national patent filed with the Turkish Patent Office, No: 2017/17601, Date: 09.11.2017
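
    One plausible reading of the focusing mechanism, sketched under the assumption of a Gaussian envelope (the paper's exact parameterization may differ): the ordinary connection weights are modulated by an envelope over the input index axis whose centre and aperture are themselves trainable parameters, so backpropagation can shift and resize the receptive field just as it updates the weights.

        # Forward pass of a hypothetical "focusing" neuron; the Gaussian envelope
        # and all example values are assumptions, not the paper's formulation.
        import numpy as np

        def focusing_neuron(x, w, mu, sigma):
            """y = sum_i envelope_i * w_i * x_i over normalized input positions."""
            positions = np.linspace(0.0, 1.0, len(x))
            envelope = np.exp(-0.5 * ((positions - mu) / sigma) ** 2)
            return float(np.dot(envelope * w, x))

        rng = np.random.default_rng(4)
        x = rng.standard_normal(784)              # e.g. a flattened 28x28 image
        w = rng.standard_normal(784) * 0.01
        print(focusing_neuron(x, w, mu=0.3, sigma=0.05))  # attends near index ~235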