
    Leader neurons in leaky integrate and fire neural network simulations

    Several experimental studies show the existence of leader neurons in population bursts of 2D living neural networks. A leader neuron is, basically, a neuron which fires at the beginning of a burst (respectively, network spike) more often than expected from its overall mean activity. This means that leader neurons have some burst-triggering power beyond a simple statistical effect. In this study, we characterize these leader neuron properties. This naturally leads us to simulate 2D neural networks. To build our simulations, we choose the leaky integrate-and-fire (lIF) neuron model. Our lIF model exhibits stable leader neurons in the population bursts that we simulate. These leader neurons are excitatory neurons and have a low membrane potential firing threshold. Beyond these two properties, the conditions required for a neuron to be a leader are difficult to identify and seem to depend on several parameters of the simulations themselves. However, a detailed linear analysis shows a trend in the properties required for a neuron to be a leader. Our main finding is that a leader neuron sends signals to many excitatory neurons and to only a few inhibitory neurons, while receiving only a few signals from other excitatory neurons. Our linear analysis identifies five essential properties of leader neurons, together with their relative importance. This means that, for a given neural network with a fixed mean number of connections per neuron, our analysis gives us a way of predicting which neurons can be good leader neurons and which cannot. Our prediction formula gives a good statistical prediction even if, for a single given neuron, the success rate does not reach one hundred percent. Comment: 25 pages, 13 figures, 2 tables
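
    A minimal sketch of the kind of leaky integrate-and-fire network update described above, written in Python/NumPy; the parameter names, values, and connectivity are illustrative assumptions, not the authors' simulation code. Leader candidates would correspond to excitatory units with a low firing threshold v_th.

```python
# Illustrative LIF network sketch; all parameters and connectivity are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N = 100                              # number of neurons
dt = 0.1                             # time step (ms)
tau_m = 20.0                         # membrane time constant (ms)
v_rest, v_reset = 0.0, 0.0           # resting / reset potential
v_th = rng.normal(1.0, 0.1, N)       # per-neuron firing threshold (leaders: low v_th)
W = rng.normal(0.0, 0.05, (N, N))    # synaptic weights, W[i, j]: j -> i
np.fill_diagonal(W, 0.0)

v = np.full(N, v_rest)
spikes_log = []

for step in range(10000):
    ext = rng.poisson(0.02, N) * 0.5         # external Poissonian drive
    fired = v >= v_th                        # neurons crossing threshold
    spikes_log.append(np.where(fired)[0])
    rec = W @ fired.astype(float)            # recurrent input from spikes
    v = v + dt * (-(v - v_rest) / tau_m) + rec + ext   # Euler update of the leaky membrane
    v[fired] = v_reset                       # reset neurons that fired

counts = np.bincount(np.concatenate(spikes_log), minlength=N)
print("most active neuron:", counts.argmax(), "spikes:", counts.max())
```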

    I, NEURON: the neuron as the collective

    Purpose – In the last half-century, individual sensory neurons have been bestowed with characteristics of the whole human being, such as behavior and its oft-presumed precursor, consciousness. This anthropomorphization is pervasive in the literature. It is also absurd, given what we know about neurons, and it needs to be abolished. This study aims to first understand how it happened, and hence why it persists. Design/methodology/approach – The peer-reviewed sensory-neurophysiology literature extends to hundreds (perhaps thousands) of papers. Here, more than 90 mainstream papers were scrutinized. Findings – Anthropomorphization arose because single neurons were cast as “observers” who “identify”, “categorize”, “recognize”, “distinguish” or “discriminate” the stimuli, using math-based algorithms that reduce (“decode”) the stimulus-evoked spike trains to the particular stimuli inferred to elicit them. Without “decoding”, there is supposedly no perception. However, “decoding” is both unnecessary and unconfirmed. The neuronal “observer” in fact consists of the laboratory staff and the greater society that supports them. In anthropomorphization, the neuron becomes the collective. Research limitations/implications – Anthropomorphization underlies the widespread application to neurons of Information Theory and Signal Detection Theory, making both approaches incorrect. Practical implications – A great deal of time, money and effort has been wasted on anthropomorphic Reductionist approaches to understanding perception and consciousness. Those resources should be diverted into more-fruitful approaches. Originality/value – A long-overdue scrutiny of the sensory-neuroscience literature reveals that anthropomorphization, a form of Reductionism that involves the presumption of single-neuron consciousness, has run amok in neuroscience. Consciousness is more likely to be an emergent property of the brain.

    Output Stream of Binding Neuron with Feedback

    The binding neuron model is inspired by numerical simulation of a Hodgkin-Huxley-type point neuron, as well as by the leaky integrate-and-fire model. In the binding neuron, the trace of an input is remembered for a fixed period of time, after which it disappears completely. This is in contrast with the above two models, where the postsynaptic potentials decay exponentially and can be forgotten only after triggering. The finiteness of memory in the binding neuron allows one to construct fast recurrent networks for computer modeling. Recently, this finiteness was utilized for an exact mathematical description of the output stochastic process when the binding neuron is driven with a Poissonian input stream. In this paper, the simplest networking is considered for the binding neuron. Namely, every output spike of the single neuron is immediately fed back into its input. For this construction, externally fed with a Poissonian stream, the output stream is characterized in terms of the interspike interval probability density distribution if the binding neuron has threshold 2. For higher thresholds, the distribution is calculated numerically. The distributions are compared with those found for the binding neuron without feedback, and for the leaky integrator. Sample distributions for the leaky integrator with feedback are calculated numerically as well. It is concluded that even the simplest networking can radically alter spiking statistics. Information condensation at the level of a single neuron is discussed. Comment: Version #1: 4 pages, 5 figures, manuscript submitted to Biological Cybernetics. Version #2 (this version): added 3 pages of new text with additional analytical and numerical calculations, 2 more figures, 11 more references, added Discussion section
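
    The feedback construction above is easy to simulate directly. Below is a rough event-driven sketch in Python for threshold 2 with a Poissonian input stream; the variable names and the handling of the fed-back spike are illustrative assumptions rather than the paper's exact formulation.

```python
# Event-driven sketch of a binding neuron with instantaneous feedback; illustrative only.
import numpy as np

rng = np.random.default_rng(1)

rate = 1.0        # Poissonian input intensity (events per unit time)
tau = 1.0         # storage time of each input impulse
threshold = 2     # number of stored impulses needed to fire
T = 100000.0      # simulated time

t = 0.0
stored = []       # arrival times of impulses still held in memory
last_spike = None
isis = []         # observed interspike intervals of the output stream

while t < T:
    t += rng.exponential(1.0 / rate)             # next external impulse
    stored = [s for s in stored if t - s < tau]  # forget impulses older than tau
    stored.append(t)
    if len(stored) >= threshold:                 # neuron fires
        if last_spike is not None:
            isis.append(t - last_spike)
        last_spike = t
        stored = [t]                             # memory cleared; output spike fed back as input

print("mean output ISI:", np.mean(isis))
```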

    A compact aVLSI conductance-based silicon neuron

    We present an analogue Very Large Scale Integration (aVLSI) implementation that uses first-order lowpass filters to implement a conductance-based silicon neuron for high-speed neuromorphic systems. The aVLSI neuron consists of a soma (cell body) and a single synapse, which is capable of linearly summing both the excitatory and inhibitory postsynaptic potentials (EPSP and IPSP) generated by spikes arriving from different sources. Rather than biasing the silicon neuron with different parameters for different spiking patterns, as is typically done, we provide digital control signals, generated by an FPGA, to the silicon neuron to obtain different spiking behaviours. The proposed neuron occupies only ~26.5 µm² in the IBM 130 nm process and thus can be integrated at very high density. Circuit simulations show that this neuron can emulate different spiking behaviours observed in biological neurons. Comment: BioCAS-201
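
    As a behavioural (software) analogue of the synapse described above, the sketch below sums the outputs of two discrete-time first-order lowpass filters, one for the EPSP and one for the IPSP, into a single soma input. The time constants, weights, and spike times are illustrative assumptions; this is not the aVLSI circuit itself.

```python
# Discrete-time behavioural model of a lowpass-filter synapse; parameters are illustrative.
import numpy as np

dt = 1e-4                     # time step (s)
tau_e, tau_i = 5e-3, 10e-3    # EPSP / IPSP filter time constants (s)
w_e, w_i = 1.0, -0.8          # excitatory / inhibitory synaptic weights

steps = 2000
exc_spikes = np.zeros(steps); exc_spikes[[100, 400, 450]] = 1.0   # excitatory spike times
inh_spikes = np.zeros(steps); inh_spikes[[600]] = 1.0             # inhibitory spike time

epsp = ipsp = 0.0
soma_input = []

for k in range(steps):
    # first-order lowpass filter update for each postsynaptic potential
    epsp = epsp + dt * (-epsp / tau_e) + w_e * exc_spikes[k]
    ipsp = ipsp + dt * (-ipsp / tau_i) + w_i * inh_spikes[k]
    soma_input.append(epsp + ipsp)    # linear summation into the soma
```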

    An Adaptive Locally Connected Neuron Model: Focusing Neuron

    This paper presents a new artificial neuron model capable of learning its receptive field in the topological domain of its inputs. The model provides adaptive and differentiable local connectivity (plasticity) applicable to any domain. It requires no tool other than the backpropagation algorithm to learn its parameters, which control the receptive field locations and apertures. This research explores whether this ability makes the neuron focus on informative inputs and yields any advantage over fully connected neurons. The experiments include tests of focusing-neuron networks of one or two hidden layers on synthetic and well-known image recognition data sets. The results demonstrated that focusing neurons can move their receptive fields towards more informative inputs. In simple two-hidden-layer networks, the focusing layers outperformed the dense layers in the classification of the 2D spatial data sets. Moreover, the focusing networks performed better than the dense networks even when 70% of the weights were pruned. The tests on convolutional networks revealed that using focusing layers instead of dense layers for the classification of convolutional features may work better on some data sets. Comment: 45 pages, a national patent filed, submitted to Turkish Patent Office, No: -2017/17601, Date: 09.11.201
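
    A loose, conceptual sketch of such a "focusing" layer in Python/PyTorch is shown below: each output unit gates an ordinary weight vector with a differentiable Gaussian window whose centre and aperture are learnable, so backpropagation alone can move and resize the receptive field. The class name, the 1-D positional encoding, and the Gaussian window form are assumptions made for illustration, not the paper's exact model.

```python
# Conceptual "focusing" layer sketch; not the paper's exact formulation.
import torch
import torch.nn as nn

class FocusingLayer(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # receptive-field centre (mu) and aperture (sigma), both learnable
        self.mu = nn.Parameter(torch.rand(out_features))
        self.log_sigma = nn.Parameter(torch.full((out_features,), -1.0))
        # fixed 1-D positions of the inputs in [0, 1]
        self.register_buffer("pos", torch.linspace(0.0, 1.0, in_features))

    def forward(self, x):
        sigma = self.log_sigma.exp().unsqueeze(1)                              # (out, 1)
        window = torch.exp(-0.5 * ((self.pos - self.mu.unsqueeze(1)) / sigma) ** 2)
        return x @ (self.weight * window).t() + self.bias                     # focused linear map

# usage: trained with plain backpropagation, like any nn.Module
layer = FocusingLayer(784, 128)
y = layer(torch.randn(32, 784))
```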

    Hopf Bifurcation and Chaos in Tabu Learning Neuron Models

    In this paper, we consider the nonlinear dynamical behaviors of some tabu learning neuron models. We first consider a tabu learning single-neuron model. By choosing the memory decay rate as a bifurcation parameter, we prove that a Hopf bifurcation occurs in the neuron. The stability of the bifurcating periodic solutions and the direction of the Hopf bifurcation are determined by applying normal form theory. We give a numerical example to verify the theoretical analysis. Then, we demonstrate chaotic behavior in such a neuron with a sinusoidal external input, via computer simulations. Finally, we study the chaotic behaviors of tabu learning two-neuron models, with linear and quadratic proximity functions respectively. Comment: 14 pages, 13 figures, Accepted by International Journal of Bifurcation and Chaos
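
    For readers who want to reproduce the qualitative picture numerically, the sketch below integrates one commonly quoted form of a tabu learning single-neuron model, dx/dt = -x + a f(x) - J and dJ/dt = -alpha J + beta f(x) with f = tanh, where alpha plays the role of the memory decay rate. The exact equations and parameter values used in the paper may differ; everything here is an illustrative assumption.

```python
# Euler integration of an assumed tabu learning single-neuron model; illustrative only.
import numpy as np

def f(x):
    return np.tanh(x)

a, alpha, beta = 1.8, 0.5, 1.0   # illustrative parameters; alpha = memory decay rate
dt, steps = 0.01, 50000

x, J = 0.1, 0.0
trace = np.empty((steps, 2))
for k in range(steps):
    dx = -x + a * f(x) - J           # neuron state
    dJ = -alpha * J + beta * f(x)    # tabu (memory) variable
    x, J = x + dt * dx, J + dt * dJ  # Euler step
    trace[k] = (x, J)

# Sweeping alpha and inspecting the (x, J) trajectory is the usual way to locate
# the transition from a stable equilibrium to a periodic orbit (Hopf bifurcation).
```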

    Defective axonal transport in motor neuron disease

    Several recent studies have highlighted the role of axonal transport in the pathogenesis of motor neuron diseases. Mutations in genes that control microtubule regulation and dynamics have been shown to cause motor neuron degeneration in mice and in a form of human motor neuron disease. In addition, mutations in the molecular motors dynein and kinesins, and in several proteins associated with the membranes of intracellular vesicles that undergo transport, cause motor neuron degeneration in humans and mice. Paradoxically, evidence from studies on the legs at odd angles (Loa) mouse and a transgenic mouse model of human motor neuron disease suggests that partial limitation of dynein function may in fact lead to improved axonal transport in the transgenic mouse, delaying disease onset and increasing life span.