    Dynamic threshold neural P systems

    Pulse coupled neural networks (PCNNs) are models abstracting the synchronization behavior observed experimentally in the cortical neurons of the visual cortex of a cat’s brain, and the intersecting cortical model is a simplified version of the PCNN model. Membrane computing (MC) is a computation paradigm abstracted from the structure and functioning of biological cells that provides models working in cell-like, neural-like and tissue-like modes. Inspired by the intersecting cortical model, this paper proposes a new kind of neural-like P system, called dynamic threshold neural P systems (DTNP systems). A DTNP system can be represented as a directed graph whose nodes are dynamic threshold neurons and whose arcs denote the synaptic connections between these neurons. DTNP systems provide a kind of parallel computing model: each neuron has two data units (a feeding input unit and a dynamic threshold unit), and the neuron firing mechanism is implemented by a dynamic threshold mechanism. The Turing universality of DTNP systems as number accepting/generating devices is established. In addition, a universal DTNP system having 109 neurons for computing functions is constructed.
    Funding: National Natural Science Foundation of China (No. 61472328); Research Fund of Sichuan Science and Technology Project (No. 2018JY0083); Chunhui Project Foundation of the Education Department of China (Nos. Z2016143 and Z2016148); Research Foundation of the Education Department of Sichuan Province (No. 17TD003).
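    As a rough illustration of the firing mechanism the abstract describes, the Python sketch below iterates a single neuron's two data units, a feeding input unit and a dynamic threshold unit, with firing gated by the threshold. The update rules, decay constants, and names are our assumptions for illustration only; the abstract does not give the paper's formal DTNP semantics.

    # Minimal sketch of a two-unit dynamic-threshold neuron in the spirit of
    # the intersecting cortical model the abstract cites as inspiration.
    # All update rules and constants are illustrative assumptions.
    def simulate_dtnp_neuron(stimulus, f_decay=0.9, e_decay=0.8, e_jump=20.0):
        F, E = 0.0, 1.0                  # feeding input unit, dynamic threshold unit
        spikes = []
        for s in stimulus:
            F = f_decay * F + s          # feeding unit integrates the stimulus
            fired = F > E                # firing is gated by the dynamic threshold
            E = e_decay * E + (e_jump if fired else 0.0)  # threshold jumps on firing
            spikes.append(int(fired))
        return spikes

    print(simulate_dtnp_neuron([5.0] * 20))  # a self-paced spike train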

    Redistribution of Synaptic Efficacy Supports Stable Pattern Learning in Neural Networks

    Markram and Tsodyks, by showing that the elevated synaptic efficacy observed with single-pulse LTP measurements disappears with higher-frequency test pulses, have critically challenged the conventional assumption that LTP reflects a general gain increase. Redistribution of synaptic efficacy (RSE) is here seen as the local realization of a global design principle in a neural network for pattern coding. As is typical of many coding systems, the network learns by dynamically balancing a pattern-independent increase in strength against a pattern-specific increase in selectivity. This computation is implemented by a monotonic long-term memory process which has a bidirectional effect on the postsynaptic potential via functionally complementary signal components. These frequency-dependent and frequency-independent components realize the balance between specific and nonspecific functions at each synapse. This synaptic balance suggests a functional purpose for RSE: by dynamically bounding total memory change, it implements a distributed coding scheme that is stable with fast as well as slow learning. Although RSE would seem to make it impossible to code high-frequency input features, a network preprocessing step called complement coding symmetrizes the input representation, which allows the system to encode high-frequency as well as low-frequency features in an input pattern. A possible physical model interprets the two synaptic signal components in terms of ligand-gated and voltage-gated receptors, where learning converts channels from one type to another.
    Funding: Office of Naval Research and the Defense Advanced Research Projects Agency (N00014-95-1-0409, N00014-1-95-0657).
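    To make the complement-coding step concrete, here is a minimal Python sketch under the assumption that input features are normalized to [0, 1]; the function name is ours, not the paper's. Pairing each feature with its complement keeps total input activity constant, so a high-frequency feature can equivalently be read as the absence of its low-frequency complement.

    # Minimal sketch of the complement-coding preprocessing step described
    # above: each feature a is represented by the pair (a, 1 - a). Assumes
    # features are normalized to [0, 1].
    def complement_code(pattern):
        return [x for a in pattern for x in (a, 1.0 - a)]

    print(complement_code([0.9, 0.1, 0.5]))  # -> [0.9, 0.1, 0.1, 0.9, 0.5, 0.5]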

    Adaptive Neural Coding Dependent on the Time-Varying Statistics of the Somatic Input Current

    It is generally assumed that nerve cells optimize their performance to reflect the statistics of their input. Electronic circuit analogs of neurons require similar methods of self-optimization for stable and autonomous operation. Here we describe and demonstrate a biologically plausible adaptive algorithm that enables a neuron to adapt the current threshold and the slope (or gain) of its current-frequency relationship to match the mean (or dc offset) and variance (or dynamic range or contrast) of the time-varying somatic input current. The adaptation algorithm estimates the somatic current signal from the spike train by way of the intracellular somatic calcium concentration, thereby continuously adjusting the neuron's firing dynamics. This principle is shown to work in an analog VLSI-designed silicon neuron.
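    The Python sketch below illustrates the adaptation principle in simplified form: the threshold slides toward the running mean of the input and the gain scales inversely with its running spread. Reading the input current directly, rather than recovering it from the spike train via a calcium estimate as in the paper, is a simplifying assumption, as are the rectified-linear f-I curve and the learning rate.

    # Minimal sketch of threshold/gain adaptation to input statistics.
    def adapt_fi_curve(currents, eta=0.01):
        threshold, spread, rates = 0.0, 1.0, []
        for I in currents:
            threshold += eta * (I - threshold)             # track the mean (dc offset)
            spread += eta * (abs(I - threshold) - spread)  # track the spread (contrast)
            gain = 1.0 / max(spread, 1e-6)                 # gain matches dynamic range
            rates.append(max(0.0, gain * (I - threshold))) # rectified-linear f-I curve
        return rates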

    Pseudo-labels for Supervised Learning on Dynamic Vision Sensor Data, Applied to Object Detection under Ego-motion

    In recent years, dynamic vision sensors (DVS), also known as event-based cameras or neuromorphic sensors, have seen increased use due to various advantages over conventional frame-based cameras. Operating on principles inspired by the retina, their high temporal resolution overcomes motion blur, their high dynamic range copes with extreme illumination conditions, and their low power consumption makes them ideal for embedded systems on platforms such as drones and self-driving cars. However, event-based data sets are scarce, and labels are even rarer for tasks such as object detection. We transferred discriminative knowledge from a state-of-the-art frame-based convolutional neural network (CNN) to the event-based modality via intermediate pseudo-labels, which are used as targets for supervised learning. We show, for the first time, event-based car detection under ego-motion in a real environment at 100 frames per second with a test average precision of 40.3% relative to our annotated ground truth. The event-based car detector handles motion blur and poor illumination conditions despite not being explicitly trained to do so, and even complements frame-based CNN detectors, suggesting that it has learnt generalized visual representations.
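    The Python sketch below shows the shape of such a pseudo-labelling pipeline: a pretrained frame-based detector labels time-synchronized frames, and its confident detections become supervised targets for a detector trained on the event stream. frame_detector, events_to_tensor, and the 0.5 score threshold are hypothetical placeholders; the abstract does not give the paper's actual architectures or event representation.

    # Minimal sketch of building a pseudo-labelled training set for the
    # event-based modality from a frame-based detector's outputs.
    def build_pseudo_labelled_set(frames, event_windows, frame_detector,
                                  events_to_tensor, score_thresh=0.5):
        dataset = []
        for frame, events in zip(frames, event_windows):
            # keep only confident frame-based detections as pseudo-labels
            boxes = [b for b in frame_detector(frame) if b.score > score_thresh]
            dataset.append((events_to_tensor(events), boxes))  # (input, targets)
        return dataset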

    Toward a dynamical systems analysis of neuromodulation

    This work presents some first steps toward a more thorough understanding of the control systems employed in evolutionary robotics. In order to choose an appropriate architecture or to construct an effective novel control system, we need insights into what makes control systems successful, robust, evolvable, etc. Here we present analysis intended to shed light on this type of question as it applies to a novel class of artificial neural networks that include a neuromodulatory mechanism: GasNets. We begin by instantiating a particular GasNet subcircuit responsible for tuneable pattern generation and thought to underpin the attractive property of “temporal adaptivity”. Rather than work within the GasNet formalism, we develop an extension of the well-known FitzHugh-Nagumo equations. The continuous nature of our model allows us to conduct a thorough dynamical systems analysis and to draw parallels between this subcircuit and beating/bursting phenomena reported in the neuroscience literature. We then proceed to explore the effects of different types of parameter modulation on the system dynamics. We conclude that while there are key differences between the gain modulation used in the GasNet and alternative schemes (including threshold modulation of more traditional synaptic input), both approaches are able to produce tuneable pattern generation. While it appears, at least in this study, that the GasNet’s gain modulation may not be crucial to pattern generation, we go on to suggest some possible advantages it could confer.
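    For readers unfamiliar with the base model, the Python sketch below integrates the standard FitzHugh-Nagumo equations with a modulated input gain, one of the parameter-modulation schemes the analysis compares. Stepping the gain mid-run moves the system between quiescent and oscillatory regimes, i.e. tuneable pattern generation. All parameter values are illustrative assumptions, not those of the paper's extended model.

    # Minimal sketch: FitzHugh-Nagumo dynamics with a step change in input gain.
    def fhn_with_gain(steps=4000, dt=0.05, I=0.6, a=0.7, b=0.8, eps=0.08):
        v, w, trace = -1.2, -0.6, []
        for t in range(steps):
            g = 0.2 if t < steps // 2 else 1.0   # gain modulation: step at mid-run
            dv = v - v**3 / 3.0 - w + g * I      # fast, voltage-like variable
            dw = eps * (v + a - b * w)           # slow recovery variable
            v, w = v + dt * dv, w + dt * dw
            trace.append(v)
        return trace                             # oscillations begin after the step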