
    A Fokker-Planck formalism for diffusion with finite increments and absorbing boundaries

    Gaussian white noise is frequently used to model fluctuations in physical systems. In Fokker-Planck theory, this leads to a vanishing probability density near the absorbing boundary of threshold models. Here we derive the boundary condition for the stationary density of a first-order stochastic differential equation driven by additive finite-grained Poisson noise, and show that the response properties of threshold units are qualitatively altered. Applied to the integrate-and-fire neuron model, the response turns out to be instantaneous rather than exhibiting low-pass characteristics, highly non-linear, and asymmetric for excitation and inhibition. The novel mechanism carries over to the network level and is a generic property of pulse-coupled systems of threshold units.
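    A minimal Monte Carlo sketch of the boundary effect described above, with assumed parameter values (not taken from the paper): a leaky integrator driven by finite Poisson increments retains probability mass just below the absorbing threshold, while the diffusion (Gaussian) approximation with matched mean and variance suppresses the density there.

```python
import numpy as np

rng = np.random.default_rng(0)
tau, theta = 10.0, 1.0        # membrane time constant (ms) and threshold; assumed values
w, rate = 0.02, 5.0           # Poisson jump size and rate (1/ms); assumed values
dt, T = 0.01, 5000.0          # integration step (ms) and simulated time

def simulate(poisson=True):
    """Integrate dV = -V/tau dt + noise, resetting V to 0 at the absorbing threshold."""
    mu, sigma = w * rate, w * np.sqrt(rate)   # diffusion approximation: matched moments
    V, samples = 0.0, []
    for _ in range(int(T / dt)):
        if poisson:
            V += -V / tau * dt + w * rng.poisson(rate * dt)     # finite increments
        else:
            V += (-V / tau + mu) * dt + sigma * np.sqrt(dt) * rng.normal()
        if V >= theta:        # absorbing boundary with reset
            V = 0.0
        samples.append(V)
    return np.asarray(samples)

for label, flag in [("Poisson ", True), ("Gaussian", False)]:
    s = simulate(flag)
    mass = np.mean(s > theta - 0.05)          # probability mass just below threshold
    print(f"{label}: fraction of time within 0.05 of threshold = {mass:.4f}")
```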

    Deep feature learning of in-cylinder flow fields to analyze cycle-to-cycle variations in an SI engine

    Machine learning (ML) models based on a large data set of in-cylinder flow fields of an IC engine, obtained by high-speed particle image velocimetry, allow the identification of relevant flow structures underlying cycle-to-cycle variations of engine performance. To this end, deep feature learning is employed to train ML models that predict cycles of high and low maximum in-cylinder pressure. Deep convolutional autoencoders are trained in a self-supervised manner to encode flow-field features in a low-dimensional latent space. Without the limitations of manual feature engineering, ML models based on these learned features are able to classify high-energy cycles from the flow field during late intake and the compression stroke, as early as 290 crank angle degrees before top dead center (-290° CA), with a mean accuracy above chance level. The prediction accuracy from -290° CA to -10° CA is comparable to baseline ML approaches utilizing an extensive set of engineered features. Relevant flow structures in the compression stroke are revealed by feature analysis of the ML models and are interpreted using conditionally averaged flow quantities. This analysis unveils the importance of the horizontal velocity component of in-cylinder flows in predicting engine performance. Combining deep learning and conventional flow analysis techniques promises to be a powerful tool for ultimately revealing high-level flow features relevant to the prediction of cycle-to-cycle variations and further engine optimization.
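    A minimal PyTorch sketch of the self-supervised stage described above, with assumed field dimensions and latent size (the paper's actual architecture is not specified here): a convolutional autoencoder compresses two-component (u, v) velocity fields into a low-dimensional latent vector, on which a downstream classifier for high- versus low-pressure cycles could then be trained instead of on hand-engineered features.

```python
import torch
import torch.nn as nn

class FlowAutoencoder(nn.Module):
    """Compress 2-channel (u, v) velocity fields into a low-dimensional latent vector."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 2, 4, stride=2, padding=1),     # back to 2x64x64
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = FlowAutoencoder()
fields = torch.randn(8, 2, 64, 64)                  # batch of PIV snapshots (placeholder data)
recon, latent = model(fields)
loss = nn.functional.mse_loss(recon, fields)        # self-supervised reconstruction objective
print(recon.shape, latent.shape, loss.item())
```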

    Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks

    The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fninf.2017.00007/full#supplementary-material

    Modeling and simulating the neural structures which make up our central nervous system is instrumental for deciphering the underlying neural computations. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from the neural to the behavioral level. This paper focuses on overcoming the simulation problems (accuracy and performance) that derive from using higher levels of mathematical complexity at the neural level. This study proposes different techniques for simulating neural models of incrementally higher mathematical complexity: the leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamics are evaluated: the event-driven and the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation. We propose two modifications for the event-driven family: a look-up-table recombination to better cope with the incremental neural complexity, together with improved handling of synchronous input activity. For the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural-model dynamics on CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity.

    This study was supported by the European Union NR (658479-Spike Control), the Spanish National Grant NEUROPACT (TIN2013-47069-P), and a Spanish National Grant PhD scholarship (AP2012-0906). We gratefully acknowledge the support of NVIDIA Corporation with the donation of two Titan GPUs for current EDLUT development.
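    The bi-fixed-step idea can be sketched in a few lines of Python (schematic only, with assumed thresholds and dynamics; EDLUT's actual implementation differs): a coarse fixed step is used while the dynamics are smooth, and the same interval is re-integrated with finer fixed substeps when the model enters a stiff regime, here taken to be the approach to spike threshold.

```python
def dV(v, I, tau=10.0):
    """Leaky integrate-and-fire membrane derivative (illustrative units)."""
    return (-v + I) / tau

def bi_fixed_step(v, I, dt_coarse=1.0, dt_fine=0.1, v_stiff=-55.0):
    """Advance one coarse interval; switch to fine fixed substeps near threshold."""
    if v < v_stiff:                          # smooth regime: single coarse Euler step
        return v + dt_coarse * dV(v, I)
    for _ in range(int(round(dt_coarse / dt_fine))):
        v += dt_fine * dV(v, I)              # stiff regime: finer fixed substeps
    return v

v = -70.0                                    # initial membrane potential (mV, assumed)
for _ in range(50):
    v = bi_fixed_step(v, I=-50.0)            # constant drive toward -50 mV (assumed)
print(f"V after 50 coarse steps: {v:.2f} mV")
```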

    A reafferent and feed-forward model of song syntax generation in the Bengalese finch

    Adult Bengalese finches generate a variable song that obeys a distinct and individual syntax. The syntax is gradually lost over a period of days after deafening and is recovered when hearing is restored. We present a spiking neuronal network model of song syntax generation and its loss, based on the assumption that the syntax is stored in reafferent connections from the auditory to the motor control area. Propagating synfire activity in HVC codes for individual syllables of the song, and priming signals from the auditory network reduce the competition between syllables to allow only those transitions that are permitted by the syntax. Both the imprinting of song syntax within HVC and the interaction of the reafferent signal with an efference copy of the motor command are sufficient to explain the gradual loss of syntax in the absence of auditory feedback. The model is the first to reproduce experimental findings on the influence of altered auditory feedback on song syntax generation, and it predicts song- and species-specific low-frequency components in the LFP. This study illustrates how sequential compositionality following a defined syntax can be realized in networks of spiking neurons.
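    A toy Python sketch of the priming mechanism (illustrative only, not the spiking model itself, with an assumed four-syllable syntax): auditory priming biases a winner-take-all competition so that only syntax-permitted transitions win, and removing the priming, as after deafening, lets the competition run unconstrained.

```python
import numpy as np

rng = np.random.default_rng(1)
syllables = ["a", "b", "c", "d"]
syntax = np.array([[0, 1, 1, 0],   # row i lists the transitions permitted after syllable i
                   [0, 0, 1, 0],
                   [1, 0, 0, 1],
                   [1, 0, 0, 0]], dtype=float)

def sing(n=20, priming=1.0):
    seq, cur = [], 0
    for _ in range(n):
        drive = rng.random(4)                 # baseline competition between syllables
        drive += 5.0 * priming * syntax[cur]  # auditory priming favours permitted successors
        cur = int(np.argmax(drive))           # winner-take-all transition
        seq.append(syllables[cur])
    return "".join(seq)

print("hearing intact :", sing(priming=1.0))
print("after deafening:", sing(priming=0.0))
```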

    Motor cognition–motor semantics: Action perception theory of cognition and communication

    A new perspective on cognition views cortical cell assemblies that link knowledge about actions and perceptions not only as the vehicles of integrated action and perception processing but also as a brain basis for a wide range of higher cortical functions, including attention, meaning and concepts, sequences, goals and intentions, and even communicative social interaction. This article explains the mechanisms relevant to action perception theory, points to concrete neuronal circuits in the brain along with artificial neural network simulations, and summarizes recent brain imaging and other experimental data documenting the role of action perception circuits in cognition, language, and communication.

    An Imperfect Dopaminergic Error Signal Can Drive Temporal-Difference Learning

    An open problem in the field of computational neuroscience is how to link synaptic plasticity to system-level learning. A promising framework in this context is temporal-difference (TD) learning. Experimental evidence supporting the hypothesis that the mammalian brain performs temporal-difference learning includes the resemblance of the phasic activity of midbrain dopaminergic neurons to the TD error and the discovery that cortico-striatal synaptic plasticity is modulated by dopamine. However, as the phasic dopaminergic signal does not reproduce all the properties of the theoretical TD error, it is unclear whether it is capable of driving behavior adaptation in complex tasks. Here, we present a spiking temporal-difference learning model based on the actor-critic architecture. The model dynamically generates a dopaminergic signal with realistic firing rates and exploits this signal to modulate the plasticity of synapses as a third factor. The predictions of our proposed plasticity dynamics are in good agreement with experimental results with respect to dopamine and pre- and postsynaptic activity. An analytical mapping from the parameters of our proposed plasticity dynamics to those of the classical discrete-time TD algorithm reveals that the biological constraints of the dopaminergic signal entail a modified TD algorithm with self-adapting learning parameters and an adapting offset. We show that the neuronal network is able to learn a task with sparse positive rewards as fast as the corresponding classical discrete-time TD algorithm. However, the performance of the neuronal network is impaired with respect to the traditional algorithm on a task with both positive and negative rewards, and breaks down entirely on a task with purely negative rewards. Our model demonstrates that the asymmetry of a realistic dopaminergic signal enables TD learning when learning is driven by positive rewards, but not when it is driven by negative rewards.
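    The asymmetry argument can be illustrated with a discrete-time TD(0) sketch (assumed toy task and parameters; the paper's spiking actor-critic is not reproduced here): rectifying the error signal from below, mimicking the limited ability of dopaminergic neurons to signal negative errors by pausing, leaves learning from positive rewards largely intact but drastically slows learning from negative rewards.

```python
import numpy as np

def td_chain(reward, floor, sweeps=50, alpha=0.1, gamma=0.9, n=5):
    """TD(0) on a deterministic chain of n states; reward arrives on the final transition."""
    V = np.zeros(n + 1)                      # V[n] is the terminal state (fixed at 0)
    for _ in range(sweeps):
        for s in range(n):
            r = reward if s == n - 1 else 0.0
            delta = r + gamma * V[s + 1] - V[s]
            delta = max(delta, floor)        # asymmetric error: negative part clipped
            V[s] += alpha * delta
    return np.round(V[:n], 2)

print("positive reward, symmetric error:", td_chain(+1.0, floor=-np.inf))
print("positive reward, clipped error  :", td_chain(+1.0, floor=-0.1))
print("negative reward, clipped error  :", td_chain(-1.0, floor=-0.1))
```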