    The quantitative single-neuron modeling competition

    As large-scale, detailed network modeling projects flourish in computational neuroscience, it is increasingly important to design single-neuron models that not only capture qualitative features of real neurons but are also quantitatively accurate in silico representations of them. Recent years have seen substantial effort put into algorithms for the systematic evaluation and optimization of neuron models against electrophysiological data. It is, however, difficult to compare these methods because of the lack of appropriate benchmark tests. Here, we describe one effort to provide the community with a standardized set of tests to quantify the performance of single-neuron models. Our effort takes the form of a yearly challenge, similar to those that have long been run in the machine learning community. This paper gives an account of the first two challenges, which took place in 2007 and 2008, and discusses future directions. The results of the competition suggest that the best performance on data obtained from single- or double-electrode current or conductance injection is achieved by models that combine features of standard leaky integrate-and-fire models with a second variable reflecting adaptation, refractoriness, or a dynamic threshold.
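    The winning model class is simple enough to sketch in a few lines. The following is a generic adaptive leaky integrate-and-fire neuron with a second adaptation variable; it is a minimal illustration with made-up parameter values, not a reconstruction of any particular competition entry.

```python
def simulate_alif(current, dt=0.1, tau_m=10.0, tau_w=100.0,
                  v_rest=-70.0, v_thresh=-50.0, v_reset=-65.0,
                  r_m=10.0, b=2.0):
    """Simulate an adaptive leaky integrate-and-fire neuron.

    `current` is the injected current trace (nA), sampled every `dt` ms.
    A second variable `w` models adaptation: it jumps by `b` at every
    spike and decays with time constant `tau_w`, acting as an outward
    current that progressively delays the next spike.
    Returns (spike_times_ms, voltage_trace).
    """
    v, w = v_rest, 0.0
    spikes, voltages = [], []
    for k, i_ext in enumerate(current):
        # Membrane equation: tau_m * dv/dt = -(v - v_rest) + R*(I - w)
        v += dt / tau_m * (-(v - v_rest) + r_m * (i_ext - w))
        # Adaptation decays between spikes: tau_w * dw/dt = -w
        w += dt / tau_w * (-w)
        if v >= v_thresh:        # threshold crossing -> emit a spike
            spikes.append(k * dt)
            v = v_reset          # reset captures refractoriness
            w += b               # spike-triggered adaptation increment
        voltages.append(v)
    return spikes, voltages
```

    With a constant suprathreshold current, the interspike intervals lengthen over successive spikes; this spike-frequency adaptation is what separates this model class from a plain leaky integrate-and-fire neuron.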

    Transformation of context-dependent sensory dynamics into motor behavior

    Latorre R, Levi R, Varona P (2013) Transformation of Context-dependent Sensory Dynamics into Motor Behavior. PLoS Comput Biol 9(2): e1002908. doi:10.1371/journal.pcbi.1002908
    The intrinsic dynamics of sensory networks play an important role in the sensory-motor transformation. In this paper we use conductance-based models and electrophysiological recordings to study the dual role of a sensory network in organizing two behavioral context-dependent motor programs in the mollusk Clione limacina. We show that: (i) winner-take-all dynamics in the gravimetric sensory network model drives the typical repetitive rhythm in the wing central pattern generator (CPG) during routine swimming; (ii) winnerless competition dynamics in the same sensory network organizes the irregular pattern observed in the wing CPG during hunting behavior. Our model also shows that although the timing of the activity is irregular, the sequence of switching among the sensory cells is preserved whenever the same set of neurons is activated in a given time window. These activation phase locks in the sensory signals are transformed into specific events in the motor activity. The activation phase locks can play an important role in motor coordination driven by the intrinsic dynamics of a multifunctional sensory organ. This work was supported by MINECO TIN2012-30883 and IPT-2011-0727-020000.
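    The winner-take-all mechanism referred to above can be caricatured with a simple firing-rate model in which units inhibit one another; this is only a hypothetical sketch, not the conductance-based sensory network of the paper, and all parameters are illustrative.

```python
def wta_step(rates, inputs, dt=0.01, tau=0.1, w_inh=2.0):
    """One Euler step of: tau * dr_i/dt = -r_i + [I_i - w_inh * sum_{j != i} r_j]_+"""
    total = sum(rates)
    updated = []
    for r, i_ext in zip(rates, inputs):
        drive = i_ext - w_inh * (total - r)   # inhibition from all other units
        updated.append(r + dt / tau * (-r + max(0.0, drive)))
    return updated

def run_wta(inputs, steps=2000):
    """Relax the network from rest to its steady state."""
    rates = [0.0] * len(inputs)
    for _ in range(steps):
        rates = wta_step(rates, inputs)
    return rates
```

    With unequal inputs, the most strongly driven unit suppresses the others to silence; the paper's hunting regime corresponds instead to winnerless competition, where no unit wins permanently and activity switches sequentially between cells.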

    A Quantitative Neural Coding Model of Sensory Memory

    The coding mechanism of sensory memory at the neuron scale is one of the most important questions in neuroscience. We have put forward a quantitative neural network model that is self-organized, self-similar, and self-adaptive, much like an ecosystem shaped by Darwinian theory. According to this model, neural coding is a many-to-one mapping from objects to neurons, and the whole cerebrum is a real-time statistical Turing machine with powerful representational and learning abilities. This model can reconcile some important disputes, such as temporal coding versus rate-based coding, grandmother cells versus population coding, and decay theory versus interference theory. It also provides explanations for key questions such as memory consolidation, episodic memory, consciousness, and sentiment. Philosophical significance is indicated at the end.
    Comment: 9 pages, 3 figures

    Role of homeostasis in learning sparse representations

    Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to posit that neural activity must efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, it is believed that such efficient coding is achieved through competition across neurons that generates a sparse representation, that is, one in which a relatively small number of neurons is simultaneously active. Indeed, different models of sparse coding, coupled with Hebbian learning and homeostasis, have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism that optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, homeostasis provides an optimal balance for the representation of natural images within the population of neurons. Competition in sparse coding is optimized when it is fair. By contributing to optimizing statistical competition across neurons, homeostasis is crucial in providing a more efficient solution to the emergence of independent components.
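    The idea that competition is optimized "when it is fair" can be illustrated with a deliberately stripped-down toy: each input is assigned to its single best-matching dictionary atom, and a homeostatic gain per atom adapts so that all atoms end up selected equally often. This is a hypothetical sketch of the fairness principle only; the paper's actual algorithm learns sparse codes of natural image patches and is far richer.

```python
import math

def encode(x, atoms, gains):
    """Select the atom with the highest gain-modulated correlation with x."""
    scores = [g * sum(a_i * x_i for a_i, x_i in zip(atom, x))
              for atom, g in zip(atoms, gains)]
    return max(range(len(atoms)), key=lambda i: scores[i])

def homeostatic_encode(samples, atoms, eta=0.05):
    """Encode samples while adapting per-atom gains toward fair usage."""
    gains = [1.0] * len(atoms)
    counts = [0] * len(atoms)
    target = 1.0 / len(atoms)         # fair selection probability
    for t, x in enumerate(samples, start=1):
        winner = encode(x, atoms, gains)
        counts[winner] += 1
        for j in range(len(atoms)):   # depress overused atoms, boost underused ones
            p_j = counts[j] / t
            gains[j] *= math.exp(-eta * (p_j - target))
    return gains, counts
```

    Without the gain adaptation (eta = 0), a dictionary atom that is only marginally better matched wins every input; with it, selection frequencies equalize and every atom stays in play, which is the balance the abstract attributes to homeostasis.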

    Extracting non-linear integrate-and-fire models from experimental data using dynamic I–V curves

    The dynamic I–V curve method was recently introduced for the efficient experimental generation of reduced neuron models. The method extracts the response properties of a neuron while it is subject to a naturalistic stimulus that mimics in vivo-like fluctuating synaptic drive. The resulting history-dependent transmembrane current is then projected onto a one-dimensional current–voltage relation that provides the basis for a tractable non-linear integrate-and-fire model. An attractive feature of the method is that it can be used in spike-triggered mode to quantify the distinct patterns of post-spike refractoriness seen in different classes of cortical neuron. The method is first illustrated using a conductance-based model and is then applied experimentally to generate reduced models of cortical layer-5 pyramidal cells and interneurons, in injected-current and injected-conductance protocols. The resulting low-dimensional neuron models—of the refractory exponential integrate-and-fire type—provide highly accurate predictions for spike times. The method therefore provides a useful tool for the construction of tractable models and rapid experimental classification of cortical neurons.
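    The projection step of the method can be sketched as follows: estimate the ionic current as I_ion(t) = I_inj(t) - C*dV/dt (assuming the membrane capacitance C is known) and average it in voltage bins. This is an illustrative sketch, not the authors' analysis code; applied to synthetic data from a purely passive membrane, the recovered curve should be the straight line g*(V - E_L), and in real neurons the additional exponential upswing at depolarized voltages is what motivates the exponential integrate-and-fire model.

```python
def dynamic_iv(v, i_inj, c_m, dt, v_min, v_max, n_bins):
    """Estimate the dynamic I-V curve from a voltage trace and injected current.

    I_ion(t) = I_inj(t) - c_m * dV/dt, averaged within voltage bins.
    Returns (bin_centers, mean ionic current per bin; None for empty bins).
    """
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    width = (v_max - v_min) / n_bins
    for k in range(1, len(v) - 1):
        dvdt = (v[k + 1] - v[k - 1]) / (2.0 * dt)   # centred derivative
        i_ion = i_inj[k] - c_m * dvdt
        b = int((v[k] - v_min) / width)
        if 0 <= b < n_bins:
            sums[b] += i_ion
            counts[b] += 1
    centers = [v_min + (b + 0.5) * width for b in range(n_bins)]
    means = [s / n if n else None for s, n in zip(sums, counts)]
    return centers, means
```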

    COMBINED DEEP AND SHALLOW KNOWLEDGE IN A UNIFIED MODEL FOR DIAGNOSIS BY ABDUCTION

    Fault diagnosis in real systems usually involves human experts' shallow knowledge (such as cause–effect patterns) but also deep knowledge (such as structural/functional modularization and models of behavior). The paper proposes a unified approach to diagnosis by abduction, based on plausibility and relevance criteria applied multiple times, in a connectionist implementation. It then focuses on the elicitation of deep knowledge about the target conductive flow systems, which are among the most common in industry and beyond, with the aim of fault diagnosis. Finally, the paper gives hints on the design and construction of a diagnosis-by-abduction system embedding deep and shallow knowledge (as the case requires) and performing hierarchical fault isolation, along with a case study on a hydraulic installation in a rolling mill plant.
    Keywords: shallow knowledge, diagnosis, flow systems

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.