
    Automatically Selecting a Suitable Integration Scheme for Systems of Differential Equations in Neuron Models

    At the level of spiking activity, the integrate-and-fire neuron is one of the most commonly used descriptions of neural activity. A multitude of variants has been proposed to cope with the huge diversity of behaviors observed in biological nerve cells. The main appeal of this class of models is that it can be defined as a hybrid model, in which a set of mathematical equations describes the sub-threshold dynamics of the membrane potential, while the generation of action potentials is usually added algorithmically, without the shape of spikes being part of the equations. In contrast to more detailed biophysical models, this simple description allows the routine simulation of large biological neuronal networks on standard hardware widely available in most laboratories. The time evolution of the relevant state variables is usually defined by a small set of ordinary differential equations (ODEs). A small number of evolution schemes for the corresponding systems of ODEs are commonly used for many neuron models, and they form the basis of the neuron model implementations built into widely used simulators such as Brian, NEST and NEURON. However, an often neglected problem is that the implemented evolution schemes are only rarely selected through a structured process based on numerical criteria. This practice cannot guarantee accurate and stable solutions, and the actual quality of the solution depends largely on the parametrization of the model. In this article, we give an overview of typical equations and state descriptions for the dynamics of the relevant variables in integrate-and-fire models. We then describe a formal mathematical process to automate the design or selection of a suitable evolution scheme for this large class of models. Finally, we present the reference implementation of our symbolic analysis toolbox for ODEs, which can guide modelers during the implementation of custom neuron models.
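The hybrid description above can be made concrete with a minimal sketch (all parameter values here are hypothetical, not taken from the article): the linear sub-threshold ODE of a leaky integrate-and-fire neuron is integrated either with a naive forward-Euler scheme or with the exact propagator that exists for linear ODEs with piecewise-constant input, while spikes are generated purely algorithmically by a threshold-and-reset rule.

```python
import numpy as np

# Hypothetical parameters (not from the article); potentials in mV, time in ms.
tau_m, v_rest, v_th, v_reset = 10.0, -65.0, -50.0, -70.0
dt = 0.1

def step_euler(v, i_ext):
    """Forward-Euler step of tau_m * dv/dt = -(v - v_rest) + i_ext."""
    return v + dt * (-(v - v_rest) + i_ext) / tau_m

def step_exact(v, i_ext):
    """Exact propagator for the same linear ODE, input held constant over dt."""
    p = np.exp(-dt / tau_m)
    return v * p + (v_rest + i_ext) * (1.0 - p)

def simulate(stepper, i_ext=20.0, t_max=100.0):
    """Sub-threshold ODE plus algorithmic spiking: threshold crossing -> reset."""
    v, spikes = v_rest, []
    for k in range(int(t_max / dt)):
        v = stepper(v, i_ext)
        if v >= v_th:                 # the spike shape is not part of the equations
            spikes.append((k + 1) * dt)
            v = v_reset
    return spikes

spikes_euler = simulate(step_euler)
spikes_exact = simulate(step_exact)
```

For this linear model the exact propagator is unconditionally stable, whereas the accuracy of the Euler solution depends on the size of `dt` relative to `tau_m` — precisely the parametrization-dependence the article warns about.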

    Structure Learning in Coupled Dynamical Systems and Dynamic Causal Modelling

    Identifying a coupled dynamical system out of many plausible candidates, each of which could serve as the underlying generator of some observed measurements, is a profoundly ill-posed problem that commonly arises when modelling real-world phenomena. In this review, we detail a set of statistical procedures for inferring the structure of nonlinear coupled dynamical systems (structure learning), which has proved useful in neuroscience research. A key focus here is the comparison of competing models of (i.e., hypotheses about) network architectures and implicit coupling functions in terms of their Bayesian model evidence. These methods are collectively referred to as dynamic causal modelling (DCM). We focus on a relatively new approach that is proving remarkably useful, namely Bayesian model reduction (BMR), which enables rapid evaluation and comparison of models that differ in their network architecture. We illustrate the usefulness of these techniques by modelling neurovascular coupling (the cellular pathways linking neuronal and vascular systems), whose function is an active focus of research in neurobiology and in the imaging of coupled neuronal systems.
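As a toy illustration of scoring network architectures by model evidence (a generic sketch, not the DCM/BMR machinery itself — BMR operates analytically on the variational free energy of a fitted full model, and every name and parameter below is an assumption), candidate coupling structures of a two-node linear system can be compared via a BIC approximation to the log evidence:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a two-node linear system x_{t+1} = A x_t + noise,
# with no coupling from node 1 to node 0 (A_true[0, 1] == 0).
A_true = np.array([[0.9, 0.0],
                   [0.3, 0.8]])
T = 500
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = A_true @ x[t] + 0.1 * rng.standard_normal(2)

def log_evidence_bic(mask):
    """Approximate log evidence of a coupling structure via BIC.
    mask[i, j] == 1 allows an influence of node j on node i."""
    X, Y = x[:-1], x[1:]
    n = X.shape[0]
    ll = 0.0
    for i in range(mask.shape[0]):
        cols = np.flatnonzero(mask[i])
        resid = Y[:, i]
        if cols.size:
            beta, *_ = np.linalg.lstsq(X[:, cols], Y[:, i], rcond=None)
            resid = Y[:, i] - X[:, cols] @ beta
        sigma2 = resid @ resid / n          # ML noise variance for this node
        ll += -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return ll - 0.5 * mask.sum() * np.log(n)   # BIC penalty on parameter count

full    = log_evidence_bic(np.array([[1, 1], [1, 1]]))
reduced = log_evidence_bic(np.array([[1, 0], [1, 1]]))  # matches the truth
```

The reduced structure that matches the true architecture attains the highest approximate evidence; BMR performs an analogous comparison over many reduced models, but analytically, from a single inversion of the full model.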

    Gravitational Models Explain Shifts on Human Visual Attention

    Visual attention refers to the human brain's ability to select relevant sensory information for preferential processing, improving performance in visual and cognitive tasks. It proceeds in two phases: one in which visual feature maps are acquired and processed in parallel, and another in which the information from these maps is merged in order to select a single location to be attended for further, more complex computations and reasoning. Its computational description is challenging, especially if the temporal dynamics of the process are taken into account. Numerous methods to estimate saliency have been proposed in the last three decades. They achieve almost perfect performance in estimating saliency at the pixel level, but the way they generate shifts in visual attention depends entirely on winner-take-all (WTA) circuitry. WTA is implemented by the biological hardware in order to select the location with maximum saliency, towards which overt attention is directed. In this paper we propose a gravitational model (GRAV) to describe attentional shifts. Every single feature acts as an attractor, and the shifts are the result of the joint effects of the attractors. In this framework, the assumption of a single, centralized saliency map is no longer necessary, though still plausible. Quantitative results on two large image datasets show that this model predicts shifts more accurately than winner-take-all.
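A minimal sketch of the gravitational idea (an illustrative analogue, not the paper's GRAV equations; the force law, damping constants and inhibition-of-return mechanism below are all assumptions): every saliency value acts as a point attractor, a gaze particle moves under their joint softened inverse-square pull, and the attended region is suppressed before the next shift.

```python
import numpy as np

# Toy 64x64 saliency map with two Gaussian blobs (hypothetical values).
H = W = 64
yy, xx = np.mgrid[0:H, 0:W]

def blob(cy, cx, sigma):
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

saliency = blob(16, 16, 4) + 0.8 * blob(48, 48, 4)   # first blob is stronger

def gravitational_shifts(sal_map, n_shifts=2, steps=400, dt=0.5, eps=2.0):
    """Move a gaze particle under the joint pull of all saliency values
    (softened inverse-square attraction), then suppress the attended
    region before the next shift (an inhibition-of-return analogue)."""
    sal = sal_map.copy()
    pts = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
    pos = np.array([H / 2.0, W / 2.0])    # start at the image centre
    vel = np.zeros(2)
    fixations = []
    for _ in range(n_shifts):
        for _ in range(steps):
            d = pts - pos
            r2 = (d ** 2).sum(axis=1) + eps ** 2
            acc = ((sal.ravel() / r2 ** 1.5)[:, None] * d).sum(axis=0)
            vel = 0.8 * vel + dt * acc    # damped second-order dynamics
            pos = pos + dt * vel
        fixations.append(pos.round().astype(int))
        cy, cx = pos                      # inhibit the attended region
        sal = sal * (1.0 - np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 128.0))
    return fixations

fixations = gravitational_shifts(saliency)
```

The particle settles on the stronger attractor first; no single argmax over a centralized saliency map is ever taken, which is the contrast with WTA the abstract draws.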

    Techniques of replica symmetry breaking and the storage problem of the McCulloch-Pitts neuron

    Full text link
    In this article the framework for Parisi's spontaneous replica symmetry breaking is reviewed, and subsequently applied to the example of the statistical mechanical description of the storage properties of a McCulloch-Pitts neuron. The technical details are reviewed extensively, with regard to the wide range of systems where the method may be applied. Parisi's partial differential equation and related differential equations are discussed, and a Green function technique is introduced for the calculation of replica averages, the key to determining the averages of physical quantities. The ensuing graph rules involve only tree graphs, as appropriate for a mean-field-like model. The lowest order Ward-Takahashi identity is recovered analytically and is shown to lead to the Goldstone modes in continuous replica symmetry breaking phases. The need for a replica symmetry breaking theory in the storage problem of the neuron has arisen due to the thermodynamical instability of formerly given solutions. Variational forms for the neuron's free energy are derived in terms of the order parameter function x(q), for different prior distributions of synapses. Analytically in the high temperature limit and numerically in generic cases various phases are identified, among them one similar to the Parisi phase in the Sherrington-Kirkpatrick model. Extensive quantities like the error per pattern change slightly with respect to the known unstable solutions, but there is a significant difference in the distribution of non-extensive quantities like the synaptic overlaps and the pattern storage stability parameter. A simulation result is also reviewed and compared to the prediction of the theory.
    Comment: 103 LaTeX pages (with REVTeX 3.0), including 15 figures (ps, epsi, eepic); accepted for Physics Reports
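For orientation, the classical replica-symmetric benchmark for this storage problem (Gardner's calculation for a perceptron with continuous, spherically constrained synapses; a standard textbook result, not a derivation from this review) gives the critical storage capacity per synapse as a function of the stability parameter \(\kappa\):

```latex
\alpha_c(\kappa) \;=\;
\left( \int_{-\kappa}^{\infty} \frac{\mathrm{d}t}{\sqrt{2\pi}}\,
       e^{-t^2/2}\,(t+\kappa)^2 \right)^{-1},
\qquad \alpha_c(0) = 2 .
```

Replica symmetry breaking of the kind reviewed here becomes necessary where such replica-symmetric solutions turn thermodynamically unstable, e.g. beyond saturation or for constrained synaptic priors.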

    Using neural network for simulations to improve the quality of disease diagnosis: Technical aspects

    Mathematical models are important for the processes of cognition and decision-making. They provide a concise representation of the significant relationships in the description of objects and situations; adding new relationships narrows the scope of applicability of the model. A formula is an example of a compressed description of a potentially infinite set of objects and situations. Knowledge processing is based on the use of mathematical methods, which make it most rigorous, at least from the point of view of strict logic and consistent formalization. To process knowledge, we must present it in some form that is convenient for analysis. Thus, when analyzing data and knowledge, we do not use them directly, but their representations. Mathematical models of objects and phenomena are an effective way of representation, and modelling is currently the most powerful method for the cognition of processes, objects and phenomena, as well as a special way of scientific research. A mathematical model of an object is a mathematical structure interpreted within a given domain. © 2020, World Academy of Research in Science and Engineering. All rights reserved.

    Learning and recognition by a dynamical system with a plastic velocity field

    Get PDF
    Learning is a mechanism intrinsic to all sentient biological systems. Despite the diverse range of paradigms that exist, it appears that no artificial system has yet been developed that can emulate learning with a degree of accuracy or efficiency comparable to the human brain. With the development of new approaches comes the opportunity to reduce this disparity in performance. A model presented by Janson and Marsden [arXiv:1107.0674 (2011)] (the memory foam model) redefines the critical features that an intelligent system should demonstrate. Rather than focussing on the topological constraints of a rigid neuron structure, the emphasis is placed on the on-line, unsupervised classification, retention and recognition of stimuli. In contrast to traditional AI approaches, the system's memory is not plagued by spurious attractors or the curse of dimensionality. The ability to learn continuously, whilst simultaneously recognising aspects of a stimulus, ensures that this model more closely embodies the operations occurring in the brain than many other AI approaches. Here we consider the pertinent deficiencies of classical artificial learning models before introducing and developing this memory foam self-shaping system. As the model is relatively new, its limitations are not yet apparent; these must be established by testing the model in various complex environments. Here we consider its ability to learn and recognize the RGB colours composing cartoons as observed via a web camera. The self-shaping vector field of the system is shown to adjust its composition to reflect the distribution of three-dimensional inputs. The model builds a memory of its experiences and is shown to recognize unfamiliar colours by locating the most appropriate class with which to associate a stimulus. In addition, we discuss a method to map a three-dimensional RGB input onto a line spectrum of colours.
    The corresponding reduction of the model's dimensions is shown to dramatically improve computational speed; however, the model is then restricted to a much smaller set of representable colours. The model's prototype offers a gradient description of recognition, and it is evident that a more complex, non-linear alternative may be used to better characterize the classes of the system. It is postulated that non-linear attractors may be utilized to convey the concept of hierarchy that relates the different classes of the system. We relate the dynamics of the van der Pol oscillator to this plastic self-shaping system, first demonstrating the recognition of stimuli with limit-cycle trajectories. The location and frequency of each cycle depend on the topology of the system's energy potential. For a one-dimensional stimulus the dynamics are restricted to the cycle; the extension of the model to an N-dimensional stimulus is approached via the coupling of N oscillators. Here we study systems of up to three mutually coupled oscillators and relate limit cycles, fixed points and quasi-periodic orbits to the recognition of stimuli.
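The role of the van der Pol limit cycle can be sketched as follows (a generic illustration with assumed parameters, not the authors' coupled system): for mu = 1 the oscillator x'' - mu*(1 - x^2)*x' + x = 0 attracts any small nonzero initial condition onto a limit cycle of amplitude close to 2, so trajectories launched from different stimuli converge to the same recognisable orbit.

```python
import numpy as np

def vdp(state, mu=1.0):
    """Van der Pol vector field for x'' - mu*(1 - x^2)*x' + x = 0."""
    x, y = state
    return np.array([y, mu * (1.0 - x ** 2) * y - x])

def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Any small nonzero initial condition is drawn onto the same limit cycle:
# the sense in which a cycle can "recognize" a whole class of stimuli.
dt, n = 0.01, 5000
state = np.array([0.1, 0.0])
traj = np.empty((n, 2))
for i in range(n):
    state = rk4_step(vdp, state, dt)
    traj[i] = state

amplitude = np.abs(traj[n // 2:, 0]).max()   # amplitude on the settled cycle
```

Extending this to an N-dimensional stimulus, as the abstract describes, would couple N such oscillators, e.g. by adding linear difference terms of the form k*(x_j - x_i) to each velocity equation; depending on k, the joint dynamics exhibit the limit cycles, fixed points and quasi-periodic orbits the thesis studies.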