4,198 research outputs found
Automatically Selecting a Suitable Integration Scheme for Systems of Differential Equations in Neuron Models
On the level of spiking activity, the integrate-and-fire neuron is one of the most commonly used descriptions of neural activity. A multitude of variants has been proposed to cope with the huge diversity of behaviors observed in biological nerve cells. The main appeal of this class of models is that they can be defined as hybrid models, in which a set of mathematical equations describes the sub-threshold dynamics of the membrane potential, while the generation of action potentials is added algorithmically, without the shape of spikes being part of the equations. In contrast to more detailed biophysical models, this simple description allows the routine simulation of large biological neuronal networks on the standard hardware widely available in most laboratories today. The time evolution of the relevant state variables is usually defined by a small set of ordinary differential equations (ODEs). A small number of evolution schemes for the corresponding systems of ODEs are commonly used across many neuron models, and form the basis of the neuron model implementations built into widely used simulators such as Brian, NEST and NEURON. However, an often neglected problem is that the implemented evolution schemes are only rarely selected through a structured process based on numerical criteria. This practice cannot guarantee accurate and stable solutions, and the actual quality of the solution depends largely on the parametrization of the model. In this article, we give an overview of typical equations and state descriptions for the dynamics of the relevant variables in integrate-and-fire models. We then describe a formal mathematical process to automate the design or selection of a suitable evolution scheme for this large class of models. Finally, we present the reference implementation of our symbolic analysis toolbox for ODEs that can guide modelers during the implementation of custom neuron models.
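To make the hybrid-model idea concrete, the following minimal sketch integrates a leaky integrate-and-fire neuron in two ways: with an exact propagator (possible because the sub-threshold ODE is linear, one of the structural properties such a toolbox can detect) and with forward Euler. The spike and reset are handled algorithmically outside the ODE. All parameter names and values are illustrative assumptions, not taken from the article.

```python
import math

# Leaky integrate-and-fire (LIF) parameters -- illustrative values only.
TAU_M = 10.0     # membrane time constant (ms)
E_L = -70.0      # resting potential (mV)
R_M = 10.0       # membrane resistance (MOhm)
V_TH = -55.0     # spike threshold (mV)
V_RESET = -70.0  # reset potential (mV)

def step_exact(v, i_ext, dt):
    """Exact propagator for the linear ODE dV/dt = (-(V - E_L) + R_M*i_ext) / TAU_M.

    Because the sub-threshold dynamics are linear with constant input over the
    step, the solution can be written in closed form."""
    v_inf = E_L + R_M * i_ext                 # steady-state voltage for this input
    return v_inf + (v - v_inf) * math.exp(-dt / TAU_M)

def step_euler(v, i_ext, dt):
    """Forward-Euler step of the same ODE (accurate only for small dt)."""
    return v + dt * (-(v - E_L) + R_M * i_ext) / TAU_M

def simulate(stepper, i_ext=2.0, dt=0.1, t_end=100.0):
    """Hybrid simulation: integrate sub-threshold, reset algorithmically on spike."""
    v, spikes = E_L, 0
    for _ in range(int(t_end / dt)):
        v = stepper(v, i_ext, dt)
        if v >= V_TH:        # spike generation is not part of the ODE itself
            v = V_RESET
            spikes += 1
    return spikes
```

For this linear model both steppers agree closely at small dt, but the exact propagator stays stable for any step size, which is exactly the kind of property a structured scheme-selection process can exploit.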
Structure Learning in Coupled Dynamical Systems and Dynamic Causal Modelling
Identifying a coupled dynamical system out of many plausible candidates, each
of which could serve as the underlying generator of some observed measurements,
is a profoundly ill posed problem that commonly arises when modelling real
world phenomena. In this review, we detail a set of statistical procedures for
inferring the structure of nonlinear coupled dynamical systems (structure
learning), which has proved useful in neuroscience research. A key focus here
is the comparison of competing models of (i.e., hypotheses about) network
architectures and implicit coupling functions in terms of their Bayesian model
evidence. These methods are collectively referred to as dynamic causal
modelling (DCM). We focus on a relatively new approach that is proving
remarkably useful; namely, Bayesian model reduction (BMR), which enables rapid
evaluation and comparison of models that differ in their network architecture.
We illustrate the usefulness of these techniques through modelling
neurovascular coupling (cellular pathways linking neuronal and vascular
systems), whose function is an active focus of research in neurobiology and the
imaging of coupled neuronal systems.
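As a generic illustration of selecting between competing hypotheses by their model evidence (not the DCM/BMR machinery itself, which scores dynamical models of neuronal coupling analytically), the sketch below fits two hypothetical regression models to synthetic data and compares them with the BIC approximation to the log evidence; all names and data are our assumptions.

```python
import math
import random

random.seed(1)

# Synthetic observations from y = 2x + noise; model A matches the generator.
n = 100
xs = [i / 10 for i in range(n)]
ys = [2.0 * x + random.gauss(0.0, 0.5) for x in xs]

def bic(rss, k):
    """Bayesian information criterion: lower means higher approximate evidence."""
    return n * math.log(rss / n) + k * math.log(n)

# Model A (hypothesis: no intercept): y = a*x, one free parameter.
a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
rss_a = sum((y - a * x) ** 2 for x, y in zip(xs, ys))

# Model B (hypothesis: intercept): y = b0 + b1*x, two free parameters.
mx, my = sum(xs) / n, sum(ys) / n
b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b0 = my - b1 * mx
rss_b = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))

# Model B always fits at least as tightly (it nests model A), but the
# complexity penalty typically favours the simpler model that matches the
# generator: compare bic(rss_a, 1) against bic(rss_b, 2).
scores = {"A": bic(rss_a, 1), "B": bic(rss_b, 2)}
```

BMR goes further than this sketch: it evaluates reduced models analytically from the posterior of a single full model, rather than refitting each candidate.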
Gravitational Models Explain Shifts on Human Visual Attention
Visual attention refers to the human brain's ability to select relevant
sensory information for preferential processing, improving performance in
visual and cognitive tasks. It proceeds in two phases: one in which visual
feature maps are acquired and processed in parallel, and another in which the
information from these maps is merged in order to select a single location to
be attended for further, more complex computations and reasoning. Its
computational description is challenging, especially if the temporal dynamics
computational description is challenging, especially if the temporal dynamics
of the process are taken into account. Numerous methods to estimate saliency
have been proposed in the last three decades. They achieve almost perfect
performance in estimating saliency at the pixel level, but the way they
generate shifts in visual attention fully depends on winner-take-all (WTA)
circuitry. WTA is implemented} by the biological hardware in order to select a
location with maximum saliency, towards which to direct overt attention. In
this paper we propose a gravitational model (GRAV) to describe the attentional
shifts. Every single feature acts as an attractor and the shifts are the
result of the joint effects of the attractors. In the current framework, the
assumption of a single, centralized saliency map is no longer necessary, though
still plausible. Quantitative results on two large image datasets show that
this model predicts shifts more accurately than winner-take-all.
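A minimal sketch of the gravitational idea, under our own assumed inverse-square pull (the paper's exact formulation may differ): every salient feature location attracts the current fixation in proportion to its saliency, and the next shift follows the joint pull of all attractors rather than a winner-take-all maximum.

```python
import math

def net_pull(fixation, attractors):
    """Joint pull on the fixation; attractors is a list of ((x, y), saliency).

    Assumed force law: saliency / distance^2 along the unit vector towards
    each attractor -- an illustrative choice, not the paper's definition."""
    fx, fy = 0.0, 0.0
    for (x, y), s in attractors:
        dx, dy = x - fixation[0], y - fixation[1]
        d2 = dx * dx + dy * dy
        if d2 < 1e-12:
            continue  # already at this attractor
        d = math.sqrt(d2)
        fx += s * dx / (d * d2)  # (dx/d) * s / d^2
        fy += s * dy / (d * d2)
    return fx, fy

def shift(fixation, attractors, step=1.0):
    """Move the fixation one unit step along the joint pull."""
    fx, fy = net_pull(fixation, attractors)
    norm = math.hypot(fx, fy) or 1.0
    return (fixation[0] + step * fx / norm, fixation[1] + step * fy / norm)

# A strong attractor to the right and a weak one above: the shift heads
# mostly rightward, yet the weak attractor still bends the trajectory --
# no single winner is selected.
new_fix = shift((0.0, 0.0), [((10.0, 0.0), 5.0), ((0.0, 10.0), 1.0)])
```

Note that the shift is a compromise between attractors, which is precisely what makes a single centralized saliency map unnecessary in this framework.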
Techniques of replica symmetry breaking and the storage problem of the McCulloch-Pitts neuron
In this article the framework for Parisi's spontaneous replica symmetry
breaking is reviewed, and subsequently applied to the example of the
statistical mechanical description of the storage properties of a
McCulloch-Pitts neuron. The technical details are reviewed extensively, with
regard to the wide range of systems where the method may be applied. Parisi's
partial differential equation and related differential equations are discussed,
and a Green function technique introduced for the calculation of replica
averages, the key to determining the averages of physical quantities. The
ensuing graph rules involve only tree graphs, as appropriate for a
mean-field-like model. The lowest order Ward-Takahashi identity is recovered
analytically and is shown to lead to the Goldstone modes in continuous replica
symmetry breaking phases. The need for a replica symmetry breaking theory in
the storage problem of the neuron has arisen due to the thermodynamical
instability of formerly given solutions. Variational forms for the neuron's
free energy are derived in terms of the order parameter function x(q), for
different prior distributions of synapses. Analytically in the high temperature
limit and numerically in generic cases various phases are identified, among
them one similar to the Parisi phase in the Sherrington-Kirkpatrick model.
Extensive quantities like the error per pattern change slightly with respect to
the known unstable solutions, but there is a significant difference in the
distribution of non-extensive quantities like the synaptic overlaps and the
pattern storage stability parameter. A simulation result is also reviewed and
compared to the prediction of the theory.
Comment: 103 LaTeX pages (with REVTeX 3.0), including 15 figures (ps, epsi, eepic); accepted for Physics Reports.
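As a small numerical companion to the storage problem (a simulation sketch in the spirit of the review's comparison with simulations, not the replica calculation itself), the snippet below checks whether a single McCulloch-Pitts neuron trained with the perceptron rule can store p random binary patterns, illustrating the finite storage capacity alpha = p/N, which approaches 2 for unbiased patterns at zero stability parameter.

```python
import random

random.seed(0)

def can_store(n, p, epochs=200):
    """Try to store p random +/-1 patterns with desired +/-1 outputs in a
    McCulloch-Pitts neuron (a perceptron) with n synapses.

    Returns True if every pattern ends up on the correct side of the
    threshold (stability > 0), False if learning fails within the epoch
    budget -- which is the typical outcome above capacity."""
    patterns = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(p)]
    targets = [random.choice((-1, 1)) for _ in range(p)]
    w = [0.0] * n
    for _ in range(epochs):
        errors = 0
        for xi, t in zip(patterns, targets):
            h = sum(wi * x for wi, x in zip(w, xi))
            if h * t <= 0:           # pattern not yet stored with correct sign
                errors += 1
                for i in range(n):
                    w[i] += t * xi[i] / n   # perceptron learning rule
        if errors == 0:
            return True
    return False

# Well below capacity (alpha = 0.5) storage succeeds; far above it
# (alpha = 4 > 2) the patterns are almost surely not linearly separable.
```

The replica machinery reviewed in the article characterises exactly this transition, and the structure of the error phase beyond it, in the thermodynamic limit.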
Using neural network for simulations to improve the quality of disease diagnosis: Technical aspects
Mathematical models are important for the processes of cognition and decision-making. They provide a concise representation of the significant relationships in the description of objects and situations. Adding new relationships narrows the scope of applicability of a model. A formula is an example of a compressed description of a potentially infinite set of objects and situations. Knowledge processing is based on the use of mathematical methods; this is the most thorough approach, at least from the point of view of strict logic and consistent formalization. To process knowledge, we must present it in some form that is convenient for analysis. Thus, when analyzing data and knowledge, we do not use them directly, but their representations. Mathematical models of objects and phenomena are an effective form of representation, and modeling is now the most powerful method for the cognition of processes, objects and phenomena; it is a special way of conducting scientific research. A mathematical model of an object is a mathematical structure interpreted within a given domain. © 2020, World Academy of Research in Science and Engineering. All rights reserved.
Resource-Aware Predictive Models in Cyber-Physical Systems
Cyber-Physical Systems (CPS) are composed of computing devices interacting with physical systems. Model-based design is a powerful methodology for implementing control systems in CPS. For instance, Model Predictive Control (MPC) is typically used in CPS applications, e.g., in path tracking of autonomous vehicles. MPC deploys a model to estimate the behavior of the physical system at future time instants over a specific time horizon. Ordinary Differential Equations (ODEs) are the most commonly used models to emulate the behavior of continuous-time (non-)linear dynamical systems. A complex physical model may comprise thousands of ODEs, which pose scalability, performance and power consumption challenges. One approach to addressing these model complexity challenges is frameworks that automate model-to-model transformation. In this dissertation, a state-based model with tunable parameters is proposed to operate as a reconfigurable predictive model of the physical system. Moreover, we propose a run-time switching algorithm that selects the best model using machine learning, employing a metric that formulates the trade-off between the error and the computational savings due to model reduction. Building statistical models is constrained by the need for expert knowledge and an actual understanding of the modeled phenomenon or process. Moreover, statistical models may not produce solutions that are robust in a real-world context, as factors outside the model, such as disruptions, are not taken into account. Machine learning models have emerged as a solution that accounts for the dynamic behavior of the environment and automates intelligence acquisition and refinement. Neural networks are machine learning models well known for their ability to learn linear and nonlinear relations between input and output variables without prior knowledge.
However, the ability to efficiently exploit resource-hungry neural networks in embedded, resource-bound settings is a major challenge. Here, we propose the Priority Neuron Network (PNN), a resource-aware neural network model that can be reconfigured into smaller sub-networks at runtime. This approach enables a trade-off between the model's computation time and accuracy based on available resources. The PNN model is memory efficient, since it stores only one set of parameters to account for the various sub-network sizes. We propose a training algorithm that applies regularization techniques to constrain the activation values of neurons and assigns a priority to each one. We use the neuron's ordinal number as the priority criterion, so that the priority of a neuron is inversely proportional to its ordinal number in the layer. This imposes a relatively sorted order on the activation values. We conduct experiments employing the PNN as the predictive model in a CPS application. Not only does our technique resolve the memory overhead of DNN architectures, it also substantially reduces the computational overhead of the training process. Training time is a critical matter, especially in embedded systems, where many NN models are trained on the fly.
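A minimal sketch of the priority idea (our own illustration, not the trained PNN from the dissertation): the layer keeps a single shared set of parameters plus a gain that decays with each neuron's ordinal number, so at run time it can be truncated to its first k highest-priority neurons without storing separate sub-network weights.

```python
import math
import random

random.seed(0)

class PriorityLayer:
    """Dense ReLU layer whose neurons are ordered by priority.

    The decaying per-neuron gains stand in for the regularisation that, in
    the PNN, pushes low-ordinal neurons to carry most of the signal."""

    def __init__(self, n_in, n_out):
        self.w = [[random.gauss(0, 1 / math.sqrt(n_in)) for _ in range(n_in)]
                  for _ in range(n_out)]
        # Priority is inversely proportional to the ordinal number.
        self.gain = [1.0 / (j + 1) for j in range(n_out)]

    def forward(self, x, k=None):
        """Evaluate using only the first k (highest-priority) neurons."""
        k = len(self.w) if k is None else k
        return [self.gain[j]
                * max(0.0, sum(wij * xi for wij, xi in zip(self.w[j], x)))
                for j in range(k)]

layer = PriorityLayer(4, 8)
x = [0.5, -0.2, 0.1, 0.9]
full = layer.forward(x)         # all 8 neurons
small = layer.forward(x, k=3)   # cheap sub-network, same stored parameters
```

Because truncation only drops the tail of an already-sorted layer, the small sub-network's outputs are an exact prefix of the full network's, which is what makes a single parameter set reusable across sub-network sizes.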
Learning and recognition by a dynamical system with a plastic velocity field
Learning is a mechanism intrinsic to all sentient biological systems. Despite the diverse range of paradigms that exist, it appears that an artificial system has yet to be developed that can emulate learning with a comparable degree of accuracy or efficiency to the human brain. With the development of new approaches comes the opportunity to reduce this disparity in performance. A model presented by Janson and Marsden [arXiv:1107.0674 (2011)] (the memory foam model) redefines the critical features that an intelligent system should demonstrate. Rather than focussing on the topological constraints of a rigid neuron structure, the emphasis is placed on the on-line, unsupervised classification, retention and recognition of stimuli. In contrast to traditional AI approaches, the system's memory is not plagued by spurious attractors or the curse of dimensionality. The ability to continuously learn, whilst simultaneously recognising aspects of a stimulus, ensures that this model more closely embodies the operations occurring in the brain than many other AI approaches. Here we consider the pertinent deficiencies of classical artificial learning models before introducing and developing this memory foam self-shaping system.
As this model is relatively new, its limitations are not yet apparent. These must be established by testing the model in various complex environments. Here we consider its ability to learn and recognize the RGB colours composing cartoons as observed via a web-camera. The self-shaping vector field of the system is shown to adjust its composition to reflect the distribution of three-dimensional inputs. The model builds a memory of its experiences and is shown to recognize unfamiliar colours by locating the most appropriate class with which to associate a stimulus. In addition, we discuss a method to map a three-dimensional RGB input onto a line spectrum of colours. The corresponding reduction of the model's dimensions is shown to dramatically improve computational speed; however, the model is then restricted to a much smaller set of representable colours.
This model's prototype offers a gradient description of recognition; it is evident that a more complex, non-linear alternative may be used to better characterize the classes of the system. It is postulated that non-linear attractors may be utilized to convey the concept of hierarchy that relates the different classes of the system. We relate the dynamics of the van der Pol oscillator to this plastic self-shaping system, first demonstrating the recognition of stimuli with limit-cycle trajectories. The location and frequency of each cycle depend on the topology of the system's energy potential. For a one-dimensional stimulus the dynamics are restricted to the cycle; the extension of the model to an N-dimensional stimulus is approached via the coupling of N oscillators. Here we study systems of up to three mutually coupled oscillators and relate limit cycles, fixed points and quasi-periodic orbits to the recognition of stimuli.
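The limit-cycle recognition idea can be illustrated with a single van der Pol oscillator: trajectories started from very different states converge to the same cycle, so the attractor, not the initial condition, characterises what is "recognised". Parameter values and integration settings below are illustrative assumptions.

```python
MU = 1.0  # van der Pol nonlinearity strength (illustrative choice)

def deriv(state):
    """Van der Pol as a first-order system: x' = y, y' = MU*(1 - x^2)*y - x."""
    x, y = state
    return (y, MU * (1.0 - x * x) * y - x)

def rk4_step(state, dt):
    """Classical fourth-order Runge-Kutta step."""
    def shifted(s, k, h):
        return (s[0] + h * k[0], s[1] + h * k[1])
    k1 = deriv(state)
    k2 = deriv(shifted(state, k1, dt / 2))
    k3 = deriv(shifted(state, k2, dt / 2))
    k4 = deriv(shifted(state, k3, dt))
    return (state[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def amplitude_after(state, t=100.0, dt=0.01):
    """Integrate past the transient; return peak |x| over the last quarter."""
    n = int(t / dt)
    peak = 0.0
    for i in range(n):
        state = rk4_step(state, dt)
        if i > 3 * n // 4:
            peak = max(peak, abs(state[0]))
    return peak

# Trajectories from inside and outside the cycle converge to the same
# limit cycle, whose amplitude is close to 2 for moderate MU.
a_inner = amplitude_after((0.1, 0.0))
a_outer = amplitude_after((4.0, 0.0))
```

Coupling N such oscillators, as the thesis does for N-dimensional stimuli, replaces the single cycle with limit cycles, fixed points or quasi-periodic orbits of the joint system.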