
    A new twist for the simulation of hybrid systems using the true jump method

    The use of stochastic models, in particular piecewise deterministic Markov processes (PDMPs), has become increasingly popular, especially for the modeling of chemical reactions and cell biophysics. Yet exact simulation methods for these models in evolving environments are limited by the need to find the next jump time at each recursion of the algorithm. Here, we report a new general method to find this jump time for the True Jump Method. It is based on a reformulation in terms of ordinary differential equations, for which efficient numerical methods are available. Our result thus makes it possible to study numerically stochastic models for which analytical formulas are not available, for example by approximating the state distribution. We conclude that the wide use of event detection schemes for the simulation of PDMPs should be strongly reconsidered. The only relevant remaining question is the efficiency of our method compared with the Fictitious Jump Method, which is strongly case dependent.
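    As an illustration of the time-change idea (a minimal sketch in Python; the toy drift f, rate lam, and jump map below are illustrative assumptions, not the paper's model or code), the cumulative hazard can be taken as the integration variable, so the trajectory is integrated exactly up to an Exp(1) threshold and no event detection is required:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy PDMP: x follows dx/dt = f(x) between jumps and jumps at rate lam(x).
    # f, lam and the jump map are illustrative choices, not from the paper.
    def f(x):   return -x + 1.0        # deterministic drift
    def lam(x): return 1.0 + x**2      # jump rate, bounded below by 1 here

    def true_jump_step(x0, rng):
        """Advance the PDMP by one jump via a time change: integrating in the
        cumulative-hazard variable s (ds = lam(x) dt), the jump happens exactly
        at s* ~ Exp(1), so no root finding or event detection is needed."""
        s_star = rng.exponential()     # s* = -log(U), U ~ Uniform(0,1)
        def rhs(s, y):                 # y = (x, t)
            x, _ = y
            return [f(x) / lam(x), 1.0 / lam(x)]
        sol = solve_ivp(rhs, (0.0, s_star), [x0, 0.0], rtol=1e-8, atol=1e-10)
        x_pre, dt = sol.y[0, -1], sol.y[1, -1]
        return 0.5 * x_pre, dt         # toy jump map: halve the state

    rng = np.random.default_rng(0)
    x, t = 2.0, 0.0
    for _ in range(5):
        x, dt = true_jump_step(x, rng)
        t += dt
        print(f"jump at t = {t:.3f}, new state x = {x:.3f}")
    ```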

    Local/global analysis of the stationary solutions of some neural field equations

    Neural or cortical fields are continuous assemblies of mesoscopic models, also called neural masses, of neural populations that are fundamental in the modeling of macroscopic parts of the brain. Neural fields are described by nonlinear integro-differential equations. The solutions of these equations represent the state of activity of these populations when submitted to inputs from neighbouring brain areas. Understanding the properties of these solutions is essential to advancing our understanding of the brain. In this paper we study the dependence of the stationary solutions of the neural field equations on the stiffness of the nonlinearity and the contrast of the external inputs. This is done by using degree theory and bifurcation theory in the context of functional, in particular infinite-dimensional, spaces. The joint use of these two theories allows us to make new detailed predictions about the global and local behaviour of the solutions. We also provide a generic finite-dimensional approximation of these equations which allows us to study two models in great detail. The first is a neural mass model of a cortical hypercolumn of orientation-sensitive neurons, the ring model. The second is a general neural field model in which the spatial connectivity is described by heterogeneous Gaussian-like functions.
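    For concreteness, the fixed-point problem at stake can be written as follows (a standard restatement in assumed notation, not a quotation from the paper): V is the membrane potential field on the domain Omega, W the connectivity, S a sigmoid whose slope sigma is the stiffness parameter, and I_ext the external input.

    ```latex
    % Stationary neural field equation (standard restatement; notation assumed).
    % The stiffness \sigma and the contrast of I_{ext} are the parameters
    % with respect to which the solution branches are followed.
    V(x) \;=\; \int_{\Omega} W(x,y)\, S\bigl(\sigma V(y)\bigr)\,\mathrm{d}y
               \;+\; I_{\mathrm{ext}}(x), \qquad x \in \Omega .
    ```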

    Illusions in the Ring Model of visual orientation selectivity

    The Ring Model of orientation tuning is a dynamical model of a hypercolumn of visual area V1 in the human neocortex designed to account for the experimentally observed orientation tuning curves by local, i.e. cortico-cortical, computations. The tuning curves are stationary, i.e. time-independent, solutions of this dynamical model. One important assumption underlying the Ring Model is that the LGN input to V1 is weakly tuned to the retinal orientation and that it is the local computations in V1 that sharpen this tuning. Because the equations that describe the Ring Model have built-in equivariance properties in the synaptic weight distribution with respect to a particular group acting on the retinal orientation of the stimulus, the model in effect encodes an infinite number of tuning curves that are arbitrarily translated with respect to each other. By using the Orbit Space Reduction technique we rewrite the model equations in canonical form as functions of polynomials that are invariant with respect to the action of this group. This allows us to combine equivariant bifurcation theory with an efficient numerical continuation method in order to compute the tuning curves predicted by the Ring Model. Surprisingly, some of these tuning curves are not tuned to the stimulus. We interpret them as neural illusions and show numerically how they can be induced by simple dynamical stimuli. These neural illusions are important biological predictions of the model; if they could be observed experimentally, this would be a strong point in favour of the Ring Model. We also show how our theoretical analysis allows us to specify very simply the ranges of the model parameters by comparing the model predictions with published experimental observations.
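    In a standard form (our restatement; the paper's conventions may differ), the Ring Model and the equivariance it exploits read:

    ```latex
    % Ring Model in a standard form. V(\theta,t) is the potential of the
    % population tuned to orientation \theta, J a \pi-periodic connectivity,
    % S a sigmoid, and I_{lgn} the weakly tuned LGN input.
    \tau\,\partial_t V(\theta,t) = -V(\theta,t)
      + \int_{-\pi/2}^{\pi/2} J(\theta-\theta')\,S\bigl(V(\theta',t)\bigr)\,
        \frac{\mathrm{d}\theta'}{\pi} + I_{\mathrm{lgn}}(\theta),
    \qquad J(\theta) = J_0 + J_1\cos(2\theta).
    % Since J depends only on \theta-\theta', rotating the stimulus by \phi
    % rotates every tuning curve by \phi: this is the group equivariance
    % exploited by the Orbit Space Reduction.
    ```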

    Long time behavior of a mean-field model of interacting neurons

    We study the long time behavior of the solution to a McKean-Vlasov stochastic differential equation (SDE) driven by a Poisson process. In neuroscience, this SDE models the asymptotic dynamics of the membrane potential of a spiking neuron in a large network. We prove that for a small enough interaction parameter, any solution converges to the unique (in this case) invariant measure. To this end, we first obtain global bounds on the jump rate and derive a Volterra-type integral equation satisfied by this rate. We then temporarily replace the interaction part of the equation by a deterministic external quantity (which we call the external current). For a constant current, we obtain convergence to the invariant measure. Using a perturbation method, we extend this result to more general external currents. Finally, we prove the result for the non-linear McKean-Vlasov equation.
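    The class of equations at stake can be sketched as follows (a hedged restatement in assumed notation; the paper's formulation may differ):

    ```latex
    % McKean--Vlasov SDE for the membrane potential X_t (restatement;
    % notation assumed): b is the subthreshold drift, f the firing rate,
    % J >= 0 the interaction parameter, N a Poisson random measure with
    % intensity ds du.
    X_t = X_0 + \int_0^t b(X_s)\,\mathrm{d}s
          + J \int_0^t \mathbb{E}\bigl[f(X_s)\bigr]\,\mathrm{d}s
          - \int_0^t \!\! \int_0^{\infty} X_{s^-}\,
            \mathbf{1}_{\{u \le f(X_{s^-})\}}\, N(\mathrm{d}s,\mathrm{d}u).
    % A spike resets the potential to 0 (the -X_{s^-} jump) and feeds back,
    % in the mean-field limit, through the expected rate t -> E[f(X_t)],
    % which is the quantity satisfying the Volterra-type equation.
    ```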

    Stability of the stationary solutions of neural field equations with propagation delays

    In this paper, we consider neural field equations with space-dependent delays. Neural fields are continuous assemblies of mesoscopic models arising when modeling macroscopic parts of the brain. They are modeled by nonlinear integro-differential equations. We rigorously prove, for the first time to our knowledge, sufficient conditions for the stability of their stationary solutions. We use two methods: 1) the computation of the eigenvalues of the linear operator defined by the linearized equations and 2) the formulation of the problem as a fixed-point problem. The first method involves tools of functional analysis and yields a new estimate of the semigroup of the previous linear operator using the eigenvalues of its infinitesimal generator. It yields a sufficient condition for stability which is independent of the characteristics of the delays. The second method allows us to find new sufficient conditions for the stability of the stationary solutions which depend upon the values of the delays. These conditions are very easy to evaluate numerically. We illustrate the conservativeness of the bounds by comparison with numerical simulations.
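    To sketch method 1) (our restatement in assumed notation), the linearization around a stationary solution leads to a delayed eigenvalue problem:

    ```latex
    % Delayed neural field equation in a standard form (restatement;
    % notation assumed): \tau(x,y) >= 0 is the propagation delay.
    \partial_t V(x,t) = -\alpha V(x,t)
      + \int_{\Omega} W(x,y)\,S\bigl(V(y,\,t-\tau(x,y))\bigr)\,\mathrm{d}y + I(x).
    % Linearizing about a stationary solution V^0 and inserting the ansatz
    % U(x,t) = e^{\lambda t}\phi(x) gives the characteristic problem
    \lambda\,\phi(x) = -\alpha\,\phi(x) + \int_{\Omega} W(x,y)\,
      S'\bigl(V^0(y)\bigr)\,e^{-\lambda \tau(x,y)}\,\phi(y)\,\mathrm{d}y;
    % stability holds when this spectrum lies in the open left half-plane.
    ```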

    Persistent neural states: stationary localized activity patterns in nonlinear continuous n-population, q-dimensional neural networks

    Neural continuum networks are an important aspect of the modeling of macroscopic parts of the cortex. Two classes of such networks are considered: voltage- and activity-based. In both cases our networks contain an arbitrary number, $n$, of interacting neuron populations. Spatial non-symmetric connectivity functions represent cortico-cortical, local, connections; external inputs represent non-local connections. Sigmoidal nonlinearities model the relationship between (average) membrane potential and activity. Departing from most of the previous work in this area, we do not assume the nonlinearity to be singular, i.e., represented by the discontinuous Heaviside function. Another important difference with previous work is our relaxing of the assumption that the domain of definition where we study these networks is infinite, i.e. equal to $\mathbb{R}$ or $\mathbb{R}^2$. We explicitly consider the biologically more relevant case of a bounded subset $\Omega$ of $\mathbb{R}^q$, $q=1,2,3$, a better model of a piece of cortex. The time behaviour of these networks is described by systems of integro-differential equations. Using methods of functional analysis, we study the existence and uniqueness of a stationary, i.e., time-independent, solution of these equations in the case of a stationary input. These solutions can be seen as "persistent"; they are also sometimes called "bumps". We show that under very mild assumptions on the connectivity functions, and because we do not use the Heaviside function for the nonlinearities, such solutions always exist. We also give sufficient conditions on the connectivity functions for the solution to be absolutely stable, that is to say, independent of the initial state of the network. We then study the sensitivity of the solution(s) to variations of such parameters as the connectivity functions, the sigmoids, the external inputs, and, last but not least, the shape of the domain of existence $\Omega$ of the neural continuum networks. These theoretical results are illustrated and corroborated by a large number of numerical experiments, in most of the cases $2 \le n \le 3$, $2 \le q \le 3$.
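    The flavour of the fixed-point argument can be sketched as follows (our sketch for the voltage-based form; constants and norms are indicative only):

    ```latex
    % Stationary solutions are fixed points of the map F below.
    (F V)(x) = \int_{\Omega} W(x,y)\,S\bigl(V(y)\bigr)\,\mathrm{d}y
               + I_{\mathrm{ext}}(x).
    % If S is Lipschitz with constant S'_m and W \in L^2(\Omega^2), then
    \|F V_1 - F V_2\|_{L^2} \le S'_m\,\|W\|_{L^2(\Omega^2)}\,\|V_1 - V_2\|_{L^2},
    % so S'_m \|W\|_{L^2(\Omega^2)} < 1 makes F a contraction: a bump exists,
    % is unique, and is reached from any initial state (absolute stability).
    ```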

    On a toy network of neurons interacting through their dendrites

    Consider a large number $n$ of neurons, each being connected to approximately $N$ other ones, chosen at random. When a neuron spikes, which occurs randomly at some rate depending on its electric potential, its potential is set to a minimum value $v_{min}$, and this initiates, after a small delay, two fronts on the (linear) dendrites of all the neurons to which it is connected. Fronts move at constant speed. When two fronts (on the dendrite of the same neuron) collide, they annihilate. When a front hits the soma of a neuron, its potential is increased by a small value $w_n$. Between jumps, the potentials of the neurons are assumed to drift in $[v_{min},\infty)$, according to some well-posed ODE. We prove the existence and uniqueness of a heuristically derived mean-field limit of the system when $n, N \to \infty$ with $w_n \simeq N^{-1/2}$. We make use of some recent versions of the results of Deuschel and Zeitouni \cite{dz} concerning the size of the longest increasing subsequence of an i.i.d. collection of points in the plane. We also study, in a very particular case, a slightly different model where the neurons spike when their potential reaches some maximum value $v_{max}$, and find an explicit formula for the (heuristic) mean-field limit.

    A center manifold result for delayed neural fields equations

    We develop a framework for the study of delayed neural field equations and prove a center manifold theorem for these equations. Specific properties of delayed neural field equations make it impossible to apply existing methods from the literature concerning center manifold results for functional differential equations. Our approach to the proof of the center manifold theorem uses an original combination of results from Vanderbauwhede et al. together with a theory of linear functional differential equations in a history space larger than the commonly used space of time-continuous functions.

    Bifurcations in neural masses

    ISBN: 978-2-9532965-0-1. Neural continuum networks are an important aspect of the modeling of macroscopic parts of the cortex. They were first studied by Amari [6]. These networks were then used as the basis for modeling the visual cortex by Bressloff [4]. From a computational viewpoint, neural masses could be used to perform image processing tasks such as segmentation and contour detection. The neural mass model is also well suited to studying the impact of delays on the dynamics of neural networks; see, for example, Roxin [8]. Thus, there is a need to develop tools (theoretical and numerical) allowing the study of the dynamical and stationary properties of the neural mass equations. In this paper, we look at the dependence of the stationary solutions of neural masses on the stiffness of the nonlinearity. This is done by using bifurcation theory in infinite dimensions. We provide a useful approximation of the connectivity matrix and give numerical examples of bifurcated branches which had not yet been fully computed in the literature. The analysis relies on the study of a simple model, thought to be generic in the sense that it has the properties that any neural mass system should possess generically.
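    A minimal scalar caricature makes the stiffness-driven bifurcation concrete (our illustration, not the model of the paper):

    ```latex
    % Scalar caricature of a neural mass with stiffness \sigma, where S is
    % an odd sigmoid with S(0) = 0 and S'(0) = 1 (illustrative assumptions):
    \dot v = -v + S(\sigma v).
    % Linearizing at v = 0 gives \dot u = (\sigma - 1)\,u: the trivial state
    % is stable for \sigma < 1, and a pair of nontrivial branches bifurcates
    % (a pitchfork) at the critical stiffness \sigma = 1.
    ```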