
    Tokamak plasma boundary reconstruction using toroidal harmonics and an optimal control method

    This paper proposes a new fast and stable algorithm for reconstructing the plasma boundary from discrete magnetic measurements taken at several locations surrounding the vacuum vessel. The resolution of this inverse problem proceeds in two steps. In the first, we transform the set of measurements into Cauchy conditions on a fixed contour Γ_O close to the measurement points. This is done by least-squares fitting a truncated series of toroidal harmonic functions to the measurements. The second step consists in solving a Cauchy problem for the elliptic equation satisfied by the flux in the vacuum, with the overdetermined boundary conditions on Γ_O obtained in the first step. It is reformulated as an optimal control problem on a fixed annular domain with external boundary Γ_O and fictitious inner boundary Γ_I. A regularized Kohn-Vogelius cost function, depending on the value of the flux on Γ_I, is minimized; it measures the discrepancy between the solution to the flux equation obtained using Dirichlet conditions on Γ_O and the one obtained using Neumann conditions. The method presented here has led to the development of software, called VacTH-KV, which enables plasma boundary reconstruction in any Tokamak. Comment: Fusion Science and Technology, 201
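The first step described above, fitting a truncated harmonic series to discrete measurements by least squares, can be sketched as follows. For illustration a plain Fourier basis on a circular contour stands in for the true toroidal harmonic functions, and the probe positions, noise level, and number of harmonics are all hypothetical choices, not the paper's.

```python
import numpy as np

def design_matrix(theta, n_harmonics):
    """Columns: 1, cos(k*theta), sin(k*theta) for k = 1..n_harmonics."""
    cols = [np.ones_like(theta)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(k * theta))
        cols.append(np.sin(k * theta))
    return np.column_stack(cols)

# Synthetic "measurements" of the flux at probe angles around the vessel.
rng = np.random.default_rng(0)
theta_probes = np.linspace(0.0, 2 * np.pi, 40, endpoint=False)
true_flux = 1.0 + 0.5 * np.cos(theta_probes) + 0.2 * np.sin(2 * theta_probes)
measurements = true_flux + 0.01 * rng.standard_normal(theta_probes.size)

# Least-squares fit of the truncated series (step 1 of the algorithm).
A = design_matrix(theta_probes, n_harmonics=4)
coeffs, *_ = np.linalg.lstsq(A, measurements, rcond=None)

# The fitted series can now be evaluated anywhere on the fixed contour
# Gamma_O, providing the Cauchy data used in the optimal-control step.
theta_contour = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
flux_on_contour = design_matrix(theta_contour, 4) @ coeffs
```

The fitted coefficients recover the synthetic flux to within the noise level, which is what makes the transformed Cauchy data on Γ_O usable in the second step.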

    Local/global analysis of the stationary solutions of some neural field equations

    Neural or cortical fields are continuous assemblies of mesoscopic models, also called neural masses, of neural populations that are fundamental in the modeling of macroscopic parts of the brain. Neural fields are described by nonlinear integro-differential equations. The solutions of these equations represent the state of activity of these populations when submitted to inputs from neighbouring brain areas. Understanding the properties of these solutions is essential in advancing our understanding of the brain. In this paper we study the dependence of the stationary solutions of the neural field equations on the stiffness of the nonlinearity and the contrast of the external inputs. This is done by using degree theory and bifurcation theory in the context of functional, in particular infinite-dimensional, spaces. The joint use of these two theories allows us to make new detailed predictions about the global and local behaviours of the solutions. We also provide a generic finite-dimensional approximation of these equations which allows us to study two models in great detail. The first model is a neural mass model of a cortical hypercolumn of orientation-sensitive neurons, the ring model. The second model is a general neural field model where the spatial connectivity is described by heterogeneous Gaussian-like functions. Comment: 38 pages, 9 figures
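A minimal sketch of the kind of finite-dimensional approximation mentioned above: the stationary equation V(x) = ∫ w(x − y) S(V(y)) dy + I(x) discretized on a grid and solved by fixed-point iteration. The Gaussian-like kernel, sigmoid stiffness `mu`, and input contrast are illustrative assumptions, not the paper's parameters, and the kernel is scaled so the iteration is a contraction.

```python
import numpy as np

def stationary_solution(n=200, L=10.0, mu=2.0, contrast=0.5, tol=1e-10):
    """Fixed-point iteration for V = W * S(V) + I on a 1-D grid."""
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = L / n
    # Gaussian-like connectivity kernel w(x - y), scaled for contraction.
    W = 0.2 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
    S = lambda v: 1.0 / (1.0 + np.exp(-mu * v))   # sigmoid, stiffness mu
    I = contrast * np.exp(-x ** 2)                 # localized external input
    V = np.zeros(n)
    for _ in range(1000):
        V_new = dx * W @ S(V) + I
        converged = np.max(np.abs(V_new - V)) < tol
        V = V_new
        if converged:
            break
    return x, V
```

Varying `mu` (stiffness) and `contrast` in this discretized setting is the numerical counterpart of the two-parameter dependence the paper analyzes with degree and bifurcation theory.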

    Stochastic neural field equations: A rigorous footing

    We extend the theory of neural fields, which has been developed in a deterministic framework, by considering the influence of spatio-temporal noise. The outstanding problem we address here is the development of a theory that gives rigorous meaning to stochastic neural field equations, and of conditions ensuring that they are well-posed. Previous investigations in the field of computational and mathematical neuroscience have been numerical for the most part. Such questions have been considered for a long time in the theory of stochastic partial differential equations, where at least two different approaches have been developed, each having its advantages and disadvantages. It turns out that both approaches have also been used in computational and mathematical neuroscience, but with much less emphasis on the underlying theory. We present a review of two existing theories and show how they can be used to put the theory of stochastic neural fields on a rigorous footing. We also provide general conditions on the parameters of the stochastic neural field equations under which we guarantee that these equations are well-posed. In so doing we relate each approach to previous work in computational and mathematical neuroscience. We hope this will provide a reference that will pave the way for future studies (both theoretical and applied) of these equations, where basic questions of existence and uniqueness will no longer be a cause for concern.
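The kind of numerical treatment the paper contrasts with its rigorous theory can be sketched as a spatially discretized Euler–Maruyama scheme for dV = (−V + w ∗ S(V) + I) dt + σ dW_t. This is purely illustrative: the kernel, nonlinearity, and noise amplitude σ are assumptions, and discretizing space-time white noise this way is exactly the step whose rigorous meaning the paper establishes.

```python
import numpy as np

def simulate(n=100, L=10.0, T=1.0, dt=1e-3, sigma=0.05, seed=1):
    """Euler--Maruyama for a discretized stochastic neural field."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = L / n
    W = 0.2 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)  # connectivity
    S = lambda v: np.tanh(v)                                  # firing-rate nonlinearity
    I = np.exp(-x ** 2)                                       # deterministic input
    V = np.zeros(n)
    for _ in range(int(T / dt)):
        drift = -V + dx * W @ S(V) + I
        # Noise increment, independent per grid point and time step.
        V = V + dt * drift + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return x, V
```

Whether such a scheme converges, and to what limiting object, depends on which of the two SPDE frameworks one adopts; that is precisely the well-posedness question the abstract raises.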

    Illusions in the Ring Model of visual orientation selectivity

    The Ring Model of orientation tuning is a dynamical model of a hypercolumn of visual area V1 in the human neocortex that has been designed to account for the experimentally observed orientation tuning curves by local, i.e., cortico-cortical, computations. The tuning curves are stationary, i.e. time-independent, solutions of this dynamical model. One important assumption underlying the Ring Model is that the LGN input to V1 is weakly tuned to the retinal orientation and that it is the local computations in V1 that sharpen this tuning. Because the equations that describe the Ring Model have built-in equivariance properties in the synaptic weight distribution with respect to a particular group acting on the retinal orientation of the stimulus, the model in effect encodes an infinite number of tuning curves that are arbitrarily translated with respect to each other. By using the Orbit Space Reduction technique we rewrite the model equations in canonical form as functions of polynomials that are invariant with respect to the action of this group. This allows us to combine equivariant bifurcation theory with an efficient numerical continuation method in order to compute the tuning curves predicted by the Ring Model. Surprisingly, some of these tuning curves are not tuned to the stimulus. We interpret them as neural illusions and show numerically how they can be induced by simple dynamical stimuli. These neural illusions are important biological predictions of the model. If they could be observed experimentally, this would be a strong point in favour of the Ring Model. We also show how our theoretical analysis allows us to specify very simply the ranges of the model parameters by comparing the model predictions with published experimental observations. Comment: 33 pages, 12 figures
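The translation equivariance described above can be seen in a toy discretization of the Ring Model's stationary equation, with cosine connectivity w(θ) = J0 + J1 cos(2θ) and a weakly tuned input. A plain fixed-point iteration stands in for the paper's equivariant bifurcation and continuation machinery, and all parameter values are illustrative, not fitted to V1 data.

```python
import numpy as np

def ring_tuning_curve(theta0, n=180, J0=-1.0, J1=2.0, c=1.0, eps=0.1, mu=1.5):
    """Stationary tuning curve of a ring model by fixed-point iteration.
    Input is weakly tuned (contrast eps) around orientation theta0."""
    theta = np.linspace(0.0, np.pi, n, endpoint=False)
    dtheta = np.pi / n
    # Circulant cosine connectivity on the ring of orientations.
    W = J0 + J1 * np.cos(2.0 * (theta[:, None] - theta[None, :]))
    S = lambda v: 1.0 / (1.0 + np.exp(-mu * v))
    I = c * (1.0 - eps + eps * np.cos(2.0 * (theta - theta0)))  # weak LGN tuning
    V = np.zeros(n)
    for _ in range(2000):
        V = (dtheta / np.pi) * W @ S(V) + I
    return theta, V
```

Because the connectivity matrix is circulant, rotating the stimulus orientation by a grid multiple simply rotates the tuning curve, which is the discrete trace of the group equivariance that makes the model encode a continuum of translated tuning curves.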

    Asymptotic description of stochastic neural networks. II - Characterization of the limit law

    We continue the development, started in the first part of this work, of the asymptotic description of certain stochastic neural networks. We use the Large Deviation Principle (LDP) and the good rate function H announced there to prove that H has a unique minimum mu_e, a stationary measure on the set of trajectories. We characterize this measure by its two marginals, at time 0 and from time 1 to T. The second marginal is a stationary Gaussian measure. With an eye on applications, we show that its mean and covariance operator can be inductively computed. Finally, we use the LDP to establish various convergence results, averaged and quenched.
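The idea of computing a Gaussian measure's mean and covariance by induction can be illustrated, very loosely, on a scalar linear-Gaussian recursion X_{t+1} = a X_t + b + σ ξ_t, whose moments propagate in closed form step by step. The recursion and its parameters are hypothetical stand-ins, not the paper's network dynamics.

```python
def propagate_moments(a, b, sigma, m0, v0, T):
    """Means and variances of X_0..X_T for X_{t+1} = a X_t + b + sigma*xi_t,
    xi_t standard Gaussian, computed by induction on t."""
    means, variances = [m0], [v0]
    for _ in range(T):
        means.append(a * means[-1] + b)                     # m_{t+1} = a m_t + b
        variances.append(a ** 2 * variances[-1] + sigma ** 2)  # v_{t+1} = a^2 v_t + s^2
    return means, variances
```

For |a| < 1 the moments converge to the stationary values b/(1 − a) and σ²/(1 − a²), the one-dimensional analogue of a stationary Gaussian measure determined by an inductively computable mean and covariance.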