In this study we consider limit theorems for microscopic stochastic models of
neural fields. We show that the Wilson-Cowan equation can be obtained as the
limit in probability on compacts for a sequence of microscopic models when the
number of neuron populations distributed in space and the number of neurons per
population tend to infinity, although the latter divergence is not necessary.
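For orientation, one common deterministic form of the Wilson-Cowan neural field equation reads as follows; the symbols $\nu$, $\tau$, $w$, $f$, $I$ and the domain $D$ are our generic notation and may differ from the precise coefficients used in the paper:
\[
  \tau\,\partial_t \nu(t,x) \;=\; -\nu(t,x) \;+\; f\Big(\int_D w(x,y)\,\nu(t,y)\,\mathrm{d}y + I(t,x)\Big),
  \qquad (t,x) \in [0,T]\times D .
\]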
This result also allows us to obtain limits under qualitatively different stochastic
convergence concepts, e.g., convergence in the mean. Further, we present a
central limit theorem for the martingale part of the microscopic models, which,
suitably rescaled, converges to a centered Gaussian process with independent
increments.
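Schematically, in placeholder notation (the scaling sequence $\alpha_n$, the martingale part $M^n$ and the limit $G$ are our symbols, not the paper's precise objects), the statement is of the type
\[
  \alpha_n\, M^n_t \;\Longrightarrow\; G_t \qquad \text{as } n \to \infty,
\]
where $G$ is a centered, Hilbert-space-valued Gaussian process with independent increments.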
These two results provide the basis for formulating the neural field
Langevin equation, a stochastic differential equation taking values in a
Hilbert space, which is the infinite-dimensional analogue of the Chemical
Langevin Equation in the present setting.
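In our own schematic notation (the drift $F$, small parameter $\epsilon$, diffusion coefficient $G$ and cylindrical Wiener process $W$ are placeholders, not the paper's definitions), such a Hilbert-space-valued Langevin equation takes the form
\[
  \mathrm{d}U_t \;=\; F(U_t)\,\mathrm{d}t \;+\; \sqrt{\epsilon}\, G(U_t)\,\mathrm{d}W_t,
\]
where the drift corresponds to the Wilson-Cowan limit and the diffusion term is matched to the covariance of the rescaled martingale part.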
On a technical level we apply recently developed laws of large numbers and
central limit theorems for piecewise deterministic processes taking values in
Hilbert spaces to a master equation formulation of stochastic neuronal network
models. These theorems are valid for Hilbert-space-valued processes and are
thereby able to incorporate the spatial structure of the underlying model.