
    Piecewise deterministic Markov processes in biological models

    We present a short introduction to the framework of piecewise deterministic Markov processes. We illustrate the abstract mathematical setting with a series of examples related to dispersal of biological systems, cell cycle models, gene expression, physiologically structured populations, and neural activity. General results concerning asymptotic properties of stochastic semigroups induced by such Markov processes are applied to specific examples.
    Comment: in: Semigroup of Operators - Theory and Applications, J. Banasiak et al. (eds.), Springer Proceedings in Mathematics & Statistics 113, (2015), pp. 235-25
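A piecewise deterministic Markov process of the kind surveyed above can be sketched with a minimal simulation. The two-state gene-expression model below (gene switching at rates `k_on`/`k_off`, protein synthesis rate `b`, decay rate `a`) and all parameter values are illustrative assumptions, not taken from the paper:

```python
import math
import random

def simulate_pdmp(t_end, k_on=1.0, k_off=0.5, b=2.0, a=1.0, x0=0.0, seed=0):
    """Simulate a two-state gene-expression PDMP (illustrative model).

    The gene switches between OFF (s=0) and ON (s=1) at exponential rates
    k_on / k_off; between switches the protein level follows the
    deterministic flow dx/dt = b*s - a*x, which is solved exactly.
    Returns the (time, state, level) trajectory at switching instants.
    """
    rng = random.Random(seed)
    t, s, x = 0.0, 0, x0
    path = [(t, s, x)]
    while t < t_end:
        rate = k_on if s == 0 else k_off
        tau = rng.expovariate(rate)          # waiting time to the next jump
        tau = min(tau, t_end - t)            # do not integrate past t_end
        # exact solution of dx/dt = b*s - a*x over a window of length tau
        x_inf = b * s / a
        x = x_inf + (x - x_inf) * math.exp(-a * tau)
        t += tau
        if t < t_end:
            s = 1 - s                        # flip the gene state
        path.append((t, s, x))
    return path
```

Between jumps the dynamics are purely deterministic, so no time-stepping error is introduced; randomness enters only through the exponential switching times, which is the defining feature of a PDMP.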

    Weak convergence of marked point processes generated by crossings of multivariate jump processes. Applications to neural network modeling

    We consider the multivariate point process determined by the crossing times of the components of a multivariate jump process through a multivariate boundary, assuming that each component is reset to an initial value after its boundary crossing. We prove that this point process converges weakly to the point process determined by the crossing times of the limit process. This holds for both diffusion and deterministic limit processes. The almost sure convergence of the first passage times under the almost sure convergence of the processes is also proved. The particular case of a multivariate Stein process converging to a multivariate Ornstein-Uhlenbeck process is discussed as a guideline for applying diffusion limits to jump processes. We apply our theoretical findings to neural network modeling. The proposed model gives a mathematical foundation to the generalization of the class of Leaky Integrate-and-Fire models for single neural dynamics to the case of a firing network of neurons. This will help future study of dependent spike trains.
    Comment: 20 pages, 1 figure
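The single-component Stein dynamics mentioned above can be illustrated with a short first-passage sketch: a leaky membrane potential receives excitatory and inhibitory Poisson-driven jumps and spikes when it crosses a boundary. The function name and all parameter values are illustrative assumptions, not from the paper:

```python
import math
import random

def stein_first_passage(threshold=1.0, a=0.1, lam_e=110.0, lam_i=100.0,
                        theta=1.0, t_max=50.0, seed=0):
    """First-passage (spike) time of a one-dimensional Stein-type model.

    Jumps of size +a arrive at rate lam_e and -a at rate lam_i; between
    jumps the potential decays as dV/dt = -V/theta.  The spike time is the
    first boundary crossing; returns None if no spike occurs before t_max.
    """
    rng = random.Random(seed)
    t, v = 0.0, 0.0
    total = lam_e + lam_i                    # rate of the merged Poisson stream
    while True:
        tau = rng.expovariate(total)         # time to the next synaptic event
        t += tau
        if t >= t_max:
            return None
        v *= math.exp(-tau / theta)          # exact leak between jumps
        v += a if rng.random() < lam_e / total else -a
        if v >= threshold:
            return t                         # boundary crossing = spike
```

Shrinking the jump size `a` while scaling the rates so that the drift `a*(lam_e - lam_i)` and variance `a**2*(lam_e + lam_i)` stay fixed is the standard route to the Ornstein-Uhlenbeck diffusion limit the abstract refers to.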

    The Hitchhiker's Guide to Nonlinear Filtering

    Nonlinear filtering is the problem of online estimation of a dynamic hidden variable from incoming data, with vast applications in fields ranging from engineering and machine learning to economics and the natural sciences. We start our review of the theory of nonlinear filtering from the simplest `filtering' task we can think of, namely static Bayesian inference. From there we continue our journey through discrete-time models, which are usually encountered in machine learning, before generalizing to, and placing particular emphasis on, continuous-time filtering theory. The idea of changing the probability measure connects and elucidates several aspects of the theory, such as the parallels between the discrete- and continuous-time problems and between different observation models. Furthermore, it gives insight into the construction of particle filtering algorithms. This tutorial is targeted at scientists and engineers and should serve as an introduction to the main ideas of nonlinear filtering, and as a segue to more advanced and specialized literature.
    Comment: 64 pages
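The particle-filtering construction the abstract alludes to can be sketched as a bootstrap filter on a toy state-space model. The model, its parameters, and the function name below are illustrative assumptions, not anything specified by the review:

```python
import math
import random

def bootstrap_pf(observations, n_particles=500, sigma_x=0.5, sigma_y=0.5, seed=0):
    """Bootstrap particle filter for an illustrative toy model:
        x_t = 0.9 * x_{t-1} + sigma_x * w_t,    y_t = x_t + sigma_y * v_t
    with standard-normal noises.  Returns the posterior-mean estimate of
    the hidden state x_t at each time step.
    """
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # 1. propagate each particle through the transition model
        particles = [0.9 * x + rng.gauss(0.0, sigma_x) for x in particles]
        # 2. weight by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((y - x) / sigma_y) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        means.append(sum(w * x for w, x in zip(weights, particles)))
        # 3. multinomial resampling to equalize the weights
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means
```

The propagate/weight/resample loop is exactly the importance-sampling view of filtering that a change of probability measure makes transparent: the weights are the likelihood ratio between the observation-informed and the prior path measures.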