
    Auto-Encoding Sequential Monte Carlo

    We build on auto-encoding sequential Monte Carlo (AESMC): a method for model and proposal learning based on maximizing a lower bound on the log marginal likelihood in a broad family of structured probabilistic models. Our approach relies on the efficiency of sequential Monte Carlo (SMC) for performing inference in structured probabilistic models and the flexibility of deep neural networks to model complex conditional probability distributions. We develop additional theoretical insights and introduce a new training procedure which improves both model and proposal learning. We demonstrate that our approach provides a fast, easy-to-implement, and scalable means for simultaneous model learning and proposal adaptation in deep generative models.
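
    As a rough illustration of the kind of objective involved, the sketch below estimates the SMC lower bound on the log marginal likelihood for a toy linear-Gaussian state-space model; the model, the hand-tuned Gaussian proposal, and all parameter values are illustrative assumptions rather than the paper's actual setup.

```python
# Sketch of the SMC lower bound on log p(y_{1:T}) that AESMC-style training
# maximises. The linear-Gaussian model, the hand-tuned Gaussian proposal, and
# all parameter values are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
T, K = 20, 50                          # time steps, particles
phi, q_var, r_var = 0.9, 1.0, 0.5      # assumed transition/observation noise

# Simulate synthetic observations from the assumed state-space model.
x, y = np.zeros(T), np.zeros(T)
for t in range(T):
    x[t] = phi * (x[t - 1] if t > 0 else 0.0) + rng.normal(scale=np.sqrt(q_var))
    y[t] = x[t] + rng.normal(scale=np.sqrt(r_var))

def log_normal(v, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mean) ** 2 / var)

def smc_log_evidence(y, shift=0.5):
    """SMC estimate of log p(y); its expectation lower-bounds log p(y)."""
    particles = np.zeros(K)
    bound = 0.0
    for t in range(T):
        prior_mean = phi * particles
        # Proposal: prior mean nudged toward y[t], standing in for the
        # learned neural proposal in AESMC.
        prop_mean = (1 - shift) * prior_mean + shift * y[t]
        new = prop_mean + rng.normal(scale=np.sqrt(q_var), size=K)
        log_w = (log_normal(new, prior_mean, q_var)      # transition density
                 + log_normal(y[t], new, r_var)          # observation density
                 - log_normal(new, prop_mean, q_var))    # proposal density
        m = log_w.max()
        bound += m + np.log(np.mean(np.exp(log_w - m)))  # log mean weight
        w = np.exp(log_w - m)
        idx = rng.choice(K, size=K, p=w / w.sum())       # multinomial resampling
        particles = new[idx]
    return bound

print("SMC lower-bound estimate of log p(y):", smc_log_evidence(y))
```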

    Mean field games based on the stable-like processes

    In this paper, we investigate mean field games with K classes of agents who are weakly coupled via the empirical measure. The underlying dynamics of the representative agents is assumed to be a controlled nonlinear Markov process associated with rather general integro-differential generators of Lévy–Khintchine type (with variable coefficients), with the main emphasis on applications to stable and stable-like processes, as well as their various modifications such as tempered stable-like processes or their mixtures with diffusions. We show that the nonlinear measure-valued kinetic equations describing the dynamic law-of-large-numbers limit for systems with a large number N of agents are solvable, and that their solutions represent 1/N-Nash equilibria for the approximating systems of N agents.
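
    For orientation, a generator of Lévy–Khintchine type with variable coefficients has the standard form shown below; the notation is illustrative and not taken from the paper.

```latex
% Standard integro-differential generator of Levy-Khintchine type with
% variable coefficients (illustrative notation, not the paper's).
\[
  L f(x) \;=\; b(x)\cdot\nabla f(x)
  \;+\; \tfrac{1}{2}\,\operatorname{tr}\!\bigl(G(x)\,\nabla^{2} f(x)\bigr)
  \;+\; \int_{\mathbb{R}^{d}\setminus\{0\}}
        \bigl[f(x+y)-f(x)-\mathbf{1}_{\{|y|\le 1\}}\,y\cdot\nabla f(x)\bigr]
        \,\nu(x,\mathrm{d}y).
\]
% For stable-like processes the Levy kernel \nu(x, .) scales roughly as
% |y|^{-(d+\alpha(x))} dy with a state-dependent stability index \alpha(x) in (0,2).
```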

    Inverse Problems and Data Assimilation

    These notes are designed with the aim of providing a clear and concise introduction to the subjects of Inverse Problems and Data Assimilation, and their inter-relations, together with citations to some relevant literature in this area. The first half of the notes is dedicated to studying the Bayesian framework for inverse problems. Techniques such as importance sampling and Markov chain Monte Carlo (MCMC) methods are introduced; these methods have the desirable property that in the limit of an infinite number of samples they reproduce the full posterior distribution. Since it is often computationally intensive to implement these methods, especially in high-dimensional problems, approximate techniques such as approximating the posterior by a Dirac or a Gaussian distribution are discussed. The second half of the notes covers data assimilation. This refers to a particular class of inverse problems in which the unknown parameter is the initial condition of a dynamical system (and, in the stochastic dynamics case, the subsequent states of the system), and the data comprise partial and noisy observations of that (possibly stochastic) dynamical system. We will also demonstrate that methods developed in data assimilation may be employed to study generic inverse problems, by introducing an artificial time to generate a sequence of probability measures interpolating from the prior to the posterior.
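
    As a concrete instance of the sampling techniques mentioned above, the sketch below applies self-normalised importance sampling, with the prior as proposal, to a toy Bayesian inverse problem; the forward map, prior, and noise level are illustrative assumptions, not examples from the notes.

```python
# Minimal sketch of self-normalised importance sampling for a Bayesian inverse
# problem: the Gaussian prior, nonlinear forward map, and noise level are
# illustrative assumptions, not taken from the notes.
import numpy as np

rng = np.random.default_rng(1)

def forward(u):
    """Assumed forward map G(u); a simple nonlinear observation operator."""
    return np.array([u[0] + u[1] ** 2, u[0] * u[1]])

# Synthetic data: y = G(u_true) + noise.
u_true = np.array([0.5, -1.0])
noise_std = 0.1
y = forward(u_true) + rng.normal(scale=noise_std, size=2)

def log_likelihood(u):
    r = y - forward(u)
    return -0.5 * np.sum(r ** 2) / noise_std ** 2

# Draw samples from the prior (standard Gaussian) and weight by the likelihood.
N = 5000
samples = rng.normal(size=(N, 2))
log_w = np.array([log_likelihood(u) for u in samples])
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Posterior-mean estimate; in the limit of infinitely many samples this
# recovers the exact posterior expectation.
posterior_mean = w @ samples
print("posterior mean estimate:", posterior_mean, " true parameter:", u_true)
```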