
    Characterizing the Initial Phase of Epidemic Growth on some Empirical Networks

    A key parameter in models for the spread of infectious diseases is the basic reproduction number $R_0$, defined as the expected number of secondary cases that a typical infected primary case generates during its infectious period in a large, mostly susceptible population. For this quantity to be meaningful, the expected initial growth of the number of infectious individuals in the large-population limit should be exponential. We investigate to what extent this assumption is valid by performing repeated simulations of epidemics on selected empirical networks, viewing each epidemic as a random process in discrete time. The initial phase of each epidemic is analyzed by fitting the number of infected people at each time step to a generalised growth model, which allows the shape of the growth to be estimated. For reference, similar investigations are done on some elementary graphs, such as integer lattices in different dimensions and configuration model graphs, for which the early epidemic behaviour is known. We find that for the empirical networks tested in this paper, exponential growth characterizes the early stages of the epidemic, except when the network is restricted by a strong low-dimensional spatial constraint, as is the case for the two-dimensional square lattice. However, on finite integer lattices of sufficiently high dimension, the early development of epidemics shows exponential growth.
    Comment: To be included in the conference proceedings for SPAS 2017 (International Conference on Stochastic Processes and Algebraic Structures), October 4-6, 2017
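    The abstract does not specify how the generalised growth model is fitted, but a common formulation is dC/dt = r C(t)^p, where the shape parameter p distinguishes exponential growth (p = 1) from sub-exponential growth (p < 1). The sketch below is only an illustrative assumption of such a fit; the function names, initial guesses, and synthetic data are not taken from the paper.

```python
# Illustrative sketch: fit a generalised growth model (GGM) to early epidemic counts.
# GGM: dC/dt = r * C(t)**p, with p in [0, 1]; p = 1 recovers exponential growth.
# All names and parameter choices here are assumptions for demonstration only.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def ggm_cumulative(t, r, p, c0):
    """Integrate dC/dt = r * C**p from C(0) = c0 and return C at the given times."""
    def rhs(c, _t):
        return r * max(c, 1e-12) ** p
    return odeint(rhs, c0, t)[:, 0]

def fit_growth_shape(t, cumulative_cases):
    """Estimate (r, p, c0) from observed cumulative case counts at times t."""
    popt, _ = curve_fit(
        ggm_cumulative, t, cumulative_cases,
        p0=[0.5, 0.9, cumulative_cases[0]],
        bounds=([0.0, 0.0, 0.0], [np.inf, 1.0, np.inf]),
    )
    return popt

# Example: synthetic exponential data should recover a shape parameter p close to 1.
t = np.arange(0.0, 20.0, 1.0)
cases = 5.0 * np.exp(0.3 * t)
r_hat, p_hat, _ = fit_growth_shape(t, cases)
print(f"estimated growth rate r = {r_hat:.3f}, shape p = {p_hat:.3f}")
```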

    Variational Walkback: Learning a Transition Operator as a Stochastic Recurrent Net

    We propose a novel method to directly learn a stochastic transition operator whose repeated application provides generated samples. Traditional undirected graphical models approach this problem indirectly by learning a Markov chain model whose stationary distribution obeys detailed balance with respect to a parameterized energy function. The energy function is then modified so that the model and data distributions match, with no guarantee on the number of steps required for the Markov chain to converge. Moreover, the detailed balance condition is highly restrictive: energy-based models corresponding to neural networks must have symmetric weights, unlike biological neural circuits. In contrast, we develop a method for directly learning arbitrarily parameterized transition operators capable of expressing non-equilibrium stationary distributions that violate detailed balance, thereby enabling us to learn more biologically plausible asymmetric neural networks and more general non-energy-based dynamical systems. The proposed training objective, which we derive via principled variational methods, encourages the transition operator to "walk back" along multi-step trajectories that start at data points, returning as quickly as possible to the original data points. We present a series of experimental results illustrating the soundness of the proposed approach, Variational Walkback (VW), on the MNIST, CIFAR-10, SVHN and CelebA datasets, demonstrating superior samples compared to earlier attempts to learn a transition operator. We also show that although each rapid training trajectory is limited to a finite but variable number of steps, our transition operator continues to generate good samples well past the length of such trajectories, thereby demonstrating that its non-equilibrium stationary distribution matches the data distribution. Source Code: http://github.com/anirudh9119/walkback_nips17
    Comment: To appear at NIPS 2017
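    The authors' actual implementation is in the linked repository; the sketch below is only a minimal, assumed illustration of the walk-back idea: a trajectory is sampled away from a data point at increasing temperature, and the operator is then trained to assign high likelihood to the reverse transitions. The network architecture, temperature schedule, and all names are assumptions, not the paper's method in detail.

```python
# Minimal, illustrative walk-back training step (not the authors' implementation).
import torch
import torch.nn as nn

class TransitionOperator(nn.Module):
    """A simple stochastic transition operator x_t -> x_{t+1} with Gaussian output."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * dim))

    def forward(self, x, temperature=1.0):
        mean, log_std = self.net(x).chunk(2, dim=-1)
        dist = torch.distributions.Normal(mean, log_std.exp() * temperature)
        return dist.rsample(), dist

def walkback_loss(op, x_data, num_steps=5):
    """Walk away from data at rising temperature, then train the operator to
    assign high likelihood to the reverse (walk-back) transitions."""
    xs, x = [x_data], x_data
    for t in range(num_steps):
        x, _ = op(x, temperature=1.5 ** t)   # heated forward trajectory
        x = x.detach()                       # treat trajectory points as fixed targets
        xs.append(x)
    loss = 0.0
    for t in reversed(range(num_steps)):     # walk back: predict xs[t] from xs[t + 1]
        _, dist = op(xs[t + 1], temperature=1.5 ** t)
        loss = loss - dist.log_prob(xs[t]).sum(dim=-1).mean()
    return loss

# Example usage with random vectors standing in for flattened MNIST images.
op = TransitionOperator(dim=784)
optimizer = torch.optim.Adam(op.parameters(), lr=1e-4)
x_batch = torch.randn(32, 784)
loss = walkback_loss(op, x_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```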