
    Dropout Inference in Bayesian Neural Networks with Alpha-divergences

    To obtain uncertainty estimates from real-world Bayesian deep learning models, practical inference approximations are needed. Dropout variational inference (VI), for example, has been used for machine vision and medical applications, but VI can severely underestimate model uncertainty. Alpha-divergences are alternatives to VI's KL objective that avoid this underestimation, but they are hard to use in practice: existing techniques only support Gaussian approximating distributions and require radical changes to existing models, and so are of limited use for practitioners. We propose a re-parametrisation of the alpha-divergence objectives, deriving a simple inference technique which, together with dropout, can be applied to existing models by simply changing the loss. We demonstrate improved uncertainty estimates and accuracy compared to VI in dropout networks. We study our model's epistemic uncertainty far away from the data using adversarial images, showing that these can be distinguished from non-adversarial images by examining our model's uncertainty.
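
    The "changing the loss" step can be illustrated with a short sketch. This is a minimal illustration under assumptions, not the authors' code: it assumes a PyTorch classifier whose dropout layers remain stochastic during training, and the function name alpha_dropout_loss and the defaults k=10, alpha=0.5 are chosen here only for exposition.

```python
import math
import torch
import torch.nn.functional as F

def alpha_dropout_loss(model, x, y, k=10, alpha=0.5):
    """Sketch of a re-parametrised alpha-divergence objective:
    average K stochastic (dropout) forward passes inside a log-sum-exp."""
    log_liks = []
    for _ in range(k):
        logits = model(x)  # model is in train() mode, so dropout masks are re-sampled each pass
        log_liks.append(-F.cross_entropy(logits, y, reduction="none"))  # log p(y | x, w_k) per example
    log_liks = torch.stack(log_liks, dim=0)  # shape (K, batch)
    # -(1/alpha) * log( (1/K) * sum_k p(y | x, w_k)^alpha ), per data point
    per_point = -(torch.logsumexp(alpha * log_liks, dim=0) - math.log(k)) / alpha
    return per_point.mean()  # plus weight decay on the parameters for the prior term
```

    At test time the same stochastic forward passes give a Monte Carlo predictive distribution, and the spread across passes (for example, predictive entropy) can serve as the kind of uncertainty signal the abstract uses to separate adversarial from non-adversarial images.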

    Approximations for the Moments of Nonstationary and State Dependent Birth-Death Queues

    In this paper we propose a new method for approximating the nonstationary moment dynamics of one-dimensional Markovian birth-death processes. By expanding the transition probabilities of the Markov process in terms of Poisson-Charlier polynomials, we are able to estimate any moment of the Markov process even though the system of moment equations may not be closed. Using new weighted discrete Sobolev spaces, we derive explicit error bounds for the transition probabilities and new weak a priori estimates for approximating the moments of the Markov process with a truncated form of the expansion. Using these error bounds and estimates, we show that our approximations converge to the true stochastic process as more terms are added to the expansion, and we give explicit bounds on the truncation error. As a result, this is the first paper in the queueing literature to provide error bounds and estimates on the performance of a moment closure approximation. Lastly, we perform several numerical experiments on important models from the queueing theory literature and show that our expansion techniques accurately estimate the moment dynamics of these Markov processes with only a few terms of the expansion.
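
    For orientation, a truncated Poisson-Charlier expansion of the kind described above can be written, in one common notation, as follows; the Poisson reference rate \(\theta(t)\), the coefficients \(a_m(t)\), and the truncation level \(M\) are illustrative placeholders rather than the paper's exact construction.

\[
P\{X(t) = j\} \;\approx\; \frac{e^{-\theta(t)}\,\theta(t)^{j}}{j!} \sum_{m=0}^{M} a_m(t)\, C_m\bigl(j;\theta(t)\bigr), \qquad j = 0, 1, 2, \dots
\]

    Here the Charlier polynomials \(C_m(\cdot\,;\theta)\) are orthogonal with respect to the Poisson(\(\theta\)) weight; the time-dependent coefficients would be driven by the birth and death rates through the Kolmogorov forward equations, and moments of \(X(t)\) are estimated from the truncated sum, with \(M\) controlling the truncation error.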