
    Variational Dropout and the Local Reparameterization Trick

    We investigate a local reparameterization technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the minibatch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior, and a proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments.
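    To make the trick concrete, here is a minimal NumPy sketch of local reparameterization for one fully connected layer with a factorized Gaussian posterior over the weights; the function names and shapes are ours, not from the paper, and a real implementation would use an autodiff framework.

```python
import numpy as np

def local_reparam_layer(A, W_mu, W_logvar, rng):
    """Sample pre-activations B = A @ W with W ~ N(W_mu, exp(W_logvar)).
    Instead of drawing one global W for the whole minibatch, sample B
    directly: each datapoint gets independent noise, so gradient variance
    shrinks with minibatch size (the local reparameterization trick)."""
    gamma = A @ W_mu                         # mean of the pre-activations
    delta = (A ** 2) @ np.exp(W_logvar)      # variance of the pre-activations
    eps = rng.standard_normal(gamma.shape)   # fresh noise per datapoint
    return gamma + np.sqrt(delta) * eps

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 100))           # minibatch of 32 inputs (hypothetical sizes)
W_mu = 0.1 * rng.standard_normal((100, 50))  # variational means
W_logvar = np.full((100, 50), -6.0)          # variational log-variances
B = local_reparam_layer(A, W_mu, W_logvar, rng)
```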

    Auto-Encoding Variational Bayes

    How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contribution is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.
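    The first contribution can be sketched in a few lines of NumPy: the latent variable is rewritten as a deterministic, differentiable function of parameter-free noise, so standard stochastic gradient methods apply to the lower bound. This is a minimal illustration assuming a diagonal-Gaussian posterior and a standard-normal prior; the names are ours.

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    """Draw z ~ N(mu, diag(exp(logvar))) as z = mu + sigma * eps with
    eps ~ N(0, I); gradients w.r.t. mu and logvar flow through the map."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def gaussian_kl(mu, logvar):
    """Analytic KL( N(mu, sigma^2) || N(0, 1) ), summed over latent
    dimensions: the regularization term of the variational lower bound."""
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=-1)

rng = np.random.default_rng(0)
mu = np.zeros((8, 2)); logvar = np.zeros((8, 2))  # e.g. recognition-model outputs
z = reparameterize(mu, logvar, rng)
kl = gaussian_kl(mu, logvar)                      # zero here: q equals the prior
```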

    Efficient Gradient-Based Inference through Transformations between Bayes Nets and Neural Nets

    Hierarchical Bayesian networks and neural networks with stochastic hidden units are commonly perceived as two separate types of models. We show that either of these types of models can often be transformed into an instance of the other, by switching between centered and differentiable non-centered parameterizations of the latent variables. The choice of parameterization greatly influences the efficiency of gradient-based posterior inference; we show that the two parameterizations are often complementary to each other, clarify when each is preferred, and show how inference can be made robust. In the non-centered form, a simple Monte Carlo estimator of the marginal likelihood can be used for learning the parameters. Theoretical results are supported by experiments.
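    For a single Gaussian latent variable, the two parameterizations the abstract contrasts can be sketched as follows; this is a minimal NumPy illustration under our own toy setup, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.5  # parameters of the latent variable's conditional

# Centered parameterization (CP): the latent z is drawn directly, so its
# distribution depends on the parameters and sampling blocks gradients.
z_cp = rng.normal(mu, sigma)

# Differentiable non-centered parameterization (DNCP): the auxiliary noise
# eps is the random variable; z is a deterministic function of it, so
# gradients w.r.t. mu and sigma pass straight through.
eps = rng.standard_normal()
z_dncp = mu + sigma * eps
```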

    An Introduction to Variational Autoencoders

    Variational autoencoders provide a principled framework for learning deep latent-variable models and corresponding inference models. In this work, we provide an introduction to variational autoencoders and some important extensions.

    The 2+1 Kepler Problem and Its Quantization

    We study a system of two pointlike particles coupled to three-dimensional Einstein gravity. The reduced phase space can be considered as a deformed version of the phase space of two special-relativistic point particles in the centre-of-mass frame. When the system is quantized, we find some possibly general effects of quantum gravity, such as a minimal distance and a foaminess of spacetime at the order of the Planck length. We also obtain a quantization of geometry, which restricts the possible asymptotic geometries of the universe.
    Comment: 59 pages, LaTeX2e, 9 eps figures

    Broad Absorption Line Variability in Radio-Loud Quasars

    We investigate C IV broad absorption line (BAL) variability within a sample of 46 radio-loud quasars (RLQs), selected from SDSS/FIRST data to include both core-dominated (39) and lobe-dominated (7) objects. The sample consists primarily of high-ionization BAL quasars, and a substantial fraction have large BAL velocities or equivalent widths; their radio luminosities and radio-loudness values span ~2.5 orders of magnitude. We have obtained 34 new Hobby-Eberly Telescope (HET) spectra of 28 BAL RLQs to compare to earlier SDSS data, and we also incorporate archival coverage (primarily dual-epoch SDSS) for a total set of 78 pairs of equivalent width measurements for 46 BAL RLQs, probing rest-frame timescales of ~80-6000 d (median 500 d). In general, only modest changes in the depths of segments of absorption troughs are observed, akin to those seen in prior studies of BALs in radio-quiet quasars (RQQs). Also similar to previous findings for RQQs, the RLQs studied here are more likely to display BAL variability on longer rest-frame timescales. However, typical values of |Delta_EW| and |Delta_EW|/<EW> are about 40+/-20% lower for BAL RLQs when compared with those of a timescale-matched sample of BAL RQQs. Optical continuum variability is of similar amplitude in BAL RLQs and BAL RQQs; for both RLQs and RQQs, continuum variability tends to be stronger on longer timescales. BAL variability in RLQs does not obviously depend upon their radio luminosities or radio-loudness values, but we do find tentative evidence for greater fractional BAL variability within lobe-dominated RLQs. Enhanced BAL variability within more edge-on (lobe-dominated) RLQs supports some geometrical dependence to the outflow structure.
    Comment: 27 pages, 16 figures, 6 tables, accepted to MNRAS, full Appendix A at http://www.macalester.edu/~bmille13/balrlqs.htm
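    The two-epoch variability measures quoted above can be computed as in the following minimal NumPy sketch; we assume here that <EW> denotes the mean of the two epoch measurements, which may differ from the paper's exact definition, and the function name is ours.

```python
import numpy as np

def ew_variability(ew_epoch1, ew_epoch2):
    """Absolute and fractional equivalent-width change between two epochs:
    |Delta_EW| = |EW2 - EW1| and |Delta_EW|/<EW>, with <EW> taken as the
    two-epoch mean (assumed definition, for illustration only)."""
    ew1 = np.asarray(ew_epoch1, dtype=float)
    ew2 = np.asarray(ew_epoch2, dtype=float)
    delta = np.abs(ew2 - ew1)
    return delta, delta / (0.5 * (ew1 + ew2))
```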