
    Gauge invariance in Loop Quantum Cosmology: Hamilton-Jacobi and Mukhanov-Sasaki equations for scalar perturbations

    Gauge invariance of scalar perturbations is studied together with the associated equations of motion. Extending methods developed in the framework of Hamiltonian General Relativity, the Hamilton-Jacobi equation is investigated in detail in Loop Quantum Cosmology. The gauge-invariant observables are built and their equations of motion are reviewed in both the Hamiltonian and Lagrangian approaches. This method is applied to scalar perturbations with either holonomy or inverse-volume corrections.
    Comment: 16 pages
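
    For orientation, the classical Mukhanov-Sasaki equation that this gauge-invariant treatment generalizes can be written as follows (standard notation; the holonomy- and inverse-volume-corrected versions derived in the paper are not reproduced here):

        \[
        v_k'' + \left(k^2 - \frac{z''}{z}\right) v_k = 0,
        \qquad
        z = \frac{a\,\dot{\phi}}{H},
        \]

    where primes denote conformal-time derivatives, $v_k$ is the gauge-invariant Mukhanov-Sasaki variable, $a$ the scale factor, $\phi$ the scalar field and $H$ the Hubble rate.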

    Anomaly-free perturbations with inverse-volume and holonomy corrections in Loop Quantum Cosmology

    This article addresses the issue of the closure of the algebra of constraints for generic (cosmological) perturbations when the two main corrections of effective loop quantum cosmology, namely the holonomy and inverse-volume terms, are taken into account simultaneously. Previous works on either the holonomy or the inverse-volume case are reviewed and generalized. In the inverse-volume case, we point out new possibilities. An anomaly-free solution including both corrections is found for perturbations, and the corresponding equations of motion are derived.
    Comment: previous mistake corrected, leading to a new result
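
    Schematically, the anomaly-freedom requirement is that the bracket of two corrected scalar constraints still closes on the diffeomorphism constraint, at most up to a deformation factor. In the pure holonomy-corrected case this closure is usually quoted as (indicative notation; the exact counterterms, and the combined holonomy plus inverse-volume case, are worked out in the paper):

        \[
        \{H[N_1], H[N_2]\} = \Omega\, D\!\left[q^{ab}\left(N_1 \partial_b N_2 - N_2 \partial_b N_1\right)\right],
        \qquad
        \Omega = 1 - \frac{2\rho}{\rho_c},
        \]

    so the algebra closes, but with a density-dependent deformation $\Omega$ that changes sign near the bounce.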

    Primordial tensor power spectrum in holonomy corrected Omega-LQC

    The holonomy correction is one of the main terms arising when loop quantum gravity ideas are implemented at an effective level in cosmology. The recent construction of an anomaly-free algebra has shown that the formalism used up to now to derive the primordial spectrum of fluctuations was not correct. This article aims at computing the tensor spectrum in a fully consistent way within this deformed and closed algebra.
    Comment: 5 pages, 6 figures, accepted by Phys. Rev.
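
    For reference, the holonomy-corrected ("Omega-deformed") tensor mode equation usually quoted in this context has the schematic form (assumed standard conventions; the paper's precise definitions may differ):

        \[
        v_k'' + \left(\Omega\, k^2 - \frac{z_T''}{z_T}\right) v_k = 0,
        \qquad
        z_T = \frac{a}{\sqrt{\Omega}},
        \qquad
        \Omega = 1 - \frac{2\rho}{\rho_c},
        \]

    where primes denote conformal-time derivatives; the effective propagation of the modes is modified by the deformation factor $\Omega$, which is what distinguishes the consistent spectrum computed here from earlier treatments.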

    Stabilizing Training of Generative Adversarial Networks through Regularization

    Deep generative models based on Generative Adversarial Networks (GANs) have demonstrated impressive sample quality, but in order to work they require a careful choice of architecture, parameter initialization, and hyper-parameters. This fragility is in part due to a dimensional mismatch or non-overlapping support between the model distribution and the data distribution, causing their density ratio and the associated f-divergence to be undefined. We overcome this fundamental limitation and propose a new regularization approach with low computational cost that yields a stable GAN training procedure. We demonstrate the effectiveness of this regularizer across several architectures trained on common benchmark image-generation tasks. Our regularization turns GAN models into reliable building blocks for deep learning.
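
    As a concrete illustration of the kind of low-cost regularizer described above, the following is a minimal sketch (not necessarily the paper's exact formulation) of adding a gradient-norm penalty on the discriminator logits to the standard non-saturating GAN loss, using PyTorch; D, x_real and x_fake are assumed to be a discriminator module and batches of real and generated samples:

        import torch
        import torch.nn.functional as F

        def discriminator_loss_with_penalty(D, x_real, x_fake, gamma=2.0):
            # Make inputs differentiable so gradient norms can be computed.
            x_real = x_real.clone().requires_grad_(True)
            x_fake = x_fake.clone().requires_grad_(True)

            d_real = D(x_real)  # logits on data samples
            d_fake = D(x_fake)  # logits on generator samples

            # Standard binary cross-entropy discriminator loss.
            loss = F.softplus(-d_real).mean() + F.softplus(d_fake).mean()

            # Squared gradient norms of the logits w.r.t. the inputs,
            # acting as a smoothing penalty on the discriminator.
            grad_real = torch.autograd.grad(d_real.sum(), x_real, create_graph=True)[0]
            grad_fake = torch.autograd.grad(d_fake.sum(), x_fake, create_graph=True)[0]
            penalty = (grad_real.pow(2).flatten(1).sum(1).mean()
                       + grad_fake.pow(2).flatten(1).sum(1).mean())

            return loss + 0.5 * gamma * penalty

    The penalty weight gamma is a hypothetical default; in practice it would be tuned per task.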

    Inflation in loop quantum cosmology: Dynamics and spectrum of gravitational waves

    Loop quantum cosmology provides an efficient framework to study the evolution of the Universe beyond the classical Big Bang paradigm. Because of holonomy corrections, the singularity is replaced by a "bounce". The dynamics of the background is investigated in detail as a function of the parameters of the model. In particular, the conditions required for inflation to occur are carefully considered and are shown to be generically met. The propagation of gravitational waves is then investigated in this framework. By both numerical and analytical approaches, the primordial tensor power spectrum is computed for a wide range of parameters. Several interesting features could be observationally probed.
    Comment: 11 pages, 14 figures. Matches version published in Phys. Rev.
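
    As background for the "bounce" mentioned above, the holonomy-corrected effective Friedmann equation of loop quantum cosmology (standard effective form; notation may differ slightly from the paper) reads

        \[
        H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right),
        \]

    so the Hubble rate vanishes when the energy density $\rho$ reaches the critical density $\rho_c$, and the classical singularity is replaced by a bounce.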

    Variance Reduced Stochastic Gradient Descent with Neighbors

    Stochastic Gradient Descent (SGD) is a workhorse in machine learning, yet its slow convergence can be a computational bottleneck. Variance-reduction techniques such as SAG, SVRG and SAGA have been proposed to overcome this weakness, achieving linear convergence. However, these methods are either based on computations of full gradients at pivot points, or on keeping per-data-point corrections in memory; therefore, speed-ups relative to SGD may need a minimal number of epochs to materialize. This paper investigates algorithms that can exploit neighborhood structure in the training data to share and re-use information about past stochastic gradients across data points, which offers advantages in the transient optimization phase. As a by-product, we provide a unified convergence analysis for a family of variance-reduction algorithms, which we call memorization algorithms. We provide experimental results supporting our theory.
    Comment: Appears in Advances in Neural Information Processing Systems 28 (NIPS 2015). 13 pages
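
    A minimal sketch of the "memorization with neighbors" idea described above is given below: a simplified SAGA-style update in which the freshly computed gradient is also written into the memory slots of neighboring data points. grad_fn and neighbors are assumed inputs, and this is not the paper's exact algorithm:

        import numpy as np

        def neighbor_saga(grad_fn, n, dim, neighbors, lr=0.1, epochs=10, seed=0):
            # grad_fn(i, w): gradient of the i-th loss term at parameters w.
            # neighbors[i]: indices of data points considered close to i.
            rng = np.random.default_rng(seed)
            w = np.zeros(dim)
            memory = np.zeros((n, dim))       # per-data-point gradient memory
            mean_mem = memory.mean(axis=0)    # running mean of the memory
            for _ in range(epochs * n):
                i = rng.integers(n)
                g = grad_fn(i, w)
                # SAGA-style variance-reduced step using the stored gradient for i.
                w = w - lr * (g - memory[i] + mean_mem)
                # Share the fresh gradient with i and its neighbors.
                for j in [i] + list(neighbors[i]):
                    mean_mem = mean_mem + (g - memory[j]) / n
                    memory[j] = g
            return w

    Sharing the gradient across a neighborhood introduces a small bias but lets the memory warm up faster, which is where the advantage in the transient phase comes from.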