    Generative Invertible Quantum Neural Networks

    Invertible Neural Networks (INN) have become established tools for the simulation and generation of highly complex data. We propose a quantum-gate algorithm for a Quantum Invertible Neural Network (QINN) and apply it to the LHC data of jet-associated production of a Z-boson that decays into leptons, a standard candle process for particle collider precision measurements. We compare the QINN's performance for different loss functions and training scenarios. For this task, we find that a hybrid QINN matches the performance of a significantly larger purely classical INN in learning and generating complex data.
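
    For context, the classical INN baseline mentioned above is typically assembled from invertible coupling blocks. The sketch below is a minimal NumPy affine coupling layer with an exact inverse; it illustrates only this generic classical building block, not the quantum-gate circuit proposed in the paper, and all class and parameter names are illustrative.

```python
import numpy as np

class AffineCoupling:
    """Minimal affine coupling layer: split x into (x1, x2) and transform
    x2 conditioned on x1, so the map is invertible in closed form."""

    def __init__(self, dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.d = dim // 2
        # Tiny two-layer MLP that outputs a log-scale s and shift t for x2.
        self.W1 = rng.normal(0.0, 0.1, (hidden, self.d))
        self.W2 = rng.normal(0.0, 0.1, (2 * (dim - self.d), hidden))

    def _scale_shift(self, x1):
        h = np.tanh(self.W1 @ x1)
        s, t = np.split(self.W2 @ h, 2)
        return np.tanh(s), t                      # bounded log-scale for stability

    def forward(self, x):
        x1, x2 = x[:self.d], x[self.d:]
        s, t = self._scale_shift(x1)
        z2 = x2 * np.exp(s) + t
        return np.concatenate([x1, z2]), s.sum()  # output and log|det J|

    def inverse(self, z):
        z1, z2 = z[:self.d], z[self.d:]
        s, t = self._scale_shift(z1)
        return np.concatenate([z1, (z2 - t) * np.exp(-s)])

layer = AffineCoupling(dim=4)
x = np.random.default_rng(1).normal(size=4)
z, log_det = layer.forward(x)
assert np.allclose(layer.inverse(z), x)           # exact invertibility check
```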

    Free-form Flows: Make Any Architecture a Normalizing Flow

    Normalizing Flows are generative models that directly maximize the likelihood. Previously, the design of normalizing flows was largely constrained by the need for analytical invertibility. We overcome this constraint by a training procedure that uses an efficient estimator for the gradient of the change of variables formula. This enables any dimension-preserving neural network to serve as a generative model through maximum likelihood training. Our approach allows placing the emphasis on tailoring inductive biases precisely to the task at hand. Specifically, we achieve excellent results in molecule generation benchmarks utilizing E(n)-equivariant networks. Moreover, our method is competitive in an inverse problem benchmark, while employing off-the-shelf ResNet architectures.
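
    For reference, the maximum-likelihood objective behind normalizing flows is the change of variables formula, log p(x) = log p(f(x)) + log|det J_f(x)|. The JAX sketch below evaluates this exactly with an explicit Jacobian for a small dimension-preserving MLP; the paper's contribution is an efficient estimator for the gradient of this term that avoids such exact computation, which is not reproduced here, and all names are illustrative (the network must also be invertible in practice for the density to be well defined).

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Small dimension-preserving network playing the role of the encoder f.
    W1, b1, W2, b2 = params
    return W2 @ jnp.tanh(W1 @ x + b1) + b2

def neg_log_likelihood(params, x):
    # Change of variables: log p(x) = log N(f(x); 0, I) + log|det J_f(x)|.
    z = mlp(params, x)
    J = jax.jacfwd(lambda y: mlp(params, y))(x)      # exact (d x d) Jacobian
    _, logabsdet = jnp.linalg.slogdet(J)
    log_pz = -0.5 * (z @ z + z.size * jnp.log(2.0 * jnp.pi))
    return -(log_pz + logabsdet)

d, hidden = 3, 16
k1, k2, kx = jax.random.split(jax.random.PRNGKey(0), 3)
params = (0.1 * jax.random.normal(k1, (hidden, d)), jnp.zeros(hidden),
          0.1 * jax.random.normal(k2, (d, hidden)), jnp.zeros(d))
x = jax.random.normal(kx, (d,))
loss, grads = jax.value_and_grad(neg_log_likelihood)(params, x)
```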

    On the Convergence Rate of Gaussianization with Random Rotations

    Gaussianization is a simple generative model that can be trained without backpropagation. It has shown compelling performance on low dimensional data. As the dimension increases, however, it has been observed that the convergence speed slows down. We show analytically that the number of required layers scales linearly with the dimension for Gaussian input. We argue that this is because the model is unable to capture dependencies between dimensions. Empirically, we find the same linear increase in cost for arbitrary input p(x), but observe favorable scaling for some distributions. We explore potential speed-ups and formulate challenges for further research.
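
    For illustration, a single Gaussianization layer typically combines a marginal (per-dimension) Gaussianization with a rotation; because the marginal step ignores dependencies between dimensions, more rotated layers are needed as the dimension grows, which is the scaling discussed above. Below is a minimal NumPy/SciPy sketch using an empirical-CDF marginal transform and a Haar-random rotation; all function names are illustrative.

```python
import numpy as np
from scipy import stats

def marginal_gaussianize(X):
    """Map each column to approximately N(0, 1) via its empirical CDF."""
    n = X.shape[0]
    ranks = X.argsort(axis=0).argsort(axis=0) + 1   # ranks 1..n per column
    return stats.norm.ppf(ranks / (n + 1))          # avoid CDF values 0 and 1

def random_rotation(dim, rng):
    """Haar-random orthogonal matrix from the QR decomposition of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.normal(size=(dim, dim)))
    return Q * np.sign(np.diag(R))

def gaussianization_layer(X, rng):
    return marginal_gaussianize(X) @ random_rotation(X.shape[1], rng).T

# Toy run: correlated 2-D data drifts toward N(0, I) as layers are stacked.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2)) @ np.array([[1.0, 0.0], [0.9, 0.4]])
for _ in range(5):
    X = gaussianization_layer(X, rng)
print(np.round(np.cov(X.T), 2))   # off-diagonal terms shrink toward zero
```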

    Maximum Likelihood Training of Autoencoders

    Maximum likelihood training has favorable statistical properties and is popular for generative modeling, especially with normalizing flows. On the other hand, generative autoencoders promise to be more efficient than normalizing flows due to the manifold hypothesis. In this work, we introduce successful maximum likelihood training of unconstrained autoencoders for the first time, bringing the two paradigms together. To do so, we identify and overcome two challenges: Firstly, existing maximum likelihood estimators for free-form networks are unacceptably slow, relying on iteration schemes whose cost scales linearly with latent dimension. We introduce an improved estimator which eliminates iteration, resulting in constant cost (roughly double the runtime per batch of a vanilla autoencoder). Secondly, we demonstrate that naively applying maximum likelihood to autoencoders can lead to divergent solutions and use this insight to motivate a stable maximum likelihood training objective. We perform extensive experiments on toy, tabular and image data, demonstrating the competitive performance of the resulting model. We call our model the maximum likelihood autoencoder (MLAE).
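
    For orientation, one common way to attach a likelihood to an autoencoder-style model uses the injective change of variables for the decoder g, log p(g(z)) = log p(z) - 1/2 log det(J_g(z)^T J_g(z)); estimating (the gradient of) the log-determinant term for free-form networks is the expensive step referred to above. The JAX sketch below computes it exactly with an explicit Jacobian for a toy decoder; it is a brute-force reference point, not the constant-cost estimator or the stabilized objective introduced in the paper, and all names are illustrative.

```python
import jax
import jax.numpy as jnp

def decoder(params, z):
    # Toy decoder g: latent space R^d -> data space R^D with D > d.
    W1, W2 = params
    return W2 @ jnp.tanh(W1 @ z)

def manifold_neg_log_likelihood(params, z):
    # Injective change of variables on the decoder manifold:
    # log p(g(z)) = log N(z; 0, I) - 0.5 * log det(J_g(z)^T J_g(z)).
    J = jax.jacfwd(lambda y: decoder(params, y))(z)   # (D x d) Jacobian
    _, logdet = jnp.linalg.slogdet(J.T @ J)
    log_pz = -0.5 * (z @ z + z.size * jnp.log(2.0 * jnp.pi))
    return -(log_pz - 0.5 * logdet)

d, hidden, D = 2, 8, 5
k1, k2, kz = jax.random.split(jax.random.PRNGKey(0), 3)
params = (jax.random.normal(k1, (hidden, d)),
          0.3 * jax.random.normal(k2, (D, hidden)))
z = jax.random.normal(kz, (d,))
nll, grads = jax.value_and_grad(manifold_neg_log_likelihood)(params, z)
```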

    Generative networks for precision enthusiasts

    Generative networks are opening new avenues in fast event generation for the LHC. We show how generative flow networks can reach percent-level precision for kinematic distributions, how they can be trained jointly with a discriminator, and how this discriminator improves the generation. Our joint training relies on a novel coupling of the two networks which does not require a Nash equilibrium. We then estimate the generation uncertainties through a Bayesian network setup and through conditional data augmentation, while the discriminator ensures that there are no systematic inconsistencies compared to the training data.
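
    The abstract does not spell out the coupling, but a standard way a jointly trained discriminator can sharpen generated distributions in this setting is to reweight events by D/(1-D), which for a well-calibrated classifier approximates the truth-to-generator density ratio. The NumPy sketch below shows only that generic reweighting step; it is not necessarily the coupling, Bayesian setup, or data augmentation used in the paper, and the discriminator scores here are a stand-in.

```python
import numpy as np

def density_ratio_weights(disc_prob_truth):
    """Per-event weights from a classifier trained to output P(truth | event);
    w = D / (1 - D) approximates p_truth(x) / p_generated(x)."""
    p = np.clip(disc_prob_truth, 1e-6, 1.0 - 1e-6)     # numerical safety
    return p / (1.0 - p)

# Toy usage: reweight a generated kinematic distribution (placeholder events
# and placeholder discriminator scores, not a trained network).
rng = np.random.default_rng(0)
generated_pt = rng.exponential(scale=50.0, size=10_000)
disc_scores = 1.0 / (1.0 + np.exp(-(generated_pt - 60.0) / 40.0))
weights = density_ratio_weights(disc_scores)
weights *= weights.size / weights.sum()                # keep the total yield fixed
hist, edges = np.histogram(generated_pt, bins=40, weights=weights)
```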

    FRIPON: A worldwide network to track incoming meteoroids

    Context. Until recently, camera networks designed for monitoring fireballs worldwide were not fully automated, implying that in case of a meteorite fall, the recovery campaign was rarely immediate. This was an important limiting factor as the most fragile - hence precious - meteorites must be recovered rapidly to avoid their alteration. Aims. The Fireball Recovery and InterPlanetary Observation Network (FRIPON) scientific project was designed to overcome this limitation. This network comprises a fully automated camera and radio network deployed over a significant fraction of western Europe and a small fraction of Canada. As of today, it consists of 150 cameras and 25 European radio receivers and covers an area of about 1.5 × 10⁶ km². Methods. The FRIPON network, fully operational since 2018, has been monitoring meteoroid entries since 2016, thereby allowing the characterization of their dynamical and physical properties. In addition, the level of automation of the network makes it possible to trigger a meteorite recovery campaign only a few hours after the meteorite reaches the surface of the Earth. Recovery campaigns are only organized for meteorites with estimated final masses of at least 500 g, which is about one event per year in France. No recovery campaign is organized for smaller final masses on the order of 50 to 100 g, which happens about three times a year; instead, the information is delivered to the local media so that it can reach the inhabitants living in the vicinity of the fall. Results. Nearly 4000 meteoroids have been detected so far and characterized by FRIPON. The distribution of their orbits appears to be bimodal, with a cometary population and a main belt population. Sporadic meteors amount to about 55% of all meteors. A first estimate of the absolute meteoroid flux (mag < -5; meteoroid size ≥ ∼1 cm) amounts to 1250 per year per 10⁶ km². This value is compatible with previous estimates. Finally, the first meteorite was recovered in Italy (Cavezzo, January 2020) thanks to the PRISMA network, a component of the FRIPON science project.
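
    As a quick back-of-the-envelope check on the quoted numbers (a sketch only; observing-condition losses such as weather and daylight are not quantified in the abstract), the flux and the covered area imply an idealized bright-fireball rate over the network:

```python
# Idealized rate of mag < -5 events over the FRIPON footprint, from the quoted
# flux (1250 events per year per 10^6 km^2) and coverage (about 1.5 x 10^6 km^2).
flux_per_1e6_km2_per_year = 1250
coverage_in_1e6_km2 = 1.5
events_per_year = flux_per_1e6_km2_per_year * coverage_in_1e6_km2
print(events_per_year)   # ~1875 bright events per year before any detection losses
```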