13 research outputs found

    Attention to Mean-Fields for Particle Cloud Generation

    The generation of collider data using machine learning has emerged as a prominent research topic in particle physics due to the increasing computational challenges associated with traditional Monte Carlo simulation methods, particularly for future colliders with higher luminosity. Although generating particle clouds is analogous to generating point clouds, accurately modelling the complex correlations between the particles presents a considerable challenge. Additionally, variable particle cloud sizes further exacerbate these difficulties, necessitating more sophisticated models. In this work, we propose a novel model that utilizes an attention-based aggregation mechanism to address these challenges. The model is trained in an adversarial training paradigm, ensuring that both the generator and critic exhibit permutation equivariance/invariance with respect to their input. A novel feature matching loss in the critic is introduced to stabilize the training. The proposed model performs competitively with the state of the art whilst having significantly fewer parameters.
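
    The attention-based aggregation described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the function names and the single-head, single-layer setup are assumptions for illustration only. A softmax over per-particle scores yields a permutation-invariant pooled summary, and broadcasting that summary back to each particle as a residual gives a permutation-equivariant update.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_aggregate(cloud, w_score, W_v):
    """Attention-based aggregation: score every particle, softmax the scores
    over the cloud, and take the weighted sum of projected features.
    The pooled summary is a symmetric function, hence permutation-invariant."""
    scores = cloud @ w_score                 # (n,) one scalar per particle
    weights = softmax(scores)                # attention weights over particles
    return weights @ (cloud @ W_v)           # (d,) pooled summary

def equivariant_update(cloud, w_score, W_v, W_u):
    """Broadcast the invariant summary back to every particle as a residual
    correction; permuting the input permutes the output identically."""
    pooled = attention_aggregate(cloud, w_score, W_v)
    return cloud + pooled @ W_u              # (n, d) per-particle update
```

    Permuting the input rows leaves the pooled summary unchanged and permutes the updated cloud accordingly, which is exactly the invariance/equivariance property the abstract requires of the critic and generator.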

    Point Cloud Generation using Transformer Encoders and Normalising Flows

    Data generation based on Machine Learning has become a major research topic in particle physics. This is due to the current Monte Carlo simulation approach being computationally challenging for future colliders, which will have a significantly higher luminosity. The generation of collider data is similar to point cloud generation, but arguably more difficult as there are complex correlations between the points which need to be modelled correctly. A refinement model consisting of normalising flows and transformer encoders is presented. The normalising flow output is corrected by a transformer encoder, which is adversarially trained against another transformer encoder discriminator/critic. The model reaches state-of-the-art performance while training stably.
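
    The flow-plus-refinement idea can be sketched with a toy NumPy setup. Everything here is an illustrative assumption: `flow_prior` is a stand-in affine map where the paper uses a full normalising flow, and `refine` is a single self-attention layer where the paper uses a trained transformer encoder.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def flow_prior(n_points, dim, rng, scale=1.5, shift=0.2):
    """Stand-in for the normalising-flow prior: an invertible affine map of
    Gaussian noise (a real flow stacks many learned bijections)."""
    return scale * rng.standard_normal((n_points, dim)) + shift

def refine(points, W_q, W_k, W_v):
    """Transformer-encoder-style refinement: one self-attention layer adds a
    residual correction, restoring inter-point correlations that a
    point-wise flow prior cannot express on its own."""
    q, k, v = points @ W_q, points @ W_k, points @ W_v
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]), axis=-1)
    return points + attn @ v                 # residual correction

rng = np.random.default_rng(1)
raw = flow_prior(8, 3, rng)                  # coarse flow sample
W_q, W_k, W_v = (rng.standard_normal((3, 3)) for _ in range(3))
refined = refine(raw, W_q, W_k, W_v)         # corrected point cloud
```

    In the adversarial setup the refiner's weights would be trained against a critic network; that loop is omitted here, but note that self-attention keeps the refinement permutation-equivariant.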

    DeepTreeGAN: Fast Generation of High Dimensional Point Clouds

    In High Energy Physics, detailed and time-consuming simulations are used for particle interactions with detectors. To bypass these simulations with a generative model, the generation of large point clouds in a short time is required, while the complex dependencies between the particles must be correctly modelled. Particle showers are inherently tree-based processes, as each particle is produced by the decay or detector interaction of a particle of the previous generation. In this work, we present a novel Graph Neural Network model (DeepTreeGAN) that is able to generate such point clouds in a tree-based manner. We show that this model can reproduce complex distributions, and we evaluate its performance on the public JetNet dataset.
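
    The tree-based generation principle, each particle spawning daughters generation by generation, can be illustrated with a short sketch. This is a toy assumption, not the DeepTreeGAN architecture: in the actual model the branching is performed by learned graph-network layers, whereas here each node simply spawns noisy copies of itself.

```python
import numpy as np

def tree_expand(root, depth, branching, rng, noise=0.1):
    """Grow a point cloud like a particle shower: every node in the current
    generation spawns `branching` children that inherit its features plus a
    perturbation, so n leaves are produced in only log(n) expansion steps."""
    level = root[None, :]                          # generation 0: the seed
    for _ in range(depth):
        children = np.repeat(level, branching, axis=0)
        level = children + noise * rng.standard_normal(children.shape)
    return level                                   # (branching**depth, d)

rng = np.random.default_rng(0)
cloud = tree_expand(np.zeros(3), depth=5, branching=2, rng=rng)
# a 5-level binary tree yields 2**5 = 32 leaf points
```

    The logarithmic depth is what makes the tree-based approach fast for large clouds: doubling the number of points costs one extra branching step rather than one extra point-by-point generation pass.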

    JetFlow: Generating Jets with Conditioned and Mass Constrained Normalising Flows

    Fast data generation based on Machine Learning has become a major research topic in particle physics. This is mainly because the Monte Carlo simulation approach is computationally challenging for future colliders, which will have a significantly higher luminosity. The generation of collider data is similar to point cloud generation with complex correlations between the points. In this study, the generation of jets with up to 30 constituents with Normalising Flows using Rational Quadratic Spline coupling layers is investigated. Without conditioning on the jet mass, our Normalising Flows are unable to model all correlations in data correctly, which is evident when comparing the invariant jet mass distributions between ground truth and generated data. Using the invariant mass as a condition for the coupling transformation enhances the performance on all tracked metrics. In addition, we demonstrate how to sample the original mass distribution by interpolating the empirical cumulative distribution function. Similarly, the variable number of constituents is handled by introducing an additional condition on the number of constituents in the jet. Furthermore, we study the usefulness of including an additional mass constraint in the loss term. On the JetNet dataset, our model shows state-of-the-art performance combined with fast and stable training.
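
    The mass-resampling step, drawing conditioning masses that follow the original distribution by interpolating the empirical cumulative distribution function, can be sketched directly. The helper name `sample_from_ecdf` and the toy gamma-distributed masses are illustrative assumptions; the mechanism is standard inverse-transform sampling with linear interpolation between the empirical quantiles.

```python
import numpy as np

def sample_from_ecdf(values, n_samples, rng):
    """Draw samples that follow the empirical distribution of `values` by
    linearly interpolating the inverse of the empirical CDF."""
    xs = np.sort(values)                          # empirical quantiles
    ps = np.arange(1, len(xs) + 1) / len(xs)      # ECDF level at each xs
    u = rng.uniform(0.0, 1.0, size=n_samples)     # uniform draws
    return np.interp(u, ps, xs)                   # inverse-CDF sampling

rng = np.random.default_rng(0)
true_masses = rng.gamma(shape=5.0, scale=2.0, size=2000)   # toy jet masses
cond_masses = sample_from_ecdf(true_masses, 500, rng)      # conditions
```

    The drawn values can then serve as the conditioning input of the coupling layers, together with the desired number of constituents, so that generated jets inherit the training set's mass distribution.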

    Point Cloud Generation using Transformer Encoders and Normalising Flows

    Machine-learning-based data generation has become a major topic in particle physics, as the current Monte Carlo simulation approach is computationally challenging for future colliders, which will have a significantly higher luminosity. Generating particles poses difficulties similar to those of point cloud generation. We propose that a transformer setup is well suited to this task. In this study, a novel refinement model is presented, which uses normalising flows as a prior and then enhances the generated points using an adversarial setup with two Transformer encoder networks. Different training architectures and procedures were tested and compared on the JetNet datasets.
