Search for the Standard Model Higgs boson produced in association with tt̄ and decaying into bb̄ at 8 TeV with the ATLAS detector using the Matrix Element Method
A likelihood-based reconstruction algorithm for top-quark pairs and the KLFitter framework
A likelihood-based reconstruction algorithm for arbitrary event topologies is
introduced and, as an example, applied to the single-lepton decay mode of
top-quark pair production. The algorithm comes with several options which
further improve its performance, in particular the reconstruction efficiency,
i.e., the fraction of events for which the observed jets and leptons can be
correctly associated with the final-state particles of the corresponding event
topology. The performance is compared to that of well-established
reconstruction algorithms using a common framework for kinematic fitting. This
framework has a modular structure which describes the physics processes and
detector models independently. The implemented algorithms are generic and can
easily be ported from one experiment to another. Comment: 20 pages, 5 figures, 2 tables
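The core idea of such a likelihood-based reconstruction is to scan all assignments of observed jets to final-state partons and keep the one that maximizes an event likelihood. The sketch below illustrates this permutation scan with a toy Gaussian likelihood on candidate W-boson and top-quark masses; the masses, widths, and the stand-in "invariant mass" arithmetic are illustrative assumptions, not KLFitter's actual likelihood, which uses Breit-Wigner terms and detector transfer functions.

```python
import itertools

# Assumed constraint values (GeV): nominal masses and toy Gaussian widths.
M_W, S_W = 80.4, 10.0
M_TOP, S_TOP = 172.5, 15.0

def log_likelihood(assign):
    """Toy log-likelihood for one jet-to-parton assignment.

    assign = (b_had, b_lep, q1, q2). For brevity we fake the invariant
    masses of the hadronic-W and hadronic-top candidates by summing toy
    jet energies; a real fit would use four-vectors.
    """
    b_had, b_lep, q1, q2 = assign
    m_w_cand = q1 + q2              # stand-in for m(q1, q2)
    m_top_cand = b_had + q1 + q2    # stand-in for m(b_had, q1, q2)
    return (-((m_w_cand - M_W) / S_W) ** 2
            - ((m_top_cand - M_TOP) / S_TOP) ** 2)

def best_permutation(jets):
    """Scan all jet-to-parton assignments, return the most likely one."""
    return max(itertools.permutations(jets, 4), key=log_likelihood)

# Toy event with four "jet energies"; the pair summing to ~80 GeV should
# be identified as the light quarks from the hadronic W decay.
jets = [92.0, 45.0, 38.0, 42.0]
best = best_permutation(jets)
```

The reconstruction efficiency mentioned in the abstract is then simply the fraction of events for which this best-likelihood assignment matches the true parton-level association.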
SR-GAN for SR-gamma: photon super resolution at collider experiments
We study single-image super-resolution algorithms for photons at collider
experiments based on generative adversarial networks. We treat the energy
depositions of simulated electromagnetic showers of photons and neutral-pion
decays in a toy electromagnetic calorimeter as 2D images and we train
super-resolution networks to generate images with an artificially increased
resolution by a factor of four in each dimension. The generated images are able
to reproduce features of the electromagnetic showers that are not obvious from
the images at nominal resolution. Using the artificially-enhanced images for
the reconstruction of shower-shape variables and of the position of the shower
center results in significant improvements. We additionally investigate the
utilization of the generated images as a pre-processing step for deep-learning
photon-identification algorithms and observe improvements in the case of low
training statistics. Comment: 24 pages, 13 figures
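The factor-of-four enlargement in each dimension can be made concrete with the trivial baseline that any super-resolution network must beat: nearest-neighbour upsampling, which only replicates pixels and adds no shower structure. This is an illustrative sketch, not the paper's GAN.

```python
def upsample_nn(image, factor=4):
    """Nearest-neighbour upsampling: each pixel of an H x W image is
    replicated factor x factor times, giving a (factor*H) x (factor*W)
    image. A trained super-resolution generator produces output of the
    same size but can add plausible sub-pixel shower structure."""
    out = []
    for row in image:
        wide = [v for v in row for _ in range(factor)]
        out.extend([list(wide) for _ in range(factor)])
    return out

# A 2x2 toy "calorimeter image" becomes 8x8 after 4x upsampling.
img = [[1.0, 2.0],
       [3.0, 4.0]]
hi = upsample_nn(img)
```

Shower-shape variables computed on the 8x8 baseline image carry no more information than the 2x2 original; the paper's point is that a GAN-generated 8x8 image does.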
Small beams, fast predictions: a comparison of machine learning dose prediction models for proton minibeam therapy
Background:
Dose calculations for novel radiotherapy cancer treatments such as proton minibeam radiation therapy are often performed using full Monte Carlo (MC) simulations. As MC simulations can be very time consuming for this kind of application, deep learning models have been considered to accelerate dose estimation in cancer patients.
Purpose:
This work systematically evaluates the dose prediction accuracy, speed and generalization performance of three selected state-of-the-art deep learning models for dose prediction, applied to proton minibeam therapy. The strengths and weaknesses of these models are thoroughly investigated, helping other researchers decide on a viable algorithm for their own application.
Methods:
The following recently published models are compared: first, a 3D U-Net model trained as a regression network, second, a 3D U-Net trained as the generator of a generative adversarial network (GAN) and third, a dose transformer model which interprets dose prediction as a sequence translation task. These models are trained to emulate the result of MC simulations. The dose depositions of a proton minibeam with a diameter of 800 Όm and an energy of 20–100 MeV inside a simple head phantom, calculated by full Geant4 MC simulations, are used as a case study for this comparison. The spatial resolution is 0.5 mm. Special attention is paid to the evaluation of the generalization performance of the investigated models.
Results:
Dose predictions with all models are produced on the order of a second on a GPU, the 3D U-Net models being fastest with an average of 130 ms. The investigated 3D U-Net regression model shows the strongest performance, with overall 61.0% ± 0.5% of all voxels exhibiting a deviation in the predicted energy deposition of less than 3% compared to full MC simulations, with no spatial deviation allowed. The 3D U-Net models show better generalization performance for target geometry variations, while the transformer-based model generalizes better with regard to the proton energy.
Conclusions:
This paper reveals that (1) all studied deep learning models are significantly faster than non-machine-learning approaches, predicting the dose on the order of seconds compared to hours for MC, (2) all models provide reasonable accuracy, and (3) the regression-trained 3D U-Net provides the most accurate predictions.
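The "fraction of voxels within 3%" figure of merit used above can be sketched as a simple pass-rate computation. This is an illustrative reconstruction; the paper's exact normalisation (relative to the local dose or to the maximum deposition) may differ.

```python
def pass_rate(pred, ref, tol=0.03):
    """Fraction of voxels whose predicted energy deposition deviates from
    the reference (full MC) value by less than `tol` of the maximum
    reference deposition, with no spatial tolerance allowed."""
    d_max = max(ref)
    ok = sum(1 for p, r in zip(pred, ref) if abs(p - r) < tol * d_max)
    return ok / len(ref)

# Toy 1D "dose" arrays: one voxel deviates by 10% of the maximum dose,
# the others by at most 1%, so 4 of the 5 voxels pass the 3% criterion.
ref  = [0.0, 5.0, 10.0, 5.0, 0.0]
pred = [0.0, 5.1, 10.0, 4.0, 0.0]
rate = pass_rate(pred, ref)
```

In the papers listed here this metric is evaluated over full 3D voxel grids rather than a 1D toy array.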
Fast and accurate dose predictions for novel radiotherapy treatments in heterogeneous phantoms using conditional 3D U-Net generative adversarial networks
Purpose:
Novel radiotherapy techniques like synchrotron X-ray microbeam radiation therapy (MRT) require fast dose distribution predictions that are accurate at the sub-mm level, especially close to tissue/bone/air interfaces. Monte Carlo (MC) physics simulations are recognized to be one of the most accurate tools to predict the dose delivered in a target tissue but can be very time consuming and therefore prohibitive for treatment planning. Faster dose prediction algorithms are usually developed for clinically deployed treatments only. In this work, we explore a new approach for fast and accurate dose estimations suitable for novel treatments using digital phantoms used in preclinical development and modern machine learning techniques. We develop a generative adversarial network (GAN) model, which is able to emulate the equivalent Geant4 MC simulation with adequate accuracy and use it to predict the radiation dose delivered by a broad synchrotron beam to various phantoms.
Methods:
The energy depositions used for the training of the GAN are obtained using full Geant4 MC simulations of a synchrotron radiation broad beam passing through the phantoms. The energy deposition is scored and predicted in voxel matrices of size 140 × 18 × 18 with a voxel edge length of 1 mm. The GAN model consists of two competing 3D convolutional neural networks, which are conditioned on the photon beam and phantom properties. The generator network has a U-Net structure and is designed to predict the energy depositions of the photon beam inside three phantoms of variable geometry with increasing complexity. The critic network is a relatively simple convolutional network, which is trained to distinguish energy depositions predicted by the generator from the ones obtained with the full MC simulation.
Results:
The energy deposition predictions inside all phantom geometries under investigation show deviations of less than 3% of the maximum deposited energy from the simulation for roughly 99% of the voxels in the field of the beam. Inside the most realistic phantom, a simple pediatric head, the model predictions deviate by less than 1% of the maximal energy deposition from the simulations in more than 96% of the in-field voxels. For all three phantoms, the model generalizes the energy deposition predictions well to phantom geometries that were not used for training but are interpolations of the training data in multiple dimensions. The computing time for a single prediction is reduced from several hundred hours using Geant4 simulation to less than a second using the GAN model.
Conclusions:
The proposed GAN model predicts dose distributions inside unknown phantoms with only small deviations from the full MC simulation, with computation times of less than a second. It demonstrates good interpolation ability to unseen but similar phantom geometries and is flexible enough to be trained on data with different radiation scenarios without the need for optimization of the model parameters. This proof of concept encourages applying and further developing the model for use in MRT treatment planning, which requires fast and accurate predictions with sub-mm resolution.
A step towards treatment planning for microbeam radiation therapy: fast peak and valley dose predictions with 3D U-Nets
Fast and accurate dose predictions are one of the bottlenecks in treatment
planning for microbeam radiation therapy (MRT). In this paper, we propose a
machine learning (ML) model based on a 3D U-Net. Our approach predicts
separately the large doses of the narrow high intensity synchrotron microbeams
and the lower valley doses between them. For this purpose, a concept of macro
peak doses and macro valley doses is introduced, describing the respective
doses not on a microscopic level but as macroscopic quantities in larger
voxels. The ML model is trained to mimic full Monte Carlo (MC) data. Complex
physical effects such as polarization are therefore automatically taken into
account by the model.
The macro dose distribution approach described in this study allows for
superimposing single-microbeam predictions onto a beam array field, making it
interesting candidate for treatment planning. It is shown that the proposed
approach can overcome a main obstacle with microbeam dose predictions by
predicting a full microbeam irradiation field in less than a minute while
maintaining reasonable accuracy. Comment: accepted for publication in the IFMBE Proceedings on the World
Congress on Medical Physics and Biomedical Engineering 202
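The superposition idea behind the macro dose approach, building the dose of a full microbeam array by shifting and summing the prediction for a single microbeam, can be sketched in one dimension. This is a toy lateral profile, not the paper's 3D voxel model.

```python
def superimpose(single_beam, positions, field_len):
    """Build the lateral dose of a microbeam array by shifting the
    predicted single-microbeam profile to each beam position and summing.
    Overlapping tails from neighbouring beams add up in the valleys."""
    field = [0.0] * field_len
    half = len(single_beam) // 2
    for c in positions:
        for i, d in enumerate(single_beam):
            j = c - half + i
            if 0 <= j < field_len:
                field[j] += d
    return field

# Toy single-microbeam profile (central peak with small shoulders)
# placed on a regular array of three beam positions.
profile = [0.1, 1.0, 0.1]
field = superimpose(profile, positions=[2, 6, 10], field_len=13)
```

Because each beam only needs to be predicted once, a full irradiation field follows from cheap shift-and-add operations, which is what makes the approach attractive for treatment planning.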
Accurate and fast deep learning dose prediction for a preclinical microbeam radiation therapy study using low-statistics Monte Carlo simulations
Microbeam radiation therapy (MRT) utilizes coplanar synchrotron radiation
beamlets and is a proposed treatment approach for several tumour diagnoses that
currently have poor clinical treatment outcomes, such as gliosarcomas.
Prescription dose estimations for treating preclinical gliosarcoma models in
MRT studies at the Imaging and Medical Beamline at the Australian Synchrotron
currently rely on Monte Carlo (MC) simulations. The steep dose gradients
associated with the 50 Όm wide coplanar beamlets present a significant
challenge for precise MC simulation of the MRT irradiation treatment field in a
short time frame. Much research has been conducted on fast dose estimation
methods for clinically available treatments. However, such methods, including
GPU Monte Carlo implementations and machine learning (ML) models, are
unavailable for novel and emerging cancer radiation treatment options like MRT.
In this work, the successful application of a fast and accurate machine
learning dose prediction model in a retrospective preclinical MRT rodent study
is presented for the first time. The ML model predicts the peak doses in the
path of the microbeams and the valley doses between them, delivered to the
gliosarcoma in rodent patients. The predictions of the ML model show excellent
agreement with low-noise MC simulations, especially within the investigated
tumour volume. This agreement is despite the ML model being deliberately
trained with MC-calculated samples exhibiting significantly higher statistical
uncertainties. The successful use of high-noise training set data samples,
which are much faster to generate, encourages and accelerates the transfer of
the ML model to different treatment modalities for other future applications in
novel radiation cancer therapies.
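The peak and valley doses that the ML model predicts can be illustrated with a toy extraction from a lateral dose profile, together with the peak-to-valley dose ratio (PVDR), a standard MRT quantity; this sketch is an assumption-laden analogue, not the study's actual dosimetry code.

```python
def peak_valley(dose, beam_centers):
    """Split a lateral dose profile into peak doses (sampled at the
    microbeam centres) and valley doses (minima between neighbouring
    beams). Returns (peaks, valleys)."""
    peaks = [dose[c] for c in beam_centers]
    valleys = [min(dose[a + 1:b])
               for a, b in zip(beam_centers, beam_centers[1:])]
    return peaks, valleys

# Toy profile with two microbeam peaks at indices 2 and 6; the valley
# between them bottoms out at 0.3.
dose = [0.2, 0.5, 10.0, 0.5, 0.3, 0.5, 10.0, 0.5, 0.2]
peaks, valleys = peak_valley(dose, [2, 6])
pvdr = peaks[0] / valleys[0]   # peak-to-valley dose ratio of the first gap
```

In MRT the therapeutic window is governed by keeping the valley dose below normal-tissue tolerance while the peaks remain high, which is why both quantities must be predicted accurately.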