Fast and accurate dose predictions for novel radiotherapy treatments in heterogeneous phantoms using conditional 3D-UNet generative adversarial networks
Purpose:
Novel radiotherapy techniques like synchrotron X-ray microbeam radiation therapy (MRT) require fast dose distribution predictions that are accurate at the sub-mm level, especially close to tissue/bone/air interfaces. Monte Carlo (MC) physics simulations are recognized to be one of the most accurate tools to predict the dose delivered in a target tissue but can be very time consuming and therefore prohibitive for treatment planning. Faster dose prediction algorithms are usually developed for clinically deployed treatments only. In this work, we explore a new approach for fast and accurate dose estimations suitable for novel treatments, combining digital phantoms used in preclinical development with modern machine learning techniques. We develop a generative adversarial network (GAN) model, which is able to emulate the equivalent Geant4 MC simulation with adequate accuracy, and use it to predict the radiation dose delivered by a broad synchrotron beam to various phantoms.
Methods:
The energy depositions used for the training of the GAN are obtained using full Geant4 MC simulations of a synchrotron radiation broad beam passing through the phantoms. The energy deposition is scored and predicted in voxel matrices of size 140 × 18 × 18 with a voxel edge length of 1 mm. The GAN model consists of two competing 3D convolutional neural networks, which are conditioned on the photon beam and phantom properties. The generator network has a U-Net structure and is designed to predict the energy depositions of the photon beam inside three phantoms of variable geometry with increasing complexity. The critic network is a relatively simple convolutional network, which is trained to distinguish energy depositions predicted by the generator from the ones obtained with the full MC simulation.
Results:
The energy deposition predictions inside all phantom geometries under investigation show deviations of less than 3% of the maximum deposited energy from the simulation for roughly 99% of the voxels in the field of the beam. Inside the most realistic phantom, a simple pediatric head, the model predictions deviate by less than 1% of the maximal energy deposition from the simulations in more than 96% of the in-field voxels. For all three phantoms, the model generalizes the energy deposition predictions well to phantom geometries, which have not been used for training the model but are interpolations of the training data in multiple dimensions. The computing time for a single prediction is reduced from several hundred hours using Geant4 simulation to less than a second using the GAN model.
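The accuracy criterion quoted in these results (the fraction of in-field voxels deviating by less than a given percentage of the maximum deposited energy) can be sketched as follows. The function name and toy voxel values are hypothetical, and a flat list stands in for the 140 × 18 × 18 voxel grid:

```python
# Illustrative check of the accuracy criterion above: the fraction of
# in-field voxels whose predicted energy deposition deviates from the
# Monte Carlo reference by less than a given fraction of the maximum
# deposited energy. Pure-Python stand-in for the real voxel grids.

def fraction_within(mc, pred, rel_tol):
    """Fraction of voxels with |pred - mc| < rel_tol * max(mc)."""
    dmax = max(mc)
    n_ok = sum(1 for m, p in zip(mc, pred) if abs(p - m) < rel_tol * dmax)
    return n_ok / len(mc)

# Toy data: a "reference" dose with one voxel the model gets slightly wrong.
mc   = [100.0, 80.0, 60.0, 40.0, 20.0]
pred = [ 99.0, 81.0, 58.0, 44.5, 20.0]   # 44.5 deviates by 4.5% of max

print(fraction_within(mc, pred, 0.03))   # 4 of 5 voxels within 3% of max
```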
Conclusions:
The proposed GAN model predicts dose distributions inside unknown phantoms with only small deviations from the full MC simulation and computation times of less than a second. It demonstrates good interpolation ability to unseen but similar phantom geometries and is flexible enough to be trained on data with different radiation scenarios without the need to optimize the model parameters. This proof of concept encourages applying and further developing the model for use in MRT treatment planning, which requires fast and accurate predictions with sub-mm resolution.
A step towards treatment planning for microbeam radiation therapy: fast peak and valley dose predictions with 3D U-Nets
Fast and accurate dose predictions are one of the bottlenecks in treatment
planning for microbeam radiation therapy (MRT). In this paper, we propose a
machine learning (ML) model based on a 3D U-Net. Our approach predicts
separately the large doses of the narrow high intensity synchrotron microbeams
and the lower valley doses between them. For this purpose, a concept of macro
peak doses and macro valley doses is introduced, describing the respective
doses not on a microscopic level but as macroscopic quantities in larger
voxels. The ML model is trained to mimic full Monte Carlo (MC) data. Complex
physical effects such as polarization are therefore automatically taken into
account by the model.
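A minimal sketch of the macro peak/valley concept, assuming a finely binned 1D dose profile and a mask marking which bins lie inside a microbeam (names and numbers are illustrative, not the paper's implementation):

```python
# Toy sketch of macro peak and macro valley doses: within one (larger) voxel,
# the macroscopic quantities are the mean dose over peak bins and the mean
# dose over valley bins of the finely binned profile.

def macro_doses(profile, in_peak):
    peaks   = [d for d, p in zip(profile, in_peak) if p]
    valleys = [d for d, p in zip(profile, in_peak) if not p]
    return sum(peaks) / len(peaks), sum(valleys) / len(valleys)

# One voxel's worth of micro-bins: two microbeams over a low valley dose.
profile = [1.0, 100.0, 102.0, 1.5, 0.5, 98.0, 100.0, 1.0]
in_peak = [False, True, True, False, False, True, True, False]

peak, valley = macro_doses(profile, in_peak)
print(peak, valley)   # macro peak dose 100.0, macro valley dose 1.0
```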
The macro dose distribution approach described in this study allows for
superimposing single microbeam predictions to a beam array field making it an
interesting candidate for treatment planning. It is shown that the proposed
approach can overcome a main obstacle with microbeam dose predictions by
predicting a full microbeam irradiation field in less than a minute while
maintaining reasonable accuracy.
Comment: accepted for publication in the IFMBE Proceedings of the World Congress on Medical Physics and Biomedical Engineering 202
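The superposition idea described above (predict one microbeam once, then shift and sum copies over the beam array positions) can be sketched as follows; this is a toy 1D stand-in, not the authors' code:

```python
# Minimal sketch of superimposing a single-microbeam macro dose prediction
# into a full beam array field: one shifted copy of the single-beam profile
# is added per microbeam position.

def superpose(single_beam, positions, field_size):
    field = [0.0] * field_size
    for pos in positions:                  # one shifted copy per microbeam
        for i, d in enumerate(single_beam):
            j = pos + i
            if 0 <= j < field_size:
                field[j] += d
    return field

single = [0.5, 2.0, 0.5]                   # toy lateral profile of one beam
array_field = superpose(single, positions=[0, 3, 6], field_size=9)
print(array_field)                         # three non-overlapping copies
```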
Accurate and fast deep learning dose prediction for a preclinical microbeam radiation therapy study using low-statistics Monte Carlo simulations
Microbeam radiation therapy (MRT) utilizes coplanar synchrotron radiation
beamlets and is a proposed treatment approach for several tumour diagnoses that
currently have poor clinical treatment outcomes, such as gliosarcomas.
Prescription dose estimations for treating preclinical gliosarcoma models in
MRT studies at the Imaging and Medical Beamline at the Australian Synchrotron
currently rely on Monte Carlo (MC) simulations. The steep dose gradients
associated with the 50 ÎŒm wide coplanar beamlets present a significant
challenge for precise MC simulation of the MRT irradiation treatment field in a
short time frame. Much research has been conducted on fast dose estimation
methods for clinically available treatments. However, such methods, including
GPU Monte Carlo implementations and machine learning (ML) models, are
unavailable for novel and emerging cancer radiation treatment options like MRT.
In this work, the successful application of a fast and accurate machine
learning dose prediction model in a retrospective preclinical MRT rodent study
is presented for the first time. The ML model predicts the peak doses in the
path of the microbeams and the valley doses between them, delivered to the
gliosarcoma in rodent patients. The predictions of the ML model show excellent
agreement with low-noise MC simulations, especially within the investigated
tumour volume. This agreement is despite the ML model being deliberately
trained with MC-calculated samples exhibiting significantly higher statistical
uncertainties. The successful use of high-noise training set data samples,
which are much faster to generate, encourages and accelerates the transfer of
the ML model to different treatment modalities for other future applications in
novel radiation cancer therapies.
Generalized nonreciprocity in an optomechanical circuit via synthetic magnetism and reservoir engineering
Synthetic magnetism has been used to control charge neutral excitations for
applications ranging from classical beam steering to quantum simulation. In
optomechanics, radiation-pressure-induced parametric coupling between optical
(photon) and mechanical (phonon) excitations may be used to break time-reversal
symmetry, providing the prerequisite for synthetic magnetism. Here we design
and fabricate a silicon optomechanical circuit with both optical and mechanical
connectivity between two optomechanical cavities. Driving the two cavities with
phase-correlated laser light results in a synthetic magnetic flux, which in
combination with dissipative coupling to the mechanical bath, leads to
nonreciprocal transport of photons with 35 dB of isolation. Additionally,
optical pumping with blue-detuned light manifests as a particle non-conserving
interaction between photons and phonons, resulting in directional optical
amplification of 12 dB in the isolator through direction. These results indicate
the feasibility of utilizing optomechanical circuits to create a more general
class of nonreciprocal optical devices, and further, to enable novel
topological phases for both light and sound on a microchip.
Comment: 18 pages, 8 figures, 4 appendices
Large tunable valley splitting in edge-free graphene quantum dots on boron nitride
Coherent manipulation of binary degrees of freedom is at the heart of modern
quantum technologies. Graphene offers two binary degrees: the electron spin and
the valley. Efficient spin control has been demonstrated in many solid state
systems, while exploitation of the valley has only recently been started, yet
without control on the single-electron level. Here, we show that van der Waals
stacking of graphene onto hexagonal boron nitride offers a natural platform for
valley control. We use a graphene quantum dot induced by the tip of a scanning
tunneling microscope and demonstrate valley splitting that is tunable from -5
to +10 meV (including valley inversion) by sub-10-nm displacements of the
quantum dot position. This boosts the range of controlled valley splitting by
about one order of magnitude. The tunable inversion of spin and valley states
should enable coherent superposition of these degrees of freedom as a first
step towards graphene-based qubits.
Phenomenology of event shapes at hadron colliders
We present results for matched distributions of a range of dijet event shapes
at hadron colliders, combining next-to-leading logarithmic (NLL) accuracy in
the resummation exponent, next-to-next-to leading logarithmic (NNLL) accuracy
in its expansion and next-to-leading order (NLO) accuracy in a pure alpha_s
expansion. This is the first time that such a matching has been carried out for
hadronic final-state observables at hadron colliders. We compare our results to
Monte Carlo predictions, with and without matching to multi-parton tree-level
fixed-order calculations. These studies suggest that hadron-collider event
shapes have significant scope for constraining both perturbative and
non-perturbative aspects of hadron-collider QCD. The differences between
various calculational methods also highlight the limits of relying on
simultaneous variations of renormalisation and factorisation scale in making
reliable estimates of uncertainties in QCD predictions. We also discuss the
sensitivity of event shapes to the topology of multi-jet events, which are
expected to appear in many New Physics scenarios.
Comment: 70 pages, 25 figures, additional material available from http://www.lpthe.jussieu.fr/~salam/pp-event-shapes
A Formalism for the Systematic Treatment of Rapidity Logarithms in Quantum Field Theory
Many observables in QCD rely upon the resummation of perturbation theory to
retain predictive power. Resummation follows after one factorizes the cross
section into the relevant modes. The class of observables which are sensitive
to soft recoil effects are particularly challenging to factorize and resum
since they involve rapidity logarithms. In this paper we will present a
formalism which allows one to factorize and resum the perturbative series for
such observables in a systematic fashion through the notion of a "rapidity
renormalization group". That is, a Collins-Soper-like equation is realized as a
renormalization group equation, but has a more universal applicability to
observables beyond the traditional transverse momentum dependent parton
distribution functions (TMDPDFs) and the Sudakov form factor. This formalism
has the feature that it allows one to track the (non-standard) scheme
dependence which is inherent in any scenario where one performs a resummation
of rapidity divergences. We present a pedagogical introduction to the formalism
by applying it to the well-known massive Sudakov form factor. The formalism is
then used to study observables of current interest. A factorization theorem for
the transverse momentum distribution of Higgs production is presented along
with the result for the resummed cross section at NLL. Our formalism allows one
to define gauge invariant TMDPDFs which are independent of both the hard
scattering amplitude and the soft function, i.e., they are universal. We
present details of the factorization and resummation of the jet broadening
cross section including a renormalization in pT space. We furthermore show how
to regulate and renormalize exclusive processes which are plagued by endpoint
singularities in such a way as to allow for a consistent resummation.
Comment: Typos in Appendix C corrected, as well as a typo in eq. 5.6
Omental infarction in the postpartum period: a case report and a review of the literature
Introduction:
Omental infarction is a rare and often misdiagnosed clinical event with nonspecific symptoms. It affects predominantly young and middle-aged women.
Case presentation:
This is a case report of a 26-year-old Caucasian woman with spontaneous omental infarction two weeks after normal vaginal delivery.
Conclusion:
Omental infarction is a differential diagnosis in the postpartum acute abdomen. As some cases of omental infarction, which are caused by torsion, can be adequately diagnosed via computed tomography, a conservative treatment strategy for patients without complications should be considered in order to avoid unnecessary surgical intervention.
Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison
A confusingly wide variety of temporally asymmetric learning rules exists related to reinforcement learning and/or to spike-timing dependent plasticity, many of which look exceedingly similar while displaying strongly different behavior. These rules often find their use in control tasks, for example in robotics, and for this rigorous convergence and numerical stability are required. The goal of this article is to review these rules and compare them in order to provide a better overview of their different properties. Two main classes will be discussed: temporal difference (TD) rules and correlation-based (differential Hebbian) rules, as well as some transition cases. In general we will focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine learning (non-neuronal) context, a solid mathematical theory has existed for TD-learning for several years. This can partly be transferred to a neuronal framework, too. On the other hand, only now has a more complete theory also emerged for differential Hebbian rules. In general, rules differ by their convergence conditions and their numerical stability, which can lead to very undesirable behavior when applying them. For TD, convergence can be enforced with a certain output condition assuring that the ÎŽ-error drops on average to zero (output control). Correlation-based rules, on the other hand, converge when one input drops to zero (input control). Temporally asymmetric learning rules treat situations where incoming stimuli follow each other in time. Thus, it is necessary to remember the first stimulus in order to relate it to the later-occurring second one. To this end, different types of so-called eligibility traces are used by these two types of rules. This aspect again leads to different properties of TD and differential Hebbian learning, as discussed here.
Thus, this paper, while also presenting several novel mathematical results, is mainly meant to provide a road map through the different neuronally emulated temporally asymmetric learning rules and their behavior, to provide some guidance for possible applications.
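The contrast drawn above between output control (TD) and input control (differential Hebbian) can be illustrated with schematic, discretized update rules; these forms and parameter names are simplifications for illustration, not the paper's continuous-time definitions:

```python
# Schematic, discretized versions of the two rule classes compared above.
# Both use a decaying eligibility trace of the earlier stimulus.

def td_update(w, delta, trace, eta=0.1):
    """TD-style update: weight change proportional to the TD (delta) error."""
    return w + eta * delta * trace

def diff_hebb_update(w, dv_dt, trace, eta=0.1):
    """Differential Hebbian update: input trace times derivative of output."""
    return w + eta * dv_dt * trace

# TD ("output control"): once the delta-error is zero, the weight stops changing.
assert td_update(0.7, delta=0.0, trace=0.5) == 0.7
# Differential Hebb ("input control"): once the traced input is zero, the
# weight stops changing, regardless of the output derivative.
assert diff_hebb_update(0.7, dv_dt=1.3, trace=0.0) == 0.7

print(td_update(1.0, delta=1.0, trace=1.0))   # 1.1
```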
Resummation of small-x double logarithms in QCD: semi-inclusive electron-positron annihilation
We have derived the coefficients of the highest three 1/x-enhanced small-x
logarithms of all timelike splitting functions and the coefficient functions
for the transverse fragmentation function in one-particle inclusive e^+e^-
annihilation at (in principle) all orders in massless perturbative QCD. For the
longitudinal fragmentation function we present the respective two highest
contributions. These results have been obtained from KLN-related decompositions
of the unfactorized fragmentation functions in dimensional regularization and
their structure imposed by the mass-factorization theorem. The resummation is
found to completely remove the huge small-x spikes present in the fixed-order
results for all quantities above, allowing for stable results down to very
small values of the momentum fraction and scaling variable x. Our calculations
can be extended to (at least) the corresponding alpha_s^n ln^(2n-l) x contributions
to the above quantities and their counterparts in deep-inelastic scattering.
Comment: 27 pages, LaTeX, 6 eps-figures
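The logarithmic tower being resummed can be restated schematically in LaTeX; this is an assumed schematic form based only on the "as^n ln^(2n-l) x" structure quoted in the abstract (with a_s the strong coupling in the paper's normalization), not the paper's exact formula:

```latex
% 1/x-enhanced small-x logarithms: at order a_s^n the enhanced terms behave as
%   (1/x) * a_s^n * ln^{2n-l} x ,
% and the resummation fixes the coefficients for the lowest values of l
% (the highest powers of the logarithm) at all orders n:
F(x, a_s) \;\simeq\; \frac{1}{x} \sum_{n \ge 1} a_s^{\,n}
           \sum_{l} c_{n,l}\, \ln^{2n-l} x
```

Summing the towers with the lowest few values of $l$ to all orders in $n$ is what removes the small-$x$ spikes of the fixed-order results mentioned above.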