24 research outputs found

    k-strip: A novel segmentation algorithm in k-space for the application of skull stripping

    Objectives: Present a novel deep learning-based skull stripping algorithm for magnetic resonance imaging (MRI) that works directly in the information-rich k-space. Materials and Methods: Using two datasets from different institutions with a total of 36,900 MRI slices, we trained a deep learning-based model to work directly with the complex raw k-space data. Skull stripping performed by HD-BET (Brain Extraction Tool) in the image domain was used as the ground truth. Results: Results on both datasets were very similar to the ground truth (DICE scores of 92%-98% and Hausdorff distances of under 5.5 mm). Slices above the eye region reach DICE scores of up to 99%, while the accuracy drops in the regions around and below the eyes, with partially blurred output. The output of k-strip often shows smoothed edges at the demarcation to the skull. Binary masks are created with an appropriate threshold. Conclusion: With this proof-of-concept study, we were able to show the feasibility of working in the k-space frequency domain, preserving phase information, with consistent results. Future research should be dedicated to discovering additional ways the k-space can be used for innovative image analysis and further workflows. Comment: 11 pages, 6 figures, 2 tables
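    A minimal sketch (not the authors' code) of the image/k-space relationship the method relies on and of the final thresholding step that turns a soft prediction into a binary brain mask; the array shapes and the 0.5 threshold are illustrative assumptions:

```python
# Minimal sketch: illustrate the image <-> k-space relationship that k-strip
# operates on, and the thresholding step that produces a binary brain mask.
import numpy as np

def to_kspace(image_slice: np.ndarray) -> np.ndarray:
    """2D image -> complex k-space (centered)."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image_slice)))

def to_image(kspace_slice: np.ndarray) -> np.ndarray:
    """Complex k-space -> complex image; phase information is preserved."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace_slice)))

def binarize(soft_output: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binary mask from a soft (magnitude) prediction; threshold is illustrative."""
    return (np.abs(soft_output) >= threshold).astype(np.uint8)

# Example: a synthetic slice round-trips through k-space without loss.
slice_2d = np.random.rand(256, 256)
recovered = to_image(to_kspace(slice_2d))
assert np.allclose(slice_2d, recovered.real, atol=1e-10)
mask = binarize(recovered)
```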

    Small beams, fast predictions: a comparison of machine learning dose prediction models for proton minibeam therapy

    Background: Dose calculations for novel radiotherapy cancer treatments such as proton minibeam radiation therapy are often done using full Monte Carlo (MC) simulations. As MC simulations can be very time-consuming for this kind of application, deep learning models have been considered to accelerate dose estimation in cancer patients. Purpose: This work systematically evaluates the dose prediction accuracy, speed and generalization performance of three selected state-of-the-art deep learning models for dose prediction applied to proton minibeam therapy. The strengths and weaknesses of those models are thoroughly investigated, helping other researchers to decide on a viable algorithm for their own application. Methods: The following recently published models are compared: first, a 3D U-Net model trained as a regression network; second, a 3D U-Net trained as the generator of a generative adversarial network (GAN); and third, a dose transformer model which interprets the dose prediction as a sequence translation task. These models are trained to emulate the result of MC simulations. The dose depositions of a proton minibeam with a diameter of 800 μm and an energy of 20–100 MeV inside a simple head phantom, calculated by full Geant4 MC simulations, are used as a case study for this comparison. The spatial resolution is 0.5 mm. Special attention is put on the evaluation of the generalization performance of the investigated models. Results: Dose predictions with all models are produced on the order of a second on a GPU, the 3D U-Net models being fastest with an average of 130 ms. The investigated 3D U-Net regression model shows the strongest performance, with overall 61.0% ± 0.5% of all voxels exhibiting a deviation in energy deposition prediction of less than 3% compared to full MC simulations, with no spatial deviation allowed. The 3D U-Net models show better generalization performance for target geometry variations, while the transformer-based model generalizes better with regard to the proton energy. Conclusions: This paper reveals that (1) all studied deep learning models are significantly faster than non-machine-learning approaches, predicting the dose on the order of seconds compared to hours for MC, (2) all models provide reasonable accuracy, and (3) the regression-trained 3D U-Net provides the most accurate predictions.
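    The accuracy criterion quoted above (fraction of voxels deviating by less than 3% from the MC reference, with no spatial deviation allowed) can be sketched as follows; normalizing by the maximum MC energy deposition is an assumption, the paper may normalize differently:

```python
# Minimal sketch of a voxel-wise pass rate against a Monte Carlo reference.
import numpy as np

def voxel_pass_rate(prediction: np.ndarray,
                    mc_reference: np.ndarray,
                    tolerance: float = 0.03) -> float:
    """Fraction of voxels whose deviation from the MC reference, relative to the
    maximum MC energy deposition, is below `tolerance` (no spatial tolerance)."""
    deviation = np.abs(prediction - mc_reference) / mc_reference.max()
    return float(np.mean(deviation < tolerance))

# Example with random dose cubes (shapes are illustrative).
mc = np.random.rand(64, 64, 64)
pred = mc + np.random.normal(scale=0.01, size=mc.shape)
print(f"pass rate: {voxel_pass_rate(pred, mc):.1%}")
```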

    Fast and accurate dose predictions for novel radiotherapy treatments in heterogeneous phantoms using conditional 3D‐UNet generative adversarial networks

    Purpose: Novel radiotherapy techniques like synchrotron X-ray microbeam radiation therapy (MRT) require fast dose distribution predictions that are accurate at the sub-mm level, especially close to tissue/bone/air interfaces. Monte Carlo (MC) physics simulations are recognized to be one of the most accurate tools to predict the dose delivered in a target tissue but can be very time-consuming and therefore prohibitive for treatment planning. Faster dose prediction algorithms are usually developed for clinically deployed treatments only. In this work, we explore a new approach for fast and accurate dose estimations suitable for novel treatments, using digital phantoms employed in preclinical development and modern machine learning techniques. We develop a generative adversarial network (GAN) model, which is able to emulate the equivalent Geant4 MC simulation with adequate accuracy, and use it to predict the radiation dose delivered by a broad synchrotron beam to various phantoms. Methods: The energy depositions used for the training of the GAN are obtained using full Geant4 MC simulations of a synchrotron radiation broad beam passing through the phantoms. The energy deposition is scored and predicted in voxel matrices of size 140 × 18 × 18 with a voxel edge length of 1 mm. The GAN model consists of two competing 3D convolutional neural networks, which are conditioned on the photon beam and phantom properties. The generator network has a U-Net structure and is designed to predict the energy depositions of the photon beam inside three phantoms of variable geometry with increasing complexity. The critic network is a relatively simple convolutional network, which is trained to distinguish energy depositions predicted by the generator from the ones obtained with the full MC simulation. Results: The energy deposition predictions inside all phantom geometries under investigation show deviations of less than 3% of the maximum deposited energy from the simulation for roughly 99% of the voxels in the field of the beam. Inside the most realistic phantom, a simple pediatric head, the model predictions deviate by less than 1% of the maximal energy deposition from the simulations in more than 96% of the in-field voxels. For all three phantoms, the model generalizes the energy deposition predictions well to phantom geometries which have not been used for training the model but are interpolations of the training data in multiple dimensions. The computing time for a single prediction is reduced from several hundred hours using the Geant4 simulation to less than a second using the GAN model. Conclusions: The proposed GAN model predicts dose distributions inside unknown phantoms with only small deviations from the full MC simulation, with computation times of less than a second. It demonstrates good interpolation ability to unseen but similar phantom geometries and is flexible enough to be trained on data with different radiation scenarios without the need for optimization of the model parameters. This proof-of-concept encourages applying and further developing the model for use in MRT treatment planning, which requires fast and accurate predictions with sub-mm resolution.
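    A shape-level sketch (not the published architecture) of a conditional 3D generator with a U-Net-style skip connection and a small convolutional critic, written in PyTorch; the channel counts, kernel sizes and the choice to inject the beam/phantom condition as an extra input channel are assumptions:

```python
# Minimal sketch of a conditional 3D GAN for energy deposition prediction.
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    def __init__(self, cond_channels: int = 1):
        super().__init__()
        self.down = nn.Sequential(  # single downsampling stage for illustration
            nn.Conv3d(cond_channels, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose3d(16, 16, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
        )
        self.out = nn.Conv3d(16 + cond_channels, 1, kernel_size=1)  # skip connection

    def forward(self, condition: torch.Tensor) -> torch.Tensor:
        x = self.up(self.down(condition))
        return self.out(torch.cat([x, condition], dim=1))  # predicted energy deposition

class Critic3D(nn.Module):
    def __init__(self, cond_channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1 + cond_channels, 16, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(16, 1),  # real/fake score
        )

    def forward(self, dose: torch.Tensor, condition: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([dose, condition], dim=1))

# Example with the voxel grid quoted in the abstract (140 x 18 x 18, 1 mm voxels).
cond = torch.rand(2, 1, 140, 18, 18)   # phantom/beam description as a volume
fake = Generator3D()(cond)             # generator's predicted energy deposition
score = Critic3D()(fake, cond)         # critic score for the generated sample
print(fake.shape, score.shape)
```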

    A step towards treatment planning for microbeam radiation therapy: fast peak and valley dose predictions with 3D U-Nets

    Fast and accurate dose predictions are one of the bottlenecks in treatment planning for microbeam radiation therapy (MRT). In this paper, we propose a machine learning (ML) model based on a 3D U-Net. Our approach separately predicts the high doses of the narrow, high-intensity synchrotron microbeams and the lower valley doses between them. For this purpose, a concept of macro peak doses and macro valley doses is introduced, describing the respective doses not on a microscopic level but as macroscopic quantities in larger voxels. The ML model is trained to mimic full Monte Carlo (MC) data. Complex physical effects such as polarization are therefore automatically taken into account by the model. The macro dose distribution approach described in this study allows superimposing single microbeam predictions to form a beam array field, making it an interesting candidate for treatment planning. It is shown that the proposed approach can overcome a main obstacle of microbeam dose predictions by predicting a full microbeam irradiation field in less than a minute while maintaining reasonable accuracy. Comment: accepted for publication in the IFMBE Proceedings on the World Congress on Medical Physics and Biomedical Engineering 202
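    A minimal sketch of the superposition idea, assuming a regular voxel grid, laterally shift-invariant single-beam macro dose predictions and a fixed centre-to-centre spacing; the function name and field dimensions are illustrative:

```python
# Minimal sketch: build an array-field dose by superimposing shifted copies
# of a single-microbeam macro dose prediction.
import numpy as np

def superimpose_beams(single_beam_dose: np.ndarray,
                      beam_offsets_voxels: list[int],
                      lateral_axis: int = 1) -> np.ndarray:
    """Sum laterally shifted copies of a single-beam macro dose prediction.
    Note: np.roll wraps around at the edges; a real implementation would pad."""
    field = np.zeros_like(single_beam_dose)
    for offset in beam_offsets_voxels:
        field += np.roll(single_beam_dose, shift=offset, axis=lateral_axis)
    return field

# Example: 9 beams spaced every 4 voxels around the field centre.
single = np.random.rand(140, 64, 64)   # depth x lateral x lateral (illustrative)
offsets = [4 * i for i in range(-4, 5)]
array_field = superimpose_beams(single, offsets)
```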

    Accurate and fast deep learning dose prediction for a preclinical microbeam radiation therapy study using low-statistics Monte Carlo simulations

    Microbeam radiation therapy (MRT) utilizes coplanar synchrotron radiation beamlets and is a proposed treatment approach for several tumour diagnoses that currently have poor clinical treatment outcomes, such as gliosarcomas. Prescription dose estimations for treating preclinical gliosarcoma models in MRT studies at the Imaging and Medical Beamline at the Australian Synchrotron currently rely on Monte Carlo (MC) simulations. The steep dose gradients associated with the 50 μm wide coplanar beamlets present a significant challenge for precise MC simulation of the MRT irradiation treatment field in a short time frame. Much research has been conducted on fast dose estimation methods for clinically available treatments. However, such methods, including GPU Monte Carlo implementations and machine learning (ML) models, are unavailable for novel and emerging cancer radiation treatment options like MRT. In this work, the successful application of a fast and accurate machine learning dose prediction model in a retrospective preclinical MRT rodent study is presented for the first time. The ML model predicts the peak doses in the path of the microbeams and the valley doses between them, delivered to the gliosarcoma in rodent patients. The predictions of the ML model show excellent agreement with low-noise MC simulations, especially within the investigated tumour volume. This agreement is achieved despite the ML model being deliberately trained with MC-calculated samples exhibiting significantly higher statistical uncertainties. The successful use of high-noise training set data samples, which are much faster to generate, encourages and accelerates the transfer of the ML model to different treatment modalities for other future applications in novel radiation cancer therapies.
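    A minimal sketch of how low-statistics training targets could be emulated from a low-noise reference, assuming purely statistical per-voxel noise that scales as 1/sqrt(N) with the number of simulated histories; this is a modelling assumption, not the study's actual data generation procedure:

```python
# Minimal sketch: inflate the statistical noise of a reference dose to mimic
# a Monte Carlo run with fewer histories (sigma ~ 1/sqrt(N)).
import numpy as np

rng = np.random.default_rng(0)

def noisy_mc_target(reference_dose: np.ndarray,
                    relative_sigma_reference: float,
                    histories_fraction: float) -> np.ndarray:
    """Reference dose with per-voxel noise scaled for a reduced history count."""
    sigma = relative_sigma_reference / np.sqrt(histories_fraction)
    return reference_dose * (1.0 + rng.normal(scale=sigma, size=reference_dose.shape))

# Example: a sample "simulated" with 1% of the reference histories has ten
# times the relative statistical uncertainty of the low-noise reference.
low_noise = np.random.rand(64, 64, 64)
high_noise_sample = noisy_mc_target(low_noise, relative_sigma_reference=0.005,
                                    histories_fraction=0.01)
```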

    Sarcoma classification by DNA methylation profiling

    Sarcomas are malignant soft tissue and bone tumours affecting adults, adolescents and children. They represent a morphologically heterogeneous class of tumours, and some entities lack defining histopathological features. Therefore, the diagnosis of sarcomas is burdened with high inter-observer variability and a high misclassification rate. Here, we demonstrate classification of soft tissue and bone tumours using a machine learning classifier algorithm based on array-generated DNA methylation data. This sarcoma classifier is trained using a dataset of 1077 methylation profiles from comprehensively pre-characterized cases comprising 62 tumour methylation classes, constituting a broad range of soft tissue and bone sarcoma subtypes across the entire age spectrum. The performance is validated in a cohort of 428 sarcomatous tumours, of which 322 cases were classified by the sarcoma classifier. Our results demonstrate the potential of DNA methylation-based sarcoma classification for research and future diagnostic applications.
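    A minimal sketch of this kind of classification setup, using a scikit-learn random forest on array-derived beta values as a stand-in; the data here are random placeholders and the published classifier, its preprocessing and score calibration differ in detail:

```python
# Minimal sketch: classify tumours into methylation classes from beta values.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: 1077 profiles x 2,000 CpG beta values, 62 methylation classes.
n_profiles, n_cpgs, n_classes = 1077, 2_000, 62
betas = rng.random((n_profiles, n_cpgs))
labels = rng.integers(0, n_classes, size=n_profiles)

X_train, X_test, y_train, y_test = train_test_split(
    betas, labels, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Class probability scores allow a confidence cut-off before reporting a
# methylation class, mirroring that only a subset of cases is classified.
probabilities = clf.predict_proba(X_test)
print("mean top-class score:", probabilities.max(axis=1).mean())
```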

    Microbeams - quick and dirty

    Microbeam radiation therapy (MRT) is a promising yet preclinical radiotherapy treatment for several tumour diagnoses, such as gliosarcoma and radioresistant melanoma, for which even modern clinical treatments such as intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) yield poor outcome perspectives. The dose prediction during MRT treatment planning, as for most other novel radiotherapies, is mostly performed with very time-consuming Monte Carlo (MC) simulations. This slows down preclinical research processes and renders treatment plan optimization infeasible. In this thesis, several milestones for the introduction of a fast machine learning (ML) dose calculation method for MRT are presented. First, a 3D U-Net-based ML dose engine is developed using MC training data obtained with Geant4 simulations of a synchrotron broadbeam incident on different bone slab models and a simplified human head phantom as a proof of concept. The developed model is shown to produce dose predictions within less than 100 ms, which is substantially faster than the used MC simulations (up to 20 hours) and also the currently fastest approximative MRT dose prediction approach, called HybridDC (approximately 30 minutes). The model is also shown to be superior to a dose prediction approach using generative adversarial networks (GANs) and to a novel transformer-based ML model called Dose Transformer (DoTA), with which it is compared for application in proton minibeam radiation therapy (pMBRT) in a subsequent study. Secondly, the developed ML model and the MC simulations for data generation are extended to account for the spatially fractionated nature of MRT. For this, a novel MC scoring method is developed which is able to produce separate dose estimations for the high-dose peak regions, where the microbeams traverse the phantoms, and the low-dose valley regions in between those beams. Finally, the developed ML model and the MC scoring method are deployed in a first application of an ML dose prediction method in a preclinical MRT study in collaboration with the University of Wollongong, Australia, conducted at the Imaging and Medical Beamline (IMBL) at the Australian Synchrotron, which aimed at treating rats after implanting gliosarcoma cells. It is shown that the ML model can be trained to provide unbiased dose estimations in complex target phantoms even when trained on high-noise MC data, an important finding for the acceleration of future developments of ML models, as such datasets can be produced significantly faster. The ML predictions in the rat phantoms deviate by at most 10% from the MC simulations, rendering the proposed model a suitable candidate for fast dose predictions during treatment plan optimization in the future.
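    A minimal sketch of the peak/valley separation idea behind the new MC scoring method, assuming beams on a regular lateral grid with a fixed 50 µm width and fixed spacing; assigning voxels by distance to the nearest beam centre is an illustrative choice, not the thesis' implementation:

```python
# Minimal sketch: split a lateral dose profile into peak and valley regions
# and average the dose per region to obtain macro peak and valley doses.
import numpy as np

def peak_valley_masks(lateral_coords_um: np.ndarray,
                      beam_centres_um: np.ndarray,
                      beam_width_um: float = 50.0):
    """Boolean peak/valley masks along the lateral axis."""
    distance = np.min(np.abs(lateral_coords_um[:, None] - beam_centres_um[None, :]),
                      axis=1)
    peak = distance <= beam_width_um / 2
    return peak, ~peak

# Example: 400 µm beam spacing across a 4 mm wide lateral profile.
lateral = np.arange(0.0, 4000.0, 5.0)        # lateral voxel centres in µm
centres = np.arange(200.0, 4000.0, 400.0)    # microbeam centres in µm
peak_mask, valley_mask = peak_valley_masks(lateral, centres)

dose_profile = np.random.rand(lateral.size)  # illustrative lateral dose profile
macro_peak_dose = dose_profile[peak_mask].mean()
macro_valley_dose = dose_profile[valley_mask].mean()
```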