A seed specific dose kernel method for low-energy brachytherapy dosimetry.
We describe a method for independently verifying the dose distributions from pre- and post-implant brachytherapy source distributions. Monte Carlo calculations have been performed to characterize the three-dimensional dose distribution in a water phantom from a low-energy brachytherapy source. The calculations are performed in a voxelized, Cartesian-coordinate geometry and normalized against a separate Monte Carlo calculation of the seed-specific air-kerma strength to produce an absolute dose grid with units of cGy h⁻¹ U⁻¹. The seed-specific, three-dimensional dose grid is stored as a text file for processing by a separate Visual Basic program. This program takes the coordinate position of each seed in the pre- or post-plan and sums the kernel file into a three-dimensional composite dose distribution. A kernel matrix size of 81 × 81 × 81 with a voxel size of 1.0 × 1.0 × 1.0 mm³ was chosen as a compromise between calculation time, kernel size, and truncation of the stored dose distribution as a function of radial distance from the midpoint of the seed. Good agreement is achieved for a representative pre- and post-plan comparison versus a commercial implementation of the TG-43 brachytherapy dosimetry protocol.
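The kernel-superposition step described above can be sketched as follows. This is a minimal illustration, not the authors' Visual Basic implementation: the function name, arguments, and grid sizes are hypothetical, assuming a precomputed per-seed kernel array centred on the seed midpoint and seed coordinates given in millimetres.

```python
import numpy as np

def compose_dose(kernel, seed_positions, grid_shape, voxel_mm=1.0):
    """Superpose a precomputed per-seed dose kernel (cGy h^-1 U^-1)
    at each seed position to build a composite 3-D dose grid.
    kernel: odd-sized array (e.g. 81x81x81) centred on the seed midpoint.
    seed_positions: iterable of (x, y, z) coordinates in mm."""
    dose = np.zeros(grid_shape)
    k = np.array(kernel.shape) // 2  # kernel half-width in voxels
    for pos in seed_positions:
        c = np.round(np.asarray(pos) / voxel_mm).astype(int)
        # clip the kernel to the part that falls inside the dose grid
        lo = np.maximum(c - k, 0)
        hi = np.minimum(c + k + 1, grid_shape)
        klo = lo - (c - k)
        khi = klo + (hi - lo)
        dose[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] += \
            kernel[klo[0]:khi[0], klo[1]:khi[1], klo[2]:khi[2]]
    return dose
```

Because each seed contributes an identical, pre-normalized kernel, the composite grid stays in absolute units (cGy h⁻¹ U⁻¹) and can be compared directly against a TG-43 calculation.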
Dosimetric characteristics of a new linear accelerator under gated operation.
Respiratory-gated radiotherapy may allow reduction of treatment margins, thus sparing healthy tissue and/or allowing dose escalation to the tumor. However, current commissioning and quality assurance of linear accelerators do not include evaluation of gated delivery. The purpose of this study is to test gated photon delivery of a Siemens ONCOR Avant-Garde linear accelerator. Dosimetric characteristics for gated and nongated delivery of 6-MV and 15-MV photons were compared over a range of doses and dose rates and for several gating regimes. Dose profiles were also compared using Kodak EDR2 and X-Omat V films for 6-MV and 15-MV photons for several dose rates and gating regimes. Results showed that the deviation is less than or equal to 0.6% for all dose levels evaluated, with the exception of the lowest dose (25 MU) delivered at an unrealistically high gating frequency of 0.5 Hz. At 400 MU, dose profile deviations along the central axes in the in-plane and cross-plane directions within 80% of the field size are below 0.7%. No unequivocally detectable dose profile deviation was observed for 50 MU. Based on comparison with widely accepted standards for conventional delivery, our results indicate that this LINAC is well suited for gated delivery of nondynamic fields.
IMRT QA using machine learning: A multi-institutional validation.
Purpose: To validate a machine learning approach to Virtual intensity-modulated radiation therapy (IMRT) quality assurance (QA) for accurately predicting gamma passing rates using different measurement approaches at different institutions.
Methods: A Virtual IMRT QA framework was previously developed using a machine learning algorithm based on 498 IMRT plans, in which QA measurements were performed using diode-array detectors and a 3% local/3 mm gamma criterion with a 10% threshold at Institution 1. An independent set of 139 IMRT measurements from a different institution, Institution 2, with QA data based on portal dosimetry using the same gamma index, was used to test the mathematical framework. Only pixels with ≥10% of the maximum calibrated units (CU) or dose were included in the comparison. Plans were characterized by 90 different complexity metrics. A weighted Poisson regression with Lasso regularization was trained to predict passing rates using the complexity metrics as input.
Results: The methodology predicted passing rates within 3% accuracy for all composite plans measured using diode-array detectors at Institution 1, and within 3.5% for 120 of 139 plans using portal dosimetry measurements performed on a per-beam basis at Institution 2. The remaining 19 measurements had large areas of low CU, where portal dosimetry has a larger disagreement with the calculated dose, so the failure was expected. These beams need further modeling in the treatment planning system to correct the under-response in low-dose regions. Important features selected by Lasso to predict gamma passing rates were: complete irradiated area outline (CIAO), jaw position, fraction of MLC leaves with gaps smaller than 20 or 5 mm, fraction of area receiving less than 50% of the total CU, fraction of the area receiving dose from penumbra, weighted average irregularity factor, and duty cycle.
Conclusions: We have demonstrated that Virtual IMRT QA can predict passing rates using different measurement techniques and across multiple institutions. Prediction of QA passing rates can have profound implications on the current IMRT process.
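The modelling step above can be sketched in a few lines. Note the caveats: the study fit a weighted Poisson regression with a Lasso (L1) penalty over 90 complexity metrics; scikit-learn's `PoissonRegressor` uses an L2 penalty, so this is only an approximate stand-in, and the feature matrix and target here are simulated, not the paper's data.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-ins for plan complexity metrics (the study used 90).
X = rng.normal(size=(200, 5))
# Simulated Poisson-like target, e.g. a count of failing pixels
# (100 - passing rate); driven by the first metric only.
y = rng.poisson(lam=np.exp(0.5 + 0.8 * X[:, 0]))
w = np.ones(len(y))  # per-plan weights, as in a weighted regression

# Approximation: L2-penalized Poisson GLM in place of the paper's
# Lasso-regularized fit (scikit-learn does not expose an L1 Poisson GLM).
model = PoissonRegressor(alpha=0.1)
model.fit(X, y, sample_weight=w)
pred = model.predict(X)
```

With a true L1 penalty (e.g. via statsmodels' elastic-net GLM with `L1_wt=1`), uninformative metrics would be driven exactly to zero, which is how the feature list above was selected.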
Comparison of transabdominal ultrasound and electromagnetic transponders for prostate localization.
The aim of this study is to compare two methodologies of prostate localization in a large cohort of patients. Daily prostate localization using B-mode ultrasound has been performed at the Nebraska Medical Center since 2000. More recently, a technology using electromagnetic transponders implanted within the prostate (Calypso®) was introduced into our clinic. With each technology, patients were localized initially using skin marks. Localization error distributions were determined from the offsets between the initial setup positions and those determined by ultrasound or Calypso. Ultrasound localization data were summarized from 16,619 imaging sessions spanning 7 years; Calypso localization data consist of 1,524 fractions in 41 prostate patients treated in the course of a clinical trial at five institutions and 640 localizations from the first 16 patients treated with our clinical system. Ultrasound and Calypso patients treated between March and September 2007 at the Nebraska Medical Center were analyzed and compared, allowing a single-institution comparison of the two technologies. In this group of patients, the isocenter determined by ultrasound-based localization is on average 5.3 mm posterior to that determined by Calypso, while the systematic and random errors and PTV margins calculated from the ultrasound localizations were three to four times smaller than those calculated from the Calypso localizations. Our study finds that there are systematic differences between Calypso and ultrasound for prostate localization.
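The systematic and random error components mentioned above are conventionally derived per axis from per-patient shift distributions, and a PTV margin is then computed from them. The sketch below uses the widely cited van Herk recipe, margin = 2.5Σ + 0.7σ; the abstract does not state which margin formula the study used, so treat this as one common choice, with invented input data.

```python
import numpy as np

def margin_from_shifts(shifts_by_patient):
    """Population localization-error statistics for one axis.
    shifts_by_patient: list of per-fraction shifts (mm), one list per patient.
    Returns (Sigma, sigma, margin) where
      Sigma  = SD of per-patient mean shifts (systematic component),
      sigma  = RMS of per-patient SDs (random component),
      margin = van Herk-style PTV margin 2.5*Sigma + 0.7*sigma
               (an assumption; other margin recipes exist)."""
    patient_means = [np.mean(s) for s in shifts_by_patient]
    Sigma = np.std(patient_means, ddof=1)
    sigma = np.sqrt(np.mean([np.var(s, ddof=1) for s in shifts_by_patient]))
    return Sigma, sigma, 2.5 * Sigma + 0.7 * sigma
```

Run per axis (AP, SI, LR) over the daily ultrasound or Calypso offsets, this yields the per-technology margins whose three-to-fourfold difference the study reports.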
Exploratory analysis using machine learning to predict for chest wall pain in patients with stage I non-small-cell lung cancer treated with stereotactic body radiation therapy.
Background and purpose: Chest wall toxicity is observed after stereotactic body radiation therapy (SBRT) for peripherally located lung tumors. We utilize machine learning algorithms to identify toxicity predictors and develop dose-volume constraints.
Materials and methods: Twenty-five patient, tumor, and dosimetric features were recorded for 197 consecutive patients with Stage I NSCLC treated with SBRT, 11 of whom (5.6%) developed CTCAEv4 grade ≥2 chest wall pain. Decision tree modeling was used to determine chest wall syndrome (CWS) thresholds for individual features. Significant features were determined using independent multivariate methods incorporating out-of-bag estimation with random forests (RF) and bootstrapping (100 iterations) with decision trees.
Results: Univariate analysis identified rib dose to 1 cc < 4000 cGy (P = 0.01), chest wall dose to 30 cc < 1900 cGy (P = 0.035), rib Dmax < 5100 cGy (P = 0.05), and lung dose to 1000 cc < 70 cGy (P = 0.039) as statistically significant thresholds for avoiding CWS. Subsequent multivariate analysis confirmed the importance of rib dose to 1 cc, chest wall dose to 30 cc, and rib Dmax. Using learning-curve experiments, the dataset proved to be self-consistent and provides a realistic model for CWS analysis.
Conclusions: Using machine learning algorithms in this first-of-its-kind study, we identify robust features and cutoffs predictive for the rare clinical event of CWS. Additional data from planned subsequent multicenter studies will help increase the accuracy of the multivariate analysis.
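The per-feature threshold search described above comes down to fitting a depth-1 decision tree (a stump), which places a single cutoff on one dosimetric feature. The sketch below does this on simulated data in which toxicity only occurs above roughly 4000 cGy, mirroring the reported rib-dose cutoff; the data and the 30% event rate are invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Hypothetical stand-in data: rib dose to 1 cc (cGy) for 300 patients,
# with CWS possible only above ~4000 cGy (invented, not the study data).
dose_1cc = rng.uniform(1000, 7000, size=300).reshape(-1, 1)
cws = (dose_1cc[:, 0] > 4000) & (rng.random(300) < 0.3)

# A depth-1 tree recovers a single dose threshold for this feature,
# the basic building block of the study's decision-tree modeling.
stump = DecisionTreeClassifier(max_depth=1).fit(dose_1cc, cws)
threshold = stump.tree_.threshold[0]  # the learned cGy cutoff
```

In the study this is repeated per feature, with RF out-of-bag estimation and bootstrapped trees then used to judge which thresholds are robust rather than artifacts of one fit.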
Validation and clinical implementation of an accurate Monte Carlo code for pencil beam scanning proton therapy.
Monte Carlo (MC)-based dose calculations are generally superior to analytical dose calculations (ADC) in modeling the dose distribution for proton pencil beam scanning (PBS) treatments. The purpose of this paper is to present a methodology for commissioning and validating an accurate MC code for PBS, utilizing a parameterized source model and including an implementation of a range shifter, that can independently check the ADC in a commercial treatment planning system (TPS) and the fast Monte Carlo dose calculation in the open-source platform MCsquare. The source model parameters (including beam size, angular divergence, and energy spread) and protons per MU were extracted and tuned at the nozzle exit by comparing Tool for Particle Simulation (TOPAS) simulations with a series of commissioning measurements using a scintillation screen/CCD camera detector and ionization chambers. The range shifter was simulated as an independent object with its geometric and material information. The MC calculation platform was validated through comprehensive measurements of single spots, field size factors (FSF), and three-dimensional dose distributions of spread-out Bragg peaks (SOBPs), both with and without the range shifter. Differences in field size factors and absolute output at various depths of SOBPs between measurement and simulation were within 2.2%, with and without a range shifter, indicating an accurate source model. TOPAS was also validated against anthropomorphic lung phantom measurements. Comparison of dose distributions and DVHs for representative liver and lung cases between the independent MC and analytical dose calculations from a commercial TPS further highlights the limitations of the ADC in highly heterogeneous geometries. The fast MC platform has been implemented within our clinical practice to provide additional independent dose validation/QA of the commercial ADC for patient plans. Using the independent MC, we can more efficiently commission the ADC by reducing the amount of measured data required for low-dose "halo" modeling, especially when a range shifter is employed.
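The FSF validation step amounts to a percent-difference check between measured and simulated output factors against the reported 2.2% agreement. A minimal sketch, with invented FSF values (the paper's actual measurements are not reproduced here):

```python
# Hypothetical measured vs. TOPAS-simulated field size factors (FSF),
# keyed by field size in cm; all numbers are invented for illustration.
measured  = {2: 0.912, 4: 0.958, 6: 0.981, 10: 1.000}
simulated = {2: 0.905, 4: 0.952, 6: 0.979, 10: 1.000}

TOLERANCE_PCT = 2.2  # agreement level reported in the study

results = {}
for fs in measured:
    # signed percent difference, simulation relative to measurement
    diff_pct = 100.0 * (simulated[fs] - measured[fs]) / measured[fs]
    results[fs] = diff_pct
    status = "OK" if abs(diff_pct) <= TOLERANCE_PCT else "FAIL"
    print(f"{fs:>2} cm: {diff_pct:+.2f}%  {status}")
```

The same pattern extends to absolute SOBP output at each depth, with and without the range shifter in the beam path.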
Building more accurate decision trees with the additive tree.
The expansion of machine learning to high-stakes application domains such as medicine, finance, and criminal justice, where making informed decisions requires clear understanding of the model, has increased the interest in interpretable machine learning. The widely used Classification and Regression Trees (CART) have played a major role in health sciences, due to their simple and intuitive explanation of predictions. Ensemble methods like gradient boosting can improve the accuracy of decision trees, but at the expense of the interpretability of the generated model. Additive models, such as those produced by gradient boosting, and full interaction models, such as CART, have been investigated largely in isolation. We show that these models exist along a spectrum, revealing previously unseen connections between these approaches. This paper introduces a rigorous formalization for the additive tree, an empirically validated learning technique for creating a single decision tree, and shows that this method can produce models equivalent to CART or gradient boosted stumps at the extremes by varying a single parameter. Although the additive tree is designed primarily to provide both the model interpretability and predictive performance needed for high-stakes applications like medicine, it can also produce decision trees represented by hybrid models between CART and boosted stumps that outperform either of these approaches.
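The two extremes of the spectrum described above are easy to fit side by side with standard tooling: a single CART-style tree versus gradient-boosted depth-1 stumps (a purely additive model). This sketch shows only those extremes on synthetic data; the additive tree itself, which interpolates between them with a single parameter, is not part of scikit-learn.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification problem (invented, for illustration only).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Extreme 1: a single full-interaction tree (CART-style).
cart = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xtr, ytr)

# Extreme 2: a purely additive model of boosted depth-1 stumps.
stumps = GradientBoostingClassifier(max_depth=1, n_estimators=100,
                                    random_state=0).fit(Xtr, ytr)

cart_acc = cart.score(Xte, yte)
stump_acc = stumps.score(Xte, yte)
print(f"CART accuracy:          {cart_acc:.3f}")
print(f"Boosted-stump accuracy: {stump_acc:.3f}")
```

The paper's contribution is that the hybrid points between these two fits can be expressed as a single, inspectable decision tree rather than an ensemble.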
Expert-augmented machine learning.
Machine learning is proving invaluable across disciplines. However, its success is often limited by the quality and quantity of available data, while its adoption is limited by the level of trust afforded by given models. Human vs. machine performance is commonly compared empirically to decide whether a certain task should be performed by a computer or an expert. In reality, the optimal learning strategy may involve combining the complementary strengths of humans and machines. Here, we present expert-augmented machine learning (EAML), an automated method that guides the extraction of expert knowledge and its integration into machine-learned models. We used a large dataset of intensive-care patient data to derive 126 decision rules that predict hospital mortality. Using an online platform, we asked 15 clinicians to assess the relative risk of the subpopulation defined by each rule compared to the total sample. We compared the clinician-assessed risk to the empirical risk and found that, while clinicians agreed with the data in most cases, there were notable exceptions where they overestimated or underestimated the true risk. Studying the rules with greatest disagreement, we identified problems with the training data, including one miscoded variable and one hidden confounder. Filtering the rules based on the extent of disagreement between clinician-assessed risk and empirical risk, we improved performance on out-of-sample data and were able to train with less data. EAML provides a platform for automated creation of problem-specific priors, which help build robust and dependable machine-learning models in critical applications.
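The filtering step at the heart of EAML can be sketched as a simple comparison of expert-assessed and data-derived risks per rule. The rule texts, risk numbers, and the 0.5 cutoff below are all invented for illustration; the paper derives its rules from ICU data and its thresholds from out-of-sample performance.

```python
# Each rule pairs a clinician-assessed relative risk with the empirical
# relative risk observed in the data (all values hypothetical).
rules = [
    {"rule": "age > 80 and lactate > 4", "clinician_rr": 3.0, "empirical_rr": 2.8},
    {"rule": "not on_vasopressors",      "clinician_rr": 0.6, "empirical_rr": 2.1},
    {"rule": "gcs < 8",                  "clinician_rr": 2.5, "empirical_rr": 2.4},
]

def disagreement(r):
    """Absolute gap between expert-assessed and empirical relative risk."""
    return abs(r["clinician_rr"] - r["empirical_rr"])

CUTOFF = 0.5  # illustrative disagreement tolerance

# Keep rules where expert and data roughly agree; flag the rest, since
# large gaps may indicate miscoded variables or hidden confounders.
kept = [r for r in rules if disagreement(r) <= CUTOFF]
flagged = [r["rule"] for r in rules if disagreement(r) > CUTOFF]
print(flagged)
```

Flagged rules serve two purposes in the paper: they localize data problems (as with the miscoded variable), and dropping or down-weighting them acts as an expert-derived prior that improved out-of-sample performance.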