
    Pre-segmented 2-Step IMRT with subsequent direct machine parameter optimisation – a planning study

    Background: Modern intensity-modulated radiotherapy (IMRT) mostly uses iterative optimisation methods, and integrating machine parameters into the optimisation of step-and-shoot leaf positions has been shown to be successful. For IMRT segmentation algorithms based on the analysis of the geometrical structure of the planning target volume (PTV) and the organs at risk (OAR), the potential of such procedures has not yet been fully explored. In this work, 2-Step IMRT was combined with subsequent direct machine parameter optimisation (DMPO; RaySearch Laboratories, Sweden) to investigate this potential.
    Methods: In a planning study, DMPO on a commercial planning system was compared with manual primary 2-Step IMRT segment generation followed by DMPO optimisation, using 15 clinical cases and the ESTRO Quasimodo phantom, the same number of optimisation steps, and the same set of objective values. The plans were compared with a clinical DMPO reference plan and a traditional IMRT plan based on fluence optimisation with subsequent segmentation. The composite objective value (the weighted sum of quadratic deviations between the objective values and the corresponding points in the dose-volume histogram) was used as the measure of plan quality. Additionally, a more extensive set of parameters was used to compare the plans for the breast cases.
    Results: The plans with segments pre-defined by 2-Step IMRT were slightly superior to DMPO alone in the majority of cases, and the composite objective value tended to be even lower for a smaller number of segments. The total number of monitor units was slightly higher than for the DMPO plans. Traditional IMRT fluence optimisation with subsequent segmentation could not compete.
    Conclusion: 2-Step IMRT segmentation is a suitable starting point for further DMPO optimisation and, in general, results in less complex plans that are equal or superior to plans generated by DMPO alone.
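    The composite objective value used as the plan-quality measure above is a weighted quadratic penalty over dose-volume histogram (DVH) points. A minimal sketch of that computation, with hypothetical objectives, weights, and doses (none of the numbers come from the study):

```python
# Composite objective value: weighted sum of quadratic deviations between
# each objective dose and the dose achieved at the corresponding DVH point.
# All objectives, weights and doses are illustrative placeholders.

def composite_objective(objectives):
    """objectives: iterable of (weight, objective_dose_gy, achieved_dose_gy)."""
    return sum(w * (achieved - target) ** 2
               for w, target, achieved in objectives)

# Lower composite values indicate better plans.
plan = [
    (1.0, 60.0, 59.2),  # PTV: dose at 95% volume should reach 60 Gy
    (0.5, 20.0, 21.5),  # OAR: dose at 30% volume should stay below 20 Gy
]
print(composite_objective(plan))  # 1.765
```

    Note that clinical optimisers usually apply one-sided penalties (only violations count); the symmetric form above simply mirrors the abstract's wording.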

    Dosimetric evaluation of Acuros XB Advanced Dose Calculation algorithm in heterogeneous media

    Background: A study was performed to evaluate and determine relative figures of merit of a new algorithm for photon dose calculation when applied to inhomogeneous media.
    Methods: The new Acuros XB algorithm implemented in the Varian Eclipse treatment planning system was compared against a Monte Carlo method (VMC++) and the Analytical Anisotropic Algorithm (AAA). The study was carried out in virtual phantoms characterized by simple geometrical structures. One phantom (setting "A"), built of skeletal muscle with HU = 0, contained an insert of varying material and density: normal lung (lung, 0.198 g/cm³), light lung (lung, 0.035 g/cm³), or bone (bone, 1.798 g/cm³). Another phantom (setting "B") was built of adipose material and included thin layers of bone (1.85 g/cm³), adipose (0.92 g/cm³), cartilage (1.4745 g/cm³), and air (0.0012 g/cm³). Investigations were performed for 6 and 15 MV photon beams, and for a large (13 × 13 cm²) and a small (2.8 × 13 cm²) field.
    Results: Results are given in terms of depth-dose curves, transverse profiles, and gamma analysis (3 mm/3% and 2 mm/2% distance-to-agreement/dose-difference criteria) in planes parallel to the beam central axis, with the Monte Carlo simulations taken as reference. For dose-to-medium calculations with the 3 mm/3% criterion, Acuros XB gave an average gamma agreement of 100%, 86% and 100% for the normal lung, light lung and bone settings, respectively. The corresponding figures for AAA, which can only calculate dose rescaled to water, were 86%, 11% and 100%.
    Conclusions: The Acuros XB algorithm provides a valid and accurate alternative to Monte Carlo calculations for heterogeneity management.
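    The pass rates above come from gamma analysis, which combines a dose-difference and a distance-to-agreement criterion into a single index per point (gamma ≤ 1 counts as agreement). A brute-force 1D sketch of the index, with an illustrative test curve and global normalisation assumed (the published comparisons apply the same idea in 2D planes):

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=3.0, dd_frac=0.03):
    """1D global gamma evaluation (Low et al. formulation).

    x_*: positions in mm; d_*: doses. dd_frac is taken relative to the
    reference maximum (global normalisation). Returns gamma per point.
    """
    dd_abs = dd_frac * d_ref.max()
    gammas = np.empty_like(d_ref, dtype=float)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        # Generalised distance in the combined position/dose space,
        # minimised over all evaluated points.
        dist2 = ((x_eval - xr) / dta_mm) ** 2 + ((d_eval - dr) / dd_abs) ** 2
        gammas[i] = np.sqrt(dist2.min())
    return gammas

# Example: evaluated profile shifted 1 mm against the reference.
x = np.linspace(0.0, 100.0, 201)
ref = np.exp(-((x - 50.0) / 20.0) ** 2)
ev = np.exp(-((x - 51.0) / 20.0) ** 2)
g = gamma_index_1d(x, ref, x, ev)
print(f"3 mm/3% pass rate: {100.0 * np.mean(g <= 1.0):.1f}%")
```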

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents covering a variety of research fields, against which newly developed literature search techniques could be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH (RELISH) consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) articles. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency–Inverse Document Frequency and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to capture all relevant articles. The database server at https://relishdb.ict.griffith.edu.au is freely available for downloading the annotation data and for blind testing of new methods. We expect this benchmark to stimulate the development of powerful new techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
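    Of the three baselines, Okapi BM25 is the simplest to reproduce: it scores a candidate document against the seed article's terms with a saturating term-frequency weight and document-length normalisation. A minimal sketch with toy documents (corpus and parameters are illustrative, and this uses the common log(1 + ...) IDF variant, not necessarily the exact formulation used in the study):

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Okapi BM25 ranking. docs: list of token lists; returns one score per doc."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter(t for d in docs for t in set(d))  # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            # Saturating term frequency with length normalisation.
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [
    "gene expression profiling in lung cancer".split(),
    "deep learning for protein structure prediction".split(),
    "lung cancer screening with low dose ct".split(),
]
print(bm25_scores("lung cancer".split(), docs))  # first and third score highest
```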

    Photon-beam subsource sensitivity to the initial electron-beam parameters

    One limitation to the widespread implementation of Monte Carlo (MC) patient dose-calculation algorithms for radiotherapy is the lack of a general and accurate source model of the accelerator radiation source. Our aim in this work is to investigate, for 6- and 18-MV photon beams, how sensitive the photon-beam subsource distributions in an MC source model (with target, primary-collimator, and flattening-filter photon subsources and an electron subsource) are to changes in the energy and radial distributions of the initial electrons striking the linac target. For this purpose, phase-space data (PSD) were calculated for various mean electron energies striking the target, various normally distributed electron energy spreads, and various normally distributed electron radial intensity distributions. All PSD were analyzed in terms of energy, fluence, and energy-fluence distributions, which were compared between the different parameter sets. The energy spread was found to have a negligible influence on the subsource distributions. The mean energy and the radial intensity significantly changed the shapes and intensities of the target subsource distributions. For the primary-collimator and flattening-filter subsources, the shapes of the fluence and energy-fluence distributions changed little with the mean electron energy striking the target; however, their intensities relative to the target subsource changed, which can be accounted for by a scaling factor. This study indicates that adjustments to MC source models can likely be limited to adjusting the target subsource, in conjunction with scaling the relative intensity and energy spectrum of the primary-collimator, flattening-filter, and electron subsources, when the energy and radial distributions of the initial electron beam change.
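    The closing suggestion amounts to a cheap retuning recipe: when the initial electron-beam parameters change, rebuild only the target subsource and rescale the remaining subsources instead of regenerating full phase-space data. A minimal sketch of that bookkeeping, with hypothetical subsource names, weights, and scale factors (none taken from the paper):

```python
# Retune a multi-subsource photon source model: replace the target
# subsource weight and apply per-subsource scaling factors to the rest,
# then renormalise so the relative fluence weights sum to unity.
# All names and numbers below are illustrative placeholders.

def retune_source_model(weights, target_weight_new, scale_factors):
    """weights: dict of relative fluence weights per subsource.

    target_weight_new replaces the target entry; scale_factors holds
    per-subsource multipliers (hypothetical, e.g. fitted to measurements).
    """
    tuned = {"target": target_weight_new}
    for name in ("primary_collimator", "flattening_filter", "electron"):
        tuned[name] = weights[name] * scale_factors.get(name, 1.0)
    total = sum(tuned.values())
    return {k: v / total for k, v in tuned.items()}

weights = {"target": 0.90, "primary_collimator": 0.03,
           "flattening_filter": 0.06, "electron": 0.01}
print(retune_source_model(weights, 0.92, {"flattening_filter": 0.9}))
```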