A Hierarchical Evolutionary Algorithm for Multiobjective Optimization in IMRT
Purpose: Current inverse planning methods for IMRT are limited because they
are not designed to explore the trade-offs between the competing objectives
of the tumor and normal tissues. Our goal was to develop an efficient
multiobjective optimization algorithm that was flexible enough to handle any
form of objective function and that resulted in a set of Pareto optimal plans.
Methods: We developed a hierarchical evolutionary multiobjective algorithm
designed to quickly generate a diverse Pareto optimal set of IMRT plans that
meet all clinical constraints and reflect the trade-offs in the plans. The top
level of the hierarchical algorithm is a multiobjective evolutionary algorithm
(MOEA). The genes of the individuals generated in the MOEA are the parameters
that define the penalty function minimized during an accelerated deterministic
IMRT optimization that represents the bottom level of the hierarchy. The MOEA
incorporates clinical criteria to restrict the search space through protocol
objectives and then uses Pareto optimality among the fitness objectives to
select individuals.
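The Pareto-optimality selection at the top level can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the tuples below stand in for the fitness objectives of candidate plans, all to be minimized.

```python
# Illustrative sketch (not the authors' code): Pareto-dominance filtering
# as used to select individuals in the top-level MOEA. Each objective
# vector stands in for a plan's fitness objectives (all minimized).

def dominates(a, b):
    """True if plan a dominates plan b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Toy fitness vectors: (target underdose penalty, OAR dose penalty)
plans = [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (2.5, 3.5)]
print(pareto_front(plans))  # (2.5, 3.5) is dominated by (2.0, 3.0)
```

In the full algorithm this filter acts on individuals whose genes are penalty-function parameters, after the bottom-level deterministic IMRT optimization has evaluated each one.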
Results: Acceleration techniques implemented on both levels of the
hierarchical algorithm resulted in short, practical runtimes for optimizations.
The MOEA improvements were evaluated on example prostate cases with one target
and two OARs. The modified MOEA dominated 11.3% of the plans generated by a
standard genetic algorithm package. By implementing domination advantage and
protocol objectives, small diverse populations of clinically acceptable plans,
dominated by the Pareto front by only 0.2%, could be generated in a fraction
of an hour.
Conclusions: Our MOEA produces a diverse Pareto optimal set of plans that
meet all dosimetric protocol criteria in a feasible amount of time. It
optimizes not only beamlet intensities but also objective function parameters
on a patient-specific basis.
Superiorization and Perturbation Resilience of Algorithms: A Continuously Updated Bibliography
This document presents a (mostly) chronologically ordered bibliography of
scientific publications on the superiorization methodology and perturbation
resilience of algorithms, which we compile and continuously update at:
http://math.haifa.ac.il/yair/bib-superiorization-censor.html. We have tried to
trace the work published on this topic since its inception. To the best of our
knowledge this bibliography
represents all available publications on this topic to date, and while the URL
is continuously updated we will revise this document and bring it up to date on
arXiv approximately once a year. Abstracts of the cited works, and some links
and downloadable files of preprints or reprints are available on the above
mentioned Internet page. If you know of a related scientific work in any form
that should be included here, kindly write to me at: [email protected] with
full bibliographic details, a DOI if available, and a PDF copy of the work if
possible. The Internet page was initiated on March 7, 2015, and has been last
updated on March 12, 2020.
Comment: Original report: June 13, 2015 contained 41 items. First revision:
March 9, 2017 contained 64 items. Second revision: March 8, 2018 contained 76
items. Third revision: March 11, 2019 contained 90 items. Fourth revision:
March 16, 2020 contains 112 items.
Enhancement of Sandwich Algorithms for Approximating Higher Dimensional Convex Pareto Sets
In many fields, we come across problems where we want to optimize several conflicting objectives simultaneously. To find a good solution for such multi-objective optimization problems, an approximation of the Pareto set is often generated. In this paper, we consider the approximation of Pareto sets for problems with three or more convex objectives and with convex constraints. For these problems, sandwich algorithms can be used to determine an inner and outer approximation between which the Pareto set is 'sandwiched'. Using these two approximations, we can calculate an upper bound on the approximation error. This upper bound can be used to determine which parts of the approximations must be improved and to provide a quality guarantee to the decision maker.

In this paper, we extend higher dimensional sandwich algorithms in three different ways. Firstly, we introduce the new concept of adding dummy points to the inner approximation of a Pareto set. By using these dummy points, we can determine accurate inner and outer approximations more efficiently, i.e., using fewer time-consuming optimizations. Secondly, we introduce a new method for the calculation of an error measure which is easy to interpret. The combination of easy calculation and easy interpretation makes this measure very suitable for sandwich algorithms. Thirdly, we show how transforming certain objective functions can improve the results of sandwich algorithms and extend their applicability to certain non-convex problems. The calculation of the introduced error measure when using transformations will also be discussed.

To show the effect of these enhancements, we make a numerical comparison using four test cases, including a four-dimensional case from the field of intensity-modulated radiation therapy (IMRT).
The results of the different cases show that we can indeed achieve an accurate approximation using significantly fewer optimizations by using the enhancements.
Keywords: Convexity; e-efficiency; e-Pareto optimality; Geometric programming; Higher dimensional; Inner and outer approximation; IMRT; Pareto set; Multi-objective optimization; Sandwich algorithms; Transformations
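The sandwich principle is easiest to see with two convex objectives. Below is a minimal illustration, not the paper's higher-dimensional algorithm: for the convex front f2 = 1/f1 (an arbitrary example), the chord between two Pareto points lies on the dominated side (inner approximation) while a supporting tangent line lies on the ideal side (outer approximation), so their gap bounds the approximation error.

```python
# Minimal two-objective illustration of the sandwich idea (hypothetical
# front f2 = 1/f1; not the paper's algorithm). The chord between Pareto
# points over-estimates f2 (inner approximation), the tangent line
# under-estimates it (outer approximation); the gap bounds the error.

def front(f1):
    return 1.0 / f1  # true convex Pareto front (illustrative choice)

a, b = 1.0, 2.0          # two computed Pareto points
mid = 0.5 * (a + b)      # where we assess the approximation gap

# Inner approximation: chord through (a, front(a)) and (b, front(b))
chord = front(a) + (front(b) - front(a)) * (mid - a) / (b - a)

# Outer approximation: tangent at a, slope d/df1 (1/f1) = -1/f1^2
tangent = front(a) + (-1.0 / a**2) * (mid - a)

error_bound = chord - tangent  # sandwich gap; true front lies in between
print(chord, tangent, error_bound)
```

Refining the approximations where this gap is largest is what lets sandwich algorithms spend expensive optimizations only where they improve the quality guarantee.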
A Bayesian approach for energy-based estimation of acoustic aberrations in high intensity focused ultrasound treatment
High intensity focused ultrasound is a non-invasive method for treatment of
diseased tissue that uses a beam of ultrasound to generate heat within a small
volume. A common challenge in application of this technique is that
heterogeneity of the biological medium can defocus the ultrasound beam. Here we
reduce the problem of refocusing the beam to the inverse problem of estimating
the acoustic aberration due to the biological tissue from acoustic radiative
force imaging data. We solve this inverse problem within a Bayesian framework
with a hierarchical prior, sampling the posterior using a
Metropolis-within-Gibbs algorithm. The framework is tested using both synthetic
and experimental datasets. We demonstrate that our approach has the ability to
estimate the aberrations from small datasets, with as few as 32 sonication
tests, which can lead to significant speedup in the treatment process.
Furthermore, our approach is compatible with a wide range of sonication tests
and can be applied to other energy-based measurement techniques.
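Generically, a Metropolis-within-Gibbs sampler alternates exact conditional (Gibbs) draws for tractable coordinates with Metropolis updates for the rest. The sketch below uses a toy two-variable density, not the paper's aberration model:

```python
import math, random

# Generic Metropolis-within-Gibbs sketch on a toy target (not the paper's
# model): p(x, y) proportional to exp(-x^2/2) * exp(-(y - x)^2/2).
# y | x is exactly Normal(x, 1), so it gets a Gibbs draw; x is updated
# with a random-walk Metropolis step conditional on the current y.

def log_target(x, y):
    return -0.5 * x * x - 0.5 * (y - x) ** 2

def sample(n_iter=20000, seed=0):
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    xs = []
    for _ in range(n_iter):
        y = rng.gauss(x, 1.0)                # Gibbs draw: y | x ~ N(x, 1)
        x_prop = x + rng.gauss(0.0, 1.0)     # random-walk proposal for x
        if math.log(rng.random()) < log_target(x_prop, y) - log_target(x, y):
            x = x_prop                       # Metropolis accept
        xs.append(x)
    return xs

xs = sample()
print(sum(xs) / len(xs))  # marginal of x is N(0, 1), so the mean is near 0
```

In the paper's setting the Metropolis component would update the aberration parameters against the radiative-force data likelihood, with the hierarchical prior's hyperparameters handled by the Gibbs steps.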
Dosimetric evidence confirms computational model for magnetic field induced dose distortions of therapeutic proton beams
Given the sensitivity of proton therapy to anatomical variations, this cancer
treatment modality is expected to benefit greatly from integration with
magnetic resonance (MR) imaging. Among the obstacles hindering such an
integration are strong magnetic field induced dose distortions. These have been
predicted in simulation studies, but no experimental validation has been
performed so far. Here we show the first measurement of planar distributions of
dose deposited by therapeutic proton pencil beams traversing a one-Tesla
transversal magnetic field while depositing energy in a tissue-like phantom
using film dosimetry. The lateral beam deflection ranges from one millimeter to
one centimeter for 80 to 180 MeV beams. Simulated and measured deflection agree
within one millimeter for all studied energies. These results prove that the
magnetic field induced proton beam deflection is both measurable and accurately
predictable. This demonstrates the feasibility of accurate dose measurement and
hence validates dose predictions for the framework of MR-integrated proton
therapy.
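The reported millimeter-to-centimeter deflections can be sanity-checked with a back-of-envelope estimate, not the paper's validated model: treat the proton as moving on a circular arc with its entry momentum, so the lateral deflection after path length L in field B is roughly L²/(2r) with gyroradius r = p/(qB). Energy loss along the track is ignored, and the water ranges used are approximate textbook values.

```python
import math

# Order-of-magnitude check (not the paper's computational model): lateral
# deflection of a proton pencil beam in a transversal field, assuming the
# entry momentum is kept over the whole (approximate) range in water.

M_P = 938.272  # proton rest energy, MeV

def momentum_mev_c(kinetic_mev):
    """Relativistic momentum p*c in MeV for a proton of given kinetic energy."""
    return math.sqrt(kinetic_mev * (kinetic_mev + 2.0 * M_P))

def deflection_m(kinetic_mev, range_m, b_tesla=1.0):
    pc_joule = momentum_mev_c(kinetic_mev) * 1e6 * 1.602e-19  # p*c in joules
    p_si = pc_joule / 2.998e8                                 # p in kg*m/s
    r = p_si / (1.602e-19 * b_tesla)                          # gyroradius, m
    return range_m ** 2 / (2.0 * r)                           # small-angle arc sagitta

# Approximate ranges in water: ~5 cm at 80 MeV, ~22 cm at 180 MeV
print(deflection_m(80.0, 0.051))   # on the order of 1 mm
print(deflection_m(180.0, 0.217))  # on the order of 1 cm
```

The crude estimate reproduces the one-millimeter-to-one-centimeter span quoted for 80 to 180 MeV beams in a one-Tesla field, which is why the effect is both measurable and predictable.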
Fast Monte Carlo Simulations for Quality Assurance in Radiation Therapy
Monte Carlo (MC) simulation is generally considered to be the most accurate method for dose calculation in radiation therapy. However, it suffers from low simulation efficiency (hours to days) and complex configuration, which impede its application in clinical studies. The recent rise of MRI-guided radiation platforms (e.g. ViewRay's MRIdian system) brings an urgent need for fast MC algorithms, because the strong magnetic field these platforms introduce may cause large errors in other algorithms. My dissertation focuses on resolving the conflict between accuracy and efficiency of MC simulations through four different approaches: (1) GPU parallel computation, (2) transport mechanism simplification, (3) variance reduction, and (4) DVH constraints. Accordingly, we took several steps to thoroughly study how these methods affect performance and accuracy.

As a result, three Monte Carlo simulation packages named gPENELOPE, gDPMvr and gDVH were developed to balance performance and accuracy in different application scenarios. For example, the most accurate, gPENELOPE, is typically used as a gold standard for radiation meter modeling, while the fastest, gDVH, is typically used for quick in-patient dose calculation, reducing the calculation time from 5 hours to 1.2 minutes (250 times faster) with only 1% error introduced. In addition, a cross-platform GUI integrating the simulation kernels and 3D visualization was developed to make the toolkit more user-friendly.

After this fast MC infrastructure was established, we successfully applied it to four radiotherapy scenarios: (1) Validate the vendor-provided Co-60 radiation head model by comparing the dose calculated by gPENELOPE to experimental data; (2) Quantitatively study the effect of the magnetic field on the dose distribution and propose a strategy to improve treatment planning efficiency; (3) Evaluate the accuracy of the built-in MC algorithm of MRIdian's treatment planning system.
(4) Perform quick quality assurance (QA) for the "online adaptive radiation therapy" workflow, which does not permit enough time for experimental QA. Many other time-sensitive applications (e.g. motional dose accumulation) will also benefit greatly from this fast MC infrastructure.
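The core MC idea behind packages like these can be seen even in a toy slab-attenuation simulation, vastly simpler than gPENELOPE or gDPM and purely illustrative: sampling exponential free paths and counting uncollided photons reproduces the analytic transmission exp(-mu*d), with statistical error shrinking only as the square root of the number of histories, which is exactly the efficiency problem the dissertation attacks.

```python
import math, random

# Toy Monte Carlo transport sketch (illustrative only, far simpler than
# the packages described above): photons with attenuation coefficient mu
# cross a slab of thickness d; the surviving fraction should match the
# analytic transmission exp(-mu * d).

def transmitted_fraction(mu, d, n, seed=0):
    rng = random.Random(seed)
    survived = 0
    for _ in range(n):
        # distance to first interaction, sampled from an exponential law
        s = -math.log(1.0 - rng.random()) / mu
        if s > d:                # photon crosses the slab uncollided
            survived += 1
    return survived / n

mu, d = 0.2, 5.0                 # e.g. cm^-1 and cm
est = transmitted_fraction(mu, d, 100_000)
print(est, math.exp(-mu * d))    # both near 0.368
```

GPU parallelization, transport simplification, and variance reduction all aim at the same bottleneck visible here: many independent histories are needed before the estimate converges.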
Algorithm and performance of a clinical IMRT beam-angle optimization system
This paper describes the algorithm and examines the performance of an IMRT
beam-angle optimization (BAO) system. In this algorithm successive sets of beam
angles are selected from a set of predefined directions using a fast simulated
annealing (FSA) algorithm. An IMRT beam-profile optimization is performed on
each generated set of beams. The IMRT optimization is accelerated by using a
fast dose calculation method that utilizes a precomputed dose kernel. A compact
kernel is constructed for each of the predefined beams prior to starting the
FSA algorithm. The IMRT optimizations during the BAO are then performed using
these kernels in a fast dose calculation engine. This technique allows the IMRT
optimization to be performed more than two orders of magnitude faster than a
similar optimization that uses a convolution dose calculation engine.
Comment: Final version that appeared in Phys. Med. Biol. 48 (2003) 3191-3212.
Original EPS figures have been converted to PNG files due to size limits.
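The outer loop of such a system can be sketched as a fast-simulated-annealing search over beam sets. This is a schematic toy: the real inner step is a full IMRT profile optimization using the precomputed dose kernels, whereas `plan_cost` below is a hypothetical surrogate that merely rewards well-spread beam angles, and the 1/k temperature schedule is the "fast" annealing choice.

```python
import math, random

# Schematic fast-simulated-annealing loop for beam-angle optimization.
# `plan_cost` is a hypothetical stand-in for the kernel-based IMRT
# optimization described in the paper; here it just penalizes small
# angular separations between the selected beams.

ANGLES = list(range(0, 360, 10))        # predefined candidate directions

def plan_cost(beams):
    beams = sorted(beams)
    # consecutive gaps around the circle, including the wraparound gap
    gaps = [(b - a) % 360 for a, b in zip(beams, beams[1:] + beams[:1])]
    return sum(1.0 / (g + 1.0) for g in gaps)

def fsa_bao(n_beams=5, n_iter=2000, seed=0):
    rng = random.Random(seed)
    beams = rng.sample(ANGLES, n_beams)
    cost = plan_cost(beams)
    for k in range(1, n_iter + 1):
        temp = 1.0 / k                   # fast annealing: T_k proportional to 1/k
        cand = beams[:]
        cand[rng.randrange(n_beams)] = rng.choice(ANGLES)
        if len(set(cand)) < n_beams:
            continue                     # reject duplicate beam angles
        c = plan_cost(cand)
        if c < cost or rng.random() < math.exp(-(c - cost) / temp):
            beams, cost = cand, c        # accept improved (or lucky) plan
    return sorted(beams), cost

print(fsa_bao())  # beams end up roughly evenly spread around the circle
```

The paper's two-orders-of-magnitude speedup matters precisely because this loop must evaluate an IMRT optimization at every candidate beam set.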