A GPU-based multi-criteria optimization algorithm for HDR brachytherapy
In current HDR brachytherapy planning, manual fine-tuning of an objective
function is necessary to obtain case-specific valid plans. This study aims
to facilitate this process by proposing a patient-specific inverse planning
algorithm for HDR prostate brachytherapy: GPU-based multi-criteria optimization
(gMCO).
Two GPU-based optimization engines, simulated annealing (gSA) and a
quasi-Newton optimizer (gL-BFGS), were implemented to compute multiple plans in
parallel. After evaluating the equivalence and the computational performance of
these two engines, the preferred optimization engine was selected
for the gMCO algorithm. Five hundred sixty-two previously treated prostate HDR
cases were divided into a validation set (100 cases) and a test set (462
cases). On the validation set, the number of Pareto-optimal plans required to
achieve the best plan quality was determined for the gMCO algorithm. On the
test set, gMCO plans were compared with the physician-approved clinical plans.
Over the 462 test cases, the number of clinically valid plans was 428 (92.6%)
for clinical plans and 461 (99.8%) for gMCO plans. The number of valid plans
with target V100 coverage greater than 95% was 288 (62.3%) for clinical plans
and 414 (89.6%) for gMCO plans. The mean planning time for the gMCO algorithm
to generate 1000 Pareto-optimal plans was 9.4 s.
In conclusion, gL-BFGS can compute thousands of SA-equivalent treatment plans
within a short time frame. Powered by gL-BFGS, an ultra-fast and robust
multi-criteria optimization algorithm was implemented for HDR prostate
brachytherapy. A large-scale comparison against physician-approved clinical
plans showed that the proposed gMCO algorithm can improve treatment plan
quality and significantly reduce planning time.
Comment: 18 pages, 7 figures
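As a rough illustration of the multi-criteria idea above, one can generate many Pareto-optimal trade-offs by optimizing many independently weighted scalarizations in parallel. The following is a minimal sketch only: the two toy quadratic objectives and the plain gradient-descent inner loop (standing in for gL-BFGS) are illustrative assumptions, not the paper's actual dose objectives or optimizer.

```python
import random

# Toy competing objectives (stand-ins for, e.g., target coverage vs. organ sparing).
def f1(x): return (x - 1.0) ** 2
def f2(x): return (x + 1.0) ** 2

def scalarized_descent(w, steps=200, lr=0.1):
    """Minimize w*f1 + (1-w)*f2 by gradient descent (a stand-in for L-BFGS)."""
    x = 0.0
    for _ in range(steps):
        grad = w * 2 * (x - 1.0) + (1 - w) * 2 * (x + 1.0)
        x -= lr * grad
    return x

random.seed(0)
# Each random weight yields one point on the Pareto front; all 50 runs are
# independent, which is what makes the GPU-parallel version natural.
pareto = sorted(scalarized_descent(random.random()) for _ in range(50))
print(len(pareto), round(min(pareto), 2), round(max(pareto), 2))
```

For this toy problem the Pareto-optimal set is exactly the interval [-1, 1], so every scalarized solution lands inside it; the real algorithm does the same with clinical dose objectives and many more weight vectors.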
A Parallel Divide-and-Conquer based Evolutionary Algorithm for Large-scale Optimization
Large-scale optimization problems involving thousands of decision variables
arise extensively in various industrial areas. Although evolutionary
algorithms (EAs) are a powerful optimization tool for many real-world
applications, they fail to solve these emerging large-scale problems both
effectively and efficiently. In this paper, we propose a novel
Divide-and-Conquer (DC) based EA that not only produces high-quality solutions
by solving the sub-problems separately, but also exploits parallel computing
by solving them simultaneously. Existing DC-based EAs, which were deemed to
enjoy the same advantages as the proposed algorithm, are shown to be
practically incompatible with the parallel computing scheme unless trade-offs
are made that compromise solution quality.
Comment: 12 pages, 0 figures
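A minimal single-process sketch of the divide-and-conquer scheme described above: split a high-dimensional problem into independent blocks, evolve each block with its own small evolution strategy, and concatenate the results. The separable sphere objective, the (1+1)-ES, and all parameters are illustrative assumptions; in the parallel scheme each block would run simultaneously on its own worker.

```python
import random

def sphere(block):
    """Separable toy objective: a sum over coordinates, so blocks decouple."""
    return sum(v * v for v in block)

def evolve_block(seed, dim=2, gens=300, sigma=0.3):
    """(1+1)-ES on one sub-problem. Blocks are independent, so in a
    parallel DC-based EA each call would run on its own worker."""
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    best_f = sphere(best)
    for _ in range(gens):
        child = [v + rng.gauss(0, sigma) for v in best]
        f = sphere(child)
        if f < best_f:
            best, best_f = child, f
    return best

# Divide: a 10-D problem becomes five 2-D blocks.
# Conquer: optimize each block separately. Combine: concatenate.
solution = [v for s in range(5) for v in evolve_block(seed=s)]
print(round(sphere(solution), 3))
```

On a genuinely non-separable problem this naive split degrades solution quality, which is exactly the trade-off the abstract attributes to earlier DC-based EAs.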
Cooperative Coevolution for Non-Separable Large-Scale Black-Box Optimization: Convergence Analyses and Distributed Accelerations
Given the ubiquity of non-separable optimization problems in the real world, in
this paper we analyze and extend the large-scale version of the well-known
cooperative coevolution (CC), a divide-and-conquer optimization framework, on
non-separable functions. First, we reveal the empirical reasons why
decomposition-based methods are or are not preferred in practice on some
non-separable large-scale problems, which have not been clearly identified in
many previous CC papers. Then, we formalize CC as a continuous game model via
simplification, without losing its essential properties. Unlike previous
evolutionary game theory for CC, our new model provides a much simpler but
useful viewpoint for analyzing convergence, since only the pure Nash
equilibrium concept is needed and more general fitness landscapes can be
explicitly considered. Based on the convergence analyses, we propose a
hierarchical decomposition strategy for better generalization, since for any
decomposition there is a risk of getting trapped in a suboptimal Nash
equilibrium. Finally, we use distributed computing to accelerate CC under a
multi-level learning framework, which combines the fine-tuning ability of
decomposition with the invariance properties of CMA-ES. Experiments on a set of
high-dimensional functions validate both its search performance and its
scalability (w.r.t. CPU cores) on a clustered computing platform with 400 CPU
cores.
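The game-theoretic view of CC sketched above can be illustrated with a toy two-player quadratic, where exact best responses stand in for each species' evolutionary search: each sub-component is optimized while the other is frozen in the context vector, and a fixed point of this round-robin process is a pure Nash equilibrium. The specific function and coupling strength are assumptions chosen for illustration.

```python
def f(x, y):
    """Mildly non-separable quadratic: the cross term couples the two players."""
    return x * x + y * y + 0.5 * x * y

def best_response_x(y):
    """argmin_x f(x, y): solve 2x + 0.5y = 0."""
    return -0.25 * y

def best_response_y(x):
    """argmin_y f(x, y): solve 2y + 0.5x = 0."""
    return -0.25 * x

# CC as a continuous game: each "species" best-responds to the current
# context vector; convergence here is to the unique pure Nash equilibrium.
x, y = 4.0, -3.0
for _ in range(20):
    x = best_response_x(y)   # optimize sub-component x with y frozen
    y = best_response_y(x)   # optimize sub-component y with x frozen
print(round(x, 6), round(y, 6))
```

For this weakly coupled function the iteration contracts to (0, 0), the global optimum; with stronger coupling or multimodal landscapes the same dynamics can instead settle into a suboptimal Nash equilibrium, which motivates the hierarchical decomposition strategy in the abstract.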
SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology
Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software used to run the experiment must be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user interface and application framework to customize SBSI for their own uses, facilitated by SBSI's use of standard data formats.
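The parameter-fitting task that SBSI targets can be posed as minimizing a sum-of-squared-errors objective between model output and data. The sketch below is not SBSI's actual API: the exponential-decay model, the synthetic data, and the crude random search (standing in for SBSINumerics' parallelized algorithms) are all assumptions for illustration; note that each candidate evaluation is independent, which is what makes the fitting loop parallelizable.

```python
import math
import random

# Synthetic "experimental" data generated from y = a * exp(-k * t)
# with true parameters a = 2.0, k = 0.5 (assumed for this example).
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
data = [2.0 * math.exp(-0.5 * t) for t in ts]

def sse(a, k):
    """Sum of squared errors between model and data: the fitting objective."""
    return sum((a * math.exp(-k * t) - y) ** 2 for t, y in zip(ts, data))

# Random search over the parameter box; every sse() call is independent,
# so a real engine could farm these evaluations out to back-end servers.
rng = random.Random(1)
best = min(((rng.uniform(0, 5), rng.uniform(0, 2)) for _ in range(20000)),
           key=lambda p: sse(*p))
print(round(best[0], 2), round(best[1], 2))
```

A production fitter would replace random search with the kinds of parallelized optimization algorithms the abstract describes, but the objective and the division of labour (objective evaluation vs. job dispatch vs. result visualisation) are the same.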
Massively Parallel Genetic Optimization through Asynchronous Propagation of Populations
We present Propulate, an evolutionary optimization algorithm and software
package for global optimization and in particular hyperparameter search. For
efficient use of HPC resources, Propulate omits the synchronization after each
generation as done in conventional genetic algorithms. Instead, it steers the
search with the complete population present at the time of breeding new
individuals. We provide an MPI-based implementation of our algorithm, which
features variants of selection, mutation, crossover, and migration and is easy
to extend with custom functionality. We compare Propulate to the established
optimization tool Optuna. We find that Propulate is up to three orders of
magnitude faster without sacrificing solution accuracy, demonstrating the
efficiency and efficacy of our lazy synchronization approach. Code and
documentation are available at https://github.com/Helmholtz-AI-Energy/propulate
Comment: 18 pages, 5 figures; submitted to ISC High Performance 202
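The lazy-synchronization idea above can be sketched in a single process: instead of waiting at a generation barrier, each new individual is bred from whatever population currently exists and, if good enough, inserted immediately. The onemax objective, the operator choices, and the replace-worst policy are illustrative assumptions, not Propulate's MPI implementation.

```python
import random

def fitness(ind):
    """Toy objective: count of ones (onemax), to be maximized."""
    return sum(ind)

rng = random.Random(42)
N, L = 20, 30
population = [[rng.randint(0, 1) for _ in range(L)] for _ in range(N)]

# Steady-state loop with no generation barrier: each iteration breeds from
# the population as it currently stands, mimicking lazy synchronization
# (in the MPI version, each worker would do this concurrently).
for _ in range(2000):
    a, b = rng.sample(population, 2)     # parents from the live population
    cut = rng.randrange(L)
    child = a[:cut] + b[cut:]            # one-point crossover
    child[rng.randrange(L)] ^= 1         # single-bit mutation
    worst = min(range(N), key=lambda j: fitness(population[j]))
    if fitness(child) > fitness(population[worst]):
        population[worst] = child        # immediate, asynchronous insertion

best = max(population, key=fitness)
print(fitness(best))
```

Because no iteration ever waits for a whole generation to finish, idle time disappears; the cost is that parents may be slightly stale, which is the trade-off the abstract's evaluation shows to be benign.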
Various island-based parallel genetic algorithms for the 2-page drawing problem
Genetic algorithms have been applied successfully to the 2-page drawing
problem, but they work with one global population, so the search time and
space are limited. Parallelization offers an attractive prospect for improving
the efficiency and solution quality of genetic algorithms. One of the most
popular tools for parallel computing is the Message Passing Interface (MPI).
In this paper, we present four island models of parallel genetic algorithms
with MPI: island models with linear, grid, and random-graph topologies, and an
island model with periodic synchronisation. We compare their efficiency and
solution quality for the 2-page drawing problem on a variety of graphs.
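An island model with a ring (linear) topology and periodic migration can be sketched as follows. The continuous toy objective, the (mu+1) update, and the replace-worst migration policy are assumptions for illustration, not the paper's 2-page drawing setup; a real MPI version would run each island in its own process and migrate via message passing.

```python
import random

def fitness(x):
    """Toy objective to maximize: negative sphere (stands in for a
    2-page-drawing quality measure such as negated crossing number)."""
    return -sum(v * v for v in x)

def step(island, rng, sigma=0.2):
    """One (mu+1) update: mutate the island's best, replace its worst."""
    parent = max(island, key=fitness)
    child = [v + rng.gauss(0, sigma) for v in parent]
    island.sort(key=fitness)             # island[0] is now the worst
    if fitness(child) > fitness(island[0]):
        island[0] = child

rng = random.Random(3)
ISLANDS, POP, DIM = 4, 8, 5
islands = [[[rng.uniform(-3, 3) for _ in range(DIM)] for _ in range(POP)]
           for _ in range(ISLANDS)]

for gen in range(400):
    for isl in islands:
        step(isl, rng)
    if gen % 50 == 49:                   # periodic migration, ring topology
        for i, isl in enumerate(islands):
            neighbor = islands[(i + 1) % ISLANDS]
            migrant = max(isl, key=fitness)
            neighbor.sort(key=fitness)
            neighbor[0] = list(migrant)  # migrant replaces neighbor's worst

best = max((ind for isl in islands for ind in isl), key=fitness)
print(round(-fitness(best), 3))
```

The topology only changes which neighbor receives each migrant: a grid or random graph substitutes a different neighborhood for `(i + 1) % ISLANDS`, and the fully synchronised variant would instead exchange migrants at a global barrier.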