A Hierarchical Evolutionary Algorithm for Multiobjective Optimization in IMRT
Purpose: Current inverse planning methods for IMRT are limited because they are not designed to explore the trade-offs between the competing objectives of the tumor and normal tissues. Our goal was to develop an efficient multiobjective optimization algorithm that was flexible enough to handle any form of objective function and that produced a set of Pareto optimal plans.
Methods: We developed a hierarchical evolutionary multiobjective algorithm
designed to quickly generate a diverse Pareto optimal set of IMRT plans that
meet all clinical constraints and reflect the trade-offs in the plans. The top
level of the hierarchical algorithm is a multiobjective evolutionary algorithm
(MOEA). The genes of the individuals generated in the MOEA are the parameters
that define the penalty function minimized during an accelerated deterministic
IMRT optimization that represents the bottom level of the hierarchy. The MOEA
incorporates clinical criteria to restrict the search space through protocol
objectives and then uses Pareto optimality among the fitness objectives to
select individuals.
Results: Acceleration techniques implemented on both levels of the
hierarchical algorithm resulted in short, practical runtimes for optimizations.
The MOEA improvements were evaluated for example prostate cases with one target
and two OARs. The modified MOEA dominated 11.3% of the plans produced by a standard genetic algorithm package. By implementing domination advantage and protocol objectives, small, diverse populations of clinically acceptable plans that were dominated by the Pareto front by only 0.2% could be generated in a fraction of an hour.
Conclusions: Our MOEA produces a diverse Pareto optimal set of plans that
meet all dosimetric protocol criteria in a feasible amount of time. It
optimizes not only beamlet intensities but also objective function parameters
on a patient-specific basis.
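The two-level structure lends itself to a compact sketch: the genes of each MOEA individual are the penalty-function weights handed to a lower-level deterministic optimizer, protocol objectives prune infeasible plans, and Pareto domination drives selection. The sketch below is illustrative only; `inner_optimize`, its toy objective surrogate, and the `PROTOCOL_MAX` limits are assumptions, not the authors' code.

```python
import random

# Minimal sketch of a hierarchical MOEA, assuming a toy inner optimizer.

PROTOCOL_MAX = (0.9, 0.9)  # stand-in protocol objectives (clinical limits)

def inner_optimize(genes):
    """Bottom level (stand-in): deterministic optimization of beamlet
    intensities for fixed penalty-function weights, returning the plan's
    fitness objectives (target coverage error, OAR dose). The closed form
    below is a toy surrogate, not a dose calculation."""
    w_target, w_oar = genes
    coverage_error = w_oar / (w_target + w_oar) + 0.05 / w_target
    oar_dose = w_target / (w_target + w_oar) + 0.05 / w_oar
    return coverage_error, oar_dose

def meets_protocol(objectives):
    # Restrict the search space: discard plans violating protocol objectives.
    return all(o <= limit for o, limit in zip(objectives, PROTOCOL_MAX))

def dominates(a, b):
    # Pareto domination (minimization): no worse everywhere, better somewhere.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def moea(pop_size=20, generations=30):
    """Top level: evolve penalty-function parameters, selecting by Pareto
    optimality among the fitness objectives of the resulting plans."""
    pop = [(random.uniform(0.1, 10.0), random.uniform(0.1, 10.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        children = [(max(0.1, w1 * random.uniform(0.8, 1.25)),
                     max(0.1, w2 * random.uniform(0.8, 1.25)))
                    for w1, w2 in pop]
        scored = [(g, inner_optimize(g)) for g in pop + children]
        feasible = [s for s in scored if meets_protocol(s[1])] or scored
        front = [g for g, f in feasible
                 if not any(dominates(f2, f) for _, f2 in feasible)]
        rest = [g for g, _ in feasible if g not in front]
        pop = (front + rest)[:pop_size]   # non-dominated individuals first
    return [(g, inner_optimize(g)) for g in pop]

if __name__ == "__main__":
    for genes, objectives in moea()[:5]:
        print(genes, objectives)
```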
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory, and it sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in it continues to grow because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time ("efficient") algorithms, while most of them are NP-hard, i.e. it has not been proved that they can be solved in polynomial time. In practice, this means that an exact solution cannot be guaranteed and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find, quickly (in reasonable run-times) and with high probability, provably good solutions (with small error relative to the true optimum). In the last 20 years, a new class of algorithms commonly called metaheuristics has emerged, which combine heuristics in high-level frameworks aimed at exploring the search space efficiently and effectively. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two significant forces of intensification and diversification, which mainly determine the behavior of a metaheuristic, are pointed out. The report concludes by exploring the importance of hybridization and integration methods.
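As a concrete illustration of intensification and diversification, the following minimal simulated-annealing sketch (a hypothetical toy example, not taken from the report) greedily accepts improving neighbours while occasionally accepting worsening moves with a temperature-controlled probability.

```python
import math
import random

# Simulated annealing on a toy combinatorial objective, illustrating
# intensification (greedy acceptance) vs. diversification (accepting
# worse moves with probability exp(-delta / t)).

def cost(bits):
    # Toy objective: number of 1-bits (to be minimized).
    return sum(bits)

def neighbour(bits):
    # Flip one random bit to obtain a neighbouring solution.
    flipped = bits[:]
    i = random.randrange(len(flipped))
    flipped[i] ^= 1
    return flipped

def simulated_annealing(n=50, t0=5.0, cooling=0.95, steps=2000):
    current = [random.randint(0, 1) for _ in range(n)]
    best, t = current, t0
    for _ in range(steps):
        cand = neighbour(current)
        delta = cost(cand) - cost(current)
        # Intensification: always accept improvements.
        # Diversification: accept worse moves with probability exp(-delta / t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= cooling  # cooling gradually shifts the balance toward intensification
    return best, cost(best)

if __name__ == "__main__":
    print(simulated_annealing())
```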
Fast micro-differential evolution for topological active net optimization
This paper studies the optimization problem of the topological active net (TAN), which often arises in image segmentation and shape modeling. A TAN is a topological structure containing many nodes, whose positions must be optimized while a predefined topology is maintained. TAN optimization is often time-consuming, and even constructing a single solution is hard. Such a problem is usually approached by a "best improvement local search" (BILS) algorithm based on deterministic search (DS), which is inefficient because it spends too much effort on unpromising probing. In this paper, we propose the use of micro-differential evolution (DE) to replace DS in BILS for improved directional guidance. The resulting algorithm is termed deBILS. Its micro-population efficiently utilizes historical information about potentially promising search directions and hence improves probing efficiency. Results show that deBILS can probe promising neighborhoods for each node of a TAN. Experimental tests verify that deBILS offers substantially higher search speed and solution quality than not only ordinary BILS but also the genetic algorithm and scatter search algorithm.
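The core idea, replacing the deterministic probing of BILS with a tiny differential-evolution population for each node move, can be sketched as follows; the energy function, parameters and node set are illustrative stand-ins, not the TAN energy or the deBILS implementation.

```python
import random

# Sketch: each node's new position is proposed by a micro-DE population
# instead of a fixed deterministic probing pattern (placeholder energy).

def energy(pos, target=(0.0, 0.0)):
    # Placeholder node energy: squared distance to an attraction point.
    return (pos[0] - target[0]) ** 2 + (pos[1] - target[1]) ** 2

def micro_de_move(pos, pop_size=5, iters=20, f=0.5, cr=0.9, radius=1.0):
    """Propose an improved position for one node using a micro-DE population."""
    pop = [(pos[0] + random.uniform(-radius, radius),
            pos[1] + random.uniform(-radius, radius)) for _ in range(pop_size)]
    for _ in range(iters):
        for i, x in enumerate(pop):
            a, b, c = random.sample(pop, 3)
            # DE/rand/1 mutation followed by binomial crossover.
            mutant = (a[0] + f * (b[0] - c[0]), a[1] + f * (b[1] - c[1]))
            trial = tuple(m if random.random() < cr else xi
                          for m, xi in zip(mutant, x))
            if energy(trial) < energy(x):
                pop[i] = trial
    best = min(pop, key=energy)
    return best if energy(best) < energy(pos) else pos

if __name__ == "__main__":
    nodes = [(3.0, -2.0), (1.5, 4.0)]
    # Best-improvement pass over all nodes, each refined by micro-DE.
    print([micro_de_move(p) for p in nodes])
```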
Comparing metaheuristic algorithms for error detection in Java programs
Chicano, F., Ferreira, M., & Alba, E. (2011). Comparing Metaheuristic Algorithms for Error Detection in Java Programs. In Proceedings of Search Based Software Engineering, Szeged, Hungary, September 10-12, 2011, pp. 82–96. Model checking is a fully automatic technique for checking concurrent software properties, in which the states of a concurrent system are explored in an explicit or implicit way. The main drawback of this technique is its high memory consumption, which limits the size of the programs that can be checked. In recent years, some researchers have focused on applying guided non-complete stochastic techniques to the search of the state space of such concurrent programs. In this paper, we compare five metaheuristic algorithms for this problem: Simulated Annealing, Ant Colony Optimization, Particle Swarm Optimization and two variants of Genetic Algorithm. To the best of our knowledge, this is the first time that Simulated Annealing has been applied to the problem. We use a benchmark composed of 17 Java concurrent programs in the comparison. We also compare the results of these algorithms with those of deterministic algorithms. Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. This research has been partially funded by the Spanish Ministry of Science and Innovation and FEDER under contract TIN2008-06491-C04-01 (the M∗ project) and the Andalusian Government under contract P07-TIC-03044 (DIRICOM project).
A Memetic Algorithm for the Generalized Traveling Salesman Problem
The generalized traveling salesman problem (GTSP) is an extension of the
well-known traveling salesman problem. In GTSP, we are given a partition of
cities into groups and we are required to find a minimum length tour that
includes exactly one city from each group. Recent studies on this subject
consider different variations of a memetic algorithm approach to the GTSP. The
aim of this paper is to present a new memetic algorithm for GTSP with a
powerful local search procedure. The experiments show that the proposed
algorithm clearly outperforms all of the known heuristics with respect to both
solution quality and running time. While the other memetic algorithms were
designed only for the symmetric GTSP, our algorithm can solve both symmetric
and asymmetric instances. Comment: 15 pages; to appear in Natural Computing, Springer; available online:
http://www.springerlink.com/content/5v4568l492272865/?p=e1779dd02e4d4cbfa49d0d27b19b929f&pi=1
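A memetic scheme of this kind can be sketched as a genetic loop over group orderings combined with a local-search step that re-selects the best city within each group; the sketch below is a generic illustration with assumed toy distances and groups, not the algorithm proposed in the paper.

```python
import random

# Generic memetic sketch for the GTSP: one city per group, GA over group
# orderings, local search re-picks the cheapest city inside each group.

def tour_length(order, choice, dist):
    cities = [choice[g] for g in order]
    return sum(dist[cities[i]][cities[(i + 1) % len(cities)]]
               for i in range(len(cities)))

def local_search(order, choice, groups, dist):
    """Memetic step: for each group, pick the city that shortens the tour."""
    improved = dict(choice)
    for g in order:
        improved[g] = min(groups[g],
                          key=lambda c: tour_length(order, {**improved, g: c}, dist))
    return improved

def memetic_gtsp(groups, dist, pop_size=10, generations=50):
    group_ids = list(groups)
    pop = []
    for _ in range(pop_size):
        order = random.sample(group_ids, len(group_ids))
        choice = {g: random.choice(groups[g]) for g in group_ids}
        pop.append((order, local_search(order, choice, groups, dist)))
    for _ in range(generations):
        order, choice = min(pop, key=lambda s: tour_length(*s, dist))
        # Mutation: swap two groups in the best order, then re-apply local search.
        child = order[:]
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
        child_choice = local_search(child, choice, groups, dist)
        pop.sort(key=lambda s: tour_length(*s, dist))
        pop[-1] = (child, child_choice)  # replace the worst individual
    return min(pop, key=lambda s: tour_length(*s, dist))

if __name__ == "__main__":
    cities = range(6)
    dist = [[abs(a - b) for b in cities] for a in cities]
    groups = {0: [0, 1], 1: [2, 3], 2: [4, 5]}
    order, choice = memetic_gtsp(groups, dist)
    print(order, choice, tour_length(order, choice, dist))
```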
A hybrid genetic algorithm and tabu search approach for post enrolment course timetabling
Copyright © Springer Science + Business Media. All rights reserved. The post enrolment course timetabling problem (PECTP) is one type of university course timetabling problem, in which a set of events has to be scheduled into time slots and located in suitable rooms according to the student enrolment data. The PECTP is an NP-hard combinatorial optimisation problem and hence is very difficult to solve to optimality. This paper proposes a hybrid approach that solves the PECTP in two phases. In the first phase, a guided search genetic algorithm is applied to solve the PECTP. This guided search genetic algorithm integrates a guided search strategy and some local search techniques: the guided search strategy uses a data structure that stores useful information extracted from previous good individuals to guide the generation of offspring into the population, and the local search techniques are used to improve the quality of individuals. In the second phase, a tabu search heuristic is applied to the best solution obtained in the first phase to further improve its quality where possible. The proposed hybrid approach is tested on a set of benchmark PECTPs taken from the international timetabling competition and compared with a set of state-of-the-art methods from the literature. The experimental results show that the proposed hybrid approach is able to produce promising results for the test PECTPs. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of UK under Grant EP/E060722/01 and Grant EP/E060722/02.
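The two-phase structure can be sketched as follows: a simple genetic phase produces a good event-to-timeslot assignment, and a tabu-search phase then refines it. The conflict model, parameters and move rules below are illustrative assumptions, not the paper's guided search GA or its tabu heuristic.

```python
import random

# Two-phase timetabling sketch: GA to build an assignment, tabu search to refine it.

CONFLICTS = [(0, 1), (1, 2), (2, 3)]   # pairs of events sharing students (toy data)
N_EVENTS, N_SLOTS = 4, 3

def violations(assign):
    # Count conflicting event pairs placed in the same time slot.
    return sum(1 for a, b in CONFLICTS if assign[a] == assign[b])

def genetic_phase(pop_size=20, generations=200):
    pop = [[random.randrange(N_SLOTS) for _ in range(N_EVENTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=violations)
        p1, p2 = pop[0], pop[1]                                 # elitist selection
        child = [random.choice(pair) for pair in zip(p1, p2)]   # uniform crossover
        child[random.randrange(N_EVENTS)] = random.randrange(N_SLOTS)  # mutation
        pop[-1] = child                                         # replace the worst
    return min(pop, key=violations)

def tabu_phase(assign, iterations=200, tenure=7):
    best, current, tabu = assign[:], assign[:], {}
    for it in range(iterations):
        moves = [(e, s) for e in range(N_EVENTS) for s in range(N_SLOTS)
                 if s != current[e] and tabu.get((e, s), -1) < it]
        if not moves:
            break
        e, s = min(moves, key=lambda m: violations(
            current[:m[0]] + [m[1]] + current[m[0] + 1:]))
        tabu[(e, current[e])] = it + tenure   # forbid undoing this move for a while
        current[e] = s
        if violations(current) < violations(best):
            best = current[:]
    return best

if __name__ == "__main__":
    seed = genetic_phase()
    final = tabu_phase(seed)
    print(seed, violations(seed), final, violations(final))
```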
Improving Function Coverage with Munch: A Hybrid Fuzzing and Directed Symbolic Execution Approach
Fuzzing and symbolic execution are popular techniques for finding
vulnerabilities and generating test-cases for programs. Fuzzing, a blackbox
method that mutates seed input values, is generally incapable of generating
diverse inputs that exercise all paths in the program. Due to the
path-explosion problem and dependence on SMT solvers, symbolic execution may
also not achieve high path coverage. A hybrid technique involving fuzzing and
symbolic execution may achieve better function coverage than fuzzing or
symbolic execution alone. In this paper, we present Munch, an open source
framework implementing two hybrid techniques based on fuzzing and symbolic
execution. We empirically show using nine large open-source programs that
overall, Munch achieves higher (in-depth) function coverage than symbolic
execution or fuzzing alone. Using metrics based on total analyses time and
number of queries issued to the SMT solver, we also show that Munch is more
efficient at achieving better function coverage. Comment: To appear at the 33rd ACM/SIGAPP Symposium On Applied Computing (SAC), to be held from 9th to 13th April, 201
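The hybrid idea can be sketched as an orchestration loop: a fuzzing stage covers the easy-to-reach functions, and the functions it misses become targets for a directed symbolic-execution stage. Both stages below (`fuzz_stage`, `symbolic_stage`) are hypothetical stand-ins; Munch's actual tooling and pipeline are not reproduced here.

```python
import random

# Toy orchestration of "fuzz first, then direct symbolic execution at the gaps".

ALL_FUNCTIONS = {"parse_header", "parse_body", "decompress", "handle_error"}

def fuzz_stage(seeds, budget=100):
    """Stand-in fuzzer: in this sketch it ignores the seeds and randomly
    'covers' the easy-to-reach functions."""
    covered = set()
    for _ in range(budget):
        covered.add(random.choice(["parse_header", "parse_body"]))
    return covered

def symbolic_stage(targets):
    """Stand-in directed symbolic execution: tries to reach the given
    uncovered functions, succeeding for some of them."""
    return {f for f in targets if random.random() < 0.5}

def hybrid(seeds):
    covered = fuzz_stage(seeds)
    uncovered = ALL_FUNCTIONS - covered
    covered |= symbolic_stage(uncovered)   # direct the solver at the gaps
    return covered

if __name__ == "__main__":
    reached = hybrid(seeds=[b"hello"])
    print(f"function coverage: {len(reached)}/{len(ALL_FUNCTIONS)}", reached)
```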
FairFuzz: Targeting Rare Branches to Rapidly Increase Greybox Fuzz Testing Coverage
In recent years, fuzz testing has proven itself to be one of the most
effective techniques for finding correctness bugs and security vulnerabilities
in practice. One particular fuzz testing tool, American Fuzzy Lop or AFL, has
become popular thanks to its ease-of-use and bug-finding power. However, AFL
remains limited in the depth of program coverage it achieves, in particular
because it does not consider which parts of program inputs should not be
mutated in order to maintain deep program coverage. We propose an approach,
FairFuzz, that helps alleviate this limitation in two key steps. First,
FairFuzz automatically prioritizes inputs exercising rare parts of the program
under test. Second, it automatically adjusts the mutation of inputs so that the
mutated inputs are more likely to exercise these same rare parts of the
program. We conduct an evaluation on real-world programs against state-of-the-art versions of AFL, thoroughly repeating experiments to obtain good measures of variability. We find that on certain benchmarks FairFuzz shows significant coverage increases after 24 hours compared to state-of-the-art versions of AFL, while on others it achieves high program coverage at a significantly faster rate.
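The two steps can be sketched as follows: count how often each branch is hit across the corpus, pick an input that exercises the rarest branch, and restrict mutation to byte positions whose modification still hits that branch. The toy program under test and all function names below are illustrative assumptions, not FairFuzz's AFL-based implementation.

```python
import random
from collections import Counter

# Sketch of rare-branch targeting with a mutation mask, on a toy target program.

def branches_hit(data):
    """Toy program under test: which 'branches' an input exercises."""
    hits = {"b_len_even"} if len(data) % 2 == 0 else {"b_len_odd"}
    if data.startswith(b"MAGIC"):
        hits.add("b_magic")               # the rare branch in this toy example
    return hits

def rarest_branch(corpus):
    counts = Counter(b for inp in corpus for b in branches_hit(inp))
    return min(counts, key=counts.get)

def mutation_mask(inp, branch):
    """Byte positions that can be mutated while still hitting `branch`."""
    ok = []
    for i in range(len(inp)):
        trial = inp[:i] + bytes([inp[i] ^ 0xFF]) + inp[i + 1:]
        if branch in branches_hit(trial):
            ok.append(i)
    return ok

def fairfuzz_like_round(corpus):
    target = rarest_branch(corpus)
    parent = next(inp for inp in corpus if target in branches_hit(inp))
    positions = mutation_mask(parent, target)
    if not positions:
        return parent
    i = random.choice(positions)          # mutate only where the rare branch survives
    return parent[:i] + bytes([random.randrange(256)]) + parent[i + 1:]

if __name__ == "__main__":
    corpus = [b"MAGIC01", b"hello", b"worlds!"]
    child = fairfuzz_like_round(corpus)
    print(child, branches_hit(child))
```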