A Parallel Divide-and-Conquer based Evolutionary Algorithm for Large-scale Optimization
Large-scale optimization problems involving thousands of decision variables have arisen in many industrial areas. Although evolutionary algorithms (EAs) are a powerful optimization tool for many real-world applications, they fail to solve these emerging large-scale problems both effectively and efficiently. In this paper, we propose a novel Divide-and-Conquer (DC) based EA that not only produces high-quality solutions by solving the sub-problems separately, but also exploits the power of parallel computing by solving the sub-problems simultaneously. Existing DC-based EAs that were deemed to enjoy the same advantages as the proposed algorithm are shown to be practically incompatible with the parallel computing scheme, unless trade-offs are made that compromise solution quality.
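The divide-and-conquer idea can be illustrated with a minimal sketch (this is a generic cooperative-coevolution-style illustration, not the paper's algorithm): the decision variables are split into disjoint groups, and each sub-problem is optimized separately against a shared context vector. In a parallel setting, each group would be sent to its own worker.

```python
import random

def optimize_group(context, idx, fitness, iters=100):
    # Hill-climb only the variables listed in `idx`, holding the rest
    # of the shared context vector fixed.
    best = context[:]
    best_f = fitness(best)
    for _ in range(iters):
        cand = best[:]
        for i in idx:
            cand[i] = best[i] + random.gauss(0, 0.1)
        f = fitness(cand)
        if f < best_f:
            best, best_f = cand, f
    return [best[i] for i in idx]

def dc_optimize(dim, group_size, fitness, rounds=5):
    # Divide decision variables into disjoint groups and conquer each
    # separately; in a parallel implementation each group's sub-problem
    # could be solved simultaneously on its own processor.
    groups = [list(range(s, min(s + group_size, dim)))
              for s in range(0, dim, group_size)]
    context = [random.uniform(-1, 1) for _ in range(dim)]
    for _ in range(rounds):
        for idx in groups:
            sub = optimize_group(context, idx, fitness)
            for j, i in enumerate(idx):
                context[i] = sub[j]
    return context
```

Note that solving the groups truly simultaneously requires care: the sketch above updates the context sequentially, which is exactly the dependency that makes naive parallelization compromise solution quality.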
Hybrid Representations for Composition Optimization and Parallelizing MOEAs
We present a hybrid EA representation suitable for composition optimization problems, ranging from optimizing recipes for catalytic materials to cardinality-constrained portfolio selection. On several problem instances we show that this new representation performs better than standard repair mechanisms with Lamarckism.
Additionally, we investigate a clustering-based parallelization scheme for MOEAs. We show that typical "divide and conquer" approaches are not suitable for standard test functions such as ZDT 1-6. Therefore, we suggest a new test function based on the portfolio selection problem and demonstrate the feasibility of "divide and conquer" approaches on this test function.
Denoising Autoencoders for fast Combinatorial Black Box Optimization
Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Autoencoders (AEs) are generative stochastic networks with these desired properties. We integrate a special type of AE, the Denoising Autoencoder (DAE), into an EDA and evaluate the performance of DAE-EDA on several single-objective combinatorial optimization problems. We assess the number of fitness evaluations as well as the required CPU time. We compare the results to the performance of the Bayesian Optimization Algorithm (BOA) and RBM-EDA, another EDA based on a generative neural network, which has proven competitive with BOA. For the considered problem instances, DAE-EDA is considerably faster than BOA and RBM-EDA, sometimes by orders of magnitude. The number of fitness evaluations is higher than for BOA, but competitive with RBM-EDA. These results show that DAEs can be useful tools for problems with low but non-negligible fitness evaluation costs.
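The EDA skeleton that the DAE is plugged into can be sketched with the simplest possible probability model, a univariate marginal (UMDA-style): sample a population from the model, select the fittest individuals, and refit the model on them. In DAE-EDA, the model-fitting and sampling steps would instead train and sample a denoising autoencoder. This is an illustrative sketch on the OneMax problem, not the authors' implementation.

```python
import random

def umda_onemax(n=30, pop=100, sel=50, gens=60):
    # Generic EDA loop. The probability model here is a vector of
    # per-bit marginals; DAE-EDA would replace the "refit" and "sample"
    # steps with training and sampling a denoising autoencoder.
    p = [0.5] * n                                # initial model: uniform bits
    for _ in range(gens):
        # Sample a population from the current model.
        population = [[1 if random.random() < p[i] else 0 for i in range(n)]
                      for _ in range(pop)]
        # Truncation selection; fitness = number of ones (OneMax).
        population.sort(key=sum, reverse=True)
        elite = population[:sel]
        # Refit the model on the selected individuals.
        p = [sum(ind[i] for ind in elite) / sel for i in range(n)]
    return max(sum(ind) for ind in population)
```

The appeal of swapping in a DAE is that, unlike the independent marginals above, it can capture dependencies between variables while remaining cheap to train and sample.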
Parallelizing multi-objective evolutionary algorithms: cone separation
Evolutionary multi-objective optimization (EMO) may be computationally quite demanding, because instead of searching for a single optimum, one generally wishes to find the whole front of Pareto-optimal solutions. For that reason, parallelizing EMO is an important issue. Since we are looking for a number of Pareto-optimal solutions with different trade-offs between the objectives, it seems natural to assign different parts of the search space to different processors. We propose the idea of cone separation, which divides up the search space by adding explicit constraints for each process. We show that the approach is more efficient than simple parallelization schemes, and that it also works on problems with a non-convex Pareto-optimal front.
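The core idea can be sketched for the bi-objective case: normalize the objectives and assign each solution to one of K equal-angle cones in objective space; each processor then treats membership in its cone as an explicit constraint. A minimal sketch (function and parameter names are illustrative, not from the paper):

```python
import math

def cone_index(f1, f2, n_cones, f_min, f_max):
    """Assign a bi-objective point to one of n_cones equal-angle regions."""
    # Normalize each objective to [0, 1] using estimates of the
    # ideal (f_min) and nadir (f_max) points.
    g1 = (f1 - f_min[0]) / (f_max[0] - f_min[0])
    g2 = (f2 - f_min[1]) / (f_max[1] - f_min[1])
    angle = math.atan2(g2, g1)            # in [0, pi/2] for normalized points
    width = (math.pi / 2) / n_cones       # equal angular width per processor
    return min(int(angle // width), n_cones - 1)
```

A processor responsible for cone k would then penalize or reject offspring with `cone_index != k`; that membership test plays the role of the explicit constraint that partitions the search among processes.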
Parallel memetic algorithms for independent job scheduling in computational grids
In this chapter we present parallel implementations of Memetic Algorithms (MAs) for the problem of scheduling independent jobs in computational grids. Scheduling in computational grids is known to be computationally demanding. In this work we exploit the intrinsically parallel nature of MAs, as well as the fact that computational grids offer large amounts of resources, part of which can be used to compute an efficient allocation of jobs to grid resources.
The parallel models exploited in this work for MAs include both fine-grained and coarse-grained parallelization, as well as their hybridization. The resulting schedulers have been tested on different grid scenarios generated by a grid simulator, covering various possible configurations of computational grids in terms of size (number of jobs and resources) and the computational characteristics of resources. All in all, the results of this work show that parallel MAs are very good alternatives for meeting the performance requirements of fast scheduling of jobs to grid resources.
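The coarse-grained model mentioned above can be sketched as an island model: independent sub-populations evolve separately and periodically exchange their best individuals (migration). The sketch below omits the MA's local-search step and the grid-scheduling encoding, using a simple continuous problem only to illustrate the parallel structure; in a real deployment, each island would run on its own processor.

```python
import random

def island_ga(fitness, dim, n_islands=4, pop=20, gens=50, migrate_every=10):
    # Coarse-grained parallel model: independent sub-populations
    # (islands) with periodic ring migration of the best individual.
    def rand_ind():
        return [random.uniform(-1, 1) for _ in range(dim)]

    islands = [[rand_ind() for _ in range(pop)] for _ in range(n_islands)]
    for g in range(gens):
        for isl in islands:
            isl.sort(key=fitness)
            # Simple elitist step: the worse half is replaced by
            # Gaussian mutants of the better half.
            for i in range(pop // 2, pop):
                parent = isl[i - pop // 2]
                isl[i] = [x + random.gauss(0, 0.05) for x in parent]
        if (g + 1) % migrate_every == 0:
            # Ring migration: best of island k-1 replaces worst of island k.
            bests = [min(isl, key=fitness) for isl in islands]
            for k, isl in enumerate(islands):
                isl.sort(key=fitness)
                isl[-1] = bests[(k - 1) % n_islands][:]
    return min((min(isl, key=fitness) for isl in islands), key=fitness)
```

The fine-grained model would instead distribute individuals of a single population over a grid of processors with neighborhood-restricted mating; hybrid schemes combine both levels.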