A Parallel Divide-and-Conquer based Evolutionary Algorithm for Large-scale Optimization
Large-scale optimization problems involving thousands of decision variables
have arisen widely across various industrial areas. Although evolutionary
algorithms (EAs) are a powerful optimization tool for many real-world
applications, they often fail to solve these emerging large-scale problems both
effectively and efficiently. In this paper, we propose a novel
Divide-and-Conquer (DC) based EA that can not only produce high-quality
solutions by solving sub-problems separately, but also fully exploit the power
of parallel computing by solving the sub-problems simultaneously. Existing
DC-based EAs that were deemed to enjoy the same advantages as the proposed
algorithm are shown to be practically incompatible with the parallel computing
scheme unless trade-offs are made that compromise solution quality.
Comment: 12 pages, 0 figures
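A minimal sketch of the general divide-and-conquer idea, assuming a fully
separable toy objective (the sphere function), a fixed variable grouping, and a
simple (1+1)-style inner search run in parallel worker processes; these are
illustrative assumptions, not the paper's actual algorithm.

    # Hedged sketch: split the decision vector into disjoint groups, optimize
    # each sub-problem in its own worker process, then concatenate the parts.
    # The sphere objective, grouping and inner (1+1) loop are assumptions.
    import numpy as np
    from multiprocessing import Pool

    def sphere(x):
        return float(np.sum(x ** 2))  # separable toy objective

    def solve_subproblem(args):
        dim, iters, seed = args
        rng = np.random.default_rng(seed)
        best = rng.uniform(-5.0, 5.0, dim)
        best_f = sphere(best)
        for _ in range(iters):  # simple (1+1) mutation-and-select loop
            cand = best + rng.normal(0.0, 0.5, dim)
            f = sphere(cand)
            if f < best_f:
                best, best_f = cand, f
        return best

    def dc_parallel_ea(total_dim=1000, groups=10, iters=2000):
        sub_dim = total_dim // groups
        with Pool(groups) as pool:  # one worker per sub-problem
            parts = pool.map(solve_subproblem,
                             [(sub_dim, iters, s) for s in range(groups)])
        return np.concatenate(parts)  # assemble the full solution

    if __name__ == "__main__":
        x = dc_parallel_ea()
        print("assembled solution fitness:", sphere(x))

For a non-separable problem the sub-problems interact, which is exactly the
regime where the grouping and coordination choices matter most.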
Spatial Evolutionary Generative Adversarial Networks
Generative adversarial networks (GANs) suffer from training pathologies such as
instability and mode collapse. These pathologies mainly arise from a lack of
diversity in their adversarial interactions. Evolutionary generative
adversarial networks apply the principles of evolutionary computation to
mitigate these problems. We hybridize two of these approaches that promote
training diversity. One, E-GAN, injects mutation diversity at each batch by
training the (replicated) generator with three independent objective functions
and then selecting the resulting best-performing generator for the next batch. The
other, Lipizzaner, injects population diversity by training a two-dimensional
grid of GANs with a distributed evolutionary algorithm that includes neighbor
exchanges of additional training adversaries, performance-based selection, and
population-based hyper-parameter tuning. We propose to combine mutation and
population approaches to diversity improvement. We contribute a superior
evolutionary GAN training method, Mustangs, that eliminates the single loss
function used across Lipizzaner's grid. Instead, in each training round, a loss
function is selected with equal probability from among the three used by E-GAN.
Experimental analyses on standard benchmarks, MNIST and CelebA, demonstrate
that Mustangs provides a statistically faster training method that results in
more accurate networks.
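A minimal sketch of the per-round loss selection described above, assuming
three placeholder generator objectives standing in for E-GAN's minimax,
heuristic, and least-squares mutations; the scalar discriminator score and the
function names are illustrative, not Mustangs' actual implementation.

    # Hedged sketch: each training round picks one of three generator
    # objectives uniformly at random, instead of a single fixed loss.
    # The loss formulas and the scalar d_fake score are toy stand-ins.
    import random

    def minimax_loss(d_fake):
        return -d_fake

    def heuristic_loss(d_fake):
        return 1.0 - d_fake

    def least_squares_loss(d_fake):
        return (d_fake - 1.0) ** 2

    GENERATOR_LOSSES = [minimax_loss, heuristic_loss, least_squares_loss]

    def training_round(d_fake_score):
        loss_fn = random.choice(GENERATOR_LOSSES)  # equal-probability selection
        return loss_fn.__name__, loss_fn(d_fake_score)

    if __name__ == "__main__":
        for step in range(5):
            print(step, training_round(d_fake_score=0.4))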
Uncertainty And Evolutionary Optimization: A Novel Approach
Evolutionary algorithms (EAs) have been widely accepted as efficient solvers
for complex real-world optimization problems, including engineering
optimization. However, real-world optimization problems often involve uncertain
environments, including noisy and/or dynamic environments, which pose major
challenges to EA-based optimization. The presence of noise interferes with the
evaluation and selection processes of EAs, and thus adversely affects their
performance. In addition, because the presence of noise complicates the
evaluation of the fitness function, the fitness may need to be estimated rather
than evaluated directly. Several existing approaches attempt to address this
problem, such as the introduction of diversity (hypermutation, random
immigrants, special operators) or the incorporation of memory of the past
(diploidy, case-based memory). However, these approaches fail to adequately
address the problem.
In this paper, we propose a Distributed Population Switching Evolutionary
Algorithm (DPSEA) method that addresses the optimization of functions with
noisy fitness using a distributed population switching architecture to simulate
a distributed self-adaptive memory of the solution space. Local regression is
used in the pseudo-populations to estimate the fitness. Successful applications
to benchmark test problems confirm the proposed method's superior performance
in terms of both robustness and accuracy.
Comment: In Proceedings of the 9th IEEE Conference on Industrial Electronics
and Applications (ICIEA 2014), IEEE Press, pp. 988-983, 2014
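The local-regression fitness estimation mentioned above can be sketched as
follows, assuming a Gaussian distance kernel, a locally weighted linear fit,
and a noisy toy objective; these choices are illustrative assumptions, not
DPSEA's exact procedure.

    # Hedged sketch: estimate a candidate's fitness from noisy evaluations of
    # nearby points with locally weighted (kernel) linear regression.
    # The kernel, bandwidth and noisy sphere objective are assumptions.
    import numpy as np

    def noisy_fitness(x, rng):
        return float(np.sum(x ** 2)) + rng.normal(0.0, 0.5)  # true value + noise

    def local_regression_estimate(query, points, values, bandwidth=1.0):
        d2 = np.sum((points - query) ** 2, axis=1)           # squared distances
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))              # Gaussian weights
        X = np.hstack([np.ones((len(points), 1)), points])   # linear design matrix
        Wsqrt = np.sqrt(w)[:, None]                           # sqrt-weights for lstsq
        beta, *_ = np.linalg.lstsq(Wsqrt * X, Wsqrt[:, 0] * values, rcond=None)
        return float(np.concatenate(([1.0], query)) @ beta)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pts = rng.uniform(-2.0, 2.0, size=(50, 2))            # archive of evaluated points
        vals = np.array([noisy_fitness(p, rng) for p in pts])
        q = np.array([0.3, -0.1])
        print("single noisy sample:", noisy_fitness(q, rng))
        print("smoothed estimate:  ", local_regression_estimate(q, pts, vals))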