Finding High-Dimensional D-Optimal Designs for Logistic Models via Differential Evolution
D-optimal designs are frequently used in controlled experiments to obtain the most accurate estimate of model parameters at minimal cost. Finding them can be a challenging task, especially when there are many factors in a nonlinear model. As the number of factors becomes large and the factors interact with one another, there are many more variables to optimize and the D-optimal design problem becomes high-dimensional and non-separable. Consequently, premature convergence issues arise: candidate solutions get trapped in local optima, and the classical gradient-based optimization approaches to searching for D-optimal designs rarely succeed. We propose a specially designed version of differential evolution (DE), a representative gradient-free optimization approach, to solve such high-dimensional optimization problems. The proposed DE uses a new novelty-based mutation strategy to explore the various regions of the search space. New regions are explored differently from previously visited regions, so the diversity of the population is preserved. The novelty-based mutation strategy is combined with two common DE mutation strategies to balance exploration and exploitation at the early and medium stages of the evolution. Additionally, we adapt the control parameters of DE as the evolution proceeds. Using logistic models with several factors on various design spaces as examples, our simulation results show that our algorithm finds D-optimal designs efficiently and outperforms its competitors. As an application, we apply our algorithm to re-design a 10-factor car refueling experiment with discrete and continuous factors and selected pairwise interactions. Our proposed algorithm consistently outperformed the other algorithms and found a more efficient D-optimal design for the problem.
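The abstract does not spell out the novelty-based mutation rule, but the two "common DE mutation strategies" it builds on follow the classic pattern. A minimal sketch of one generation of DE/rand/1/bin, using a stand-in objective (the `sphere` function and all parameter values here are illustrative assumptions, not the authors' settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Stand-in objective for illustration (minimize sum of squares)."""
    return float(np.sum(x ** 2))

def de_rand_1_step(pop, fitness, F=0.5, CR=0.9):
    """One generation of classic DE/rand/1/bin (minimization)."""
    n, d = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(n):
        # pick three distinct individuals, none equal to i
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        # binomial crossover with one guaranteed mutant coordinate
        mask = rng.random(d) < CR
        mask[rng.integers(d)] = True
        trial = np.where(mask, mutant, pop[i])
        f_trial = sphere(trial)
        if f_trial <= fitness[i]:  # greedy one-to-one selection
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit

pop = rng.uniform(-5, 5, size=(20, 10))
fit = np.array([sphere(x) for x in pop])
for _ in range(200):
    pop, fit = de_rand_1_step(pop, fit)
print(fit.min())  # best fitness after 200 generations
```

The paper's contribution replaces part of this mutation step with a novelty-driven variant and adapts `F` and `CR` over time; the loop above shows only the baseline scheme it departs from.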
Incremental evolution strategy for function optimization
This paper presents a novel evolutionary approach to function optimization, the Incremental Evolution Strategy (IES). Two strategies are proposed. The first evolves the input variables incrementally: the whole evolution consists of several phases, with one additional variable considered in each phase, so the number of phases is at most the number of variables. Each phase is composed of two stages. In the single-variable evolution (SVE) stage, evolution acts on one independent variable in a series of cutting planes. In the multi-variable evolution (MVE) stage, the initial population is formed by merging the populations obtained from the SVE and from the MVE of the previous phase, and evolution acts on the enlarged variable set. The second strategy is a hybrid of particle swarm optimization (PSO) and evolution strategy (ES): PSO adjusts the cutting planes/hyper-planes (in SVEs/MVEs), while a (1+1)-ES searches for optima within those planes. Experimental results show that IES generally outperforms three other evolutionary algorithms (an improved normal GA, PSO, and SADE_CERAF), in the sense that IES finds solutions closer to the true optima and with better objective values.
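The (1+1)-ES used inside the cutting planes is not detailed in the abstract; a generic textbook version with the 1/5 success rule for step-size control can be sketched as follows (objective, seed, and constants are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def one_plus_one_es(f, x, sigma=1.0, iters=500):
    """Minimal (1+1)-ES with the 1/5 success rule for step-size control."""
    fx = f(x)
    for _ in range(iters):
        child = x + sigma * rng.standard_normal(x.size)
        fc = f(child)
        if fc < fx:                # offspring replaces parent only if better
            x, fx = child, fc
            sigma *= 1.5           # success: widen the search
        else:
            sigma *= 1.5 ** -0.25  # failure: shrink slowly (1/5 rule)
    return x, fx

best, fbest = one_plus_one_es(lambda x: float(np.sum(x ** 2)), np.full(3, 4.0))
```

In IES this single-parent search would run within a cutting plane or hyper-plane whose position PSO has adjusted, rather than over the full variable space as in this sketch.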
HEURISTICS OPTIMISATION OF NUMERICAL FUNCTIONS
The article presents an investigation of the heuristic behaviour of search algorithms applied to numerical problems. The aim is to compare the abilities of Particle Swarm Optimisation, Differential Evolution, and Free Search to adapt to a variety of search spaces without the need for constant re-tuning of algorithm parameters. The article focuses on several advanced characteristics of Free Search and attempts to clarify specifics of its behaviour. The experimental results achieved are presented and discussed.
Uncertainty And Evolutionary Optimization: A Novel Approach
Evolutionary algorithms (EA) have been widely accepted as efficient solvers for complex real-world optimization problems, including engineering optimization. However, real-world optimization problems often involve uncertain environments, including noisy and/or dynamic environments, which pose major challenges to EA-based optimization. The presence of noise interferes with the evaluation and the selection process of EA, and thus adversely affects its performance. In addition, since noise poses challenges to the evaluation of the fitness function, the fitness may need to be estimated instead of evaluated. Several existing approaches attempt to address this problem, such as the introduction of diversity (hypermutation, random immigrants, special operators) or the incorporation of memory of the past (diploidy, case-based memory). However, these approaches fail to adequately address the problem. In this paper we propose a Distributed Population Switching Evolutionary Algorithm (DPSEA) that addresses optimization of functions with noisy fitness using a distributed population-switching architecture to simulate a distributed self-adaptive memory of the solution space. Local regression is used in the pseudo-populations to estimate the fitness. Successful applications to benchmark test problems ascertain the proposed method's superior performance in terms of both robustness and accuracy.
Comment: In Proceedings of the 9th IEEE Conference on Industrial Electronics and Applications (ICIEA 2014), IEEE Press, pp. 988-983, 2014
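The local-regression fitness estimation mentioned in the abstract can be sketched generically: fit a local linear model over the nearest archived noisy samples and read off its prediction at the query point. The function name, neighbourhood size `k`, and the toy noisy objective below are illustrative assumptions, not DPSEA's actual components:

```python
import numpy as np

rng = np.random.default_rng(2)

def local_regression_fitness(x, archive_x, archive_y, k=8):
    """Estimate the fitness at x by a local linear least-squares fit
    over the k nearest archived (point, noisy fitness) samples."""
    d = np.linalg.norm(archive_x - x, axis=1)
    idx = np.argsort(d)[:k]
    X = np.hstack([np.ones((k, 1)), archive_x[idx]])  # intercept + coordinates
    beta, *_ = np.linalg.lstsq(X, archive_y[idx], rcond=None)
    return float(np.concatenate([[1.0], x]) @ beta)

# toy demo: noisy linear fitness f(x) = 3*x0 - x1 + noise
pts = rng.uniform(-1, 1, size=(200, 2))
noisy = 3 * pts[:, 0] - pts[:, 1] + 0.1 * rng.standard_normal(200)
est = local_regression_fitness(np.array([0.2, -0.3]), pts, noisy)
```

Averaging over a fitted local model rather than re-sampling the noisy function is what lets such schemes reduce the noise's influence on selection without extra fitness evaluations.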
Proposal and Comparative Study of Evolutionary Algorithms for Optimum Design of a Gear System
This paper proposes a novel metaheuristic framework combining a Differential Evolution (DE) algorithm with the Non-dominated Sorting Genetic Algorithm-II (NSGA-II). The two algorithms are combined through a collaborative strategy with sequential execution, called DE-NSGA-II. DE-NSGA-II takes advantage of the exploration abilities of multi-objective evolutionary algorithms, strengthened with DE's ability to search for the global mono-objective optimum, which enhances the capability of finding those extreme solutions of the Pareto Optimal Front (POF) that are difficult to reach. Numerous experiments and performance comparisons between different evolutionary algorithms were performed on a reference problem from the mono-objective and multi-objective literature, the design of a double reduction gear train. A preliminary study of the problem, solved exhaustively, reveals the low density of solutions in the vicinity of the optimal solution (mono-objective case) as well as in some areas of the POF of potential interest to a decision maker (multi-objective case). This characteristic of the problem would explain the considerable difficulties in its resolution when exact methods and/or metaheuristics are used, especially in the multi-objective case. However, the DE-NSGA-II framework overcomes these difficulties and obtains the whole POF, which significantly improves on the few previous multi-objective studies.
Fil: Méndez Babey, Máximo. Universidad de Las Palmas de Gran Canaria; España.
Fil: Rossit, Daniel Alejandro. Universidad Nacional del Sur. Departamento de Ingeniería; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Bahía Blanca. Instituto de Matemática Bahía Blanca. Universidad Nacional del Sur. Departamento de Matemática. Instituto de Matemática Bahía Blanca; Argentina.
Fil: González, Begoña. Universidad de Las Palmas de Gran Canaria; España.
Fil: Frutos, Mariano. Universidad Nacional del Sur. Departamento de Ingeniería; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Bahía Blanca. Instituto de Investigaciones Económicas y Sociales del Sur. Universidad Nacional del Sur. Departamento de Economía. Instituto de Investigaciones Económicas y Sociales del Sur; Argentina.
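The notion of Pareto dominance underlying NSGA-II's non-dominated sorting, and hence the POF that DE-NSGA-II recovers, can be made concrete with a small sketch (minimization assumed; the helper names and sample objective vectors are illustrative, not from the paper):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(points):
    """Return the indices of the non-dominated (Pareto-optimal) points."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

objs = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
front = non_dominated_front(objs)  # (3, 3) and (5, 5) are dominated by (2, 2)
```

The "extreme solutions" the framework targets are the endpoints of this front, i.e. the minimizers of each single objective, which is exactly where a mono-objective DE run is strong.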