395 research outputs found
The Gradient Free Directed Search Method as Local Search within Multi-objective Evolutionary Algorithms
Recently, the Directed Search Method has been proposed as a point-wise iterative search procedure that allows the search of a multi-objective optimization problem to be steered in any given direction in objective space. While the original version requires the objectives' gradients, we consider here a modification that realizes the method without gradient information. This makes the novel algorithm particularly interesting for hybridization with set-oriented search procedures such as multi-objective evolutionary algorithms. In this paper, we propose the DDS, a gradient-free Directed Search method, and make a first attempt to demonstrate its benefit as a local search procedure within a memetic strategy by integrating the DDS into the well-known algorithm MOEA/D. Numerical results on some benchmark models indicate the advantage of the resulting hybrid.
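The gradient-free idea above can be sketched numerically: probe a few nearby points, record how the objectives respond, and solve a small least-squares system for a decision-space direction that realizes a desired objective-space direction. The following is a minimal illustration under assumed details (probe scheme, step size, toy problem), not the published DDS algorithm:

```python
import numpy as np

def dds_direction(f, x, d, eps=1e-4, seed=0):
    """Approximate a decision-space direction nu with J(x) @ nu ≈ d,
    using finite samples instead of the objectives' gradients."""
    rng = np.random.default_rng(seed)
    n = x.size
    fx = f(x)
    V = np.empty((n, n))        # decision-space probe directions (rows)
    F = np.empty((n, fx.size))  # corresponding objective-space responses
    for i in range(n):
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        V[i] = v
        F[i] = (f(x + eps * v) - fx) / eps   # ≈ J(x) @ v
    # Solve F.T @ a ≈ d in the least-squares sense; then nu = V.T @ a,
    # since J @ nu = sum_i a_i * (J @ v_i) ≈ F.T @ a ≈ d.
    a, *_ = np.linalg.lstsq(F.T, d, rcond=None)
    return V.T @ a

# Toy bi-objective problem with independent gradients at x = (0, 0):
f = lambda x: np.array([(x[0] - 1.0) ** 2 + x[1] ** 2,
                        x[0] ** 2 + (x[1] - 1.0) ** 2])
nu = dds_direction(f, np.zeros(2), d=np.array([-1.0, 0.0]))
# Moving along nu decreases f1 at unit rate while, to first order,
# leaving f2 unchanged.
```

Here the Jacobian at the origin is -2I, so the exact solution of J nu = d is nu = (0.5, 0); the sampled approximation recovers it up to the finite-difference error.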
A multi-objective extremal optimisation approach applied to RFID antenna design
Extremal Optimisation (EO) is a recent nature-inspired meta-heuristic whose search method is especially suited to combinatorial optimisation problems. This paper presents the implementation of a multi-objective version of EO to solve the real-world Radio Frequency IDentification (RFID) antenna design problem, which must maximise efficiency and minimise resonant frequency. The approach we take produces novel modified meander line antenna designs. Another important contribution of this work is the incorporation of an inseparable fitness evaluation technique to evaluate the fitness of the components of solutions. This is necessary because the NEC evaluation suite, on which we rely, works as a black-box process. When the results are compared with those generated by previous implementations based on Ant Colony Optimisation (ACO) and Differential Evolution (DE), it is evident that our approach obtains competitive results, especially in the generation of antennas with high efficiency. These results indicate that our approach performs well on this problem; however, they can still be improved, as demonstrated through a manual local search process.
Electrical power grid network optimisation by evolutionary computing
A major factor in the consideration of an electrical power network on the scale of a national grid is the calculation of power flow and, in particular, optimal power flow. This paper considers such a network, in which distributed generation is used, and examines how the network can be optimised in terms of transmission line capacity in order to obtain optimal, or at least high-performing, configurations using multi-objective optimisation by evolutionary computing methods.
A survey of diversity-oriented optimization
The concept of diversity plays a crucial role in many optimization approaches: on the one hand, diversity can be formulated as an essential goal, such as in level set approximation or multiobjective optimization, where the aim is to find a diverse set of alternative feasible or, respectively, Pareto optimal solutions. On the other hand, diversity maintenance can play an important role in algorithms that ultimately search …
Local Optimization Often is Ill-conditioned in Genetic Programming for Symbolic Regression
Gradient-based local optimization has been shown to improve results of
genetic programming (GP) for symbolic regression. Several state-of-the-art GP
implementations use iterative nonlinear least squares (NLS) algorithms such as
the Levenberg-Marquardt algorithm for local optimization. The effectiveness of
NLS algorithms depends on appropriate scaling and conditioning of the
optimization problem. This has so far been ignored in symbolic regression and
GP literature. In this study we use a singular value decomposition of NLS
Jacobian matrices to determine the numeric rank and the condition number. We
perform experiments with a GP implementation and six different benchmark
datasets. Our results show that rank-deficient and ill-conditioned Jacobian
matrices occur frequently and for all datasets. The issue is less extreme when
restricting GP tree size and when using many non-linear functions in the
function set.
Comment: Submitted to International Symposium on Symbolic and Numeric Algorithms for Scientific Computing 2022, https://synasc.ro
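The diagnostic the abstract describes (numeric rank and condition number from a singular value decomposition of the NLS Jacobian) is straightforward to reproduce. A hedged toy example, not the study's code, with a deliberately redundant Jacobian column standing in for an over-parameterized GP tree:

```python
import numpy as np

# Toy NLS Jacobian: 50 residuals, 3 parameters; the third column exactly
# duplicates the second (up to scale), so the Jacobian is rank-deficient.
t = np.linspace(0.0, 1.0, 50)
J = np.column_stack([np.ones_like(t), t, 2.0 * t])

s = np.linalg.svd(J, compute_uv=False)             # singular values, descending
tol = s[0] * max(J.shape) * np.finfo(J.dtype).eps  # numpy's default rank tolerance
numeric_rank = int(np.sum(s > tol))                # here 2 of 3: rank-deficient
condition_number = s[0] / s[-1]                    # huge => ill-conditioned NLS step
```

A Levenberg-Marquardt step solves systems built from this Jacobian, so a large condition number here translates directly into an unreliable local-optimization step.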
Negatively Correlated Search
Evolutionary Algorithms (EAs) have been shown to be powerful tools for
complex optimization problems, which are ubiquitous in both communication and
big data analytics. This paper presents a new EA, namely Negatively Correlated
Search (NCS), which maintains multiple individual search processes in parallel
and models the search behaviors of individual search processes as probability
distributions. NCS explicitly promotes negatively correlated search behaviors
by encouraging differences among the probability distributions (search
behaviors). By this means, individual search processes share information and
cooperate with each other to search diverse regions of a search space, which
makes NCS a promising method for non-convex optimization. The cooperation
scheme of NCS could also be regarded as a novel diversity preservation scheme
that, different from other existing schemes, directly promotes diversity at the
level of search behaviors rather than merely trying to maintain diversity among
candidate solutions. Empirical studies showed that NCS is competitive with
well-established search methods in the sense that NCS achieved the best overall
performance on 20 multimodal (non-convex) continuous optimization problems. The
advantages of NCS over state-of-the-art approaches are also demonstrated with a
case study on the synthesis of unequally spaced linear antenna arrays.
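The core mechanism, parallel search processes whose probability distributions are pushed apart, can be caricatured in a few lines. This is a heavily simplified sketch under assumed details (Gaussian processes with a shared fixed step size, a made-up acceptance ratio), not the published NCS update rule:

```python
import numpy as np

def ncs_sketch(f, dim, n_proc=5, sigma=0.5, iters=200, seed=0):
    """Simplified sketch of the NCS idea: parallel Gaussian search processes
    whose offspring are accepted when a fitness/diversity ratio improves."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, size=(n_proc, dim))   # distribution means
    fit = np.array([f(x) for x in X], dtype=float)
    for _ in range(iters):
        for i in range(n_proc):
            x_new = X[i] + sigma * rng.standard_normal(dim)
            f_new = f(x_new)
            # For equal-covariance Gaussians, the Bhattacharyya distance
            # reduces to a scaled squared distance between the means.
            others = np.delete(X, i, axis=0)
            d_old = np.min(np.sum((others - X[i]) ** 2, axis=1)) / (8 * sigma**2)
            d_new = np.min(np.sum((others - x_new) ** 2, axis=1)) / (8 * sigma**2)
            # Prefer offspring that are fitter and/or farther from the rest.
            if f_new / (d_new + 1e-12) < fit[i] / (d_old + 1e-12):
                X[i], fit[i] = x_new, f_new
    return X, fit

# Multimodal (non-convex) test function: Rastrigin, global minimum 0 at 0.
rastrigin = lambda x: 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x))
X, fit = ncs_sketch(rastrigin, dim=2)
```

The diversity term rewards moves away from the nearest sibling distribution, which is the "negatively correlated" behavior the abstract describes, while the fitness term keeps the processes anchored to good regions.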
Population extremal optimisation for discrete multi-objective optimisation problems
The power to solve intractable optimisation problems is often found through population-based evolutionary methods. These include, but are not limited to, genetic algorithms, particle swarm optimisation, differential evolution and ant colony optimisation. While showing much promise as an effective optimiser, extremal optimisation uses only a single solution in its canonical form, and there are no standard population mechanics. In this paper, two population models for extremal optimisation are proposed and applied to a multi-objective version of the generalised assignment problem. These models use novel intervention/interaction strategies as well as collective memory in order to allow individual population members to work together. Additionally, a general non-dominated local search algorithm is developed and tested. Overall, the results show that improved attainment surfaces can be produced by using population-based interactions rather than not using them. The new EO approach is also shown to be highly competitive with an implementation of NSGA-II.
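The canonical single-solution EO that the paper extends follows a simple recipe: rank solution components by their individual fitness and preferentially mutate the worst ones via a power law. A sketch of standard tau-EO on a toy assignment problem (not the paper's multi-objective population variant; problem and parameters are illustrative assumptions):

```python
import numpy as np

def tau_eo(cost, tau=1.4, iters=2000, seed=0):
    """Single-solution tau-EO sketch for a toy assignment problem:
    assign each item to an agent, minimising the sum of cost[item, agent].
    Badly performing components are preferentially mutated via a power law."""
    rng = np.random.default_rng(seed)
    n_items, n_agents = cost.shape
    assign = rng.integers(n_agents, size=n_items)
    best = assign.copy()
    best_cost = cost[np.arange(n_items), assign].sum()
    # Power-law probabilities over fitness ranks (rank 1 = worst component).
    p = np.arange(1, n_items + 1, dtype=float) ** -tau
    p /= p.sum()
    for _ in range(iters):
        comp = cost[np.arange(n_items), assign]   # per-component cost
        worst_first = np.argsort(-comp)           # rank components, worst first
        item = worst_first[rng.choice(n_items, p=p)]
        assign[item] = rng.integers(n_agents)     # unconditional move
        total = cost[np.arange(n_items), assign].sum()
        if total < best_cost:                     # EO only remembers best-so-far
            best, best_cost = assign.copy(), total
    return best, best_cost

cost = np.random.default_rng(42).uniform(1.0, 10.0, size=(20, 4))
best, best_cost = tau_eo(cost)
```

Because the move is accepted unconditionally, canonical EO has no population and no explicit diversity control, which is exactly the gap the paper's population models and interaction strategies address.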
Multi-objective Optimization by Uncrowded Hypervolume Gradient Ascent
Evolutionary algorithms (EAs) are the preferred method for solving black-box
multi-objective optimization problems, but when gradients of the objective
functions are available, it is not straightforward to exploit these
efficiently. By contrast, gradient-based optimization is well-established for
single-objective optimization. A single-objective reformulation of the
multi-objective problem could therefore offer a solution. Of particular
interest to this end is the recently introduced uncrowded hypervolume (UHV)
indicator, which takes into account dominated solutions. In this work, we show
that the gradient of the UHV can often be computed, which allows for a direct
application of gradient ascent algorithms. We compare this new approach with
two EAs for UHV optimization as well as with one gradient-based algorithm for
optimizing the well-established hypervolume. On several bi-objective
benchmarks, we find that gradient-based algorithms outperform the tested EAs by
obtaining a better hypervolume with fewer evaluations whenever exact gradients
of the multiple objective functions are available and in case of small
evaluation budgets. For larger budgets, however, EAs perform similarly or
better. We further find that, when finite differences are used to approximate
the gradients of the multiple objectives, our new gradient-based algorithm is
still competitive with EAs in most considered benchmarks. Implementations are
available at https://github.com/scmaree/uncrowded-hypervolume.
Comment: T.M.D. and S.C.M. contributed equally. The final authenticated version is available in the conference proceedings of Parallel Problem Solving from Nature (PPSN XVI). Changes in the new version: removed a statement about Pareto compliance in the abstract; added related work; corrected a minor mistake.
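The bi-objective hypervolume that such methods optimize can be computed exactly with a simple sweep, and a finite-difference gradient ascent in the spirit of the abstract can be sketched on top of it. A hedged illustration (the toy problem, reference point, and step size are assumptions, and this ascends the plain hypervolume of a fixed-size set, not the UHV indicator):

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Exact hypervolume of a bi-objective point set (minimization),
    relative to a reference point dominated by every point."""
    pts = np.asarray(points, dtype=float)
    pts = pts[np.argsort(pts[:, 0])]          # sweep in ascending f1
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < best_f2:                      # dominated points add nothing
            hv += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv

# Toy bi-objective problem on scalar decision variables:
# f(x) = (x^2, (x - 1)^2); the Pareto set is x in [0, 1].
def set_hv(xs, ref=(2.0, 2.0)):
    return hypervolume_2d([[x**2, (x - 1.0) ** 2] for x in xs], ref)

# Finite-difference gradient ascent on the hypervolume of three solutions.
xs = np.array([0.2, 0.5, 0.8])
hv_start = set_hv(xs)
eps, lr = 1e-6, 0.01
for _ in range(100):
    grad = np.array([(set_hv(xs + eps * e) - set_hv(xs - eps * e)) / (2 * eps)
                     for e in np.eye(xs.size)])
    xs = xs + lr * grad
hv_end = set_hv(xs)
```

This mirrors the trade-off the abstract reports: when (approximate) gradients are available, each ascent step moves the whole solution set at once, at the cost of extra objective evaluations per step for the finite differences.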