
    Improved sampling of the Pareto-front in multiobjective genetic optimizations by steady-state evolution: a Pareto converging genetic algorithm

    Previous work on multiobjective genetic algorithms has focused on preventing genetic drift, and the issue of convergence has received little attention. In this paper, we present a simple steady-state strategy, the Pareto Converging Genetic Algorithm (PCGA), which naturally samples the solution space and ensures population advancement towards the Pareto-front. PCGA eliminates the need for sharing/niching and thus minimizes heuristically chosen parameters and procedures. A systematic approach based on histograms of rank is introduced for assessing convergence to the Pareto-front, which, by definition, is unknown in most real search problems. We argue that there is always some inheritance of genetic material within a population and that, beyond some point, further significant gains are unlikely; this suggests a stopping criterion for terminating the computation. To further encourage diversity and competition, a non-migrating island model may optionally be used; this approach is particularly suited to many difficult (real-world) problems, which tend to get stuck at (unknown) local minima. Results on three benchmark problems are presented and compared with those of earlier approaches. PCGA is found to produce diverse sampling of the Pareto-front without niching and with significantly less computational effort.
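    The sketch below illustrates the histogram-of-rank idea in the abstract: compare the distribution of Pareto ranks in successive populations and stop once it has stabilised. The helper names, the simple dominance-count ranking, and the total-variation comparison are assumptions for illustration, not the paper's exact procedure.

    ```python
    # Minimal sketch of a rank-histogram convergence check (assumes minimisation).
    from collections import Counter

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_ranks(objs):
        """Rank of each solution = number of solutions that dominate it (assumed ranking)."""
        return [sum(dominates(other, o) for other in objs) for o in objs]

    def rank_histogram(objs):
        return Counter(pareto_ranks(objs))

    def histogram_change(prev_objs, curr_objs):
        """Total variation between successive rank histograms; small values
        suggest the population's rank structure has stopped improving."""
        h_prev, h_curr = rank_histogram(prev_objs), rank_histogram(curr_objs)
        ranks = set(h_prev) | set(h_curr)
        n = max(len(prev_objs), len(curr_objs))
        return sum(abs(h_prev[r] - h_curr[r]) for r in ranks) / n

    # Usage: stop the steady-state loop once histogram_change(...) falls below
    # a small threshold, e.g. 0.05 (threshold is an assumption).
    ```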

    A convergence acceleration operator for multiobjective optimisation

    A novel multiobjective optimisation accelerator is introduced that uses direct manipulation in objective space together with neural network mappings from objective space to decision space. This operator is a portable component that can be hybridized with any multiobjective optimisation algorithm. The purpose of this Convergence Acceleration Operator (CAO) is to enhance the search capability and the speed of convergence of the host algorithm. The operator acts directly in objective space to suggest improvements to solutions obtained by a multiobjective evolutionary algorithm (MOEA). These suggested improved objective vectors are then mapped into decision variable space and tested. The CAO is incorporated into two leading MOEAs, the Non-Dominated Sorting Genetic Algorithm (NSGA-II) and the Strength Pareto Evolutionary Algorithm (SPEA2), and tested. Results show that the hybridized algorithms consistently improve the speed of convergence of the original algorithm whilst maintaining the desired distribution of solutions.
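    A minimal sketch of the acceleration idea follows: fit a regressor from objective space to decision space on the current archive, nudge the objective vectors toward improvement, map them back to candidate decision vectors, and re-evaluate. The MLP architecture, the uniform step toward the ideal point, and the function names are assumptions for illustration; they are not the paper's CAO implementation.

    ```python
    # Sketch: objective-space manipulation plus a learned objective -> decision mapping.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def accelerate(decisions, objectives, evaluate, step=0.1, seed=0):
        X = np.asarray(objectives)          # objective vectors (minimisation assumed)
        Y = np.asarray(decisions)           # corresponding decision vectors
        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                             random_state=seed).fit(X, Y)
        ideal = X.min(axis=0)               # crude estimate of the ideal point
        suggested = X - step * (X - ideal)  # nudge each objective vector toward it
        candidates = model.predict(suggested)
        # Re-evaluate the suggested decision vectors with the true objective
        # functions; the host MOEA keeps any that turn out to be non-dominated.
        return [(c, evaluate(c)) for c in candidates]
    ```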

    The influence of mutation on population dynamics in multiobjective genetic programming

    Using multiobjective genetic programming with a complexity objective to overcome tree bloat is usually very successful but can sometimes lead to undesirable collapse of the population to all single-node trees. In this paper we report a detailed examination of why and when collapse occurs. We have used different types of crossover and mutation operators (depth-fair and sub-tree), different evolutionary approaches (generational and steady-state), and different datasets (6-parity Boolean and a range of benchmark machine learning problems) to strengthen our conclusions. We conclude that mutation has a vital role in preventing population collapse by counterbalancing parsimony pressure and preserving population diversity. Mutation also controls the size of the generated individuals, which tends to dominate the time needed for fitness evaluation and therefore the whole evolutionary process. Further, the average size of the individuals in a GP population depends on the evolutionary approach employed. We also demonstrate that mutation has a wider role than merely culling single-node individuals from the population; even within a diversity-preserving algorithm such as SPEA2, mutation has a role in preserving diversity.
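    For readers unfamiliar with the set-up, the sketch below shows the bi-objective scoring that such studies build on: each GP tree is scored on both prediction error and structural complexity (node count), so parsimony pressure comes from the second objective. The tree interface (`evaluate`, `children`) and helper names are assumptions, not the paper's code.

    ```python
    # Sketch of a bi-objective (error, size) fitness for multiobjective GP.
    def count_nodes(tree):
        """Complexity objective: total number of nodes in the expression tree."""
        return 1 + sum(count_nodes(child) for child in getattr(tree, "children", []))

    def bi_objective_fitness(tree, cases):
        """Return (error, size); both objectives are minimised by the MOEA (e.g. SPEA2)."""
        error = sum((tree.evaluate(x) - y) ** 2 for x, y in cases)
        return error, count_nodes(tree)
    ```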

    A Convergence Indicator for Multi-Objective Optimisation Algorithms

    The number of multi-objective optimisation algorithms has grown considerably in recent years, so some way of comparing their results is required. In this sense, performance measures play a key role. In general, properties of these algorithms such as capacity, convergence, diversity, or combined convergence-diversity are considered. Well-known measures include generational distance (GD), inverted generational distance (IGD), hypervolume (HV), Spread (Δ), averaged Hausdorff distance (Δ_p), and the R2 indicator, among others. In this paper, we focus on proposing a new indicator to measure convergence based on the traditional formula for Shannon entropy. The main features of this measure are: 1) it does not require knowledge of the true Pareto set, and 2) it has a moderate computational cost compared with the hypervolume. Comment: Submitted to TEM
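    The paper's exact indicator is not reproduced here; the sketch below only illustrates the underlying ingredient, the traditional Shannon entropy H = -Σ p_i log p_i, applied to the occupancy distribution of a binned objective space. This needs no reference Pareto set; the binning scheme and the use of entropy deltas as a convergence signal are assumptions for illustration.

    ```python
    # Entropy of the population's distribution over a grid in objective space.
    import numpy as np

    def objective_space_entropy(objectives, bins=10):
        objs = np.asarray(objectives, dtype=float)
        lo, hi = objs.min(axis=0), objs.max(axis=0)
        span = np.where(hi > lo, hi - lo, 1.0)          # avoid division by zero
        cells = np.floor((objs - lo) / span * (bins - 1)).astype(int)
        _, counts = np.unique(cells, axis=0, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log(p)).sum())

    # Tracking |H_t - H_{t-1}| across generations gives a cheap convergence signal.
    ```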

    Multi-Objective Self-Organizing Migrating Algorithm: Sensitivity on Controlling Parameters

    In this paper, we investigate the sensitivity of a novel Multi-Objective Self-Organizing Migrating Algorithm (MOSOMA) to the settings of its control parameters. Because multi-objective optimization problems are highly non-linear, the efficiency and accuracy of a stochastic search algorithm usually depend on how it is configured. The sensitivity analysis is performed on a large number of benchmark problems with different properties (the number of optimized parameters, the shape of the Pareto front, etc.). The quality of the solutions found by MOSOMA is evaluated in terms of generational distance, spread and hypervolume error. Recommendations for proper settings of the algorithm are derived; these should help a user configure the algorithm for any multi-objective task without prior knowledge of the problem being solved.
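    As a reference for one of the quality measures mentioned above, the sketch below computes generational distance in one common formulation (average distance from each obtained solution to its nearest point on a reference front, with p = 2). Variants differ in normalisation, so treat this as illustrative rather than MOSOMA's exact evaluation code.

    ```python
    # Generational distance between an approximation set and a reference front.
    import numpy as np

    def generational_distance(approx_front, reference_front, p=2):
        A = np.asarray(approx_front, dtype=float)
        Z = np.asarray(reference_front, dtype=float)
        # distance from each obtained point to its nearest reference point
        d = np.min(np.linalg.norm(A[:, None, :] - Z[None, :, :], axis=2), axis=1)
        return (np.sum(d ** p) ** (1.0 / p)) / len(A)
    ```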