9 research outputs found

    On the comparison of initialisation strategies in differential evolution for large scale optimisation

    Get PDF
    Differential Evolution (DE) has been shown to be a promising global optimisation solver for continuous problems, even those with high dimensionality. Several previous works have studied the effect that a population initialisation strategy has on the performance of DE when solving large scale continuous problems, and contradictions have appeared regarding the benefits that a particular initialisation scheme might provide. Some works have claimed that applying a particular approach to a given problem makes DE perform better than using others. In other cases, however, researchers have stated that the overall performance of DE is not affected by the use of a particular initialisation method. In this work, we study a wide range of well-known initialisation techniques for DE. Taking into account the best and worst results, statistically significant differences among the considered initialisation strategies appeared. Thus, with the aim of increasing the probability of obtaining high-quality results and/or reducing the probability of obtaining low-quality ones, a suitable initialisation strategy, which depends on the large scale problem being solved, should be selected.
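As an illustrative sketch of the kind of strategy comparison this abstract describes (not the paper's own experimental setup), the snippet below contrasts plain uniform initialisation with opposition-based initialisation, one well-known scheme of this family, on a sphere benchmark. The population size, dimensionality, and benchmark are assumptions chosen for the example:

```python
import random

def sphere(x):
    # Simple benchmark objective: sum of squares, minimum 0 at the origin.
    return sum(v * v for v in x)

def uniform_init(pop_size, dim, low, high, rng):
    # Plain pseudo-random initialisation within the box [low, high]^dim.
    return [[rng.uniform(low, high) for _ in range(dim)] for _ in range(pop_size)]

def opposition_init(pop_size, dim, low, high, rng):
    # Opposition-based initialisation: generate random points, add their
    # "opposites" (low + high - x), and keep the best half by fitness.
    pop = uniform_init(pop_size, dim, low, high, rng)
    opposites = [[low + high - v for v in ind] for ind in pop]
    combined = sorted(pop + opposites, key=sphere)
    return combined[:pop_size]

rng = random.Random(42)
u = uniform_init(50, 100, -5.0, 5.0, rng)
o = opposition_init(50, 100, -5.0, 5.0, rng)
best_uniform = min(sphere(ind) for ind in u)
best_opposition = min(sphere(ind) for ind in o)
# Opposition-based seeding filters a pool twice the population size, so it
# starts from an equal-or-better best point than the pool it filtered.
```

Which scheme helps in practice is exactly the problem-dependent question the abstract raises; this sketch only shows how such strategies differ mechanically.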

    Effect of the initial population construction on the DBMEA algorithm searching for the optimal solution of the traveling salesman problem

    Get PDF
    There are many factors that affect the performance of evolutionary and memetic algorithms. One of these factors is the proper selection of the initial population, as it represents a very important criterion contributing to the convergence speed. Selecting a conveniently preprocessed initial population definitely increases the convergence speed and thus increases the probability of steering the search towards better regions of the search space, hence avoiding premature convergence towards a local optimum. In this paper, we propose a new method, called the Circle Group Heuristic (CGH), for generating the initial candidate solution for the Discrete Bacterial Memetic Evolutionary Algorithm (DBMEA); it is built with the aid of a simple Genetic Algorithm (GA). CGH has been tested on several benchmark instances of the Travelling Salesman Problem (TSP). The practical results show that CGH gives better tours compared with other well-known heuristic tour construction methods.
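The CGH itself is not spelled out in this abstract, but the "well-known heuristic tour construction methods" it is compared against include greedy constructions such as nearest neighbour. As a hedged baseline sketch (the instance is randomly generated, not one of the paper's benchmarks), the snippet below shows why a constructed tour makes a better TSP seed than a random permutation:

```python
import math
import random

def tour_length(points, tour):
    # Total closed-tour Euclidean length.
    n = len(tour)
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % n]])
               for i in range(n))

def nearest_neighbour_tour(points, start=0):
    # Greedy construction: repeatedly visit the closest unvisited city.
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

rng = random.Random(0)
pts = [(rng.random(), rng.random()) for _ in range(60)]
random_tour = list(range(60))
rng.shuffle(random_tour)
nn_tour = nearest_neighbour_tour(pts)
# The greedy tour is typically far shorter than a random permutation,
# which is why construction heuristics make good memetic-algorithm seeds.
```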

    Bootstrapping artificial evolution to design robots for autonomous fabrication

    Get PDF
    A long-term vision of evolutionary robotics is a technology enabling the evolution of entire autonomous robotic ecosystems that live and work for long periods in challenging and dynamic environments without the need for direct human oversight. Evolutionary robotics has been widely used due to its capability of creating unique robot designs in simulation. Recent work has shown that it is possible to autonomously construct evolved designs in the physical domain; however, this brings new challenges: the autonomous manufacture and assembly process introduces constraints that are not apparent in simulation. To tackle this, we introduce a new method for producing a repertoire of diverse but manufacturable robots. This repertoire is used to seed an evolutionary loop that subsequently evolves robot designs and controllers capable of solving a maze-navigation task. We show that, compared to random initialisation, seeding with a diverse and manufacturable population speeds up convergence and, on some tasks, increases performance, while maintaining manufacturability.

    Target detection with morphological shared-weight neural network: different update approaches

    Get PDF
    Neural networks are widely used for image processing. Of these, the convolutional neural network (CNN) is one of the most popular. However, the CNN needs a large amount of training data to improve its accuracy. If training data is limited, a morphological shared-weight neural network (MSNN) can be a better choice. In this thesis, two different update approaches based on an evolutionary algorithm are proposed and compared to each other for target detection based on the MSNN. A third training approach, based on backpropagation, proposed by Yongwan Won and applied by my fellow graduate students Shuxian Shen and Anes Ouadou, is used for comparison in this thesis. Single-layer and multiple-layer MSNNs are both presented with the different approaches. The author created part of the dataset used in this thesis and used another dataset created by Shen to make comparisons with her network. Results of the MSNN are compared with CNN results to show the performance. Experiments show that for a single-layer MSNN, the performance of an evolutionary algorithm with partial backpropagation is the best. For a multiple-layer MSNN, backpropagation performs better, although the MSNN still outperforms the CNN. Includes bibliographical references.

    On the performance of the hybridisation between migrating birds optimisation variants and differential evolution for large scale continuous problems

    Get PDF
    Migrating Birds Optimisation (MBO) is a nature-inspired approach which has been shown to be very effective when solving a variety of combinatorial optimisation problems. More recently, an adaptation of the algorithm has been proposed that enables it to deal with continuous search spaces. We extend this work in two ways. Firstly, a novel leader replacement strategy is proposed to counter the slow convergence of the existing MBO algorithms due to low selection pressure. Secondly, MBO is hybridised with adaptive neighbourhood operators borrowed from Differential Evolution (DE) that promote exploration and exploitation. The new variants are tested on two sets of continuous large scale optimisation problems. Results show that MBO variants using adaptive, exploration-based operators outperform DE on the CEC benchmark suite with 1000 variables. Further experiments on a second suite of 19 problems show that MBO variants outperform DE on 90% of these test cases.
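The DE side of this hybridisation rests on DE's classic neighbourhood operators. As a minimal sketch of one such operator, DE/rand/1/bin (the standard textbook form, not the adaptive variants the paper actually borrows), with illustrative parameter values F = 0.5 and CR = 0.9:

```python
import random

def sphere(x):
    # Toy objective: sum of squares, minimised at the origin.
    return sum(v * v for v in x)

def de_rand_1_bin(pop, fitness, rng, f=0.5, cr=0.9):
    # One generation of DE/rand/1/bin: for each target vector, build a
    # mutant from three distinct other members, apply binomial crossover,
    # and keep the trial only if it does not worsen the fitness.
    dim = len(pop[0])
    new_pop = []
    for i, target in enumerate(pop):
        r1, r2, r3 = rng.sample([j for j in range(len(pop)) if j != i], 3)
        mutant = [pop[r1][k] + f * (pop[r2][k] - pop[r3][k]) for k in range(dim)]
        j_rand = rng.randrange(dim)  # guarantees at least one mutant gene
        trial = [mutant[k] if (rng.random() < cr or k == j_rand) else target[k]
                 for k in range(dim)]
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop

rng = random.Random(7)
pop = [[rng.uniform(-5, 5) for _ in range(10)] for _ in range(20)]
best_start = min(sphere(ind) for ind in pop)
for _ in range(50):
    pop = de_rand_1_bin(pop, sphere, rng)
best_end = min(sphere(ind) for ind in pop)
# Greedy one-to-one selection makes the best fitness monotonically
# non-increasing across generations.
```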

    Enhanced Harris's Hawk algorithm for continuous multi-objective optimization problems

    Get PDF
    Multi-objective swarm intelligence-based (MOSI-based) metaheuristics were proposed to solve multi-objective optimization problems (MOPs) with conflicting objectives. The Harris's hawk multi-objective optimizer (HHMO) algorithm is a MOSI-based algorithm that was developed based on the reference point approach. The reference point is determined by the decision maker to guide the search process to a particular region in the true Pareto front. However, the HHMO algorithm produces a poor approximation to the Pareto front because of a lack of information sharing in its population update strategy, equal division of the convergence parameter, and a randomly generated initial population. A two-step enhanced non-dominated sorting HHMO (2S-ENDSHHMO) algorithm has been proposed to solve this problem. The algorithm includes (i) a population update strategy which improves the movement of hawks in the search space, (ii) a parameter adjusting strategy to control the transition between exploration and exploitation, and (iii) a population generating method for producing the initial candidate solutions. The population update strategy calculates a new position of hawks based on the flush-and-ambush technique of Harris's hawks, and selects the best hawks based on the non-dominated sorting approach. The adjustment strategy enables the parameter to change adaptively based on the state of the search space. The initial population is produced by generating quasi-random numbers using the R-sequence, followed by adapting the partial opposition-based learning concept to improve the diversity of the worst half of the population of hawks. The performance of the 2S-ENDSHHMO has been evaluated using 12 MOPs and three engineering MOPs. The obtained results were compared with the results of eight state-of-the-art multi-objective optimization algorithms.
The 2S-ENDSHHMO algorithm was able to generate non-dominated solutions with greater convergence and diversity in solving most MOPs and showed a great ability in jumping out of local optima. This indicates the capability of the algorithm in exploring the search space. The 2S-ENDSHHMO algorithm can be used to improve the search process of other MOSI-based algorithms and can be applied to solve MOPs in applications such as structural design and signal processing.
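The non-dominated sorting step named in this abstract can be sketched in a few lines. This is a minimal O(n²)-per-front version for minimisation (the algorithm's actual implementation details are not given in the abstract); the five example objective vectors are invented for illustration:

```python
def dominates(a, b):
    # Pareto dominance for minimisation: a is no worse in every objective
    # and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objectives):
    # Peel off successive non-dominated fronts until no points remain.
    remaining = list(range(len(objectives)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Two-objective example: indices 0-2 are mutually non-dominated,
# (3,3) is dominated by (2,2), and (5,5) by everything on the front.
objs = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (5.0, 5.0)]
fronts = non_dominated_sort(objs)
# fronts -> [[0, 1, 2], [3], [4]]
```

Selecting "the best hawks" then amounts to filling the next population front by front, starting from `fronts[0]`.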

    Towards a more efficient use of computational budget in large-scale black-box optimization

    Get PDF
    Evolutionary algorithms are general purpose optimizers that have been shown effective in solving a variety of challenging optimization problems. In contrast to mathematical programming models, evolutionary algorithms do not require derivative information and are still effective when the algebraic formula of the given problem is unavailable. Nevertheless, the rapid advances in science and technology have witnessed the emergence of more complex optimization problems than ever, which pose significant challenges to traditional optimization methods. The dimensionality of the search space of an optimization problem when the available computational budget is limited is one of the main contributors to its difficulty and complexity. This so-called curse of dimensionality can significantly affect the efficiency and effectiveness of optimization methods including evolutionary algorithms. This research aims to study two topics related to a more efficient use of computational budget in evolutionary algorithms when solving large-scale black-box optimization problems. More specifically, we study the role of population initializers in saving the computational resource, and computational budget allocation in cooperative coevolutionary algorithms. Consequently, this dissertation consists of two major parts, each of which relates to one of these research directions. In the first part, we review several population initialization techniques that have been used in evolutionary algorithms. Then, we categorize them from different perspectives. The contribution of each category to improving evolutionary algorithms in solving large-scale problems is measured. We also study the mutual effect of population size and initialization technique on the performance of evolutionary techniques when dealing with large-scale problems. 
Finally, assuming uniformity of the initial population is a key contributor to saving a significant part of the computational budget, we investigate whether achieving a high level of uniformity in high-dimensional spaces is feasible given the practical restriction on computational resources. In the second part of the thesis, we study large-scale imbalanced problems. In many real-world applications, a large problem may consist of subproblems with different degrees of difficulty and importance. In addition, the solution to each subproblem may contribute differently to the overall objective value of the final solution. When the computational budget is restricted, which is the case in many practical problems, investing the same portion of resources in optimizing each of these imbalanced subproblems is not the most efficient strategy. Therefore, we examine several ways to learn the contribution of each subproblem, and then dynamically allocate the limited computational resources to solving each of them according to its contribution to the overall objective value of the final solution. To demonstrate the effectiveness of the proposed framework, we design a new set of 40 large-scale imbalanced problems and study the performance of some possible instances of the framework.
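One simple instance of contribution-aware budget allocation is to split evaluations in proportion to each subproblem's estimated contribution. This is a hedged sketch of that idea only, not the thesis's actual framework; the contribution estimates and the proportional rule are assumptions for illustration:

```python
def allocate_budget(contributions, total_budget):
    # Divide a limited evaluation budget among subproblems in proportion
    # to their estimated contribution to the overall objective value.
    total = sum(contributions)
    shares = [int(total_budget * c / total) for c in contributions]
    # Hand any rounding leftovers to the largest contributors first.
    leftover = total_budget - sum(shares)
    order = sorted(range(len(contributions)), key=lambda i: -contributions[i])
    for i in range(leftover):
        shares[order[i % len(order)]] += 1
    return shares

# Three imbalanced subproblems: the first dominates the objective value,
# so a uniform split would waste most evaluations on the minor components.
shares = allocate_budget([80.0, 15.0, 5.0], 1000)
# shares -> [800, 150, 50]
```

In the thesis's setting the contributions are learned during the run rather than known in advance, so such an allocation would be recomputed as the estimates improve.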

    Initialization methods for large scale global optimization

    No full text
    Several population initialization methods for evolutionary algorithms (EAs) have been proposed previously. This paper categorizes the most well-known initialization methods and studies their effect on large scale global optimization problems. Experimental results indicate that the optimization of large scale problems using EAs is more sensitive to the initial population than optimizing lower dimensional problems. Statistical analysis of the results shows that basic random number generators, which are the most commonly used method for population initialization in EAs, lead to inferior performance. Furthermore, our study shows that, regardless of the size of the initial population, choosing a proper initialization method is vital for solving large scale problems.
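The contrast this paper draws between basic random number generators and more structured initializers can be illustrated with a Halton sequence, a classic low-discrepancy generator (the paper's own categories may differ; the bounds and sizes here are example choices):

```python
def halton(index, base):
    # Van der Corput radical inverse: the digits of `index` in `base`,
    # mirrored past the radix point, give a low-discrepancy value in [0, 1).
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += (index % base) * f
        index //= base
        f /= base
    return result

def halton_population(pop_size, dim, low, high):
    # One prime base per coordinate, points scaled from the unit cube
    # into the search box [low, high]^dim.
    primes = []
    n = 2
    while len(primes) < dim:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return [[low + (high - low) * halton(i + 1, b) for b in primes]
            for i in range(pop_size)]

pop = halton_population(8, 2, -5.0, 5.0)
# The first coordinate follows base 2: 1/2, 1/4, 3/4, 1/8, ...
# scaled to [-5, 5), so the points fill the box far more evenly than
# the same number of pseudo-random draws.
```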
