12,907 research outputs found

    SQG-Differential Evolution for difficult optimization problems under a tight function evaluation budget

    Full text link
    In the context of industrial engineering, it is important to integrate efficient computational optimization methods into the product development process. Some of the most challenging simulation-based engineering design optimization problems are characterized by a large number of design variables, the absence of analytical gradients, highly non-linear objectives and a limited function evaluation budget. Although a huge variety of optimization algorithms is available, the development and selection of efficient algorithms for problems with these industrially relevant characteristics remains a challenge. In this communication, a hybrid variant of Differential Evolution (DE) is introduced which combines aspects of Stochastic Quasi-Gradient (SQG) methods within the framework of DE, in order to improve optimization efficiency on problems with the previously mentioned characteristics. The performance of the resulting derivative-free algorithm is compared with other state-of-the-art DE variants on 25 commonly used benchmark functions, under a tight function evaluation budget of 1000 evaluations. The experimental results indicate that the new algorithm performs excellently on the 'difficult' (high-dimensional, multi-modal, inseparable) test functions. The operations used in the proposed mutation scheme are computationally inexpensive and can be implemented in existing differential evolution variants, or other population-based optimization algorithms, with a few lines of program code as a non-invasive optional setting. Besides the applicability of the presented algorithm by itself, the described concepts can serve as a useful and interesting addition to the algorithmic operators in the frameworks of heuristics and evolutionary optimization and computing.
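
    The abstract says the SQG-inspired mutation can be dropped into existing DE variants in a few lines of code but does not spell the operator out. As a minimal, hypothetical sketch (not the authors' exact scheme), a fitness-informed difference vector could be used to bias a DE mutant toward a descent-like direction:

    ```python
    import numpy as np

    def sqg_style_mutant(pop, fit, i, F=0.5, rng=None):
        # Orient the DE difference vector from the worse toward the better of
        # two randomly chosen members, so the mutation step mimics a stochastic
        # quasi-gradient (descent) direction for minimisation.
        rng = np.random.default_rng() if rng is None else rng
        candidates = [j for j in range(len(pop)) if j != i]
        r1, r2 = rng.choice(candidates, size=2, replace=False)
        better, worse = (r1, r2) if fit[r1] < fit[r2] else (r2, r1)
        return pop[i] + F * (pop[better] - pop[worse])
    ```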

    Multimodal estimation of distribution algorithms

    Get PDF
    Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, offspring are generated at the niche level by alternately sampling from these two distributions, which can likewise balance exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around the seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
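
    As a rough illustration of the alternating Gaussian/Cauchy sampling described above, here is a minimal sketch, assuming niche members are rows of a NumPy array and a per-dimension location and scale estimated from the niche; the paper's exact sampling rule is not reproduced here:

    ```python
    import numpy as np

    def sample_niche_offspring(niche, n_offspring, rng=None):
        # Estimate a per-dimension location and scale from the niche members,
        # then draw each offspring from a Gaussian or a Cauchy step with equal
        # probability: the Gaussian favours exploitation, the heavy-tailed
        # Cauchy favours exploration.
        rng = np.random.default_rng() if rng is None else rng
        mu = niche.mean(axis=0)
        sigma = niche.std(axis=0) + 1e-12
        offspring = []
        for _ in range(n_offspring):
            if rng.random() < 0.5:
                step = rng.normal(size=mu.shape)
            else:
                step = rng.standard_cauchy(size=mu.shape)
            offspring.append(mu + sigma * step)
        return np.array(offspring)
    ```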

    Integrating continuous differential evolution with discrete local search for meander line RFID antenna design

    Get PDF
    The automated design of meander line RFID antennas is a discrete self-avoiding walk (SAW) problem for which efficiency is to be maximized while resonant frequency is to be minimized. This work presents a novel exploration of how discrete local search may be incorporated into a continuous solver such as differential evolution (DE). A prior DE algorithm for this problem that incorporates an adaptive solution encoding and a bias favoring antennas with low resonant frequency is extended by the addition of the backbite local search operator and a variety of schemes for reintroducing modified designs into the DE population. The algorithm is extremely competitive with an existing ACO approach and the technique is transferable to other SAW problems and other continuous solvers. The findings indicate that careful reintegration of discrete local search results into the continuous population is necessary for effective performance.
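
    The backbite move itself is a standard self-avoiding-walk operator; a minimal sketch on a square lattice (independent of the antenna encoding and reintegration schemes, which are not described here) looks roughly like this:

    ```python
    import random

    def backbite(walk, rng=random):
        # walk: list of distinct (x, y) lattice points, consecutive points
        # adjacent. Pick an end of the walk and one of its lattice neighbours;
        # if that neighbour is an interior vertex of the walk (not the bonded
        # one), reverse the segment between them, yielding another valid SAW.
        walk = list(walk)
        if rng.random() < 0.5:
            walk.reverse()                      # treat either end as the head
        x, y = walk[0]
        target = rng.choice([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
        if target in walk:
            k = walk.index(target)
            if k > 1:                           # k == 1 would change nothing
                walk[:k] = reversed(walk[:k])
        return walk
    ```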

    Large-scale optimization : combining co-operative coevolution and fitness inheritance

    Get PDF
    Large-scale optimization, here referring mainly to problems with many design parameters, remains a serious challenge for optimization algorithms. When the problem at hand does not succumb to analytical treatment (an overwhelmingly commonplace situation), the engineering and adaptation of stochastic black-box optimization methods tends to be a favoured approach, particularly the use of Evolutionary Algorithms (EAs). In this context, many approaches are currently under investigation for accelerating performance on large-scale problems, and we focus on two of them in this research. The first is co-operative co-evolution (CC), where the strategy is to successively optimize only subsets of the design parameters at a time, keeping the remainder fixed, with an organized approach to managing and reconciling these subspace optimizations. The second is fitness inheritance (FI), which is essentially a very simple surrogate model strategy in which, with some probability, the fitness of a solution is simply guessed to be a simple function of the fitnesses of that solution's parents. Both CC and FI have been found successful on nontrivial and multiple test cases, and they use fundamentally distinct strategies. In this thesis, we explore the extent to which both of these strategies can be combined to provide additional benefits. In addition to combining CC and FI, this thesis also introduces a new FI scheme which further improves the performance of CC-FI. We show that the resulting algorithm, CC-FI, is highly effective for solving problems, especially when the new FI scheme is used. In the thesis, we also explore two basic adaptive parameter setting strategies for the FI component. We found that engineering FI (and CC, where it was not otherwise present) into these algorithms led to good performance and results.
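
    To make the combination concrete, here is a minimal sketch for a real-valued minimisation problem; the function names, the blend-style recombination and the inheritance rule (averaging the parents' fitnesses) are illustrative assumptions, not taken from the thesis:

    ```python
    import numpy as np

    def evaluate_block(f, context, idx, block):
        # Plug a candidate block of variables into the fixed context vector.
        x = context.copy()
        x[idx] = block
        return f(x)

    def cc_fi_step(f, context, idx, pop, fit, p_inherit=0.3, rng=None):
        # One cooperative-coevolution step: evolve only the variables in `idx`
        # against the fixed context, and with probability p_inherit guess a
        # child's fitness from its parents instead of calling f.
        rng = np.random.default_rng() if rng is None else rng
        n = len(pop)
        for i in range(n):
            p1, p2 = rng.choice(n, size=2, replace=False)
            child = 0.5 * (pop[p1] + pop[p2]) + rng.normal(0.0, 0.1, pop.shape[1])
            if rng.random() < p_inherit:
                child_fit = 0.5 * (fit[p1] + fit[p2])   # inherited (guessed) fitness
            else:
                child_fit = evaluate_block(f, context, idx, child)  # true evaluation
            if child_fit < fit[i]:                       # minimisation
                pop[i], fit[i] = child, child_fit
        return pop, fit
    ```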

    Path Planning for Single Unmanned Aerial Vehicle by Separately Evolving Waypoints

    Get PDF
    Evolutionary algorithm-based unmanned aerial vehicle (UAV) path planners have been extensively studied for their effectiveness and flexibility. However, they still suffer from the drawback that high-quality waypoints in previous candidate paths can hardly be exploited for further evolution, since they regard all the waypoints of a path as one integrated individual. Because of this drawback, previous planners usually fail when encountering many obstacles. In this paper, a new idea of separately evaluating and evolving waypoints is presented to solve this problem. Concretely, the original objective and constraint functions of UAV path planning are decomposed into a set of new evaluation functions, with which the waypoints on a path can be evaluated separately. The new evaluation functions allow the waypoints on a path to be evolved separately and, thus, high-quality waypoints can be better exploited. On this basis, the waypoints are encoded in a rotated coordinate system with an external restriction and evolved with JADE, a state-of-the-art variant of the differential evolution algorithm. To test the capability of the new planner to plan obstacle-free paths, five scenarios with increasing numbers of obstacles are constructed. Three existing planners and four variants of the proposed planner are compared to assess the effectiveness and efficiency of the proposed planner. The results demonstrate the superiority of the proposed planner and of the idea of separate evolution.
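
    A minimal sketch of the per-waypoint evaluation idea follows; the actual decomposition, rotated-coordinate encoding and JADE loop are not reproduced, and the obstacle-clearance penalty below is an assumed form used only for illustration:

    ```python
    import numpy as np

    def waypoint_scores(path, obstacles, safe_radius=1.0):
        # Score each interior waypoint on its own (local path-length
        # contribution plus an obstacle-clearance penalty), so good waypoints
        # can be kept even when the path as a whole is poor.
        scores = []
        for k in range(1, len(path) - 1):
            prev_p, p, next_p = path[k - 1], path[k], path[k + 1]
            length_cost = np.linalg.norm(p - prev_p) + np.linalg.norm(next_p - p)
            clearance = min(np.linalg.norm(p - o) for o in obstacles) if obstacles else np.inf
            penalty = max(0.0, safe_radius - clearance) * 100.0  # constraint as penalty
            scores.append(length_cost + penalty)
        return np.array(scores)
    ```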

    Feature-based search space characterisation for data-driven adaptive operator selection

    Get PDF
    Combinatorial optimisation problems are known to be unpredictable and challenging due to their nature and complexity. One way to reduce this unpredictability is to identify features and characteristics that can be used to guide the search with domain knowledge, and to act accordingly. Many problem-solving algorithms use multiple complementary operators in combination to handle such unpredictable cases. A well-characterised search space may help to evaluate problem states better and to select and apply a neighbourhood operator that generates more productive new problem states, allowing a smoother path to the final/optimum solutions. This applies to algorithms that use multiple operators to solve problems. However, the remaining challenge is determining how to select an operator from the available set in an optimal way while taking the search space conditions into consideration. Recent research shows the success of adaptive operator selection in addressing this problem. However, efficiency and scalability issues still persist. In addition, selecting the most representative features remains crucial for addressing problem complexity and for inducing the commonality needed to transfer experience across domains. This paper investigates whether a problem can be represented by a number of features identified by landscape analysis, and whether an adaptive operator selection scheme can be constructed using Machine Learning (ML) techniques to address the efficiency and scalability issues. The proposed method determines the optimal categorisation by analysing the predictivity of a set of features using well-known supervised ML techniques. The identified set of features is then used to construct an adaptive operator selection scheme. The findings of the experiments demonstrate that supervised ML algorithms are highly effective when building adaptive operator selectors.
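
    A minimal sketch of the data-driven selection loop, assuming scikit-learn and a few toy search-state features; the paper's actual landscape features, labels and training data are not specified here, so the placeholders below are purely illustrative:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def make_features(recent_fitnesses):
        # Cheap, illustrative search-state features (not the paper's feature set).
        f = np.asarray(recent_fitnesses, dtype=float)
        return np.array([f.mean(), f.std(), f.min(), f[-1] - f[0]])

    # Offline phase: learn a mapping from search-state features to the operator
    # that performed best in logged runs (X: feature rows, y: operator labels).
    X_train = np.random.rand(200, 4)          # placeholder logged features
    y_train = np.random.randint(0, 3, 200)    # placeholder operator labels
    selector = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

    # Online phase: pick the operator predicted to be most productive now.
    features = make_features([10.2, 9.8, 9.7, 9.1, 8.9])
    operator_id = int(selector.predict(features.reshape(1, -1))[0])
    ```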

    Learning to Control Differential Evolution Operators

    Get PDF
    Evolutionary algorithms are widely used for optimisation by researchers in academia and industry. These algorithms have parameters, which have been shown to strongly determine an algorithm's performance. For many decades, researchers have focused on determining optimal parameter values for an algorithm. Each parameter configuration has a performance value attached to it that is used to determine a good configuration for an algorithm. Parameter values depend on the problem at hand and are typically set in two ways, by means of offline and online selection. Offline tuning assumes that the performance value of a configuration remains the same across all generations of a run, whereas online tuning assumes that the performance value varies from one generation to another. This thesis presents various adaptive approaches, each learning from a range of feedback received from the evolutionary algorithm. The contributions demonstrate the benefits of utilising online and offline learning together, at different levels, for a particular task. Offline selection is used to tune the hyper-parameters of the proposed adaptive methods, which in turn control the parameters of the evolutionary algorithm on-the-fly. All the contributions are presented for controlling the mutation strategies of differential evolution. The first contribution demonstrates an adaptive method formulated as a Markov reward process, which aims to maximise the cumulative future reward. The next chapter unifies various adaptive methods from the literature, so that existing methods can be replicated and new ones tested. The hyper-parameters of the methods in the first two chapters are tuned by an offline configurator, irace. The last chapter proposes four methods utilising a deep reinforcement learning model. To test the applicability of the adaptive approaches presented in the thesis, all methods are compared with various adaptive methods from the literature, variants of differential evolution and other state-of-the-art algorithms on single-objective noiseless problems from the BBOB benchmark set.
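
    As an illustration of framing operator control as a Markov reward process, here is a minimal tabular sketch; the thesis's actual state, reward and deep RL formulations are not reproduced, and the discretised state and epsilon-greedy policy are assumptions:

    ```python
    import numpy as np

    class StrategyController:
        # Tabular Q-learning controller: actions are DE mutation strategies,
        # the state is a coarse bin of the recent improvement signal, and the
        # reward is the fitness gain produced by the chosen strategy.
        def __init__(self, n_strategies, n_states=5, alpha=0.1, gamma=0.9,
                     eps=0.1, rng=None):
            self.Q = np.zeros((n_states, n_strategies))
            self.alpha, self.gamma, self.eps = alpha, gamma, eps
            self.rng = np.random.default_rng() if rng is None else rng

        def select(self, state):
            if self.rng.random() < self.eps:                 # explore
                return int(self.rng.integers(self.Q.shape[1]))
            return int(np.argmax(self.Q[state]))             # exploit

        def update(self, state, action, reward, next_state):
            # Standard one-step Q-learning update toward the bootstrapped target.
            target = reward + self.gamma * self.Q[next_state].max()
            self.Q[state, action] += self.alpha * (target - self.Q[state, action])
    ```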

    Adaptive multimodal continuous ant colony optimization

    Get PDF
    Seeking multiple optima simultaneously, which is the aim of multimodal optimization, has attracted increasing attention but remains challenging. Taking advantage of the ability of ant colony optimization algorithms to preserve high diversity, this paper extends ant colony optimization to multimodal optimization. First, combined with current niching methods, an adaptive multimodal continuous ant colony optimization algorithm is introduced. In this algorithm, an adaptive parameter adjustment is developed which takes the differences among niches into consideration. Second, to accelerate convergence, a differential evolution mutation operator is alternatively utilized to build base vectors for ants to construct new solutions. Then, to enhance exploitation, a local search scheme based on a Gaussian distribution is self-adaptively performed around the seeds of niches. Together, these components afford a good balance between exploration and exploitation. Extensive experiments on 20 widely used benchmark multimodal functions are conducted to investigate the influence of each algorithmic component, and the results are compared with several state-of-the-art multimodal algorithms and with winners of competitions on multimodal optimization. These comparisons demonstrate the competitive efficiency and effectiveness of the proposed algorithm, especially in dealing with complex problems with large numbers of local optima.
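
    A minimal sketch of the self-adaptive Gaussian local search around niche seeds, assuming minimisation; the rank-based probability rule below is an assumption for illustration, not the paper's exact adaptation scheme:

    ```python
    import numpy as np

    def local_search_around_seeds(f, seeds, seed_fits, sigma=0.05, rng=None):
        # Sample a Gaussian perturbation around each niche seed with a
        # probability that grows with the seed's relative (rank-based) quality,
        # keeping the sample only if it improves on the seed.
        rng = np.random.default_rng() if rng is None else rng
        fits = np.asarray(seed_fits, dtype=float)
        ranks = fits.argsort().argsort()                 # 0 = best seed
        probs = 1.0 - ranks / max(len(fits) - 1, 1) * 0.5
        for i, seed in enumerate(seeds):
            if rng.random() < probs[i]:
                cand = seed + rng.normal(0.0, sigma, size=len(seed))
                cand_fit = f(cand)
                if cand_fit < fits[i]:                   # minimisation
                    seeds[i], fits[i] = cand, cand_fit
        return seeds, fits
    ```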