89 research outputs found

    Using an adaptive collection of local evolutionary algorithms for multi-modal problems

    The codebase for this paper, containing the LSEA_EA algorithm, is available at https://github.com/fieldsend/soft_computing_2014_lsea_ea
    Multi-modality can cause serious problems for many optimisers, often resulting in convergence to sub-optimal modes. Even when this is not the case, it is often useful to locate and memorise a range of modes in the design space. This is because “optimal” decision parameter combinations may not actually be feasible when moving from a mathematical model emulating the real problem to engineering an actual solution, making a range of disparate modal solutions of practical use. This paper builds upon our work on the use of a collection of localised search algorithms for niche/mode discovery, presented at UKCI 2013, which used a collection of surrogate models to guide mode search. Here we present the results of using a collection of exploitative local evolutionary algorithms (EAs) within the same general framework. The algorithm dynamically adjusts its population size according to the number of regions it encounters that it believes contain a mode, and uses localised EAs to guide the mode exploitation. We find that using a collection of localised EAs, which have limited communication with each other, produces results competitive with the current state-of-the-art multi-modal optimisation approaches on the CEC 2013 benchmark functions.

    Running Up Those Hills: Multi-Modal Search with the Niching Migratory Multi-Swarm Optimiser

    Copyright © 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
    2014 IEEE Congress on Evolutionary Computation, Beijing, China, 6 - 11 July 2014
    The codebase for this paper, containing the NMMSO algorithm, is at https://github.com/fieldsend/ieee_cec_2014_nmmso
    We present a new multi-modal evolutionary optimiser, the niching migratory multi-swarm optimiser (NMMSO), which dynamically manages many particle swarms. These sub-swarms are concerned with optimising separate local modes, and employ measures to allow swarm elements to migrate away from their parent swarm if they are identified as being in the vicinity of a separate peak, and to merge swarms together if they are identified as being concerned with the same peak. We employ coarse peak identification to facilitate the mode identification required. Swarm members are not constrained to particular sub-regions of the parameter space; however, members are initialised in the vicinity of a swarm’s local mode estimate. NMMSO is shown to cope with a range of problem types, and to produce results competitive with the state-of-the-art on the CEC 2013 multi-modal optimisation competition test problems, providing new benchmark results in the field.

    A short note on the efficient random sampling of the multi-dimensional pyramid between a simplex and the origin lying in the unit hypercube

    Copyright © 2005 University of Exeter
    When estimating how much better a classifier is than random allocation in Q-class ROC analysis, we need to sample from a particular region of the unit hypercube: specifically the region, in the unit hypercube, which lies between the (Q − 1)-simplex in Q(Q − 1)-dimensional space and the origin. This report introduces a fast method for randomly sampling this volume, which is compared to rejection sampling of uniform draws from the unit hypercube. The new method is based on sampling from a Dirichlet distribution and shifting these samples using a draw from the Uniform distribution. We show that this method generates random samples within the volume with probability ≈ 1/(Q(Q − 1)), as opposed to ≈ (Q − 1)^(Q(Q − 1))/(Q(Q − 1))! for rejection sampling from the unit hypercube. The vast reduction in rejection rates of this method means comparing classifiers in a Q-class ROC framework is now feasible, even for large Q.
    Department of Computer Science, University of Exeter
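    The Dirichlet-plus-uniform construction described above can be sketched for the basic case of the pyramid between the unit simplex and the origin. This is a minimal illustrative version, not the report's exact method: the report's region and scaling differ, and the function name and signature here are assumptions.

```python
import numpy as np

def sample_unit_pyramid(d, n, seed=None):
    # Uniform samples from the pyramid {x >= 0, sum(x) <= 1} in R^d.
    # Direction: a flat Dirichlet draw, uniform on the simplex sum(x) = 1.
    # Radius: t = u**(1/d), so the radial density matches uniform volume.
    rng = np.random.default_rng(seed)
    s = rng.dirichlet(np.ones(d), size=n)      # (n, d) points on the simplex
    t = rng.uniform(size=(n, 1)) ** (1.0 / d)  # shift each point toward the origin
    return t * s
```

    Every sample produced this way already lies inside the target volume, which is the source of the reduced rejection rate relative to drawing uniformly from the whole hypercube.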

    Multi-Objective Particle Swarm Optimisation Methods

    Copyright © 2004 University of Exeter
    This study compares a number of selection regimes for the choosing of global best (gbest) and personal best (pbest) for swarm members in multi-objective particle swarm optimisation (MOPSO). Two distinct gbest selection techniques are shown to exist in the literature: those that do not restrict the selection of archive members, and those with 'distance'-based gbest selection techniques. Theoretical justification for both of these approaches is discussed, in terms of the two types of search that these methods promote, and the potential problem of particle clumping in MOPSO is described. The popular pbest selection methods in the literature are also compared, and the effect of the recently introduced turbulence term is viewed in terms of the additional search it promotes, across all parameter combinations. In light of the discussion, new avenues of MOPSO research are highlighted.
    Department of Computer Science, University of Exeter

    A performance assessment of dynamic heuristic optimisers on the IEEE CEC 2015 niching competition test problems

    We present the performance results of five multi-modal heuristic optimisers on the 20 benchmark functions of the IEEE CEC 2015 competition on niching methods for multimodal optimisation. All the algorithms compared here are pre-defined in existing works, are dynamic in their niche maintenance, and exploit 'hill-valley' niche detection. Code and files of output statistics required by the competition are also provided online. We find that although the recently developed Niching Migratory Multi-Swarm Optimiser (NMMSO) algorithm performs best overall, other optimisers perform better on some of the problems, with the main divergence in performance apparent between homogeneous and heterogeneous problem landscapes.
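    The 'hill-valley' niche detection used by these optimisers can be sketched as follows. This is a minimal maximisation-problem version; the function name and interior sample count are illustrative, not taken from the paper's code.

```python
import numpy as np

def same_peak(f, x1, x2, n_interior=5):
    # Hill-valley test: x1 and x2 are judged to lie on the same peak of a
    # maximisation problem if no sampled point on the line segment between
    # them falls below the worse of the two endpoint fitnesses.
    threshold = min(f(x1), f(x2))
    for t in np.linspace(0.0, 1.0, n_interior + 2)[1:-1]:
        if f((1.0 - t) * x1 + t * x2) < threshold:
            return False  # a valley separates the two points
    return True
```

    Dynamic niching schemes use a test of this kind both to decide when a new point warrants its own niche and when two niches should be merged.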

    Multi-Objective Supervised Learning

    Workshop paper presented at the Workshop on Multiobjective Problem-Solving from Nature, 9th International Conference on Parallel Problem Solving from Nature (PPSN IX), Reykjavik, Iceland, 9-13 September 2006
    An extended version of this paper was subsequently published as a chapter in Multiobjective Problem Solving from Nature (Springer), pp. 155-176; see: http://hdl.handle.net/10871/11569
    This paper sets out a number of the popular areas from the literature in multi-objective supervised learning, along with simple examples. It continues by highlighting some specific areas of interest/concern when dealing with multi-objective supervised learning problems, and highlights future areas of potential research.

    Enabling dominance resistance in visualisable distance-based many-objective problems

    The codebase for this paper is available at https://github.com/fieldsend/gecco_2016_viz
    The results when optimising most multi- and many-objective problems are difficult to visualise, often requiring sophisticated approaches for compressing information into planar or 3D representations, which can be difficult to decipher. Given this, distance-based test problems are attractive: they can be constructed such that the designs naturally lie on the plane, and the Pareto set elements are easy to identify. As such, distance-based problems have gained in popularity as a way to visualise the distribution of designs maintained by different optimisers. Some taxing problem aspects (many-to-one mappings and multi-modality) have been embedded into planar distance-based test problems, although the full range of problem characteristics which exist in other test problem frameworks (deceptive fronts, degeneracy, etc.) has not. Here we present an augmentation to the distance-based test problem formulation which induces dominance resistance regions, which are otherwise missing from these test problems. We illustrate the performance of two popular optimisers on test problems generated from this framework, and highlight particular problems with evolutionary search that can manifest due to the problem characteristics.
    This work was supported financially by the Engineering and Physical Sciences Research Council grant EP/M017915/1.
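    The basic distance-based construction underlying such test problems can be sketched as follows. This minimal version omits the paper's dominance-resistance augmentation; the function name and attractor layout are illustrative assumptions.

```python
import numpy as np

def distance_objectives(x, attractor_sets):
    # Planar distance-based test problem: the i-th objective is the
    # Euclidean distance from the 2-D design x to the nearest attractor
    # point associated with objective i (all objectives minimised).
    return np.array([min(np.linalg.norm(x - a) for a in attractors)
                     for attractors in attractor_sets])
```

    Because the designs live on the plane, the trade-off between objectives (and hence the Pareto set) can be read directly off the design space, which is exactly what makes these problems useful for visualising optimiser behaviour.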

    Elite Accumulative Sampling Strategies for Noisy Multi-Objective Optimisation

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-15892-1_12
    8th International Conference on Evolutionary Multi-Criterion Optimization 2015, Guimarães, Portugal, 29 March - 1 April 2015
    The codebase for this paper is available at https://github.com/fieldsend/EMO_2015_elite
    When designing evolutionary algorithms one of the key concerns is the balance between expending function evaluations on exploration versus exploitation. When the optimisation problem experiences observational noise, there is also a trade-off with respect to accuracy refinement, as improving the estimate of a design’s performance typically comes at the cost of additional function reevaluations. Empirically, the most effective resampling approach developed so far is accumulative resampling of the elite set. In this approach elite members are regularly reevaluated, meaning they progressively accumulate reevaluations over time. This results in their approximated objective values having greater fidelity, meaning non-dominated solutions are more likely to be correctly identified. Here we examine four different approaches to accumulative resampling of elite members, embedded within a differential evolution algorithm. Comparing results on 40 variants of the unconstrained IEEE CEC’09 multi-objective test problems, we find that at low noise levels a low fixed resample rate is usually sufficient; however, for larger noise magnitudes, progressively raising the minimum number of resamples of elite members based on detecting estimated front oscillation tends to improve performance.
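    The accumulative idea, folding every reevaluation of an elite member into its performance estimate, can be sketched with a running mean. The class and method names are illustrative, not taken from the paper's codebase.

```python
import numpy as np

class EliteEstimate:
    """Running mean of noisy objective vectors for one elite design."""

    def __init__(self):
        self.n = 0        # number of evaluations accumulated so far
        self.mean = None  # current estimate of the objective vector

    def add_evaluation(self, y):
        # Fold a fresh noisy evaluation into the accumulated estimate;
        # the variance of the mean shrinks as 1/n with each resample.
        y = np.asarray(y, dtype=float)
        if self.mean is None:
            self.mean, self.n = y.copy(), 1
        else:
            self.n += 1
            self.mean += (y - self.mean) / self.n
        return self.mean
```

    Because elite members persist in the archive, they naturally accumulate more evaluations than transient population members, so their estimates become the most trustworthy, which is what makes the non-dominated set more reliable.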

    Optimizing forecast model complexity using multi-objective evolutionary algorithms

    Copyright © 2004 World Scientific
    When inducing a time series forecasting model there has always been the problem of defining a model that is complex enough to describe the process, yet not so complex as to promote data ‘overfitting’ – the so-called bias/variance trade-off. In the sphere of neural network forecast models this is commonly confronted by weight decay regularization, or by combining a complexity penalty term in the optimizing function. The correct degree of regularization, or penalty value, to implement for any particular problem, however, is difficult, if not impossible, to know a priori. This chapter presents the use of multi-objective optimization techniques, specifically those of an evolutionary nature, as a potential solution to this problem. This is achieved by representing forecast model ‘complexity’ and ‘accuracy’ as two separate objectives to be optimized. In doing this one can obtain problem-specific information with regards to the accuracy/complexity trade-off of any particular problem, and, given the shape of the front on a set of validation data, ascertain an appropriate operating point. Examples are provided on a forecasting problem with varying levels of noise.
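    Treating complexity and accuracy as two minimisation objectives reduces model selection to a Pareto (non-dominated) filter, which can be sketched as below. This is a minimal O(n²) version with illustrative names, not the chapter's evolutionary machinery.

```python
import numpy as np

def nondominated_indices(points):
    # Pareto filter (minimisation): keep the rows not dominated by any
    # other row. Each row here would be (complexity, validation error).
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any((q <= p).all() and (q < p).any()
                        for j, q in enumerate(pts) if j != i)
        if not dominated:
            keep.append(i)
    return keep
```

    The surviving points form the estimated accuracy/complexity front, from which an operating point can be chosen by inspecting its shape on validation data.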

    Pareto multi-objective non-linear regression modelling to aid CAPM analogous forecasting

    Copyright © 2002 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
    2002 International Joint Conference on Neural Networks (IJCNN '02), Honolulu, Hawaii, 12-17 May 2002
    Recent studies confront the problem of multiple error terms through summation; however, this implicitly assumes prior knowledge of the problem's error surface. This study constructs a population of Pareto optimal neural network regression models to describe a market generation process in relation to the forecasting of its risk and return.