80 research outputs found

    Efficiently identifying pareto solutions when objective values change

    Copyright © 2014 ACM. The example code for this paper is available at https://github.com/fieldsend/gecco_2014_changing_objectives
    In many multi-objective problems, the objective values assigned to a particular design can change during the course of an optimisation. This may be due to dynamic changes in the problem itself, or to updates of estimated objectives in noisy problems. In these situations, designs which are non-dominated at one time step may become dominated later, not just because a new and better solution has been found, but because the existing solution's performance has degraded. Likewise, a dominated solution may later be identified as non-dominated because its objectives have comparatively improved. We propose management algorithms based on recording a single “guardian dominator” for each solution, which allow rapid discovery and updating of the non-dominated subset of solutions evaluated by an optimiser. We examine the computational complexity of our proposed approach, and compare the performance of different ways of selecting the guardian dominators.
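    The guardian-dominator idea can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation (the real code is at the linked GitHub repository); the function names and the dict-based bookkeeping are assumptions. The key point is that a solution whose recorded guardian still dominates it needs only one cheap comparison, and a full rescan happens only when the guardian relationship breaks.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_guardians(solutions, guardians):
    """Return the non-dominated keys of `solutions` (a dict: key -> objectives),
    maintaining one recorded 'guardian' dominator per solution in `guardians`.

    A solution still dominated by its guardian is skipped with a single
    comparison; only solutions whose guardian no longer dominates them
    trigger a full scan, which is the source of the speed-up.
    """
    nondominated = []
    for s in solutions:
        g = guardians.get(s)
        if g is not None and dominates(solutions[g], solutions[s]):
            continue  # guardian still valid: one cheap check suffices
        # guardian invalid or absent; scan for a new dominator
        for other, obj in solutions.items():
            if other != s and dominates(obj, solutions[s]):
                guardians[s] = other
                break
        else:
            guardians[s] = None
            nondominated.append(s)
    return nondominated
```
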

    Cardinality constrained portfolio optimisation

    Copyright © 2004 Springer-Verlag Berlin Heidelberg. The final publication is available at link.springer.com
    Book title: Intelligent Data Engineering and Automated Learning – IDEAL 2004
    5th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL 2004), Exeter, UK, August 25-27, 2004
    The traditional quadratic programming approach to portfolio optimisation is difficult to implement when there are cardinality constraints. Recent approaches to resolving this have used heuristic algorithms to search for points on the cardinality-constrained frontier. However, these can be computationally expensive when the practitioner does not know a priori exactly how many assets they may desire in a portfolio, or what level of return/risk they wish to be exposed to, without recourse to analysing the actual trade-off frontier. This study introduces a parallel solution to this problem. By extending techniques developed in the multi-objective evolutionary optimisation domain, a set of portfolios representing estimates of all possible cardinality-constrained frontiers can be found in a single search process, for a range of portfolio sizes and constraints. Empirical results are provided on emerging-market and US asset data, and compared to unconstrained frontiers found by quadratic programming.
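    The two ingredients of such a search can be illustrated in a few lines: the mean-variance objectives to be traded off, and a repair operator that enforces a cardinality constraint on candidate weight vectors. This is a generic sketch under standard mean-variance assumptions, not the paper's algorithm; the "keep the k largest weights" repair is one common heuristic choice, labelled as such below.

```python
import numpy as np

def portfolio_objectives(weights, mean_returns, cov):
    """Return (risk, -return) as a minimisation pair; weights sum to 1."""
    ret = float(weights @ mean_returns)
    risk = float(weights @ cov @ weights)  # portfolio variance
    return risk, -ret

def repair_cardinality(weights, k):
    """Zero all but the k largest weights and renormalise.

    A common repair operator in heuristic cardinality-constrained
    search (an illustrative choice, not necessarily the paper's).
    """
    w = np.asarray(weights, dtype=float).copy()
    keep = np.argsort(w)[-k:]          # indices of the k largest weights
    mask = np.zeros_like(w, dtype=bool)
    mask[keep] = True
    w[~mask] = 0.0
    return w / w.sum()
```

A multi-objective evolutionary optimiser would then evolve weight vectors, repairing each to every cardinality of interest and archiving the non-dominated (risk, return) points per cardinality.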

    A MOPSO Algorithm Based Exclusively on Pareto Dominance Concepts

    Copyright © 2005 Springer Verlag. The final publication is available at link.springer.com
    Book title: Evolutionary Multi-Criterion Optimization
    3rd International Conference, EMO 2005, Guanajuato, Mexico, March 9-11, 2005. Proceedings
    In extending the Particle Swarm Optimisation methodology to multi-objective problems, it is unclear how global guides for particles should be selected. Previous work has relied on metric information in objective space, although this is at variance with the notion of dominance which is used to assess the quality of solutions. Here we propose methods based exclusively on dominance for selecting guides from a non-dominated archive. The methods are evaluated on standard test problems, and we find that probabilistic selection favouring archival particles that dominate few particles provides good convergence towards, and coverage of, the Pareto front. We demonstrate that the scheme is robust to changes in objective scaling. We propose and evaluate methods for confining particles to the feasible region, and find that allowing particles to explore regions close to the constraint boundaries is important to ensure convergence to the Pareto front.
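    The dominance-only guide selection described above might be sketched as follows. This is an assumed form of the idea, not the paper's exact scheme: archive members that dominate fewer swarm particles receive higher selection probability (here via a simple inverse-count weighting, which is an illustrative choice).

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def select_guide(archive, swarm, rng=random):
    """Pick a global guide from the non-dominated archive.

    Members dominating FEW swarm particles get higher weight, steering
    the swarm towards under-explored parts of the front. No objective-
    space distances are used, only dominance comparisons.
    """
    counts = [sum(dominates(a, p) for p in swarm) for a in archive]
    weights = [1.0 / (1 + c) for c in counts]  # fewer dominated -> more weight
    r = rng.uniform(0, sum(weights))
    acc = 0.0
    for a, w in zip(archive, weights):
        acc += w
        if r <= acc:
            return a
    return archive[-1]
```
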

    Trading-off Data Fit and Complexity in Training Gaussian Processes with Multiple Kernels

    This is the author accepted manuscript. The final version is available from Springer Verlag via the DOI in this record.
    LOD 2019: Fifth International Conference on Machine Learning, Optimization, and Data Science, 10-13 September 2019, Siena, Italy
    Gaussian processes (GPs) belong to a class of probabilistic techniques that have been successfully used in different domains of machine learning and optimization. They are popular because they provide uncertainties in predictions, which sets them apart from other modelling methods that provide only point predictions. The uncertainty is particularly useful for decision making, as we can gauge how reliable a prediction is. One of the fundamental challenges in using GPs is that the efficacy of a model depends on selecting an appropriate kernel and the associated hyperparameter values for a given problem. Furthermore, the training of GPs, that is, optimizing the hyperparameters using a data set, is traditionally performed using a cost function that is a weighted sum of data fit and model complexity, and the underlying trade-off is completely ignored. Addressing these challenges and shortcomings, in this article we propose the following automated training scheme. Firstly, we use a weighted product of multiple kernels, with a view to relieving users from having to choose an appropriate kernel for the problem at hand without any domain-specific knowledge. Secondly, for the first time, we modify GP training by using a multi-objective optimizer to tune the hyperparameters and weights of multiple kernels, and extract an approximation of the complete trade-off front between data fit and model complexity. We then propose a novel solution selection strategy based on mean standardized log loss (MSLL) to select a solution from the estimated trade-off front and finalise training of the GP model. The results on three data sets, and comparison with the standard approach, clearly show the potential benefit of the proposed approach of using multi-objective optimization with multiple kernels.
    Natural Environment Research Council (NERC)
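    A weighted product of kernels can be written compactly as K = ∏ᵢ Kᵢ^{wᵢ}, so the kernel weights become hyperparameters alongside the length-scales. The sketch below is an assumed parametrisation for illustration (RBF and Matérn-1/2 bases; the paper's kernel set and exact form may differ), showing only how the combined Gram matrix would be built.

```python
import numpy as np

def rbf(X1, X2, ell):
    """Squared-exponential kernel with length-scale ell."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def matern12(X1, X2, ell):
    """Matern-1/2 (exponential) kernel with length-scale ell."""
    d = np.sqrt(((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1))
    return np.exp(-d / ell)

def product_kernel(X1, X2, params, weights):
    """Weighted product of base kernels: K = rbf**w0 * matern12**w1.

    A sketch of the multiple-kernel idea only; parameter names
    ('rbf_ell', 'mat_ell') are assumptions for this illustration.
    """
    K = rbf(X1, X2, params['rbf_ell']) ** weights[0]
    K *= matern12(X1, X2, params['mat_ell']) ** weights[1]
    return K
```

A multi-objective optimizer would then treat the length-scales and the exponents `weights` as one decision vector, with data fit and model complexity as the two objectives.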

    Regression Error Characteristic Optimisation of Non-Linear Models.

    Copyright © 2006 Springer-Verlag Berlin Heidelberg. The final publication is available at link.springer.com
    Book title: Multi-Objective Machine Learning
    In this chapter, recent research in the area of multi-objective optimisation of regression models is presented and combined. Evolutionary multi-objective optimisation techniques are described for training a population of regression models to optimise the recently defined Regression Error Characteristic (REC) curves, a method which meaningfully compares across regressors and against benchmark models (i.e. 'random walk' and maximum a posteriori approaches) for varying error rates. Through bootstrapping the training data, degrees of confident out-performance are also highlighted.
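    An REC curve itself is simple to compute: for each error tolerance it records the fraction of predictions whose error falls within that tolerance, giving a CDF-like curve over error rates. A minimal sketch (absolute error is assumed here as the error measure; other measures work the same way):

```python
import numpy as np

def rec_curve(y_true, y_pred, tolerances):
    """Regression Error Characteristic curve: for each tolerance,
    the fraction of points whose absolute error is within it."""
    errors = np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float))
    return [float(np.mean(errors <= t)) for t in tolerances]
```

Plotting accuracy against tolerance lets one regressor be compared against another, or against a benchmark such as a random-walk predictor, across all error rates at once.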

    An Evolutionary Approach to Active Robust Multiobjective Optimisation

    An Active Robust Optimisation Problem (AROP) aims at finding robust adaptable solutions, i.e. solutions that actively gain robustness to environmental changes through adaptation. Existing AROP studies have considered only a single performance objective. This study extends the Active Robust Optimisation methodology to deal with problems with more than one objective. Once multiple objectives are considered, the optimal performance for every uncertain parameter setting is a set of configurations, offering different trade-offs between the objectives. To evaluate and compare solutions to this type of problem, we suggest a robustness indicator that uses a scalarising function combining the main aims of multi-objective optimisation: proximity, diversity and pertinence. The Active Robust Multi-objective Optimisation Problem is formulated in this study, and an evolutionary algorithm that uses the hypervolume measure as a scalarising function is suggested in order to solve it. Proof-of-concept results are demonstrated using a simplified gearbox optimisation problem for an uncertain load demand.
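    The hypervolume measure used as the scalarising function above reduces a whole trade-off set to one number: the area (in two objectives) dominated by the set up to a reference point. A standard two-objective sweep-line computation, shown as a generic sketch rather than the paper's code:

```python
def hypervolume_2d(front, ref):
    """Hypervolume dominated by a 2-D front w.r.t. reference point `ref`
    (both objectives minimised; front points assumed within `ref`).

    Sweeps the front in ascending first objective, summing the
    rectangular strip each point adds below the previous best f2.
    """
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        if f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```
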

    Multi-objective optimisation for receiver operating characteristic analysis

    Copyright © 2006 Springer-Verlag Berlin Heidelberg. The final publication is available at link.springer.com
    Book title: Multi-Objective Machine Learning
    Receiver operating characteristic (ROC) analysis is now a standard tool for the comparison of binary classifiers and the selection of operating parameters when the costs of misclassification are unknown. This chapter outlines the use of evolutionary multi-objective optimisation techniques for ROC analysis, both in its traditional binary classification setting and in the novel multi-class ROC situation. Methods for comparing classifier performance in the multi-class case, based on an analogue of the Gini coefficient, are described, which leads to a natural method of selecting the classifier operating point. Illustrations are given using synthetic data and an application to Short Term Conflict Alert.
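    In the binary case, the Gini coefficient is the quantity the chapter's multi-class measure generalises: twice the area between the ROC curve and the chance diagonal, i.e. 2·AUC − 1. A minimal binary-case sketch (the multi-class analogue itself is more involved and not reproduced here):

```python
def auc_trapezoid(fpr, tpr):
    """Area under an ROC curve from sorted (fpr, tpr) points,
    by the trapezoid rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(zip(fpr, tpr), zip(fpr[1:], tpr[1:])):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

def gini(fpr, tpr):
    """Binary Gini coefficient: 2*AUC - 1. Zero for a chance-level
    classifier, one for a perfect one."""
    return 2.0 * auc_trapezoid(fpr, tpr) - 1.0
```
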

    Research on Terminal Aggregative Selection Algorithm Based on Multi-objective Evolutionary


    On the exploitation of search history and accumulative sampling in robust optimisation
