
    Multi-objective optimisation in the presence of uncertainty

    2005 IEEE Congress on Evolutionary Computation, Edinburgh, Scotland, 2-5 September 2005. The codebase for this paper is available at https://github.com/fieldsend/ieee_cec_2005_bayes_uncertain. There has been only limited discussion of the effect of uncertainty and noise in multi-objective optimisation problems and how to deal with it. Here we address this problem by assessing the probability of dominance and maintaining an archive of solutions which are, with some known probability, mutually non-dominating. We examine methods for estimating the probability of dominance. These depend crucially on estimating the effective noise variance, and we introduce a novel method of learning the variance during optimisation. Probabilistic domination contours are presented as a method for conveying the confidence that may be placed in objectives that are optimised in the presence of uncertainty.
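The probability-of-dominance assessment described above can be sketched as follows, assuming (as a simplifying illustration, not necessarily the paper's exact formulation) independent Gaussian noise on each minimised objective, so that P(a dominates b) factorises over objectives:

```python
import math

def dominance_probability(mu_a, mu_b, var_a, var_b):
    """Estimate P(a dominates b) for minimisation, assuming each
    observed objective is corrupted by independent Gaussian noise.
    Illustrative approximation: product over objectives of P(a_i < b_i)."""
    p = 1.0
    for ma, mb, va, vb in zip(mu_a, mu_b, var_a, var_b):
        # a_i - b_i is Normal(ma - mb, va + vb), so
        # P(a_i < b_i) = Phi((mb - ma) / sqrt(va + vb))
        z = (mb - ma) / math.sqrt(va + vb)
        p *= 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return p
```

With equal means the probability of dominance in two objectives is 0.5 * 0.5 = 0.25, reflecting that neither solution is confidently better.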

    Multi-Objective Supervised Learning

    Workshop paper presented at the Workshop on Multiobjective Problem-Solving from Nature, 9th International Conference on Parallel Problem Solving from Nature (PPSN IX), Reykjavik, Iceland, 9-13 September 2006. An extended version of this paper was subsequently published as a chapter in Multiobjective Problem Solving from Nature (Springer), pp. 155-176; see: http://hdl.handle.net/10871/11569. This paper sets out a number of the popular areas from the literature in multi-objective supervised learning, along with simple examples. It continues by highlighting some specific areas of interest/concern when dealing with multi-objective supervised learning problems, and highlights future areas of potential research.

    Regression Error Characteristic Optimisation of Non-Linear Models.

    Copyright © 2006 Springer-Verlag Berlin Heidelberg. The final publication is available at link.springer.com. Book title: Multi-Objective Machine Learning. In this chapter recent research in the area of multi-objective optimisation of regression models is presented and combined. Evolutionary multi-objective optimisation techniques are described for training a population of regression models to optimise the recently defined Regression Error Characteristic (REC) curves, a method which meaningfully compares across regressors and against benchmark models (i.e. ‘random walk’ and maximum a posteriori approaches) for varying error rates. Through bootstrapping the training data, degrees of confident out-performance are also highlighted.
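A REC curve of the kind optimised here can be computed from a set of residuals in a few lines (a minimal sketch; the chapter's own evaluation details may differ):

```python
def rec_curve(errors):
    """Regression Error Characteristic curve: for each error
    tolerance e, the fraction of test points whose absolute
    error does not exceed e. Returns (tolerance, accuracy) pairs,
    i.e. the empirical CDF of the absolute residuals."""
    pts = sorted(abs(e) for e in errors)
    n = len(pts)
    return [(e, (i + 1) / n) for i, e in enumerate(pts)]
```

Models are then compared by how quickly their curves rise towards accuracy 1.0, analogous to comparing classifiers by ROC curves.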

    Elite Accumulative Sampling Strategies for Noisy Multi-Objective Optimisation

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-15892-1_12. 8th International Conference on Evolutionary Multi-Criterion Optimization 2015, Guimarães, Portugal, 29 March - 1 April 2015. The codebase for this paper is available at https://github.com/fieldsend/EMO_2015_elite. When designing evolutionary algorithms, one of the key concerns is the balance between expending function evaluations on exploration versus exploitation. When the optimisation problem experiences observational noise, there is also a trade-off with respect to accuracy refinement, as improving the estimate of a design's performance typically comes at the cost of additional function reevaluations. Empirically, the most effective resampling approach developed so far is accumulative resampling of the elite set. In this approach elite members are regularly reevaluated, meaning they progressively accumulate reevaluations over time. This results in their approximated objective values having greater fidelity, meaning non-dominated solutions are more likely to be correctly identified. Here we examine four different approaches to accumulative resampling of elite members, embedded within a differential evolution algorithm. Comparing results on 40 variants of the unconstrained IEEE CEC'09 multi-objective test problems, we find that at low noise levels a low fixed resample rate is usually sufficient; however, for larger noise magnitudes, progressively raising the minimum number of resamples of elite members based on detecting estimated front oscillation tends to improve performance.
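The core idea of accumulative resampling — each extra reevaluation of an elite member tightens the estimate of its objective value — can be illustrated with a running mean (an illustrative sketch, not the paper's algorithm):

```python
import random

class AccumulatingEstimate:
    """Running mean of repeated noisy evaluations of one design.
    Each extra resample shrinks the standard error by a factor of
    1/sqrt(n), so long-lived elite members end up estimated with
    higher fidelity than newly generated designs."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def add(self, y):
        # Welford-style incremental mean update
        self.n += 1
        self.mean += (y - self.mean) / self.n
        return self.mean

# Illustration: true objective value 3.0 observed with N(0, 1) noise
random.seed(1)
est = AccumulatingEstimate()
for _ in range(1000):
    est.add(3.0 + random.gauss(0.0, 1.0))
```

After 1000 accumulated resamples the standard error is roughly 0.03, so the running mean sits very close to the true value of 3.0.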

    Multi-Objective Supervised Learning

    Copyright © 2008 Springer-Verlag Berlin Heidelberg. The final publication is available at link.springer.com. Book title: Multiobjective Problem Solving from Nature. Extended version of the 2006 workshop paper presented at the Workshop on Multiobjective Problem-Solving from Nature, 9th International Conference on Parallel Problem Solving from Nature (PPSN IX), Reykjavik, Iceland, 9-13 September 2006; see: http://hdl.handle.net/10871/11785. This chapter sets out a number of the popular areas in multiobjective supervised learning. It gives empirical examples of model complexity optimization and competing error terms, and presents recent advances in multi-class receiver operating characteristic analysis enabled by multiobjective optimization. It concludes by highlighting some specific areas of interest/concern when dealing with multiobjective supervised learning problems, and sets out future areas of potential research.

    Multi-objective optimisation of safety related systems: An application to Short Term Conflict Alert.

    Copyright © 2006 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
    Notes: In this paper multi-objective optimisation is used for the first time to adjust the 1500 parameters of Short-Term Conflict Alert systems to optimise the Receiver Operating Characteristic (ROC) by simultaneously reducing the false positive rate and increasing the true positive alert rate, something that previous work by other researchers had not succeeded in doing. Importantly for such safety-critical systems, the method also yields an assessment of the confidence that may be placed in the optimised ROC curves. The paper results from a collaboration with NATS, and a current KTP project, also with NATS, is deploying the methods in air-traffic control centres nationwide.
    Many safety-related and critical systems warn of potentially dangerous events; for example, the short term conflict alert (STCA) system warns of airspace infractions between aircraft. Although installed with current technology, such critical systems may become out of date due to changes in the circumstances in which they function, operational procedures, and the regulatory environment. Current practice is to "tune", by hand, the many parameters governing the system in order to optimize the operating point in terms of the true positive and false positive rates, which are frequently associated with highly imbalanced costs. We cast the tuning of critical systems as a multiobjective optimization problem. We show how a region of the optimal receiver operating characteristic (ROC) curve may be obtained, permitting the system operators to select the operating point.
    We apply this methodology to the STCA system, using a multiobjective (1+1) evolution strategy, showing that we can improve upon the current hand-tuned operating point, as well as providing the salient ROC curve describing the true positive versus false positive trade-off. We also provide results for three-objective optimization of the alert response time in addition to the true and false positive rates. Additionally, we illustrate the use of bootstrapping for representing evaluation uncertainty on estimated Pareto fronts, where the evaluation of a system is based upon a finite set of representative data.
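The operating points produced by such tuning form a discretised ROC trade-off. Maintaining a mutually non-dominating archive of (false positive rate, miss rate) pairs, both minimised, can be sketched as follows (`update_archive` is a hypothetical helper, not the paper's implementation):

```python
def update_archive(archive, candidate):
    """Insert a candidate operating point into an archive of
    mutually non-dominating points. Each point is a tuple of
    objectives to be minimised, e.g. (false positive rate,
    miss rate). Returns the updated archive."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, better in at least one
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    # Reject the candidate if any archived point dominates it
    if any(dominates(m, candidate) for m in archive):
        return archive
    # Otherwise drop any archived points the candidate dominates, then add it
    return [m for m in archive if not dominates(candidate, m)] + [candidate]
```

The surviving archive traces out the estimated ROC front from which operators can select an operating point.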

    Interval-based ranking in noisy evolutionary multiobjective optimization

    As one of the most competitive approaches to multi-objective optimization, evolutionary algorithms have been shown to obtain very good results for many real-world multi-objective problems. One of the issues that can affect the performance of these algorithms is uncertainty in the quality of the solutions, which is usually represented as noise in the objective values. Therefore, handling noisy objectives in evolutionary multi-objective optimization algorithms becomes very important and has been gaining more attention in recent years. In this paper we present the α-degree Pareto dominance relation for ordering the solutions in multi-objective optimization when the values of the objective functions are given as intervals. Based on this dominance relation, we propose an adaptation of the non-dominated sorting algorithm for ranking the solutions. This ranking method is then used in a standard multi-objective evolutionary algorithm and a recently proposed multi-objective estimation of distribution algorithm based on joint variable-objective probabilistic modeling, and applied to a set of multi-objective problems with different levels of independent noise. The experimental results show that the use of the proposed method for solution ranking allows the approximation of Pareto sets which are considerably better than those obtained when using the dominance probability-based ranking method, one of the main methods for noise handling in multi-objective optimization.
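One simple interval relation in this spirit, requiring certain dominance (a conservative sketch; the paper's graded dominance relation is more general), can be written as:

```python
def interval_dominates(a, b):
    """Certain interval dominance for minimisation. Each solution is a
    list of (lo, hi) intervals, one per objective. Solution a dominates
    b only if every interval of a sits entirely at or below the
    corresponding interval of b, strictly below in at least one
    objective -- i.e. dominance holds for any values in the intervals."""
    at_least_as_good = all(a_hi <= b_lo
                           for (_, a_hi), (b_lo, _) in zip(a, b))
    strictly_better = any(a_hi < b_lo
                          for (_, a_hi), (b_lo, _) in zip(a, b))
    return at_least_as_good and strictly_better
```

When intervals overlap, this relation declares the pair incomparable rather than risk an incorrect ordering, which is why graded (degree-based) relations are useful for retaining more ranking information.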

    Comparing Solutions under Uncertainty in Multiobjective Optimization

    For various reasons, the solutions in real-world optimization problems cannot always be exactly evaluated, but are sometimes represented with approximated values and confidence intervals. To address this issue, the comparison of solutions has to be done differently than for exactly evaluated solutions. In this paper, we define new relations under uncertainty between solutions in multiobjective optimization that are represented with approximated values and confidence intervals. The new relations extend the Pareto dominance relations, can handle constraints, and can be used to compare solutions both with and without confidence intervals. We also show that by including confidence intervals in the comparisons, the possibility of incorrect comparisons due to inaccurate approximations is reduced. Without considering confidence intervals, the comparison of inaccurately approximated solutions can result in promising solutions being rejected and worse ones preserved. The effect of the new relations on the comparison of solutions in a multiobjective optimization algorithm is also demonstrated.