15,239 research outputs found

    On the evolutionary optimisation of many conflicting objectives

    This inquiry explores the effectiveness of a class of modern evolutionary algorithms, represented by Non-dominated Sorting Genetic Algorithm (NSGA) components, for solving optimisation tasks with many conflicting objectives. Optimiser behaviour is assessed for a grid of mutation and recombination operator configurations. Performance maps are obtained for the dual aims of proximity to, and distribution across, the optimal trade-off surface. Performance sweet-spots for both variation operators are observed to contract as the number of objectives is increased. Classical settings for recombination are shown to be suitable for small numbers of objectives but correspond to very poor performance for higher numbers of objectives, even when large population sizes are used. Explanations for this behaviour are offered via the concepts of dominance resistance and active diversity promotion.
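The dominance-resistance effect the abstract refers to can be illustrated with a small sketch (the function names below are hypothetical, and this is not the paper's experimental setup): as the number of objectives grows, a random point becomes ever less likely to be dominated by another, so selection pressure toward the trade-off surface collapses.

```python
import random

def dominates(a, b):
    """True if a Pareto-dominates b (minimisation): no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(n_points, n_objectives, seed=0):
    """Fraction of uniform random points in [0,1]^m that no other
    sampled point dominates."""
    rng = random.Random(seed)
    pts = [tuple(rng.random() for _ in range(n_objectives))
           for _ in range(n_points)]
    nd = [p for p in pts
          if not any(dominates(q, p) for q in pts if q is not p)]
    return len(nd) / n_points
```

With two objectives only a few percent of random points are non-dominated; with ten objectives most of them are, which is one way to see why dominance-based selection loses its grip as objectives are added.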

    Methods for many-objective optimization: an analysis

    Decomposition-based methods are often cited as the solution to problems related to many-objective optimization. Decomposition-based methods employ a scalarizing function to reduce a many-objective problem to a set of single-objective problems which, upon solution, yield a good approximation of the set of optimal solutions, commonly referred to as the Pareto front. In this work we explore the implications of using decomposition-based methods rather than Pareto-based methods from a probabilistic point of view. Namely, we investigate whether there is an advantage to using a decomposition-based method, for example one based on the Chebyshev scalarizing function, over Pareto-based methods.
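The scalarising step can be made concrete (a minimal sketch of the standard weighted Chebyshev function, not the paper's full analysis; the helper names are illustrative):

```python
def chebyshev(f, z_star, weights):
    """Weighted Chebyshev scalarisation: max_i w_i * |f_i - z*_i|,
    where z_star is the ideal (utopian) point. Minimising this over the
    decision space can, for a suitable weight vector, reach any
    Pareto-optimal point, including those in non-convex regions."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, z_star))

def best_for_weights(solutions, z_star, weights):
    """Pick, from a finite candidate set of objective vectors, the one
    that minimises the scalarised value for one weight vector -- i.e.
    solve one 'subproblem' of the decomposition."""
    return min(solutions, key=lambda f: chebyshev(f, z_star, weights))
```

A decomposition-based method solves many such subproblems, one per weight vector, and collects the winners as its approximation of the Pareto front.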

    Half a billion simulations: evolutionary algorithms and distributed computing for calibrating the SimpopLocal geographical model

    Multi-agent geographical models integrate very large numbers of spatial interactions. Validating such models requires a large amount of computing for their simulation and calibration. Here a new data-processing chain, including an automated calibration procedure, is tested on a computational grid using evolutionary algorithms. This is applied for the first time to a geographical model designed to simulate the evolution of an early urban settlement system. The method reduces the computing time and provides robust results. Using this method, we identify several parameter settings that minimise three objective functions quantifying how closely the model results match a reference pattern. As the values of each parameter in the different settings are very close, this estimation considerably reduces the initial possible domain of variation of the parameters. The model is thus a useful tool for further multiple applications to empirical historical situations.
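The core of such a calibration loop — scoring each candidate parameter setting by several discrepancy measures against a reference pattern — can be sketched as follows (the three objective definitions here are hypothetical stand-ins, not SimpopLocal's actual criteria):

```python
def calibration_objectives(simulate, params, reference):
    """Three illustrative discrepancies between simulated output and a
    reference pattern; a multi-objective EA would minimise all three
    simultaneously over the parameter space."""
    sim = simulate(params)
    n = len(reference)
    err_mean = abs(sum(sim) / len(sim) - sum(reference) / n)
    err_max = abs(max(sim) - max(reference))
    # compare sorted values as a crude shape/distribution discrepancy
    err_shape = sum(abs(s - r)
                    for s, r in zip(sorted(sim), sorted(reference))) / n
    return err_mean, err_max, err_shape
```

The evolutionary search then distributes many such evaluations across the grid, keeping the parameter settings whose objective triples are non-dominated.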

    A nature-inspired multi-objective optimisation strategy based on a new reduced space searching algorithm for the design of alloy steels

    In this paper, a novel search and optimisation algorithm based on a new reduced-space searching strategy is presented. The algorithm originates from a simple observation about how humans search for an optimal solution to a ‘real-life’ problem: given a certain objective, a large area tends to be scanned first; once clues related to the predefined objective are found, the search space is greatly reduced for a more detailed search. Furthermore, the new algorithm is extended to the multi-objective optimisation case. Simulation results on some challenging benchmark problems suggest that both the proposed single-objective and multi-objective optimisation algorithms outperform some well-known Evolutionary Algorithms (EAs). The proposed algorithms are further applied successfully to the optimal design of alloy steels, which aims at determining the optimal heat-treatment regime and the required weight percentages of the chemical composition to obtain the desired mechanical properties of the steel, hence minimising production costs and achieving the overarching aim of ‘right-first-time production’ of metals.
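The scan-then-contract idea can be sketched as a simple single-objective minimiser (a loose illustration of the principle under assumed defaults, not the authors' exact algorithm):

```python
import random

def reduced_space_search(f, bounds, iters=60, samples=20, shrink=0.5, seed=0):
    """Minimise f over a box by alternating uniform sampling with
    contraction of the box around the best point found so far."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    best_x = [(l + h) / 2.0 for l, h in zip(lo, hi)]
    best_f = f(best_x)
    for _ in range(iters):
        # scan: sample the current (possibly already reduced) box
        for _ in range(samples):
            x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
            fx = f(x)
            if fx < best_f:
                best_f, best_x = fx, x
        # contract: shrink each dimension of the box around the incumbent
        half = [shrink * (h - l) / 2.0 for l, h in zip(lo, hi)]
        lo = [max(l, b - d) for l, b, d in zip(lo, best_x, half)]
        hi = [min(h, b + d) for h, b, d in zip(hi, best_x, half)]
    return best_x, best_f
```

Each contraction halves the box around the best point found, mirroring the "detailed search in a reduced space" step described above; the multi-objective extension in the paper replaces the single incumbent with a set of trade-off solutions.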

    Stochastic simulation framework for the Limit Order Book using liquidity motivated agents

    In this paper we develop a new form of agent-based model for limit order books based on heterogeneous trading agents whose motivations are liquidity driven. These agents are abstractions of real market participants, expressed in a stochastic model framework. We develop an efficient way to perform statistical calibration of the model parameters on Level 2 limit order book data from Chi-X, based on a combination of indirect inference and multi-objective optimisation. We then demonstrate how such an agent-based modelling framework can be used for testing exchange regulations, as well as for informing brokerage decisions and other trading-based scenarios.

    A multi-objective genetic algorithm for the design of pressure swing adsorption

    Pressure Swing Adsorption (PSA) is a cyclic separation process that is more advantageous than other separation options for middle-scale processes. Automated tools for the design of PSA processes would benefit the development of the technology, but building them is difficult owing to the complexity of simulating PSA cycles and the computational effort needed to determine performance at cyclic steady state. We present a preliminary investigation of the performance of a custom multi-objective genetic algorithm (MOGA) for the optimisation of a fast-cycle PSA operation, the separation of air for N2 production. The simulation requires a detailed diffusion model, which involves coupled nonlinear partial differential and algebraic equations (PDAEs). The ability of the MOGA to handle this complex problem has been assessed by comparison with direct search methods. An analysis of the effect of MOGA parameters on performance is also presented.

    Robust optimisation of urban drought security for an uncertain climate

    Recent experience with drought and a shifting climate has highlighted the vulnerability of urban water supplies to “running out of water” in Perth, south-east Queensland, Sydney, Melbourne and Adelaide and has triggered major investment in water source infrastructure which will ultimately run into tens of billions of dollars. With the prospect of continuing population growth in major cities, the provision of acceptable drought security will become more pressing, particularly if the future climate becomes drier. Decision makers need to deal with significant uncertainty about future climate and population. In particular, the science of climate change is such that the accuracy of model predictions of future climate is limited by fundamental, irreducible uncertainties. It would be unwise to rely unduly on projections made by climate models, and prudent to favour solutions that are robust across a range of possible climate futures. This study presents and demonstrates a methodology that addresses the problem of finding “good” solutions for urban bulk water systems in the presence of deep uncertainty about future climate. The methodology involves three key steps: 1) build a simulation model of the bulk water system; 2) construct replicates of future climate that reproduce the natural variability seen in the instrumental record and that reflect a plausible range of future climates; and 3) use multi-objective optimisation to search efficiently through potentially trillions of solutions to identify a set of “good” solutions that optimally trade off expected performance against robustness or sensitivity of performance over the range of future climates. A case study based on the Lower Hunter in New South Wales demonstrates the methodology.
It is important to note that the case study does not consider the full suite of options and objectives; preliminary information on plausible options has been generalised for demonstration purposes, and the results should therefore be used only in the context of evaluating the methodology. “Dry” and “wet” climate scenarios that represent the likely span of climate in 2070 under the A1FI emissions scenario were constructed. Using the WATHNET5 model, a simulation model of the Lower Hunter was constructed and validated. The search for “good” solutions was conducted by minimising two criteria: 1) the expected present worth of capital costs, operational costs and social costs due to restrictions and emergency rationing, and 2) the difference in present worth cost between the “dry” and “wet” 2070 climate scenarios. The constraint was imposed that solutions must be able to supply (reduced) demand in the worst drought. Two demand scenarios were considered: “1.28 x current demand”, representing expected consumption in 2060, and “2 x current demand”, representing a highly stressed system. The optimisation considered a representative range of options including desalination, new surface water sources, demand substitution using rainwater tanks, drought contingency measures and operating rules. It was found that the sensitivity of solutions to uncertainty about future climate varied considerably. For the “1.28 x demand” scenario there was limited sensitivity to the climate scenarios, resulting in a narrow range of trade-offs. In contrast, for the “2 x demand” scenario, the trade-off between expected present worth cost and robustness was considerable. The main policy implication is that (possibly large) uncertainty about future climate may not necessarily produce significantly different performance trajectories.
The sensitivity is determined not only by differences between climate scenarios but also by other external stresses imposed on the system, such as population growth, and by constraints on the available options to secure the system against drought. Please cite this report as: Mortazavi, M., Kuczera, G., Kiem, A.S., Henley, B., Berghout, B., Turner, E., 2013. Robust optimisation of urban drought security for an uncertain climate. National Climate Change Adaptation Research Facility, Gold Coast, pp. 74.
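The report's two search criteria can be written down directly for a single candidate solution (the equal 50/50 scenario weighting below is an assumption made for illustration; the function name is hypothetical):

```python
def robustness_criteria(cost_dry, cost_wet):
    """Criterion 1: expected present-worth cost over the 'dry' and 'wet'
    2070 climate scenarios (assumed equally weighted here).
    Criterion 2: the dry-wet cost spread, i.e. how sensitive the
    solution's cost is to which climate future eventuates."""
    expected = 0.5 * (cost_dry + cost_wet)
    spread = abs(cost_dry - cost_wet)
    return expected, spread
```

A multi-objective optimiser minimising both outputs then exposes the trade-off discussed above: solutions that are cheap on average but climate-sensitive versus solutions that cost more but perform similarly under either scenario.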

    PasMoQAP: A Parallel Asynchronous Memetic Algorithm for solving the Multi-Objective Quadratic Assignment Problem

    Multi-Objective Optimization Problems (MOPs) have attracted growing attention during the last decades. Multi-Objective Evolutionary Algorithms (MOEAs) have been extensively used to address MOPs because they are able to approximate a set of non-dominated, high-quality solutions. The Multi-Objective Quadratic Assignment Problem (mQAP) is a MOP that generalises the classical QAP, which has been extensively studied and used in several real-life applications. The mQAP takes as input several flows between the facilities, which generate multiple cost functions that must be optimized simultaneously. In this study, we propose PasMoQAP, a parallel asynchronous memetic algorithm to solve the Multi-Objective Quadratic Assignment Problem. PasMoQAP is based on an island model that structures the population into sub-populations. The memetic algorithm on each island evolves a reduced population of solutions, and the islands cooperate asynchronously by sending selected solutions to neighbouring islands. The experimental results show that our approach significantly outperforms all the island-based variants of the multi-objective evolutionary algorithm NSGA-II. We show that PasMoQAP is a suitable alternative for solving the Multi-Objective Quadratic Assignment Problem. (Comment: 8 pages, 3 figures, 2 tables. Accepted at the Conference on Evolutionary Computation 2017 (CEC 2017).)
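The island structure can be illustrated with a minimal single-objective sketch (synchronous ring migration on a toy real-valued problem; the paper's algorithm is asynchronous, memetic, and multi-objective, so this only shows the sub-population/migration mechanics):

```python
import random

def island_ea(fitness, n_islands=4, pop_size=10, generations=30,
              migrate_every=5, seed=0):
    """Each island evolves its own sub-population of real numbers;
    every `migrate_every` generations the best individual of island i
    replaces the worst of island i+1 on a ring. Minimises `fitness`."""
    rng = random.Random(seed)
    islands = [[rng.uniform(-5.0, 5.0) for _ in range(pop_size)]
               for _ in range(n_islands)]
    for gen in range(1, generations + 1):
        for pop in islands:
            parent = min(pop, key=fitness)         # select the island's best
            child = parent + rng.gauss(0.0, 0.5)   # Gaussian mutation
            worst = max(range(pop_size), key=lambda i: fitness(pop[i]))
            if fitness(child) < fitness(pop[worst]):
                pop[worst] = child                 # replace-worst survival
        if gen % migrate_every == 0:               # ring migration step
            bests = [min(pop, key=fitness) for pop in islands]
            for i, pop in enumerate(islands):
                worst = max(range(pop_size), key=lambda j: fitness(pop[j]))
                pop[worst] = bests[(i - 1) % n_islands]
    return min((x for pop in islands for x in pop), key=fitness)
```

In the asynchronous setting of the paper, islands do not wait for a shared generation counter: each sends and absorbs migrants whenever it reaches its own migration point, which is what makes the scheme scale across parallel workers.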
