    Evolutionary multi-objective worst-case robust optimisation

    Many real-world problems are subject to uncertainty, and often solutions should not only be good, but also robust against environmental disturbances or deviations from the decision variables. While most papers dealing with robustness aim at finding solutions with a high expected performance given a distribution of the uncertainty, we examine the trade-off between the allowed deviation from the decision variables (the tolerance level, which here defines robustness and could equally capture variations in parameters) and the worst-case performance given that allowed deviation. We suggest two multi-objective evolutionary algorithms to compute the available trade-offs between tolerance level and worst-case quality of the solutions. Both algorithms are 2-level nested algorithms. While the first algorithm is point-based, in the sense that the lower level computes a single worst-case point for each upper-level solution, the second algorithm is envelope-based, in the sense that the lower level computes a whole trade-off curve between worst-case fitness and tolerance level for each upper-level solution. Our problem can be considered a special case of bi-level optimisation, which is computationally expensive because each upper-level solution is evaluated by calling a lower-level optimiser. We propose and compare several strategies to improve the efficiency of both algorithms, and we further suggest surrogate-assisted variants to accelerate them.
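
    As a rough illustration of the point-based nested scheme, the following minimal Python sketch substitutes plain random search for the two evolutionary algorithms and uses a hypothetical one-dimensional objective; only the two-level structure, in which the upper level proposes solutions and the lower level approximates their worst case inside the tolerance ball, mirrors the abstract.

```python
# A minimal sketch of the point-based nested idea, not the authors' code:
# the upper level proposes a solution x, the lower level searches the
# tolerance ball around x for the worst objective value.
import random

def objective(x):
    # hypothetical one-dimensional objective, to be maximised
    return -(x - 2.0) ** 2

def worst_case(x, tol, n_samples=200):
    """Lower level: approximate the worst value of f over |dx| <= tol."""
    return min(objective(x + random.uniform(-tol, tol)) for _ in range(n_samples))

def best_worst_case(tol, n_outer=500):
    """Upper level: random search for the best worst-case solution."""
    best_x, best_val = None, float("-inf")
    for _ in range(n_outer):
        x = random.uniform(-5.0, 5.0)
        val = worst_case(x, tol)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

for tol in (0.1, 0.5, 1.0):
    x, val = best_worst_case(tol)
    print(f"tolerance {tol}: x = {x:.2f}, worst-case f = {val:.2f}")
```

    Sweeping the tolerance level, as the final loop does, traces the tolerance-versus-worst-case trade-off that the two proposed algorithms are designed to approximate far more efficiently.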

    Analysis-of-marginal-Tail-Means (ATM): a robust method for discrete black-box optimization

    We present a new method, called Analysis-of-marginal-Tail-Means (ATM), for effective robust optimization of discrete black-box problems. ATM has important applications to many real-world engineering problems (e.g., manufacturing optimization, product design, molecular engineering), where the objective to optimize is black-box and expensive, and the design space is inherently discrete. One weakness of existing methods is that they are not robust: these methods perform well under certain assumptions, but yield poor results when such assumptions (which are difficult to verify in black-box problems) are violated. ATM addresses this via the use of marginal tail means for optimization, which combines both rank-based and model-based methods. The trade-off between rank- and model-based optimization is tuned by first identifying important main effects and interactions, then finding a good compromise which best exploits additive structure. By adaptively tuning this trade-off from data, ATM provides improved robust optimization over existing methods, particularly in problems with (i) a large number of factors, (ii) unordered factors, or (iii) experimental noise. We demonstrate the effectiveness of ATM in simulations and in two real-world engineering problems: the first on robust parameter design of a circular piston, and the second on product family design of a thermistor network.
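
    The core statistic is simple to state. The sketch below reflects our reading of the abstract, not the authors' code: for every factor and level it averages the best (lower) tail of the responses observed at that level, then keeps the level with the best tail mean per factor. The tail fraction alpha, the minimisation convention and the toy data are all assumptions; the full ATM procedure additionally tunes the rank/model trade-off and exploits interactions.

```python
import numpy as np

def marginal_tail_means(X, y, alpha=0.25):
    """For each factor j and level l, average the lower alpha-tail of the
    responses observed at that level (minimisation convention assumed)."""
    mtm = {}
    for j in range(X.shape[1]):
        for level in np.unique(X[:, j]):
            vals = np.sort(y[X[:, j] == level])
            k = max(1, int(np.ceil(alpha * len(vals))))
            mtm[(j, int(level))] = vals[:k].mean()
    return mtm

# toy discrete design: 3 binary factors with a noisy additive response
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(60, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 0.3, size=60)

mtm = marginal_tail_means(X, y)
# per factor, keep the level whose tail mean is best (smallest)
best = {j: min((0, 1), key=lambda l: mtm[(j, l)]) for j in range(3)}
print(best)  # factor -> chosen level
```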

    Multi-objective worst case optimization by means of evolutionary algorithms

    Many real-world optimization problems are subject to uncertainty. A possible goal is then to find a solution which is robust in the sense that it has the best worst-case performance over all possible scenarios. However, if the problem also involves multiple objectives, which scenario is “best” or “worst” depends on the user’s weighting of the different criteria, which is generally difficult to specify before alternatives are known. Evolutionary multi-objective optimization avoids this problem by searching for the whole front of Pareto optimal solutions. This paper extends the concept of Pareto dominance to worst case optimization problems and demonstrates how evolutionary algorithms can be used for worst case optimization in a multi-objective setting.
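
    One natural way to make the extended dominance relation concrete (our formalisation for illustration; the paper develops the concept in more generality) is to take the componentwise worst value of each objective over all scenarios and then apply standard Pareto dominance to the resulting worst-case vectors.

```python
import numpy as np

def worst_case_vector(scenario_objs):
    """Componentwise worst value over scenarios (max, since we minimise).
    scenario_objs has shape (n_scenarios, n_objectives)."""
    return scenario_objs.max(axis=0)

def dominates(a, b):
    """Standard Pareto dominance for minimisation: a dominates b."""
    return bool(np.all(a <= b) and np.any(a < b))

# two solutions, each evaluated on two objectives under three scenarios
A = np.array([[1.0, 4.0], [2.0, 3.0], [1.5, 3.5]])
B = np.array([[2.5, 4.5], [3.0, 4.0], [2.0, 5.0]])
print(dominates(worst_case_vector(A), worst_case_vector(B)))  # True
```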

    Robust optimisation of urban drought security for an uncertain climate

    Recent experience with drought and a shifting climate has highlighted the vulnerability of urban water supplies to “running out of water” in Perth, south-east Queensland, Sydney, Melbourne and Adelaide, and has triggered major investment in water source infrastructure which ultimately will run into tens of billions of dollars. With the prospect of continuing population growth in major cities, the provision of acceptable drought security will become more pressing, particularly if the future climate becomes drier. Decision makers need to deal with significant uncertainty about future climate and population. In particular, the science of climate change is such that the accuracy of model predictions of future climate is limited by fundamental, irreducible uncertainties. It would be unwise to rely unduly on projections made by climate models, and prudent to favour solutions that are robust across a range of possible climate futures.

    This study presents and demonstrates a methodology that addresses the problem of finding “good” solutions for urban bulk water systems in the presence of deep uncertainty about future climate. The methodology involves three key steps: 1) build a simulation model of the bulk water system; 2) construct replicates of future climate that reproduce the natural variability seen in the instrumental record and reflect a plausible range of future climates; and 3) use multi-objective optimisation to efficiently search through potentially trillions of solutions to identify a set of “good” solutions that optimally trade off expected performance against robustness, or sensitivity of performance, over the range of future climates.

    A case study based on the Lower Hunter in New South Wales demonstrates the methodology. It is important to note that the case study does not consider the full suite of options and objectives; preliminary information on plausible options has been generalised for demonstration purposes, and therefore its results should only be used in the context of evaluating the methodology. “Dry” and “wet” climate scenarios that represent the likely span of climate in 2070 based on the A1FI emissions scenario were constructed. Using the WATHNET5 model, a simulation model of the Lower Hunter was constructed and validated. The search for “good” solutions was conducted by minimising two criteria: 1) the expected present worth of capital costs, operational costs and social costs due to restrictions and emergency rationing; and 2) the difference in present worth cost between the “dry” and “wet” 2070 climate scenarios. The constraint was imposed that solutions must be able to supply (reduced) demand in the worst drought. Two demand scenarios were considered: “1.28 x current demand”, representing expected consumption in 2060, and “2 x current demand”, representing a highly stressed system. The optimisation considered a representative range of options including desalination, new surface water sources, demand substitution using rainwater tanks, drought contingency measures and operating rules.

    It was found that the sensitivity of solutions to uncertainty about future climate varied considerably. For the “1.28 x demand” scenario there was limited sensitivity to the climate scenarios, resulting in a narrow range of trade-offs. In contrast, for the “2 x demand” scenario, the trade-off between expected present worth cost and robustness was considerable.
    The main policy implication is that (possibly large) uncertainty about future climate may not necessarily produce significantly different performance trajectories. The sensitivity is determined not only by differences between climate scenarios but also by other external stresses imposed on the system, such as population growth, and by constraints on the available options to secure the system against drought.

    Please cite this report as: Mortazavi, M., Kuczera, G., Kiem, A.S., Henley, B., Berghout, B., Turner, E., 2013, Robust optimisation of urban drought security for an uncertain climate. National Climate Change Adaptation Research Facility, Gold Coast, pp. 74.
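
    Under the simplifying assumption of equally weighted climate scenarios, the two search criteria reduce to the small sketch below; it is illustrative only, since in the study the present-worth costs of a candidate system come from WATHNET5 simulations rather than from two scalar inputs.

```python
def robustness_objectives(pw_cost_dry, pw_cost_wet):
    """Return the study's two criteria under an assumed equal scenario
    weighting: expected present-worth cost, and the dry-wet cost spread
    used as the robustness (sensitivity) measure."""
    expected = 0.5 * (pw_cost_dry + pw_cost_wet)   # criterion 1
    spread = abs(pw_cost_dry - pw_cost_wet)        # criterion 2
    return expected, spread

# hypothetical present-worth costs (in $ billions) of one candidate system
print(robustness_objectives(pw_cost_dry=12.0, pw_cost_wet=8.0))  # (10.0, 4.0)
```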

    Bootstrap Robust Prescriptive Analytics

    We address the problem of prescribing an optimal decision in a framework where its cost depends on uncertain problem parameters $Y$ that need to be learned from data. Earlier work by Bertsimas and Kallus (2014) transforms classical machine learning methods that merely predict $Y$ from supervised training data $(x_1, y_1), \dots, (x_n, y_n)$ into prescriptive methods taking optimal decisions specific to a particular covariate context $X = \bar{x}$. Their prescriptive methods factor in additional observed contextual information on a potentially large number of covariates $X = \bar{x}$ to take context-specific actions $z(\bar{x})$ which are superior to any static decision $z$. Any naive use of limited training data may, however, lead to gullible decisions over-calibrated to one particular data set. In this paper, we borrow ideas from distributionally robust optimization and the statistical bootstrap of Efron (1982) to propose two novel prescriptive methods based on Nadaraya-Watson (NW) and nearest-neighbors (NN) learning which safeguard against overfitting and lead to improved out-of-sample performance. Both resulting robust prescriptive methods reduce to tractable convex optimization problems and enjoy limited disappointment on bootstrap data. We illustrate the data-driven decision-making framework and our novel robustness notion on a small newsvendor problem as well as a small portfolio allocation problem.
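
    For concreteness, the sketch below shows the non-robust Nadaraya-Watson baseline that the paper sets out to robustify: a kernel-weighted sample-average newsvendor decision that orders the critical-ratio quantile of demand under weights centred at $\bar{x}$. The bandwidth, cost parameters and data are hypothetical, and the paper's actual contribution, guarding this decision against bootstrap reweightings of the sample, is deliberately omitted.

```python
import numpy as np

def nw_weights(X, x_bar, bandwidth=1.0):
    """Nadaraya-Watson (Gaussian-kernel) weights of the training
    covariates around the query context x_bar."""
    d2 = ((X - x_bar) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return w / w.sum()

def newsvendor_decision(X, y, x_bar, underage=4.0, overage=1.0):
    """Weighted-SAA newsvendor: order the critical-ratio quantile of
    historical demand under the NW weights. This is the non-robust
    baseline; the paper's robust variant additionally limits
    disappointment over bootstrap reweightings of the data."""
    w = nw_weights(X, x_bar)
    order = np.argsort(y)
    cum = np.cumsum(w[order])
    ratio = underage / (underage + overage)        # critical ratio
    return y[order][np.searchsorted(cum, ratio)]   # weighted quantile

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 10.0 + 3.0 * X[:, 0] + rng.normal(size=200)   # demand depends on context
print(newsvendor_decision(X, y, x_bar=np.array([1.0, 0.0])))
```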

    Adaptive Robust Traffic Engineering in Software Defined Networks

    One of the key advantages of Software-Defined Networks (SDN) is the opportunity to integrate traffic engineering modules able to optimize network configuration according to traffic. Ideally, the network should be dynamically reconfigured as traffic evolves, so as to achieve remarkable gains in the efficient use of resources with respect to traditional static approaches. Unfortunately, reconfigurations cannot be too frequent due to a number of reasons related to route stability, forwarding rules instantiation, individual flow dynamics, traffic monitoring overhead, etc. In this paper, we focus on the fundamental problem of deciding whether, when and how to reconfigure the network during traffic evolution. We propose a new approach to cluster relevant points in the multi-dimensional traffic space, taking into account similarities in optimal routing and not only in traffic values. Moreover, to provide more flexibility to the online decisions on when to apply a reconfiguration, we allow some overlap between clusters, which can guarantee good-quality routing regardless of the transition instant. We compare our algorithm with state-of-the-art approaches in realistic network scenarios. Results show that our method significantly reduces the number of reconfigurations with a negligible deviation of the network performance with respect to the continuous update of the network configuration.
    Comment: 10 pages, 8 figures, submitted to IFIP Networking 201
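
    A toy version of the online decision rule the abstract describes, keep the installed configuration while the traffic point remains inside the (deliberately overlapping) region of its cluster and switch otherwise, might look as follows; Euclidean distance to traffic centroids and a fixed inflation margin stand in for the paper's routing-aware similarity, so this illustrates the control flow rather than the proposed algorithm.

```python
import numpy as np

def should_reconfigure(traffic, centers, active, margin=1.2):
    """Return the index of the configuration to use. We keep the active
    configuration while the traffic point is still acceptably close to
    its cluster (the margin creates the overlap between clusters), and
    switch only once another cluster is clearly a better fit. Euclidean
    distance is an illustrative stand-in for routing similarity."""
    d = np.linalg.norm(centers - traffic, axis=1)
    nearest = int(d.argmin())
    if nearest == active or d[active] <= margin * d[nearest]:
        return active        # installed routing is still good enough
    return nearest           # traffic left the overlap region: reconfigure

centers = np.array([[10.0, 2.0], [4.0, 8.0]])  # per-cluster traffic centroids
print(should_reconfigure(np.array([9.0, 3.0]), centers, active=1))  # 0
```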

    Geography Rules Too! Economic Development and the Geography of Institutions

    To explain cross-country income differences, research has recently focused on the so-called deep determinants of economic development, notably institutions and geography. This paper sheds a different light on these determinants. We use spatial econometrics to analyse the importance of the geography of institutions. We show that it is not only absolute geography, in terms of, for instance, climate, but also relative geography, the spatial linkages between countries, that matters for a country’s GDP per capita. Apart from a country’s own institutions, institutions in neighboring countries turn out to be relevant as well. This finding is robust to various alternative specifications.
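
    For readers who want the mechanics, the spatial specification the abstract alludes to can be written in the standard spatial Durbin form (our notation; the paper's exact model may differ): $y = \rho W y + X\beta + W X\theta + \varepsilon$, where $y$ stacks log GDP per capita, $X$ holds a country's own institutions and geographic controls, $W$ is a row-standardised spatial weight matrix linking neighbouring countries, and the $W X$ term carries the neighbours' institutions whose relevance the paper reports.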