2,088 research outputs found

    Robust optimisation of urban drought security for an uncertain climate

    Abstract Recent experience with drought and a shifting climate has highlighted the vulnerability of urban water supplies to “running out of water” in Perth, south-east Queensland, Sydney, Melbourne and Adelaide, and has triggered major investment in water source infrastructure which ultimately will run into tens of billions of dollars. With the prospect of continuing population growth in major cities, the provision of acceptable drought security will become more pressing, particularly if the future climate becomes drier. Decision makers need to deal with significant uncertainty about future climate and population. In particular, the science of climate change is such that the accuracy of model predictions of future climate is limited by fundamental irreducible uncertainties. It would be unwise to rely unduly on projections made by climate models, and prudent to favour solutions that are robust across a range of possible climate futures. This study presents and demonstrates a methodology that addresses the problem of finding “good” solutions for urban bulk water systems in the presence of deep uncertainty about future climate. The methodology involves three key steps: 1) build a simulation model of the bulk water system; 2) construct replicates of future climate that reproduce natural variability seen in the instrumental record and that reflect a plausible range of future climates; and 3) use multi-objective optimisation to efficiently search through potentially trillions of solutions to identify a set of “good” solutions that optimally trade off expected performance against robustness or sensitivity of performance over the range of future climates. A case study based on the Lower Hunter in New South Wales demonstrates the methodology.
It is important to note that the case study does not consider the full suite of options and objectives; preliminary information on plausible options has been generalised for demonstration purposes, and therefore its results should only be used in the context of evaluating the methodology. “Dry” and “wet” climate scenarios that represent the likely span of climate in 2070 under the A1FI emissions scenario were constructed. Using the WATHNET5 model, a simulation model of the Lower Hunter was constructed and validated. The search for “good” solutions was conducted by minimising two criteria: 1) the expected present worth of capital costs, operational costs and social costs due to restrictions and emergency rationing; and 2) the difference in present worth cost between the “dry” and “wet” 2070 climate scenarios. The constraint was imposed that solutions must be able to supply (reduced) demand in the worst drought. Two demand scenarios were considered: “1.28 x current demand”, representing expected consumption in 2060, and “2 x current demand”, representing a highly stressed system. The optimisation considered a representative range of options including desalination, new surface water sources, demand substitution using rainwater tanks, drought contingency measures and operating rules. It was found that the sensitivity of solutions to uncertainty about future climate varied considerably. For the “1.28 x demand” scenario there was limited sensitivity to the climate scenarios, resulting in a narrow range of trade-offs. In contrast, for the “2 x demand” scenario, the trade-off between expected present worth cost and robustness was considerable. The main policy implication is that (possibly large) uncertainty about future climate may not necessarily produce significantly different performance trajectories.
The sensitivity is determined not only by differences between climate scenarios but also by other external stresses imposed on the system, such as population growth, and by constraints on the available options to secure the system against drought.
Please cite this report as: Mortazavi, M, Kuczera, G, Kiem, AS, Henley, B, Berghout, B, Turner, E, 2013, Robust optimisation of urban drought security for an uncertain climate. National Climate Change Adaptation Research Facility, Gold Coast, pp. 74.
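The trade-off search in step 3 can be shown in miniature. This is a hypothetical illustration only: the plan names and dry/wet costs below are invented, and the actual study scores solutions with WATHNET5 simulations rather than fixed numbers.

```python
# Toy sketch of the two-criteria optimisation: each candidate plan is
# scored under "dry" and "wet" climate futures, then non-dominated
# plans are kept, trading expected present-worth cost against climate
# sensitivity. All plan names and costs are invented.

def pareto_front(plans):
    """Return plans not dominated on (expected, sensitivity)."""
    front = []
    for p in plans:
        dominated = any(
            q["expected"] <= p["expected"] and q["sensitivity"] <= p["sensitivity"]
            and (q["expected"] < p["expected"] or q["sensitivity"] < p["sensitivity"])
            for q in plans
        )
        if not dominated:
            front.append(p)
    return front

def score(name, cost_dry, cost_wet):
    return {
        "name": name,
        "expected": 0.5 * (cost_dry + cost_wet),   # equal-weight expectation
        "sensitivity": abs(cost_dry - cost_wet),   # robustness criterion
    }

plans = [
    score("desalination-early", 950, 930),   # costly but climate-insensitive
    score("surface-water-only", 700, 1200),  # cheap if wet, poor if dry
    score("mixed-portfolio", 820, 900),
]
front = pareto_front(plans)
print(sorted(p["name"] for p in front))  # the rain-dependent plan is dominated
```

A plan that is cheap only under the wet future drops out; the decision maker is left with the genuine trade-off between expected cost and sensitivity.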

    Optimisation of confinement in a fusion reactor using a nonlinear turbulence model

    The confinement of heat in the core of a magnetic fusion reactor is optimised using a multidimensional optimisation algorithm. For the first time in such a study, the loss of heat due to turbulence is modelled at every stage using first-principles nonlinear simulations which accurately capture the turbulent cascade and large-scale zonal flows. The simulations utilise a novel approach, with gyrofluid treatment of the small-scale drift waves and gyrokinetic treatment of the large-scale zonal flows. A simple near-circular equilibrium with standard parameters is chosen as the initial condition. The figure of merit, fusion power per unit volume, is calculated, and then two control parameters, the elongation and triangularity of the outer flux surface, are varied, with the algorithm seeking to optimise the chosen figure of merit. A two-fold increase in the plasma power per unit volume is achieved by moving to higher elongation and strongly negative triangularity.
    Comment: 32 pages, 8 figures, accepted to JP
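The optimisation loop over shaping parameters can be sketched as follows. The surrogate objective here is an invented smooth stand-in; the study itself evaluates fusion power per unit volume with nonlinear turbulence simulations, not an analytic formula. The surrogate's peak is placed at high elongation and negative triangularity to echo the reported outcome.

```python
# Sketch of a derivative-free search over (elongation, triangularity).
# fusion_power_surrogate is a made-up objective standing in for the
# expensive nonlinear turbulence simulations used in the real study.

def fusion_power_surrogate(kappa, delta):
    """Invented stand-in for fusion power per unit volume."""
    return 1.0 - (kappa - 2.0) ** 2 - (delta + 0.5) ** 2

def coordinate_search(f, x0, step=0.1, iters=200):
    """Greedy pattern search: try +/- step moves on each parameter."""
    best = list(x0)
    for _ in range(iters):
        improved = False
        for i in range(len(best)):
            for d in (step, -step):
                trial = list(best)
                trial[i] += d
                if f(*trial) > f(*best):
                    best = trial
                    improved = True
        if not improved:
            step /= 2          # refine once no move helps
            if step < 1e-4:
                break
    return best

# start from a near-circular shape: kappa = 1, delta = 0
kappa, delta = coordinate_search(fusion_power_surrogate, [1.0, 0.0])
```

The search walks from the near-circular initial shape to the surrogate's optimum at elongated, negative-triangularity geometry, mirroring the structure (if not the physics) of the study's algorithm.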

    Methods for Reducing Monitoring Overhead in Runtime Verification

    Runtime verification is a lightweight technique that serves to complement existing approaches, such as formal methods and testing, to ensure system correctness. In runtime verification, monitors are synthesized to check a system at run time against a set of properties the system is expected to satisfy. Runtime verification may be used to determine software faults before and after system deployment. The monitor(s) can be synthesized to notify, steer and/or perform system recovery from detected software faults at run time. The research and proposed methods presented in this thesis aim to reduce the monitoring overhead of runtime verification in terms of memory and execution time by leveraging time-triggered techniques for monitoring system events. Traditionally, runtime verification frameworks employ event-triggered monitors, where the invocation of the monitor occurs after every system event. Because system events can be sporadic or bursty in nature, event-triggered monitoring behaviour is difficult to predict. Time-triggered monitors, on the other hand, periodically preempt and process system events, making monitoring behaviour predictable. However, software system state reconstruction is not guaranteed (i.e., state changes/events may be missed between samples). The first part of this thesis analyzes three heuristics that efficiently solve the NP-complete problem of minimizing the amount of memory required to store system state changes to guarantee accurate state reconstruction. The experimental results demonstrate that adopting near-optimal algorithms does not greatly change the memory consumption and execution time of monitored programs; hence, NP-completeness is likely not an obstacle for time-triggered runtime verification. The second part of this thesis introduces a novel runtime verification technique called hybrid runtime verification. Hybrid runtime verification enables the monitor to toggle between event- and time-triggered modes of operation.
The aim of this approach is to reduce the overall runtime monitoring overhead with respect to execution time. Minimizing the execution-time overhead by employing hybrid runtime verification is not in NP. An integer linear programming heuristic is formulated to determine near-optimal hybrid monitoring schemes. Experimental results show that the heuristic typically selects monitoring schemes that are equal to or better than naively selecting exclusively one operation mode for monitoring.
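The event-/time-triggered toggle can be sketched concretely. Everything below is invented for illustration (the property checked, the burst threshold, the buffering scheme); it shows the shape of a hybrid monitor, not the thesis's actual implementation.

```python
# Hypothetical hybrid monitor: checks each event immediately while
# traffic is sparse (event-triggered), and buffers state changes for a
# later periodic sample when events arrive in bursts (time-triggered).
# The monitored property -- values must stay non-negative -- is invented.

class HybridMonitor:
    def __init__(self, burst_threshold=3):
        self.burst_threshold = burst_threshold
        self.mode = "event"       # current operation mode
        self.buffer = []          # state changes awaiting a sample
        self.violations = []

    def _check(self, step, value):
        if value < 0:
            self.violations.append(step)

    def on_event(self, step, value, pending_events):
        if pending_events >= self.burst_threshold:
            self.mode = "time"    # bursty: defer checks to sampling
        if self.mode == "event":
            self._check(step, value)
        else:
            self.buffer.append((step, value))

    def on_timer(self):
        for step, value in self.buffer:   # reconstruct missed states
            self._check(step, value)
        self.buffer.clear()
        self.mode = "event"       # burst over: fall back to events

m = HybridMonitor()
m.on_event(0, 5, pending_events=1)    # sparse -> checked immediately
m.on_event(1, -2, pending_events=4)   # burst -> buffered, not checked yet
m.on_event(2, 7, pending_events=4)
m.on_timer()                          # periodic sample drains the buffer
```

The violation at step 1 is still caught, but only at the next sampling point: the cost of the time-triggered mode is detection latency, the benefit is one bounded monitor invocation per period instead of one per event.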

    Analysis of Layered Social Networks

    Prevention of near-term terrorist attacks requires an understanding of current terrorist organizations to include their composition, the actors involved, and how they operate to achieve their objectives. To aid this understanding, operations research, sociological, and behavioral theory relevant to the study of social networks are applied, thereby providing theoretical foundations for new methodologies to analyze non-cooperative organizations, defined as those that try to hide their structure or are unwilling to provide information regarding their operations. Techniques applying information regarding multiple dimensions of interpersonal relationships, inferring from them the strengths of interpersonal ties, are explored. A layered network construct is offered that provides new analytic opportunities and insights generally unaccounted for in traditional social network analyses. These provide decision makers with improved courses of action designed to impute influence upon an adversarial network, thereby achieving a desired influence, perception, or outcome for one or more actors within the target network. This knowledge may also be used to identify key individuals, relationships, and organizational practices. Subsequently, such analysis may lead to the identification of exploitable weaknesses to either eliminate the network as a whole, cause it to become operationally ineffective, or influence it to directly or indirectly support National Security Strategy.
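The layered-network idea of inferring tie strength from multiple relationship dimensions can be sketched simply. The layer names, weights, and actor pairs below are invented; the thesis's actual inference techniques are more involved.

```python
# Toy sketch: several relationship layers between the same actors are
# collapsed into one inferred tie-strength graph via a weighted sum.
# Layer weights and edges are made up for demonstration.

LAYER_WEIGHTS = {"kinship": 0.5, "finance": 0.3, "comms": 0.2}

layers = {
    "kinship": {("a", "b"): 1.0},
    "finance": {("a", "b"): 0.5, ("b", "c"): 1.0},
    "comms":   {("a", "c"): 1.0, ("b", "c"): 0.5},
}

def tie_strength(layers, weights):
    """Weighted sum of each pair's relationship across layers."""
    combined = {}
    for name, edges in layers.items():
        for pair, v in edges.items():
            combined[pair] = combined.get(pair, 0.0) + weights[name] * v
    return combined

ties = tie_strength(layers, LAYER_WEIGHTS)
strongest = max(ties, key=ties.get)   # pair with the strongest inferred tie
```

A pair linked weakly in several layers can outrank a pair linked strongly in one, which is exactly the insight a single-layer analysis misses.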

    Spatial Optimization of Six Conservation Practices Using Swat in Tile‐Drained Agricultural Watersheds

    Targeting of agricultural conservation practices to the most effective locations in a watershed can promote wise use of conservation funds to protect surface waters from agricultural nonpoint source pollution. A spatial optimization procedure using the Soil and Water Assessment Tool was used to target six widely used conservation practices, namely no‐tillage, cereal rye cover crops (CC), filter strips (FS), grassed waterways (GW), created wetlands, and restored prairie habitats, in two west‐central Indiana watersheds. These watersheds were small, fairly flat, extensively agricultural, and heavily subsurface tile‐drained. The targeting approach was also used to evaluate the model's representation of conservation practices in cost and water quality improvement, defined as export of total nitrogen, total phosphorus, and sediment from cropped fields. FS, GW, and habitats were the most effective at improving water quality, while CC and wetlands made the greatest water quality improvement in lands with multiple existing conservation practices. Spatial optimization resulted in similar cost‐environmental benefit tradeoff curves for each watershed, with the greatest possible water quality improvement being a reduction in total pollutant loads by approximately 60%, with nitrogen reduced by 20‐30%, phosphorus by 70%, and sediment by 80‐90%.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/112253/1/jawr12338.pd
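The targeting logic behind such a cost-benefit tradeoff curve can be sketched with a deliberately simplified stand-in. All practice costs and load reductions below are invented, and the greedy budget rule here is a shortcut: the study uses SWAT simulation output and formal spatial optimisation, not this heuristic.

```python
# Invented sketch of practice targeting: candidate placements are ranked
# by pollutant reduction per dollar and selected greedily under a budget,
# tracing one point on a cost-environmental benefit tradeoff curve.

candidates = [
    # (practice, field, annual cost $, pollutant load reduction kg/yr)
    ("filter_strip", "f1", 400, 120),
    ("grassed_waterway", "f2", 900, 180),
    ("cover_crop", "f3", 300, 45),
    ("wetland", "f4", 1500, 200),
]

def greedy_target(candidates, budget):
    """Pick placements by cost-effectiveness until the budget runs out."""
    chosen, spent, reduced = [], 0, 0
    ranked = sorted(candidates, key=lambda c: c[3] / c[2], reverse=True)
    for practice, field, cost, reduction in ranked:
        if spent + cost <= budget:
            chosen.append((practice, field))
            spent += cost
            reduced += reduction
    return chosen, spent, reduced

chosen, spent, reduced = greedy_target(candidates, budget=1700)
```

Sweeping the budget parameter and recording (spent, reduced) pairs yields the kind of tradeoff curve the abstract describes, with expensive low-efficiency practices entering only at high budgets.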

    Evolutionary approaches toward practical network coding

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 133-137).
    There have been numerous studies showing various benefits of network coding. However, in order to have network coding widely deployed in real networks, it is also important to show that the amount of overhead incurred by network coding can be kept minimal and eventually be outweighed by the benefits network coding provides. Owing to the mathematical operations required, network coding necessarily incurs some additional cost such as computational overhead or transmission delay, and as a practical matter, the cost of special hardware and/or software for network coding. While most network coding solutions assume that the coding operations are performed at all nodes, it is often possible to achieve the network coding advantage for multicast by coding only at a subset of nodes. However, determining a minimal set of the nodes where coding is required is NP-hard, as is its close approximation; hence there are only a few existing approaches, each with certain limitations. In this thesis, we develop an evolutionary approach toward a practical multicast protocol that achieves the full benefit of network coding in terms of throughput, while performing coding operations only when required at as few nodes as possible. We show that our approach operates in a very efficient and practical manner such that it is distributed over the network both spatially and temporally, yielding a sufficiently good solution, which is at least as good as those obtained by existing centralized approaches but often turns out to be much superior in practice. We broaden the application areas of our evolutionary approach by generalizing it in several ways.
First, we show that a generalized version of our approach can effectively reveal the possible tradeoff between the costs of network coding and link usage, enabling more informed decisions on where to deploy network coding. Also, we demonstrate that our approach can be applied to investigate many important but, because of the lack of appropriate tools, largely unanswered questions arising in practical scenarios based on heterogeneous wireless ad hoc networks and fault-tolerant optical networks. Finally, further generalizing our evolutionary approach, we propose a novel network coding scheme for the general connection problem beyond multicast, for which no optimal network coding strategy is known. Our coding scheme allows general random linear coding over a large finite field, in which decoding is done only at the receivers and the mixture of information at interior nodes is controlled by evolutionary mechanisms.
by Minkyu Kim. Ph.D.
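The evolutionary search for a minimal set of coding nodes can be sketched at toy scale. The network is abstracted away: the feasibility rule below (node 2 must code, or nodes 0 and 4 together) is an invented stand-in for the real multicast-feasibility check, and the genetic algorithm is a bare-bones version of the thesis's distributed approach.

```python
import random

# Toy sketch: chromosomes mark which interior nodes perform coding;
# fitness rewards feasible configurations with as few coding nodes as
# possible. The feasibility predicate is invented for illustration.

random.seed(7)
N = 6  # candidate coding nodes

def feasible(bits):
    """Invented stand-in for 'multicast rate is achievable'."""
    return bits[2] == 1 or (bits[0] == 1 and bits[4] == 1)

def fitness(bits):
    if not feasible(bits):
        return -1            # infeasible: worst score
    return N - sum(bits)     # fewer coding nodes is better

def evolve(pop_size=20, gens=40):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(N)] ^= 1    # single bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Because infeasible chromosomes score below every feasible one, the population is steadily pushed toward feasible configurations that switch off unnecessary coding nodes.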

    Assessment of joint inventory replenishment: a cooperative games approach

    This research deals with the design of a logistics strategy with a collaborative approach between non-competing companies, who through joint coordination of the replenishment of their inventories reduce their costs thanks to the exploitation of economies of scale. The collaboration scope includes sharing logistics resources with limited capacities: transport units, warehouses, and management processes. These elements form a novel extension of the Joint Replenishment Problem (JRP) named the Stochastic Collaborative Joint Replenishment Problem (S-CJRP). The introduction of this model helps to incorporate practical elements into the inventory replenishment problem and to assess to what extent collaboration in inventory replenishment and logistics resource sharing might reduce inventory costs. Overall, results showed that the proposed model could be a viable alternative to reduce logistics costs and demonstrated how the model can be a financially preferred alternative to individual investments to leverage resource capacity expansions. Furthermore, for a practical instance, the work shows the potential of JRP models to help decision-makers better understand the impacts of fleet renewal and inventory replenishment decisions on cost and CO2 emissions.
    Doctorado: Doctor en Ingeniería Industria
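The cost structure that the S-CJRP extends can be illustrated with the classic deterministic JRP: a shared major ordering cost is paid each base cycle, and each item ordered every k-th cycle adds its own minor ordering and holding costs. All numbers below are invented round figures, not data from the thesis.

```python
# Minimal sketch of the deterministic Joint Replenishment Problem cost:
# base cycle T, integer multipliers k_i, shared major cost, per-item
# minor costs, demand rates, and holding-cost rates. Values are invented.

def jrp_cost(T, ks, major_cost, minor_costs, demands, holding):
    """Average cost per period for base cycle T and multipliers ks."""
    cost = major_cost / T                      # shared setup each cycle
    for k, s, d, h in zip(ks, minor_costs, demands, holding):
        cycle = k * T
        cost += s / cycle + h * d * cycle / 2.0  # minor setup + holding
    return cost

# Two items sharing a truck/warehouse: order both every cycle (ks=[1,1])
# versus replenishing the cheap-to-hold slow mover half as often.
c_same = jrp_cost(T=2.0, ks=[1, 1], major_cost=40,
                  minor_costs=[5, 5], demands=[10, 2], holding=[1.0, 0.25])
c_split = jrp_cost(T=2.0, ks=[1, 2], major_cost=40,
                   minor_costs=[5, 5], demands=[10, 2], holding=[1.0, 0.25])
```

Here the split schedule is cheaper, which is the coordination lever the collaborative extension exploits across companies rather than items.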

    SUSTAINABLE LIFETIME VALUE CREATION THROUGH INNOVATIVE PRODUCT DESIGN: A PRODUCT ASSURANCE MODEL

    In the field of product development, many organizations struggle to create a value proposition that can overcome the headwinds of technology change, regulatory requirements, and intense competition in an effort to satisfy the long-term goals of sustainability. Today, organizations are realizing that they have lost portfolio value due to poor reliability, early product retirement, and abandoned design platforms. Beyond Lean and Green Manufacturing, shareholder value can be enhanced by taking a broader perspective and integrating sustainability innovation elements into product designs in order to improve the delivery process and extend the life of product platforms. This research is divided into two parts that lead to closing the loop towards Sustainable Value Creation in product development. The first section presents a framework for achieving Sustainable Lifetime Value through a toolset that bridges the gap between financial success and sustainable product design. Focus is placed on the analysis of the sustainable value proposition between producers, consumers, society, and the environment, and the half-life of product platforms. The Half-Life Return Model is presented, designed to provide feedback to producers in the pursuit of improving the return on investment for the primary stakeholders. The second part applies the driving aspects of the framework with the development of an Adaptive Genetic Search Algorithm. The algorithm is designed to improve fault detection and mitigation during the product delivery process. A computer simulation is used to study the effectiveness of the primary aspects introduced in the search algorithm in improving the reliability growth of the system during the development life-cycle. The results of the analysis draw attention to the sensitivity of the driving aspects identified in the product development lifecycle, which affect the long-term goals of sustainable product development.
With the use of the techniques identified in this research, cost-effective test case generation can be improved without a major degradation in the diversity of the search patterns required to ensure a high level of fault detection. This in turn can lead to improvements in the driving aspects of the Half-Life Return Model, and ultimately to the goal of designing sustainable products and processes.
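The tension between cost-effective selection and search-pattern diversity can be made concrete with a small invented example: picking test cases by fault count alone lets near-duplicates crowd out coverage, so a similarity penalty is applied during selection. The test vectors, fault counts, and penalty rule are all made up; this is not the thesis's algorithm.

```python
# Invented sketch of diversity-preserving test-case selection: raw
# fitness (faults detected) is discounted for candidates that look like
# test cases already chosen, keeping the selected set diverse.

def hamming(a, b):
    """Number of positions where two test vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def select_diverse(cases, n):
    """Greedy selection with a shared-fitness style similarity penalty."""
    chosen = []
    pool = list(cases)
    while pool and len(chosen) < n:
        def shared(c):
            penalty = sum(1.0 for _, p in chosen if hamming(c[0], p) < 2)
            return c[1] / (1.0 + penalty)
        best = max(pool, key=shared)
        pool.remove(best)
        chosen.append((best[1], best[0]))
    return [v for _, v in chosen]

cases = [
    ((0, 0, 1), 5),  # strong, but nearly identical to the next case
    ((0, 1, 1), 5),
    ((1, 1, 0), 4),  # weaker, yet exercises a different pattern
]
picked = select_diverse(cases, n=2)
```

Without the penalty the two near-identical strong cases would both be picked; with it, the weaker but dissimilar case displaces the duplicate, preserving fault-detection coverage.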

    Stochastic planning for active distribution networks hosting fast charging stations

    With the advent of electric vehicles (EVs), charging infrastructure needs to become more available, and electricity providers must build additional power generation capacity to support the grid. In siting and sizing of fast charging stations (FCSs), both the distribution network constraints and the traffic network limitations must be considered, because FCSs exist on both levels. Moreover, the siting and sizing of wind-powered distributed generation (WPDG) is a way to gradually decarbonize the grid, thereby reducing our carbon footprint. In addition to providing capacity, WPDGs also offer other benefits in the distribution network, such as reducing transmission losses. In this thesis, a new framework is proposed which implements a novel scoring technique to rate the attractiveness of FCS candidate locations, thus determining the expected FCS demand in each candidate location, and uses WPDGs to support that load. A study has been conducted to compare the suitability of industrial-scale turbines versus micro wind turbines in an urban area, and a method for selecting candidate locations for the latter has been developed. A stochastic program is proposed to account for the non-deterministic elements of the problem, including generic loads, residential electric vehicle loads, FCS loads, and wind speed, which are accounted for collectively using a method called convolution. This comes hand-in-hand with a mixed-integer non-linear programming model that sites and sizes both FCSs and WPDGs with an objective of maximizing profits to incentivize investments. A list of novel constraints has been introduced that connect the traffic network to the power network. The problem is modeled from the perspective of electric utilities but also considers the perspectives of urban planners and potential investors.
A case study was implemented showing how the scoring technique works, and the results show that the mathematical model considered all the parameters and respected all the constraints, delivering a holistic set of decisions to site and size both FCSs and micro WPDGs in an urban area.
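The convolution step mentioned above, combining independent stochastic loads into one distribution of total demand, can be sketched with discrete probability tables. The load values and probabilities below are invented round numbers, not data from the thesis.

```python
# Sketch of combining independent discrete load distributions (generic
# load, home EV charging, FCS demand) into one total-load distribution
# by convolution. All probability tables are invented.

def convolve(dist_a, dist_b):
    """Distribution of the sum of two independent discrete loads."""
    out = {}
    for la, pa in dist_a.items():
        for lb, pb in dist_b.items():
            out[la + lb] = out.get(la + lb, 0.0) + pa * pb
    return out

generic = {2: 0.5, 4: 0.5}    # MW : probability
home_ev = {0: 0.7, 1: 0.3}
fcs     = {0: 0.6, 2: 0.4}

total = convolve(convolve(generic, home_ev), fcs)
# probability the combined load exceeds a 5 MW feeder limit (invented)
p_overload = sum(p for load, p in total.items() if load > 5)
```

Working with the convolved distribution lets the planning model constrain the probability of exceeding a feeder limit directly, rather than sizing for the worst case of each load separately.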