224 research outputs found

    Surrogate-based pumping optimization of coastal aquifers under limited computational budgets

    Computationally expensive variable-density and salt-transport numerical models hinder the implementation of simulation-optimization routines for coastal aquifer management. To reduce the computational cost, surrogate models have been utilized in pumping optimization of coastal aquifers. However, it has not previously been addressed whether surrogate modelling is effective given a limited number of simulations with the seawater intrusion model. To that end, two surrogate-based optimization (SBO) frameworks are employed and compared against the direct optimization approach under restricted computational budgets. The first, a surrogate-assisted algorithm, employs a strategy that aims at fast local improvement of the surrogate model around optimal values. The other balances global and local improvement of the surrogate model and is applied for the first time in coastal aquifer management. The performance of the algorithms is investigated for optimization problems of moderate and large dimensionalities. The statistical analysis indicates that, for the specified computational budgets, the sample means of the SBO methods are statistically significantly better than those of the direct optimization. Additionally, the selection of cubic radial basis functions as surrogate models enables the construction of very fast approximations for problems with up to 40 decision variables and 40 constraint functions.
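As a rough illustration of the cubic-RBF-based SBO loop described above, the sketch below fits a cubic radial basis function surrogate to a small set of expensive evaluations and refines it around the surrogate optimum. It is a minimal sketch, not the paper's code: expensive_model is a toy stand-in for the variable-density simulator, and the budget and bounds are arbitrary assumptions.

```python
# A minimal sketch of surrogate-based optimization with a cubic RBF,
# assuming a hypothetical expensive_model() stands in for the seawater
# intrusion simulator (not the paper's actual code).
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
dim = 4                                   # e.g., four pumping rates

def expensive_model(x):
    """Placeholder for one run of the variable-density simulator."""
    return np.sum((x - 0.3) ** 2)         # toy objective

# Small initial design: the 'limited computational budget'.
X = rng.uniform(0.0, 1.0, size=(20, dim))
y = np.array([expensive_model(x) for x in X])

for _ in range(10):                       # a few refinement cycles
    surrogate = RBFInterpolator(X, y, kernel="cubic")
    # Optimize the cheap surrogate instead of the expensive model.
    res = differential_evolution(lambda x: surrogate(x[None, :])[0],
                                 bounds=[(0.0, 1.0)] * dim, seed=0)
    # Evaluate the true model only at the surrogate's optimum
    # (local improvement), then retrain.
    X = np.vstack([X, res.x])
    y = np.append(y, expensive_model(res.x))

print("best found:", X[np.argmin(y)], y.min())
```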

    An adaptive multi-fidelity optimization framework based on co-Kriging surrogate models and stochastic sampling with application to coastal aquifer management

    Surrogate modelling has been used successfully to alleviate the computational burden that high-fidelity numerical models of seawater intrusion impose on simulation-optimization routines. Nevertheless, little attention has been given to multi-fidelity modelling methods for cases where only a limited number of runs with computationally expensive seawater intrusion models is affordable, a restriction that also limits single-fidelity surrogate-based optimization. In this work, a new adaptive multi-fidelity optimization framework is proposed, based on co-Kriging surrogate models built from two model fidelities of seawater intrusion. The methodology is tailored to pumping optimization problems with computationally expensive constraint functions and utilizes only small high-fidelity training datasets. Results from both hypothetical and real-world optimization problems demonstrate the efficiency and practicality of the proposed framework in providing a steep improvement of the objective function, while it outperforms a comprehensive single-fidelity surrogate-based optimization method. The method can also locate optimal solutions in the region of the global optimum when larger high-fidelity training datasets are available.
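A full co-Kriging implementation is involved; the sketch below shows only the underlying two-fidelity idea in the Kennedy-O'Hagan autoregressive form, y_hi(x) ≈ rho * y_lo(x) + delta(x), using ordinary Gaussian process regressors as stand-ins. Both model functions and the training sizes are hypothetical, not taken from the paper.

```python
# A simplified two-fidelity correction in the spirit of co-Kriging
# (Kennedy-O'Hagan autoregressive form y_hi ~ rho * y_lo + delta),
# not the paper's exact implementation; model functions are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def lo_model(x):   # cheap, biased seawater-intrusion stand-in
    return np.sin(8 * x) * x
def hi_model(x):   # expensive, accurate stand-in
    return 1.2 * lo_model(x) + 0.3 * x

rng = np.random.default_rng(1)
X_lo = rng.uniform(0, 1, (40, 1))         # many cheap runs
X_hi = rng.uniform(0, 1, (6, 1))          # few expensive runs

gp_lo = GaussianProcessRegressor(kernel=RBF()).fit(X_lo, lo_model(X_lo).ravel())

# Scale factor rho by least squares, then a GP on the discrepancy.
lo_at_hi = gp_lo.predict(X_hi)
y_hi = hi_model(X_hi).ravel()
rho = float(np.dot(lo_at_hi, y_hi) / np.dot(lo_at_hi, lo_at_hi))
gp_delta = GaussianProcessRegressor(kernel=RBF()).fit(X_hi, y_hi - rho * lo_at_hi)

def predict_hi(X):
    """Multi-fidelity prediction: scaled low-fidelity GP plus discrepancy GP."""
    return rho * gp_lo.predict(X) + gp_delta.predict(X)

X_test = np.linspace(0, 1, 5).reshape(-1, 1)
print(predict_hi(X_test) - hi_model(X_test).ravel())   # small errors
```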

    Pumping Optimization of Coastal Aquifers Using Seawater Intrusion Models of Variable-Fidelity and Evolutionary Algorithms

    Variable-fidelity modelling has been utilized in several engineering optimization studies to construct surrogate models, but similar approaches have received much less attention in coastal aquifer management problems. A variable-fidelity optimization framework was developed utilizing a lower-fidelity, computationally cheap model of seawater intrusion, based on the sharp-interface assumption, together with a simple correction process. The variable-fidelity method was compared to direct optimization with the high-fidelity variable-density and salt-transport model and to conventional surrogate-based optimization. The surrogate-based approaches were embedded into the operations of an evolutionary algorithm to implement an efficient online update of the surrogate models and to control the feasibility of the optimal solutions. Multiple independent optimization runs were performed to provide more insightful comparison outcomes. Although the variable-fidelity method found a better optimum than the conventional approach, the overall sample statistics showed that the two surrogate-based optimization frameworks performed equally well and provided good approximations to the high-fidelity solution. Despite the potential for improved exploration of the search space with the variable-fidelity method, the conventional approach had a 30% faster average convergence time.
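The "simple correction process" mentioned above can be illustrated with an additive discrepancy surrogate: run the cheap sharp-interface model freely and learn only its bias from a few high-fidelity runs. A minimal sketch, with toy stand-ins for both models (not the paper's simulators):

```python
# A minimal sketch of the additive correction idea: run the cheap
# sharp-interface model everywhere and learn only the discrepancy
# from a handful of high-fidelity runs. Both models are toy stand-ins.
import numpy as np
from scipy.interpolate import RBFInterpolator

def sharp_interface(x):       # low-fidelity: cheap, systematically biased
    return x[..., 0] ** 2 + 0.5 * x[..., 1]
def density_model(x):         # high-fidelity: expensive
    return sharp_interface(x) + 0.2 * np.sin(3 * x[..., 0])

rng = np.random.default_rng(2)
X_hi = rng.uniform(0, 1, (8, 2))                    # few expensive runs
delta = density_model(X_hi) - sharp_interface(X_hi) # observed bias
correction = RBFInterpolator(X_hi, delta, kernel="cubic")

def corrected(x):             # variable-fidelity estimate used by the EA
    return sharp_interface(x) + correction(np.atleast_2d(x))

X_test = rng.uniform(0, 1, (3, 2))
print(corrected(X_test), density_model(X_test))     # close agreement
```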

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995, and is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (multicore, multiprocessor, GPU, and cloud computing). On the other hand, we survey applications of PSO in eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, and chemistry and biology. We hope this survey will be beneficial to researchers studying PSO algorithms.
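For reference, the canonical global-best PSO that the surveyed variants extend can be written in a few lines. The sketch below uses common default parameter values (inertia w ≈ 0.72, c1 = c2 ≈ 1.49); these are illustrative conventions, not prescriptions from the survey:

```python
# A minimal global-best PSO with inertia weight, the canonical form the
# survey builds on; parameter values are common defaults, not prescriptions.
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)]                    # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

best_x, best_f = pso(lambda z: np.sum(z ** 2), [(-5, 5)] * 3)
print(best_x, best_f)    # converges near the origin
```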

    A rainfall disaggregation scheme for sub-hourly time scales: coupling a Bartlett-Lewis based model with adjusting procedures

    Many hydrological applications, such as flood studies, require long rainfall records at fine time scales, from daily down to a 1-min time step. In practice, however, data availability at sub-hourly scales is limited. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend this scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating into the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, supporting disaggregation from daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random, with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall at a wide range of time scales, while the model with dependent intensity-duration performs better in terms of skewness, rainfall extremes, and dry proportions.
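The adjusting step that makes lower-level variables consistent with the higher-level total can be illustrated with its simplest (proportional) variant; the Bartlett-Lewis generator itself is omitted here and the synthetic depths below are placeholder draws, not output of the paper's model:

```python
# A sketch of the proportional adjusting step used in such schemes:
# synthetic fine-scale depths are rescaled so they aggregate exactly to
# the observed coarse-scale total (the Bartlett-Lewis generator itself
# is omitted; the gamma draws below are placeholders).
import numpy as np

def proportional_adjust(synthetic, observed_total):
    """Rescale synthetic sub-period depths to match the observed total."""
    s = synthetic.sum()
    if s == 0:                       # dry synthetic sequence: nothing to scale
        return synthetic
    return synthetic * (observed_total / s)

rng = np.random.default_rng(3)
daily_obs = 12.4                             # observed daily depth (mm)
hourly_synth = rng.gamma(0.3, 2.0, 24)       # placeholder hourly draws
hourly_adj = proportional_adjust(hourly_synth, daily_obs)
print(hourly_adj.sum())                      # = 12.4 by construction
```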

    Energy Optimization Using a Pump Scheduling Tool in Water Distribution Systems

    Managing a water distribution system is a costly practice and, with population growth, the need for more cost-effective solutions is vital. This paper presents a tool for optimizing pump operation in water systems. The pump scheduling tool (PST) is a fully dynamic tool that can handle four different types of fixed-speed pump schedule representations (on and off, time control, time-length control, and simple control [water levels in tanks]). The PST has been developed in the Visual Basic programming language and links the EPANET hydraulic solver with the GANetXL optimization algorithm. It has a user-friendly interface which allows the simulation of water systems based on (1) a hydraulic model (EPANET) input file, (2) an interactive interface which can be modified by the user, and (3) a pump operation schedule generated by the optimization algorithm. It also has a dynamic results interface which automatically visualizes generated solutions. The capabilities of the PST are demonstrated by application to two case studies: the Anytown water distribution system (WDS) benchmark and the Richmond WDS, a real system in the United Kingdom. The results show that the PST is able to generate high-quality practical solutions.
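To illustrate the "on and off" schedule representation and the kind of fitness a genetic algorithm would evaluate, the sketch below scores a 24-bit schedule against a time-of-use tariff with a crude tank-balance feasibility check. All numbers are hypothetical, and the hydraulic simulation that the PST delegates to EPANET is reduced here to a placeholder:

```python
# A minimal sketch of the 'on and off' schedule representation and the
# energy-cost fitness a GA would minimize; pump power and tariff values
# are hypothetical, and the hydraulic feasibility check that the PST
# delegates to EPANET is reduced to a simple tank-balance placeholder.
import numpy as np

TARIFF = np.where(np.arange(24) < 7, 0.05, 0.12)   # cost/kWh, cheap overnight
PUMP_KW = 45.0                                     # rated power of one pump
DEMAND = np.full(24, 30.0)                         # m^3/h, flat demand
PUMP_FLOW = 60.0                                   # m^3/h when running

def fitness(schedule, tank0=100.0, tank_min=50.0, tank_max=300.0):
    """Energy cost of a 24-bit on/off schedule, penalizing tank violations."""
    cost = float(np.sum(schedule * PUMP_KW * TARIFF))
    level, penalty = tank0, 0.0
    for hour, on in enumerate(schedule):
        level += on * PUMP_FLOW - DEMAND[hour]
        if not tank_min <= level <= tank_max:
            penalty += 1e3                          # infeasible hour
    return cost + penalty

schedule = (np.arange(24) < 12).astype(int)         # pump mornings only
print(fitness(schedule))
```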

    Developing Efficient Strategies for Automatic Calibration of Computationally Intensive Environmental Models

    Environmental simulation models have played a key role in civil and environmental engineering decision-making for decades. The utility of an environmental model depends on how well it is structured and calibrated. Model calibration typically takes an automated form in which the simulation model is linked to a search mechanism (e.g., an optimization algorithm) that iteratively generates many parameter sets (thousands, for example) and evaluates them by running the model, in an attempt to minimize differences between observed data and corresponding model outputs. The challenge arises when the environmental model is computationally intensive to run (with run-times of minutes to hours, for example), as any automatic calibration attempt then imposes a large computational burden. Such a burden may force model users to accept sub-optimal solutions and fall short of the best model performance. The objective of this thesis is to develop innovative strategies to circumvent the computational burden associated with automatic calibration of computationally intensive environmental models.

    The first main contribution of this thesis is a strategy called “deterministic model preemption”, which opportunistically evades unnecessary model evaluations in the course of a calibration experiment and can save a significant portion of the computational budget (as much as 90% in some cases). Model preemption monitors the intermediate simulation results while the model is running and terminates (i.e., pre-empts) the simulation early if it recognizes that running the model further would not guide the search mechanism. This strategy is applicable to a range of automatic calibration algorithms (i.e., search mechanisms) and is deterministic in that it leads to exactly the same calibration results as when preemption is not applied.

    Another main contribution is the concept of “surrogate data”: a reasonably small but representative proportion of a full set of calibration data. This concept is inspired by existing surrogate modelling strategies, in which a surrogate model (also called a metamodel) is developed and used as a fast-to-run substitute for an original computationally intensive model. A framework is developed to efficiently calibrate hydrologic models to the full set of calibration data while running the original model only on the surrogate data for the majority of candidate parameter sets, a strategy that leads to considerable computational saving. To this end, mapping relationships are developed to approximate the model performance on the full data from the model performance on the surrogate data. This framework is applicable to the calibration of any environmental model for which appropriate surrogate data and mapping relationships can be identified.

    As another main contribution, this thesis critically reviews and evaluates the large body of literature on surrogate modelling strategies from various disciplines, as these are the most commonly used methods to relieve the computational burden associated with computationally intensive simulation models. To evaluate these strategies reliably, a comparative assessment and benchmarking framework is developed that gives a clear, computational-budget-dependent definition of the success or failure of surrogate modelling strategies. Two large families of surrogate modelling strategies are scrutinized: “response surface surrogate” modelling, which involves statistical or data-driven function approximation techniques (e.g., kriging, radial basis functions, and neural networks), and “lower-fidelity physically-based surrogate” modelling, which develops and utilizes simplified models of the original system (e.g., a groundwater model with a coarse mesh). This thesis raises fundamental concerns about response surface surrogate modelling and demonstrates that, although they may be less efficient, lower-fidelity physically-based surrogates are generally more reliable, as they preserve, to some extent, the physics of the original model.

    Five different surface water and groundwater models are used across this thesis to test the performance of the developed strategies and support the discussions. The strategies themselves, however, are typically simulation-model-independent and can be applied to the calibration of any computationally intensive simulation model with the required characteristics. This thesis leaves the reader with a suite of strategies for efficient calibration of computationally intensive environmental models, along with guidance on how to select, implement, and evaluate the appropriate strategy for a given calibration problem.
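The deterministic model preemption idea is simple to illustrate. In the sketch below (a minimal illustration, not the thesis code), a candidate's sum-of-squared-errors is accumulated one time step at a time; because SSE can only grow, the run can be aborted the moment it exceeds the best objective found so far, with no effect on the final calibration result. The simulate_step callback and the threshold value are assumptions for the example.

```python
# A minimal sketch of deterministic model preemption as described above:
# while a candidate simulation runs, the partial sum-of-squared-errors is
# monitored and the run is aborted as soon as it can no longer beat the
# best objective found so far (SSE only grows, so the decision is exact).
import numpy as np

def preemptive_sse(simulate_step, observed, best_so_far):
    """Accumulate SSE step by step; return early if the threshold is crossed."""
    sse = 0.0
    for t, obs in enumerate(observed):
        sim = simulate_step(t)               # one time step of the model
        sse += (sim - obs) ** 2
        if sse > best_so_far:                # cannot improve: pre-empt
            return np.inf                    # flagged as dominated
    return sse

observed = np.sin(np.linspace(0, 3, 100))
bad_model = lambda t: 5.0                    # poor candidate parameter set
print(preemptive_sse(bad_model, observed, best_so_far=10.0))  # inf, early exit
```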

    The Application of Nature-inspired Metaheuristic Methods for Optimising Renewable Energy Problems and the Design of Water Distribution Networks

    This work explores the technical challenges that emerge when applying bio-inspired optimisation methods to real-world engineering problems, and proposes and tests a number of new heuristic algorithms to deal with them. The work has three main dimensions.

    i) One of the most significant industrial optimisation problems is optimising renewable energy systems. Ocean wave energy is a promising technology for helping to meet future growth in global energy demand, but current wave energy converter (WEC) technologies are not fully developed because of technical engineering and design challenges. This work proposes new hybrid heuristics, consisting of cooperative coevolutionary frameworks and neuro-surrogate optimisation methods, for optimising WECs in three domains: position, control parameters, and geometric parameters. These problem-specific algorithms outperform existing approaches in both solution quality and speed of convergence.

    ii) The second part applies search methods to the optimisation of energy output in wind farms. Wind energy has key advantages in terms of technological maturity, cost, and life-cycle greenhouse gas emissions, but designing accurate local wind speed and power predictions is challenging. We propose two models for wind speed and power forecasting for two wind farms, located in Sweden and the Baltic Sea, combining recurrent neural networks with evolutionary search algorithms. The proposed models are superior to the other machine learning methods applied.

    iii) Finally, we investigate the design of water distribution systems (WDS) as another challenging real-world optimisation problem. WDS optimisation is demanding because it has a high-dimensional discrete search space and complex constraints. A hybrid evolutionary algorithm is suggested for minimising the cost of various water distribution networks and for speeding up the convergence rate of the search.

    Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 202

    Lost in optimisation of water distribution systems? A literature review of system design

    Optimisation of water distribution system design is a well-established research field, which has been extremely productive since the end of the 1980s. Its primary focus is to minimise the cost of a proposed pipe network infrastructure. This paper systematically reviews articles published over the past three decades that are relevant to the design of new water distribution systems and to the strengthening, expansion, and rehabilitation of existing ones, inclusive of design timing, parameter uncertainty, water quality, and operational considerations. It identifies trends and limits in the field and provides future research directions. This review also compiles, in tabular form, comprehensive information from over one hundred and twenty publications, including optimisation model formulations, solution methodologies used, and other important details.