
    Genetic-algorithm-based design of groundwater quality monitoring system

    This research builds on the work of Meyer and Brill [1988] and subsequent work by Meyer et al. [1990], Meyer et al. [1992], and Meyer [1992] on the optimal location of a network of groundwater monitoring wells under conditions of uncertainty. A method of optimization is developed using genetic algorithms (GAs) which allows consideration of the two objectives of Meyer et al. [1992], maximizing reliability and minimizing contaminated area, separately yet simultaneously. The GA-based solution method can generate both convex and non-convex points of the tradeoff curve, can accommodate non-linearities in the two objective functions, and is not restricted to the peculiarities of a weighted objective function. Furthermore, GAs can generate large portions of the tradeoff curve in a single iteration and may be more efficient than methods that generate only a single point at a time. Four multi-objective GA formulations are investigated and their performance in generating the multi-objective tradeoff curve is evaluated for the groundwater monitoring problem using two example data sets. The GA formulations are compared to each other and to simulated annealing on both performance and computational intensity. The simulated-annealing-based technique used by Meyer et al. [1992] relies on a weighted objective function which finds only a single point along the tradeoff curve for each iteration, while the multiple-objective GA formulations are able to find many convex and non-convex points along the tradeoff curve in a single iteration. Each iteration of simulated annealing is approximately five times faster than an iteration of the genetic algorithm, but several simulated annealing iterations are required to generate the tradeoff curve. GAs are able to find a larger number of non-dominated points on the tradeoff curve in a single iteration, and are therefore just as computationally efficient as simulated annealing in terms of generating the tradeoff curves. None of the GA formulations demonstrates the ability to generate the entire tradeoff curve in a single iteration, but they yield either a good estimation of all regions of the tradeoff curve except the very highest and very lowest reliability ends, or a good estimation of the high-reliability end alone. U.S. Department of the Interior, U.S. Geological Survey
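
    To make the multi-objective GA idea concrete, the sketch below evolves candidate well layouts and keeps a non-dominated archive for the two objectives (maximize reliability, minimize contaminated area). It is an illustrative Python sketch, not one of the formulations studied in the paper; the objective functions, layout encoding, and GA settings are hypothetical stand-ins.

        import random

        N_WELLS, POP, GENS = 20, 60, 40          # hypothetical problem size and GA settings

        def reliability(layout):                  # stand-in objective: fraction of sites monitored
            return sum(layout) / N_WELLS

        def contaminated_area(layout):            # stand-in objective: undetected area shrinks with coverage
            return 1.0 - 0.8 * sum(layout) / N_WELLS + 0.05 * random.random()

        def dominates(a, b):                      # a dominates b: no worse in both objectives, not identical
            return a[0] >= b[0] and a[1] <= b[1] and a != b

        def non_dominated(points):
            return [p for p in points if not any(dominates(q, p) for q in points)]

        population = [[random.randint(0, 1) for _ in range(N_WELLS)] for _ in range(POP)]
        archive = []                              # approximate tradeoff curve accumulated over generations
        for _ in range(GENS):
            scored = [(reliability(x), contaminated_area(x), x) for x in population]
            archive = non_dominated(list(set(archive + [(r, a) for r, a, _ in scored])))
            scored.sort(key=lambda s: s[0] - s[1], reverse=True)   # crude scalar rank, for brevity
            parents = scored[: POP // 2]
            children = []
            for _ in range(POP // 2):
                a, b = random.choice(parents)[2], random.choice(parents)[2]
                cut = random.randrange(1, N_WELLS)                  # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:                           # bit-flip mutation
                    j = random.randrange(N_WELLS)
                    child[j] ^= 1
                children.append(child)
            population = [s[2] for s in parents] + children

        for point in sorted(archive):
            print("reliability %.2f  contaminated area %.2f" % point)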

    Optimization of municipal solid waste management using externality costs

    Economic and environmental impacts associated with solid waste management (SWM) systems should be considered to ensure sustainability of such systems. Societal life cycle costing (S-LCC) can be used for this purpose since it includes "budget costs" and "externality costs." While budget costs represent market goods and services in monetary terms, i.e. economic impacts, externality costs include effects outside the economic system, such as environmental impacts translated into monetary terms [1]. Numerous models have been developed to determine the environmental and economic impacts associated with SWM systems (e.g., EASETECH [2]) by using "what-if" scenario analyses. While these models are an essential foundation that enables a systematic integrated analysis of SWM systems, they do not provide information about the overall optimal solution as done with optimization models such as SWOLF [3]. This study represents the first attempt to optimize SWM systems using externality costs in SWOLF. The assessment identifies the waste strategy that minimizes externality costs and other criteria (budget costs and landfilling) for a specific case study. The latter represents a hypothetical U.S. county with annual waste generation of 320,000 Mg. The externality cost includes the damage costs of fossil CO2, CH4, N2O, PM2.5, PM10, NOx, SO2, VOC, CO, NH3, Hg, Pb, Cd, Cr(VI), Ni, As, and dioxins. Table 1 shows the results of the optimization, including: i) optimization criteria, ii) waste flows, and iii) an eco-efficiency indicator (the ratio between externality costs and budget costs). Minimal externality costs are obtained when incinerating most of the waste (88%) with commingled collection of recyclables (12%). The eco-efficiency of this waste strategy corresponds to -0.6, i.e. its environmental benefits (negative externality costs) correspond to approximately half of its budget costs. On the other hand, there is the solution with minimal budget costs (100% of the waste is landfilled), in which the environmental load (positive externality cost) represents one third of the budget costs (positive eco-efficiency indicator). In between these options, there is a strategy with minimal landfilling in which the organic waste is sent to anaerobic digestion, the recyclables to a single-stream MRF, and the residual to a mixed-waste MRF. Most of the externality costs of the three strategies stem from SO2, NOx and GHG, as suggested by Woon & Lo [4]. The case study shows that waste solutions identified by optimization modelling differ from common SWM systems selected for analysis in state-of-the-art accounting modelling.
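
    As a concrete illustration of the eco-efficiency indicator, the short sketch below computes the externality-to-budget-cost ratio for three candidate strategies. The cost figures are hypothetical placeholders chosen only to reproduce the approximate ratios described above; they are not the case-study values.

        strategies = {
            # name: (budget cost in USD, externality cost in USD); negative externality = net benefit
            "min externality (88% incineration)": (10_000_000, -6_000_000),
            "min budget (100% landfilling)":       (6_000_000,  2_000_000),
            "min landfilling (AD + MRFs)":         (11_000_000, -3_000_000),
        }

        for name, (budget, externality) in strategies.items():
            eco_efficiency = externality / budget   # eco-efficiency indicator
            print(f"{name:40s} eco-efficiency = {eco_efficiency:+.2f}")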

    The Battle of the Water Networks II (BWN-II)

    The Battle of the Water Networks II (BWN-II) is the latest of a series of competitions related to the design and operation of water distribution systems (WDSs) undertaken within the Water Distribution Systems Analysis (WDSA) Symposium series. The BWN-II problem specification involved a broadly defined design and operation problem for an existing network that has to be upgraded for increased future demands, and the addition of a new development area. The design decisions involved the addition of new and parallel pipes, storage, operational controls for pumps and valves, and sizing of backup power supply. Design criteria involved hydraulic, water quality, reliability, and environmental performance measures. Fourteen teams participated in the Battle and presented their results at the 14th Water Distribution Systems Analysis (WDSA 2012) conference in Adelaide, Australia, in September 2012. This paper summarizes the approaches used by the participants and the results they obtained. Given the complexity of the BWN-II problem and the innovative methods required to deal with the multi-objective, high-dimensional and computationally demanding nature of the problem, this paper represents a snapshot of state-of-the-art methods for the design and operation of water distribution systems. A general finding of this paper is that there is benefit in using a combination of heuristic engineering experience and sophisticated optimization algorithms when tackling complex real-world water distribution system design problems. Angela Marchi ... Angus R. Simpson, Aaron C. Zecchin, Holger R. Maier ... Christopher Stokes, Wenyan Wu, Graeme C. Dandy ... et al.
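
    The general finding above can be illustrated with a small sketch: an engineering-judgment design seeds the initial population of a heuristic search instead of starting from purely random designs. The decision variables, cost function, and values below are hypothetical and are not taken from any BWN-II entry.

        import random

        PIPE_SIZES = [100, 150, 200, 250, 300, 400]   # candidate diameters in mm (hypothetical)
        N_PIPES = 12

        def design_cost(design):                       # placeholder objective: smaller pipes cost less
            return sum(design)

        engineering_design = [200] * N_PIPES           # a designer's hand-built starting solution

        # seed the population with the engineering design, fill the rest randomly
        population = [engineering_design] + [
            [random.choice(PIPE_SIZES) for _ in range(N_PIPES)] for _ in range(29)
        ]
        best = min(population, key=design_cost)
        print("best initial design:", best, "cost:", design_cost(best))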

    A neural network-based method for evaluating a spatially distributed parameter field: An application in groundwater remediation under uncertainty

    Uncertainty due to spatial variability of hydraulic conductivity is an important issue in the design of reliable groundwater remediation strategies. Using groundwater management models based on a stochastic approach to groundwater flow, where the log-hydraulic conductivity is represented as a random field, is a frequently studied technique for the design of aquifer remediation in the presence of uncertainty. Such an approach employs the solution of a management model for a large set of equally probable realizations of the hydraulic conductivity. However, only a few critical realizations out of the large set will influence the final design. Incorporation of only a few of the critical realizations in the design procedure would result in a robust design with a high reliability level, comparable to that of designs obtained using many realizations. The spatial distribution of the hydraulic conductivity values in a realization, and the degree of variation of the hydraulic conductivity values within a realization, are identified as two important features that determine the level of criticalness of a realization. The association between the hydraulic conductivity pattern and the level of criticalness is not known explicitly and needs to be captured for efficient screening. The screening method presented here utilizes the pattern classification capability of a neural network and its ability to learn from examples. The performance of this method in predicting critical realizations is shown to be versatile in a range of design scenarios. The application of the screening method in a pump-and-treat design problem is illustrated via two examples. In the first example, it is shown that incorporation of as few as 10 critical realizations, as identified by the screening method, in a groundwater management model yields designs with greater than 90% reliability levels. These designs are comparable to those obtained with 100 unscreened realizations. The second example shows that the cost-reliability trade-off obtained with a small set of critical realizations is comparable to that obtained with four times as many realizations. The reduction in the number of realizations incorporated in the management model results in an 80% savings in CPU time for the solution of the management model.
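
    The screening idea can be sketched as a small supervised-classification exercise: train a neural network on labeled realizations and use it to flag which held-out realizations are likely to be critical. The sketch below uses synthetic fields and a synthetic labeling rule purely for illustration; it is not the network architecture or feature set used in the thesis.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        n_realizations, n_cells = 200, 50

        # Each row is a flattened log-conductivity field; the field's variability is used to
        # assign a synthetic "critical" label for illustration only.
        scales = rng.uniform(0.2, 1.5, n_realizations)[:, None]
        fields = rng.normal(loc=-5.0, scale=scales, size=(n_realizations, n_cells))
        labels = (fields.std(axis=1) > 0.9).astype(int)      # 1 = critical (synthetic rule)

        train_X, test_X = fields[:150], fields[150:]
        train_y, test_y = labels[:150], labels[150:]

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
        clf.fit(train_X, train_y)

        predicted_critical = np.flatnonzero(clf.predict(test_X))
        print("screened critical realizations (test-set indices):", predicted_critical)
        print("held-out accuracy:", clf.score(test_X, test_y))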

    Performance Modeling Using a Genetic Programming-Based Model Error Correction Procedure

    (Under the direction of Dr. G. Mahinthakumar.) Application performance models provide insight to designers of high performance computing (HPC) systems on the role of subsystems such as the processor or the network in determining application performance, and allow HPC centers to more accurately target procurements to resource requirements. Performance models can also be used to identify application performance bottlenecks and to provide insights about scalability issues. The suitability of a performance model for a particular performance investigation, however, is a function of both the accuracy and the cost of the model. A semi-empirical model developed in an earlier publication for an astrophysics application was shown to be inaccurate when predicting communication cost for large numbers of processors. It was hypothesized that this deficiency is due to the inability of the model to adequately capture communication contention (threshold effects) as well as other unmodeled components such as noise and I/O contention. This thesis demonstrates a new approach to capture these unknown features to improve the predictive capabilities of the model. This approach uses a systematic model error correction procedure.
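
    The correction idea can be illustrated with a toy example: choose a symbolic correction term that, when added to a base communication-cost model, best explains the measured times. The base model, candidate terms, and "measurements" below are hypothetical placeholders, and the simple enumeration stands in for the genetic-programming search used in the thesis.

        import math

        def base_model(p):                         # placeholder semi-empirical communication-cost model
            return 0.5 * math.log2(p) + 0.01 * p

        measurements = {64: 3.9, 128: 5.2, 256: 7.9, 512: 12.6, 1024: 21.8}   # synthetic timings (s)

        # a tiny symbolic search: candidate correction terms in the processor count p
        candidates = {
            "0":           lambda p: 0.0,
            "a*p":         lambda p: 0.005 * p,
            "a*p*log2(p)": lambda p: 0.001 * p * math.log2(p),
            "a*sqrt(p)":   lambda p: 0.05 * math.sqrt(p),
        }

        def sse(correction):                       # sum of squared prediction errors after correction
            return sum((base_model(p) + correction(p) - t) ** 2 for p, t in measurements.items())

        best_name = min(candidates, key=lambda name: sse(candidates[name]))
        print("selected correction term:", best_name, "SSE:", round(sse(candidates[best_name]), 3))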

    Automated Parameter Determination of Advanced Constitutive Models (Proceedings of PVP2005, ASME Pressure Vessels and Piping Division Conference, Paper PVP2005-71634)

    Parameter determination of advanced cyclic plasticity models, which are developed for simulation of cyclic stress-strain and ratcheting responses, is complex. This is mainly because the model parameters are numerous and interdependent, and three or more experimental responses are used in parameter determination. Hence the manual trial-and-error approach becomes quite tedious and time consuming for determining a reasonable set of parameters. Moreover, manual parameter determination for an advanced plasticity model requires in-depth knowledge of the model and experience with its parameter determination. These are a few of the primary reasons why advanced cyclic plasticity models are not widely used for analysis and design of fatigue-critical structures. These problems could be overcome by developing an automated parameter optimization system using a heuristic search technique (e.g., a genetic algorithm). This paper discusses the development of such an automatic parameter determination scheme for the improved Chaboche model developed by Bari and Hassan [4]. A new stepped GA optimization approach, which is found to be more efficient than the conventional GA approach in terms of fitness quality and optimization time, is presented.
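
    The "stepped" idea can be sketched as follows: fit one subset of parameters against one experimental response, freeze it, then fit the remaining parameters against another response, rather than searching all parameters at once. The constitutive model, target responses, and parameter names below are hypothetical placeholders, not the improved Chaboche model.

        import random

        def simulate_hysteresis(c1, c2):           # placeholder for a cyclic stress-strain simulation
            return 2.0 * c1 + 0.5 * c2

        def simulate_ratcheting(c1, c2, c3):       # placeholder for a ratcheting-response simulation
            return 0.2 * c1 + 0.1 * c2 + 1.5 * c3

        TARGET_HYSTERESIS, TARGET_RATCHETING = 7.0, 4.0   # synthetic experimental targets

        def ga_minimize(objective, n_params, generations=200, pop_size=40):
            """Bare-bones real-coded GA: truncation selection plus Gaussian mutation."""
            population = [[random.uniform(0.0, 5.0) for _ in range(n_params)] for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=objective)
                parents = population[: pop_size // 2]
                children = [[gene + random.gauss(0.0, 0.2) for gene in random.choice(parents)]
                            for _ in range(pop_size - len(parents))]
                population = parents + children
            return min(population, key=objective)

        # Step 1: fit c1 and c2 to the hysteresis-loop response
        c1, c2 = ga_minimize(lambda x: abs(simulate_hysteresis(x[0], x[1]) - TARGET_HYSTERESIS), 2)
        # Step 2: freeze c1 and c2, then fit c3 to the ratcheting response
        (c3,) = ga_minimize(lambda x: abs(simulate_ratcheting(c1, c2, x[0]) - TARGET_RATCHETING), 1)
        print("fitted parameters:", round(c1, 3), round(c2, 3), round(c3, 3))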

    Improving Predictability of Simulation Models Using Evolutionary Computation-Based Methods for Model Error Correction (Zechman, Emily Michelle)

    Simulation models are important tools for managing water resources systems. An optimization method coupled with a simulation model can be used to identify effective decisions to efficiently manage a system. The value of a model in decision-making is degraded when that model is not able to accurately predict system response for new management decisions. Typically, calibration is used to improve the predictability of models to match more closely the system observations. Calibration is limited in that it can only correct parameter error in a model. Models may also contain structural errors that arise from misspecification of model equations. This research develops and presents a new model error correction procedure (MECP) to improve the predictive capabilities of a simulation model. MECP is able to simultaneously correct parameter error and structural error through the identification of suitable parameter values and a function to correct misspecifications in model equations. An evolutionary computation (EC)-based implementation of MECP builds upon and extends existing evolutionary algorithms to simultaneously conduct numeric and symbolic searches for the parameter values and the function, respectively. Non-uniqueness is an inherent issue in such system identification.
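
    To illustrate the coupled numeric and symbolic search, the sketch below pairs a numeric parameter guess with a choice of symbolic correction term and evaluates both together against observations. Random sampling stands in for the evolutionary machinery, and the model, data, and candidate terms are hypothetical.

        import math
        import random

        observations = [(x, 2.0 * x + 0.5 * x * x) for x in range(1, 8)]   # synthetic "true system"

        def model(x, k):                      # deliberately mis-specified model: missing a quadratic term
            return k * x

        corrections = {                       # candidate symbolic correction terms
            "0":       lambda x: 0.0,
            "x^2/2":   lambda x: 0.5 * x * x,
            "sqrt(x)": math.sqrt,
        }

        def error(candidate):                 # sum of squared residuals of the corrected model
            k, term = candidate
            return sum((model(x, k) + corrections[term](x) - y) ** 2 for x, y in observations)

        # joint search over the numeric parameter k and the symbolic correction term
        best = min(((random.uniform(0.0, 4.0), random.choice(list(corrections)))
                    for _ in range(5000)), key=error)
        print("parameter k =", round(best[0], 3), "| correction term:", best[1],
              "| SSE:", round(error(best), 4))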