
    Fibonacci lattices for the evaluation and optimization of map projections

    [EN] Latitude-longitude grids are frequently used in geosciences for global numerical modelling although they are remarkably inhomogeneous due to meridian convergence. In contrast, Fibonacci lattices are highly isotropic and homogeneous, so that the area represented by each lattice point is virtually the same. In the present paper we show the higher performance of Fibonacci versus latitude-longitude lattices for evaluating distortion coefficients of map projections. In particular, we first obtain the typical distortion for the Lambert Conformal Conic projection with its currently defined parameters and geographic boundaries for Europe, as adopted as standard by the INSPIRE directive. Further, we optimize the defining parameters of this projection, the lower and upper standard parallel latitudes, so that the typical distortion for Europe is reduced by 10% when they are set to 36 and 61.5 degrees, respectively. We also apply the optimization procedure to determine the best standard parallels for using this projection in Spain, whose values remained unspecified by the National decree that commanded its official adoption, obtaining optimum values of 37 and 42 degrees and a resulting typical distortion of 828 ppm. Baselga Moreno, S. (2018). Fibonacci lattices for the evaluation and optimization of map projections. Computers & Geosciences 117:1-8. https://doi.org/10.1016/j.cageo.2018.04.012
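    A minimal sketch (not the paper's code) of the lattice construction: the i-th of N points sits at latitude asin(2(i+0.5)/N − 1) with longitude advanced by the golden ratio, so each point represents virtually the same area and a plain average over points approximates an area-weighted mean of any distortion coefficient. The Europe bounding box and the stand-in distortion function below are illustrative assumptions.

```python
import math

def fibonacci_lattice(n):
    """Return n near-uniform (lat, lon) points in degrees over the sphere."""
    golden = (1 + math.sqrt(5)) / 2
    pts = []
    for i in range(n):
        lat = math.degrees(math.asin(2 * (i + 0.5) / n - 1))  # equal-area bands
        lon = (360.0 * i / golden) % 360.0 - 180.0            # golden-ratio step
        pts.append((lat, lon))
    return pts

def mean_distortion(distortion, in_region, n=200_000):
    """Average distortion(lat, lon) over lattice points inside the region.
    Equal-area points make the plain mean an area-weighted estimate."""
    vals = [distortion(lat, lon) for lat, lon in fibonacci_lattice(n)
            if in_region(lat, lon)]
    return sum(vals) / len(vals)

# Illustrative only: crude Europe box and a dummy distortion coefficient.
europe = lambda lat, lon: 34.0 <= lat <= 72.0 and -11.0 <= lon <= 45.0
dummy = lambda lat, lon: abs(lat - 48.0)   # stand-in for a real coefficient
print(mean_distortion(dummy, europe))
```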

    Optimal sensor placement for sewer capacity risk management

    Complex linear assets, such as those found in transportation and utilities, are vital to economies and, in some cases, to public health. Wastewater collection systems in the United States are vital to both. Yet effective approaches to remediating failures in these systems remain an unresolved shortfall for system operators. This shortfall is evident in the estimated 850 billion gallons of untreated sewage that escape combined sewer pipes each year (US EPA 2004a) and the estimated 40,000 sanitary sewer overflows and 400,000 backups of untreated sewage into basements (US EPA 2001). Failures in wastewater collection systems can be prevented if they are detected in time to apply intervention strategies such as pipe maintenance, repair, or rehabilitation. This is the essence of a risk management process. The International Council on Systems Engineering recommends that risks be prioritized as a function of severity and occurrence and that criteria be established for acceptable and unacceptable risks (INCOSE 2007). A significant impediment to applying generally accepted risk models to wastewater collection systems is the difficulty of quantifying risk likelihoods. These difficulties stem from the size and complexity of the systems, the lack of data and statistics characterizing the distribution of risk, the high cost of evaluating even a small number of components, and the lack of methods to quantify risk. This research investigates new methods to assess the likelihood of failure through a novel approach to the placement of sensors in wastewater collection systems. The hypothesis is that iterative movement of water level sensors, directed by a specialized metaheuristic search technique, can improve the efficiency of discovering locations of unacceptable risk. An agent-based simulation is constructed to validate the performance of this technique and to test its sensitivity to varying environments. The results demonstrated that a multi-phase search strategy, with a varying number of sensors deployed in each phase, could efficiently discover locations of unacceptable risk that could be managed via a perpetual monitoring, analysis, and remediation process. A number of promising, well-defined future research opportunities also emerged from this research.
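    The dissertation's metaheuristic and agent-based simulation are not reproduced here; the sketch below only illustrates the core loop under stated assumptions: deployed sensors are observed, evidence is shared with neighbouring nodes, and the fleet is greedily redeployed toward the highest estimated risk. The graph structure, risk threshold, and 0.5 decay factor are all hypothetical.

```python
import random

def redeploy(graph, n_sensors, observe, rounds=10, threshold=0.8):
    """graph: dict node -> list of neighbouring nodes (sewer connectivity).
    observe(node) -> measured risk likelihood in [0, 1]."""
    est = {n: 0.0 for n in graph}              # prior risk estimate per node
    deployed = random.sample(list(graph), n_sensors)  # phase-1 deployment
    flagged = set()
    for _ in range(rounds):
        for n in deployed:
            est[n] = observe(n)                # measured risk at the sensor
            if est[n] > threshold:
                flagged.add(n)
            for nb in graph[n]:                # risk is spatially correlated:
                est[nb] = max(est[nb], 0.5 * est[n])  # share evidence nearby
        # greedy stand-in for the specialized metaheuristic search:
        # move the sensors to the unflagged nodes with highest estimated risk
        candidates = [n for n in graph if n not in deployed and n not in flagged]
        candidates.sort(key=est.get, reverse=True)
        deployed = candidates[:n_sensors]
    return flagged
```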

    Geophysical modeling for groundwater and soil contamination risk assessment

    This PhD thesis focuses on environmental problems linked to contaminant detection and transport in soil and groundwater. The research has two main objectives: the development, testing and application of geophysical data inversion methods for identifying and characterizing possible anomalous sources of contamination, and the development and application of numerical models for simulating contaminant propagation in saturated and unsaturated conditions. Initially, three different approaches for self-potential (SP) data inversion, based on spectral, tomographical and global optimization methods, respectively, are proposed to characterize the SP anomalous sources and to study their time evolution. The developed approaches are first tested on synthetic SP data generated by simple polarized structures (sphere, vertical cylinder, horizontal cylinder and inclined sheet) and then applied to SP field data taken from the literature. In particular, the comparison of the results with those coming from other numerical approaches strengthens their usefulness. As concerns the modelling of groundwater flow and contaminant transport, two cellular automata (CA) models have been developed to simulate diffusion-dispersion processes in unsaturated and saturated conditions, respectively, and to delineate the most dangerous scenarios in terms of the maximum distances travelled by the contaminant. The developed CA models have been applied to two study areas affected by different phenomena of contamination. The first area, located in the western basin of the island of Crete (Greece), is affected by organic contamination due to olive oil mill wastes (OOMWs). The numerical simulations provided by the CA model predict contaminant infiltration into the saturated zone, in very good agreement with the high phenol concentrations found by geochemical analyses of soil samples collected in the survey area at different depths and times. The second case study refers to an area located in the western basin of the Solofrana river valley (southern Italy), which is often affected by heavy flooding and by contamination from the surrounding agricultural and industrial activities. The application of a multidisciplinary approach, which integrates geophysical data with hydrogeological and geochemical studies, and the development of a CA model for contaminant propagation in saturated conditions have permitted the identification of a possible contamination phenomenon; the delineation of the most dangerous scenarios in terms of infiltration rates is currently in progress.
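    As a toy illustration of the CA idea (not the thesis model), the explicit update below lets each cell exchange contaminant mass with its four neighbours in proportion to concentration differences; the exchange coefficient d is an assumption and must stay at or below 0.25 for the rule to be stable.

```python
import numpy as np

def ca_step(c, d=0.2):
    """One cellular-automaton update. c: 2D concentration grid;
    d: dimensionless exchange coefficient (d <= 0.25 for stability)."""
    new = c.copy()
    new[1:-1, 1:-1] += d * (c[:-2, 1:-1] + c[2:, 1:-1] +
                            c[1:-1, :-2] + c[1:-1, 2:] -
                            4 * c[1:-1, 1:-1])   # 4-neighbour exchange
    return new

grid = np.zeros((50, 50))
grid[25, 25] = 1.0            # point source, e.g. a waste spill
for _ in range(100):          # let the plume spread for 100 steps
    grid = ca_step(grid)
```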

    Modelling and optimisation of water loss management strategies in a water distribution system: a case of Moshi Urban Water Supply and Sanitation Authority (MUWSA)

    A dissertation submitted in partial fulfilment of the requirements for the degree of Master's in Mathematical and Computer Sciences and Engineering of the Nelson Mandela African Institution of Science and Technology. Water loss in water distribution systems (WDS) is a serious problem in developing countries. A lot of water is lost on its way from the sources before reaching the consumers due to leakage, illegal use, and theft of infrastructure, among other causes. The effects of water loss in a WDS include reduction of revenue, water shortage, degradation of water quality, and inflation of the operation and maintenance costs of the water authorities. The control of water loss in a WDS depends closely on the commitment of the decision-makers, the strategies used, and the budget set for water loss management (WLM). This study presents a combined model of Multi-Criteria Decision Making (MCDM) and Integer Linear Programming (ILP) methods which may help decision-makers to prioritise and select the best strategies for WLM. The MCDM family methods MAVT, SMARTER, SAW, and COPRAS were used to evaluate and prioritize the strategies, while ILP was used to select the best strategies. Additionally, the study compared the SAW and COPRAS methods in prioritising and selecting the strategies. The data used were collected at MUWSA. The results show that the COPRAS and SAW methods rank the given alternatives differently, whereas when integrated with the ILP technique the formulated models select the same portfolios of alternatives. Thirteen alternatives, which cost 97% of the total budget set for WLM, were selected. Furthermore, the ILP models showed robustness in selecting the portfolio of alternatives, as they select the same alternatives despite changes in the ranking of alternatives and in the weights of the evaluation criteria. Finally, the study proposes a decision model framework which can be used by decision-makers to evaluate and select the best strategies for WLM in WDS.
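    A minimal sketch with illustrative data of the two-stage idea: SAW scores the alternatives by a weighted sum of linearly normalized benefit criteria, and a budget-constrained 0/1 selection (a brute-force stand-in for the study's ILP model) picks the portfolio with the highest total score.

```python
def saw_scores(matrix, weights):
    """matrix[i][j]: performance of alternative i on benefit criterion j;
    weights: one weight per criterion, summing to 1."""
    cols = list(zip(*matrix))
    norm = [[x / max(col) for x in col] for col in cols]   # scale to [0, 1]
    return [sum(w * norm[j][i] for j, w in enumerate(weights))
            for i in range(len(matrix))]

def select(scores, costs, budget):
    """Exhaustive 0/1 selection maximising total SAW score within budget
    (fine for a handful of alternatives; an ILP solver scales further)."""
    n = len(scores)
    best = (0.0, [])
    for mask in range(1 << n):
        chosen = [i for i in range(n) if mask >> i & 1]
        cost = sum(costs[i] for i in chosen)
        value = sum(scores[i] for i in chosen)
        if cost <= budget and value > best[0]:
            best = (value, chosen)
    return best

# Hypothetical example: 4 strategies, 3 benefit criteria.
scores = saw_scores([[7, 2, 5], [4, 8, 6], [9, 3, 2], [5, 5, 5]],
                    [0.5, 0.3, 0.2])
print(select(scores, costs=[10, 14, 8, 11], budget=25))
```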

    Development of a post-form strength prediction model for a 6xxx aluminium alloy in a novel forming process

    Accurate prediction of the post-form strength of structural components made from 6xxx series aluminium alloys has been a challenge, especially when the alloy undergoes complex thermo-mechanical processes such as the Fast light Alloys Stamping Technology (FAST). This process involves ultra-fast heating, high temperature plastic deformation and rapid quenching, followed by a multi-stage artificial ageing heat treatment. The strength of the material evolves with the formation of second-phase precipitates during the entire process. The widely accepted precipitation sequence is SSSS → clusters → β″ → β′ → β. However, due to the complexity of the deformations and the thermal profile during the process, the classic theory is not directly applicable. Therefore, in this research, the precipitation behaviour during ultra-fast heating, the viscoplastic behaviour, the effect of residual dislocations generated during high temperature deformation, the quenching sensitivity and the multi-stage artificial ageing response have been comprehensively studied. A set of experiments, including ultra-fast heating tests, uniaxial tensile tests, pre-straining uniaxial tensile tests, quenching tests, artificial ageing tests and TEM observations, was conducted to provide a thorough understanding of the novel forming technology. The underlying mechanisms of the FAST process were investigated through in-depth analysis of the experimental results:
    - Under ultra-fast heating conditions, most of the precipitates are dissolved, and spherical pre-β″ precipitates are formed and finely dispersed in the aluminium matrix, which accelerates the subsequent precipitation process.
    - The residual dislocations generated during plastic deformation strengthen the material and act as nucleation sites for precipitates. The peak strength is reduced owing to the uneven accumulation of precipitates around dislocations.
    - The coarse β′ and β precipitates induced by insufficient quenching are detrimental to the precipitation response. These quench-induced precipitates consume both solute atoms and vacancies, which then cannot be reverted to the preferred needle-shaped β″ precipitates.
    Based on these scientific achievements, a mechanism-based unified post-form strength (PFS) prediction model was developed ab initio to predict, with highly efficient computation, the strength evolution of the material during the entire complex FAST process. Constitutive equations were proposed to model the viscoplastic behaviour at elevated temperature. Important microstructural parameters, including dislocation density, precipitate volume fraction and radius, and solute concentration, were correlated to predict the material strength. A particle size distribution (PSD) sub-model was further established to accurately capture the detailed microstructural changes during the complex thermo-mechanical processes. Furthermore, the model has been programmed into an advanced functional module, 'Tailor', and implemented on a cloud-based FEA platform. The predictive capability of the module was verified by forming tests of a U-shaped component on a dedicated pilot production line: the 'Tailor' module was able to predict the post-form strength in agreement with experiments, with a deviation of less than 7% from the experimental results.
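    A generic sketch of the strength superposition commonly used in such unified models (not the thesis' calibrated equations): solid-solution, precipitate and dislocation contributions are combined, with the latter two added harmonically since both act as discrete obstacles. Every coefficient below is illustrative.

```python
import math

def yield_strength(c_ss, f_ppt, r_ppt, rho_dis,
                   sigma_i=10.0, A=60.0, B=250.0, K_dis=0.5e-5):
    """c_ss: solute concentration (wt%); f_ppt: precipitate volume fraction;
    r_ppt: mean precipitate radius (m); rho_dis: dislocation density (m^-2).
    All prefactors are illustrative, not fitted values."""
    sigma_ss = A * c_ss ** (2 / 3)                   # solid-solution term
    sigma_ppt = B * math.sqrt(f_ppt * r_ppt / 1e-9)  # shearable-precipitate term
    sigma_dis = K_dis * math.sqrt(rho_dis)           # forest-dislocation term
    # precipitate and dislocation obstacles superpose harmonically
    return sigma_i + sigma_ss + math.sqrt(sigma_ppt**2 + sigma_dis**2)

# Toy state after ageing: ~141 MPa with these illustrative inputs.
print(yield_strength(c_ss=0.8, f_ppt=0.02, r_ppt=3e-9, rho_dis=1e14))
```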

    Palm tree detection in UAV images: a hybrid approach based on multimodal particle swarm optimisation

    In recent years, there has been a surge of interest in palm tree detection using unmanned aerial vehicle (UAV) images, with implications for sustainability, productivity, and profitability. As with other object detection problems in computer vision, palm tree detection typically involves classifying palm trees against non-palm-tree objects or background and localising every palm tree instance in an image. Palm tree detection in large-scale high-resolution UAV images is challenging due to the large number of pixels that must be visited by the object detector, which is computationally costly. In this thesis, we design a novel hybrid approach based on a multimodal particle swarm optimisation (MPSO) algorithm that speeds up the localisation process whilst maintaining optimal accuracy for palm tree detection in UAV images. The proposed method uses a feature-extraction-based classifier as the MPSO's objective function to seek multiple positions and scales in an image that maximise the detection score. The feature-extraction-based classifier was carefully selected through an empirical study and was shown to be seven times faster than a state-of-the-art convolutional neural network (CNN) with comparable accuracy. The research continues with the development of a new k-d tree-structured MPSO algorithm, called KDT-SPSO, which significantly speeds up MPSO's nearest neighbour search by exploring only the subspaces most likely to contain a query point's neighbours. KDT-SPSO was demonstrated to be effective in solving multimodal benchmark functions and outperformed other competitors when applied to UAV images. Finally, we devise a new approach that utilises a 3D digital surface model (DSM) to generate high-confidence proposals for KDT-SPSO and an existing region-based CNN (R-CNN) for palm tree detection. The use of the DSM as prior information about the number and location of palm trees reduces the search space within images and decreases overall computation time. Our hybrid approach can be executed on non-specialised hardware without long training hours, achieving similar accuracy to the state-of-the-art R-CNN.
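    For illustration, a single-swarm PSO over image position and window scale is sketched below; the thesis' MPSO keeps multiple niches (one per tree) and KDT-SPSO adds the k-d tree neighbour search, neither of which is reproduced here. score(x, y, s) stands in for the feature-extraction-based classifier.

```python
import random

def pso_detect(score, width, height, n=30, iters=50):
    """Return the best (x, y, scale) found by a basic PSO.
    score(x, y, s): detection confidence of a window at (x, y) of size s."""
    dims = [(0, width), (0, height), (20, 120)]   # x, y, window scale (px)
    pos = [[random.uniform(lo, hi) for lo, hi in dims] for _ in range(n)]
    vel = [[0.0] * 3 for _ in range(n)]
    pbest = [p[:] for p in pos]
    pscore = [score(*p) for p in pos]
    g = max(range(n), key=lambda i: pscore[i])
    gbest, gscore = pbest[g][:], pscore[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(3):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                lo, hi = dims[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            s = score(*pos[i])
            if s > pscore[i]:
                pbest[i], pscore[i] = pos[i][:], s
                if s > gscore:
                    gbest, gscore = pos[i][:], s
    return gbest, gscore
```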

    Development of sustainable groundwater management methodologies to control saltwater intrusion into coastal aquifers with application to a tropical Pacific island country

    Saltwater intrusion due to the over-exploitation of groundwater in coastal aquifers is a critical challenge facing groundwater-dependent coastal communities throughout the world. Sustainable management of coastal aquifers for maintaining abstracted groundwater quality within permissible salinity limits is regarded as an important groundwater management problem necessitating urgent, reliable and optimal management methodologies. This study focuses on the development and evaluation of groundwater salinity prediction tools, coastal aquifer multi-objective management strategies, and adaptive management strategies using new prediction models, coupled simulation-optimization (S/O) models, and monitoring network design, respectively. Predicting the extent of saltwater intrusion into coastal aquifers in response to existing and changing pumping patterns is a prerequisite of any groundwater management framework. This study investigates the feasibility of using support vector machine regression (SVMR), an innovative artificial intelligence-based machine learning algorithm, to predict salinity at monitoring wells in an illustrative aquifer under variable groundwater pumping conditions. For evaluation purposes, the prediction results of SVMR are compared with those of well-established genetic programming (GP) based surrogate models. The prediction capabilities of the two learning machines are evaluated using several measures to ensure their practicality and generalisation ability. Also, a sensitivity analysis methodology is proposed for assessing the impact of pumping rates on salt concentrations at monitoring locations. The performance evaluations suggest that the predictive capability of SVMR is superior to that of GP models. The sensitivity analysis identifies a subset of the most influential pumping rates, which is used to construct new SVMR surrogate models with improved predictive capabilities. The improved predictive capability and generalisation ability of SVMR models, together with the ability to improve the accuracy of prediction by refining the dataset used for training, make the use of SVMR models more attractive. Coupled S/O models are efficient tools for designing multi-objective coastal aquifer management strategies. This study applies a regional-scale coupled S/O methodology with a Pareto front clustering technique to prescribe optimal groundwater withdrawal patterns from the Bonriki aquifer in the Pacific island country of Kiribati. A numerical simulation model is developed, calibrated and validated using field data from the Bonriki aquifer. For computational feasibility, SVMR surrogate models are trained and tested utilizing input-output datasets generated using the flow and transport numerical simulation model. The developed surrogate models are externally coupled with a multi-objective genetic algorithm (MOGA) optimization model as a substitute for the numerical model. The study area contains freshwater production wells for extracting groundwater; pumping from barrier wells installed along the coastlines is also considered as a management option to hydraulically control saltwater intrusion. The objective of the multi-objective management model is to maximise pumping from production wells and minimise pumping from barrier wells while ensuring that the water quality at different monitoring locations remains within pre-specified limits. The executed multi-objective coupled S/O model generated 700 Pareto-optimal solutions.
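    A minimal sketch, on synthetic data, of the surrogate step just described: a support vector regressor is trained to map pumping rates to salinity at a monitoring well, replacing the expensive flow and transport simulator inside the optimization loop. The data dimensions, the toy linear response, and the kernel settings are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(500, 12))    # 12 well pumping rates (m3/day)
# Toy salinity response at one monitoring well; a simulator provides this
# mapping in the real workflow.
y = X @ rng.uniform(0.01, 0.2, 12) + rng.normal(0, 1, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
surrogate = SVR(kernel="rbf", C=100.0, epsilon=0.1).fit(X_tr, y_tr)
print("R^2 on held-out pumping scenarios:", surrogate.score(X_te, y_te))
```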
    Analysing a large set of Pareto-optimal solutions is a challenging task for decision-makers. Hence, the k-means clustering technique was utilized to reduce the large Pareto-optimal solution set and help solve the large-scale saltwater intrusion problem in the Bonriki aquifer. The S/O-based management models delivered optimal saltwater intrusion management strategies. However, uncertainties in the numerical simulation model due to uncertain aquifer parameters are often not incorporated into management models. The present study explicitly incorporates aquifer parameter uncertainty into a multi-objective management model for the optimal design of groundwater pumping strategies for the unconfined Bonriki aquifer. To achieve computational efficiency and feasibility of the management model, the calibrated numerical simulation model in the S/O model is replaced with ensembles of SVMR surrogate models. Each standalone SVMR surrogate model in the ensemble is constructed using datasets from a different numerical simulation model with different hydraulic conductivity and porosity values. These ensemble SVMR models were coupled to the MOGA model to solve the Bonriki aquifer management problem, ensuring sustainable withdrawal rates that maintain specified salinity limits. The executed optimization model presented a Pareto front with 600 non-dominated optimal trade-off pumping solutions. The reliability of the management model, established after validation of the optimal solution results, suggests that the constraints of the optimization problem were satisfied; i.e., the salinities at monitoring locations remained within the pre-specified limits. The correct implementation of a prescribed optimal management strategy based on the coupled S/O model is always a concern for decision-makers. The management strategy actually implemented in the field sometimes deviates from the recommended optimal strategy, resulting in field-level deviations. Monitoring such field-level deviations during the actual implementation of the recommended optimal management strategy and sequentially updating the strategy using feedback information is an important step towards adaptive management of coastal groundwater resources. In this study, a three-phase adaptive management framework for a coastal aquifer subjected to saltwater intrusion is applied and evaluated for a regional-scale coastal aquifer study area. The methodology adopted includes three sequential components. First, an optimal management strategy (consisting of groundwater extraction from production and barrier wells) is derived and implemented for the optimal management of the aquifer. The implemented management strategy is obtained by solving a homogeneous ensemble-based coupled S/O model. Second, a regional-scale optimal monitoring network is designed for the aquifer system, which considers possible user noncompliance with a recommended management strategy and uncertainty in aquifer parameter estimates. A new monitoring network design is formulated to ensure that candidate monitoring wells are placed at high-risk (highly contaminated) locations. In addition, a k-means clustering methodology is utilized to select candidate monitoring wells in areas representative of the entire model domain. Finally, feedback information in the form of salinity measurements at monitoring wells is used to sequentially modify pumping strategies for future time periods in the management horizon.
    The developed adaptive management framework is evaluated by applying it to the Bonriki aquifer system. Overall, the results of this study suggest that the implemented adaptive management strategy has the potential to address practical implementation issues arising from user noncompliance, deviations between predicted and actual consequences of implementing a management strategy, and uncertainty in aquifer parameters. Ensemble prediction models are known to be more accurate than standalone prediction models. The present study develops and utilises homogeneous and heterogeneous ensemble models based on several standalone algorithms, including artificial neural networks (ANN), GP, SVMR and Gaussian process regression (GPR). These models are used to predict groundwater salinity in the Bonriki aquifer. Standalone and ensemble prediction models are trained and validated using identical pumping and salinity concentration datasets generated by solving 3D transient density-dependent coastal aquifer flow and transport simulation models. After validation, the ensemble models are used to predict salinity concentration at selected monitoring wells in the modelled aquifer under variable groundwater pumping conditions. The predictive capabilities of the developed ensemble models are quantified using standard statistical procedures. The performance evaluation results suggest that the predictive capabilities of the standalone prediction models (ANN, GP, SVMR and GPR) are comparable to those of the groundwater variable-density flow and salt transport numerical simulation model. However, GPR standalone models had better predictive capabilities than the other standalone models. Also, SVMR and GPR standalone models were more efficient (in terms of computational training time) than the other standalone models. Among the ensemble models, the performance of the homogeneous GPR ensemble model was found to be superior to that of the other homogeneous and heterogeneous ensemble models. Employing data-driven predictive models as replacements for complex groundwater flow and transport models enables the prediction of future scenarios and also saves computational time and effort when developing optimal coastal aquifer management strategies based on coupled S/O models. In this study, a new data-driven model, namely the Group Method of Data Handling (GMDH) approach, is developed and utilized to predict salinity concentration in a coastal aquifer and, simultaneously, to determine the input predictor variables (pumping rates) that have the most impact on the outcomes (salinity at monitoring locations). To confirm the importance of variables, three tests are conducted in which new GMDH models are constructed using subsets of the original datasets. In TEST 1, new GMDH models are constructed using the most influential variables only. In TEST 2, a subset of 20 variables (the 10 most and 10 least influential) is used to develop new GMDH models. In TEST 3, a subset of the least influential variables is used to develop GMDH models. A performance evaluation demonstrates that the GMDH models developed using the entire dataset have reasonable predictive accuracy and efficiency. A comparison of the performance evaluations of the three tests highlights the importance of appropriately selecting input pumping rates when developing predictive models.
    These results suggest that incorporating the least influential variables decreases model accuracy; thus, considering only the most influential variables in salinity prediction models is beneficial and appropriate. This study also investigated the efficiency and viability of using artificial freshwater recharge (AFR) to increase fresh groundwater pumping rates from production wells. First, the effect of AFR on the inland encroachment of saline water is quantified for existing scenarios. Specifically, groundwater head and salinity differences at monitoring locations before and after artificial recharge are presented. Second, a multi-objective management model incorporating groundwater pumping and AFR is implemented to control groundwater salinization in an illustrative coastal aquifer system. A coupled SVMR-MOGA model is developed for prescribing optimal management strategies that incorporate AFR and groundwater pumping wells. The Pareto-optimal front obtained from the SVMR-MOGA optimization model presents a set of optimal solutions for the sustainable management of the coastal aquifer. The pumping strategies obtained as Pareto-optimal solutions with and without freshwater recharge show that saltwater intrusion is sensitive to AFR, and the hydraulic head lenses created by AFR can be used as one practical option to control it. The developed 3D saltwater intrusion model, the predictive capabilities of the developed SVMR models, and the feasibility of the proposed coupled multi-objective SVMR-MOGA optimization model make the proposed methodology potentially suitable for solving large-scale regional saltwater intrusion management problems. Overall, the development and evaluation of the various groundwater numerical simulation models, predictive models, multi-objective management strategies and adaptive methodologies will provide decision-makers with tools for the sustainable management of coastal aquifers. It is envisioned that the outcomes of this research will provide useful information to groundwater managers and stakeholders, and offer potential resolutions to policy-makers regarding the sustainable management of groundwater resources. The real-life case study of the Bonriki aquifer presented in this study provides the scientific community with a broader understanding of groundwater resource issues in coastal aquifers and establishes the practical utility of the developed management strategies.
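    As a sketch of the clustering step used to condense the large Pareto sets mentioned above, the snippet below reduces a synthetic front to a few representative trade-off strategies with k-means; the two objectives and the cluster count are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for the 700-solution Pareto front:
# columns = (total production pumping, total barrier pumping).
pareto = np.random.rand(700, 2)
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(pareto)

# Present the actual solution nearest each centroid as the cluster's
# representative strategy, so decision-makers review 8 options, not 700.
reps = [pareto[np.argmin(np.linalg.norm(pareto - c, axis=1))]
        for c in km.cluster_centers_]
print(np.array(reps))
```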

    Multi-objective optimisation metrics for combining seismic and production data in automated reservoir history matching

    Information from time-lapse (4D) seismic data can be integrated with that from producing wells to calibrate reservoir models. 4D seismic data provide valuable information at high spatial resolution, while production data provide information at high temporal resolution. However, combining the two data sources can be challenging as they are often conflicting. In addition, information from the production wells themselves is often correlated and can also be conflicting, especially in reservoirs of complex geology. This study examines alternative approaches to integrating data from different sources in the automatic history matching loop. It focuses on using multiple-objective methods in history matching to identify those that are most appropriate for the data available. The problem of identifying suitable metrics for comparing data is investigated in the context of data assimilation, the formulation of objective functions, optimisation methods and the parameterisation scheme. Traditional data assimilation based on global misfit functions or weighted multi-objective functions creates bias, which results in predictions from some areas of the model having a good fit to the data and others having a very poor fit. The key to rectifying the bias was found in the approaches proposed in this study, which are based on the concept of dominance. A new set of algorithms called Dynamic Screening of Fronts in Multiobjective Optimisation (DSFMO) has been developed, which enables the handling of many objectives in a multi-objective fashion. With the DSFMO approach, several options for selecting models for the next iteration are studied and their performance appraised using different analytical functions of many objectives and parameters. The proposed approaches are also tested and validated by applying them to synthetic reservoir models. DSFMO is then implemented in resolving the problem of many conflicting objectives in the seismic and production history matching of the Statoil Norne Field. Compared to traditional stochastic approaches, results show that DSFMO yields better data-fitting models that reflect the uncertainty in model predictions. We also investigated the use of experimental design techniques in calibrating proxy models and suggested ways of improving the quality of proxy models in history matching. We thereafter proposed a proxy-based approach for model appraisal and uncertainty assessment in a Bayesian context. We found that Markov Chain Monte Carlo resampling with the proxy model takes minutes instead of hours.
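    The DSFMO internals are specific to the thesis, but the dominance test that underlies any front-based selection can be sketched directly: one misfit vector dominates another if it is no worse on every objective and strictly better on at least one, and the models worth keeping are the non-dominated ones.

```python
def dominates(a, b):
    """True if misfit vector a dominates b (lower misfit is better)."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def first_front(misfits):
    """Return the non-dominated subset of a list of misfit vectors."""
    return [m for m in misfits
            if not any(dominates(o, m) for o in misfits if o is not m)]

# Toy misfits for 4 models against 2 objectives (e.g. seismic, production):
front = first_front([(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)])
print(front)   # the first three survive; (3.0, 3.0) is dominated by (2.0, 2.0)
```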

    Improving the convergence rate of seismic history matching with a proxy derived method to aid stochastic sampling

    History matching is a very important activity during the continued development and management of petroleum reservoirs. Time-lapse (4D) seismic data provide information on the dynamics of fluids in reservoirs, relating variations of the seismic signal to saturation and pressure changes. This information can be integrated with history matching to improve convergence towards a simulation model that predicts the available data. The main aim of this thesis is to develop a method to speed up the convergence rate of assisted seismic history matching using a proxy-derived gradient method. Stochastic inversion algorithms often rely on simple assumptions for selecting new models by random processes. In this work, we improve the way that such approaches learn about the system they are searching and thus operate more efficiently. To this end, a new method has been developed called NA with Proxy-derived Gradients (NAPG). To improve convergence, we use a proxy model to understand how parameters control the misfit and then use a global stochastic method with these sensitivities to optimise the search of the parameter space. This leads to an improved set of final reservoir models, which in turn can be used more effectively in reservoir management decisions. To validate the proposed approach, we applied it to a number of analytical functions and synthetic cases. In addition, we demonstrate the proposed method by applying it to the UKCS Schiehallion field. The results show that the new method generally speeds up the rate of convergence by a factor of two to three. The performance of NAPG is much improved by updating the regression equation coefficients instead of keeping them fixed. In addition, we found that the initial number of models needed to start NAPG or NA could be reduced by using experimental design instead of random initialization. Ultimately, with all of these approaches combined, the number of models required to find a good match was reduced by an order of magnitude. We have investigated the criteria for stopping the SHM loop, particularly the use of a proxy model to help; more research is needed to complete this work, but the approach is promising. Quantifying parameter uncertainty using NA and NAPG was studied using the NA-Bayes approach (NAB). We found that NAB is very sensitive to misfit magnitude but that otherwise NA and NAPG produce similar uncertainty measures.
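    A minimal sketch of the proxy-gradient idea (not the thesis code): fit a cheap quadratic regression proxy to sampled (parameter, misfit) pairs, differentiate it analytically, and let the gradient steer new proposals. The diagonal quadratic form, the toy misfit surface, and the step size are all assumptions.

```python
import numpy as np

def fit_quadratic(X, y):
    """Least-squares proxy m(x) ~ c + b.x + sum_k a_k x_k^2 (diagonal form)."""
    design = np.hstack([np.ones((len(X), 1)), X, X**2])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef

def proxy_gradient(coef, x):
    """Analytic gradient of the fitted proxy at x."""
    d = len(x)
    b, a = coef[1:1 + d], coef[1 + d:]
    return b + 2 * a * x

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 3))          # 3 reservoir parameters, 200 samples
y = ((X - 0.3) ** 2).sum(axis=1)          # toy misfit surface, minimum at 0.3
coef = fit_quadratic(X, y)

x = np.zeros(3)
for _ in range(25):                       # gradient-guided proposal walk
    x = x - 0.2 * proxy_gradient(coef, x)
print(x)                                  # converges to ~[0.3, 0.3, 0.3]
```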