
    An improved Ant Colony System for the Sequential Ordering Problem

    It is not rare that the performance of one metaheuristic algorithm can be improved by incorporating ideas taken from another. In this article we present how Simulated Annealing (SA) can be used to improve the efficiency of the Ant Colony System (ACS) and the Enhanced ACS (EACS) when solving the Sequential Ordering Problem (SOP). Moreover, we show how the very same ideas can be applied to improve the convergence of a dedicated local search, i.e. the SOP-3-exchange algorithm. A statistical analysis of the proposed algorithms, both in terms of finding suitable parameter values and the quality of the generated solutions, is presented based on a series of computational experiments conducted on SOP instances from the well-known TSPLIB and SOPLIB2006 repositories. The proposed ACS-SA and EACS-SA algorithms often generate solutions of better quality than the ACS and EACS, respectively. Moreover, the EACS-SA algorithm combined with the proposed SOP-3-exchange-SA local search was able to find 10 new best solutions for the SOP instances from the SOPLIB2006 repository, thus improving the state-of-the-art results known from the literature. Overall, the best known or improved solutions were found in 41 out of 48 cases.
    Comment: 30 pages, 8 tables, 11 figures
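The hybridisation described above hinges on the Simulated Annealing acceptance rule: a candidate solution may replace the current one even when it is worse, with a probability that shrinks as the temperature cools. A minimal sketch of that rule and a generic annealing loop in Python (function names, the geometric cooling schedule, and all parameter values are illustrative assumptions, not the paper's actual ACS-SA implementation):

```python
import math
import random

def sa_accept(current_cost, candidate_cost, temperature, rng=random.random):
    """Metropolis acceptance rule: always accept an improvement; accept a
    worse candidate with probability exp(-delta / T)."""
    delta = candidate_cost - current_cost
    if delta <= 0:
        return True
    if temperature <= 0:
        return False
    return rng() < math.exp(-delta / temperature)

def anneal(solution, cost, neighbour, t0=100.0, cooling=0.95, steps=1000):
    """Generic SA loop; in an ACS-SA hybrid a rule like this would govern
    whether a newly constructed ant solution replaces the current one."""
    best = cur = solution
    best_cost = cur_cost = cost(solution)
    t = t0
    for _ in range(steps):
        cand = neighbour(cur)
        cand_cost = cost(cand)
        if sa_accept(cur_cost, cand_cost, t):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
        t *= cooling  # geometric cooling (illustrative choice)
    return best, best_cost
```

The same acceptance test can equally be embedded inside a local search such as SOP-3-exchange, which is the second use the abstract describes.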

    Multi-objective optimisation metrics for combining seismic and production data in automated reservoir history matching

    Information from time-lapse (4D) seismic data can be integrated with that from producing wells to calibrate reservoir models. 4D seismic data provide valuable information at high spatial resolution, while production data provide information at high temporal resolution. However, combining the two data sources can be challenging because they often conflict. In addition, information from the production wells themselves is often correlated and can also be conflicting, especially in reservoirs of complex geology. This study examines alternative approaches to integrating data from different sources in the automatic history-matching loop, focusing on multi-objective methods in history matching to identify those most appropriate for the available data. The problem of identifying suitable metrics for comparing data is investigated in the context of data assimilation, the formulation of objective functions, optimisation methods and the parameterisation scheme. Traditional data assimilation based on global misfit functions or weighted multi-objective functions creates bias, which results in predictions from some areas of the model fitting the data well while others fit very poorly. The key to rectifying this bias was found in the approaches proposed in this study, which are based on the concept of dominance. A new set of algorithms called Dynamic Screening of Fronts in Multiobjective Optimisation (DSFMO) has been developed, which enables many objectives to be handled in a multi-objective fashion. Within the DSFMO approach, several options for selecting models for the next iteration are studied and their performance is appraised using different analytical functions of many objectives and parameters. The proposed approaches are also tested and validated by applying them to synthetic reservoir models.
DSFMO is then applied to resolve the problem of many conflicting objectives in the seismic and production history matching of the Statoil Norne Field. Compared with traditional stochastic approaches, the results show that DSFMO yields better data-fitting models that reflect the uncertainty in model predictions. We also investigated the use of experimental-design techniques in calibrating proxy models and suggested ways of improving the quality of proxy models in history matching. We then proposed a proxy-based approach for model appraisal and uncertainty assessment in a Bayesian context. We found that Markov chain Monte Carlo resampling with the proxy model takes minutes instead of hours.
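The central concept above, dominance, can be made concrete: one model's misfit vector dominates another's if it is no worse in every objective and strictly better in at least one, and the models dominated by no other form the non-dominated front from which dominance-based selection draws. A minimal sketch of that concept, assuming minimisation of misfits (DSFMO's actual screening and selection logic is not described in the abstract):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated_front(points):
    """Return the points not dominated by any other point in the set."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

Selecting models from such a front, rather than ranking them by a single weighted sum of misfits, is what avoids the bias the abstract attributes to global misfit functions.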

    Combining evolutionary algorithms and agent-based simulation for the development of urbanisation policies

    Urban-planning authorities continually face the problem of optimising the allocation of green space over time in developing urban environments. To help in these decision-making processes, this thesis provides an empirical study of using evolutionary approaches to solve sequential decision-making problems under uncertainty in stochastic environments. To achieve this goal, the work is underpinned by a theoretical framework based on the economic model of Alonso and an associated methodology for modelling spatial and temporal urban growth, in order to better understand the complexity inherent in this kind of system and to generate and improve relevant knowledge for the urban-planning community. The model was hybridised with cellular automata and an agent-based model, and extended to encompass green-space planning based on urban cost and satisfaction. Monte Carlo sampling techniques and the use of the urban model as a surrogate tool were the two main mechanisms investigated and applied to overcome the noise and uncertainty that arise when dealing with future trends and expectations. Once the evolutionary algorithms were equipped with these mechanisms, the problem under consideration was defined and characterised as a type of adaptive submodular problem. Afterwards, the performance of a non-adaptive evolutionary approach was compared with that of a random search and a very smart greedy algorithm, and the way in which the complexity linked to the configuration of the problem modifies the performance of both algorithms was analysed.
Later on, the application of two very distinct frameworks incorporating evolutionary-algorithm approaches to this problem was explored: (i) an 'offline' approach, in which a candidate solution encodes a complete set of decisions, which is then evaluated by full simulation, and (ii) an 'online' approach, which involves a sequential series of optimisations, each making only a single decision and starting its simulations from the endpoint of the previous run.
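One of the mechanisms mentioned above, Monte Carlo sampling to cope with noisy evaluations, amounts to averaging repeated runs of the stochastic simulation for each candidate before the evolutionary algorithm compares candidates. A small illustrative sketch (the `simulate` callback is a hypothetical stand-in for the urban model, not the thesis's actual code):

```python
import random
import statistics

def monte_carlo_fitness(candidate, simulate, samples=30, seed=None):
    """Estimate a candidate's expected fitness by averaging `samples`
    independent runs of a stochastic simulation; more samples trade
    compute for a lower-variance estimate."""
    rng = random.Random(seed)
    return statistics.mean(simulate(candidate, rng) for _ in range(samples))
```

In an evolutionary loop this estimator replaces a single noisy evaluation; the surrogate approach the thesis also investigates would instead approximate `simulate` with a cheap proxy to reduce the cost of the repeated runs.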

    The Social Question in the Twenty-First Century: A Global View
