
    Population-based algorithms for improved history matching and uncertainty quantification of petroleum reservoirs

    In modern field management practices, there are two important steps that shed light on a multimillion-dollar investment. The first step is history matching, where the simulation model is calibrated to reproduce the historical observations from the field. In this inverse problem, different geological and petrophysical properties may provide equally good history matches. Such diverse models are likely to show different production behaviors in the future. This ties history matching to the second step: uncertainty quantification of predictions. Multiple history-matched models are essential for a realistic uncertainty estimate of future field behavior. These two steps facilitate decision making and have a direct impact on the technical and financial performance of oil and gas companies. Population-based optimization algorithms have recently enjoyed growing popularity for solving engineering problems. Population-based systems work with a group of individuals that cooperate and communicate to accomplish a task that is normally beyond the capabilities of each individual. These individuals are deployed with the aim of solving the problem with maximum efficiency. This thesis introduces the application of two novel population-based algorithms for history matching and uncertainty quantification of petroleum reservoir models. Ant colony optimization and differential evolution algorithms are used to search the parameter space for multiple history-matched models and, using a Bayesian framework, the posterior probability of each model is evaluated for prediction of reservoir performance. It is demonstrated that by bringing in recent developments in computer science, such as ant colony optimization, differential evolution and multiobjective optimization, we can improve history matching and uncertainty quantification frameworks. This thesis provides insights into the performance of these algorithms in history matching and prediction and develops an understanding of their tuning parameters. The research also includes a comparative study of these methods against a benchmark technique, the Neighbourhood Algorithm. This comparison reveals the superiority of the proposed methodologies in areas such as computational efficiency and match quality.
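
    The thesis's actual implementation is not reproduced here, but a minimal sketch can illustrate the kind of differential evolution search described above. Everything in it is a hypothetical stand-in: the toy misfit function replaces a reservoir simulator run compared against field observations, and the bounds, population size and control parameters (F, CR) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def misfit(params, observed):
    """Placeholder history-matching objective. In practice this would run a
    reservoir simulator with the candidate parameters and return the mismatch
    against observed production data."""
    simulated = params[0] * np.exp(-params[1] * np.arange(len(observed)))
    return np.sum((simulated - observed) ** 2)

def differential_evolution(observed, bounds, pop_size=20, gens=100, F=0.8, CR=0.9):
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)   # random initial ensemble
    cost = np.array([misfit(ind, observed) for ind in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1 mutation: perturb one individual by a scaled difference
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover, forcing at least one gene from the mutant
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # Greedy selection keeps whichever candidate matches history better
            trial_cost = misfit(trial, observed)
            if trial_cost <= cost[i]:
                pop[i], cost[i] = trial, trial_cost
    return pop, cost  # the final population is an ensemble of candidate matches

# Toy usage: recover decay-curve parameters from synthetic "observations"
observed = 5.0 * np.exp(-0.3 * np.arange(30))
ensemble, costs = differential_evolution(observed, np.array([[0.1, 10.0], [0.01, 1.0]]))
```

    Because selection is greedy but the population remains diverse, the final ensemble typically contains several distinct low-misfit parameter sets; in a Bayesian framework such as the one the thesis uses, these can be weighted by posterior probability (for example, proportional to exp(-misfit/2) under Gaussian observation errors) to form a prediction ensemble.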

    An echo state model of non-Markovian reinforcement learning

    There exists a growing need for intelligent, autonomous control strategies that operate in real-world domains. In theory, the state-action space must exhibit the Markov property for reinforcement learning to be applicable. Empirical evidence, however, suggests that reinforcement learning also applies to domains where the state-action space is only approximately Markovian, which describes the overwhelming majority of real-world domains. These domains, termed non-Markovian reinforcement learning domains, raise a unique set of practical challenges. The reconstruction dimension required to approximate a Markovian state-space is unknown a priori and can potentially be large. Further, the spatial complexity of local function approximation of the reinforcement learning domain grows exponentially with the reconstruction dimension. Parameterized dynamic systems alleviate both embedding-length and state-space dimensionality concerns by reconstructing an approximate Markovian state-space via a compact, recurrent representation. Yet this representation extracts a cost: modeling reinforcement learning domains via adaptive, parameterized dynamic systems is characterized by instability, slow convergence, and high computational or spatial training complexity. The objectives of this research are to demonstrate a stable, convergent, accurate, and scalable model of non-Markovian reinforcement learning domains. These objectives are fulfilled via fixed point analysis of the dynamics underlying the reinforcement learning domain and the Echo State Network, a class of parameterized dynamic system. Understanding models of non-Markovian reinforcement learning domains requires understanding the interactions between learning domains and their models. Fixed point analysis of the Mountain Car Problem reinforcement learning domain, for both local and nonlocal function approximations, suggests a close relationship between the locality of the approximation and the number and severity of bifurcations of the fixed point structure. This research suggests the likely cause of this relationship: reinforcement learning domains exist within a dynamic feature space in which trajectories are analogous to states. The fixed point structure maps dynamic space onto state-space. This explanation suggests two testable hypotheses. First, reinforcement learning is sensitive to state-space locality because states cluster as trajectories in time rather than space. Second, models using trajectory-based features should exhibit good modeling performance and few changes in fixed point structure. Analysis of the performance of a lookup table, a feedforward neural network, and an Echo State Network (ESN) on the Mountain Car Problem confirms these hypotheses. The ESN is a large, sparse, randomly generated, unadapted recurrent neural network that adapts a linear projection of the target domain onto the hidden layer. ESN modeling results on reinforcement learning domains show that it achieves performance comparable to the lookup table and neural network architectures on the Mountain Car Problem with minimal changes to fixed point structure. The ESN also achieves lookup-table-caliber performance when modeling Acrobot, a four-dimensional control problem, but is less successful modeling the lower-dimensional Modified Mountain Car Problem. These performance discrepancies are attributed to the ESN's excellent ability to represent complex short-term dynamics and its inability to consolidate long temporal dependencies into a static memory. Without memory consolidation, reinforcement learning domains exhibiting attractors with multiple dynamic scales are unlikely to be well modeled via ESN. To mitigate this problem, a simple ESN memory consolidation method is presented and tested on stationary dynamic systems. These results indicate the potential to improve modeling performance in reinforcement learning domains via memory consolidation.
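
    As a concrete illustration of the architecture described above — a large, sparse, randomly generated, unadapted recurrent network in which only a linear readout is trained — here is a minimal ESN sketch. The hyperparameter values and the ridge-regression readout are illustrative assumptions, not the dissertation's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

class EchoStateNetwork:
    """Minimal ESN: fixed random sparse reservoir, trained linear readout."""

    def __init__(self, n_inputs, n_reservoir=200, spectral_radius=0.9,
                 sparsity=0.1, ridge=1e-6):
        self.W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
        W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
        W[rng.random(W.shape) > sparsity] = 0.0        # keep ~10% of connections
        # Rescale so the largest eigenvalue magnitude equals spectral_radius,
        # the usual heuristic for obtaining the echo state property
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W, self.ridge, self.W_out = W, ridge, None

    def _states(self, inputs):
        x = np.zeros(self.W.shape[0])
        states = []
        for u in inputs:                               # reservoir state update
            x = np.tanh(self.W_in @ u + self.W @ x)
            states.append(x.copy())
        return np.array(states)

    def fit(self, inputs, targets):
        X = self._states(inputs)
        # Ridge regression: the readout is the only adapted part of the network
        self.W_out = np.linalg.solve(X.T @ X + self.ridge * np.eye(X.shape[1]),
                                     X.T @ targets)

    def predict(self, inputs):
        return self._states(inputs) @ self.W_out

# Toy usage: learn one-step-ahead prediction of a sine wave
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t)[:-1, None], np.sin(t)[1:]
esn = EchoStateNetwork(n_inputs=1)
esn.fit(u, y)
```

    Because W_in and W stay fixed, training reduces to a single linear solve; this is what makes the ESN cheap and stable to train compared with fully adapted recurrent networks, at the cost of the memory-consolidation limits noted above.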

    Efficient Learning Machines

    Computer science

    Topology optimization for additive manufacture

    Additive manufacturing (AM) offers a way to manufacture highly complex designs with potentially enhanced performance, as it is free from many of the constraints associated with traditional manufacturing. However, current design and optimisation tools, which were developed long before AM, do not allow efficient exploration of AM's design space. Among these tools is a set of numerical methods/algorithms often used in the field of structural optimisation called topology optimisation (TO). These powerful techniques emerged in the 1980s and have since been used to achieve structural solutions with performance superior to those of other types of structural optimisation. However, such solutions are often constrained during optimisation to minimise structural complexity, thereby ensuring that they can be manufactured via traditional methods. With the advent of AM, it is necessary to restructure these techniques to maximise AM's capabilities. Such restructuring should involve identifying and relaxing the optimisation constraints within TO algorithms that restrict design for AM. These constraints include the initial design, the optimisation parameters and the mesh characteristics of the optimisation problem being solved. A typical TO run with given mesh characteristics moves an assumed initial design toward another with improved structural performance. It was anticipated that the complexity and performance of a solution would be affected by these optimisation constraints. This work restructured a TO algorithm, bidirectional evolutionary structural optimisation (BESO), for AM. MATLAB and MSC Nastran were coupled to study and investigate BESO for both two- and three-dimensional problems. It was observed that certain parameter values promote the realisation of complex structures, and that this effect could be further enhanced by including an adaptive meshing strategy (AMS) in the TO. Such a strategy reduced the degrees of freedom needed to reach a given solution quality below what would initially be required without the AMS.
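
    The hard-kill BESO update that the restructured algorithm builds on can be sketched in a few lines: elements are ranked by sensitivity, the volume fraction is moved gradually toward its target, the least efficient solid elements are removed and the most efficient void ones restored. In this sketch the sensitivities are random placeholders; in the actual workflow they would come from the coupled MATLAB/MSC Nastran finite-element analyses described above.

```python
import numpy as np

def beso_update(density, sensitivity, target_fraction, evolution_rate=0.02):
    """One hard-kill BESO step over a 0/1 element density field.

    density         : 0/1 array (1 = solid element, 0 = void element)
    sensitivity     : elemental sensitivities, e.g. compliance gradients
                      from a finite-element solve (placeholder here)
    target_fraction : final volume fraction the design should reach
    """
    current = density.mean()
    # Move the volume fraction gradually toward the target
    if current > target_fraction:
        new_fraction = max(current * (1 - evolution_rate), target_fraction)
    else:
        new_fraction = min(current * (1 + evolution_rate), target_fraction)
    # Keep the highest-sensitivity elements up to the new volume budget;
    # this simultaneously removes weak solids and restores strong voids
    n_solid = int(round(new_fraction * density.size))
    threshold = np.sort(sensitivity)[::-1][max(n_solid - 1, 0)]
    return (sensitivity >= threshold).astype(int)

# Toy usage: random "sensitivities" stand in for FE-derived values
rng = np.random.default_rng(1)
density = np.ones(1000, dtype=int)
for _ in range(40):
    sens = rng.random(1000)            # placeholder for compliance sensitivities
    density = beso_update(density, sens, target_fraction=0.5)
print(density.mean())                  # approaches the 0.5 target fraction
```

    An adaptive meshing strategy of the kind the thesis describes would slot in between such updates, refining the mesh near the evolving solid/void boundary so that fewer degrees of freedom are carried elsewhere in the domain.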

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a final publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to afford a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. High Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Characterization and uncertainty analysis of siliciclastic aquifer-fault system

    The complex siliciclastic aquifer system underneath the Baton Rouge area, Louisiana, USA, is fluvial in origin. The east-west-trending Baton Rouge fault and Denham Springs-Scotlandville fault cut across East Baton Rouge Parish and play an important role in groundwater flow and aquifer salinization. To better understand the salinization underneath Baton Rouge, it is imperative to study the hydrofacies architecture and the groundwater flow field of the Baton Rouge aquifer-fault system. This is done by developing multiple detailed hydrofacies architecture models and multiple groundwater flow models of the aquifer-fault system, representing various uncertain model propositions. The hydrofacies architecture models focus on the Miocene-Pliocene depth interval that consists of the “1,200-foot” sand, “1,500-foot” sand, “1,700-foot” sand and “2,000-foot” sand, as these aquifer units are classified and named by their approximate depth below ground level. The groundwater flow models focus only on the “2,000-foot” sand. The study reveals the complexity of the Baton Rouge aquifer-fault system: sand deposition is non-uniform, different sand units are interconnected, sand-unit displacement on the faults is significant, and the spatial distribution of flow pathways through the faults is sporadic. The identified locations of flow pathways through the Baton Rouge fault provide useful information on possible windows for saltwater intrusion from the south. The results show that the “1,200-foot” sand, “1,500-foot” sand and “1,700-foot” sand should not be modeled separately, since they are very well connected near the Baton Rouge fault, while the “2,000-foot” sand between the two faults is a separate unit. Results suggest that in the “2,000-foot” sand the Denham Springs-Scotlandville fault has much lower permeability than the Baton Rouge fault, and that the Baton Rouge fault plays an important role in the aquifer salinization.
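
    The role the study attributes to the faults — low-permeability barriers that compartmentalise flow except at sporadic windows — can be illustrated with a minimal one-dimensional steady-state Darcy flow model. All numbers here (conductivities, fixed heads, fault width) are illustrative assumptions, not parameters from the Baton Rouge models.

```python
import numpy as np

# Steady-state 1D Darcy flow, d/dx( K(x) dh/dx ) = 0, with fixed heads at
# both ends, solved by finite differences on a uniform grid.
n = 101
K = np.full(n, 30.0)        # aquifer sand hydraulic conductivity (m/day)
K[48:53] = 1e-3             # fault zone modeled as a thin low-K barrier

# Harmonic-mean conductivity at cell interfaces (standard for layered media)
K_face = 2.0 * K[:-1] * K[1:] / (K[:-1] + K[1:])

# Assemble the tridiagonal system A @ h = b
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0
b[0], b[-1] = 20.0, 10.0    # fixed heads (m) on either side of the fault
for i in range(1, n - 1):
    A[i, i - 1] = K_face[i - 1]
    A[i, i + 1] = K_face[i]
    A[i, i] = -(K_face[i - 1] + K_face[i])

h = np.linalg.solve(A, b)
# Nearly the entire head drop concentrates across the fault cells, showing
# how a low-permeability fault compartmentalises the aquifer; raising K in
# a few fault cells opens a "window" that reconnects the two sides.
print(h[45], h[55])
```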