2,918 research outputs found

    How to Model Condensate Banking in a Simulation Model to Get Reliable Forecasts? Case Story of Elgin/Franklin

    Get PDF
    Imperial Users only

    Improving reservoir characterisation and simulation using near-wellbore upscaling

    Get PDF
    In this thesis, novel workflows involving high-resolution near-wellbore modelling (NWM) are illustrated, which allow integration of multi-scale geological and petrophysical data from highly heterogeneous reservoirs into field-scale reservoir simulations. When applied to a clastic reservoir with high variance at small scale, NWM significantly improved reservoir characterisation and the calibration of the reservoir model against well test data. Results show that using NWM tools for reservoir modelling yields more precise flow calculations and improves our fundamental understanding of the interactions between the reservoir and the wellbore. Furthermore, this thesis employs an integrated NWM workflow to identify and evaluate the geological heterogeneities that enhanced reservoir permeability in a giant carbonate reservoir with a long production history. Key among these heterogeneities are mechanically weak zones of solution-enhanced porosity, leached stylolites and associated tension gashes, which developed during late-stage diagenetic corrosion. The results of this investigation confirmed the critical role of diagenetic corrosion in enhancing the permeability of the reservoir. One of the key aims of this thesis is to develop a novel near-wellbore upscaling (NWU) workflow that addresses the challenges associated with conventional carbonate modelling workflows. The NWU workflow developed in this thesis provides a systematic geostatistical approach to obtaining a more realistic representation of the above multi-scale geological-petrophysical heterogeneities in the reservoir simulation model of the carbonate field. The NWU results were used to generate global porosity-permeability and vertical-horizontal permeability relationships for reservoir simulation. Instead of applying artificial permeability multipliers that do not necessarily capture the impacts of geological heterogeneities, the NWU workflow incorporates representations of fine-scale heterogeneities in the reservoir simulation model. Another aim of this thesis is to develop a new near-wellbore rock-typing and upscaling approach to improve the integration of reservoir rock-typing and simulation in carbonate reservoirs. The rock-typing and upscaling methodology described in this work involves the geological-petrophysical classification of the reservoir heterogeneities through systematic evaluation of the key diagenetic events, including the key associations between the depositional and diagenetic features, and their impact on reservoir flow properties. The near-wellbore rock-typing and upscaling workflow yielded consistent initialisation of the reservoir simulation model and therefore improved the calculation of volumes of fluids-in-place. Subsequently, the cumulative production curves computed by the reservoir simulation model agreed well with the historic production data. The revised simulation model is now much better constrained to the reservoir geology and provides an improved geological prior for history matching. This thesis therefore provides valuable insights into the means by which a geologically consistent field-level history match can be achieved for complex carbonate reservoirs.
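    The porosity-permeability and vertical-horizontal permeability relationships mentioned in the abstract can be pictured with a small sketch. The Python snippet below is a minimal, hypothetical illustration (a synthetic near-wellbore grid, equal-thickness arithmetic/harmonic layer averaging, and a least-squares log-linear poro-perm fit); it is not the thesis's NWU workflow or its data.

```python
# Minimal sketch: (1) upscale a fine near-wellbore permeability grid into one
# coarse kh/kv pair, (2) fit a global log-linear porosity-permeability trend
# for export to a simulator. Grid shape and statistics are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Fine-scale near-wellbore grid (nx, ny, nz); permeability in mD
phi = np.clip(rng.normal(0.18, 0.05, (20, 20, 50)), 0.01, 0.35)
k = 10 ** (4.0 * phi + 0.5 + rng.normal(0, 0.15, phi.shape))  # synthetic poro-perm cloud

# Horizontal flow runs along the layers: arithmetic average within each layer,
# then an (equal-thickness) arithmetic average across layers.
k_layer = k.mean(axis=(0, 1))          # arithmetic average per layer
kh = k_layer.mean()                    # coarse horizontal permeability

# Vertical flow crosses the layers in series: harmonic average across layers.
kv = 1.0 / np.mean(1.0 / k_layer)

print(f"coarse kh = {kh:.1f} mD, kv = {kv:.1f} mD, kv/kh = {kv/kh:.2f}")

# Global porosity-permeability relationship (log10 k vs phi) for the simulation model
slope, intercept = np.polyfit(phi.ravel(), np.log10(k.ravel()), 1)
print(f"log10(k) ~ {slope:.2f} * phi + {intercept:.2f}")
```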

    A study of waterflood sweep efficiency in a complex viscous oil reservoir

    Get PDF
    Master's Project (M.S.), University of Alaska Fairbanks, 2014. West Sak is a multi-billion barrel viscous oil accumulation on the North Slope of Alaska. The unique geologic complexities and fluid properties of the West Sak reservoir make understanding ultimate sweep efficiency under waterflood a challenge. This project uses uncertainty modeling to evaluate the ultimate sweep efficiency in the West Sak reservoir and honors a rich dataset gathered from 30 years of development history. A sector model encompassing the area of the West Sak commercial pilot was developed and a sensitivity analysis conducted to determine the most important parameters affecting sweep efficiency. As part of this process, unique constraints were incorporated into the model, including measured saturations at the end of history and observed completion performance. The workflow for this project was documented and can be adapted for use in larger-scale models. The workflow includes the development of static cell properties that accurately represent field behavior, a preliminary history match using conventional methods, and a sensitivity analysis employing a multi-run visualization tool to effectively navigate and process large amounts of data. The main contributions of this work include the identification of key parameters affecting sweep efficiency in the West Sak oil field, a documented workflow, and increased insight into observed production behavior.
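    As a rough illustration of the sensitivity-screening step described above, the sketch below runs a one-at-a-time (tornado-style) scan over a handful of uncertain parameters against a toy proxy for sweep efficiency. The parameter names, ranges, and the proxy function are assumptions made for illustration; they are not the West Sak sector model, its parameters, or the multi-run visualization tool used in the project.

```python
# One-at-a-time sensitivity screening against a toy sweep-efficiency proxy.
import numpy as np

# Uncertain parameters as (low, base, high) ranges; all values are hypothetical
params = {
    "kv_kh_ratio":      (0.001, 0.01, 0.1),
    "oil_viscosity_cp": (50.0, 150.0, 500.0),
    "sorw":             (0.20, 0.28, 0.35),
    "fault_trans_mult": (0.01, 0.5, 1.0),
}

def sweep_efficiency_proxy(kv_kh_ratio, oil_viscosity_cp, sorw, fault_trans_mult):
    """Toy stand-in for a simulation run; returns a sweep-efficiency-like scalar."""
    mobility = 1.0 / oil_viscosity_cp
    return (0.6 * (mobility / (mobility + 0.01)) * (1 - sorw)
            * fault_trans_mult ** 0.1
            * (1 - 0.3 * np.log10(1 + 100 * kv_kh_ratio)))

base = {name: v[1] for name, v in params.items()}
base_e = sweep_efficiency_proxy(**base)

# Vary each parameter alone over its range and rank by the response spread
ranking = []
for name, (lo, _, hi) in params.items():
    e_lo = sweep_efficiency_proxy(**{**base, name: lo})
    e_hi = sweep_efficiency_proxy(**{**base, name: hi})
    ranking.append((abs(e_hi - e_lo), name, e_lo, e_hi))

for spread, name, e_lo, e_hi in sorted(ranking, reverse=True):
    print(f"{name:18s} spread={spread:.3f}  (low={e_lo:.3f}, high={e_hi:.3f})")
print(f"base case sweep proxy = {base_e:.3f}")
```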

    Integrating multiple clusters for compute-intensive applications

    Get PDF
    Multicluster grids provide one promising solution to satisfying the growing computational demands of compute-intensive applications. However, it is challenging to seamlessly integrate all participating clusters in different domains into a single virtual computational platform. In order to fully utilize the capabilities of multicluster grids, computer scientists need to deal with the issue of joining together participating autonomic systems practically and efficiently to execute grid-enabled applications. Driven by several compute-intensive applications, this thesis develops a multicluster grid management toolkit called Pelecanus to bridge the gap between users' needs and the system's heterogeneity. Application scientists will be able to conduct very large-scale execution across multiclusters with transparent QoS assurance. A novel model called DA-TC (Dynamic Assignment with Task Containers) is developed and is integrated into Pelecanus. This model uses the concept of a task container that allows one to decouple resource allocation from resource binding. It employs static load balancing for task container distribution and dynamic load balancing for task assignment. In this manner, the slowest resources become useful rather than becoming bottlenecks. A cluster abstraction is implemented, which not only provides various cluster information for the DA-TC execution model, but also can be used as a standalone toolkit to monitor and evaluate the clusters' functionality and performance. The performance of the proposed DA-TC model is evaluated both theoretically and experimentally. Results demonstrate the importance of reducing queuing time in decreasing the total turnaround time for an application. Experiments were conducted to understand the performance of various aspects of the DA-TC model. Experiments showed that our model could significantly reduce turnaround time and increase resource utilization for our targeted application scenarios. Four applications are implemented as case studies to determine the applicability of the DA-TC model. In each case the turnaround time is greatly reduced, which demonstrates that the DA-TC model is efficient for assisting application scientists in conducting their research. In addition, virtual resources were integrated into the DA-TC model for application execution. Experiments show that the execution model proposed in this thesis can work seamlessly with multiple hybrid grid/cloud resources to achieve reduced turnaround time.
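    The decoupling of resource allocation from resource binding in DA-TC can be pictured with a small sketch: containers are placed statically across clusters, while individual tasks are bound to containers dynamically from a shared queue, so slower resources still contribute instead of becoming bottlenecks. The Python sketch below illustrates that idea under assumed cluster names, speeds, and task counts; it is not Pelecanus or the DA-TC implementation itself.

```python
# Static container placement + dynamic task binding from a shared queue.
import queue
import threading
import time

tasks = queue.Queue()
for i in range(40):                       # 40 independent tasks
    tasks.put(i)

results = []
lock = threading.Lock()

def task_container(cluster_name, seconds_per_task):
    """One placeholder 'task container' pinned to a cluster; pulls tasks until empty."""
    done = 0
    while True:
        try:
            task_id = tasks.get_nowait()  # dynamic binding: next task goes to whoever is free
        except queue.Empty:
            break
        time.sleep(seconds_per_task)      # stand-in for running the task on that cluster
        done += 1
        tasks.task_done()
    with lock:
        results.append((cluster_name, done))

# Static distribution: two containers on the fast cluster, one on the slow cluster
containers = [
    threading.Thread(target=task_container, args=("fast-cluster", 0.01)),
    threading.Thread(target=task_container, args=("fast-cluster", 0.01)),
    threading.Thread(target=task_container, args=("slow-cluster", 0.05)),
]
for t in containers:
    t.start()
for t in containers:
    t.join()

print(results)   # the slow cluster completes fewer tasks but still contributes
```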

    Dynamic workflow management for large scale scientific applications

    Get PDF
    The increasing computational and data requirements of scientific applications have made the usage of large clustered systems as well as distributed resources inevitable. Although executing large applications in these environments brings increased performance, the automation of the process becomes more and more challenging. The use of complex workflow management systems has been a viable solution for this automation process. In this thesis, we study a broad range of workflow management tools and compare their capabilities, especially in terms of the dynamic and conditional structures they support, which are crucial for the automation of complex applications. We then apply some of these tools to two real-life scientific applications: i) simulation of DNA folding, and ii) reservoir uncertainty analysis. Our implementation is based on the Pegasus workflow planning tool, the DAGMan workflow execution system, the Condor-G computational scheduler, and the Stork data scheduler. The designed abstract workflows are converted to concrete workflows using Pegasus, where jobs are matched to resources; DAGMan makes sure these jobs execute reliably and in the correct order on the remote resources; Condor-G performs the scheduling for the computational tasks and Stork optimizes the data movement between different components. The integrated solution with these tools allows automation of large-scale applications and provides reliability and efficiency in executing complex workflows. We have also developed a new site selection mechanism on top of these systems, which can choose the most available computing resources for the submission of the tasks. The details of our design and implementation, as well as experimental results, are presented.
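    A site-selection step of the kind described above might, in sketch form, look like the following: score each site by its current availability and submit the next job to the best one. The Site fields, the scoring rule, and the numbers are illustrative assumptions, not the mechanism actually layered on Pegasus, DAGMan, Condor-G, and Stork.

```python
# Pick the most available compute site for each job submission.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    idle_slots: int       # currently free worker slots reported by the site
    queued_jobs: int      # jobs already waiting in that site's batch queue

def select_site(sites):
    """Return the site with the best availability score (higher is better)."""
    return max(sites, key=lambda s: s.idle_slots - s.queued_jobs)

sites = [
    Site("cluster-a", idle_slots=12, queued_jobs=30),
    Site("cluster-b", idle_slots=4,  queued_jobs=0),
    Site("cluster-c", idle_slots=64, queued_jobs=80),
]

for _ in range(3):                         # submit three jobs, one at a time
    target = select_site(sites)
    print(f"submit next task to {target.name}")
    if target.idle_slots > 0:
        target.idle_slots -= 1             # the job starts in a free slot
    else:
        target.queued_jobs += 1            # otherwise it waits in that site's queue
```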

    A Massive Data Parallel Computational Framework for Petascale/Exascale Hybrid Computer Systems

    Full text link
    Heterogeneous systems are becoming more common on High Performance Computing (HPC) systems. Even with tools like CUDA and OpenCL, it is a non-trivial task to obtain optimal performance on the GPU. Approaches to simplifying this task include Merge (a library-based framework for heterogeneous multi-core systems), Zippy (a framework for parallel execution of codes on multiple GPUs), BSGP (a new programming language for general-purpose computation on the GPU) and CUDA-lite (an enhancement to CUDA that transforms code based on annotations). In addition, efforts are underway to improve compiler tools for automatic parallelization and optimization of affine loop nests for GPUs and for automatic translation of OpenMP-parallelized codes to CUDA. In this paper we present an alternative approach: a new computational framework for the development of massively data-parallel scientific applications suitable for use on such petascale/exascale hybrid systems, built upon the highly scalable Cactus framework. As the first non-trivial demonstration of its usefulness, we successfully developed a new 3D CFD code that achieves improved performance. Comment: Parallel Computing 2011 (ParCo2011), 30 August -- 2 September 2011, Ghent, Belgium
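    The data-parallel pattern such frameworks automate (splitting a 3D domain into blocks, updating each block with a local stencil, and exchanging one-cell ghost planes between neighbours before each step) can be sketched in a few lines of NumPy. The grid size, two-block decomposition, and Jacobi stencil below are illustrative assumptions; this is not Cactus code or the paper's CFD application.

```python
# Two-block domain decomposition of a 3D Jacobi stencil with ghost-plane exchange.
import numpy as np

def jacobi_step(u):
    """7-point Jacobi relaxation on the interior of a 3D block (ghost/boundary cells untouched)."""
    v = u.copy()
    v[1:-1, 1:-1, 1:-1] = (u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1] +
                           u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1] +
                           u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:]) / 6.0
    return v

n = 32
field = np.zeros((n, n, n))
field[0, :, :] = 1.0                      # hot boundary face

# Decompose along z into two blocks, each padded with one ghost plane at the cut
lo = field[:, :, : n // 2 + 1].copy()     # block 0 plus a ghost plane from block 1
hi = field[:, :, n // 2 - 1:].copy()      # block 1 plus a ghost plane from block 0

for _ in range(50):
    # halo exchange: copy each block's last owned plane into the neighbour's ghost plane
    lo[:, :, -1] = hi[:, :, 1]
    hi[:, :, 0] = lo[:, :, -2]
    lo, hi = jacobi_step(lo), jacobi_step(hi)

# Stitch the blocks back together, dropping the ghost planes
merged = np.concatenate([lo[:, :, :-1], hi[:, :, 1:]], axis=2)
print(merged.shape, merged.max())
```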

    Many-Task Computing and Blue Waters

    Full text link
    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications. In particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
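    The task-graph structure described here can be sketched as a tiny dependency-driven dispatcher: tasks are submitted as soon as all of their inputs are complete. The example DAG, the thread pool standing in for compute resources, and the run_task stub below are illustrative assumptions, not Blue Waters middleware.

```python
# Dispatch a graph of discrete tasks as their dependencies become satisfied.
from concurrent.futures import ThreadPoolExecutor

# task -> set of tasks it depends on (the edges of the task graph)
deps = {
    "split": set(),
    "sim_a": {"split"},
    "sim_b": {"split"},
    "sim_c": {"split"},
    "merge": {"sim_a", "sim_b", "sim_c"},
}

def run_task(name):
    return f"{name} done"               # stand-in for a short compute task

done = set()
with ThreadPoolExecutor(max_workers=4) as pool:
    while len(done) < len(deps):
        # dispatch every task whose dependencies are all already complete
        ready = [t for t, d in deps.items() if t not in done and d <= done]
        futures = {pool.submit(run_task, t): t for t in ready}
        for fut, t in futures.items():
            print(fut.result())
            done.add(t)
```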