
    Interfacial Area Transport Equation Models and Validation against High Resolution Experimental Data for Small and Large Diameter Vertical Pipes.

    For analyses of Nuclear Power Plants, the current state-of-the-art model for predicting the behavior of two-phase flows is the two-fluid model. In the two-fluid model, balance equations are coupled together through transfer terms that depend on the area of the interface between liquid and gas. Efforts in the past have been focused on the development of an interfacial area transport equation model (IATE) in order to eliminate the drawbacks of static flow regime maps currently used in best-estimate thermal-hydraulic system codes. The IATE attempts to model the dynamic evolution of the gas/liquid interface by accounting for the different interaction mechanisms (i.e. bubble break-up and coalescence). The further development and validation of IATE models has been hindered by the lack of adequate experimental databases in regions beyond the bubbly flow regime. At the TOPFLOW test facility, experiments utilizing wire-mesh sensors have been performed over a wide range of flow conditions, establishing a database of high resolution (in space and time) data. The objective of the dissertation is to evaluate and improve current IATE models using the TOPFLOW database and to assess the uncertainty in the reconstructed interfacial area measured using wire-mesh sensors. The small-diameter Fu-Ishii model was assessed against the TOPFLOW 52 mm data. The model was found to perform well (within the experimental uncertainty of ±10%) for low void fractions. At high void fractions, the bubble interaction mechanism responsible for poor performance of the model was identified. A genetic algorithm was then used to quantify the correct incidence of this mechanism on the overall evolution of the interfacial area concentration along the pipe vertical axis. The large-diameter Smith-Schlegel model was assessed against the TOPFLOW 198 mm data. This model was also found to perform well at low void fractions. 
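For context, the one-group interfacial area transport equation has the generic form below. This is the standard form from the literature, not a reproduction of the specific models assessed in the dissertation; the mechanism-specific source and sink terms φ_j are where models such as Fu-Ishii and Smith-Schlegel differ:

```latex
% a_i: interfacial area concentration, alpha: void fraction,
% v_i: interfacial velocity, v_g: gas velocity. The first right-hand-side
% term accounts for gas expansion; each phi_j is a source/sink from a bubble
% interaction mechanism (break-up, coalescence) or from phase change.
\frac{\partial a_i}{\partial t} + \nabla \cdot \left( a_i \mathbf{v}_i \right)
  = \frac{2}{3} \frac{a_i}{\alpha}
    \left( \frac{\partial \alpha}{\partial t}
         + \nabla \cdot \left( \alpha \mathbf{v}_g \right) \right)
  + \sum_j \phi_j
```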
At high void fractions, the good agreement between the model predictions and the experimental data was found to be due to a compensation of errors. Studies using the genetic algorithm indicated significant performance improvement for the DN200 data. However, the improvement in prediction capabilities could not be reproduced when the model was assessed against independent large-diameter databases available in the literature.
    PhD, Nuclear Engineering and Radiological Sciences, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/133279/1/akshayjd_1.pd
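The genetic-algorithm calibration described above can be sketched in miniature. This is only an illustration under stated assumptions, not the dissertation's method: a real-coded GA tunes one hypothetical coefficient C of a toy interfacial-area decay model a(z) = a0·exp(−C z) against synthetic "measured" data, whereas the actual study calibrates coefficients of IATE bubble-interaction source terms against TOPFLOW profiles.

```python
import math
import random

random.seed(0)

Z = [0.0, 0.5, 1.0, 1.5, 2.0]      # axial measurement positions [m] (made up)
A0, C_TRUE = 200.0, 0.8            # inlet a_i [1/m] and assumed true coefficient
measured = [A0 * math.exp(-C_TRUE * z) for z in Z]

def fitness(c):
    """Negative sum of squared errors between model and measured profile."""
    return -sum((A0 * math.exp(-c * z) - m) ** 2 for z, m in zip(Z, measured))

# Real-coded GA: elitist selection, blend crossover, Gaussian mutation.
pop = [random.uniform(0.0, 2.0) for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                         # elitism: keep the 10 fittest
    children = []
    while len(children) < 20:
        p1, p2 = random.sample(parents, 2)
        c = 0.5 * (p1 + p2) + random.gauss(0.0, 0.05)  # crossover + mutation
        children.append(min(max(c, 0.0), 2.0))         # clamp to bounds
    pop = parents + children

best = max(pop, key=fitness)       # converges toward C_TRUE
```

The elitist step guarantees the best candidate is never lost, so the fitness of the population's best member is monotonically non-decreasing across generations.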

    Numerical Analysis of Two Phase Flow and Heat Transfer in Condensers

    Steam surface condensers are commonly used in the power generation industry and their performance significantly affects the efficiency of the power plant. Therefore, it is vital to acquire a better understanding of the complex phenomena occurring inside condensers. A general three-dimensional numerical model is developed in this study to simulate the two-phase flow and heat transfer inside full-scale industrial condensers with irregular tube bundle shapes. The Eulerian-Eulerian two-phase model is selected to simulate gas and liquid flows and the interaction between them. A porous media approach is adopted to model the presence of the large number of tubes in the condenser. The effect of the turbulence on the primary phase is accounted for by solving the transport equations for turbulent kinetic energy and dissipation rate. Various types of turbulence models are evaluated to select the best model for the condenser analysis. Also, modified k-ε and RNG k-ε models are proposed to model the flow and heat transfer in condensers by adding the corresponding terms to the transport equations of the turbulence model to account for the effects of the tube bundle and condensate droplets on the primary phase turbulence, momentum and heat transfer. The proposed model provides an excellent match with the experimental data and a significant improvement over the previous models. Furthermore, the proposed numerical model is coupled with a novel swarm intelligence multi-objective optimization algorithm to evaluate the performance of new design candidates and introduce a set of condenser designs based on various input parameters and objective functions.
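The turbulence-model modification described above starts from the standard high-Reynolds-number k-ε transport equations. The study's specific tube-bundle and droplet terms are its own contribution and are not reproduced here; they are indicated only generically as S_k and S_ε:

```latex
% Standard k-epsilon model; S_k and S_eps stand in generically for the
% added tube-bundle and condensate-droplet effects.
\frac{\partial (\rho k)}{\partial t} + \nabla \cdot (\rho k \mathbf{U})
  = \nabla \cdot \left[ \left( \mu + \frac{\mu_t}{\sigma_k} \right) \nabla k \right]
  + P_k - \rho \varepsilon + S_k
\qquad
\frac{\partial (\rho \varepsilon)}{\partial t}
  + \nabla \cdot (\rho \varepsilon \mathbf{U})
  = \nabla \cdot \left[ \left( \mu + \frac{\mu_t}{\sigma_\varepsilon} \right)
      \nabla \varepsilon \right]
  + \frac{\varepsilon}{k}
    \left( C_{\varepsilon 1} P_k - C_{\varepsilon 2} \rho \varepsilon \right)
  + S_\varepsilon,
\qquad
\mu_t = \rho C_\mu \frac{k^2}{\varepsilon}
```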

    Data-driven deconvolution for large eddy simulations of Kraichnan turbulence

    In this article, we demonstrate the use of artificial neural networks as optimal maps which are utilized for convolution and deconvolution of coarse-grained fields to account for sub-grid scale turbulence effects. We demonstrate that an effective eddy-viscosity is predicted by our purely data-driven large eddy simulation framework without explicit utilization of phenomenological arguments. In addition, our data-driven framework precludes the knowledge of true sub-grid stress information during the training phase due to its focus on estimating an effective filter and its inverse so that grid-resolved variables may be related to direct numerical simulation data statistically. The proposed predictive framework is also combined with a statistical truncation mechanism for ensuring numerical realizability in an explicit formulation. Through this we seek to unite structural and functional modeling strategies for modeling non-linear partial differential equations using reduced degrees of freedom. Both a priori and a posteriori results are shown for a two-dimensional decaying turbulence case in addition to a detailed description of validation and testing. A hyperparameter sensitivity study also shows that the proposed dual network framework simplifies learning complexity and is viable with exceedingly simple network architectures. Our findings indicate that the proposed framework approximates a robust and stable sub-grid closure which compares favorably to the Smagorinsky and Leith hypotheses for capturing the theoretical k^{-3} scaling in Kraichnan turbulence.
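The article's deconvolution map is a trained neural network; as a minimal non-ML analogue (an assumption for illustration, not the paper's method), the classical van Cittert iteration v_{k+1} = v_k + (ū − G v_k) approximately inverts a known low-pass filter G. The sketch below applies it to a three-point box filter on a periodic 1D signal:

```python
import math

N = 64
# "True" field: a resolved mode plus a fine-scale mode (synthetic data).
u = [math.sin(2 * math.pi * i / N) + 0.3 * math.sin(8 * math.pi * i / N)
     for i in range(N)]

def box_filter(f):
    """Periodic three-point box filter: the assumed low-pass kernel G."""
    return [(f[i - 1] + f[i] + f[(i + 1) % N]) / 3.0 for i in range(N)]

ubar = box_filter(u)                        # coarse-grained (grid-resolved) field

# Van Cittert approximate deconvolution: fixed-point iteration for G v = ubar.
v = list(ubar)
for _ in range(5):
    Gv = box_filter(v)
    v = [vi + (ub - gv) for vi, ub, gv in zip(v, ubar, Gv)]

err_bar = max(abs(a - b) for a, b in zip(ubar, u))  # filtering error
err_adm = max(abs(a - b) for a, b in zip(v, u))     # error after deconvolution
```

Per Fourier mode with filter transfer coefficient λ, the error after n iterations scales as (1 − λ)^{n+1}, so the fine-scale content attenuated by the filter is largely recovered after a few sweeps.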

    Exploring the adoption of a conceptual data analytics framework for subsurface energy production systems: a study of predictive maintenance, multi-phase flow estimation, and production optimization

    As technology continues to advance and become more integrated in the oil and gas industry, a vast amount of data is now prevalent across various scientific disciplines, providing new opportunities to gain insightful and actionable information. The convergence of digital transformation with the physics of fluid flow through porous media and pipelines has driven the advancement and application of machine learning (ML) techniques to extract further value from this data. As a result, digital transformation and its associated machine-learning applications have become a new area of scientific investigation. The transformation of brownfields into digital oilfields can aid in energy production by accomplishing various objectives, including increased operational efficiency, production optimization, collaboration, data integration, decision support, and workflow automation. 
This work aims to present a framework of these applications, specifically through the implementation of virtual sensing, predictive analytics using predictive maintenance on production hydraulic systems (with a focus on electrical submersible pumps), and prescriptive analytics for production optimization in steam and waterflooding projects. In terms of virtual sensing, the accurate estimation of multi-phase flow rates is crucial for monitoring and improving production processes. This study presents a data-driven approach for calculating multi-phase flow rates using sensor measurements located in electrical submersible pumped wells. An exhaustive exploratory data analysis is conducted, including a univariate study of the target outputs (liquid rate and water cut), a multivariate study of the relationships between inputs and outputs, and data grouping based on principal component projections and clustering algorithms. Feature prioritization experiments are performed to identify the most influential parameters in the prediction of flow rates. Model comparison is done using the mean absolute error, mean squared error and coefficient of determination. The results indicate that the CNN-LSTM network architecture is particularly effective in time series analysis for ESP sensor data, as the 1D-CNN layers are capable of extracting features and generating informative representations of time series data automatically. Subsequently, this study presents a methodology for implementing predictive maintenance on artificial lift systems, specifically regarding the maintenance of Electrical Submersible Pumps (ESPs). Conventional maintenance practices for ESPs require extensive resources and manpower and are often initiated through reactive monitoring of multivariate sensor data. 
To address this issue, the study employs principal component analysis (PCA) and extreme gradient boosting trees (XGBoost) to analyze real-time sensor data and predict potential failures in ESPs. PCA is utilized as an unsupervised technique and its output is further processed by the XGBoost model for prediction of system status. The resulting predictive model has been shown to provide signals of potential failures up to seven days in advance, with an F1 score greater than 0.71 on the test set. In addition to the data-driven modeling approach, the present study also incorporates model-free reinforcement learning (RL) algorithms to aid in decision-making in production optimization. The task of determining the optimal injection strategy poses challenges due to the complexity of the underlying dynamics, including nonlinear formulation, temporal variations, and reservoir heterogeneity. To tackle these challenges, the problem was reformulated as a Markov decision process and RL algorithms were employed to determine actions that maximize production yield. The results of the study demonstrate that the RL agent was able to significantly enhance the net present value (NPV) by continuously interacting with the environment and iteratively refining the dynamic process through multiple episodes. This showcases the potential for RL algorithms to provide effective and efficient solutions for complex optimization problems in the production domain. In conclusion, this study represents an original contribution to the field of data-driven applications in subsurface energy systems. It proposes a data-driven method for determining multi-phase flow rates in electrical submersible pumped (ESP) wells utilizing sensor measurements. The methodology includes conducting exploratory data analysis, conducting experiments to prioritize features, and evaluating models based on mean absolute error, mean squared error, and coefficient of determination. 
The findings indicate that a convolutional neural network-long short-term memory (CNN-LSTM) network is an effective approach for time series analysis in ESPs. In addition, the study implements principal component analysis (PCA) and extreme gradient boosting trees (XGBoost) to perform predictive maintenance on ESPs and anticipate potential failures up to a seven-day horizon. Furthermore, the study applies model-free reinforcement learning (RL) algorithms to aid decision-making in production optimization and enhance net present value (NPV).
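The PCA-then-classifier pipeline described above can be sketched with pure Python. This is a simplified stand-in under loud assumptions: a one-component PCA is computed by power iteration on a 2-D covariance matrix, and a simple score threshold replaces the XGBoost trees; the "sensor" rows are synthetic, not field data.

```python
import random

random.seed(1)

# Synthetic sensor rows: healthy readings cluster near (0, 0); pre-failure
# readings drift along the direction (1, 1). Labels: 0 healthy, 1 failing.
healthy = [(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(100)]
failing = [(random.gauss(2, 0.3), random.gauss(2, 0.3)) for _ in range(100)]
X = healthy + failing
y = [0] * 100 + [1] * 100

# Center the data and form the 2x2 covariance matrix.
mx = sum(p[0] for p in X) / len(X)
my = sum(p[1] for p in X) / len(X)
Xc = [(a - mx, b - my) for a, b in X]
cxx = sum(a * a for a, _ in Xc) / len(Xc)
cxy = sum(a * b for a, b in Xc) / len(Xc)
cyy = sum(b * b for _, b in Xc) / len(Xc)

# Leading principal component by power iteration.
v = (1.0, 0.0)
for _ in range(50):
    w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
    n = (w[0] ** 2 + w[1] ** 2) ** 0.5
    v = (w[0] / n, w[1] / n)

scores = [a * v[0] + b * v[1] for a, b in Xc]   # PC1 scores (reduced features)

# Stand-in classifier: threshold PC1 at the midpoint of the class means
# (evaluated on the training data itself -- a sketch, not a validated model).
m0 = sum(s for s, lbl in zip(scores, y) if lbl == 0) / 100
m1 = sum(s for s, lbl in zip(scores, y) if lbl == 1) / 100
thr, sign = 0.5 * (m0 + m1), (1 if m1 > m0 else -1)
pred = [1 if sign * (s - thr) > 0 else 0 for s in scores]
accuracy = sum(p == t for p, t in zip(pred, y)) / len(y)
```

In the study itself the PCA output feeds a supervised XGBoost model; the threshold here merely shows why a low-dimensional projection can already separate the two operating regimes.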

    Adaptive swarm optimisation assisted surrogate model for pipeline leak detection and characterisation.

    Pipelines are often subject to leakage due to ageing, corrosion and weld defects. It is difficult to avoid pipeline leakage as the sources of leaks are diverse. Various pipeline leakage detection methods, including fibre optic, pressure point analysis and numerical modelling, have been proposed during the last decades. One major issue of these methods is distinguishing the leak signal without giving false alarms. Considering that the data obtained by these traditional methods are digital in nature, machine learning models have been adopted to improve the accuracy of pipeline leakage detection. However, most of these methods rely on a large training dataset for training accurate models, and it is difficult to obtain experimental data for accurate model training. Some of the reasons include the huge cost of an experimental setup for data collection to cover all possible scenarios, poor accessibility to the remote pipeline, and labour-intensive experiments. Moreover, datasets constructed from data acquired in laboratory or field tests are usually imbalanced, as leakage data samples are generated from artificial leaks. Computational fluid dynamics (CFD) offers the benefit of providing detailed and accurate pipeline leakage modelling, which may be difficult to obtain experimentally or with the aid of an analytical approach. However, CFD simulation is typically time-consuming and computationally expensive, limiting its applicability in real-time applications. In order to alleviate the high computational cost of CFD modelling, this study proposed a novel data sampling optimisation algorithm, called the Adaptive Particle Swarm Optimisation Assisted Surrogate Model (PSOASM), to systematically select simulation scenarios in an adaptive and optimised manner. The algorithm was designed to place a new sample in poorly sampled regions of the parameter space of parametrised leakage scenarios, which uniform sampling methods may easily miss. 
This was achieved using two criteria: population density of the training dataset and model prediction fitness value. The model prediction fitness value was used to enhance the global exploration capability of the surrogate model, while the population density of training data samples is beneficial to the local accuracy of the surrogate model. The proposed PSOASM was compared with four conventional sequential sampling approaches and tested on six commonly used benchmark functions in the literature. Different machine learning algorithms were explored with the developed model. The effect of the initial sample size on surrogate model performance was evaluated. Next, pipeline leakage detection analysis - with much emphasis on a multiphase flow system - was investigated in order to find the flow field parameters that provide pertinent indicators in pipeline leakage detection and characterisation. Plausible leak scenarios which may occur in the field were simulated for the gas-liquid pipeline using a three-dimensional RANS CFD model. The perturbation of the pertinent flow field indicators for different leak scenarios is reported, which is expected to help in improving the understanding of multiphase flow behaviour induced by leaks. The results of the simulations were validated against the latest experimental and numerical data reported in the literature. The proposed surrogate model was later applied to pipeline leak detection and characterisation. The CFD modelling results showed that fluid flow parameters are pertinent indicators in pipeline leak detection. It was observed that upstream pipeline pressure could serve as a critical indicator for detecting leakage, even if the leak size is small. In contrast, the downstream flow rate is a dominant leakage indicator if flow rate monitoring is chosen for leak detection. 
The results also reveal that when two leaks of different sizes co-occur in a single pipe, detecting the small leak becomes difficult if its size is below 25% of the large leak size. However, in the event of a double leak with equal dimensions, the leak closer to the pipe upstream is easier to detect. The results from all the analyses demonstrate the PSOASM algorithm's superiority over the well-known sequential sampling schemes employed for evaluation. The test results show that the PSOASM algorithm can be applied for pipeline leak detection with limited training datasets and provides a general framework for improving computational efficiency using adaptive surrogate modelling in various real-life applications.
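The two-criterion adaptive sampling idea above (sample density plus model fitness) can be illustrated in one dimension. This simplified stand-in is not PSOASM: there is no particle swarm, the "simulation" is a toy function, and for brevity the true error is used as the fitness proxy, which a real surrogate cannot access.

```python
import math

def target(x):
    """Stand-in for the expensive CFD simulation (a toy function here)."""
    return math.sin(3 * x)

samples = [0.0, 1.0, 2.0]                    # initial design points
values = [target(x) for x in samples]

def surrogate(x):
    """Piecewise-linear interpolant through the current samples."""
    pts = sorted(zip(samples, values))
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return pts[0][1] if x < pts[0][0] else pts[-1][1]

def acquisition(x):
    density = min(abs(x - s) for s in samples)  # favour sparsely sampled regions
    error = abs(target(x) - surrogate(x))       # favour poorly fitted regions
    return density + error

for _ in range(5):                              # adaptively add five samples
    candidates = [i * 2.0 / 200 for i in range(201)]
    best = max(candidates, key=acquisition)
    samples.append(best)
    values.append(target(best))

grid = [i * 2.0 / 100 for i in range(101)]
max_err = max(abs(target(x) - surrogate(x)) for x in grid)  # post-sampling error
```

Because existing sample points score zero on both criteria, the rule never re-samples a point it already has, and large gaps or badly fitted regions attract the next "simulation" run first.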

    How to Model Condensate Banking in a Simulation Model to Get Reliable Forecasts? Case Story of Elgin/Franklin


    Doctor of Philosophy

    The objective of this dissertation is to estimate possible leakage pathways such as abandoned wells and fault zones in the deep subsurface for CO2 storage using inverse analysis. Leakage pathways through a cap rock may cause CO2 to migrate into the layers above the cap rock. An inverse analysis using iTOUGH2 was applied to estimate possible leakage pathways using pressure anomalies in the overlying formation induced by brine and/or CO2 leaks. Prior to applying inverse analysis, sensitivity analysis and forward modeling were conducted. In addition, an inverse model was developed for single-phase flow and it was applied to the leakage pathway estimation in a brine/CO2 system. Migration of brine/CO2 through the leakage pathway was simulated in generic homogeneous and heterogeneous domains. The increased pressure gradient due to CO2 injection continuously induced brine leaks through the leakage pathway. Capillary pressure was induced by the migration of CO2 along the leakage pathway saturated by brine. Pressure anomalies due to capillary pressures were propagated to the entire overlying formation. The sensitivity analysis was focused on how the hydrogeological properties affect the pressure signals at monitoring wells. Parameter estimation using the iTOUGH2 model was applied to detect locations of leakage pathways in homogeneous and heterogeneous model domains. For homogeneous models, the parameterization of uncertain permeability in an overlying formation could improve location estimation accuracy. Residual analysis illustrated that pressure anomalies in the overlying formation induced by leaks are critical information for the leakage pathway estimation. For heterogeneous models, the calibration of renormalized permeability values could reduce systematic modeling errors and should improve the leakage pathway location estimation accuracy. The weighting factors significantly influenced the accuracy of the leakage pathway estimation. 
The developed inverse model was applied to estimate the leakage pathway in a brine/CO2 system using pressure anomalies induced by only brine leaks. To estimate a possible leakage pathway, the developed inverse model calibrated each integrated parameter (of both cross-sectional area and vertical hydraulic conductivity) of the initial guesses of the leakage pathway. This application can provide a warning before CO2 leaks occur, and will be useful in mitigating the risk of CO2 leaks.
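The inverse-analysis idea, fitting a leak location so that simulated pressure anomalies match observations at monitoring wells, can be sketched as follows. All values are hypothetical: the forward model is an assumed 1/(1 + r) anomaly decay rather than a TOUGH2 simulation, and the inversion is a brute-force search where iTOUGH2 would use gradient-based parameter estimation.

```python
wells = [0.0, 2.0, 5.0, 9.0]                 # monitoring well positions [km]

def forward(leak_x):
    """Predicted pressure anomaly at each monitoring well (toy model)."""
    return [1.0 / (1.0 + abs(w - leak_x)) for w in wells]

true_leak = 3.5                              # synthetic ground truth [km]
observed = forward(true_leak)                # noise-free synthetic observations

def misfit(leak_x):
    """Sum of squared residuals between prediction and observation."""
    return sum((p - o) ** 2 for p, o in zip(forward(leak_x), observed))

# Inverse step: search candidate leak locations for the minimum misfit.
candidates = [i * 10.0 / 1000 for i in range(1001)]
estimated = min(candidates, key=misfit)      # best-fitting leak location
```

With noise-free data the misfit vanishes only at the true location, which is why the residual analysis in the abstract treats pressure anomalies in the overlying formation as the critical information for pathway estimation.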

    Theoretical Investigation of Immiscible Multiphase Flow Mechanisms in Porous Media with Capillarity

    The correct description of multiphase flow mechanisms in porous media is an important aspect of research in fluid mechanics, water resources and petroleum engineering. A thorough understanding of these mechanisms is important for many applications such as waterflooding, CO2 sequestration, and enhanced oil recovery. Unlike single-phase flow, which is well described by Darcy's law and has been well understood for over 160 years, multiphase flow requires more mathematical involvement, with more complex fluid interactions that inevitably incorporate relative permeability and capillary pressure into its description. For typical two-phase flow problems, especially at the conventional reservoir scale, the Buckley-Leverett flow equations are normally applied with negligible capillarity to capture the flow behavior. However, as we extend our studies to higher resolution using multiscale calculations, or evaluate tighter or higher-contrast heterogeneous reservoirs, capillarity becomes increasingly important. Also, for situations such as spontaneous imbibition, in which the non-wetting fluid is displaced by the invading wetting fluid, it is possible that the capillary force becomes the dominant driving force, with negligible viscous and gravity contributions. To better characterize the multiphase flow mechanism with capillarity, in this research a detailed investigation is carried out in pursuit of a more rigorous mathematical description and broader applicability. The numerical simulation of the described problem has long been a subject of interest, with numerous publications addressing it. Departing from the traditional methods where numerical simulation is used, we pursue an analytical description of the flow behavior using a Lagrangian approach, which is better suited to describing these frontal-propagation problems. Also, the analytical solution tends to give more insight into the underlying physical characteristics of the problem itself. 
As one of the most important outcomes, the methodology derives a new dimensionless capillary group that characterizes the relative strength of capillarity at the continuum scale, based on the analytical solution. Knowledge of this can be used for stability analyses, with future potential application in the design of computational grids to properly resolve the capillary physics.
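The governing equation being analyzed is, in its classical 1D form, the Buckley-Leverett saturation equation extended with a capillary diffusion term. This is the standard starting point from the literature; the new dimensionless capillary group itself is the thesis's result and is not reproduced here:

```latex
% S_w: water saturation, phi: porosity, u_t: total velocity,
% lambda_a = k k_{ra}/mu_a: phase mobilities, P_c: capillary pressure.
% Since dP_c/dS_w < 0, the capillary term acts as a nonlinear diffusion.
\phi \frac{\partial S_w}{\partial t}
  + u_t \frac{\partial f_w(S_w)}{\partial x}
  = \frac{\partial}{\partial x}
    \left( D(S_w) \frac{\partial S_w}{\partial x} \right),
\qquad
f_w = \frac{\lambda_w}{\lambda_w + \lambda_o},
\qquad
D(S_w) = - f_w \, \lambda_o \, \frac{d P_c}{d S_w}
```

Setting D = 0 recovers the hyperbolic Buckley-Leverett equation whose characteristics motivate the Lagrangian treatment described above.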
