
    Implicit high-order gas-kinetic schemes for compressible flows on three-dimensional unstructured meshes

    In previous studies, high-order gas-kinetic schemes (HGKS) have achieved success for unsteady flows on three-dimensional unstructured meshes. In this paper, to accelerate the convergence for steady flows, implicit non-compact and compact HGKS are developed. For the non-compact scheme, a simple weighted essentially non-oscillatory (WENO) reconstruction is used to achieve spatial accuracy, where the reconstruction stencils contain two levels of neighboring cells. Combined with the nonlinear generalized minimal residual (GMRES) method, the implicit non-compact HGKS is developed. To improve the resolution and parallelism of the non-compact HGKS, an implicit compact HGKS is developed with Hermite WENO (HWENO) reconstruction, in which the reconstruction stencils contain only one level of neighboring cells. The cell-averaged conservative variables are likewise updated with the GMRES method. Simultaneously, a simple strategy is used to update the cell-averaged gradients by the time evolution of the spatial-temporal coupled gas distribution function. To accelerate the computation, both implicit schemes are implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). A variety of numerical examples, from subsonic to supersonic flows, are presented to validate the accuracy, robustness and efficiency of the schemes for both inviscid and viscous flows. Comment: arXiv admin note: text overlap with arXiv:2203.0904
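    As a rough illustration of the implicit steady-state idea described above (not the authors' HGKS; the residual R, pseudo-time step and tolerances are placeholders), the sketch below drives R(q) = 0 by implicit pseudo-time marching, solving each linear system with matrix-free GMRES via finite-difference Jacobian-vector products:

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def implicit_steady_solve(R, q0, dtau=1e2, eps=1e-7, n_steps=50):
        # Backward-Euler pseudo-time step: solve (I/dtau + dR/dq) dq = -R(q).
        q = q0.copy()
        n = q.size
        for _ in range(n_steps):
            r = R(q)
            if np.linalg.norm(r) < 1e-12:
                break
            def matvec(v, q=q, r=r):
                # Jacobian-vector product approximated by a residual finite difference.
                return v / dtau + (R(q + eps * v) - r) / eps
            A = LinearOperator((n, n), matvec=matvec)
            dq, info = gmres(A, -r, atol=1e-10)
            q += dq
        return q

    # Toy usage: steady state of dq/dt = -q + sin(x) on a periodic grid (exact: q = sin(x)).
    x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    q_star = implicit_steady_solve(lambda q: -q + np.sin(x), np.zeros_like(x))
    print(np.max(np.abs(q_star - np.sin(x))))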

    A family of total Lagrangian Petrov-Galerkin Cosserat rod finite element formulations

    The standard in rod finite element formulations is the Bubnov-Galerkin projection method, where the test functions arise from a consistent variation of the ansatz functions. This approach becomes increasingly complex when highly nonlinear ansatz functions are chosen to approximate the rod's centerline and cross-section orientations. Using a Petrov-Galerkin projection method, we propose a whole family of rod finite element formulations where the nodal generalized virtual displacements and generalized velocities are interpolated instead of using the consistent variations and time derivatives of the ansatz functions. This approach leads to a significant simplification of the expressions in the discrete virtual work functionals. In addition, independent strategies can be chosen for interpolating the nodal centerline points and cross-section orientations. We discuss three objective interpolation strategies and give an in-depth analysis concerning locking and convergence behavior for the whole family of rod finite element formulations. Comment: arXiv admin note: text overlap with arXiv:2301.0559
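    To make the orientation-interpolation point concrete, here is a minimal sketch of one objective strategy (interpolation of the relative rotation between two nodal cross-section frames); it is a generic illustration, not necessarily one of the three strategies analysed in the paper:

    import numpy as np
    from scipy.spatial.transform import Rotation as Rot

    def interpolate_orientation(R_A, R_B, s):
        # R(s) = R_A * exp(s * log(R_A^T R_B)); frame-indifferent since it only uses
        # the relative rotation between the two nodal frames.
        rel = Rot.from_matrix(R_A).inv() * Rot.from_matrix(R_B)
        phi = rel.as_rotvec()
        return R_A @ Rot.from_rotvec(s * phi).as_matrix()

    # Usage: orientation halfway between a node rotated 90 deg about z and one about x.
    R_A = Rot.from_rotvec([0.0, 0.0, np.pi / 2]).as_matrix()
    R_B = Rot.from_rotvec([np.pi / 2, 0.0, 0.0]).as_matrix()
    print(interpolate_orientation(R_A, R_B, 0.5))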

    Optimal Control of the Landau-de Gennes Model of Nematic Liquid Crystals

    We present an analysis and numerical study of an optimal control problem for the Landau-de Gennes (LdG) model of nematic liquid crystals (LCs), which are a crucial component in modern technology. LCs exhibit long-range orientational order in their nematic phase, which is represented by a tensor-valued (spatial) order parameter Q = Q(x). Equilibrium LC states correspond to Q functions that (locally) minimize an LdG energy functional. Thus, we consider an L^2-gradient flow of the LdG energy that allows for finding local minimizers and leads to a semi-linear parabolic PDE, for which we develop an optimal control framework. We then derive several a priori estimates for the forward problem, including continuity in space-time, that allow us to prove existence of optimal boundary and external "force" controls and to derive optimality conditions through the use of an adjoint equation. Next, we present a simple finite element scheme for the LdG model and a straightforward optimization algorithm. We illustrate optimization of LC states through numerical experiments in two and three dimensions that seek to place LC defects (where Q(x) = 0) in prescribed locations, which is desirable in applications. Comment: 26 pages, 9 figures
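    As a minimal sketch of such an L^2-gradient flow (under strong simplifications: a reduced two-dimensional Q-tensor Q = [[q1, q2], [q2, -q1]], a one-constant elastic term, periodic finite differences and explicit Euler instead of the paper's finite element scheme, and no control terms), the energy E(Q) = ∫ (L/2)|∇Q|^2 + (a/2) tr(Q^2) + (c/4) (tr Q^2)^2 dx can be relaxed as follows:

    import numpy as np

    def lap(u, h):
        # Five-point periodic Laplacian.
        return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2

    def gradient_flow(q1, q2, h, L=1.0, a=-1.0, c=1.0, dt=2e-5, steps=2000):
        for _ in range(steps):
            tr_q2 = 2.0 * (q1**2 + q2**2)                    # tr(Q^2)
            dE_dq1 = -2.0 * L * lap(q1, h) + 2.0 * a * q1 + 2.0 * c * tr_q2 * q1
            dE_dq2 = -2.0 * L * lap(q2, h) + 2.0 * a * q2 + 2.0 * c * tr_q2 * q2
            q1, q2 = q1 - dt * dE_dq1, q2 - dt * dE_dq2      # Q_t = -grad_{L^2} E(Q)
        return q1, q2

    # Usage: random initial data on a 32x32 periodic grid relaxes toward a local minimiser.
    rng = np.random.default_rng(0)
    q1, q2 = gradient_flow(0.1 * rng.standard_normal((32, 32)),
                           0.1 * rng.standard_normal((32, 32)), h=1.0 / 32)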

    Regularised Learning with Selected Physics for Power System Dynamics

    Due to increasing system stability issues caused by technological changes in power system equipment, assessing the dynamic security of power systems under changing operating conditions (OCs) is now crucial. To address the computational time problem of conventional dynamic security assessment tools, many machine learning (ML) approaches have been proposed and well studied in this context. However, these learned models rely only on data and thus miss valuable information offered by the physical system. To this end, this paper focuses on combining the power system dynamical model with conventional ML. Going beyond classic Physics Informed Neural Networks (PINNs), this paper proposes Selected Physics Informed Neural Networks (SPINNs) to predict the system dynamics for varying OCs. A two-level structure of feed-forward NNs is proposed, where the first NN predicts the generator bus rotor angles (system states) and the second NN learns to adapt to varying OCs. A case study on the IEEE 9-bus system shows that considering selected physics in model training reduces the amount of training data needed. Moreover, the trained model effectively predicted long-term dynamics beyond the time scale of the collected training dataset (extrapolation).
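    One plausible way to wire such a two-level structure is sketched below (PyTorch); the OC-dependent affine correction of the base prediction, the layer sizes and the input dimensions are assumptions, not the paper's exact architecture, and the selected-physics loss terms are omitted:

    import torch
    import torch.nn as nn

    class TwoLevelSPINN(nn.Module):
        def __init__(self, n_gens=3, oc_dim=6, hidden=64):
            super().__init__()
            # Level 1: time -> generator rotor angles at a nominal operating condition.
            self.base = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                      nn.Linear(hidden, hidden), nn.Tanh(),
                                      nn.Linear(hidden, n_gens))
            # Level 2: operating condition -> per-generator scale and shift of the base output.
            self.adapt = nn.Sequential(nn.Linear(oc_dim, hidden), nn.Tanh(),
                                       nn.Linear(hidden, 2 * n_gens))
            self.n_gens = n_gens

        def forward(self, t, oc):
            delta_base = self.base(t)                        # (batch, n_gens)
            scale, shift = self.adapt(oc).split(self.n_gens, dim=-1)
            return (1.0 + scale) * delta_base + shift        # OC-adapted rotor angles

    # Usage: a batch of 8 time points and OC vectors (the IEEE 9-bus system has 3 generators).
    model = TwoLevelSPINN()
    angles = model(torch.rand(8, 1), torch.rand(8, 6))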

    Safe Zeroth-Order Optimization Using Quadratic Local Approximations

    This paper addresses black-box smooth optimization problems, where the objective and constraint functions are not explicitly known but can be queried. The main goal of this work is to generate a sequence of feasible points converging towards a KKT primal-dual pair. Assuming prior knowledge of the smoothness of the unknown objective and constraints, we propose a novel zeroth-order method that iteratively computes quadratic approximations of the constraint functions, constructs local feasible sets and optimizes over them. Under some mild assumptions, we prove that this method returns an η-KKT pair (a property reflecting how close a primal-dual pair is to the exact KKT conditions) within O(1/η^2) iterations. Moreover, we numerically show that our method can achieve faster convergence compared with some state-of-the-art zeroth-order approaches. The effectiveness of the proposed approach is also illustrated by applying it to nonconvex optimization problems in optimal control and power system operation. Comment: arXiv admin note: text overlap with arXiv:2211.0264
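    A minimal sketch of this idea (not the paper's algorithm): with known smoothness constants M_i, the quadratic upper bound g_i(x) + ∇g_i(x)·d + (M_i/2)||d||^2 ≤ 0 defines a local feasible set, and stepping only within it keeps every iterate feasible while gradients are estimated purely from function queries:

    import numpy as np

    def fd_grad(fun, x, eps=1e-5):
        # Central finite-difference gradient estimate (zeroth-order oracle only).
        g = np.zeros_like(x)
        for j in range(x.size):
            e = np.zeros_like(x)
            e[j] = eps
            g[j] = (fun(x + e) - fun(x - e)) / (2.0 * eps)
        return g

    def safe_zo_step(f, gs, Ms, x, step_cap=0.5):
        u = -fd_grad(f, x)
        u /= max(np.linalg.norm(u), 1e-12)                   # unit descent direction
        t = step_cap
        for g, M in zip(gs, Ms):
            b = fd_grad(g, x) @ u
            c = g(x)                                         # <= 0 at a feasible point
            t = min(t, (-b + np.sqrt(b * b - 2.0 * M * c)) / M)  # largest step keeping the bound <= 0
        return x + t * u

    # Usage: minimise ||x - (2, 2)||^2 inside the unit ball g(x) = ||x||^2 - 1 <= 0.
    x = np.array([0.0, 0.0])
    for _ in range(50):
        x = safe_zo_step(lambda z: np.sum((z - 2.0)**2),
                         [lambda z: np.sum(z**2) - 1.0], [2.0], x)
    print(x)   # approaches the boundary point (1/sqrt(2), 1/sqrt(2))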

    Model Diagnostics meets Forecast Evaluation: Goodness-of-Fit, Calibration, and Related Topics

    Principled forecast evaluation and model diagnostics are vital in fitting probabilistic models and forecasting outcomes of interest. A common principle is that fitted or predicted distributions ought to be calibrated, ideally in the sense that the outcome is indistinguishable from a random draw from the posited distribution. Much of this thesis is centered on calibration properties of various types of forecasts. In the first part of the thesis, a simple algorithm for exact multinomial goodness-of-fit tests is proposed. The algorithm computes exact p-values based on various test statistics, such as the log-likelihood ratio and Pearson's chi-square. A thorough analysis shows improvement on extant methods. However, the runtime of the algorithm grows exponentially in the number of categories and hence its use is limited. In the second part, a framework rooted in probability theory is developed, which gives rise to hierarchies of calibration, and applies to both predictive distributions and stand-alone point forecasts. Based on a general notion of conditional T-calibration, the thesis introduces population versions of T-reliability diagrams and revisits a score decomposition into measures of miscalibration, discrimination, and uncertainty. Stable and efficient estimators of T-reliability diagrams and score components arise via nonparametric isotonic regression and the pool-adjacent-violators algorithm. For in-sample model diagnostics, a universal coefficient of determination is introduced that nests and reinterprets the classical R^2 in least squares regression. In the third part, probabilistic top lists are proposed as a novel type of prediction in classification, which bridges the gap between single-class predictions and predictive distributions. The probabilistic top list functional is elicited by strictly consistent evaluation metrics, based on symmetric proper scoring rules, which admit comparison of various types of predictions.
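    A minimal sketch of the isotonic-regression route described above, assuming probability forecasts p for a binary outcome y: the pool-adjacent-violators fit recalibrates the forecasts, which gives both a reliability curve and a Brier-score decomposition into miscalibration (MCB), discrimination (DSC) and uncertainty (UNC):

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    def reliability_decomposition(p, y):
        brier = lambda q: np.mean((q - y) ** 2)
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        p_hat = iso.fit_transform(p, y)         # PAV-recalibrated forecasts (reliability curve)
        unc = brier(np.full_like(y, y.mean()))  # uncertainty of the outcome alone
        mcb = brier(p) - brier(p_hat)           # miscalibration
        dsc = unc - brier(p_hat)                # discrimination
        return mcb, dsc, unc                    # Brier score = MCB - DSC + UNC

    # Usage on synthetic, deliberately overconfident forecasts.
    rng = np.random.default_rng(1)
    p_true = rng.uniform(size=5000)
    y = (rng.uniform(size=5000) < p_true).astype(float)
    p_fcst = np.clip(1.3 * (p_true - 0.5) + 0.5, 0.0, 1.0)
    print(reliability_decomposition(p_fcst, y))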

    neuroAIx-Framework: design of future neuroscience simulation systems exhibiting execution of the cortical microcircuit model 20× faster than biological real-time

    Introduction: Research in the field of computational neuroscience relies on highly capable simulation platforms. With real-time capabilities surpassed for established models like the cortical microcircuit, it is time to conceive next-generation systems: neuroscience simulators providing significant acceleration, even for larger networks with natural density, biologically plausible multi-compartment models and the modeling of long-term and structural plasticity. Methods: Stressing the need for agility to adapt to new concepts or findings in the domain of neuroscience, we have developed the neuroAIx-Framework consisting of an empirical modeling tool, a virtual prototype, and a cluster of FPGA boards. This framework is designed to support and accelerate the continuous development of such platforms driven by new insights in neuroscience. Results: Based on design space explorations using this framework, we devised and realized an FPGA cluster consisting of 35 NetFPGA SUME boards. Discussion: This system functions as an evaluation platform for our framework. At the same time, it resulted in a fully deterministic neuroscience simulation system surpassing the state of the art in both performance and energy efficiency. It is capable of simulating the microcircuit with 20× acceleration compared to biological real-time and achieves an energy efficiency of 48 nJ per synaptic event.

    Discovering the hidden structure of financial markets through Bayesian modelling

    Understanding what drives the price of a financial asset is a question that is currently largely unanswered. In this work we go beyond classic one-step-ahead prediction and instead construct models that create new information on the behaviour of these time series. Our aim is to obtain a better understanding of the hidden structures that drive the moves of each financial time series and thus the market as a whole. We propose a tool to decompose multiple time series into economically meaningful variables that explain the endogenous and exogenous factors driving their underlying variability. The methodology we introduce goes beyond the direct model forecast. Indeed, since our model continuously adapts its variables and coefficients, we can study the time series of coefficients and selected variables. We also present a model to construct the causal graph of relations between these time series and include them in the exogenous factors. Hence, we obtain a model able to explain what is driving the moves of both each specific time series and the market as a whole. In addition, the obtained graph of the time series provides new information on the underlying risk structure of this environment. With this deeper understanding of the hidden structure, we propose novel ways to detect and forecast risks in the market. We investigate our results with inferences up to one month into the future using stocks, FX futures and ETF futures, demonstrating superior performance in terms of accuracy on large moves, longer-term prediction and consistency over time. We also discuss in more detail the economic interpretation of the new variables and the resulting graph structure of the market.
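    As one hedged illustration of continuously adapting coefficients (a generic state-space device, not the thesis's Bayesian model), a dynamic linear regression whose coefficients follow a random walk can be tracked with a Kalman filter, so the filtered coefficient paths become new, interpretable time series:

    import numpy as np

    def dynamic_regression(X, y, q=1e-4, r=1e-2):
        # X: (T, k) exogenous drivers, y: (T,) returns; returns the (T, k) coefficient paths.
        T, k = X.shape
        b = np.zeros(k)                     # coefficient mean
        P = np.eye(k)                       # coefficient covariance
        paths = np.zeros((T, k))
        for t in range(T):
            P = P + q * np.eye(k)           # random-walk drift of the coefficients
            x = X[t]
            S = x @ P @ x + r               # innovation variance
            K = P @ x / S                   # Kalman gain
            b = b + K * (y[t] - x @ b)      # update with the new observation
            P = P - np.outer(K, x) @ P
            paths[t] = b
        return paths

    # Usage: one driver whose influence switches sign halfway through the sample.
    rng = np.random.default_rng(2)
    T = 400
    X = rng.standard_normal((T, 1))
    beta_true = np.where(np.arange(T) < T // 2, 0.8, -0.8)
    y = beta_true * X[:, 0] + 0.1 * rng.standard_normal(T)
    print(dynamic_regression(X, y)[[T // 4, -1], 0])   # ~0.8 early, ~-0.8 late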

    Predictive Maintenance of Critical Equipment for Floating Liquefied Natural Gas Liquefaction Process

    Meeting global energy demand is a massive challenge, especially with the push towards sustainable and cleaner energy. Natural gas is viewed as a bridge fuel to renewable energy, and LNG, a processed form of natural gas, is the fastest-growing and cleanest form of fossil fuel. Recently, the unprecedented increase in LNG demand has pushed its exploration and processing offshore as Floating LNG (FLNG). Offshore topside gas processing and liquefaction have been identified as one of the great challenges of FLNG. Maintaining topside liquefaction assets such as gas turbines is critical to the profitability, reliability and availability of the process facilities. Given the setbacks of the widely used reactive and time-based preventive maintenance approaches, and to meet the reliability and availability requirements of oil and gas operators, this thesis presents a framework driven by AI-based learning approaches for predictive maintenance. The framework aims to leverage the value of condition-based maintenance to minimise the failures and downtimes of critical FLNG equipment (aeroderivative gas turbines). In this study, gas turbine thermodynamics were introduced, together with factors affecting gas turbine modelling. Important considerations in modelling gas turbine systems, such as modelling objectives, methods and approaches, were investigated; these provide the basis and mathematical background for developing a simulated gas turbine model. The behaviour of a simple-cycle HDGT was simulated using thermodynamic laws and operational data based on Rowen's model. A Simulink model was created from experimental data based on Rowen's model, aimed at exploring the transient behaviour of an industrial gas turbine. The results show the capability of the Simulink model to capture the nonlinear dynamics of the gas turbine system, although its application to further condition monitoring studies is constrained by the lack of some suitable, relevant correlated features required by the model. AI-based models were found to perform well in predicting gas turbine failures. These capabilities were investigated in this thesis and validated using experimental data obtained from a gas turbine engine facility. The dynamic behaviour of gas turbines changes when exposed to different varieties of fuel, so diagnostic AI models were developed to diagnose gas turbine engine failures associated with exposure to various fuel types. The capabilities of the Principal Component Analysis (PCA) technique were harnessed to reduce the dimensionality of the dataset and extract good features for the development of the diagnostic models. Signal processing techniques (time domain, frequency domain and time-frequency domain) were also used as feature extraction tools; they added significantly more correlated features to the dataset and influenced the prediction results obtained. Signal processing played a vital role in extracting good features for the diagnostic models when compared with PCA. The overall results obtained from both the PCA-based and signal processing-based models demonstrated the capability of neural network-based models to predict gas turbine failures. Further, a deep learning-based LSTM model was developed, which extracts features directly from the time series dataset and hence does not require any feature extraction tool. The LSTM model achieved the highest performance and prediction accuracy, compared to both the PCA-based and signal processing-based models. In summary, this thesis concludes that although the gas turbine Simulink model could not be fully integrated into the condition monitoring studies, data-driven models have shown strong potential and excellent performance for gas turbine condition-based maintenance (CBM) diagnostics. The models developed in this thesis can be used for design and manufacturing purposes for gas turbines applied to FLNG, especially for condition monitoring and fault detection. The results obtained provide valuable understanding and helpful guidance for researchers and practitioners implementing robust predictive maintenance models to enhance the reliability and availability of critical FLNG equipment. Petroleum Technology Development Fund (PTDF), Nigeria.
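    A minimal sketch of such an LSTM diagnostic model (PyTorch; the sensor count, window length, class count and layer sizes are placeholders, not the thesis's configuration):

    import torch
    import torch.nn as nn

    class TurbineLSTM(nn.Module):
        def __init__(self, n_sensors=8, hidden=32, n_classes=4):
            super().__init__()
            self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):              # x: (batch, window_length, n_sensors)
            _, (h_n, _) = self.lstm(x)     # last hidden state summarises the window
            return self.head(h_n[-1])      # logits over health / fault classes

    # Usage: one training step on a random batch of 16 windows of 100 time steps.
    model, loss_fn = TurbineLSTM(), nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    logits = model(torch.randn(16, 100, 8))
    loss = loss_fn(logits, torch.randint(0, 4, (16,)))
    loss.backward()
    opt.step()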

    An explicit stabilised finite element method for Navier-Stokes-Brinkman equations

    We present an explicit stabilised finite element method for solving the Navier-Stokes-Brinkman equations. The proposed algorithm has several advantages. First, the low-order, equal-order finite element space for velocity and pressure is well suited to representing pixel-based images. The stabilised finite elements allow continuity of both tangential and normal velocities at interfaces between regions of different micro-permeability, or at the interface between free-flow and porous domains. Second, the algorithm is fully explicit and versatile in describing complex boundary conditions. Third, the fully explicit, matrix-free finite element implementation is ideal for parallelism on high-performance computers. Finally, the implicit treatment of the Darcy term allows larger time steps and a stable computation, even if the velocity varies over several orders of magnitude in the micro-porous regions (Darcy regime). The stabilisation parameter, which may affect the velocity field, is discussed, and an optimal parameter is chosen based on the numerical examples. Velocity stability at interfaces between regions of different micro-permeability is also studied under mesh refinement. We analyse the influence of the micro-permeability field on the flow regime (Stokes flow, Darcy flow or a transitional regime). These benchmark tests provide guidelines for choosing the resolution of the grayscale image and its segmentation. We applied the method to real Berea sandstone micro-CT images and performed a three-phase segmentation. We studied the influence of the micro-porosity field on the computed effective permeability, using the well-known Kozeny-Carman relation to derive the micro-permeability field from the micro-porosity field. Our analysis shows that a small fraction of micro-porosity in the rock has a significant influence on the computed effective permeability.
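    For the porosity-to-permeability step mentioned above, a minimal sketch using one common form of the Kozeny-Carman relation, k = d^2 φ^3 / (180 (1 - φ)^2); the prefactor and the characteristic grain diameter d are generic assumptions, not the paper's calibrated values:

    import numpy as np

    def kozeny_carman(phi, d_grain=1e-5):
        # Micro-permeability field [m^2] from a micro-porosity field phi in [0, 1).
        phi = np.clip(phi, 1e-6, 0.999)     # guard against division by zero
        return d_grain**2 * phi**3 / (180.0 * (1.0 - phi)**2)

    # Usage: a small patch of segmented voxels whose micro-porous phase carries phi in (0, 1).
    phi_field = np.array([[0.05, 0.20], [0.35, 0.50]])
    print(kozeny_carman(phi_field))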