19,048 research outputs found

    Process intensification of oxidative coupling of methane

    No full text

    Soliton Gas: Theory, Numerics and Experiments

    Full text link
    The concept of soliton gas was introduced in 1971 by V. Zakharov as an infinite collection of weakly interacting solitons in the framework of the Korteweg-de Vries (KdV) equation. In this theoretical construction of a dilute soliton gas, solitons with random parameters are almost non-overlapping. More recently, the concept has been extended to dense gases, in which solitons strongly and continuously interact. The notion of soliton gas is inherently associated with integrable wave systems described by nonlinear partial differential equations, like the KdV equation or the one-dimensional nonlinear Schr\"odinger equation, that can be solved using the inverse scattering transform. Over the last few years, the field of soliton gases has received rapidly growing interest from both theoretical and experimental points of view. In particular, it has been realized that soliton gas dynamics underlies some fundamental nonlinear wave phenomena, such as spontaneous modulation instability and the formation of rogue waves. The recently discovered deep connections of soliton gas theory with generalized hydrodynamics have broadened the field and opened new fundamental questions related to soliton gas statistics and thermodynamics. We review the main recent theoretical and experimental results in the field of soliton gas. The key conceptual tools of the field, such as the inverse scattering transform, the thermodynamic limit of finite-gap potentials and the Generalized Gibbs Ensembles, are introduced, and various open questions and future challenges are discussed.
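
    For orientation, the KdV equation referenced above and its one-soliton solution read (standard results, under the normalization used here):

    $$u_t + 6\,u\,u_x + u_{xxx} = 0, \qquad u(x,t) = 2\eta^2\,\operatorname{sech}^2\!\big[\eta\,(x - 4\eta^2 t - x_0)\big],$$

    where the amplitude $2\eta^2$ and speed $4\eta^2$ are set by a single spectral parameter $\eta > 0$ of the inverse scattering transform; a soliton gas is, roughly, a random ensemble of such solitons with $\eta$ and $x_0$ drawn from a prescribed density of states.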

    Hybrid time-dependent Ginzburg-Landau simulations of block copolymer nanocomposites: nanoparticle anisotropy

    Full text link
    Block copolymer melts are perfect candidates for templating the position of colloidal nanoparticles at the nanoscale, on top of their well-known suitability for lithography applications. This is due to their ability to self-assemble into periodic ordered structures, in which nanoparticles can segregate depending on the polymer-particle interactions and on particle size and shape. The resulting coassembled structure can be highly ordered through a combination of both the polymeric and colloidal properties. The time-dependent Ginzburg-Landau model for the block copolymer was combined with Brownian dynamics for the nanoparticles, resulting in an efficient mesoscopic model for studying the complex behaviour of block copolymer nanocomposites. This review covers recent developments of the time-dependent Ginzburg-Landau/Brownian dynamics scheme, including efforts to parallelise the numerical scheme and applications of the model. The validity of the model is assessed by comparing simulation and experimental results for isotropic nanoparticles. Extensions to simulate nonspherical and inhomogeneous nanoparticles are discussed, along with the corresponding simulation results. The time-dependent Ginzburg-Landau/Brownian dynamics scheme is shown to be a flexible method that can handle the relatively large system sizes required to study block copolymer nanocomposite systems, while being easily extensible to nonspherical nanoparticles.
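
    As a rough illustration of such a hybrid scheme (a minimal sketch, not the authors' implementation: the free energy, the Gaussian particle-coupling field and all parameters are assumptions):

```python
import numpy as np

# Minimal 2-D hybrid TDGL/Brownian-dynamics sketch (illustrative only).
# psi: copolymer order parameter; particles couple to psi through a
# Gaussian "tagging" field, a common choice in such hybrid models.
N, dx, dt = 64, 1.0, 0.01
psi = 0.1 * np.random.randn(N, N)          # order parameter field
particles = np.random.rand(8, 2) * N * dx  # nanoparticle positions

def laplacian(f):
    # Periodic five-point Laplacian.
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

def particle_field(pos, sigma=2.0):
    """Sum of Gaussians marking particle positions (assumed coupling)."""
    x = np.arange(N) * dx
    X, Y = np.meshgrid(x, x, indexing="ij")
    field = np.zeros((N, N))
    for px, py in pos:
        field += np.exp(-((X - px)**2 + (Y - py)**2) / (2 * sigma**2))
    return field

for step in range(1000):
    psi_p = particle_field(particles)
    # Conserved (Cahn-Hilliard-like) TDGL relaxation of the free energy;
    # a diblock model would add an Ohta-Kawasaki long-range term (omitted).
    mu = -psi + psi**3 - laplacian(psi) + psi_p   # chemical potential
    psi += dt * laplacian(mu)
    # Overdamped Brownian dynamics: particles drift down the psi gradient.
    gx, gy = np.gradient(psi, dx)
    for i, (px, py) in enumerate(particles):
        ix, iy = int(px / dx) % N, int(py / dx) % N
        drift = -np.array([gx[ix, iy], gy[ix, iy]])
        particles[i] += dt * drift + np.sqrt(2 * dt) * np.random.randn(2)
    particles %= N * dx  # periodic boundaries
```

    The appeal of the scheme, as the review notes, is that the field update is cheap enough to reach large system sizes while the particle update generalises readily to anisotropic shapes.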

    Similarity and variability of blocked weather-regime dynamics in the Atlantic–European region

    Get PDF
    Weather regimes govern an important part of the sub-seasonal variability of the mid-latitude circulation. Due to their role in weather extremes and atmospheric predictability, regimes that feature a blocking anticyclone are of particular interest. This study investigates the dynamics of these “blocked” regimes in the North Atlantic–European region from a year-round perspective. For a comprehensive diagnostic, wave activity concepts and a piecewise potential vorticity (PV) tendency framework are combined. The latter essentially quantifies the well-established PV perspective of mid-latitude dynamics. The four blocked regimes (namely Atlantic ridge, European blocking, Scandinavian blocking, and Greenland blocking) during the 1979–2021 period of ERA5 reanalysis are considered. Wave activity characteristics exhibit distinct differences between blocked regimes. After regime onset, Greenland blocking is associated with a suppression of wave activity flux, whereas Atlantic ridge and European blocking are associated with a northward deflection of the flux without a clear net change. During onset, the envelope of Rossby wave activity retracts upstream for Greenland blocking, whereas the envelope extends downstream for Atlantic ridge and European blocking. Scandinavian blocking exhibits intermediate wave activity characteristics. From the perspective of piecewise PV tendencies projected onto the respective regime pattern, the dynamics that govern regime onset exhibit a large degree of similarity: linear Rossby wave dynamics and nonlinear eddy PV fluxes dominate and are of approximately equal relative importance, whereas baroclinic coupling and divergent amplification make minor contributions. Most strikingly, all blocked regimes exhibit very similar (intra-regime) variability: a retrograde and an upstream pathway to regime onset. The retrograde pathway is dominated by nonlinear PV eddy fluxes, whereas the upstream pathway is dominated by linear Rossby wave dynamics. Importantly, there is a large degree of cancellation between the two pathways for some of the mechanisms before regime onset. The physical meaning of a regime-mean perspective before onset can thus be severely limited. Implications of our results for understanding predictability of blocked regimes are discussed. Further discussed are the limitations of projected tendencies in capturing the importance of moist-baroclinic growth, which tends to occur in regions where the amplitude of the regime pattern, and thus the projection onto it, is small. Finally, it is stressed that this study investigates the variability of the governing dynamics without prior empirical stratification of data by season or by type of regime transition. It is demonstrated, however, that our dynamics-centered approach does not merely reflect variability that is associated with these factors. The main modes of dynamical variability revealed herein and the large similarity of the blocked regimes in exhibiting this variability are thus significant results.
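
    Schematically (our generic rendering, not the paper's exact notation), the projected piecewise-PV-tendency diagnostic has the form

    $$\frac{\mathrm{d}a_R}{\mathrm{d}t} = \frac{\langle \partial_t q,\, q_R\rangle}{\langle q_R,\, q_R\rangle}, \qquad \partial_t q = \sum_i T_i,$$

    where $q_R$ is a regime's PV anomaly pattern, $a_R$ the projection (regime) amplitude, $\langle\cdot,\cdot\rangle$ an area-weighted inner product over the Atlantic–European domain, and each $T_i$ one of the piecewise tendencies (linear Rossby wave dynamics, nonlinear eddy PV fluxes, baroclinic coupling, divergent amplification), so that each mechanism's contribution to regime amplification can be quantified separately.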

    Equations discovery of organized cloud fields: Stochastic generator and dynamical insights

    Full text link
    The emergence of organized multiscale patterns resulting from convection is ubiquitous, observed throughout different cloud types. The reproduction of such patterns by general circulation models remains a challenge due to the complex nature of clouds, characterized by processes interacting over a wide range of spatio-temporal scales. Recent advances in data-driven modeling techniques hold great promise for discovering dynamical equations from partial observations of complex systems. This study presents such a discovery from high-resolution satellite datasets of continental cloud fields. The model consists of stochastic differential equations that simulate with high fidelity the spatio-temporal coherence and variability of the cloud patterns, such as the characteristic lifetime of individual clouds or global organizational features governed by convective inertia-gravity waves. This is achieved through the model's lagged effects, associated with convection recirculation times, and through hidden variables parameterizing unobserved processes and variables.
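
    A schematic of this kind of stochastic generator (a toy Euler-Maruyama sketch with made-up drift, delay and noise terms; the discovered equations themselves are not reproduced here):

```python
import numpy as np

# Toy delayed SDE with one hidden variable:
#   dx = [a*(h - x) + b*x(t - tau)] dt + s dW    (observed, e.g. cloud cover)
#   dh = -c*h dt + r dW'                          (hidden driver)
# All coefficients are illustrative, not the paper's discovered ones.
a, b, c, s, r, tau = 1.0, 0.5, 0.2, 0.3, 0.1, 2.0
dt, T = 0.01, 50.0
n, lag = int(T / dt), int(tau / dt)

x = np.zeros(n); h = np.zeros(n)
for t in range(1, n):
    x_lag = x[t - 1 - lag] if t - 1 >= lag else 0.0  # lagged feedback
    dW, dWp = np.random.randn(2) * np.sqrt(dt)        # Brownian increments
    x[t] = x[t-1] + (a * (h[t-1] - x[t-1]) + b * x_lag) * dt + s * dW
    h[t] = h[t-1] - c * h[t-1] * dt + r * dWp
```

    The lagged term plays the role of the convection recirculation times mentioned above, and the hidden variable stands in for unobserved processes.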

    Limit theorems for non-Markovian and fractional processes

    Get PDF
    This thesis examines various non-Markovian and fractional processes---rough volatility models, stochastic Volterra equations, Wiener chaos expansions---through the prism of asymptotic analysis. Stochastic Volterra systems serve as a conducive framework encompassing most rough volatility models used in mathematical finance. In Chapter 2, we provide a unified treatment of pathwise large and moderate deviations principles for a general class of multidimensional stochastic Volterra equations with singular kernels, not necessarily of convolution form. Our methodology is based on the weak convergence approach of Budhiraja, Dupuis and Ellis. This powerful approach also enables us to investigate the pathwise large deviations of families of white noise functionals characterised by their Wiener chaos expansion $X^\varepsilon = \sum_{n=0}^{\infty} \varepsilon^n I_n\big(f_n^{\varepsilon}\big)$. In Chapter 3, we provide sufficient conditions for the large deviations principle to hold in path space, thereby addressing a problem left open by Pérez-Abreu (1993). Hinging on analysis on Wiener space, the proof involves describing, controlling and identifying the limit of perturbed multiple stochastic integrals. In Chapter 4, we come back to mathematical finance via the route of Malliavin calculus. We present explicit small-time formulae for the at-the-money implied volatility, skew and curvature in a large class of models, including rough volatility models and their multi-factor versions. Our general setup encompasses both European options on a stock and VIX options. In particular, we develop a detailed analysis of the two-factor rough Bergomi model. Finally, in Chapter 5, we consider the large-time behaviour of affine stochastic Volterra equations, an under-developed area in the absence of Markovianity. We leverage a measure-valued Markovian lift introduced by Cuchiero and Teichmann and the associated notion of generalised Feller property. This setting allows us to prove the existence of an invariant measure for the lift and hence of a stationary distribution for the affine Volterra process, featuring in the rough Heston model.
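
    For reference, the pathwise large deviations principle invoked throughout means, in its standard formulation (speed conventions vary): for every measurable set $A$ of paths,

    $$-\inf_{x \in A^{\circ}} I(x) \;\le\; \liminf_{\varepsilon \to 0}\, \varepsilon \log \mathbb{P}\big(X^{\varepsilon} \in A\big) \;\le\; \limsup_{\varepsilon \to 0}\, \varepsilon \log \mathbb{P}\big(X^{\varepsilon} \in A\big) \;\le\; -\inf_{x \in \bar{A}} I(x),$$

    where $I$ is a good rate function and $A^{\circ}$, $\bar{A}$ denote the interior and closure of $A$.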

    Statistical-dynamical analyses and modelling of multi-scale ocean variability

    Get PDF
    This thesis aims to provide a comprehensive analysis of multi-scale oceanic variability using various statistical and dynamical tools, and to explore data-driven methods for the accurate statistical emulation of the oceans. We consider the classical, wind-driven, double-gyre ocean circulation model in the quasi-geostrophic approximation and obtain its eddy-resolving solutions in terms of the potential vorticity anomaly (PVA) and geostrophic streamfunction. The reference solutions possess two asymmetric gyres of opposite circulation and a strong meandering eastward jet separating them, with rich eddy activity around it, akin to the Gulf Stream in the North Atlantic and the Kuroshio in the North Pacific. This thesis is divided into two parts. The first part discusses a novel scale-separation method based on local spatial correlations, called correlation-based decomposition (CBD), and provides a comprehensive analysis of mesoscale eddy forcing. In particular, we analyse the instantaneous and time-lagged interactions between the diagnosed eddy forcing and the evolving large-scale PVA using novel `product integral' characteristics. The product-integral time series uncover robust causality between two drastically different yet interacting flow quantities, termed `eddy backscatter'. We also show data-driven augmentation of non-eddy-resolving ocean models by feeding them the eddy fields to restore missing eddy-driven features, such as the merging western boundary currents, their eastward extension and the low-frequency variability of the gyres. In the second part, we present a systematic inter-comparison of Linear Regression (LR), stochastic and deep-learning methods for building low-cost reduced-order statistical emulators of the oceans. We obtain forecasts on seasonal and centennial timescales and assess their skill, cost and complexity. We find that the multi-level linear stochastic model performs best, followed by the hybrid stochastically-augmented deep-learning models. The superiority of these methods underscores the importance of incorporating core dynamics, memory effects and model errors for the robust emulation of multi-scale dynamical systems such as the oceans.
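
    For context, a commonly used single-layer form of the wind-driven double-gyre QG dynamics referred to above reads (schematically; the thesis's exact layer structure and dissipation terms may differ):

    $$\partial_t q + J(\psi, q) + \beta\,\partial_x \psi = \nu \nabla^4 \psi - \mu \nabla^2 \psi + W(x, y), \qquad q = \nabla^2 \psi - \frac{\psi}{L_d^2},$$

    where $\psi$ is the geostrophic streamfunction, $q$ the potential vorticity anomaly, $J(\psi, q) = \psi_x q_y - \psi_y q_x$ the advection Jacobian, $\beta$ the planetary vorticity gradient, $L_d$ the Rossby deformation radius, and $W$ the wind-stress curl forcing that drives the two gyres.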

    Simulation of non‑dilute fibre suspensions using RBF‑based macro–micro multiscale method

    Get PDF
    The multiscale stochastic simulation method, based on the marriage of the Brownian Configuration Field (BCF) method and the Radial Basis Function (RBF) mesh-free approximation, previously developed by our group for dilute fibre suspensions, is further developed here to simulate non-dilute fibre suspensions. In the present approach, the macro and micro processes carried out at each time step are linked by a fibre-contributed stress formula associated with the kinetic model used. Owing to the nature of non-dilute fibre suspensions, fibre-fibre interaction is introduced into the evolution equation that determines fibre configurations via the BCF method. The fibre stresses are then determined from the fibre configuration fields using the Phan-Thien–Graham model. The efficiency of the simulation method is demonstrated on two challenging problems, the axisymmetric contraction and expansion flows, for fibre concentrations ranging from the semi-dilute to the concentrated regime. Numerical experiments show that the present method has potential for analysing and simulating various suspensions in the food and medical industries.
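
    As background (a generic sketch, not the thesis's exact kinetic model), the orientation $\mathbf{p}$ of a single fibre in a homogeneous flow is commonly evolved by Jeffery's equation, with fibre-fibre interactions in non-dilute regimes modelled by an added rotary diffusivity of Folgar–Tucker type:

    $$\dot{\mathbf{p}} = \boldsymbol{\Omega}\cdot\mathbf{p} + \lambda\big(\mathbf{D}\cdot\mathbf{p} - (\mathbf{D}:\mathbf{p}\mathbf{p})\,\mathbf{p}\big), \qquad D_r = C_I\,\dot{\gamma},$$

    where $\boldsymbol{\Omega}$ and $\mathbf{D}$ are the vorticity and rate-of-strain tensors, $\lambda = (r^2 - 1)/(r^2 + 1)$ for fibre aspect ratio $r$, $\dot{\gamma}$ is the effective shear rate and $C_I$ an interaction coefficient. In a BCF setting, the corresponding stochastic evolution is integrated for an ensemble of configuration fields at every macroscopic point, and the fibre stress is assembled from ensemble averages.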

    Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond

    Full text link
    This thesis is framed at the intersection between modern Machine Learning techniques, such as Deep Neural Networks, and reliable probabilistic modeling. In many machine learning applications, we care not only about the prediction made by a model (e.g. this lung image presents cancer) but also about how confident the model is in making this prediction (e.g. this lung image presents cancer with 67% probability). In such applications, the model assists the decision-maker (in this case a doctor) towards making the final decision. As a consequence, the probabilities provided by a model need to reflect the true proportions present in the set of cases to which those probabilities are assigned; otherwise, the model is useless in practice. When this holds, we say that a model is perfectly calibrated. This thesis explores three ways to provide more calibrated models. First, it is shown how to implicitly calibrate models that are decalibrated by data augmentation techniques: a cost function is introduced that resolves this decalibration, taking as its starting point ideas derived from decision making with Bayes' rule. Second, it is shown how to calibrate models using a post-calibration stage implemented with a Bayesian neural network. Finally, based on the limitations observed in the Bayesian neural network, which we hypothesize stem from a misspecified prior, a new stochastic process is introduced that serves as the prior distribution in a Bayesian inference problem. Maroñas Molano, J. (2022). Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181582
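
    To make "perfectly calibrated" concrete: a standard empirical check is the expected calibration error (ECE), which bins predictions by confidence and compares average confidence with accuracy in each bin (a minimal sketch of the common binned estimator, not code from the thesis):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE: weighted average gap between confidence and accuracy per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Example: a model that is ~90% confident but only ~70% accurate is miscalibrated.
conf = np.random.uniform(0.8, 1.0, 1000)
correct = (np.random.rand(1000) < 0.7).astype(float)
print(expected_calibration_error(conf, correct))
```

    A perfectly calibrated model has zero gap in every bin: among all cases assigned 67% probability of cancer, 67% actually present it.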

    Predictive Maintenance of Critical Equipment for Floating Liquefied Natural Gas Liquefaction Process

    Get PDF
    Meeting global energy demand is a massive challenge, especially given the growing shift towards sustainable and cleaner energy. Natural gas is viewed as a bridge fuel to renewable energy, and LNG, as a processed form of natural gas, is the fastest-growing and cleanest form of fossil fuel. Recently, the unprecedented increase in LNG demand has pushed its exploration and processing offshore as Floating LNG (FLNG). The offshore topside gas processing and liquefaction have been identified as among the great challenges of FLNG. Maintaining topside liquefaction assets such as gas turbines is critical to the profitability, reliability and availability of the process facilities. Given the shortcomings of the widely used reactive and preventive time-based maintenance approaches in meeting the reliability and availability requirements of oil and gas operators, this thesis presents a framework driven by AI-based learning approaches for predictive maintenance. The framework aims to leverage the value of condition-based maintenance to minimise failures and downtime of critical FLNG equipment (aeroderivative gas turbines). In this study, gas turbine thermodynamics are introduced, along with factors affecting gas turbine modelling. Important considerations in modelling gas turbine systems, such as modelling objectives, methods and approaches, are investigated; these provide the basis and mathematical background for developing a simulated gas turbine model. The transient behaviour of a simple-cycle heavy-duty gas turbine (HDGT) was simulated in Simulink using thermodynamic laws and operational data, based on Rowen's model. The results show the capability of the Simulink model to capture the nonlinear dynamics of the gas turbine system, although its use in further condition-monitoring studies is constrained by the lack of some relevant correlated features required by the model. AI-based models were found to perform well in predicting gas turbine failures; these capabilities were investigated in this thesis and validated using experimental data obtained from a gas turbine engine facility. The dynamic behaviour of gas turbines changes when they are exposed to different fuel types, and diagnostic AI models were therefore developed to diagnose engine failures associated with exposure to various fuels. The capabilities of Principal Component Analysis (PCA) were harnessed to reduce the dimensionality of the dataset and extract good features for developing the diagnostic models. Signal-processing techniques (time-domain, frequency-domain and time-frequency-domain) were also used as feature extraction tools; they added significantly more correlated features to the dataset, influenced the prediction results, and played a more important role than PCA in extracting good features for the diagnostic models. The results obtained from both the PCA-based and signal-processing-based models demonstrate the capability of neural-network models to predict gas turbine failures. Further, a deep-learning LSTM model was developed, which extracts features directly from the time-series dataset and hence requires no separate feature extraction tool. The LSTM model achieved the highest performance and prediction accuracy, compared to both the PCA-based and signal-processing-based models. In summary, although the gas turbine Simulink model could not be fully integrated into condition-monitoring studies, data-driven models showed strong potential and excellent performance for CBM diagnostics of gas turbines. The models developed in this thesis can be used for design and manufacturing purposes for gas turbines applied to FLNG, especially for condition monitoring and fault detection. The results provide valuable understanding and guidance for researchers and practitioners implementing robust predictive maintenance models that will enhance the reliability and availability of critical FLNG equipment. Petroleum Technology Development Fund (PTDF), Nigeria
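
    As an illustration of the deep-learning diagnostic described above (a minimal PyTorch sketch; the window length, sensor count, fault classes and architecture are assumptions, not the thesis's actual model):

```python
import torch
import torch.nn as nn

class FaultLSTM(nn.Module):
    """Classify a gas-turbine fault type from a window of sensor readings."""
    def __init__(self, n_features=8, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the last hidden state

model = FaultLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch: 32 windows of 100 time steps x 8 sensors, 4 fault classes.
x = torch.randn(32, 100, 8)
y = torch.randint(0, 4, (32,))
for _ in range(10):                   # toy training loop
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

    The recurrent layer learns temporal features directly from raw sensor windows, which is why no separate PCA or signal-processing feature-extraction stage is needed.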