60 research outputs found

    Non-intrusive reduced order models and their applications

    Get PDF
    Reduced order models (ROMs) have become prevalent in many fields of physics as they offer the potential to simulate dynamical systems with substantially increased computational efficiency in comparison to standard techniques. Among model reduction techniques, the proper orthogonal decomposition (POD) method has proven to be an efficient means of deriving a reduced basis for high-dimensional flow systems. The intrusive ROM (IROM) is normally derived by the POD and Galerkin projection methods. The IROM is appealing for non-linear and linear model reduction and has been successfully applied to numerous research fields. However, IROMs suffer from instability and from efficiency issues on non-linear problems. In addition, they can be complex to code because they are intrusive: in most cases the source code describing the physical system has to be modified in order to generate the reduced order model. These modifications can be complex, especially in legacy codes, or may not be possible if the source code is not available (e.g. in some commercial software). To circumvent these shortcomings, non-intrusive approaches have been introduced into ROMs. The non-intrusive ROM (NIROM) is independent of the original physical system. The key contributions of this thesis are: firstly, three novel NIROMs are presented: POD/Taylor series, POD-Smolyak and POD-RBF (radial basis function). Secondly, two NIROMs with varying material properties are presented. Thirdly, these newly developed NIROMs were implemented and tested within the framework of an unstructured mesh finite element model (FLUIDITY) and a combined finite-discrete element method based solid model (Y2D). Fourthly, these NIROMs were used to construct ROMs for multi-scale 3-D free surface flows, multi-phase porous media flows, fluid-structure interaction and blasting problems. Open Access
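Of the three NIROMs listed, the POD-RBF variant can be sketched compactly: a POD basis is extracted from snapshots by SVD, and radial basis functions then interpolate the POD coefficients without touching the source code of the full model. The snapshot data (an advecting Gaussian pulse), the mode count, and the Gaussian shape parameter below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

# Toy full-order "snapshots": a Gaussian pulse advecting in time
# (a stand-in for output of a full model such as FLUIDITY).
x = np.linspace(0.0, 1.0, 200)
t_all = np.linspace(0.0, 0.5, 121)
snapshot = lambda t: np.exp(-400.0 * (x - 0.25 - t) ** 2)
S = np.column_stack([snapshot(t) for t in t_all])        # (200, 121)

# POD basis from the SVD of the snapshot matrix
U, sv, _ = np.linalg.svd(S, full_matrices=False)
r = 20
Phi = U[:, :r]                                           # first r POD modes
A = Phi.T @ S                                            # modal coefficients

# Gaussian RBF interpolant of each POD coefficient over time
train = np.arange(0, 121, 2)                             # train on even steps
tt = t_all[train]
eps = 3.0 * (tt[1] - tt[0])                              # shape parameter (heuristic)
K = np.exp(-((tt[:, None] - tt[None, :]) / eps) ** 2)
W = np.linalg.solve(K + 1e-10 * np.eye(len(tt)), A[:, train].T)

def rom_predict(t):
    """Evaluate the non-intrusive ROM at an arbitrary time t."""
    k = np.exp(-((t - tt) / eps) ** 2)
    return Phi @ (k @ W)                                 # back to full space

u_rom = rom_predict(t_all[61])                           # a held-out time
u_fom = snapshot(t_all[61])
rel_err = np.linalg.norm(u_rom - u_fom) / np.linalg.norm(u_fom)
```

The full model is queried only for snapshots; the ROM itself never calls it, which is what makes the approach non-intrusive.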

    Development and Optimization of Non-Hydrostatic Models for Water Waves and Fluid-Vegetation Interaction

    Get PDF
    The primary objective of this study is twofold: 1) to develop an efficient and accurate non-hydrostatic wave model for fully dispersive highly nonlinear waves, and 2) to investigate the interaction between waves and submerged flexible vegetation using a fully coupled wave-vegetation model. This research consists of three parts. Firstly, an analytical dispersion relationship is derived for waves simulated by models utilizing the Keller-box scheme and central differencing for vertical discretization. The phase speed can be expressed as a rational polynomial function of the dimensionless water depth, kh, and the layer distribution in the water column becomes an optimizable parameter in this function. For a given dispersion error tolerance, the range of kh is extended and the layer thicknesses are optimally selected. The derived theoretical dispersion relationship is tested with linear and nonlinear standing waves generated by an Euler model. The optimization method is applicable to other non-hydrostatic models for water waves. Secondly, an efficient and accurate approach is developed to solve the Euler equations for fully dispersive and highly nonlinear water waves. Discontinuous Galerkin, finite difference, and spectral element formulations are used for horizontal discretization, vertical discretization, and the Poisson equation, respectively. The Keller-box scheme is adopted for its capability of resolving frequency dispersion accurately with low vertical resolution (two or three layers). A three-stage optimal Strong Stability-Preserving Runge-Kutta (SSP-RK) scheme is employed for time integration. Thirdly, a fully coupled wave-vegetation model for simulating the interaction between water waves and submerged flexible plants is presented. The complete governing equation for vegetation motion is solved with a high-order finite element method and an implicit time differencing scheme.
The vegetation model is fully coupled with a wave model to explore the relationship between the displacement of water particles and plant stems, as well as the effect of vegetation flexibility on wave attenuation. This vegetation deformation model can be coupled with other wave models to simulate wave-vegetation interactions.
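The rational-polynomial character of the derived dispersion relation can be illustrated with a short sketch. Assuming, purely for illustration and not taken from the thesis, that a low-layer-count discretization behaves like a [2/2] Padé approximant of the exact linear dispersion relation, the phase speed error can be examined as a function of kh:

```python
import numpy as np

def c_exact(kh):
    """Exact linear phase speed, normalized by sqrt(g*h)."""
    return np.sqrt(np.tanh(kh) / kh)

def c_rational(kh):
    """[2/2] Pade approximant of tanh(kh)/kh: a rational polynomial in kh,
    the same functional form that the layer-optimized relations take."""
    kh2 = kh * kh
    return np.sqrt((1.0 + kh2 / 15.0) / (1.0 + 2.0 * kh2 / 5.0))

kh = np.linspace(0.1, 6.0, 500)
rel_err = np.abs(c_rational(kh) - c_exact(kh)) / c_exact(kh)
# The error is tiny for long waves (small kh) and grows with kh, which is
# why optimizing the layer thicknesses to extend the usable kh range for a
# given error tolerance pays off.
```

Here the Padé coefficients come from the Taylor expansion of tanh; the thesis instead treats the layer distribution as the free parameter in the rational function and optimizes it.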

    Efficient Iterative Arbitrary High Order Methods: Adaptivity and Structure Preservation

    Get PDF
    Since its introduction in the late 17th century, in the independent works of Sir Isaac Newton and Gottfried Wilhelm Leibniz, differential calculus has shown itself to be a very powerful tool for the description of natural phenomena and technological processes. Its development led, in fact, to a deeper level of understanding of the general laws regulating the universe, from the microscopic to the macroscopic scale, and was indeed a crucial step towards the formalization of many fundamental theories in several different fields. Countless models of natural and technological systems rely on differential formulations, either in the “simple” form of ordinary differential equations (ODEs) or in the more involved form of partial differential equations (PDEs). Extensive effort has been put, across the years, into the analytical study of such problems. In particular, their huge variety does not allow the development of a general theory, and many different mathematical theories have been designed to deal with specific families of problems. However, if on the one hand such heterogeneity makes it difficult to express statements holding for every differential problem, on the other hand almost all the mentioned problems share a common characteristic: the impossibility of finding analytical solutions, apart from very simple exceptions. At the time of writing this thesis, purely analytical approaches are not sufficient for a quantitative characterization of the analytical solutions, especially in real applications, and the only practical way to overcome this limitation is given by numerical analysis. Scientific computing constitutes indeed a valid tool for obtaining accurate approximations of the analytical solutions to differential problems. Unfortunately, it does not lack critical aspects. The design of numerical methods must be done in a careful way, as consistent discretizations may, in any case, lead to unstable schemes which are not usable in practice.
Moreover, the need for very accurate approximations of the analytical solutions, which is typical of advanced applications, e.g., the design of aerospace machines, leads to massive simulations requiring huge computational resources, both in terms of computational time and memory, accessible only to supercomputers. This is why the academic and industrial communities have lately put a lot of effort into trying to make such simulations more accessible. On the one hand, technological improvements, both at the hardware and at the software level, have resulted in computers guaranteeing ever higher performance; on the other hand, several strategies have been developed to reduce the computational cost itself. In this context, two of the most successful strategies are:
    • high order methods, which are well known to require, on smooth problems, smaller computational times for a fixed level of accuracy;
    • structure-preserving techniques, which are designed in such a way as to enforce, at the discrete level, properties of the analytical solution, e.g., positivity of certain quantities.
    These constitute the main focus of this thesis. For what concerns the first point, in this work a paradigm for designing efficient adaptive arbitrary high order methods is formalized and investigated. The underlying idea was first introduced by Michael L. Minion in the early 2000s, in the context of the ladder Deferred Correction (DeC) methods and of the multi-level spectral DeC methods, and is here generalized into a more abstract, flexible and rigorous framework, allowing for applications to many different settings.
Arbitrary high order iterative schemes, i.e., schemes characterized by an iteration process converging to a high order discretization of the analytical solution of the problem under investigation, are considered; in particular, within this family of methods, attention is given to schemes for which one can provably show that one order of accuracy is gained at each iteration. Such methods are modified by matching the accuracy achieved in a specific iteration with the discretization accuracy of the same iteration. The new modified methods have several advantages. They are in general cheaper than the original ones, as the early iterations of the new methods involve low order discretizations, which make use of smaller structures and fewer computations. Moreover, they offer a natural way to perform p-adaptivity, i.e., adaptivity with respect to the order of accuracy adopted, stopping the iterations on the fly when appropriate conditions are met, in terms of convergence tolerances or in terms of structure preservation. This is indeed a valuable property for a numerical scheme. If on the one hand we have remarked how the adoption of high order methods leads to smaller computational times for a fixed level of accuracy, on the other hand the error of the method is not known a priori. In concrete applications, users are not interested in the order of the employed method, but rather in the final error being smaller than a prescribed tolerance. In principle, for a given discretization refinement, a low order method may be sufficient for reaching a certain accuracy, with higher order methods being more expensive and providing an unnecessarily high level of precision.
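The property that one order of accuracy is gained per iteration can be seen in a minimal stand-in for the DeC schemes discussed above: Picard iteration for y' = y, y(0) = 1, with exact polynomial integration, so that only the iteration error is present. After k iterations the approximation is exactly the degree-k Taylor polynomial of exp(t):

```python
from math import factorial

def picard_step(coeffs):
    """One Picard iteration for y' = y, y(0) = 1, acting on the polynomial
    coefficients (ascending powers) of the current approximation:
    y_{k+1}(t) = 1 + integral_0^t y_k(s) ds."""
    integrated = [c / (j + 1) for j, c in enumerate(coeffs)]  # raise power by 1
    return [1.0] + integrated

coeffs = [1.0]                 # y_0(t) = 1, a zeroth-order approximation
for _ in range(8):
    coeffs = picard_step(coeffs)

# coeffs[j] == 1/j!  -- each iteration appends one Taylor term of exp(t),
# i.e., raises the order of accuracy by exactly one.
```

The efficient modification described above exploits exactly this structure: since iteration k is only accurate to order k, it can use an order-k (hence cheaper) discretization without degrading the final result, and the iteration can be stopped early for p-adaptivity.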
Moreover, it is well known that in the presence of non-smooth solutions, characterized by shocks, high order methods suffer from instabilities which may lead to blow-ups of the numerical solutions; in the best case, the order of accuracy drops down to first order, regardless of the accuracy of the adopted discretization. This is why, in real applications, high order methods should always be coupled with adaptive strategies capable of estimating the error and selecting the order or the discretization refinement accordingly. The design of such strategies is far from trivial; however, the described approach offers a natural way to do it. The framework is broadly investigated in the context of ODEs and hyperbolic PDEs, with applications to several schemes and stability analyses, showing that in many cases the efficient modifications have no impact on the stability. In particular, the idea is characterized for the abstract DeC formalism introduced by Rémi Abgrall in 2017 and, consequently, applied to many schemes which can be put in such formalism. Applications to arbitrary derivative (ADER) methods, Discontinuous Galerkin (DG) and Continuous Galerkin (CG) finite element methods (FEM) and Spectral Difference (SD) methods are reported to show the great advantages, the flexibility, the robustness and the potential of the introduced approach. For what concerns structure preservation, the main contributions address the preservation of the positivity of some physical quantities and the preservation, either exact or through superconvergence with respect to the order of accuracy of the underlying discretization, of particular analytical solutions, namely “well-balancing” or the “C-property”, in the context of hyperbolic PDEs. Both aspects constitute open challenges for the community.
• The violation, at the discrete level, of some physical constraints, e.g., the positivity of density and pressure in the Euler or Navier–Stokes equations or the positivity of the water height in the Shallow Water (SW) equations, results in an immediate failure of the simulation. The preservation of the positivity of the mentioned variables can be obtained for very high levels of refinement, resulting in very long computational times, or via suitable modifications of the adopted schemes. However, such modifications, in the context of high order methods, impose strong restrictions on the time discretization, which may cancel out the advantages associated with the adoption of high order methods.
    • Physical systems admit, in general, nontrivial steady states and can spend very long times in a neighborhood of such states. This is why, in many applications, we are interested in studying the evolution of small perturbations of such stationary solutions. In this context, we would like to avoid confusing the evolution of the perturbations with the natural noise due to the numerical discretization. This can be achieved either via mesh refinement, increasing the computational cost, or via suitable modifications of the numerical scheme to ensure that it preserves exactly the analytical solution of interest without degrading the accuracy for other general solutions. The latter option looks indeed very appealing; however, it is also very challenging, since the steady states are usually not available in closed form and, more generally, we rely on numerics precisely because we do not have the analytical solutions to the systems of PDEs that we are trying to solve.
    The novel adaptive framework, previously described, is exploited to design a robust limiter, which ensures the positivity of density and pressure in the predictor of ADER methods for the numerical resolution of the Euler and Navier–Stokes equations.
Further, an alternative modification of DeC methods for ODEs, based on the modified Patankar trick, is applied in combination with a WENO space discretization to construct the first, to the author's knowledge, arbitrary high order method for the numerical resolution of the SW equations which is positivity-preserving with respect to the water height without time step restrictions; pre-existing unconditionally positive schemes reached at most order 3. The robustness of the approach has been tested on tough benchmarks, involving shocks and wet–dry areas, with Courant–Friedrichs–Lewy (CFL) values up to 0.9. Finally, several arbitrary high order well-balanced (WB) continuous interior penalty (CIP) stabilizations for CG/RD schemes for the numerical resolution of the SW equations are presented. All of them are specifically designed to exactly preserve the lake at rest steady state. Moreover, some of them, which make use of the notions of space residual and global flux, address the problem of the preservation of general steady states, whose analytical expression is not known in closed form. In this context, such stabilizations have shown excellent performance, with strong superconvergence and a notable capability of capturing the constant momentum. Even though the one-dimensional SW equations have been considered as the reference setting, many of the presented stabilizations can be extended to other systems, e.g., the Euler equations with gravity, and to a multidimensional unstructured framework. All the methods are supported with theoretical analysis and validated with proofs and numerical results
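The modified Patankar trick mentioned above can be illustrated in isolation. The sketch below, which assumes a simple linear two-species exchange system rather than the thesis's DeC/WENO shallow water scheme, weights each production term by c_j^{n+1}/c_j^n and each destruction term by c_i^{n+1}/c_i^n; a first order Patankar-Euler step then becomes unconditionally positivity-preserving and conservative:

```python
import numpy as np

def mpe_step(c, dt, P):
    """One modified Patankar-Euler step for dc_i/dt = sum_j (P[i,j] - P[j,i]),
    where P[i,j] >= 0 is the production of species i fed by species j (and
    simultaneously the destruction of species j).  Productions are weighted
    by c_j^{n+1}/c_j^n and destructions by c_i^{n+1}/c_i^n, giving a linear
    system whose solution is positive and conservative for any dt > 0."""
    n = len(c)
    M = np.eye(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            M[i, i] += dt * P[j, i] / c[i]   # destruction of i (feeds j)
            M[i, j] -= dt * P[i, j] / c[j]   # production of i (from j)
    return np.linalg.solve(M, c)

# Linear two-species exchange: dc1/dt = -a c1 + b c2, dc2/dt = a c1 - b c2
a, b = 5.0, 1.0
c = np.array([0.9, 0.1])
dt = 1.0                                      # far beyond any explicit CFL limit
for _ in range(50):
    P = np.array([[0.0, b * c[1]], [a * c[0], 0.0]])
    c = mpe_step(c, dt, P)
```

For this linear system the scheme coincides with implicit Euler, but the same weighting applies to nonlinear production-destruction systems, where the resulting matrix keeps the M-matrix structure that guarantees positivity for any time step.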

    Adaptive meshing methodology based on topometric data for dambreak inundation assessments

    Get PDF
    Flood maps are the final products of dam failure studies that are required by dam safety regulations. A flood limit, which represents the maximum envelope reached by flood waves, is generally the result of a dam-break scenario simulated by a hydraulic numerical model. However, the numerical model uses only a limited portion of the available bathymetry data to build the terrain model (2D mesh plus topometric elevation at nodes). This is particularly so in cases where the topometric data recorded by LIDAR comprise several million points, while the hydraulic numerical models rarely exceed hundreds of thousands of nodes, in particular because of the computational constraints and run times associated with the operation of these models. The production of the final flood map requires consistency between projected levels and elevations for all points on the map. This verification may be tedious for a large area with several small secondary valleys of tributary streams that have not been represented by the original hydraulic numerical model. The aim of this work is to propose an automatic remeshing strategy that uses the envelope of the maximum extent reached by the original model, coupled with the available LIDAR data, to produce an improved mesh that can accurately capture the wet/dry fronts and the overflows of the secondary valleys. This approach allows the maximum slope inside each element to be constrained on the basis of the real data, instead of limiting the slope to avoid negative depths or controlling the velocity.
The algorithm is based on a few basic steps: (i) find the elements cut by the envelope of the wet/dry interfaces; (ii) project the topometric points onto the cut elements; (iii) if these points are very close to the interface, if they are found in a valley, or if they are more elevated than the corresponding cut elements, then these points will be added to the previous nodes and included in a subsequent triangulation step; and (iv) re-run the simulation on the new mesh. This algorithm has been implemented and validated in the study of a dambreak flow with a complex river topography on the Eastmain River and the Romaine-Puyjalon River
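Steps (i)-(iii) of the algorithm can be sketched as follows. The mesh data structures and function names are illustrative, and of the three selection criteria in step (iii) only the elevation test is implemented here:

```python
import numpy as np

def barycentric(tri, p):
    """Barycentric coordinates of point p in triangle tri (3x2 array)."""
    a, b, c = tri
    T = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    try:
        lam = np.linalg.solve(T, np.asarray(p, float) - np.asarray(a, float))
    except np.linalg.LinAlgError:
        return None                      # degenerate triangle
    return (1.0 - lam.sum(), lam[0], lam[1])

def select_lidar_points(nodes, elems, wet, elev, points, pt_z):
    """Steps (i)-(iii): flag elements cut by the wet/dry envelope, bin the
    topometric (LIDAR) points into them, and keep the points that are more
    elevated than the element's current nodal elevations.  The kept points
    would then feed the re-triangulation of step (iv)."""
    selected = []
    for e in elems:
        flags = wet[list(e)]
        if flags.all() or (~flags).all():
            continue                     # (i) element not cut by the envelope
        tri = nodes[list(e)]
        zmax = elev[list(e)].max()
        for p, z in zip(points, pt_z):
            lam = barycentric(tri, p)    # (ii) project point into the element
            if lam is not None and min(lam) >= -1e-12 and z > zmax:
                selected.append((tuple(p), z))   # (iii) keep elevated points
    return selected
```

A spatial index (e.g., binning points into a background grid) would be needed for the million-point clouds mentioned above; the double loop here is kept only for clarity.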

    Modelling of multiphase flows on adaptive unstructured meshes with applications to the dynamics of volcanic ash plumes

    No full text
    This thesis describes the development of two numerical models for the study of (1) incompressible multiphase flow and (2) compressible multiphase flow. Both models employ a state-of-the-art adaptive unstructured mesh-based approach which allows the mesh, upon which the model equations are discretised, to be optimised in order to focus numerical resolution in areas important to the dynamics and decrease it where it is not needed as a simulation progresses. The implementation of the models takes place within a computational fluid dynamics code called Fluidity. The application of the models concerns the multi-scale simulation of volcanic ash transport in aqueous solutions and in the atmosphere. Simulations of ash settling in a water tank, which mimic published laboratory experiments, are performed primarily in two dimensions. The results demonstrate that ash particles can either settle slowly and individually, or rapidly and collectively as an ash-laden cloud, referred to as a plume. Two quantities used to measure the tendency for plumes to form are then evaluated with a parameter study. Particles settling collectively are slowed by inertial drag, rather than viscous drag, and it is shown that such quantities must account for this. An improvement to the measures is proposed, along with an alternative measure which uses a more accurate expression for the collective settling timescale. Finally, a two-dimensional kilometre-scale volcanic eruption of hot gas and ash into the atmosphere is simulated. The results are compared with those from MFIX, a leading multiphase flow code. Both Fluidity and MFIX are able to successfully capture the key characteristics of an eruption event. 
The benefits of the adaptive unstructured mesh-based approach are highlighted throughout the thesis by demonstrating that it maintains solution accuracy whilst providing a substantial reduction in computational requirements when compared to the same simulation performed using a fixed mesh of uniform resolution. Open Access
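The abstract notes that collectively settling particles are slowed by inertial rather than viscous drag. A minimal sketch of that distinction is the terminal-velocity balance for a single sphere, iterated with the Schiller-Naumann drag correlation (a standard empirical law, not necessarily the one used in the thesis); the particle and fluid parameters below are illustrative:

```python
import math

def settling_velocity(d, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity of a sphere, from the drag balance
    w^2 = 4 g d (rho_p/rho_f - 1) / (3 Cd(Re)), with the Schiller-Naumann
    drag law Cd = 24/Re * (1 + 0.15 Re^0.687), valid for Re below ~1000."""
    w = 1e-4                                  # initial guess (m/s)
    for _ in range(200):
        Re = max(rho_f * w * d / mu, 1e-12)
        Cd = 24.0 / Re * (1.0 + 0.15 * Re ** 0.687)
        w_new = math.sqrt(4.0 * g * d * (rho_p / rho_f - 1.0) / (3.0 * Cd))
        if abs(w_new - w) < 1e-12:
            break
        w = 0.5 * (w + w_new)                 # damped fixed-point update
    return w

w_fine = settling_velocity(50e-6, 2500.0)     # 50 micron ash grain: Re << 1
w_coarse = settling_velocity(500e-6, 2500.0)  # 500 micron ash grain: Re ~ 40
```

In the viscous (Stokes) limit the velocity scales with d², so a tenfold larger grain would settle 100 times faster; the inertial correction in Cd reduces that ratio substantially, which is why measures of collective settling must account for the drag regime.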

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Get PDF
    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. It is the aim of the seminar to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described

    Generalized averaged Gaussian quadrature and applications

    Get PDF
    A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas will be presented. These formulas exist in many cases in which real positive Gauss–Kronrod formulas do not exist, and can be used as an adequate alternative in order to estimate the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal
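The construction can be sketched for the Legendre weight using Laurie's plain averaged rule, i.e. the mean of the n-point Gauss rule and the (n+1)-point anti-Gaussian rule obtained by doubling the last recurrence coefficient in the Jacobi matrix. This is the simpler averaged formula rather than the optimal generalized variant studied in the abstract, and the parameters are illustrative:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def anti_gauss_legendre(n):
    """(n+1)-point anti-Gaussian rule for the Legendre weight on [-1, 1]:
    same Jacobi matrix as the (n+1)-point Gauss rule, but with the last
    recurrence coefficient beta_n doubled (Laurie's construction)."""
    k = np.arange(1, n + 1)
    beta = k**2 / (4.0 * k**2 - 1.0)          # Legendre recurrence coefficients
    beta[-1] *= 2.0                           # the anti-Gaussian modification
    J = np.diag(np.sqrt(beta), 1) + np.diag(np.sqrt(beta), -1)
    nodes, vecs = np.linalg.eigh(J)           # Golub-Welsch: eigen-decomposition
    weights = 2.0 * vecs[0, :] ** 2           # mu_0 = 2 for the Legendre weight
    return nodes, weights

def averaged_gauss(f, n):
    """Averaged rule (G_n + A_{n+1})/2 and an error estimate for G_n."""
    xg, wg = leggauss(n)
    xa, wa = anti_gauss_legendre(n)
    Gn = wg @ f(xg)
    An1 = wa @ f(xa)
    avg = 0.5 * (Gn + An1)
    return avg, abs(avg - Gn)

exact = np.e - np.exp(-1)                     # integral of exp over [-1, 1]
avg, est = averaged_gauss(np.exp, 5)
```

Whether the anti-Gaussian and averaged nodes fall inside [-1, 1] is precisely the internality question the abstract investigates for the optimal generalized formulas.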