
    Non-modal analysis of spectral element methods: Towards accurate and robust large-eddy simulations

    We introduce a non-modal analysis technique that characterizes the diffusion properties of spectral element methods for linear convection-diffusion systems. While strictly speaking only valid for linear problems, the analysis is devised so that it can give critical insights on two questions: (i) Why do spectral element methods suffer from stability issues in under-resolved computations of nonlinear problems? And (ii) why do they successfully predict under-resolved turbulent flows even without a subgrid-scale model? The answers to these two questions can in turn provide crucial guidelines for constructing more robust and accurate schemes for the complex under-resolved flows commonly found in industrial applications. For illustration purposes, this analysis technique is applied to hybridized discontinuous Galerkin methods as representatives of spectral element methods. The effect of the polynomial order, the upwinding parameter and the Péclet number on the so-called short-term diffusion of the scheme is investigated. From a purely non-modal analysis point of view, polynomial orders between 2 and 4 with standard upwinding are well suited for under-resolved turbulence simulations. For lower polynomial orders, diffusion is introduced in scales that are much larger than the grid resolution. For higher polynomial orders, as well as for strong under- or over-upwinding, robustness issues can be expected. The non-modal analysis results are then tested against under-resolved turbulence simulations of the Burgers, Euler and Navier-Stokes equations. While devised in the linear setting, our non-modal analysis succeeds in predicting the behavior of the scheme in the nonlinear problems considered.
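A minimal numpy sketch of the short-term diffusion idea described above, using a first-order upwind finite-difference operator as a stand-in for the paper's hybridized DG discretization (the operator, grid size and parameters are illustrative assumptions, not the paper's): the short-term diffusion of a wavenumber is the instantaneous decay rate of the solution norm at t = 0, which depends on the symmetric part of the semi-discrete operator rather than on its eigenvalues.

```python
import numpy as np

def semidiscrete_operator(n, c=1.0, nu=0.01):
    """Periodic semi-discrete operator L for u_t = L u arising from
    u_t + c u_x = nu u_xx, with first-order upwind convection and
    centered diffusion (illustrative stand-in for a DG operator)."""
    h = 1.0 / n
    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] -= c / h + 2 * nu / h**2        # upwind + diffusion diagonal
        L[i, (i - 1) % n] += c / h + nu / h**2  # upwind neighbor + diffusion
        L[i, (i + 1) % n] += nu / h**2          # diffusion neighbor
    return L

def short_term_diffusion(L, k):
    """Instantaneous decay rate d/dt log||u|| at t = 0 for a sinusoidal
    initial condition of wavenumber k: the Rayleigh quotient of the
    symmetric part of L (a non-modal, short-time quantity)."""
    n = L.shape[0]
    u = np.sin(2 * np.pi * k * np.arange(n) / n)
    return u @ (0.5 * (L + L.T)) @ u / (u @ u)

n, nu, k = 64, 0.01, 1
L = semidiscrete_operator(n, nu=nu)
exact = -nu * (2 * np.pi * k)**2   # physical decay rate of this mode
rate = short_term_diffusion(L, k)  # scheme's short-term rate (more diffusive)
```

Comparing `rate` with `exact` isolates the numerical diffusion the upwinding adds on top of the physical viscosity, per wavenumber, without any eigenvalue computation.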

    Solid-shell finite element models for explicit simulations of crack propagation in thin structures

    Crack propagation in thin shell structures due to cutting is conveniently simulated using explicit finite element approaches, in view of the high nonlinearity of the problem. Solid-shell elements are usually preferred for the discretization in the presence of complex material behavior and degradation phenomena such as delamination, since they allow for a correct representation of the thickness geometry. However, in solid-shell elements the small thickness leads to a very high maximum eigenfrequency, which implies very small stable time-steps. A new selective mass scaling technique is proposed to increase the time-step size without affecting accuracy. New "directional" cohesive interface elements are used in conjunction with selective mass scaling to account for the interaction with a sharp blade in cutting processes of thin ductile shells.
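A minimal sketch of the stability argument behind mass scaling, assuming a 1D spring-mass chain as a stand-in for the stiff thickness direction of a solid-shell element (not the thesis's actual scaling operator): the explicit central-difference time step is limited by dt <= 2/omega_max, and a selective scaling that adds artificial mass only to relative (non-rigid-body) nodal motion lowers omega_max without changing the translational mass.

```python
import numpy as np

def assemble(n_el, k=1e6, m=1.0):
    """1D chain of n_el stiff springs (mimicking the thin thickness
    direction) with a lumped mass matrix."""
    n = n_el + 1
    K, M = np.zeros((n, n)), np.zeros((n, n))
    ke = k * np.array([[1., -1.], [-1., 1.]])
    for e in range(n_el):
        K[e:e + 2, e:e + 2] += ke
        M[e, e] += m / 2
        M[e + 1, e + 1] += m / 2
    return K, M

def selective_mass_scaling(M, n_el, beta, m=1.0):
    """Add artificial mass that penalizes only relative nodal motion,
    so rigid-body translation keeps the physical mass."""
    Ms = M.copy()
    ae = beta * m * np.array([[1., -1.], [-1., 1.]])
    for e in range(n_el):
        Ms[e:e + 2, e:e + 2] += ae
    return Ms

def critical_dt(K, M):
    """Explicit central-difference stability limit dt <= 2 / omega_max."""
    w2 = np.linalg.eigvals(np.linalg.solve(M, K)).real
    return 2.0 / np.sqrt(w2.max())

K, M = assemble(8)
Ms = selective_mass_scaling(M, 8, beta=10.0)
dt1, dt2 = critical_dt(K, M), critical_dt(K, Ms)  # dt2 is several times dt1
```

The scaling matrix annihilates the constant (rigid-body) vector, so total momentum is unaffected while the highest eigenfrequency, dominated by relative thickness-direction motion, drops sharply.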

    Efficient antenna modeling by DGTD

    The work described in this article is partially funded by the Spanish National Projects TEC2013-48414-C3-01, CSD2008-00068, P09-TIC-5327, and P12-TIC-1442 and by the GENIL Excellence Network.

    An unconditionally stable algorithm for generalized thermoelasticity based on operator-splitting and time-discontinuous Galerkin finite element methods

    An efficient time-stepping algorithm is proposed based on operator-splitting and the space–time discontinuous Galerkin finite element method for problems in the non-classical theory of thermoelasticity. The non-classical theory incorporates three models: the classical theory based on Fourier's law of heat conduction, resulting in a hyperbolic–parabolic coupled system; a non-classical, fully hyperbolic extension; and a combination of the two. The general problem is split into two contractive sub-problems, namely the mechanical phase and the thermal phase, each of which is discretized using the space–time discontinuous Galerkin finite element method. The stability of the sub-problems then leads to unconditional stability of the global product algorithm. A number of numerical examples are presented to demonstrate the performance and capability of the method.
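A toy numpy illustration of the splitting argument above (backward Euler standing in for the space–time DG sub-solvers, and generic dissipative matrices standing in for the mechanical and thermal operators, all illustrative assumptions): each phase is advanced by a contractive sub-solver, and the Lie product of the two phases then cannot amplify the solution norm for any time step size.

```python
import numpy as np

def implicit_euler(Lmat, u, dt):
    """One backward-Euler step for u' = Lmat u: solve (I - dt L) u_new = u.
    Contractive whenever Lmat is dissipative (symmetric part <= 0)."""
    n = len(u)
    return np.linalg.solve(np.eye(n) - dt * Lmat, u)

def lie_split_step(A, B, u, dt):
    """Lie operator splitting for u' = (A + B) u: a mechanical-phase
    solve with A followed by a thermal-phase solve with B. The product
    of two contractive sub-steps is contractive, hence unconditionally
    stable."""
    u = implicit_euler(A, u, dt)  # phase 1 (e.g. mechanical)
    u = implicit_euler(B, u, dt)  # phase 2 (e.g. thermal)
    return u

rng = np.random.default_rng(0)
n = 20
Ra, Rb = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = -(Ra @ Ra.T)  # toy dissipative "mechanical" operator
B = -(Rb @ Rb.T)  # toy dissipative "thermal" operator
u = rng.standard_normal(n)
norms = [np.linalg.norm(u)]
for _ in range(5):
    u = lie_split_step(A, B, u, dt=100.0)  # huge time step, still stable
    norms.append(np.linalg.norm(u))
```

Even with a time step far beyond any explicit limit, the norm sequence is non-increasing, which is the essence of the unconditional stability of the product algorithm.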

    Numerical simulation of flooding from multiple sources using adaptive anisotropic unstructured meshes and machine learning methods

    Over the past few decades, urban floods have been gaining more attention due to their increase in frequency. To provide reliable flood predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain incurs a high computational burden. In this thesis, a 2D control-volume and finite-element (DCV-FEM) flood model using adaptive unstructured mesh technology has been developed. This technique enables the mesh to be adapted optimally in time and space in response to the evolving flow features, providing sufficient resolution where and when it is required: the mesh is dynamically modified (both coarsened and refined) to achieve a desired precision, capturing the details of local flows and the wetting-drying front, representing complex topographic features accurately during the flooding process, and reducing the computational cost. A flooding event that occurred in 2002 in Glasgow, Scotland, United Kingdom has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and the results have been compared with previously published 2D and 3D results. The comparison shows that the 2D adaptive mesh model provides accurate results at a low computational cost. The above adaptive mesh flooding model (named Floodity) has been further developed by introducing (1) an anisotropic dynamic mesh optimization technique (anisotropic-DMO); (2) multiple flooding sources (extreme rainfall and sea-level events); and (3) a unique combination of anisotropic-DMO and high-resolution Digital Terrain Model (DTM) data.
It has been applied to a densely urbanized area within Greve, Denmark. Results from MIKE 21 FM are used to validate the model. To assess uncertainties in the model predictions, a sensitivity analysis of the flooding results with respect to extreme sea levels, rainfall and mesh resolution has been undertaken. The use of anisotropic-DMO enables high-resolution topographic features (buildings, rivers and streets) to be captured only where and when they are needed, providing more accurate flood predictions at a reduced computational cost, and allows the evolving flow features (wetting-drying fronts) to be better captured. To provide real-time spatio-temporal flood predictions, an integrated long short-term memory (LSTM) and reduced-order model (ROM) framework has been developed. This integrated LSTM-ROM can represent the spatio-temporal distribution of floods since it takes advantage of both the ROM and the LSTM. To reduce the dimensionality of the large spatial datasets fed to the LSTM, the proper orthogonal decomposition (POD) and singular value decomposition (SVD) approaches are introduced. The performance of the LSTM-ROM developed here has been evaluated using the Okushiri tsunami as a test case. The results obtained from the LSTM-ROM have been compared with those from the full model (Fluidity). Promising results indicate that the LSTM-ROM can provide flood predictions in seconds, enabling real-time flood prediction and timely warning of the public, reducing injuries and fatalities. Additionally, data-driven optimal sensing for reconstruction (DOSR) and data assimilation (DA) have been further introduced into the LSTM-ROM. This linkage between modelling and experimental data/observations allows model errors to be minimized and uncertainties to be quantified, thus improving the accuracy of the modelling.
It should be noted that once the DA approach is introduced, the prediction errors are significantly reduced at the time levels at which an assimilation procedure is conducted, which illustrates the ability of DOSR-LSTM-DA to significantly improve the model performance. By using DOSR-LSTM-DA, the predictive horizon can be extended to three times the initial horizon. More importantly, the online CPU cost of DOSR-LSTM-DA is only 1/3 of the cost required to run the full model.
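The POD/SVD reduction step described above can be sketched in a few lines of numpy (the snapshot data and mode count here are illustrative assumptions): the snapshot matrix is factorized by a thin SVD, the leading left singular vectors form the spatial POD basis, and the resulting low-dimensional coefficient trajectories are what a sequence model such as an LSTM would be trained to advance in time.

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD via thin SVD of the snapshot matrix (n_points x n_times).
    Returns the first r spatial modes, the reduced coefficients a
    sequence model would learn from, and the singular values."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    Phi = U[:, :r]              # spatial POD modes
    coeffs = Phi.T @ snapshots  # r x n_times reduced trajectories
    return Phi, coeffs, s

def reconstruct(Phi, coeffs):
    """Lift reduced coefficients back to full spatial fields."""
    return Phi @ coeffs

# toy "flood depth" snapshots: two travelling waves on 200 grid points
x = np.linspace(0.0, 1.0, 200)[:, None]
t = np.linspace(0.0, 1.0, 50)[None, :]
S = np.sin(2 * np.pi * (x - t)) + 0.3 * np.sin(6 * np.pi * (x + t))
Phi, a, s = pod_basis(S, r=4)
err = np.linalg.norm(S - reconstruct(Phi, a)) / np.linalg.norm(S)
```

The singular value decay tells you how many modes are needed; here the toy field has rank 4, so four modes reconstruct it essentially exactly, and the LSTM would only need to predict four coefficients per time level instead of 200 nodal values.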

    Divergence error based p-adaptive discontinuous Galerkin solution of time-domain Maxwell's equations

    A p-adaptive discontinuous Galerkin time-domain method is developed to obtain high-order solutions to electromagnetic scattering problems. A novel feature of the proposed method is the use of the divergence error to drive the p-adaptivity. The nature of the divergence error is explored, and it is established that it is a direct consequence of the act of discretization. Its relation to the relative truncation error is derived, which enables the use of the divergence error as an inexpensive proxy for the truncation error. The divergence error is used as an indicator to dynamically identify and assign spatial operators of varying accuracy to substantial regions of the computational domain. This results in a lower computational cost than a comparable discontinuous Galerkin time-domain solution using uniform-degree piecewise polynomial bases throughout. Comment: 28 pages, 22 figures.
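A rough numpy illustration of the adaptation idea above, with a uniform finite-difference grid standing in for the DG mesh and hypothetical helper names (none of this is the paper's actual formulation): in source-free regions the divergence of the discrete electric field should vanish, so its residual is a cheap per-cell error indicator that can drive where the local polynomial degree is raised.

```python
import numpy as np

def divergence_error(Ex, Ey, h):
    """Cell-wise |div E| on a uniform grid via central differences.
    For source-free Maxwell problems div E should be zero, so the
    residual measures local discretization error."""
    dExdx = (Ex[2:, 1:-1] - Ex[:-2, 1:-1]) / (2 * h)
    dEydy = (Ey[1:-1, 2:] - Ey[1:-1, :-2]) / (2 * h)
    return np.abs(dExdx + dEydy)

def adapt_orders(p, div_err, raise_frac=0.2, p_max=6):
    """Raise the local polynomial degree in the cells whose divergence
    error is in the top raise_frac fraction; cap at p_max."""
    thresh = np.quantile(div_err, 1.0 - raise_frac)
    p = p.copy()
    mask = div_err >= thresh
    p[mask] = np.minimum(p[mask] + 1, p_max)
    return p

# usage: divergence-free background plus a localized perturbation
n = 32
h = 1.0 / n
xs = (np.arange(n) + 0.5) * h
X, Y = np.meshgrid(xs, xs, indexing="ij")
Ex = np.sin(2 * np.pi * Y) + np.exp(-((X - 0.5)**2 + (Y - 0.5)**2) / 0.01)
Ey = np.cos(2 * np.pi * X)
err = divergence_error(Ex, Ey, h)
p = adapt_orders(np.full(err.shape, 2), err)  # degree raised near the bump
```

Only the cells around the localized perturbation, where the divergence residual is large, get a higher degree; the far field keeps the cheap low-order operator, which is the source of the cost reduction the abstract describes.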

    CUDA-C implementation of the ADER-DG method for linear hyperbolic PDEs
