Optimally Adapted Meshes for Finite Elements of Arbitrary Order and W1p Norms
Given a function f defined on a two-dimensional bounded domain and a positive
integer N, we study the properties of the triangulation that minimizes the
distance between f and its interpolation on the associated finite element
space, over all triangulations of at most N elements. The error is studied in
the W1p norm and we consider Lagrange finite elements of arbitrary polynomial
order m-1. We establish sharp asymptotic error estimates as N tends to infinity
when the optimal anisotropic triangulation is used. A similar problem has been
studied earlier, but with the error measured in the Lp norm. The extension of
this analysis to the W1p norm is crucial in order to match more closely the
needs of numerical PDE analysis, and it is not straightforward. In particular,
the meshes which satisfy the optimal error estimate are characterized by a
metric describing the local aspect ratio of each triangle and by a geometric
constraint on their maximal angle, a second feature that does not appear for
the Lp error norm. Our analysis also provides practical strategies for
designing meshes such that the interpolation error satisfies the optimal
estimate up to a fixed multiplicative constant. We discuss the extension of our
results to finite elements on simplicial partitions of a domain of arbitrary
dimension, and we provide some numerical illustrations in two dimensions.

Comment: 37 pages, 6 figures
Continuous Mesh Model and Well-Posed Continuous Interpolation Error Estimation
INRIA research report. In the context of mesh adaptation, Riemannian metric spaces have been used to prescribe the orientation, density and stretching of anisotropic meshes. Such structures are used to compute lengths in adaptive mesh generators. In this report, a Riemannian metric space is shown to be more than a way to compute a distance: it is proven to be a reliable continuous mesh model. In particular, we demonstrate that the linear interpolation error can be derived continuously for a continuous mesh. In its tangent space, a Riemannian metric space reduces to a constant metric tensor, so that it simply spans a metric space. Metric tensors are then used to continuously model discrete elements. On this basis, geometric invariants have been extracted; they connect a metric tensor to the set of all the discrete elements which can be represented by this metric. As the behavior of a Riemannian metric space is obtained by patching together the behavior of each of its tangent spaces, the global mesh model arises from gathering together continuous element models.

We complete the continuous-discrete analogy by providing a continuous interpolation error estimate and a well-posed definition of the continuous linear interpolate. The latter is based on an exact relation connecting the discrete error to the continuous one. On the one hand, this new continuous framework frees the analysis from topological mesh constraints. On the other hand, powerful mathematical tools that are not defined on the space of discrete meshes are available and well defined on the space of continuous meshes: calculus of variations, differentiation, optimization, ...
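The continuous element model can be made concrete: a constant metric tensor represents exactly those triangles whose three edges all have unit length when measured in the metric. A small sketch of that unit-element test, assuming a constant metric (illustrative only, not the report's code):

```python
import numpy as np

def is_unit_element(tri, M, tol=0.05):
    """Check whether a triangle is (approximately) a unit element for
    metric M, i.e. all three edges have metric length 1 -- the discrete
    notion that the continuous element model encodes."""
    v = [np.asarray(p, float) for p in tri]
    edges = [v[1] - v[0], v[2] - v[1], v[0] - v[2]]
    lengths = [np.sqrt(e @ M @ e) for e in edges]
    return all(abs(length - 1.0) < tol for length in lengths)

# An equilateral triangle of side 1 is a unit element for the identity
# metric, but not for a metric that doubles every length.
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, np.sqrt(3) / 2)]
print(is_unit_element(tri, np.eye(2)))      # True
print(is_unit_element(tri, 4 * np.eye(2)))  # False
```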
A Toy Model for Testing Finite Element Methods to Simulate Extreme-Mass-Ratio Binary Systems
Extreme mass ratio binary systems, binaries involving stellar mass objects
orbiting massive black holes, are considered to be a primary source of
gravitational radiation to be detected by the space-based interferometer LISA.
The numerical modelling of these binary systems is extremely challenging
because the scales involved expand over several orders of magnitude. One needs
to handle large wavelength scales comparable to the size of the massive black
hole and, at the same time, to resolve the scales in the vicinity of the small
companion where radiation reaction effects play a crucial role. Adaptive finite
element methods, in which quantitative control of errors is achieved
automatically by finite element mesh adaptivity based on a posteriori error
estimation, are a natural choice that has great potential for achieving the
high level of adaptivity required in these simulations. To demonstrate this, we
present the results of simulations of a toy model, consisting of a point-like
source orbiting a black hole under the action of a scalar gravitational field.

Comment: 29 pages, 37 figures. RevTeX 4.0. Minor changes to match the
published version
Very High Order Anisotropic Metric-Based Mesh Adaptation in 3D
In this paper, we study the extension of anisotropic metric-based mesh adaptation to the case of very high-order solutions in 3D. This work is based on an extension of the continuous mesh framework and multi-scale mesh adaptation where the optimal metric is derived through a calculus of variations. Based on classical high-order a priori error estimates, the point-wise leading term of the local error is a homogeneous polynomial of order k+1. To derive the leading anisotropic directions and orientations, this polynomial is approximated by a quadratic positive definite form, taken to the power (k+1)/2. From a geometric point of view, this problem is equivalent to finding a maximal-volume ellipsoid included in the unit level set of the absolute value of the polynomial. This optimization problem is strongly non-linear, both in the functional and in the constraints. We first recast the continuous problem in a discrete setting in the metric-logarithm space. With this approximation, the problem becomes linear and is solved with the simplex algorithm. The optimal quadratic form in the Euclidean space is then found by iteratively solving a sequence of such log-simplex problems. From the field of local quadratic forms representing the high-order error, a calculus of variations is used to globally control the error in the Lp norm. A closed form of the optimal metric is then found. Anisotropic meshes are then generated with this metric based on the unit mesh concept. For the numerical experiments, we consider several analytical functions in 3D. Convergence rates and optimality of the meshes are then discussed for interpolation of orders 1 to 5.
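The metric-logarithm space the abstract mentions is the natural setting for combining metric tensors, since linear operations on matrix logarithms remain consistent with the Riemannian structure. A hedged sketch of a log-Euclidean mean of metrics, the standard operation in that space (illustrative only; this is not the paper's log-simplex solver):

```python
import numpy as np

def log_euclidean_mean(metrics, weights):
    """Weighted average of symmetric positive definite metric tensors
    computed in the metric-logarithm space: map each metric through the
    matrix logarithm, average linearly, and map back through the matrix
    exponential. Assumes every input metric is SPD."""
    logs = []
    for M in metrics:
        lam, V = np.linalg.eigh(M)                   # SPD: lam > 0
        logs.append(V @ np.diag(np.log(lam)) @ V.T)  # matrix logarithm
    L = sum(w * Lg for w, Lg in zip(weights, logs))  # linear step in log space
    lam, V = np.linalg.eigh(L)
    return V @ np.diag(np.exp(lam)) @ V.T            # matrix exponential

# The log-Euclidean mean of two isotropic metrics is their geometric
# mean: blending diag(1, 1) and diag(4, 4) equally yields diag(2, 2),
# whereas a naive arithmetic mean would give diag(2.5, 2.5).
Mbar = log_euclidean_mean([np.diag([1.0, 1.0]), np.diag([4.0, 4.0])],
                          [0.5, 0.5])
print(np.allclose(Mbar, 2 * np.eye(2)))  # True
```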
An automated reliable method for two-dimensional Reynolds-Averaged Navier-Stokes simulations
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2011. Includes bibliographical references (p. 171-180).

The development of computational fluid dynamics algorithms and increased computational resources have led to the ability to perform complex aerodynamic simulations. Obstacles remain which prevent autonomous and reliable simulations at the accuracy levels required for engineering. For the solution strategy to be considered autonomous and reliable, high-quality solutions must be provided without user interaction or detailed prior knowledge of the flow to facilitate either adaptation or solver robustness. One such solution strategy is presented for two-dimensional Reynolds-averaged Navier-Stokes (RANS) flows and is based on: a higher-order discontinuous Galerkin finite element method, which enables higher accuracy with fewer degrees of freedom than lower-order methods; an output-based error estimation and adaptation scheme, which provides a quantifiable measure of solution accuracy and autonomously drives toward an improved discretization; a non-linear solver technique based on pseudo-time continuation and line-search update limiting, which improves robustness for solutions of the RANS equations; and a simplex cut-cell mesh generation method, which autonomously provides higher-order meshes of complex geometries. The simplex cut-cell mesh generation method presented here extends previously developed methods to improve robustness, with the goal of enabling RANS simulations. In particular, analysis is performed to expose the impact of small volume ratios between arbitrarily cut elements on linear system conditioning and solution quality. Merging the small cut element into its larger neighbor is identified as a solution that alleviates the consequences of small volume ratios.

For arbitrarily cut elements, randomness in the algorithm for generating integration rules is identified as a limiting factor for accuracy, and recognition of canonical element shapes is introduced to remove the randomness. The cut-cell method is linked with line-search based update limiting for improved non-linear solver robustness, and with Riemannian-metric-based anisotropic adaptation to efficiently resolve anisotropic features with arbitrary orientations in RANS flows. A fixed-fraction marking strategy is employed to redistribute element areas and step toward meshes which equidistribute elemental errors at a fixed degree of freedom. The benefit of the higher spatial accuracy and the solution efficiency (defined as accuracy per degree of freedom) is exhibited for a wide range of RANS applications, from subsonic through supersonic flows. The higher-order discretizations provide more accurate solutions than second-order methods at the same degree of freedom. Furthermore, the cut-cell meshes demonstrate solution efficiency comparable to boundary-conforming meshes while significantly decreasing the burden of mesh generation for a CFD user.

by James M. Modisette. Ph.D.
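The fixed-fraction marking strategy mentioned above is simple to state: sort elements by their error indicator, refine a fixed fraction with the largest indicators, and coarsen a fixed fraction with the smallest. A minimal sketch, with the caveat that the fraction values and function names are assumptions, not the thesis's code:

```python
def fixed_fraction_marking(indicators, refine_frac=0.1, coarsen_frac=0.1):
    """Fixed-fraction marking: return the index sets of elements to
    refine (largest error indicators) and to coarsen (smallest),
    redistributing resolution toward error equidistribution."""
    order = sorted(range(len(indicators)), key=lambda i: indicators[i],
                   reverse=True)                    # descending by error
    n = len(order)
    refine = set(order[:int(refine_frac * n)])      # biggest contributors
    coarsen = set(order[n - int(coarsen_frac * n):])  # smallest contributors
    return refine, coarsen

# Four elements, refine/coarsen the top and bottom quarter each:
refine, coarsen = fixed_fraction_marking([3.0, 0.1, 7.0, 1.0],
                                         refine_frac=0.25,
                                         coarsen_frac=0.25)
print(refine, coarsen)  # element 2 is refined, element 1 is coarsened
```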
Linearization Errors in Discrete Goal-Oriented Error Estimation
Goal-oriented error estimation provides the ability to approximate the
discretization error in a chosen functional quantity of interest. Adaptive mesh
methods provide the ability to control this discretization error to obtain
accurate quantity of interest approximations while still remaining
computationally feasible. Traditional discrete goal-oriented error estimates
incur linearization errors in their derivation. In this paper, we investigate
the role of linearization errors in adaptive goal-oriented error simulations.
In particular, we develop a novel two-level goal-oriented error estimate that
is free of linearization errors. Additionally, we highlight how linearization
errors can facilitate the verification of the adjoint solution used in
goal-oriented error estimation. We then verify the newly proposed error
estimate by applying it to a model nonlinear problem for several quantities of
interest and further highlight its asymptotic effectiveness as mesh sizes are
reduced. In an adaptive mesh context, we then compare the newly proposed
estimate to a more traditional two-level goal-oriented error estimate. We
highlight that accounting for linearization errors in the error estimate can
improve its effectiveness in certain situations and demonstrate that localizing
linearization errors can lead to more nearly optimal adapted meshes.
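The discrete goal-oriented estimate underlying this line of work is the adjoint-weighted residual: for a linear problem the estimate is exact, and linearization errors appear only once the problem becomes nonlinear. A sketch of the linear baseline case (illustrative, not the paper's code):

```python
import numpy as np

def adjoint_weighted_residual(A, f, g, u_h):
    """Goal-oriented (adjoint-weighted residual) estimate of the error
    in the functional J(u) = g @ u for the linear system A u = f, given
    an approximate solution u_h. Since the problem is linear, the
    estimate psi @ r equals the true functional error exactly -- no
    linearization error arises in this baseline case."""
    psi = np.linalg.solve(A.T, g)   # discrete adjoint solution
    r = f - A @ u_h                 # residual of the approximate solution
    return float(psi @ r)

A = np.array([[4.0, 1.0], [1.0, 3.0]])
f = np.array([1.0, 2.0])
g = np.array([1.0, 0.0])            # QoI: first component of u
u_h = np.array([0.0, 0.5])          # crude approximate solution
u = np.linalg.solve(A, f)           # exact discrete solution

estimate = adjoint_weighted_residual(A, f, g, u_h)
print(np.isclose(estimate, g @ (u - u_h)))  # True: estimate is exact here
```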
E2N: Error Estimation Networks for Goal-Oriented Mesh Adaptation
Given a partial differential equation (PDE), goal-oriented error estimation
allows us to understand how errors in a diagnostic quantity of interest (QoI),
or goal, occur and accumulate in a numerical approximation, for example using
the finite element method. By decomposing the error estimates into
contributions from individual elements, it is possible to formulate adaptation
methods, which modify the mesh with the objective of minimising the resulting
QoI error. However, the standard error estimate formulation involves the true
adjoint solution, which is unknown in practice. As such, it is common practice
to approximate it with an 'enriched' approximation (e.g. in a higher order
space or on a refined mesh). Doing so generally results in a significant
increase in computational cost, which can be a bottleneck compromising the
competitiveness of (goal-oriented) adaptive simulations. The central idea of
this paper is to develop a "data-driven" goal-oriented mesh adaptation approach
through the selective replacement of the expensive error estimation step with
an appropriately configured and trained neural network. In doing so, the error
estimator may be obtained without even constructing the enriched spaces. An
element-by-element construction is employed here, whereby local values of
various parameters related to the mesh geometry and underlying problem physics
are taken as inputs, and the corresponding contribution to the error estimator
is taken as output. We demonstrate that this approach is able to obtain the
same accuracy with a reduced computational cost, for adaptive mesh test cases
related to flow around tidal turbines, which interact via their downstream
wakes, and where the overall power output of the farm is taken as the QoI.
Moreover, we demonstrate that the element-by-element approach implies
reasonably low training costs.

Comment: 27 pages, 14 figures
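The element-by-element construction amounts to learning a regression from per-element features to that element's error-estimator contribution, so the enriched adjoint solve can be skipped at prediction time. The dependency-free sketch below substitutes a linear least-squares fit for the paper's neural network, and the feature set and target form are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: per-element features (here an element size h
# and a solution-gradient magnitude, both assumed for illustration) and
# a "true" error contribution used only to generate the targets.
n_train = 200
features = rng.uniform(0.0, 1.0, size=(n_train, 2))   # columns: h, |grad u|
target = 0.5 * features[:, 0] * features[:, 1]        # assumed error model

# Fit a linear model on a hand-picked product basis; the paper would
# use a trained neural network in place of this least-squares fit.
X = np.column_stack([features[:, 0] * features[:, 1]])
coef, *_ = np.linalg.lstsq(X, target, rcond=None)

def predict_contribution(h, grad):
    """Predict one element's error-estimator contribution from its
    features, without ever constructing an enriched adjoint space."""
    return float(coef[0] * h * grad)

# The noise-free data lets the fit recover the generating model exactly.
print(abs(predict_contribution(0.3, 0.4) - 0.5 * 0.3 * 0.4) < 1e-8)  # True
```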