
    A dual weighted residual method applied to complex periodic gratings

    An extension of the dual weighted residual (DWR) method to the analysis of electromagnetic waves in a periodic diffraction grating is presented. Using the α,0-quasi-periodic transformation, an upper bound for the a posteriori error estimate is derived. This is then used to solve the associated Helmholtz problem adaptively. The goal is to achieve acceptable accuracy in the computed diffraction efficiency while keeping the computational mesh relatively coarse. Numerical results illustrate the advantage of DWR over the global a posteriori error estimate approach. The application of the method in biomimetics, to address the complex diffraction geometry of the Morpho butterfly wing, is also discussed.
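    The idea behind DWR adaptivity can be sketched in one dimension: element indicators weight the strong residual of the primal problem by a dual (adjoint) solution tied to the output of interest. The sketch below uses a hypothetical 1D Poisson setup, not the paper's Helmholtz/grating formulation; all names and the dual-weight approximation are illustrative assumptions.

    ```python
    import numpy as np

    # Minimal DWR sketch for -u'' = f on [0,1] with piecewise-linear elements.
    # Hypothetical setup for illustration only, not the paper's formulation.
    def dwr_indicators(u, z, f, x):
        """Element indicators eta_K ~ |r_K(u)| * w_K(z) * h_K, where r_K is the
        cell-averaged strong residual and w_K a dual-solution weight."""
        h = np.diff(x)
        # u is piecewise linear, so u'' = 0 inside each cell: residual is just f
        r = 0.5 * (f[:-1] + f[1:])
        # dual weight: jump of z across the cell approximates z - (interpolant of z)
        w = 0.5 * np.abs(np.diff(z))
        return np.abs(r) * w * h

    x = np.linspace(0.0, 1.0, 11)
    f = np.ones_like(x)                 # source term f = 1
    u = 0.5 * x * (1.0 - x)             # exact primal solution, for illustration
    z = 0.5 * x * (1.0 - x)             # dual solution for the goal J(u) = (u, 1)
    eta = dwr_indicators(u, z, f, x)
    # an adaptive loop would refine where the indicators are largest and repeat
    marked = np.argsort(eta)[-3:]
    ```

    The point of the weighting is visible here: cells where the dual solution varies little contribute little to the goal error, so they are left coarse even if the residual there is nonzero.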

    A weighted reduced basis method for parabolic PDEs with random data

    This work considers a weighted POD-greedy method to estimate statistical outputs of parabolic PDE problems with parametrized random data. The key idea of weighted reduced basis methods is to weight the parameter-dependent error estimate according to a probability measure in the set-up of the reduced space. The error of stochastic finite element solutions is usually measured in a root mean square sense regarding their dependence on the stochastic input parameters. An orthogonal projection of a snapshot set onto a corresponding POD basis defines an optimum reduced approximation in terms of a Monte Carlo discretization of the root mean square error. The errors of a weighted POD-greedy Galerkin solution are compared against an orthogonal projection of the underlying snapshots onto a POD basis for a numerical example involving thermal conduction. In particular, it is assessed whether a weighted POD-greedy solution is able to come significantly closer to the optimum than a non-weighted equivalent. Additionally, the performance of a weighted POD-greedy Galerkin solution is considered with respect to the mean absolute error of an adjoint-corrected functional of the reduced solution.
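    The weighted POD construction mentioned above can be sketched directly: given snapshots at sampled parameters and probability weights, the basis comes from the eigendecomposition of a weighted correlation matrix, and orthogonal projection onto it is optimal in the (weighted) root-mean-square sense. This is a minimal sketch with illustrative names and random data, not the paper's thermal-conduction example.

    ```python
    import numpy as np

    # Weighted POD sketch: S holds snapshots as columns (n_dof x n_samples),
    # rho holds probability weights for the parameter samples.
    def weighted_pod(S, rho, tol=1e-10):
        # weighted correlation matrix: Monte Carlo estimate of E[u u^T]
        G = (S * rho) @ S.T
        vals, vecs = np.linalg.eigh(G)
        vals, vecs = vals[::-1], vecs[:, ::-1]   # eigh is ascending; reorder descending
        r = np.sum(vals > tol * vals[0])         # truncate at relative energy tol
        return vecs[:, :r], vals[:r]

    rng = np.random.default_rng(0)
    S = rng.standard_normal((50, 200))
    rho = np.full(200, 1.0 / 200)    # uniform weights reduce to plain Monte Carlo POD
    basis, energies = weighted_pod(S, rho)
    # orthogonal projection onto the basis: the optimal reduced approximation
    # of the snapshot set in the weighted root-mean-square sense
    P = basis @ (basis.T @ S)
    ```

    A non-uniform `rho` concentrates the basis on high-probability regions of the parameter space, which is exactly where a weighted greedy method aims its error estimate.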

    On a Cahn--Hilliard--Darcy system for tumour growth with solution dependent source terms

    We study the existence of weak solutions to a mixture model for tumour growth that consists of a Cahn--Hilliard--Darcy system coupled with an elliptic reaction-diffusion equation. The Darcy law gives rise to an elliptic equation for the pressure that is coupled to the convective Cahn--Hilliard equation through convective and source terms. Both Dirichlet and Robin boundary conditions are considered for the pressure variable, which allows for the source terms to be dependent on the solution variables.

    hp-DGFEM for Partial Differential Equations with Nonnegative Characteristic Form

    Presented as Invited Lecture at the International Symposium on Discontinuous Galerkin Methods: Theory, Computation and Applications, in Newport, RI, USA. We develop the error analysis for the hp-version of a discontinuous finite element approximation to second-order partial differential equations with nonnegative characteristic form. This class of equations includes classical examples of second-order elliptic and parabolic equations, first-order hyperbolic equations, as well as equations of mixed type. We establish an a priori error bound for the method which is of optimal order in the mesh size h and one order less than optimal in the polynomial degree p. In the particular case of a first-order hyperbolic equation, the error bound is optimal in h and half an order less than optimal in p.

    Bayesian calibration, validation and uncertainty quantification for predictive modelling of tumour growth: a tutorial

    In this work we present a pedagogical tumour growth example, in which we apply calibration and validation techniques to an uncertain, Gompertzian model of tumour spheroid growth. The key contribution of this article is the discussion and application of these methods (which are not commonly employed in the field of cancer modelling) in the context of a simple model, whose deterministic analogue is widely known within the community. In the course of the example we calibrate the model against experimental data that are subject to measurement errors, and then validate the resulting uncertain model predictions. We then analyse the sensitivity of the model predictions to the underlying measurement model. Finally, we propose an elementary learning approach for tuning a threshold parameter in the validation procedure in order to maximize the predictive accuracy of our validated model.
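    The calibration step described above can be sketched with the classical Gompertz growth law and a simple grid posterior over one uncertain parameter. The parameter values, noise model, and grid approach below are illustrative assumptions, not the article's actual calibration procedure or data.

    ```python
    import numpy as np

    # Gompertz growth law: V(t) = K * exp(ln(V0/K) * exp(-a*t)),
    # with carrying capacity K, initial volume V0, growth rate a (all hypothetical).
    def gompertz(t, a, K=2.0, V0=0.1):
        return K * np.exp(np.log(V0 / K) * np.exp(-a * t))

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 20)
    a_true, sigma = 0.5, 0.05
    # synthetic "experimental" data subject to Gaussian measurement error
    data = gompertz(t, a_true) + sigma * rng.standard_normal(t.size)

    # Bayesian calibration of 'a' on a grid, with a uniform prior on [0.1, 1]
    a_grid = np.linspace(0.1, 1.0, 181)
    loglik = np.array([-0.5 * np.sum((data - gompertz(t, a))**2) / sigma**2
                       for a in a_grid])
    da = a_grid[1] - a_grid[0]
    post = np.exp(loglik - loglik.max())
    post /= post.sum() * da                 # normalized grid posterior density
    a_map = a_grid[np.argmax(post)]         # point estimate for the validation step
    ```

    Validation would then compare the uncertain predictions of the calibrated model against held-out data, with the threshold parameter mentioned in the abstract controlling when predictions are declared valid.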

    A chronology of global air quality

    Air pollution has been recognized as a threat to human health since the time of Hippocrates, ca 400 BC. Successive written accounts of air pollution occur in different countries through the following two millennia until measurements, from the eighteenth century onwards, show the growing scale of poor air quality in urban centres and close to industry, and the chemical characteristics of the gases and particulate matter. The industrial revolution accelerated both the magnitude of emissions of the primary pollutants and the geographical spread of contributing countries as highly polluted cities became the defining issue, culminating with the great smog of London in 1952. Europe and North America dominated emissions and suffered the majority of adverse effects until the latter decades of the twentieth century, by which time the transboundary issues of acid rain, forest decline and ground-level ozone had become the main environmental and political air quality issues. As controls on emissions of sulfur and nitrogen oxides (SO2 and NOx) began to take effect in Europe and North America, emissions in East and South Asia grew strongly and dominated global emissions by the early years of the twenty-first century. The effects of air quality on human health had also returned to the top of the priorities by 2000 as new epidemiological evidence emerged. By this time, extensive networks of surface measurements and satellite remote sensing provided global measurements of both primary and secondary pollutants. Global emissions of SO2 and NOx peaked, respectively, in ca 1990 and 2018 and have since declined to 2020 as a result of widespread emission controls. By contrast, in the absence of actions to abate ammonia, its global emissions have continued to grow.

    A three-scale domain decomposition method for the 3D analysis of debonding in laminates

    The prediction of the quasi-static response of industrial laminate structures requires the use of fine descriptions of the material, especially when debonding is involved. Even when modeled at the mesoscale, the computation of these structures results in very large numerical problems. In this paper, the exact mesoscale solution is sought using parallel iterative solvers. The LaTIn-based mixed domain decomposition method makes it very easy to handle the complex description of the structure; moreover, the provided multiscale features enable us to deal with numerical difficulties at their natural scale. We present the various enhancements we developed to ensure the scalability of the method. An extension of the method designed to handle instabilities is also presented.

    Finite Element Modeling of Ultrasonic Waves in Viscoelastic Media

    Linear viscoelasticity theory offers a minimal framework within which to construct a consistent, linear and causal model of mechanical wave dispersion. The term dispersion is used here to imply temporal wave spreading and amplitude reduction due to absorptive material properties rather than due to geometrical wave spreading. Numerical modeling of wave propagation in absorptive media has been the subject of recent research in such areas as material property measurement [1] [2], seismology [3] [4] [5] and medical ultrasound [6] [7]. Previously, wave attenuation has been included in transient finite element formulations via a constant damping matrix [8] or functionally in terms of a power law relation [9]. The formulation presented here is based on representing the viscoelastic shear and bulk moduli of the medium as either a discrete or continuous spectrum of decaying exponentials [10]. As a first test of the correctness of the viscoelastic finite element formulation, the finite element results for a simple hypothetical medium are compared with an equivalent Laplace-Hankel transform domain solution.
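    A discrete spectrum of decaying exponentials for a relaxation modulus is commonly written as a Prony series; a minimal sketch of that representation follows. The coefficients and relaxation times below are illustrative, not the paper's material data.

    ```python
    import numpy as np

    # Relaxation modulus as a discrete spectrum of decaying exponentials
    # (Prony series): G(t) = G_inf + sum_i G_i * exp(-t / tau_i).
    def relaxation_modulus(t, G_inf, G_i, tau_i):
        t = np.asarray(t, dtype=float)
        return G_inf + np.sum(G_i[:, None] * np.exp(-t[None, :] / tau_i[:, None]),
                              axis=0)

    G_inf = 1.0                       # long-time (equilibrium) modulus
    G_i = np.array([0.5, 0.3])        # Prony coefficients, illustrative
    tau_i = np.array([0.1, 1.0])      # relaxation times, illustrative
    t = np.array([0.0, 0.5, 5.0])
    G = relaxation_modulus(t, G_inf, G_i, tau_i)
    # G(0) is the instantaneous modulus G_inf + sum(G_i); G(t) -> G_inf as t -> inf
    ```

    In a time-domain finite element setting, this form is convenient because each exponential term yields a recursive update for the stress history, avoiding storage of the full convolution integral.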

    HDG-NEFEM with Degree Adaptivity for Stokes Flows

    This paper presents the first degree adaptive procedure able to directly use the geometry given by a CAD model. The technique uses a hybridisable discontinuous Galerkin discretisation combined with a NURBS-enhanced rationale, completely removing the uncertainty induced by a polynomial approximation of curved boundaries that is common within an isoparametric approach. The technique is compared against two strategies to perform degree adaptivity currently in use. This paper demonstrates, for the first time, that the most widespread technique for degree adaptivity can easily lead to a non-reliable error estimator if no communication with CAD software is introduced, whereas introducing that communication incurs substantial computing time. The proposed technique encapsulates the CAD model in the simulation and is able to produce reliable error estimators irrespective of the initial mesh used to start the adaptive process. Several numerical examples confirm the findings and demonstrate the superiority of the proposed technique. The paper also proposes a novel idea to test the implementation of high-order solvers where different degrees of approximation are used in different elements.