
    Strict bounding of quantities of interest in computations based on domain decomposition

    This paper deals with bounding the error in the estimation of quantities of interest obtained by finite element and domain decomposition methods. The proposed bounds are written so as to separate the two errors involved in the resolution of the reference and adjoint problems: on the one hand, the discretization error due to the finite element method; on the other hand, the algebraic error due to the use of an iterative solver. Besides practical considerations on the parallel computation of the bounds, it is shown that interface conformity can be slightly relaxed, so that local enrichment or refinement is possible in the subdomains bearing singularities or quantities of interest, which simplifies the improvement of the estimation. Academic assessments are given on 2D static linear mechanics problems.

    Comment: Computer Methods in Applied Mechanics and Engineering, Elsevier, 2015, online preview
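    Schematically, the separation the abstract describes can be written as follows. The notation below is ours and purely illustrative, not the paper's: guaranteed goal-oriented bounds of this kind typically take a product form whose factors split into discretization and algebraic contributions.

```latex
% Illustrative only -- notation ours, not the paper's.
% I(u): quantity of interest; u_h^{(k)}: FE solution after k solver iterations.
\[
  \bigl| I(u) - I\bigl(u_h^{(k)}\bigr) \bigr|
  \;\le\;
  \bigl( E_{\mathrm{dis}} + E_{\mathrm{alg}} \bigr)
  \bigl( \widetilde{E}_{\mathrm{dis}} + \widetilde{E}_{\mathrm{alg}} \bigr),
\]
where $E_{\mathrm{dis}}$, $E_{\mathrm{alg}}$ bound the discretization and algebraic
errors of the reference problem, and $\widetilde{E}_{\mathrm{dis}}$,
$\widetilde{E}_{\mathrm{alg}}$ those of the adjoint problem.
```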

    The LifeV library: engineering mathematics beyond the proof of concept

    LifeV is a library for the finite element (FE) solution of partial differential equations in one, two, and three dimensions. It is written in C++ and designed to run on diverse parallel architectures, including cloud and high-performance computing facilities. Despite its academic research nature (it is a library for the development and testing of new methods), one distinguishing feature of LifeV is its use on real-world problems, and it is intended to provide a tool for many engineering applications. It has actually been used in computational hemodynamics, including cardiac mechanics and fluid-structure interaction problems, in porous media, and in ice-sheet dynamics, for both forward and inverse problems. In this paper we give a short overview of the features of LifeV and its coding paradigms on simple problems. The main focus is on the parallel environment, which is mainly driven by domain decomposition methods and based on external libraries such as MPI, the Trilinos project, HDF5, and ParMetis. Dedicated to the memory of Fausto Saleri.

    Comment: Review of the LifeV finite element library

    Multiphysics simulations: challenges and opportunities.


    A novel numerical framework for simulation of multiscale spatio-temporally non-linear systems in additive manufacturing processes.

    New, computationally efficient numerical techniques have been formulated for multi-scale analysis in order to bridge the mesoscopic and macroscopic scales of the thermal and mechanical response of a material. These techniques reduce the computational effort required to simulate metal-based Additive Manufacturing (AM) processes. Given the availability of physics-based constitutive models for the response at mesoscopic scales, they help in evaluating the thermal response and mechanical properties during layer-by-layer processing in AM. Two classes of numerical techniques have been explored. The first class has been developed for evaluating the periodic spatio-temporal thermal response involving multiple time and spatial scales at the continuum level. The second class is targeted at modeling multi-scale, multi-energy dissipative phenomena during the solid-state Ultrasonic Consolidation process. This includes bridging the mesoscopic response of a crystal plasticity finite element framework at inter- and intragranular scales with a point at the macroscopic scale; the bridged response has been used to develop an energy-dissipative constitutive model for a multi-surface interface at the macroscopic scale.

    As part of the first class of techniques, an adaptive dynamic meshing strategy has been developed that reduces computational cost through efficient node and element renumbering and stiffness-matrix assembly. This strategy has reduced the cost of the thermal simulation of a Selective Laser Melting (SLM) process by a factor of roughly 100. The method is not limited to SLM and can be extended to any other fusion-based additive manufacturing process, and more generally to any moving-energy-source finite element problem. Novel FEM-based beam theories have been formulated that are more general than traditional beam theories for solid deformation. These theories are the first to treat thermal problems in a manner analogous to solid beam analysis; they handle beams of general cross-section and can match the results of a complete three-dimensional analysis. In addition, a traditional Cholesky decomposition algorithm has been modified to reduce the cost of solving the simultaneous equations involved in FEM simulations. Solid-state processes have been simulated with crystal-plasticity-based nonlinear finite element algorithms, further sped up by the introduction of an interfacial contact constitutive model. This framework is supported by a novel methodology that solves contact problems without the additional computational overhead of incorporating constraint equations, avoiding the use of penalty springs.
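    The modified Cholesky algorithm itself is not given in the abstract; as background, here is a minimal NumPy sketch of the standard symmetric-positive-definite (SPD) solve that such a modification starts from (the matrix and right-hand side below are our own toy data):

```python
import numpy as np

def cholesky_solve(K, f):
    """Solve K x = f for symmetric positive-definite K via Cholesky.

    Standard textbook scheme (the dissertation's modified variant is not
    reproduced here): factor K = L L^T once, then perform two triangular
    solves, roughly halving the cost of a general LU solve for SPD
    stiffness matrices.
    """
    L = np.linalg.cholesky(K)      # K = L @ L.T, with L lower triangular
    y = np.linalg.solve(L, f)      # forward substitution: L y = f
    x = np.linalg.solve(L.T, y)    # back substitution:    L^T x = y
    return x

# Small SPD system as a smoke test (illustrative values only)
K = np.array([[4.0, 1.0], [1.0, 3.0]])
f = np.array([1.0, 2.0])
x = cholesky_solve(K, f)
```

    Once factored, `L` can be reused for many right-hand sides, which is the usual payoff in FEM time stepping.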

    Calibration and Rescaling Principles for Nonlinear Inverse Heat Conduction and Parameter Estimation Problems

    This dissertation provides a systematic method for resolving nonlinear inverse heat conduction problems based on a calibration formulation and its accompanying principles. It is well known that inverse heat conduction problems are ill-posed and hence subject to stability and uniqueness issues; regularization methods are required to extract the best prediction from a family of solutions. To date, most studies require sophisticated, combined numerical methods and regularization schemes to produce predictions, and all thermophysical and geometrical properties must be provided in the simulations. The successful application of these numerical methods relies on the accuracy of the system parameters, so uncertainties in those parameters introduce bias of varying magnitude. Calibration-based approaches are proposed to minimize such systematic errors, since the system parameters are implicitly included in the mathematical formulation through several calibration tests. To date, most calibration inverse studies have assumed constant thermophysical properties. In contrast, this dissertation accounts for temperature-dependent thermophysical properties, which produce a nonlinear heat equation. A novel rescaling principle is introduced for linearizing the system; this concept generates a mathematical framework similar to that of the linear formulation. Unlike the linear formulation, the present approach does require knowledge of the thermophysical properties; however, all geometrical properties and sensor characterization are completely removed from the system. A linear one-probe calibration method is first introduced as background, after which the calibration method is generalized to one-probe and two-probe, one-dimensional thermal systems with temperature-dependent thermophysical properties.

    All of the proposed calibration equations are expressed in terms of a Volterra integral equation of the first kind for the unknown surface (net) heat flux and hence require regularization, owing to the ill-posed nature of first-kind equations. A new strategy is proposed for determining the optimal regularization parameter that is independent of the applied regularization approach. As a final application, the described calibration principle is used for estimating unknown thermophysical properties above room temperature.
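    As a generic sketch of the machinery involved (our toy kernel and data, not the dissertation's calibration equation), a first-kind Volterra equation can be collocated with a lower-triangular quadrature matrix and stabilized with zeroth-order Tikhonov regularization:

```python
import numpy as np

def volterra_matrix(kernel, t):
    """Rectangle-rule collocation of int_0^t k(t - s) q(s) ds = y(t):
    A[i, j] = k(t_i - t_j) * dt for j <= i (causal, lower triangular)."""
    dt = t[1] - t[0]
    n = len(t)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = kernel(t[i] - t[: i + 1]) * dt
    return A

def tikhonov_solve(A, y, alpha):
    """Minimize ||A q - y||^2 + alpha * ||q||^2 (normal equations form)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

t = np.linspace(0.0, 1.0, 200)
kernel = lambda tau: np.exp(-5.0 * tau)   # smooth causal kernel (illustrative)
q_true = np.sin(np.pi * t)                # "net heat flux" to recover
A = volterra_matrix(kernel, t)
y = A @ q_true + 1e-3 * np.random.default_rng(0).normal(size=t.size)
q_est = tikhonov_solve(A, y, alpha=1e-4)  # alpha trades bias for stability
```

    Without the `alpha` term the triangular solve amplifies the noise; with it, the reconstruction stays bounded, which is exactly why a principled rule for choosing `alpha` matters.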

    Variational Data Assimilation via Sparse Regularization

    This paper studies the role of sparse regularization in a properly chosen basis for variational data assimilation (VDA) problems. Specifically, it focuses on the assimilation of noisy and down-sampled observations when the state variable of interest exhibits sparsity in the real or a transformed domain. We show that, in the presence of sparsity, ℓ1-norm regularization produces more accurate and stable solutions than classic data assimilation methods. To motivate further development of the proposed methodology, assimilation experiments are conducted in the wavelet and spectral domains using the linear advection-diffusion equation.
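    A minimal illustration of the ℓ1 machinery (our toy observation operator and parameters, not the paper's VDA setup): iterative soft thresholding (ISTA) solves the ℓ1-regularized least-squares problem by alternating a gradient step on the data-misfit term with the ℓ1 proximal operator.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(H, y, lam, step, n_iter=2000):
    """Minimize 0.5 * ||H x - y||^2 + lam * ||x||_1 by ISTA:
    gradient step on the quadratic term, then soft thresholding."""
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * (H.T @ (H @ x - y)), step * lam)
    return x

rng = np.random.default_rng(1)
H = rng.normal(size=(40, 100)) / np.sqrt(40)   # down-sampled observation operator
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]          # sparse state in the chosen basis
y = H @ x_true + 0.01 * rng.normal(size=40)
step = 1.0 / np.linalg.norm(H, 2) ** 2          # 1/L, L = Lipschitz constant ||H||^2
x_hat = ista(H, y, lam=0.05, step=step)
```

    With only 40 noisy observations of a 100-dimensional state, the ℓ1 penalty recovers the sparse support, whereas a plain least-squares fit would be underdetermined.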

    Novel Numerical Approaches for the Resolution of Direct and Inverse Heat Transfer Problems

    This dissertation describes an innovative and robust global-time approach developed for the resolution of direct and inverse problems, specifically in radiation and conduction heat transfer. Direct problems are generally well-posed and readily lend themselves to standard, well-defined mathematical solution techniques. Inverse problems differ in that they tend to be ill-posed in the sense of Hadamard, i.e., small perturbations in the input data can produce large variations and instabilities in the output. The stability problem is exacerbated by the use of discrete experimental data that may be subject to substantial measurement error, and this tendency toward ill-posedness is the main difficulty in developing a suitable prediction algorithm for most inverse problems. Previous attempts to overcome the inherent instability have relied on smoothing techniques such as Tikhonov regularization and sequential function estimation (Beck's future-information method). As alternatives to the existing methodologies, two novel mathematical schemes are proposed: the Global Time Method (GTM) and the Function Decomposition Method (FDM). Both schemes render time and space in a global fashion, resolving the temporal and spatial domains simultaneously; this effectively treats time elliptically, or as a fourth spatial dimension. A Weighted Residuals Method (WRM) is utilized in the mathematical formulation, wherein the unknown function is approximated by a finite series expansion. Regularization of the solution is achieved by the retention of expansion terms, as opposed to smoothing in the classical Tikhonov sense.

    To demonstrate the merit and flexibility of these approaches, the GTM and FDM have been applied to representative direct and inverse heat transfer problems: a direct problem of radiative transport, a parameter estimation problem found in Differential Scanning Calorimetry (DSC), and an inverse heat conduction problem (IHCP). The IHCP is resolved for the cases of diagnostic deduction (discrete temperature data at the boundary) and thermal design (prescribed functional data at the boundary). Both methods provide excellent results under the conditions tested. Finally, a number of suggestions for future work are offered.
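    The idea of regularizing by retaining expansion terms, rather than by a Tikhonov penalty, can be sketched on a toy fitting problem (our own function, noise level, and basis choice, not the dissertation's formulation): keeping few terms of a Legendre expansion smooths the noise, while keeping many terms fits it.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(-1.0, 1.0, 101)
f_true = np.exp(-t) * np.cos(2.0 * t)           # smooth "unknown function"
data = f_true + 0.05 * rng.normal(size=t.size)  # noisy discrete samples

def truncated_fit(t, data, n_terms):
    """Least-squares fit in a Legendre basis; stability is controlled by
    the number of retained terms instead of a penalty parameter."""
    V = np.polynomial.legendre.legvander(t, n_terms - 1)  # (101, n_terms)
    coef, *_ = np.linalg.lstsq(V, data, rcond=None)
    return V @ coef

f8 = truncated_fit(t, data, 8)    # few terms: smooth, regularized estimate
f40 = truncated_fit(t, data, 40)  # many terms: the fit chases the noise
```

    The truncation level plays the role of the regularization parameter: too few terms bias the estimate, too many let measurement error through.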

    A new approach to nonlinear constrained Tikhonov regularization

    We present a novel approach to nonlinear constrained Tikhonov regularization from the viewpoint of optimization theory. A second-order sufficient optimality condition is suggested as a nonlinearity condition to handle the nonlinearity of the forward operator. The approach is exploited to derive convergence-rate results for a priori as well as a posteriori choice rules, e.g., the discrepancy principle and the balancing principle, for selecting the regularization parameter. The idea is further illustrated on a general class of parameter identification problems, for which (new) source and nonlinearity conditions are derived and the structural property of the nonlinearity term is revealed. A number of examples, including the identification of distributed parameters in elliptic differential equations, are presented.

    Comment: 21 pages, to appear in Inverse Problems
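    For readers unfamiliar with the discrepancy principle, here is a generic linear sketch (our toy diagonal operator, not the paper's nonlinear constrained setting): decrease the Tikhonov parameter until the residual matches the noise level, ||A x_alpha - y|| ≈ tau * delta with tau > 1.

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov-regularized solution of A x = y (normal equations form)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def discrepancy_alpha(A, y, delta, tau=1.1, alpha0=1.0, q=0.7):
    """Morozov discrepancy principle: geometrically decrease alpha until
    the residual drops below tau * delta."""
    alpha = alpha0
    while np.linalg.norm(A @ tikhonov(A, y, alpha) - y) > tau * delta:
        alpha *= q
        if alpha < 1e-14:   # noise level unreachable: stop anyway
            break
    return alpha

rng = np.random.default_rng(2)
n = 50
s = 1.0 / (1.0 + np.arange(n)) ** 2      # polynomially decaying spectrum
A = np.diag(s)                           # mildly ill-posed diagonal operator
x_true = rng.normal(size=n)
delta = 1e-3                             # known noise level
noise = rng.normal(size=n)
noise *= delta / np.linalg.norm(noise)
y = A @ x_true + noise
alpha = discrepancy_alpha(A, y, delta)
x_alpha = tikhonov(A, y, alpha)
```

    Matching the residual to `delta` avoids both over-smoothing (residual far above the noise) and noise fitting (residual far below it).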

    Proper general decomposition (PGD) for the resolution of Navier–Stokes equations

    In this work, the PGD method will be considered for solving some problems of fluid mechanics by seeking the solution as a sum of tensor-product functions. In the first stage, the Stokes and Burgers equations will be solved. Then, we will solve the Navier–Stokes problem in the case of the lid-driven cavity for different Reynolds numbers (Re = 100, 1000 and 10,000). Finally, the PGD method will be compared to the standard resolution technique, in terms of both CPU time and accuracy.

    Région Poitou-Charentes
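    The separated representation at the heart of PGD can be sketched on sampled data (our own toy field and greedy alternating scheme, not the paper's Navier–Stokes solver): approximate u(x, y) by a sum of products of one-dimensional functions, each mode found by alternating fixed-point iterations on the current residual.

```python
import numpy as np

def pgd_modes(U, n_modes, n_alt=50):
    """Greedy rank-one enrichment of U (samples of u(x_i, y_j)):
    u ~ sum_m F_m(x) G_m(y), one mode at a time."""
    rng = np.random.default_rng(0)
    R = U.copy()
    F, G = [], []
    for _ in range(n_modes):
        g = rng.normal(size=U.shape[1])      # generic start for the y-mode
        for _ in range(n_alt):               # alternating directions
            f = R @ g / (g @ g)              # best F given G (least squares)
            g = R.T @ f / (f @ f)            # best G given F
        F.append(f)
        G.append(g)
        R = R - np.outer(f, g)               # enrich: subtract the new mode
    return F, G

x = np.linspace(0.0, 1.0, 64)
y = np.linspace(0.0, 1.0, 64)
U = np.outer(np.sin(np.pi * x), np.sin(np.pi * y)) \
    + 0.2 * np.outer(x ** 2, np.cos(np.pi * y))     # rank-2 test field
F, G = pgd_modes(U, n_modes=2)
U2 = sum(np.outer(f, g) for f, g in zip(F, G))      # two-mode reconstruction
```

    In an actual PGD solver the modes are computed from the weak form of the PDE rather than from sampled values, but the greedy enrich-and-alternate structure is the same.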