
    Some Incipient Techniques For Improving Efficiency in Computational Mechanics

    This contribution presents a review of different techniques available for alleviating simulation cost in computational mechanics. The first is based on a separated representation of the unknown fields; the second uses model reduction based on the Karhunen-Loève decomposition within an adaptive scheme; and the last is a mixed technique specially adapted to reducing models involving local singularities. These techniques can be applied to a large variety of models.
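
    The Karhunen-Loève (POD) step underlying the second technique can be sketched in a few lines: gather snapshots of the computed field and extract, via the SVD, the smallest basis capturing a prescribed fraction of their energy. The snapshot data below is synthetic and purely illustrative:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Karhunen-Loeve / POD basis from a snapshot matrix (n_dof x n_snap).

    Keeps the smallest number of left singular vectors capturing the
    requested fraction of the snapshot 'energy' (sum of squared
    singular values)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r], s[:r]

# Synthetic snapshots of a field that is exactly rank 2 in space.
x = np.linspace(0.0, 1.0, 200)
snaps = np.column_stack([
    np.sin(np.pi * x) * np.cos(0.1 * k) + 0.5 * x * np.sin(0.1 * k)
    for k in range(50)
])
basis, sv = pod_basis(snaps)
print(basis.shape)  # a couple of modes represent all 50 snapshots
```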

    One and two-fiber orientation kinetic theories of fiber suspensions

    http://dx.doi.org/10.1016/j.jnnfm.2012.10.009
    The morphology influencing the rheological properties of suspensions of rigid spheres consists of the flow-induced collective ordering of the spheres, characterized by two or more sphere distribution functions. When the rigid spheres are replaced by rigid fibers, the collective order in the position of the spheres is replaced by the flow-induced orientation of the fibers, which can be characterized by a single one-fiber orientation distribution function. A flow-induced collective ordering of fibers (both in position and orientation), which can only be characterized by two or more fiber distribution functions, can nevertheless still constitute an important part of the morphology. We show that two types of interaction among fibers, one being the Onsager-type topological interaction entering the free energy and the other the hydrodynamic interaction entering the dissipative part of the time evolution, indeed give rise to a collective order in the orientation that influences the rheology of fiber suspensions.
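
    A one-fiber orientation distribution function is usually summarized by its second moment, the orientation tensor a2 = ⟨p ⊗ p⟩. A minimal sketch of estimating it from sampled fiber directions (isotropic random fibers, for illustration only):

```python
import numpy as np

# Second-order orientation tensor a2 = <p p> of a set of unit fiber
# directions p -- the usual moment summary of the one-fiber
# orientation distribution function.
rng = np.random.default_rng(0)
p = rng.standard_normal((1000, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)  # random (isotropic) fibers
a2 = np.einsum('ni,nj->ij', p, p) / len(p)
print(np.trace(a2))  # trace is 1 by construction (|p| = 1)
```

For an isotropic suspension a2 approaches I/3; flow-induced alignment shows up as one dominant eigenvalue.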

    On the deterministic solution of multidimensional parametric models using the Proper Generalized Decomposition

    This paper focuses on the efficient solution of models defined in high-dimensional spaces. Such models pose numerous numerical challenges because of the associated curse of dimensionality. It is well known that in mesh-based discrete models the complexity (degrees of freedom) scales exponentially with the dimension of the space. Many models encountered in computational science and engineering involve numerous dimensions, called configurational coordinates. Some examples are the models encountered in biology making use of the chemical master equation, quantum chemistry involving the solution of the Schrödinger or Dirac equations, kinetic theory descriptions of complex systems based on the solution of the so-called Fokker–Planck equation, stochastic models in which the random variables are included as new coordinates, financial mathematics, etc. This paper revisits the curse of dimensionality and proposes an efficient strategy for circumventing this challenging issue. The strategy, based on the use of a Proper Generalized Decomposition, is especially well suited to treating multidimensional parametric equations.
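
    The exponential-versus-linear scaling at the heart of the curse of dimensionality is easy to quantify: a mesh with N nodes per dimension costs N^d degrees of freedom, while a rank-R separated representation costs only R·d·N one-dimensional values. A back-of-the-envelope sketch:

```python
# Degrees of freedom for a mesh-based discretisation versus a rank-R
# separated (PGD-style) representation, with N nodes per dimension.
def mesh_dofs(N, d):
    return N ** d  # exponential in the dimension d

def separated_dofs(N, d, R):
    # R terms, each a product of d one-dimensional functions of size N
    return R * d * N  # linear in the dimension d

N, R = 100, 20
for d in (2, 5, 10):
    print(d, mesh_dofs(N, d), separated_dofs(N, d, R))
```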

    The proper generalized decomposition for the simulation of delamination using cohesive zone model

    The use of cohesive zone models is an efficient way to treat damage, especially when the crack path is known a priori, as is the case when modeling delamination in composite laminates. However, simulations using cohesive zone models are expensive from a computational point of view. When using an implicit time integration scheme, or when solving static problems, the non-linearity related to the cohesive model requires many iterations before reaching convergence. In explicit approaches, the time-step stability condition also imposes a large number of increments. In this article, a new approach based on a separated representation of the solution is proposed. The Proper Generalized Decomposition is used to build the solution. This technique, coupled with a cohesive zone model, allows a significant reduction of the computational cost. The results approximated with the PGD are very close to those obtained using the classical finite element approach.
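
    A common choice of cohesive law (assumed here for illustration; the article does not prescribe this exact form) is the bilinear traction-separation relation, with a linear rise to the peak traction followed by linear softening:

```python
def bilinear_traction(delta, t_max=50.0, delta0=1e-3, delta_f=1e-2):
    """Bilinear cohesive traction-separation law (monotonic opening).

    Linear rise to the peak traction t_max at opening delta0, then
    linear softening down to zero at the critical opening delta_f."""
    if delta <= 0.0:
        return 0.0
    if delta < delta0:
        return t_max * delta / delta0          # elastic branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening
    return 0.0                                  # fully debonded
```

The softening branch is the source of the non-linearity discussed above: the tangent stiffness becomes negative, which is what forces the many equilibrium iterations in implicit schemes.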

    Simulating microstructure evolution during passive mixing

    The prediction of microstructure evolution during passive mixing is of major interest in order to qualify and quantify mixing devices, as well as to predict the final morphology of the resulting blend. Direct numerical simulation fails because of the different characteristic lengths of the microstructure and the process itself. Micro-macro approaches could be a valuable alternative, but their computational cost remains tremendous. For this reason many authors have proposed introducing microstructural variables able to qualify and quantify the mixing process at a mesoscale level. Some proposals considered only the effects induced by the flow kinematics; others also introduced the effects of shape relaxation due to surface tension and coalescence; the most advanced also integrate the break-up process. However, deriving the equations governing the evolution of such microstructural variables requires closure relations whose impact on the computed solution should be evaluated before they are applied to simulate complex mixing flows. In this work we consider the Lee and Park model, which accounts for the flow kinematics, surface tension, coalescence and break-up mechanisms in the evolution of the area tensor. The accuracy of both a quadratic and an orthotropic closure relation is analyzed in the first part of this work; the model closed with the quadratic closure is then used to simulate complex mixing flows.
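
    The quadratic closure mentioned above approximates the fourth-order moment of the microstructural descriptor by the outer product of the second-order one. A minimal numerical sketch of the closure itself (not the Lee and Park model):

```python
import numpy as np

def quadratic_closure(A2):
    """Quadratic closure: approximate the fourth-order moment tensor by
    the outer product of the second-order one, A4_ijkl ~ A2_ij * A2_kl."""
    return np.einsum('ij,kl->ijkl', A2, A2)

# For a fully aligned (rank-one) second-order tensor the quadratic
# closure is exact, which is why it behaves well near full alignment.
p = np.array([1.0, 0.0, 0.0])
A2 = np.outer(p, p)
A4 = quadratic_closure(A2)
exact = np.einsum('i,j,k,l->ijkl', p, p, p, p)
print(np.allclose(A4, exact))  # True in the aligned limit
```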

    Monitoring a PGD solver for parametric power flow problems with goal-oriented error assessment

    This is the peer reviewed version of the following article: [García-Blanco, R., Borzacchiello, D., Chinesta, F., and Diez, P. (2017) Monitoring a PGD solver for parametric power flow problems with goal-oriented error assessment. Int. J. Numer. Meth. Engng, 111: 529–552. doi: 10.1002/nme.5470], which has been published in final form at http://onlinelibrary.wiley.com/doi/10.1002/nme.5470/full. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving.
    The parametric analysis of electric grids requires carrying out a large number of Power Flow computations. The different parameters describe loading conditions and grid properties. In this framework, the Proper Generalized Decomposition (PGD) provides a numerical solution that explicitly accounts for the parametric dependence. Once the PGD solution is available, exploring the multidimensional parametric space is computationally inexpensive. The aim of this paper is to provide tools to monitor the error associated with this significant computational gain and to guarantee the quality of the PGD solution. In this case, the PGD algorithm consists of three nested loops that correspond to (1) the iterations of the algebraic solver, (2) the number of terms in the greedy separable expansion, and (3) the alternating directions for each term. In the proposed approach, the three loops are controlled by stopping criteria based on residual goal-oriented error estimates. This allows one to use only the computational resources necessary to achieve the accuracy prescribed by the end user. The paper discusses how to compute the goal-oriented error estimates. This requires linearizing the error equation and the Quantity of Interest to derive an efficient error representation based on an adjoint problem. The efficiency of the proposed approach is demonstrated on benchmark problems.
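
    The interplay between a greedy enrichment loop and an alternating-directions loop, both stopped by residual-based criteria, can be caricatured on a plain matrix. This is only a linear-algebra sketch, not the paper's power flow solver or its goal-oriented estimates:

```python
import numpy as np

def greedy_separated(M, tol=1e-6, max_terms=20, max_alt=50):
    """Greedy rank-one enrichment with an alternating-directions inner
    loop; both loops are stopped by residual-based criteria (a toy
    analogue of two of the nested PGD loops)."""
    R = M.copy()
    terms = []
    norm0 = np.linalg.norm(M)
    while len(terms) < max_terms and np.linalg.norm(R) > tol * norm0:
        u = R[:, 0].copy()                # crude initial guess
        for _ in range(max_alt):
            v = R.T @ u / (u @ u)         # fix u, solve for v
            u_new = R @ v / (v @ v)       # fix v, solve for u
            if np.linalg.norm(u_new - u) < 1e-12 * np.linalg.norm(u_new):
                u = u_new
                break                     # alternating directions converged
            u = u_new
        terms.append((u, v))
        R = R - np.outer(u, v)            # enrich and update the residual
    return terms

rng = np.random.default_rng(1)
M = np.outer(rng.standard_normal(30), rng.standard_normal(40)) \
  + np.outer(rng.standard_normal(30), rng.standard_normal(40))
terms = greedy_separated(M)
print(len(terms))  # a rank-2 matrix needs about two terms
```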

    MODEL REDUCTION METHODS IN OPTION PRICING

    In this work we introduce the Proper Orthogonal Decomposition (POD) approach to the valuation of contingent claims for one-dimensional price models. First, we present the POD in the context of an abstract Hilbert space and we give an application to the numerical pricing of Double Barrier Options. In a finite-dimensional setting, we show the model reduction method for Finite Difference schemes of implicit type. In particular, we construct the reduced version of the Crank–Nicolson scheme, and some numerical examples are given.
    Keywords: Model Reduction, Proper Orthogonal Decomposition, Finite Difference Schemes, Crank–Nicolson Scheme.
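
    As a reference point for the full-order model being reduced, here is a Crank–Nicolson stepper for the 1D heat equation, a simple stand-in for the pricing PDE (the actual barrier-option discretization is not reproduced here):

```python
import numpy as np

def crank_nicolson_heat(u0, dx, dt, nsteps, nu=1.0):
    """Crank-Nicolson time stepping for u_t = nu * u_xx with
    homogeneous Dirichlet boundaries (interior unknowns only)."""
    n = len(u0)
    r = nu * dt / (2.0 * dx * dx)
    # (I - r*T) u^{k+1} = (I + r*T) u^k, with T = tridiag(1, -2, 1)
    main = np.full(n, 1.0 + 2.0 * r)
    off = np.full(n - 1, -r)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    B = 2.0 * np.eye(n) - A            # equals I + r*T
    u = u0.copy()
    for _ in range(nsteps):
        u = np.linalg.solve(A, B @ u)  # a banded solver would be faster
    return u

n = 99
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
u0 = np.sin(np.pi * x)                 # eigenmode: decays like exp(-pi^2 t)
u = crank_nicolson_heat(u0, dx=1.0 / (n + 1), dt=1e-3, nsteps=100)
```

A POD reduction would project A and B onto a small snapshot basis, shrinking each implicit solve from n unknowns to a handful.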

    Vademecum-based GFEM (V-GFEM): optimal enrichment for transient problems

    This is the accepted version of the following article: [Canales, D., Leygue, A., Chinesta, F., González, D., Cueto, E., Feulvarch, E., Bergheau, J.-M., and Huerta, A. (2016) Vademecum-based GFEM (V-GFEM): optimal enrichment for transient problems. Int. J. Numer. Meth. Engng, 108: 971–989. doi: 10.1002/nme.5240], which has been published in final form at http://onlinelibrary.wiley.com/doi/10.1002/nme.5240/full.
    This paper proposes a generalized finite element method based on the use of parametric solutions as enrichment functions. These parametric solutions are precomputed off-line and stored in memory in the form of a computational vademecum, so that they can be used on-line at negligible cost. This yields a method that is more efficient than traditional finite element methods for process simulation. One key issue of the proposed method is the efficient computation of the parametric enrichments. These are computed and efficiently stored in memory by employing proper generalized decompositions. Although the presented method can be broadly applied, it is particularly well suited to manufacturing processes involving localized physics that depend on many parameters, such as welding. After introducing the vademecum-based generalized finite element method formulation, we present some numerical examples related to the simulation of thermal models encountered in welding processes.
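
    The offline/online split behind a computational vademecum can be illustrated with a toy parametric problem: solve on a grid of parameter values offline, then answer online queries by interpolating the stored solutions. The toy solver and all names below are illustrative assumptions, not the paper's GFEM enrichment machinery:

```python
import numpy as np

# Offline stage: precompute solutions of a (toy) parametric problem on a
# grid of parameter values and store them -- the "computational vademecum".
def solve_toy(mu, x):
    # stand-in for an expensive solver: u(x; mu) = exp(-mu * x)
    return np.exp(-mu * x)

x = np.linspace(0.0, 1.0, 50)
mus = np.linspace(0.5, 2.0, 16)
vademecum = np.array([solve_toy(mu, x) for mu in mus])  # (n_mu, n_x)

# Online stage: evaluate at a new parameter value by interpolating the
# stored solutions -- negligible cost compared with a fresh solve.
def online_eval(mu):
    return np.array([np.interp(mu, mus, vademecum[:, j])
                     for j in range(len(x))])

u_fast = online_eval(1.3)
u_ref = solve_toy(1.3, x)
print(np.max(np.abs(u_fast - u_ref)))  # small interpolation error
```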

    Deterministic solution of the kinetic theory model of colloidal suspensions of structureless particles

    A direct modeling of colloidal suspensions consists of calculating the trajectories of all suspended objects. Because of the long computation times and high cost involved in such calculations, we consider in this paper another route. Colloidal suspensions are described at a mesoscopic level by a distribution function whose time evolution is governed by a Fokker–Planck-like equation. The difficulty encountered on this route is the high dimensionality of the space in which the distribution function is defined. A novel strategy is used to solve the Fokker–Planck equation numerically, circumventing the curse of dimensionality. Rheological and morphological predictions of the model, which includes both direct and hydrodynamic interactions, are presented for different flows.
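
    For intuition, a low-dimensional caricature: relaxing a 1D Fokker–Planck equation (an Ornstein–Uhlenbeck toy model, not the suspension model of the paper) to its Gaussian steady state by explicit finite differences. It is in the high-dimensional case, where such grids become infeasible, that the strategy above is needed:

```python
import numpy as np

# Explicit finite-difference relaxation of a 1D Fokker-Planck equation
#   df/dt = d/dx (x f) + D d2f/dx2   (Ornstein-Uhlenbeck toy model),
# whose steady state is the Gaussian exp(-x^2 / (2 D)) up to normalisation.
D = 0.5
x = np.linspace(-4.0, 4.0, 161)
dx = x[1] - x[0]
f = np.ones_like(x) / (x[-1] - x[0])      # flat initial distribution
dt = 0.2 * dx * dx / D                    # well inside the stability limit
for _ in range(20000):
    xf = x * f
    drift = (np.roll(xf, -1) - np.roll(xf, 1)) / (2.0 * dx)
    diffusion = D * (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2
    f = f + dt * (drift + diffusion)
    f[0] = f[-1] = 0.0                    # vanishing far-field boundaries
f = f / (np.sum(f) * dx)                  # renormalise to unit mass
exact = np.exp(-x**2 / (2.0 * D))
exact = exact / (np.sum(exact) * dx)
```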

    An overview of the proper generalized decomposition with applications in computational rheology

    We review the foundations and applications of the proper generalized decomposition (PGD), a powerful model reduction technique that computes, a priori and by successive enrichment, a separated representation of the unknown field. The computational complexity of the PGD scales linearly with the dimension of the space in which the model is defined, in marked contrast with the exponential scaling of standard grid-based methods. First introduced in the context of computational rheology by Ammar et al. [3] and [4], the PGD has since been further developed and applied to a variety of problems ranging from the solution of the Schrödinger equation of quantum mechanics to the analysis of laminate composites. In this paper, we illustrate the use of the PGD in four problem categories related to computational rheology: (i) the direct solution of the Fokker-Planck equation for complex fluids in configuration spaces of high dimension, (ii) the development of very efficient non-incremental algorithms for transient problems, (iii) the fully three-dimensional solution of problems defined in degenerate plate or shell-like domains often encountered in polymer processing or composites manufacturing, and finally (iv) the solution of multidimensional parametric models obtained by introducing various sources of problem variability as additional coordinates.