Efficient implicit FEM simulation of sheet metal forming
For the simulation of industrial sheet forming processes, the time discretisation is one of the important factors that determine the accuracy and efficiency of the algorithm. For relatively small models, the implicit time integration method is preferred because of its inherent equilibrium check. For large models the computation time becomes prohibitively large and, in practice, explicit methods are often used. In this contribution a strategy is presented that enables the application of implicit finite element simulations for large-scale sheet forming analysis. Iterative linear equation solvers are commonly considered unsuitable for shell element models: the condition number of the stiffness matrix is usually very poor, and the extreme reduction of CPU time that is obtained in 3D bulk simulations is not reached in sheet forming simulations. Adding mass in an implicit time integration method has a beneficial effect on the condition number. If mass scaling is used, as in explicit methods, iterative linear equation solvers can lead to very efficient implicit time integration methods, without restriction to a critical time step and with control of the equilibrium error in every increment. Time savings of a factor of 10 and more can easily be reached, compared to the use of conventional direct solvers.
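A minimal sketch (not the paper's code) of why adding mass improves conditioning: in an implicit dynamic step the system matrix has the form K + (1/dt^2)M, so scaling the mass term up shifts the small stiffness eigenvalues away from zero. All names and sizes below are hypothetical toy choices.

```python
import numpy as np

def dynamic_matrix(K, M, dt, mass_scale=1.0):
    """Effective system matrix of an implicit time step (Newmark-type schemes)."""
    return K + (mass_scale / dt**2) * M

n = 50
# Ill-conditioned stand-in for a shell stiffness matrix:
# eigenvalues spread over eight orders of magnitude.
K = np.diag(np.logspace(-6.0, 2.0, n))
M = np.eye(n)          # lumped unit mass matrix
dt = 1.0

cond_plain = np.linalg.cond(dynamic_matrix(K, M, dt, mass_scale=0.0))
cond_scaled = np.linalg.cond(dynamic_matrix(K, M, dt, mass_scale=10.0))
# cond_plain is ~1e8; with the mass term scaled up it drops to ~11,
# which is the regime where iterative solvers become competitive.
```

The factor of 10 used for `mass_scale` is an arbitrary illustration; in practice the amount of mass scaling is limited by the dynamics one is willing to perturb.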
A domain decomposing parallel sparse linear system solver
The solution of large sparse linear systems is often the most time-consuming
part of many science and engineering applications. Computational fluid
dynamics, circuit simulation, power network analysis, and material science are
just a few examples of the application areas in which large sparse linear
systems need to be solved effectively. In this paper we introduce a new
parallel hybrid sparse linear system solver for distributed memory
architectures that contains both direct and iterative components. We show that
by using our solver one can alleviate the drawbacks of direct and iterative
solvers, achieving better scalability than with direct solvers and more
robustness than with classical preconditioned iterative solvers. Comparisons to
well-known direct and iterative solvers on a parallel architecture are
provided.
Comment: To appear in Journal of Computational and Applied Mathematics
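A common way to combine direct and iterative components, sketched here generically (this is not the paper's solver): factor the interior unknowns of each subdomain directly, and iterate only on the interface Schur complement. The sizes and the dense toy matrix below are hypothetical; the small interface system is solved directly as a stand-in for a preconditioned Krylov solve.

```python
import numpy as np

rng = np.random.default_rng(4)
ni, nf = 20, 5                      # interior and interface sizes: toy choices
n = ni + nf
A = 5 * np.eye(n) + 0.3 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

B, E = A[:ni, :ni], A[:ni, ni:]     # A = [[B, E], [F, C]], interface last
F, C = A[ni:, :ni], A[ni:, ni:]

# Direct component: "factor" the interior block once (dense solve here).
Binv_E = np.linalg.solve(B, E)
Binv_b = np.linalg.solve(B, b[:ni])

# Iterative component would act on the interface Schur complement S;
# the toy problem is small enough to form and solve S directly.
S = C - F @ Binv_E
y = np.linalg.solve(S, b[ni:] - F @ Binv_b)   # interface unknowns
x = np.concatenate([Binv_b - Binv_E @ y, y])  # recover interior by back-substitution
```

The appeal of the hybrid layout is that the expensive direct factorization is confined to the (independent, parallelizable) interior blocks, while the iteration runs on a much smaller interface system.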
A JavaScript API for the Ice Sheet System Model: towards an online interactive model for the Cryosphere Community
Abstract. Earth System Models (ESMs) are becoming increasingly complex, requiring extensive knowledge and experience to deploy and use in an efficient manner. They run on high-performance architectures that are significantly different from the everyday environments that scientists use to pre- and post-process results (e.g. MATLAB, Python). This results in models that are hard to use for non-specialists and that are increasingly specific in their application. It also makes them relatively inaccessible to the wider science community, not to mention to the general public. Here, we present a new software/model paradigm that attempts to bridge the gap between the science community and the complexity of ESMs, by developing a new JavaScript Application Programming Interface (API) for the Ice Sheet System Model (ISSM). The aforementioned API allows Cryosphere Scientists to run ISSM on the client side of a webpage, within the JavaScript environment. When combined with a Web server running ISSM (using a Python API), it enables the serving of ISSM computations in an easy and straightforward way. The deep integration and similarities between all the APIs in ISSM (MATLAB, Python, and now JavaScript) significantly shorten and simplify the turnaround of state-of-the-art science runs and their use by the larger community. We demonstrate our approach via a new Virtual Earth System Laboratory (VESL) Web site.
GPU Accelerated Explicit Time Integration Methods for Electro-Quasistatic Fields
Electro-quasistatic field problems involving nonlinear materials are commonly
discretized in space using finite elements. In this paper, it is proposed to
solve the resulting system of ordinary differential equations by an explicit
Runge-Kutta-Chebyshev time-integration scheme. This mitigates the need for
Newton-Raphson iterations, as they are necessary within fully implicit time
integration schemes. However, the electro-quasistatic system of ordinary
differential equations has a Laplace-type mass matrix such that parts of the
explicit time-integration scheme remain implicit. An iterative solver with
constant preconditioner is shown to efficiently solve the resulting multiple
right-hand side problem. This approach allows an efficient parallel
implementation on a system featuring multiple graphic processing units.
Comment: 4 pages, 5 figures
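The key reuse pattern — one constant preconditioner applied across many solves with the same matrix — can be sketched generically (this is not the paper's code). The 1D Laplace-type matrix, the hand-rolled PCG, and the Jacobi preconditioner below are hypothetical stand-ins for the actual electro-quasistatic operator and solver.

```python
import numpy as np

def pcg(A, b, apply_prec, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradients for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:       # converged
            break
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplace-type matrix

# Constant preconditioner: built ONCE, reused for every right-hand side.
d = np.diag(A).copy()
jacobi = lambda r: r / d

rng = np.random.default_rng(0)
rhs = rng.standard_normal((n, 4))                      # multiple right-hand sides
sols = [pcg(A, rhs[:, k], jacobi) for k in range(4)]
ok = all(np.allclose(A @ x, rhs[:, k], atol=1e-6) for k, x in enumerate(sols))
```

Because the matrix does not change between stages, the setup cost of the preconditioner is amortized over every right-hand side, which is what makes the multiple-RHS structure attractive on GPUs.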
An Efficient Block Circulant Preconditioner For Simulating Fracture Using Large Fuse Networks
Critical slowing down associated with iterative solvers close to the critical point often hinders large-scale numerical simulation of fracture using discrete lattice networks. This paper presents a block circulant preconditioner for iterative solvers for the simulation of progressive fracture in disordered, quasi-brittle materials using large discrete lattice networks. The average computational cost of the present algorithm per iteration is , where the stiffness matrix is partitioned into -by- blocks such that each block is an -by- matrix, and  represents the operational count associated with solving a block-diagonal matrix with -by- dense matrix blocks. The algorithm using the block circulant preconditioner is faster than the Fourier-accelerated preconditioned conjugate gradient (PCG) algorithm, and alleviates the critical slowing down that is especially severe close to the critical point. Numerical results using random resistor networks substantiate the efficiency of the present algorithm.
Comment: 16 pages including 2 figures
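The kernel operation behind any circulant preconditioner is that a circulant matrix is diagonalized by the DFT, so applying its inverse costs O(n log n). A minimal scalar sketch (the paper works with *block* circulant matrices; the stencil below is a hypothetical example):

```python
import numpy as np

def circulant_solve(c, b):
    """Solve C x = b for a circulant C with first column c, via FFT
    diagonalization: C = F^{-1} diag(fft(c)) F, so the solve is O(n log n)."""
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

n = 64
c = np.zeros(n)
c[0], c[1], c[-1] = 2.5, -1.0, -1.0     # symmetric positive definite stencil
# Dense circulant built only to verify the FFT solve; never needed in practice.
C = np.array([np.roll(c, k) for k in range(n)]).T

rng = np.random.default_rng(1)
b = rng.standard_normal(n)
x = circulant_solve(c, b)               # satisfies C @ x == b to round-off
```

In a preconditioned iteration, `circulant_solve` plays the role of the preconditioner application, replacing a generic sparse solve at each step.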
Hydrodynamics of Suspensions of Passive and Active Rigid Particles: A Rigid Multiblob Approach
We develop a rigid multiblob method for numerically solving the mobility
problem for suspensions of passive and active rigid particles of complex shape
in Stokes flow in unconfined, partially confined, and fully confined
geometries. As in a number of existing methods, we discretize rigid bodies
using a collection of minimally-resolved spherical blobs constrained to move as
a rigid body, to arrive at a potentially large linear system of equations for
the unknown Lagrange multipliers and rigid-body motions. Here we develop a
block-diagonal preconditioner for this linear system and show that a standard
Krylov solver converges in a modest number of iterations that is essentially
independent of the number of particles. For unbounded suspensions and
suspensions sedimented against a single no-slip boundary, we rely on existing
analytical expressions for the Rotne-Prager tensor combined with a fast
multipole method or a direct summation on a Graphical Processing Unit to obtain
a simple yet efficient and scalable implementation. For fully confined
domains, such as periodic suspensions or suspensions confined in slit and
square channels, we extend a recently-developed rigid-body immersed boundary
method to suspensions of freely-moving passive or active rigid particles at
zero Reynolds number. We demonstrate that the iterative solver for the coupled
fluid and rigid body equations converges in a bounded number of iterations
regardless of the system size. We optimize a number of parameters in the
iterative solvers and apply our method to a variety of benchmark problems to
carefully assess the accuracy of the rigid multiblob approach as a function of
the resolution. We also model the dynamics of colloidal particles studied in
recent experiments, such as passive boomerangs in a slit channel, as well as a
pair of non-Brownian active nanorods sedimented against a wall.
Comment: Under revision in CAMCOS, Nov 201
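The block-diagonal preconditioning idea can be illustrated on a toy blocked system (this is not the paper's mobility solver; the body count, block size, and coupling strength below are hypothetical): invert each body's diagonal block once, apply the inverses blockwise inside a Krylov iteration, and let the weak inter-body coupling be handled by the iteration itself.

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(2)
nb, bs = 8, 6                      # number of "bodies" and per-body block size
n = nb * bs
A = np.zeros((n, n))
for i in range(nb):                # strong per-body diagonal blocks
    A[i*bs:(i+1)*bs, i*bs:(i+1)*bs] = 4 * np.eye(bs) + 0.3 * rng.standard_normal((bs, bs))
A += 0.02 * rng.standard_normal((n, n))   # weak inter-body (hydrodynamic-like) coupling

# Block-diagonal preconditioner: invert each body's block once, apply blockwise.
inv_blocks = [np.linalg.inv(A[i*bs:(i+1)*bs, i*bs:(i+1)*bs]) for i in range(nb)]

def apply_prec(r):
    return np.concatenate([inv_blocks[i] @ r[i*bs:(i+1)*bs] for i in range(nb)])

M = LinearOperator((n, n), matvec=apply_prec)
b = rng.standard_normal(n)
x, info = gmres(A, b, M=M)         # info == 0 signals convergence
```

Because the preconditioner acts body by body, its cost grows linearly with the number of bodies, which is consistent with the iteration counts being essentially independent of system size.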
Alternating-Direction Line-Relaxation Methods on Multicomputers
We study the multicomputer performance of a three-dimensional Navier–Stokes solver based on alternating-direction line-relaxation methods. We compare several multicomputer implementations, each of which combines a particular line-relaxation method and a particular distributed block-tridiagonal solver. In our experiments, the problem size was determined by resolution requirements of the application. As a result, the granularity of the computations of our study is finer than is customary in the performance analysis of concurrent block-tridiagonal solvers. Our best results were obtained with a modified half-Gauss–Seidel line-relaxation method implemented by means of a new iterative block-tridiagonal solver that is developed here. Most computations were performed on the Intel Touchstone Delta, but we also used the Intel Paragon XP/S, the Parsytec SC-256, and the Fujitsu S-600 for comparison.
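The sequential kernel inside each line-relaxation sweep is a tridiagonal solve along one grid direction. A hedged scalar sketch of that kernel (the Thomas algorithm); the paper's solver handles distributed *block*-tridiagonal systems, whereas this toy version is scalar and serial:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve T x = d, where T has sub-diagonal a[1:], main diagonal b,
    and super-diagonal c[:-1]. O(n) forward elimination + back substitution."""
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

rng = np.random.default_rng(5)
n = 12
a = rng.uniform(-1.0, 1.0, n)                  # sub-diagonal (a[0] unused)
c = rng.uniform(-1.0, 1.0, n)                  # super-diagonal (c[-1] unused)
b = 4.0 + rng.uniform(0.0, 1.0, n)             # strictly diagonally dominant
T = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
d = rng.standard_normal(n)
x = thomas(a, b, c, d)
```

The data dependence in the forward sweep is what makes distributed implementations non-trivial and motivates the iterative block-tridiagonal solver developed in the paper.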