Linear system solvers for boundary value ODEs
We investigate the stability properties of several linear system solvers for boundary value ODEs. We consider the compactification algorithm, Gaussian elimination with row partial pivoting, and a QR algorithm, applied to linear systems arising from BVPs whose matrix is block-bidiagonal except for bordering along the last n rows and columns. In particular, we compare AUTO's original linear solver (an LU decomposition with partial pivoting) with our implementation of the analogous QR algorithm in AUTO. Two other factors, the underlying continuation strategy and the mesh selection strategy, may also affect the stability of the linear system solver in ODE continuation codes, and are discussed in our numerical investigations as well.
Numerical Solution of ODEs and the Columbus' Egg: Three Simple Ideas for Three Difficult Problems
On computers, discrete problems are solved instead of continuous ones. One
must be sure that the solutions of the former problems, obtained in real time
(i.e., when the stepsize h is not infinitesimal) are good approximations of the
solutions of the latter ones. However, since the discrete world is much richer
than the continuous one (the latter being a limit case of the former), the
classical definitions and techniques, devised to analyze the behaviors of
continuous problems, are often insufficient to handle the discrete case, and
new specific tools are needed. Often, insistence on following a path
already traced in the continuous setting has caused a waste of time and effort,
whereas new specific tools have solved the problems both more easily and more
elegantly. In this paper we survey three of the main difficulties encountered
in the numerical solution of ODEs, along with the novel solutions proposed.

Comment: 25 pages, 4 figures (typos fixed)
Fifty Years of Stiffness
The notion of stiffness, which originated in several applications of a
different nature, has dominated the activities related to the numerical
treatment of differential problems for the last fifty years. Contrary to what
usually happens in Mathematics, its definition has been, for a long time, not
formally precise (actually, there are too many of them). Again, the needs of
applications, especially those arising in the construction of robust and
general purpose codes, require nowadays a formally precise definition. In this
paper, we review the evolution of this notion and also provide a precise
definition which encompasses all the previous ones.

Comment: 24 pages, 11 figures
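The phenomenon the abstract refers to can be shown with the textbook linear test equation (an illustration, not an example taken from the paper): for y' = -λy with large λ, explicit Euler is stable only for stepsizes h < 2/λ, while backward Euler is stable for any h > 0.

```python
# Stiffness in one line of dynamics: y' = -lam * y with lam = 1000.
# Explicit Euler requires h < 2/lam = 0.002; we deliberately use h = 0.01.
lam, h, steps = 1000.0, 0.01, 50
y_exp = 1.0
y_imp = 1.0
for _ in range(steps):
    y_exp = y_exp + h * (-lam * y_exp)   # forward Euler: y_{n+1} = (1 - h*lam) * y_n
    y_imp = y_imp / (1.0 + h * lam)      # backward Euler: y_{n+1} = y_n / (1 + h*lam)

print(abs(y_exp))   # blows up: |1 - h*lam|^n = 9^50
print(y_imp)        # decays monotonically toward 0, like the true solution
```

The true solution decays to zero almost instantly, yet the explicit method diverges unless h is taken impractically small; it is exactly this mismatch between accuracy and stability requirements that the many competing definitions of stiffness try to capture.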
Numerical study of asymmetric keel hydrodynamic performance through advanced CFD
The hydrodynamics of an asymmetric IACC yacht keel at an angle of yaw are presented, using simulations performed with advanced, state-of-the-art computational fluid dynamics software. The aim of the paper is to continue improving numerical viscous flow predictions for high-performance yachts using Large Eddy Simulation and Detached Eddy Simulation on unstructured grids. Quantitative comparisons of the global forces acting on the keel and of wake surveys are carried out. Qualitative comparisons include flow visualisation, unsteady and separated flow, and other features. Star-CCM+ with the trimmed cell method gives better force and wake predictions than the unstructured mesh of ANSYS Fluent. Both solvers give good flow visualisation in the near and far field of the keel.
3D mesh processing using GAMer 2 to enable reaction-diffusion simulations in realistic cellular geometries
Recent advances in electron microscopy have enabled the imaging of single
cells in 3D at nanometer length scale resolutions. An uncharted frontier for in
silico biology is the ability to simulate cellular processes using these
observed geometries. Enabling such simulations requires watertight meshing of
electron micrograph images into 3D volume meshes, which can then form the basis
of computer simulations of such processes using numerical techniques such as
the Finite Element Method. In this paper, we describe the use of our recently
rewritten mesh processing software, GAMer 2, to bridge the gap between poorly
conditioned meshes generated from segmented micrographs and boundary marked
tetrahedral meshes which are compatible with simulation. We demonstrate the
application of a workflow using GAMer 2 to a series of electron micrographs of
neuronal dendrite morphology explored at three different length scales and show
that the resulting meshes are suitable for finite element simulations. This
work is an important step towards making physical simulations of biological
processes in realistic geometries routine. Innovations in algorithms to
reconstruct and simulate cellular length scale phenomena based on emerging
structural data will enable realistic physical models and advance discovery at
the interface of geometry and cellular processes. We posit that a new frontier
at the intersection of computational technologies and single cell biology is
now open.

Comment: 39 pages, 14 figures. High resolution figures and supplemental movies
available upon request.
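The "watertight" requirement mentioned above has a simple combinatorial characterization for triangle surface meshes: every edge must be shared by exactly two triangles. The sketch below is a library-agnostic illustration of that check (it does not reproduce GAMer 2's API; the tetrahedron example is an assumption chosen for brevity).

```python
from collections import Counter

def is_watertight(triangles):
    """Return True if every edge is incident to exactly two triangles."""
    edges = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[frozenset((u, v))] += 1   # undirected edge
    return all(count == 2 for count in edges.values())

# The surface of a tetrahedron: 4 triangles, 6 edges, each edge used twice.
tet = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(is_watertight(tet))        # closed surface
print(is_watertight(tet[:-1]))   # removing a face leaves boundary edges
```

Meshes segmented from electron micrographs typically fail this test (holes, non-manifold edges), which is why a conditioning step like the one GAMer 2 provides is needed before tetrahedralization and finite element simulation.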
Accelerating scientific codes by performance and accuracy modeling
Scientific software is often driven by multiple parameters that affect both
accuracy and performance. Since finding the optimal configuration of these
parameters is a highly complex task, it is extremely common that the software is
used suboptimally. In a typical scenario, accuracy requirements are imposed,
and attained through suboptimal performance. In this paper, we present a
methodology for the automatic selection of parameters for simulation codes, and
a corresponding prototype tool. To be amenable to our methodology, the target
code must expose the parameters affecting accuracy and performance, and there
must be formulas available for error bounds and computational complexity of the
underlying methods. As a case study, we consider the particle-particle
particle-mesh method (PPPM) from the LAMMPS suite for molecular dynamics, and
use our tool to identify configurations of the input parameters that achieve a
given accuracy in the shortest execution time. When compared with the
configurations suggested by expert users, the parameters selected by our tool
yield reductions in the time-to-solution ranging between 10% and 60%. In other
words, for the typical scenario where a fixed number of core-hours are granted
and simulations of a fixed number of timesteps are to be run, usage of our tool
may allow up to twice as many simulations. While we develop our ideas using
LAMMPS as computational framework and use the PPPM method for dispersion as
case study, the methodology is general and valid for a range of software tools
and methods.
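The selection idea the abstract describes can be sketched generically: given closed-form error-bound and cost formulas over the exposed parameters, enumerate candidate configurations, keep those meeting the accuracy requirement, and return the cheapest. The formulas and parameter names below are hypothetical stand-ins, not the PPPM models used by the actual tool.

```python
import itertools

def select_parameters(grid, error_bound, cost, tol):
    """Cheapest configuration whose error bound meets the tolerance (None if infeasible)."""
    feasible = [p for p in grid if error_bound(p) <= tol]
    return min(feasible, key=cost) if feasible else None

# Hypothetical models: error shrinks with mesh size M and cutoff rc,
# while cost grows with both.
error_bound = lambda p: 1.0 / (p["M"] * p["rc"] ** 2)
cost        = lambda p: p["M"] ** 3 + 50.0 * p["rc"] ** 3

grid = [{"M": M, "rc": rc}
        for M, rc in itertools.product((16, 32, 64), (2.0, 4.0, 6.0))]
best = select_parameters(grid, error_bound, cost, tol=1e-3)
print(best)
```

Exhaustive search is viable here because the error and cost formulas are cheap to evaluate; the point of the methodology is that no simulation needs to be run to rank the configurations.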