An Efficient Algorithm For Simulating Fracture Using Large Fuse Networks
The high computational cost of progressive fracture simulations using large
discrete lattice networks stems from the requirement to
solve {\it a new large set of linear equations} every time a new lattice bond
is broken. To address this problem, we propose an algorithm that combines the
multiple-rank sparse Cholesky downdating algorithm with the rank-p inverse
updating algorithm based on the Sherman-Morrison-Woodbury formula for the
simulation of progressive fracture in disordered quasi-brittle materials using
discrete lattice networks. Using the present algorithm, the computational
complexity of solving the new set of linear equations after breaking a bond
reduces to the same order as that of a simple {\it backsolve} (forward
elimination and backward substitution) {\it using the already LU factored
matrix}. That is, the computational cost is ${\cal O}(nnz({\bf L}))$, where
$nnz({\bf L})$ denotes the number of non-zeros of the Cholesky factorization
${\bf L}$ of the stiffness matrix ${\bf A}$. This algorithm using the direct sparse solver
is faster than the Fourier accelerated preconditioned conjugate gradient (PCG)
iterative solvers, and eliminates the {\it critical slowing down} associated
with the iterative solvers that is especially severe close to the critical
points. Numerical results using random resistor networks substantiate the
efficiency of the present algorithm.
Comment: 15 pages including 1 figure. On page 11407 of the original paper
(J. Phys. A: Math. Gen. 36 (2003) 11403-11412), Eqs. 11 and 12 were
misprinted, which went unnoticed during the proofreading stage.
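The rank-1 ($p = 1$) case of the Sherman-Morrison-Woodbury update described above can be sketched as follows. This is an illustrative outline, not the authors' sparse implementation: a dense SciPy Cholesky factorization stands in for the sparse factor, and the 4-node network, conductance `g`, and node indices are made up.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def solve_after_bond_break(factor, b, v):
    """Solve (A - v v^T) x = b reusing the Cholesky factorization of A,
    via the Sherman-Morrison formula (the rank-1 case of SMW)."""
    x = cho_solve(factor, b)   # backsolve with the already-factored A
    w = cho_solve(factor, v)   # A^{-1} v: one more backsolve, no refactorization
    denom = 1.0 - v @ w        # nonzero as long as A - v v^T stays nonsingular
    return x + (v @ x / denom) * w

# Made-up 4-node example: a small SPD "stiffness" matrix A and load vector b
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)
b = rng.standard_normal(4)
factor = cho_factor(A)

# Breaking a bond of conductance g between nodes 0 and 1 subtracts
# g (e_0 - e_1)(e_0 - e_1)^T from A, i.e. a rank-1 update A - v v^T.
g = 0.5
e = np.zeros(4)
e[0], e[1] = 1.0, -1.0
v = np.sqrt(g) * e

x_updated = solve_after_bond_break(factor, b, v)
x_direct = np.linalg.solve(A - np.outer(v, v), b)
assert np.allclose(x_updated, x_direct)
```

Each subsequent bond break costs only two backsolves with the existing factor rather than a fresh factorization, which is the source of the speedup the abstract describes.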
An Efficient Block Circulant Preconditioner For Simulating Fracture Using Large Fuse Networks
{\it Critical slowing down} associated with the iterative solvers close to
the critical point often hinders large-scale numerical simulation of fracture
using discrete lattice networks. This paper presents a block circulant
preconditioner for iterative solvers for the simulation of progressive fracture
in disordered, quasi-brittle materials using large discrete lattice networks.
The average computational cost of the present algorithm per iteration is
${\cal O}(rs \log s) + delops$, where the stiffness matrix ${\bf A}$ is
partitioned into $r$-by-$r$ blocks such that each block is an $s$-by-$s$
matrix, and $delops$ represents the operational count associated with solving
a block-diagonal matrix with $r$-by-$r$ dense matrix blocks. This algorithm using the block
circulant preconditioner is faster than the Fourier accelerated preconditioned
conjugate gradient (PCG) algorithm, and alleviates the {\it critical slowing
down} that is especially severe close to the critical point. Numerical results
using random resistor networks substantiate the efficiency of the present
algorithm.
Comment: 16 pages including 2 figures.
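The idea behind a circulant preconditioner can be sketched in its simplest scalar form (not the paper's block circulant construction, and with a made-up 1-D Toeplitz test matrix): a circulant approximation $C$ of the stiffness matrix is diagonalized by the FFT, so applying $C^{-1}$ inside PCG costs only ${\cal O}(n \log n)$ per iteration.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import cg, LinearOperator

n = 64
# Made-up Toeplitz test matrix: 1-D discrete Laplacian plus a mass term (SPD)
A = toeplitz(np.r_[2.5, -1.0, np.zeros(n - 2)])

# Circulant approximation C of A: wrap the three-point stencil periodically
c = np.r_[2.5, -1.0, np.zeros(n - 3), -1.0]   # first column of C
lam = np.fft.fft(c).real                      # eigenvalues of C (real, positive here)

def apply_Cinv(y):
    # One FFT/inverse-FFT pair applies C^{-1}: O(n log n) per PCG iteration
    return np.fft.ifft(np.fft.fft(y) / lam).real

M = LinearOperator((n, n), matvec=apply_Cinv)
b = np.ones(n)
x, info = cg(A, b, M=M)       # preconditioned conjugate gradients
assert info == 0              # converged
assert np.allclose(A @ x, b, atol=1e-3)
```

Because $C$ captures the dominant structure of $A$, the preconditioned system is well conditioned and the iteration count stays small; the block variant in the paper applies the same FFT diagonalization blockwise, leaving dense block systems (the $delops$ term) to be solved directly.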
Electron scattering states at solid surfaces calculated with realistic potentials
Scattering states with LEED asymptotics are calculated for a general
non-muffin tin potential, as e.g. for a pseudopotential with a suitable barrier
and image potential part. The latter applies especially to the case of low
lying conduction bands. The wave function is described with a reciprocal
lattice representation parallel to the surface and a discretization of the real
space perpendicular to the surface. The Schroedinger equation leads to a system
of linear one-dimensional equations. The asymptotic boundary value problem is
confined via the quantum transmitting boundary method to a finite interval. The
solutions are obtained using a multigrid technique, which yields a fast and
reliable algorithm. The influence of the boundary conditions, the accuracy and
the rate of convergence with several solvers are discussed. The resulting
charge densities are investigated.
Comment: 5 pages, 4 figures; copyright and acknowledgment added, typos etc.
corrected.
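The quantum transmitting boundary idea, confining the asymptotic scattering problem to a finite interval with boundary rows that carry plane-wave asymptotics, can be illustrated in one dimension. This is a minimal finite-difference sketch with an assumed square-barrier potential and a direct dense solve standing in for the paper's multigrid solver; it is not the authors' surface code.

```python
import numpy as np

def scatter_1d(V, E, L, n):
    """Transmission and reflection for -psi'' + V(x) psi = E psi on [0, L]
    with open boundaries: a unit plane wave is incident from the left, and
    the boundary rows impose plane-wave asymptotics (a 1-D analogue of a
    quantum transmitting boundary condition)."""
    h = L / (n - 1)
    x = np.linspace(0.0, L, n)
    k = np.sqrt(E)                        # V is assumed to vanish at both ends
    H = np.zeros((n, n), dtype=complex)
    rhs = np.zeros(n, dtype=complex)
    for i in range(1, n - 1):             # interior: central differences
        H[i, i - 1] = -1.0 / h**2
        H[i, i] = 2.0 / h**2 + V(x[i]) - E
        H[i, i + 1] = -1.0 / h**2
    # Left end: psi = e^{ikx} + r e^{-ikx}  =>  psi'(0) = ik (2 - psi(0))
    H[0, 0] = -1.0 / h + 1j * k
    H[0, 1] = 1.0 / h
    rhs[0] = 2j * k
    # Right end: psi = t e^{ikx}  =>  psi'(L) = ik psi(L)
    H[-1, -2] = -1.0 / h
    H[-1, -1] = 1.0 / h - 1j * k
    psi = np.linalg.solve(H, rhs)         # direct solve stands in for multigrid
    r = psi[0] - 1.0                      # psi(0) = 1 + r
    t = psi[-1] * np.exp(-1j * k * L)     # psi(L) = t e^{ikL}
    return abs(t) ** 2, abs(r) ** 2

# Made-up square barrier: height 2, width 2, incident energy E = 1 (tunneling)
barrier = lambda x: 2.0 if 4.0 < x < 6.0 else 0.0
T, R = scatter_1d(barrier, E=1.0, L=10.0, n=1201)
assert 0.0 < T < 1.0
assert abs(T + R - 1.0) < 5e-2            # flux conservation for a real potential
```

In the surface-scattering setting the same construction applies per reciprocal-lattice channel, so the Schroedinger equation reduces to coupled one-dimensional linear systems on the finite interval, exactly as the abstract describes.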
Hypercube matrix computation task
A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, give a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort are summarized. This includes both new developments and results as well as work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18).
Hypercube matrix computation task
The Hypercube Matrix Computation (Year 1986-1987) task investigated the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Two existing electromagnetic scattering codes were selected for conversion to the Mark III Hypercube concurrent computing environment. They were selected so that the underlying numerical algorithms utilized would be different, thereby providing a more thorough evaluation of the appropriateness of the parallel environment for these types of problems. The first code was a frequency domain method of moments solution, NEC-2, developed at Lawrence Livermore National Laboratory. The second code was a time domain finite difference solution of Maxwell's equations to solve for the scattered fields. Once the codes were implemented on the hypercube and verified to obtain correct solutions by comparing the results with those from sequential runs, several measures were used to evaluate the performance of the two codes. First, a comparison was provided of the problem size possible on the hypercube with 128 megabytes of memory for a 32-node configuration with that available in a typical sequential user environment of 4 to 8 megabytes. Then, the performance of the codes was analyzed for the computational speedup attained by the parallel architecture.
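The speedup and efficiency measures used in such evaluations are straightforward ratios; a minimal sketch with hypothetical timings (the report's actual Mark III measurements are not reproduced here):

```python
# Hypothetical timings in seconds; the example values below are made up.
def speedup(t_serial, t_parallel):
    """Ratio of sequential to parallel wall-clock time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_nodes):
    """Speedup normalized by node count: 1.0 means perfect scaling."""
    return speedup(t_serial, t_parallel) / n_nodes

t_seq, t_par, nodes = 64.0, 4.0, 32
print(speedup(t_seq, t_par))            # 16.0
print(efficiency(t_seq, t_par, nodes))  # 0.5
```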