Multigrid Methods in Lattice Field Computations
The multigrid methodology is reviewed. By integrating numerical processes at
all scales of a problem, it seeks to perform various computational tasks at a
cost that rises as slowly as possible as a function of n, the number of
degrees of freedom in the problem. Current and potential benefits for lattice
field computations are outlined. They include: O(n) solution of Dirac
equations; just O(1) operations in updating the solution (upon any local
change of data, including the gauge field); similar efficiency in gauge fixing
and updating; O(n) operations in updating the inverse matrix and in
calculating the change in the logarithm of its determinant; O(n) operations
per producing each independent configuration in statistical simulations
(eliminating CSD), and, more important, effectively just O(1) operations per
each independent measurement (eliminating the volume factor as well). These
potential capabilities have been demonstrated on simple model problems.
Extensions to real life are explored. Comment: 4
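The O(n) cost claim can be illustrated with a minimal sketch (not the lattice-field solvers the review discusses): a recursive V-cycle for the 1-D Poisson equation -u'' = f with weighted-Jacobi smoothing. All function names and parameters here are illustrative choices, not taken from the paper.

```python
import numpy as np

def v_cycle(u, f, h, n_smooth=3):
    """One multigrid V-cycle for -u'' = f on [0, 1] with zero Dirichlet
    boundary values. u and f hold N+1 grid values (N a power of 2).
    Work per cycle is O(N): every level does O(points) work and the
    level sizes form a geometric series, which is where the O(n)
    complexity above comes from."""
    N = len(u) - 1
    for _ in range(n_smooth):           # weighted-Jacobi pre-smoothing (omega = 2/3)
        u[1:-1] += (h * h / 3.0) * (f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h))
    r = np.zeros_like(u)                # residual r = f - A u
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
    if N <= 2:                          # coarsest level: one unknown, solve exactly
        u[1] += 0.5 * h * h * r[1]
        return u
    rc = np.zeros(N // 2 + 1)           # full-weighting restriction of the residual
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h, n_smooth)
    e = np.zeros_like(u)                # linear-interpolation prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    for _ in range(n_smooth):           # post-smoothing
        u[1:-1] += (h * h / 3.0) * (f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h))
    return u

# A few cycles drive the algebraic error below the discretization error.
N = 64
x = np.linspace(0.0, 1.0, N + 1)
f = np.pi ** 2 * np.sin(np.pi * x)     # exact solution is sin(pi x)
u = np.zeros(N + 1)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / N)
```

Each cycle costs a fixed number of operations per grid point summed over levels of geometrically shrinking size, so total work stays proportional to the finest-grid size.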
Distributed Deblurring of Large Images of Wide Field-Of-View
Image deblurring is an economical way to reduce certain degradations (blur and
noise) in acquired images. It has therefore become an essential tool in
high-resolution imaging in many applications, e.g., astronomy, microscopy, or
computational photography. In applications such as astronomy and satellite
imaging, acquired images can be extremely large (up to gigapixels), covering a
wide field-of-view and suffering from shift-variant blur. Most existing image
deblurring techniques are designed and implemented to work efficiently on a
centralized computing system with multiple processors and a shared memory.
Thus, the largest image that can be handled is limited by the size of the
physical memory available on the system. In this paper, we propose a
distributed nonblind image deblurring algorithm in which several connected
processing nodes (with reasonable computational resources) simultaneously
process different portions of a large image while maintaining a certain
coherency among them, so as to finally obtain a single crisp image. Unlike the
existing centralized techniques, image deblurring in a distributed fashion
raises several issues. To tackle them, we adopt certain approximations that
trade off the quality of the deblurred image against the computational
resources required to achieve it. Experimental results show that our algorithm
produces images of similar quality to the existing centralized techniques
while allowing distribution, and is thus cost-effective for extremely large
images. Comment: 16 pages, 10 figures, submitted to IEEE Trans. on Image Processing
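This is not the authors' algorithm, but a minimal sketch of its core ingredient: applying the blur operator tile by tile, with a halo (overlap) of kernel-radius rows exchanged between neighbouring nodes, so the reassembled result matches the centralized operator exactly. A simple gradient-descent (Landweber) deconvolution loop built on the tiled operator then serves as a toy distributed non-blind deblurrer; all names and the strip decomposition are assumptions for illustration.

```python
import numpy as np

def conv2_same(img, ker):
    """Direct 2-D correlation with zero padding; output size == input size."""
    kh, kw = ker.shape
    ph, pw = kh // 2, kw // 2
    pad = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape)
    for i in range(kh):
        for j in range(kw):
            out += ker[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def tiled_conv(img, ker, n_strips=4):
    """Apply the blur strip by strip, as separate nodes would: each strip is
    extended by a halo of kernel-radius rows from its neighbours, blurred
    independently, and the halo rows are discarded afterwards."""
    H = img.shape[0]
    ph = ker.shape[0] // 2
    bounds = np.linspace(0, H, n_strips + 1).astype(int)
    pieces = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        lo, hi = max(0, a - ph), min(H, b + ph)
        blurred = conv2_same(img[lo:hi], ker)   # work done on one node
        pieces.append(blurred[a - lo: b - lo])  # drop the halo rows
    return np.vstack(pieces)

def landweber_deblur(y, ker, n_iter=50, tau=1.0):
    """Toy distributed non-blind deblurring: gradient descent on
    ||Hx - y||^2 where every operator application uses tiled_conv.
    Assumes a symmetric kernel, so H^T = H."""
    x = y.copy()
    for _ in range(n_iter):
        x += tau * tiled_conv(y - tiled_conv(x, ker), ker)
    return x

# Demo: a sharp square blurred by a small symmetric Gaussian kernel.
truth = np.zeros((64, 64))
truth[20:44, 20:44] = 1.0
g = np.exp(-0.5 * (np.arange(-2, 3) / 1.2) ** 2)
ker = np.outer(g, g)
ker /= ker.sum()
y = conv2_same(truth, ker)
x_deb = landweber_deblur(y, ker)
```

Because a 'same'-mode convolution at row r only reads rows within the kernel radius of r, a halo of that radius makes the tiled operator exact; the approximations mentioned in the abstract become necessary only when the effective support of the update grows beyond what halo exchange can affordably cover.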
Boundary Treatment and Multigrid Preconditioning for Semi-Lagrangian Schemes Applied to Hamilton-Jacobi-Bellman Equations
We analyse two practical aspects that arise in the numerical solution of
Hamilton-Jacobi-Bellman (HJB) equations by a particular class of monotone
approximation schemes known as semi-Lagrangian schemes. These schemes make use
of a wide stencil to achieve convergence and result in discretization matrices
that are less sparse and less local than those coming from standard finite
difference schemes. This leads to computational difficulties not encountered
there. In particular, we consider the overstepping of the domain boundary and
analyse the accuracy and stability of stencil truncation. This truncation
imposes a stricter CFL condition for explicit schemes in the vicinity of
boundaries than in the interior, such that implicit schemes become attractive.
We then study the use of geometric, algebraic and aggregation-based multigrid
preconditioners to solve the resulting discretised systems from implicit time
stepping schemes efficiently. Finally, we illustrate the performance of these
techniques numerically for benchmark test cases from the literature.
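The boundary-overstepping issue can be seen in a much simpler setting than the paper's HJB schemes. The sketch below (an illustrative 1-D linear-advection example, not the authors' method) takes one semi-Lagrangian step by tracing characteristics back from each node and interpolating; near the boundary the foot of the characteristic oversteps the domain and is simply clamped, a crude stencil truncation that keeps the scheme monotone for any time step.

```python
import numpy as np

def sl_step(u, a, dt, x):
    """One semi-Lagrangian step for u_t + a u_x = 0 on [x[0], x[-1]].
    Departure points that overstep the boundary are clamped to it.
    Linear interpolation makes each new value a convex combination of
    old values, so the scheme satisfies a discrete maximum principle
    with no CFL restriction on dt."""
    feet = np.clip(x - a * dt, x[0], x[-1])   # departure points, clamped
    return np.interp(feet, x, u)

# Advect a smooth bump at Courant number a*dt/h = 2: the effective
# stencil is wide (it reaches two cells upwind), yet the step is stable.
x = np.linspace(0.0, 1.0, 101)
u = np.exp(-200.0 * (x - 0.3) ** 2)
for _ in range(20):
    u = sl_step(u, 1.0, 0.02, x)              # total shift: 0.4
```

An explicit Eulerian scheme at this Courant number would be unstable; the semi-Lagrangian step sidesteps that, at the price of the wide, less local coupling that the abstract identifies as the source of the boundary and solver difficulties.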
SHARP: A Spatially Higher-order, Relativistic Particle-in-Cell Code
Numerical heating in particle-in-cell (PIC) codes currently precludes the
accurate simulation of cold, relativistic plasma over long periods, severely
limiting their applications in astrophysical environments. We present a
spatially higher-order accurate relativistic PIC algorithm in one spatial
dimension, which conserves charge and momentum exactly. We utilize the
smoothness implied by the usage of higher-order interpolation functions to
achieve a spatially higher-order accurate algorithm (up to fifth order). We
validate our algorithm against several test problems -- thermal stability of
stationary plasma, stability of linear plasma waves, and two-stream instability
in the relativistic and non-relativistic regimes. Comparing our simulations to
exact solutions of the dispersion relations, we demonstrate that SHARP can
quantitatively reproduce important kinetic features of the linear regime. Our
simulations have a superior ability to control energy non-conservation and
avoid numerical heating in comparison to common second-order schemes. We
provide a natural definition for convergence of a general PIC algorithm: the
complement of physical modes captured by the simulation, i.e., those that lie
above the Poisson noise, must grow commensurately with the resolution. This
implies that it is necessary to simultaneously increase the number of particles
per cell and decrease the cell size. We demonstrate that traditional ways of
testing for convergence fail, leading to a plateauing of the energy error.
This new PIC code enables us to faithfully study the long-term evolution of
plasma problems that require absolute control of energy and momentum
conservation. Comment: 26 pages, 19 figures, discussion about performance is added,
published in Ap
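The role of higher-order interpolation functions can be illustrated with a generic ingredient of any PIC code (this is not SHARP's fifth-order scheme): depositing particle charge on the grid with the quadratic B-spline, or triangular-shaped-cloud, shape function. The weights are smooth in the particle position and sum to one exactly, so total charge is conserved by construction; all names below are illustrative.

```python
import numpy as np

def deposit_tsc(positions, charges, n_cells, h):
    """Deposit particle charge on a periodic 1-D grid using the quadratic
    B-spline (triangular-shaped-cloud) shape function. For a particle at
    normalized offset d from its nearest node (|d| <= 1/2), the weights
      w(-1) = (1/2)(1/2 - d)^2,  w(0) = 3/4 - d^2,  w(+1) = (1/2)(1/2 + d)^2
    are non-negative and sum to 1, so total charge is conserved exactly."""
    rho = np.zeros(n_cells)                 # charge density, charge per length
    for xp, q in zip(positions, charges):
        xi = xp / h
        i = int(np.floor(xi + 0.5))         # nearest grid node
        d = xi - i                          # offset in [-1/2, 1/2)
        w = (0.5 * (0.5 - d) ** 2, 0.75 - d ** 2, 0.5 * (0.5 + d) ** 2)
        for k, wk in zip((-1, 0, 1), w):
            rho[(i + k) % n_cells] += q * wk / h
    return rho

# Demo: 1000 unit charges at random positions on a 32-cell periodic grid.
rng = np.random.default_rng(1)
L, n = 1.0, 32
h = L / n
rho = deposit_tsc(rng.uniform(0.0, L, 1000), np.ones(1000), n, h)
```

Smoother shape functions like this spread each particle over more nodes, reducing the grid noise that drives the numerical heating discussed above; higher-order variants extend the same construction to wider splines.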