Einstein-Multidimensional Extrapolation methods
In this paper, we present a new framework for the recent multidimensional
extrapolation methods: the Tensor Global Minimal Polynomial Extrapolation (TG-MPE)
and Tensor Global Reduced Rank Extrapolation (TG-RRE) methods. We develop a new
approach, alternative to the one presented in \cite{17}. The proposed framework
highlights, in addition to their polynomial nature, the connection of TG-MPE and
TG-RRE with nonlinear Krylov subspace methods. A unified algorithm is proposed for
their implementation. Theoretical results are given, and numerical experiments on
linear and nonlinear problems confirm the performance of the proposed algorithms.
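As a point of reference for the tensor-global methods above, the classical vector minimal polynomial extrapolation (MPE) that they generalize can be sketched in a few lines. The NumPy routine below is our own illustrative sketch for plain vector sequences, not the TG-MPE algorithm of the paper.

```python
import numpy as np

def mpe(X):
    """Minimal polynomial extrapolation, vector case (illustrative sketch).

    X has columns x_0, ..., x_{k+1} from a fixed-point iteration; returns
    an extrapolated limit built as a weighted combination of x_0, ..., x_k.
    """
    U = np.diff(X, axis=1)                 # differences u_j = x_{j+1} - x_j
    k = U.shape[1] - 1
    # Least-squares coefficients of the (approximate) minimal polynomial,
    # normalized so the leading coefficient c_k equals 1.
    c, *_ = np.linalg.lstsq(U[:, :k], -U[:, k], rcond=None)
    c = np.append(c, 1.0)
    gamma = c / c.sum()                    # weights summing to 1
    return X[:, :k + 1] @ gamma

# For a linear iteration x_{j+1} = A x_j + b, MPE recovers the exact fixed
# point once k reaches the degree of the minimal polynomial of A w.r.t. u_0.
A = np.array([[0.5, 0.2, 0.0],
              [0.1, 0.3, 0.2],
              [0.0, 0.2, 0.4]])
b = np.ones(3)
X = np.zeros((3, 5))
for j in range(4):                         # iterates x_0 = 0, ..., x_4
    X[:, j + 1] = A @ X[:, j] + b
s = mpe(X)
```

Here `s` agrees with the exact fixed point `solve(I - A, b)` up to rounding, because four iterates suffice for a 3x3 matrix.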
An optimality property of an approximate solution computed by the Hessenberg method
We revisit the implementation of the Krylov subspace method based on the Hessenberg process for general linear operator equations. It is established that, at each step, the approximate solution computed by this approach can be regarded as the minimizer of a certain norm of the residual associated with the approximate solution of the system. Test problems arising from image restoration, formulated as tensor equations with a cosine transform product, are examined numerically to compare the performance of the Krylov subspace methods based on the Hessenberg and Arnoldi processes in conjunction with the Tikhonov regularization technique.
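For readers unfamiliar with the process underlying such methods: the Hessenberg process builds a non-orthogonal Krylov basis L_{k+1} and an upper Hessenberg matrix H_k satisfying A L_k = L_{k+1} H_k, at lower cost than Arnoldi's orthogonalization. The sketch below is our own minimal version for an ordinary matrix (no pivoting, no tensor structure), not the paper's implementation.

```python
import numpy as np

def hessenberg_process(A, b, k):
    """Hessenberg process without pivoting: returns L (n x (k+1)) and
    H ((k+1) x k) such that A @ L[:, :k] == L @ H.

    Each basis vector l_i has zeros in its first i entries except a 1 in
    entry i, so the elimination below only reads single components
    instead of forming inner products as Arnoldi does.
    """
    n = len(b)
    L = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    L[:, 0] = b / b[0]                  # assumes b[0] != 0 (else pivot)
    for j in range(k):
        u = A @ L[:, j]
        for i in range(j + 1):
            H[i, j] = u[i]              # entry i of l_i is 1, zeros above
            u -= H[i, j] * L[:, i]
        H[j + 1, j] = u[j + 1]
        L[:, j + 1] = u / H[j + 1, j]
    return L, H

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
b = rng.standard_normal(8)
L, H = hessenberg_process(A, b, 4)      # A @ L[:, :4] equals L @ H
```

A CMRH-type method then minimizes the quasi-residual through the small Hessenberg matrix H, which is the source of the optimality property discussed in the abstract.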
hp-adaptive discontinuous Galerkin solver for elliptic equations in numerical relativity
A considerable amount of attention has been given to discontinuous Galerkin methods for hyperbolic problems in numerical relativity, showing potential advantages of the methods in dealing with hydrodynamical shocks and other discontinuities. This paper investigates discontinuous Galerkin methods for the solution of elliptic problems in numerical relativity. We present a novel hp-adaptive numerical scheme for curvilinear and non-conforming meshes. It uses a multigrid preconditioner with a Chebyshev or Schwarz smoother to create a very scalable discontinuous Galerkin code on generic domains. The code employs compactification to move the outer boundary near spatial infinity. We explore the properties of the code on some test problems, including one mimicking neutron stars with phase transitions. We also apply it to construct initial data for two or three black holes.
Diagonalizing transfer matrices and matrix product operators: a medley of exact and computational methods
Transfer matrices and matrix product operators play a ubiquitous role in the
field of many-body physics. This paper gives an idiosyncratic overview of
applications, exact results, and computational aspects of diagonalizing transfer
matrices and matrix product operators. The results in this paper are a mixture
of classic results, presented from the point of view of tensor networks, and of
new results. Topics discussed are exact solutions of transfer matrices in
equilibrium and non-equilibrium statistical physics, tensor network states,
matrix product operator algebras, and numerical matrix product state methods
for finding extremal eigenvectors of matrix product operators.
Comment: Lecture notes from a course at Vienna University
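A textbook instance of the exact transfer-matrix diagonalizations surveyed in such notes is the zero-field 1D Ising chain, whose 2x2 transfer matrix has eigenvalues 2cosh(βJ) and 2sinh(βJ), giving the partition function in closed form. The script below is our own illustration (not taken from the notes) and checks the spectral formula against brute-force enumeration.

```python
import itertools
import numpy as np

beta, J, N = 0.7, 1.0, 6          # inverse temperature, coupling, chain length

# Transfer matrix T[s, s'] = exp(beta * J * s * s') for spins s, s' = +-1.
T = np.array([[np.exp(beta * J), np.exp(-beta * J)],
              [np.exp(-beta * J), np.exp(beta * J)]])

# Z = Tr T^N = lam_1^N + lam_2^N, with eigenvalues 2cosh(bJ) and 2sinh(bJ).
lam = np.linalg.eigvalsh(T)
Z_tm = np.sum(lam ** N)

# Brute-force partition function over all 2^N periodic spin configurations.
Z_bf = 0.0
for spins in itertools.product([1, -1], repeat=N):
    energy = sum(spins[i] * spins[(i + 1) % N] for i in range(N))
    Z_bf += np.exp(beta * J * energy)
```

In the thermodynamic limit only the largest eigenvalue survives, so the free energy per site is -log(2cosh(βJ))/β, which is the pattern the notes exploit for more general transfer matrices.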
A $\mu$-mode integrator for solving evolution equations in Kronecker form
In this paper, we propose a $\mu$-mode integrator for computing the solution
of stiff evolution equations. The integrator is based on a d-dimensional
splitting approach and uses exact (usually precomputed) one-dimensional matrix
exponentials. We show that the action of the exponentials, i.e., the
corresponding batched matrix-vector products, can be implemented efficiently on
modern computer systems. We further explain how $\mu$-mode products can be used
to compute spectral transformations efficiently even if no fast transform is
available. We illustrate the performance of the new integrator by solving
three-dimensional linear and nonlinear Schr\"odinger equations, and we show
that the $\mu$-mode integrator can significantly outperform numerical methods
well established in the field. We also discuss how to efficiently implement
this integrator on both multi-core CPUs and GPUs. Finally, the numerical
experiments show that using GPUs results in performance improvements between a
factor of 10 and 20, depending on the problem.
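The identity behind such an integrator is that a Kronecker-form operator acts through one-dimensional mode products, so exp(t(A ⊕ B)) = exp(tA) ⊗ exp(tB) never has to be assembled. A minimal two-dimensional NumPy sketch of this idea (our illustration, not the paper's code; the matrices are made symmetric so the exponential can be taken by eigendecomposition):

```python
import numpy as np

def expm_sym(M):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)); A = A + A.T   # 1D operator, mode 1
B = rng.standard_normal((4, 4)); B = B + B.T   # 1D operator, mode 2
X = rng.standard_normal((3, 4))                # unknown kept as a 2D array

eA, eB = expm_sym(A), expm_sym(B)

# Mode products: apply exp(A) along mode 1 and exp(B) along mode 2.
# These are exactly the batched matrix products the abstract refers to.
Y = eA @ X @ eB.T

# Same result as acting with the full Kronecker operator on vec(X), since
# exp(kron(A, I) + kron(I, B)) = kron(exp(A), exp(B)) (the terms commute).
y_full = np.kron(eA, eB) @ X.ravel()
```

The mode-product form costs O(n^3) per dimension instead of the O(n^4) of the assembled 2D Kronecker operator, and the gap widens rapidly with the number of dimensions d.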