Sparse approximate inverse preconditioners on high performance GPU platforms
Simulation with models based on partial differential equations often requires the solution of (sequences of) large and sparse algebraic linear systems. In multidimensional domains, preconditioned Krylov iterative solvers are often appropriate for this task; the search for efficient preconditioners for Krylov subspace methods is therefore a crucial theme. Recent developments, especially in computing hardware, have renewed interest in approximate inverse preconditioners in factorized form, because their application during the solution process can be more efficient. We present experiences with the approximate inverse preconditioners proposed by Benzi and Tůma in 1996 and the sparsification-and-inversion approach proposed by van Duin in 1999. Computational costs, reorderings and implementation issues are considered on both conventional and innovative computing architectures such as Graphics Processing Units (GPUs).
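The practical appeal of factorized approximate inverses is that applying the preconditioner reduces to sparse matrix-vector products, which parallelize well on GPUs. A minimal SciPy sketch, using a crudely sparsified inverse Cholesky factor as a stand-in for the AINV biconjugation of Benzi and Tůma (matrix, sizes and drop tolerance are all illustrative):

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import cg, LinearOperator

# Illustrative SPD system: a 1-D diffusion-like tridiagonal matrix.
n = 200
A = diags([-1.0, 2.05, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Crude stand-in for AINV: invert the Cholesky factor densely, then
# sparsify by dropping small entries, so that A^{-1} ~= Z @ Z.T.
L = np.linalg.cholesky(A.toarray())
Zd = np.linalg.inv(L).T              # exact: A^{-1} = Zd @ Zd.T
Zd[np.abs(Zd) < 1e-2] = 0.0          # drop tolerance -> sparse factor
Z = csr_matrix(Zd)

# Applying the preconditioner is just two sparse matvecs: x -> Z (Z^T x).
M = LinearOperator((n, n), matvec=lambda x: Z @ (Z.T @ x))

b = np.ones(n)
x, info = cg(A, b, M=M)              # preconditioned conjugate gradients
```

Unlike a triangular solve with an incomplete factorization, the two matvecs above carry no sequential dependency, which is the property the abstract credits for the renewed hardware-driven interest.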
Computing matrix inversion with optical networks
With this paper we bring about a discussion on the computing potential of complex optical networks and provide an experimental demonstration that an optical fiber network can be used as an analog processor to calculate matrix inversion. A 3x3 matrix is inverted as a proof-of-concept demonstration using a fiber network containing three nodes and operating at telecom wavelength. For an NxN matrix, the overall solving time (including the setting time of the matrix elements and the calculation time of the inversion) scales as O(N^2), whereas matrix inversion by the most advanced computer algorithms requires ~O(N^2.37) computational time. For well-conditioned matrices, the error of the inversion performed optically is found to be less than 3%, limited by the accuracy of the measurement equipment.
Comment: 5 pages
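For comparison with the reported <3% figure, the same accuracy check can be run digitally: invert a well-conditioned 3x3 matrix and measure the relative inversion error (the matrix entries below are illustrative, not from the paper):

```python
import numpy as np

# A well-conditioned, diagonally dominant 3x3 matrix (illustrative values).
A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 5.0]])
A_inv = np.linalg.inv(A)

# Relative residual of A @ A^{-1} against the identity: near machine
# precision digitally, whereas the optical inversion is limited by the
# accuracy of the measurement equipment.
err = np.linalg.norm(A @ A_inv - np.eye(3)) / np.linalg.norm(np.eye(3))
print(f"cond(A) = {np.linalg.cond(A):.2f}, relative error = {err:.2e}")
```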
Parallel computation of optimized arrays for 2-D electrical imaging surveys
Modern automatic multi-electrode survey instruments have made it possible to use non-traditional arrays to maximize the subsurface resolution from electrical imaging surveys. Previous studies have shown that one of the best methods for generating optimized arrays is to select the set of array configurations that maximizes the model resolution for a homogeneous earth model. The Sherman–Morrison Rank-1 update is used to calculate the change in the model resolution when a new array is added to a selected set of array configurations. This method had the disadvantage that it required several hours of computer time even for short 2-D survey lines. The algorithm was modified to calculate the change in the model resolution rather than the entire resolution matrix. This reduces the computer time and memory required as well as the computational round-off errors. The matrix–vector multiplications for a single add-on array were replaced with matrix–matrix multiplications for 28 add-on arrays to further reduce the computer time. The temporary variables were stored in the double-precision Single Instruction Multiple Data (SIMD) registers within the CPU to minimize computer memory access. A further reduction in the computer time is achieved by using the computer graphics card Graphics Processor Unit (GPU) as a highly parallel mathematical coprocessor. This makes it possible to carry out the calculations for 512 add-on arrays in parallel using the GPU. The changes reduce the computer time by more than two orders of magnitude. The algorithm used to generate an optimized data set adds a specified number of new array configurations after each iteration to the existing set. The resolution of the optimized data set can be increased by adding a smaller number of new array configurations after each iteration. 
Although this increases the computer time required to generate an optimized data set with the same number of data points, the new fast numerical routines have made this practical on commonly available microcomputers.
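The core rank-1 idea above can be sketched briefly: adding one candidate array appends a sensitivity row g to the Jacobian G, so the matrix B = G^T G plus damping changes by the rank-1 term g g^T, and the Sherman–Morrison formula updates B^{-1} in O(m^2) instead of refactorizing in O(m^3). The sizes and damping below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 50                                  # number of model cells
G = rng.standard_normal((200, m))       # sensitivities of the selected arrays
B_inv = np.linalg.inv(G.T @ G + 0.1 * np.eye(m))  # damped normal-matrix inverse

g = rng.standard_normal(m)              # sensitivity row of one add-on array
# Sherman-Morrison: (B + g g^T)^{-1} = B^{-1} - (B^{-1} g)(B^{-1} g)^T / (1 + g^T B^{-1} g)
Bg = B_inv @ g
B_inv_new = B_inv - np.outer(Bg, Bg) / (1.0 + g @ Bg)

# Matches a direct inversion of the updated matrix.
direct = np.linalg.inv(G.T @ G + np.outer(g, g) + 0.1 * np.eye(m))
```

Batching 28 (or, on the GPU, 512) such candidate rows turns the matrix-vector products above into matrix-matrix products, which is where the abstract's two-orders-of-magnitude speedup comes from.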
Alternative methods for representing the inverse of linear programming basis matrices
Methods for representing the inverse of Linear Programming (LP) basis matrices are closely related to techniques for solving a system of sparse unsymmetric linear equations by direct methods. It is now well accepted that for these problems the static process of reordering the matrix into lower block triangular (LBT) form constitutes the initial step. We introduce a combined static and dynamic factorization of a basis matrix and derive its inverse, which we call the partial elimination form of the inverse (PEFI). This factorization takes advantage of the LBT structure and produces a sparser representation of the inverse than the elimination form of the inverse (EFI). In this we make use of the original columns (of the constraint matrix) which are in the basis. To represent the factored inverse it is, however, necessary to introduce special data structures which are used in the forward and the backward transformations (the two major algorithmic steps) of the simplex method. These correspond to solving a system of equations and solving a system of equations with the transposed matrix, respectively. In this paper we compare the nonzero build-up of PEFI with that of EFI. We have also investigated alternative methods for updating the basis inverse in the PEFI representation. The results of our experimental investigation are presented in this paper.
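The two algorithmic steps named above are the forward transformation (FTRAN), solving B x = b, and the backward transformation (BTRAN), solving B^T y = c; any factored representation of the basis inverse must support both. A minimal sketch with a plain dense LU standing in for the EFI/PEFI factored forms (the 4x4 basis is illustrative):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# An illustrative nonsingular 4x4 basis matrix.
B = np.array([[2.0, 0.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 0.0],
              [0.0, 1.0, 4.0, 1.0],
              [0.0, 0.0, 1.0, 2.0]])
lu, piv = lu_factor(B)                # factor once per basis

b = np.array([1.0, 2.0, 0.0, 1.0])
c = np.array([0.0, 1.0, 1.0, 0.0])
x = lu_solve((lu, piv), b)            # FTRAN: solve B x = b
y = lu_solve((lu, piv), c, trans=1)   # BTRAN: solve B^T y = c
```

The point of PEFI is that its sparser factors make both of these solves cheaper than with the EFI, at the cost of the special data structures the abstract mentions.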
Radio interferometric gain calibration as a complex optimization problem
Recent developments in optimization theory have extended some traditional algorithms for least-squares optimization of real-valued functions (Gauss-Newton, Levenberg-Marquardt, etc.) into the domain of complex functions of a complex variable. This employs a formalism called the Wirtinger derivative, and derives a full-complex Jacobian counterpart to the conventional real Jacobian. We apply these developments to the problem of radio interferometric gain calibration, and show how the general complex Jacobian formalism, when combined with conventional optimization approaches, yields a whole new family of calibration algorithms, including those for the polarized and direction-dependent gain regime. We further extend the Wirtinger calculus to an operator-based matrix calculus for describing the polarized calibration regime. Using approximate matrix inversion results in computationally efficient implementations; we show that some recently proposed calibration algorithms such as StefCal and peeling can be understood as special cases of this, and place them in the context of the general formalism. Finally, we present an implementation and some applied results of CohJones, another specialized direction-dependent calibration algorithm derived from the formalism.
Comment: 18 pages; 6 figures; accepted by MNRAS