Fast matrix multiplication techniques based on the Adleman-Lipton model
On distributed memory electronic computers, the implementation and
association of fast parallel matrix multiplication algorithms have yielded
astounding results and insights. In this discourse, we use the tools of
molecular biology to demonstrate the theoretical encoding of Strassen's fast
matrix multiplication algorithm with DNA based on an $n$-moduli set in the
residue number system, thereby demonstrating the viability of computational
mathematics with DNA. As a result, a general scalable implementation of this
model in the DNA computing paradigm is presented and can be generalized to the
application of \emph{all} fast matrix multiplication algorithms on a DNA
computer. We also discuss the practical capabilities and issues of this
scalable implementation. Fast methods of matrix computations with DNA are
important because they also allow for the efficient implementation of other
algorithms (i.e. inversion, computing determinants, and graph theory) with DNA.

Comment: To appear in the International Journal of Computer Engineering
Research. Minor changes made to make the preprint as similar as possible to
the published version.
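As background for the abstract above, Strassen's scheme forms seven block products instead of the usual eight; a minimal sketch of the classical recursion (for illustration only — this is the conventional electronic formulation, not the DNA/residue-number-system encoding the paper describes):

```python
import numpy as np

def strassen(A, B):
    """Multiply square matrices whose order is a power of two
    using Strassen's seven-multiplication recursion."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Strassen's seven recursive products
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Reassemble the four quadrants of the product
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])
```

The recursion replaces one multiplication with additions, which is what drives the exponent below 3 and makes the algorithm attractive for alternative computing substrates.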
RLS Adaptive Filtering Algorithms Based on Parallel Computations
The paper presents a family of sliding window RLS adaptive filtering algorithms with regularization of the adaptive filter correlation matrix. The algorithms are developed in forms suited to implementation by means of parallel computations. The family includes RLS and fast RLS algorithms based on the generalized matrix inversion lemma, fast RLS algorithms based on square-root-free inverse QR decomposition, and linearly constrained RLS algorithms. The considered algorithms are mathematically identical to the corresponding sequentially computed algorithms. The computational procedures of the developed algorithms are presented, along with simulation results.
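For orientation, a minimal sketch of the standard exponentially weighted RLS recursion that this family builds on (a simplified sequential form, not the paper's sliding-window or parallelized variants; the function name and interface are illustrative):

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One step of the conventional exponentially weighted RLS recursion.
    w   -- current weight vector
    P   -- current estimate of the inverse correlation matrix
    x   -- new input (regressor) vector
    d   -- desired response sample
    lam -- forgetting factor (0 < lam <= 1)"""
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = d - w @ x                    # a priori error
    w = w + k * e                    # weight update
    P = (P - np.outer(k, Px)) / lam  # inverse-correlation update (matrix inversion lemma)
    return w, P
```

The `P` update is exactly where the generalized matrix inversion lemma mentioned in the abstract enters: it avoids re-inverting the correlation matrix at every sample.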
Fast linear algebra is stable
In an earlier paper, we showed that a large class of fast recursive matrix
multiplication algorithms is stable in a normwise sense, and that in fact if
multiplication of $n$-by-$n$ matrices can be done by any algorithm in
$O(n^{\omega + \eta})$ operations for any $\eta > 0$, then it can be done
stably in $O(n^{\omega + \eta})$ operations for any $\eta > 0$. Here we extend
this result to show that essentially all standard linear algebra operations,
including LU decomposition, QR decomposition, linear equation solving, matrix
inversion, solving least squares problems, (generalized) eigenvalue problems
and the singular value decomposition can also be done stably (in a normwise
sense) in $O(n^{\omega + \eta})$ operations.

Comment: 26 pages; final version; to appear in Numerische Mathematik.
GPU-Accelerated Algorithms for Compressed Signals Recovery with Application to Astronomical Imagery Deblurring
Compressive sensing promises to enable bandwidth-efficient on-board
compression of astronomical data by lifting the encoding complexity from the
source to the receiver. The signal is recovered off-line, exploiting GPUs'
parallel computation capabilities to speed up the reconstruction process.
However, inherent GPU hardware constraints limit the size of the recoverable
signal and the speedup practically achievable. In this work, we design parallel
algorithms that exploit the properties of circulant matrices for efficient
GPU-accelerated sparse signals recovery. Our approach reduces the memory
requirements, allowing us to recover very large signals with limited memory. In
addition, it achieves a tenfold signal recovery speedup thanks to ad-hoc
parallelization of matrix-vector multiplications and matrix inversions.
Finally, we practically demonstrate our algorithms in a typical application of
circulant matrices: deblurring a sparse astronomical image in the compressed
domain.
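The circulant property the abstract exploits is that circulant matrices are diagonalized by the discrete Fourier transform, so matrix-vector products and inversions reduce to elementwise operations between FFTs. A minimal sketch of this idea (illustrative only — the paper's GPU kernels are far more involved):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by the vector x.
    Equivalent to circular convolution, computed via the FFT in O(n log n)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def circulant_solve(c, b):
    """Solve C y = b for a nonsingular circulant C with first column c:
    inversion becomes elementwise division in the Fourier domain."""
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))
```

Because only the first column `c` is stored and all work is done by FFTs, memory scales linearly with the signal length, which is the mechanism behind the reduced memory requirements claimed above.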