Fast Multipole Method as a Matrix-Free Hierarchical Low-Rank Approximation
There has been a large increase in the amount of work on hierarchical
low-rank approximation methods, where the interest is shared by multiple
communities that previously did not intersect. The objective of this article
is twofold: to provide a thorough review of the recent advancements in this
field from both analytical and algebraic perspectives, and to present a
comparative benchmark of two highly optimized implementations of contrasting
methods for some simple yet representative test cases. We categorize the recent
advances in this field from the perspective of compute-memory tradeoff, which
has not been considered in much detail in this area. Benchmark tests reveal
that there is a large difference in the memory consumption and performance
between the different methods.
Comment: 19 pages, 6 figures
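The compute-memory tradeoff discussed above hinges on how well off-diagonal interaction blocks compress. As a hypothetical illustration (not the paper's benchmark; the kernel, point sets, and tolerance are all assumptions), a truncated SVD of a well-separated 1/r interaction block exhibits the low numerical rank that both FMM and algebraic hierarchical methods exploit:

```python
import numpy as np

# Hypothetical example: compress an off-diagonal interaction block between
# two well-separated point clusters with a truncated SVD, the basic
# building block shared by FMM and algebraic low-rank methods.
src = np.linspace(0.0, 1.0, 200)                # source points
trg = np.linspace(10.0, 11.0, 200)              # well-separated targets
K = 1.0 / np.abs(trg[:, None] - src[None, :])   # 1/r kernel block

U, s, Vt = np.linalg.svd(K, full_matrices=False)
k = int(np.sum(s > 1e-10 * s[0]))               # numerical rank at tolerance
K_lowrank = U[:, :k] * s[:k] @ Vt[:k, :]        # rank-k factorization

rel_err = np.linalg.norm(K - K_lowrank) / np.linalg.norm(K)
compression = (U[:, :k].size + k + Vt[:k, :].size) / K.size
```

For well-separated clusters the numerical rank stays small, so storing the factors instead of the dense block trades a little accuracy for a large memory saving.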
A Multiscale Method for Model Order Reduction in PDE Parameter Estimation
Estimating parameters of Partial Differential Equations (PDEs) is of interest
in a number of applications such as geophysical and medical imaging. Parameter
estimation is commonly phrased as a PDE-constrained optimization problem that
can be solved iteratively using gradient-based optimization. A computational
bottleneck in such approaches is that the underlying PDEs need to be solved
numerous times before the model is reconstructed with sufficient accuracy. One
way to reduce this computational burden is by using Model Order Reduction (MOR)
techniques such as the Multiscale Finite Volume Method (MSFV).
In this paper, we apply MSFV for solving high-dimensional parameter
estimation problems. Given a finite volume discretization of the PDE on a fine
mesh, the MSFV method reduces the problem size by computing a
parameter-dependent projection onto a nested coarse mesh. A novelty in our work
is the integration of MSFV into a PDE-constrained optimization framework, which
updates the reduced space in each iteration. We also present a computationally
tractable way of differentiating the MOR solution that acknowledges the change
of basis. As we demonstrate in our numerical experiments, our method leads to
computational savings particularly for large-scale parameter estimation
problems and can benefit from parallelization.
Comment: 22 pages, 4 figures, 3 tables
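The projection idea behind this kind of model order reduction can be sketched in a few lines. This is a generic Galerkin toy (a 1-D reaction-diffusion system with a piecewise-constant coarse basis), not the MSFV operator itself; all sizes and the coefficient field are assumptions:

```python
import numpy as np

# Toy sketch of projection-based MOR (not the MSFV operator itself):
# replace a large fine-mesh system A(p) u = q by a small system projected
# onto a coarse basis P, then prolong the coarse solution back.
rng = np.random.default_rng(2)
N, Nc = 64, 8                                # fine and coarse problem sizes

# 1-D Laplacian plus a parameter-dependent reaction term (SPD system)
p = 1.0 + rng.random(N)                      # hypothetical parameter field
A = 2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1) + np.diag(p)
q = np.ones(N)

# Piecewise-constant prolongation: each coarse cell covers N//Nc fine cells
P = np.kron(np.eye(Nc), np.ones((N // Nc, 1)))

Ac = P.T @ A @ P                             # small Nc x Nc Galerkin system
u_mor = P @ np.linalg.solve(Ac, P.T @ q)     # prolonged coarse solution
u_fine = np.linalg.solve(A, q)               # reference fine solution

rel_err = np.linalg.norm(u_mor - u_fine) / np.linalg.norm(u_fine)
```

In an optimization loop the coarse solve replaces the fine one at each iteration, which is where the computational savings come from; the paper's contribution includes differentiating through the parameter-dependent projection.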
Solution of the Linearly Structured Partial Polynomial Inverse Eigenvalue Problem
In this paper, the linearly structured partial polynomial inverse eigenvalue
problem is considered for a matrix polynomial of arbitrary degree $k$. Given a
set of eigenpairs $(\lambda_i, x_i)$, $i = 1, \ldots, p$, this problem
concerns computing the matrices $A_0, A_1, \ldots, A_k$ of specified linear
structure such that the matrix polynomial $P(\lambda) = \sum_{j=0}^{k}
\lambda^j A_j$ has the given eigenpairs as its eigenvalues and eigenvectors.
Many practical applications give rise to linearly structured matrix
polynomials.
Therefore, construction of the linearly structured matrix polynomial is the
most important aspect of the polynomial inverse eigenvalue problem (PIEP). In
this paper, a necessary and sufficient condition for the existence of the
solution of this problem is derived. Additionally, we characterize the class of
all solutions to this problem by giving the explicit expressions of solutions.
The results presented in this paper address some important open problems in the
area of PIEP raised in De Teran, Dopico and Van Dooren [SIAM Journal on Matrix
Analysis and Applications]. An attractive
feature of our solution approach is that it does not impose any restriction on
the number of eigendata for computing the solution of PIEP. The proposed method
is validated with various numerical examples on a spring-mass problem.
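Although the paper solves the inverse direction, the forward check is easy to state in code. This hypothetical sketch (sizes, structure, and damping are assumptions, loosely in the spirit of a spring-mass model) builds a structured quadratic matrix polynomial and verifies an eigenpair numerically:

```python
import numpy as np

# Forward check for a quadratic matrix polynomial P(l) = l^2*M + l*C + K
# from a hypothetical spring-mass model. A PIEP runs this in reverse:
# prescribe eigenpairs (l, x) and solve for structured M, C, K.
n = 4
M = np.eye(n)                                        # unit masses
K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiagonal stiffness
C = 0.1 * K                                          # proportional damping

# Linearize to a 2n x 2n eigenproblem; eigenvectors stack as [x; l*x]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-K, -C]])                             # valid since M = I
lam, V = np.linalg.eig(A)
l, x = lam[0], V[:n, 0]

# (l, x) should be an eigenpair of P, i.e. P(l) @ x is (numerically) zero
residual = np.linalg.norm((l**2 * M + l*C + K) @ x)
```

Prescribing some of the pairs (l, x) and solving for M, C, K subject to the tridiagonal structure is exactly the inverse problem the paper characterizes.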
Chapter 10: Algebraic Algorithms
Our Chapter in the upcoming Volume I: Computer Science and Software
Engineering of Computing Handbook (Third edition), Allen Tucker, Teo Gonzales
and Jorge L. Diaz-Herrera, editors, covers Algebraic Algorithms, both symbolic
and numerical, for matrix computations and root-finding for polynomials and
systems of polynomial equations. We cover part of these large subjects and
include basic bibliography for further study. To meet space limitation we cite
books, surveys, and comprehensive articles with pointers to further references,
rather than including all the original technical papers.
Comment: 41.1 pages
A literature survey of matrix methods for data science
Efficient numerical linear algebra is a core ingredient in many applications
across almost all scientific and industrial disciplines. With this survey we
want to illustrate that numerical linear algebra has played and is playing a
crucial role in enabling and improving data science computations with many new
developments being fueled by the availability of data and computing resources.
We highlight the role of various different factorizations and the power of
changing the representation of the data as well as discussing topics such as
randomized algorithms, functions of matrices, and high-dimensional problems. We
briefly touch upon the role of techniques from numerical linear algebra used
within deep learning.
Effective Resistances, Statistical Leverage, and Applications to Linear Equation Solving
Recent work in theoretical computer science and scientific computing has
focused on nearly-linear-time algorithms for solving systems of linear
equations. While introducing several novel theoretical perspectives, this work
has yet to lead to practical algorithms. In an effort to bridge this gap, we
describe in this paper two related results. Our first and main result is a
simple algorithm to approximate the solution to a set of linear equations
defined by a Laplacian constraint matrix (for a graph with $n$ nodes and $m$
edges). The algorithm is non-recursive; even though it runs in
$O(n^2 \cdot \mathrm{polylog}(n))$ time rather than in nearly-linear time
(given an oracle for the so-called statistical leverage scores), it is
extremely simple; and it can be used to compute an approximate solution with a
direct solver. In light of this result, our second result is a straightforward
connection between the concept of graph resistance (which has proven useful in
recent algorithms for linear equation solvers) and the concept of statistical
leverage (which has proven useful in numerically-implementable randomized
algorithms for large matrix problems and which has a natural data-analytic
interpretation).
Comment: 16 pages
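The resistance-leverage connection is concrete enough to check directly. In this sketch (the small example graph is an assumption), the effective resistance of each edge equals the statistical leverage score of the corresponding row of the edge-incidence matrix:

```python
import numpy as np

# Hypothetical small graph: effective resistance of each edge equals the
# statistical leverage score of the corresponding row of the signed
# edge-incidence matrix B, via the Laplacian pseudoinverse.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]    # triangle plus a pendant edge
n, m = 4, len(edges)
B = np.zeros((m, n))
for e, (i, j) in enumerate(edges):
    B[e, i], B[e, j] = 1.0, -1.0            # signed edge-incidence rows

L = B.T @ B                                 # graph Laplacian
Lpinv = np.linalg.pinv(L)

# Effective resistance of edge (i, j): (e_i - e_j)^T L^+ (e_i - e_j)
R = np.array([B[e] @ Lpinv @ B[e] for e in range(m)])

# Statistical leverage scores: diagonal of the projection B L^+ B^T
lev = np.diag(B @ Lpinv @ B.T)
```

The two vectors coincide entry by entry, and their common sum equals the rank of B (here n - 1 for a connected graph), the usual normalization of leverage scores.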
Uncertainty quantification in large Bayesian linear inverse problems using Krylov subspace methods
For linear inverse problems with a large number of unknown parameters,
uncertainty quantification remains a challenging task. In this work, we use
Krylov subspace methods to approximate the posterior covariance matrix and
describe efficient methods for exploring the posterior distribution. Assuming
that Krylov methods (e.g., based on the generalized Golub-Kahan
bidiagonalization) have been used to compute an estimate of the solution, we
get an approximation of the posterior covariance matrix for `free.' We provide
theoretical results that quantify the accuracy of the approximation and of the
resulting posterior distribution. Then, we describe efficient methods that use
the approximation to compute measures of uncertainty, including the
Kullback-Leibler divergence. We present two methods that use preconditioned
Lanczos methods to efficiently generate samples from the posterior
distribution. Numerical examples from tomography demonstrate the effectiveness
of the described approaches.
Comment: 26 pages, 4 figures, 2 tables. Under review
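The low-rank structure that makes the posterior covariance cheap to approximate can be illustrated densely. In this hypothetical sketch a dense eigendecomposition stands in for the paper's matrix-free Golub-Kahan process, and the forward map, noise level, and prior are all assumptions:

```python
import numpy as np

# Hypothetical sketch: a dense eigendecomposition stands in for the
# matrix-free Golub-Kahan process. For d = F x + noise with noise
# covariance sigma^2 I and prior covariance I, the posterior covariance
# is (H + I)^{-1} with data-misfit Hessian H = F^T F / sigma^2; a low-rank
# approximation of H already captures it well when F is smoothing.
rng = np.random.default_rng(3)
n, m, k = 50, 30, 10
F = rng.standard_normal((m, n)) @ np.diag(2.0 ** -np.arange(n))
sigma = 0.1

H = F.T @ F / sigma**2                      # data-misfit Hessian
Gamma_post = np.linalg.inv(H + np.eye(n))   # exact posterior covariance

# Keep only the k largest Hessian eigenpairs (eigh sorts ascending)
w, V = np.linalg.eigh(H)
Vk, wk = V[:, -k:], w[-k:]
Gamma_approx = np.eye(n) - Vk @ np.diag(wk / (1 + wk)) @ Vk.T

err = np.linalg.norm(Gamma_post - Gamma_approx) / np.linalg.norm(Gamma_post)
```

Because the forward map damps high-order modes, only a few Hessian eigenpairs matter, which is the same structure the Krylov-based methods in the paper extract without ever forming H.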
Literature survey on low rank approximation of matrices
Low rank approximation of matrices has been well studied in literature.
Singular value decomposition, QR decomposition with column pivoting, rank
revealing QR factorization (RRQR), Interpolative decomposition etc are
classical deterministic algorithms for low rank approximation. But these
techniques are very expensive ($O(n^3)$ operations are required for
$n \times n$ matrices). There are several randomized algorithms available in
the literature which are not as expensive as the classical techniques (but
their complexity is still not linear in $n$). So, it is very expensive to
construct the low rank approximation of a matrix if the dimension of the
matrix is very large. There are alternative techniques like Cross/Skeleton
approximation which give the low-rank approximation with linear complexity
in $n$. In this article we
review low rank approximation techniques briefly and give extensive references
of many techniques.
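Among the randomized alternatives the survey points to, the basic range-finder idea fits in a few lines. This is a generic sketch in the style of Halko, Martinsson, and Tropp; the test matrix, sizes, and oversampling are assumptions:

```python
import numpy as np

# Generic randomized range-finder sketch: project A onto a random
# subspace, orthonormalize, and form a rank-k factorization A ~ Q (Q^T A).
rng = np.random.default_rng(0)
n, true_rank, k = 300, 10, 15
A = rng.standard_normal((n, true_rank)) @ rng.standard_normal((true_rank, n))

Omega = rng.standard_normal((n, k))        # random test matrix
Q, _ = np.linalg.qr(A @ Omega)             # orthonormal basis for range(A)
A_approx = Q @ (Q.T @ A)                   # rank-k approximation

err = np.linalg.norm(A - A_approx) / np.linalg.norm(A)
```

The dominant cost is the matrix-matrix product A @ Omega, which is why these methods beat a full SVD when only a modest rank is needed.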
Direct Inversion of the 3D Pseudo-polar Fourier Transform
The pseudo-polar Fourier transform is a specialized non-equally spaced
Fourier transform, which evaluates the Fourier transform on a near-polar grid,
known as the pseudo-polar grid. The advantage of the pseudo-polar grid over
other non-uniform sampling geometries is that the transformation, which samples
the Fourier transform on the pseudo-polar grid, can be inverted using a fast
and stable algorithm. For other sampling geometries, even if the non-equally
spaced Fourier transform can be inverted, the only known algorithms are
iterative. The convergence speed of these algorithms as well as their accuracy
are difficult to control, as they depend both on the sampling geometry as well
as on the unknown reconstructed object. In this paper, we present a direct
inversion algorithm for the three-dimensional pseudo-polar Fourier transform.
The algorithm is based only on one-dimensional resampling operations, and is
shown to be significantly faster than existing iterative inversion algorithms.
Polynomial Time Algorithms for Dual Volume Sampling
We study dual volume sampling, a method for selecting k columns from an n x m
short and wide matrix (n <= k <= m) such that the probability of selection is
proportional to the volume spanned by the rows of the induced submatrix. This
method was proposed by Avron and Boutsidis (2013), who showed it to be a
promising method for column subset selection and its multiple applications.
However, its wider adoption has been hampered by the lack of polynomial time
sampling algorithms. We remove this hindrance by developing an exact
(randomized) polynomial time sampling algorithm as well as its derandomization.
Thereafter, we study dual volume sampling via the theory of real stable
polynomials and prove that its distribution satisfies the "Strong Rayleigh"
property. This result has numerous consequences, including a provably
fast-mixing Markov chain sampler that makes dual volume sampling much more
attractive to practitioners. This sampler is closely related to classical
algorithms for popular experimental design methods that are to date lacking
theoretical analysis but are known to work well empirically.
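The distribution itself is easy to state, if not to sample efficiently. This brute-force sketch enumerates all column subsets (exponential cost, nothing like the paper's polynomial-time sampler; the sizes are assumptions) just to make the definition concrete:

```python
import numpy as np
from itertools import combinations

# Brute-force illustration of dual volume sampling: pick k columns S of a
# short-wide n x m matrix A with probability proportional to
# det(A_S A_S^T), the squared volume spanned by the rows of A_S.
# Exponential cost -- for the definition only, not a practical sampler.
rng = np.random.default_rng(1)
n, m, k = 2, 5, 3                           # requires n <= k <= m
A = rng.standard_normal((n, m))

subsets = list(combinations(range(m), k))
weights = np.array([np.linalg.det(A[:, list(S)] @ A[:, list(S)].T)
                    for S in subsets])
probs = weights / weights.sum()             # the DVS distribution

S = subsets[rng.choice(len(subsets), p=probs)]  # one exact draw
```

The paper's contribution is replacing this enumeration with an exact polynomial-time sampler and, via the Strong Rayleigh property, a fast-mixing Markov chain.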