Preconditioning Kernel Matrices
The computational and storage complexity of kernel machines presents the
primary barrier to their scaling to large, modern datasets. A common way to
tackle the scalability issue is to use the conjugate gradient algorithm, which
relieves the constraints on both storage (the kernel matrix need not be stored)
and computation (both stochastic gradients and parallelization can be used).
Even so, conjugate gradient is not without its own issues: the conditioning of
kernel matrices is often such that conjugate gradients will have poor
convergence in practice. Preconditioning is a common approach to alleviating
this issue. Here we propose preconditioned conjugate gradients for kernel
machines, and develop a broad range of preconditioners particularly useful for
kernel matrices. We describe a scalable approach to both solving kernel
machines and learning their hyperparameters. We show this approach is exact in
the limit of iterations and outperforms state-of-the-art approximations for a
given computational budget.
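A minimal sketch of the idea, not the paper's actual method: the kernel ridge system (K + σ²I)α = y is solved with SciPy's conjugate gradient, preconditioned by a low-rank-plus-diagonal approximation built from the top-k eigenpairs of K (one simple member of the family of preconditioners the abstract alludes to). The RBF kernel, data, and hyperparameters below are illustrative assumptions; in practice the eigenpairs would come from a cheap low-rank approximation rather than a full eigendecomposition.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

# Toy kernel ridge system (K + sigma2*I) alpha = y with an RBF kernel on
# synthetic 1-D inputs. Sizes and hyperparameters are illustrative
# assumptions, not values from the paper.
rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-3.0, 3.0, size=n)
y = np.sin(X) + 0.1 * rng.standard_normal(n)

lengthscale, sigma2 = 1.0, 1e-2
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / lengthscale**2)
A = K + sigma2 * np.eye(n)

# Preconditioner P = Q_k diag(w_k) Q_k^T + sigma2*I from the top-k
# eigenpairs of K; its exact inverse is applied in O(n*k) per iteration:
# P^{-1} v = (v - Q_k diag(w_k / (w_k + sigma2)) Q_k^T v) / sigma2.
k = 20
w, Q = np.linalg.eigh(K)          # ascending eigenvalues
wk, Qk = w[-k:], Q[:, -k:]        # top-k eigenpairs

def apply_pinv(v):
    t = Qk.T @ v
    return (v - Qk @ (wk / (wk + sigma2) * t)) / sigma2

M = LinearOperator((n, n), matvec=apply_pinv)
alpha, info = cg(A, y, M=M, maxiter=200)
residual = np.linalg.norm(A @ alpha - y) / np.linalg.norm(y)
```

Because the RBF kernel's spectrum decays rapidly, the preconditioned system is close to the identity and CG converges in a handful of iterations.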
Some Preconditioning Techniques for Saddle Point Problems
Saddle point problems arise frequently in many applications in science and engineering, including constrained optimization, mixed finite element formulations of partial differential equations, circuit analysis, and so forth. Indeed, the formulation of most problems with constraints gives rise to saddle point systems. This paper provides a concise overview of iterative approaches for the solution of such systems, which are of particular importance in the context of large scale computation. In particular, we describe some of the most useful preconditioning techniques for Krylov subspace solvers applied to saddle point problems, including block and constrained preconditioners.
The work of Michele Benzi was supported in part by the National Science Foundation grant DMS-0511336
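The block preconditioners mentioned above can be illustrated with a small experiment; the setup below is a hedged sketch with a random SPD block A and full-rank constraint block B (illustrative assumptions, not a problem from the paper). With the exact block-diagonal preconditioner diag(A, S), where S is the Schur complement, the preconditioned matrix has only three distinct eigenvalues, so GMRES converges in at most three iterations; practical preconditioners replace S with a cheap approximation.

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

# Toy saddle point system K = [[A, B^T], [B, 0]].
rng = np.random.default_rng(1)
n, m = 40, 10
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)                       # SPD (1,1) block
B = rng.standard_normal((m, n))                   # full-rank constraint block
K = np.block([[A, B.T], [B, np.zeros((m, m))]])

# Exact Schur complement S = B A^{-1} B^T (formed exactly here only for
# illustration; real preconditioners approximate it).
S = B @ np.linalg.solve(A, B.T)
Ainv, Sinv = np.linalg.inv(A), np.linalg.inv(S)

# Block-diagonal preconditioner P = diag(A, S), applied as P^{-1} v.
P = LinearOperator(
    (n + m, n + m),
    matvec=lambda v: np.concatenate([Ainv @ v[:n], Sinv @ v[n:]]),
)

b = rng.standard_normal(n + m)
x, info = gmres(K, b, M=P, maxiter=50)
res = np.linalg.norm(K @ x - b) / np.linalg.norm(b)
```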
Solving the Poisson equation on small aspect ratio domains using unstructured meshes
We discuss the ill conditioning of the matrix for the discretised Poisson
equation in the small aspect ratio limit, and motivate this problem in the
context of nonhydrostatic ocean modelling. Efficient iterative solvers for the
Poisson equation in small aspect ratio domains are crucial for the successful
development of nonhydrostatic ocean models on unstructured meshes. We introduce
a new multigrid preconditioner for the Poisson problem which can be used with
finite element discretisations on general unstructured meshes; this
preconditioner is motivated by the fact that the Poisson problem has a
condition number which is independent of aspect ratio when Dirichlet boundary
conditions are imposed on the top surface of the domain. This leads to the
first level in an algebraic multigrid solver (which can be extended by further
conventional algebraic multigrid stages), and an additive smoother. We
illustrate the method with numerical tests on unstructured meshes, which show
that the preconditioner makes a dramatic improvement on a more standard
multigrid preconditioner approach, and also show that the additive smoother
produces better results than standard SOR smoothing. This new solver method
makes it feasible to run nonhydrostatic unstructured mesh ocean models in small
aspect ratio domains.
Comment: submitted to Ocean Modelling
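The aspect-ratio effect, and the Dirichlet-top observation behind the preconditioner, can be reproduced with a small finite difference experiment. This is a hedged structured-grid stand-in (second-order differences on a rectangle), not the paper's unstructured finite element setting: with Neumann conditions on the top surface the condition number of the Poisson operator degrades as the domain flattens, while a Dirichlet top keeps it essentially independent of aspect ratio.

```python
import numpy as np

def lap1d(n, h, neumann_low=False, neumann_high=False):
    """1-D second-difference operator; a Neumann end replaces the -2
    boundary diagonal entry with -1 (standard ghost-point treatment)."""
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    if neumann_low:
        L[0, 0] = -1.0
    if neumann_high:
        L[-1, -1] = -1.0
    return L / h**2

def poisson_cond(eps, n=12, dirichlet_top=False):
    """Condition number of the 2-D Poisson operator on a domain of width 1
    and height eps, with Dirichlet side walls, a Neumann bottom (ocean
    floor), and either a Neumann or a Dirichlet top surface."""
    dx, dz = 1.0 / (n + 1), eps / (n + 1)
    Lx = lap1d(n, dx)                          # Dirichlet side walls
    Lz = lap1d(n, dz, neumann_low=True, neumann_high=not dirichlet_top)
    A = np.kron(Lx, np.eye(n)) + np.kron(np.eye(n), Lz)
    s = np.abs(np.linalg.eigvalsh(A))
    return s.max() / s.min()

# Neumann top: conditioning blows up as the domain flattens.
growth_neumann = poisson_cond(0.01) / poisson_cond(1.0)
# Dirichlet top: conditioning stays roughly independent of aspect ratio.
growth_dirichlet = (poisson_cond(0.01, dirichlet_top=True)
                    / poisson_cond(1.0, dirichlet_top=True))
```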
Domain decomposition algorithms and computational fluid dynamics
In the past several years, domain decomposition has been a very popular topic, motivated in part by the potential for parallelization. While a large body of theory and algorithms has been developed for model elliptic problems, these methods are only recently starting to be tested on realistic applications. The application of some of these methods to two model problems in computational fluid dynamics is investigated: two dimensional convection-diffusion problems and the incompressible driven cavity flow problem. The construction and analysis of efficient preconditioners for the interface operator, to be used in the iterative solution of the interface problem, is described. For the convection-diffusion problems, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is discussed.
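The interface operator in question is the Schur complement obtained by eliminating the subdomain interiors. A minimal structured-grid sketch (an assumption for illustration; the abstract's actual problems are convection-diffusion and driven cavity flows) splits the 5-point Laplacian on an m-by-m grid into two subdomains separated by the middle grid column:

```python
import numpy as np

# Two-subdomain decomposition of the 5-point Laplacian on an m x m grid.
# The middle grid column is the interface; eliminating both subdomain
# interiors leaves the (small, dense) interface operator that the
# iterative method and its preconditioners actually target.
m = 15
T = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
A = np.kron(np.eye(m), T) + np.kron(T, np.eye(m))    # 2-D Laplacian

idx = np.arange(m * m).reshape(m, m)
G = idx[:, m // 2]                                   # interface column
I = np.setdiff1d(np.arange(m * m), G)                # subdomain interiors

# Schur complement S = A_GG - A_GI A_II^{-1} A_IG (A is symmetric).
AII = A[np.ix_(I, I)]
AIG = A[np.ix_(I, G)]
S = A[np.ix_(G, G)] - AIG.T @ np.linalg.solve(AII, AIG)

# The interface operator is much better conditioned than A itself, which
# is why iterating on the interface system pays off.
cond_A, cond_S = np.linalg.cond(A), np.linalg.cond(S)
```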
Reconstructing Kernel-based Machine Learning Force Fields with Super-linear Convergence
Kernel machines have sustained continuous progress in the field of quantum
chemistry. In particular, they have proven to be successful in the low-data
regime of force field reconstruction. This is because many physical invariances
and symmetries can be incorporated into the kernel function to compensate for
much larger datasets. So far, however, the scalability of this approach has
been hindered by its cubic runtime in the number of training points. While it
is known that iterative Krylov subspace solvers can overcome these burdens,
they crucially rely on effective preconditioners, which are elusive in
practice. Practical preconditioners need to be computationally efficient and
numerically robust at the same time. Here, we consider the broad class of Nyström-type
methods to construct preconditioners based on successively more sophisticated
low-rank approximations of the original kernel matrix, each of which provides a
different set of computational trade-offs. All considered methods estimate the
relevant subspace spanned by the kernel matrix columns using different
strategies to identify a representative set of inducing points. Our
comprehensive study covers the full spectrum of approaches, starting from naive
random sampling to leverage score estimates and incomplete Cholesky
factorizations, up to exact singular value decompositions.
Comment: 18 pages, 12 figures, preprint