Isogeometric preconditioners based on fast solvers for the Sylvester equation
We consider large linear systems arising from the isogeometric discretization
of the Poisson problem on a single-patch domain. The numerical solution of such
systems is considered a challenging task, particularly when the degree of the
splines employed as basis functions is high. We consider a preconditioning
strategy which is based on the solution of a Sylvester-like equation at each
step of an iterative solver. We show that this strategy, which fully exploits
the tensor structure that underlies isogeometric problems, is robust with
respect to both the mesh size and the spline degree, although its performance may degrade
in the presence of a complicated geometry or coefficients. We consider two popular
solvers for the Sylvester equation, a direct one and an iterative one, and we
discuss in detail their implementation and efficiency for 2D and 3D problems on
single-patch or conforming multi-patch NURBS geometries. Numerical experiments
for problems with different domain geometries are presented, which demonstrate
the potential of this approach.
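To illustrate the idea behind such a preconditioner, the sketch below applies one Sylvester solve to a residual vector. This is a minimal sketch, not the paper's implementation: the univariate matrices are simple finite-difference stand-ins for the spline stiffness matrices, and all function names are made up.

```python
# Sketch (not the paper's code): for a 2D tensor-product discretization the
# system matrix has Kronecker structure, and a simplified preconditioner
# P = K1 (x) I + I (x) K2 is applied by solving the Sylvester equation
#   K1 X + X K2^T = R   with r = vec(R).
import numpy as np
from scipy.linalg import solve_sylvester

def lap1d(n):
    """1D second-difference matrix, a stand-in for a univariate stiffness matrix."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def sylvester_preconditioner(K1, K2, r):
    """Apply P^{-1} r by a direct (Bartels-Stewart) Sylvester solve."""
    n1, n2 = K1.shape[0], K2.shape[0]
    R = r.reshape(n1, n2)              # row-major vec convention matches np.kron below
    X = solve_sylvester(K1, K2.T, R)   # solves K1 X + X K2^T = R
    return X.ravel()

n1, n2 = 8, 7
K1, K2 = lap1d(n1), lap1d(n2)
P = np.kron(K1, np.eye(n2)) + np.kron(np.eye(n1), K2)
r = np.random.default_rng(0).standard_normal(n1 * n2)
z = sylvester_preconditioner(K1, K2, r)   # z = P^{-1} r, verified below
```

The point of the tensor structure is that one never forms `P`: the preconditioner costs two small dense solves instead of one large one.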
Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure
The numerical solution of partial differential equations on high-dimensional
domains gives rise to computationally challenging linear systems. When using
standard discretization techniques, the size of the linear system grows
exponentially with the number of dimensions, making the use of classic
iterative solvers infeasible. During the last few years, low-rank tensor
approaches have been developed that mitigate this curse of
dimensionality by exploiting the underlying structure of the linear operator.
In this work, we focus on tensors represented in the Tucker and tensor train
formats. We propose two preconditioned gradient methods on the corresponding
low-rank tensor manifolds: a Riemannian version of the preconditioned
Richardson method as well as an approximate Newton scheme based on the
Riemannian Hessian. For the latter, considerable attention is given to the
efficient solution of the resulting Newton equation. In numerical experiments,
we compare the efficiency of our Riemannian algorithms with other established
tensor-based approaches such as a truncated preconditioned Richardson method
and the alternating linear scheme. The results show that our approximate
Riemannian Newton scheme is significantly faster in cases when the application
of the linear operator is expensive.
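For intuition, here is a minimal two-dimensional (matrix, i.e. order-2 tensor) sketch of the truncated Richardson baseline mentioned above, with a truncated SVD playing the role of the low-rank projection. The test problem and all names are illustrative assumptions, not the authors' code.

```python
# Sketch: Richardson iteration for the matrix equation A X + X B = C,
# truncating the iterate back to fixed rank after every step.
import numpy as np

def lap1d(n):
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def truncate(X, rank):
    """Project X to the nearest rank-`rank` matrix via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def truncated_richardson(A, B, C, rank, omega, iters=300):
    X = np.zeros_like(C)
    for _ in range(iters):
        R = C - A @ X - X @ B          # residual of the matrix equation
        X = truncate(X + omega * R, rank)
    return X

rng = np.random.default_rng(1)
n = 20
A = lap1d(n) + np.eye(n)               # SPD stand-in operator
B = A.copy()
C = np.outer(rng.standard_normal(n), rng.standard_normal(n))   # rank-one rhs
X = truncated_richardson(A, B, C, rank=8, omega=0.18)
res = np.linalg.norm(A @ X + X @ B - C) / np.linalg.norm(C)
```

In higher dimensions the iterate is stored in Tucker or tensor-train format and `truncate` becomes a higher-order SVD or TT rounding; the structure of the iteration is the same.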
Greedy low-rank algorithm for spatial connectome regression
Recovering brain connectivity from tract tracing data is an important
computational problem in the neurosciences. Mesoscopic connectome
reconstruction was previously formulated as a structured matrix regression
problem (Harris et al., 2016), but existing techniques do not scale to the
whole-brain setting. The corresponding matrix equation is challenging to solve
due to large scale, ill-conditioning, and a general form that lacks a
convergent splitting. We propose a greedy low-rank algorithm for the connectome
reconstruction problem in very high dimensions. The algorithm approximates the
solution by a sequence of rank-one updates which exploit the sparse and
positive definite problem structure. The algorithm was described previously
(Kressner and Sirković, 2015) but had never been applied to this connectome
problem, which posed a number of implementation challenges. We design judicious
stopping criteria and employ efficient solvers for the three main sub-problems
of the algorithm, including an efficient GPU implementation that alleviates the
main bottleneck for large datasets. The performance of the method is evaluated
on three examples: an artificial "toy" dataset and two whole-cortex instances
using data from the Allen Mouse Brain Connectivity Atlas. We find that the
method is significantly faster than previous methods and that moderate ranks
offer good approximation. This speedup allows for the estimation of
increasingly large-scale connectomes across taxa as these data become available
from tracing experiments. The data and code are available online.
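The flavour of a greedy rank-one scheme can be sketched on a small symmetric positive definite matrix equation A X + X B = C, used here as a stand-in for the regression problem. The alternating inner solve and all names are illustrative assumptions, not the paper's implementation.

```python
# Sketch: approximate the solution of A X + X B = C (A, B SPD) by greedy
# rank-one corrections.  Each correction u v^T minimizes the energy functional
# of the residual equation, computed by a few alternating (ALS) sweeps.
import numpy as np

def lap1d(n):
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def greedy_low_rank(A, B, C, max_rank=10, sweeps=15):
    n, m = C.shape
    X = np.zeros((n, m))
    for _ in range(max_rank):
        R = C - A @ X - X @ B                    # residual of the current iterate
        u, v = np.ones(n), np.ones(m)
        for _ in range(sweeps):                  # alternating energy minimization
            v = v / np.linalg.norm(v)
            u = np.linalg.solve(A + (v @ B @ v) * np.eye(n), R @ v)
            u = u / np.linalg.norm(u)
            v = np.linalg.solve(B + (u @ A @ u) * np.eye(m), R.T @ u)
        X = X + np.outer(u, v)                   # rank-one update
    return X

rng = np.random.default_rng(2)
n, m = 15, 12
A = lap1d(n) + np.eye(n)
B = lap1d(m) + np.eye(m)
C = rng.standard_normal((n, 3)) @ rng.standard_normal((3, m))   # low-rank rhs
X = greedy_low_rank(A, B, C, max_rank=10)
res = np.linalg.norm(A @ X + X @ B - C) / np.linalg.norm(C)
```

The two inner solves are the kind of sub-problems the abstract refers to; at scale they are solved iteratively, with the residual application being the GPU-friendly bottleneck.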
Krylov subspace techniques for model reduction and the solution of linear matrix equations
This thesis focuses on the model reduction of linear systems and the solution of large
scale linear matrix equations using computationally efficient Krylov subspace techniques.
Most approaches for model reduction involve the computation and factorization of large
matrices. However, Krylov subspace techniques have the advantage that they involve only
matrix-vector multiplications in the large dimension, which makes them a better choice
for model reduction of large scale systems. The standard Arnoldi/Lanczos algorithms are
widely used Krylov techniques that compute orthogonal bases of Krylov subspaces and, by
using a projection onto the Krylov subspace, produce a reduced order model that
interpolates the actual system and its derivatives at infinity. An extension is the rational
Arnoldi/Lanczos algorithm, which computes orthogonal bases of the union of Krylov
subspaces and results in a reduced order model that interpolates the actual system and
its derivatives at a predefined set of interpolation points. This thesis concentrates on the
rational Krylov method for model reduction.
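The projection process described above can be sketched for the standard Arnoldi case. This is a toy example with made-up data: the one-sided projection matches the first m Markov parameters of the system, i.e. the moments at infinity.

```python
# Sketch: Arnoldi builds an orthonormal basis V of K_m(A, b); projecting
# (A, b, c) onto this basis gives a reduced model matching c^T A^k b for k < m.
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi with modified Gram-Schmidt; columns of V span K_m(A, b)."""
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                   # orthogonalize against previous basis
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m]

rng = np.random.default_rng(3)
n, m = 30, 5
A = rng.standard_normal((n, n)) / np.sqrt(n)     # scaled so powers stay tame
b, c = rng.standard_normal(n), rng.standard_normal(n)
V = arnoldi(A, b, m)
Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c       # one-sided (Galerkin) projection

# first m Markov parameters of the full and the reduced model
mp  = [c  @ np.linalg.matrix_power(A,  k) @ b  for k in range(m)]
mpr = [cr @ np.linalg.matrix_power(Ar, k) @ br for k in range(m)]
```

The rational variant replaces the powers of A by shifted resolvents `(sigma_i * I - A)^{-1} b`, so the reduced model interpolates at the chosen finite shifts instead of at infinity.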
In the rational Krylov method an important issue is the selection of interpolation points
for which various techniques are available in the literature with different selection criteria.
One of these techniques selects the interpolation points such that the approximation
satisfies the necessary conditions for H2 optimal approximation. However, it is possible
to have more than one approximation for which the necessary optimality conditions are
satisfied. In this thesis, some conditions on the interpolation points are derived, that
enable us to compute all approximations that satisfy the necessary optimality conditions
and hence identify the global minimizer to the H2 optimal model reduction problem.
It is shown that for an H2 optimal approximation that interpolates at m interpolation
points, the interpolation points are the simultaneous solution of m multivariate polynomial
equations in m unknowns. For a first order approximation, this condition reduces to
solving a linear system; for a second order approximation, it requires the simultaneous
solution of two bivariate polynomial equations.
These two cases are analyzed in detail and it is shown that a global minimizer to the
H2 optimal model reduction problem can be identified. Furthermore, a computationally
efficient iterative algorithm is proposed that converges to a local minimizer.
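The thesis's own algorithm is not reproduced here. As an illustration of this class of fixed-point methods, the well-known IRKA iteration (Gugercin, Antoulas and Beattie) repeatedly updates the interpolation points to the mirror images of the reduced-order poles until the interpolation-based necessary conditions hold. Below is a sketch for a symmetric single-input single-output system; the test matrices and all names are assumptions.

```python
# Sketch: IRKA-style fixed-point iteration.  At convergence the shifts equal
# the mirrored reduced-order poles, a necessary condition for H2 optimality.
import numpy as np

def lap1d(n):
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def irka_sym(A, b, m, iters=300, tol=1e-11):
    """One-sided IRKA for a symmetric system with c = b."""
    n = A.shape[0]
    sigma = np.linspace(0.5, 5.0, m)                 # initial positive shifts
    for _ in range(iters):
        V = np.column_stack(
            [np.linalg.solve(s * np.eye(n) - A, b) for s in sigma])
        V, _ = np.linalg.qr(V)                       # orthonormal rational Krylov basis
        Ar = V.T @ A @ V                             # Galerkin projection
        new = np.sort(-np.linalg.eigvalsh(Ar))       # mirrored reduced-order poles
        if np.max(np.abs(new - np.sort(sigma))) <= tol * np.max(np.abs(new)):
            break                                    # fixed point reached
        sigma = new
    return Ar, V.T @ b, np.sort(sigma)

A = -(lap1d(30) + np.eye(30))                        # symmetric, stable
b = np.cos(np.arange(30.0))
Ar, br, sigma = irka_sym(A, b, m=4)
```

By construction of the rational Krylov basis, the reduced transfer function interpolates the full one at the final shifts; the fixed-point property is what connects those shifts to the necessary optimality conditions discussed above.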
In addition to the effect of interpolation points on the accuracy of the rational interpolating
approximation, an arbitrary choice of interpolation points may result in a reduced
order model that loses useful properties of the actual system, such as stability, passivity,
minimum-phase and bounded-real character, as well as its structure. It has recently been
shown in the literature that rational interpolating approximations can be parameterized
in terms of a free low dimensional parameter in order to preserve the stability of the
actual system in the reduced order approximation. This idea is extended in this thesis
to preserve other properties and combinations of them. The concept of parameterization
is also applied to the minimal residual method, the two-sided rational Arnoldi method,
and H2 optimal approximation in order to improve the accuracy of the interpolating
approximation.
The rational Krylov method has also been used in the literature to compute low rank
approximate solutions of the Sylvester and Lyapunov equations, which are useful for
model reduction. The approach involves the computation of two sets of basis vectors, in
which each new vector is orthogonalized against all previous vectors. This orthogonalization
becomes computationally expensive and requires high storage capacity as the number of
basis vectors increases. In this thesis, a restart scheme is proposed which restarts without
requiring the new vectors to be orthogonal to the previous ones. Instead, a set of
two new orthogonal basis vectors is computed. This reduces the computational burden
of orthogonalization and the storage requirements. It is shown that in the case
of Lyapunov equations, the approximate solution obtained through the restart scheme
converges monotonically to the actual solution.
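The underlying projection idea can be sketched, without the restart, as a plain Galerkin approximation of the Lyapunov equation A X + X A^T = -b b^T from a Krylov subspace. The test problem is a stand-in and the code is not the thesis's scheme.

```python
# Sketch: low-rank Galerkin approximation X_m = V Y V^T of a Lyapunov equation,
# where V is an orthonormal Krylov basis and Y solves the small projected equation.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lap1d(n):
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def krylov_lyapunov(A, b, m):
    """Approximate the solution of A X + X A^T = -b b^T from K_m(A, b)."""
    n = b.size
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):                        # Arnoldi basis, full orthogonalization
        w = A @ V[:, j - 1]
        w = w - V[:, :j] @ (V[:, :j].T @ w)
        V[:, j] = w / np.linalg.norm(w)
    Am, bm = V.T @ A @ V, V.T @ b
    Y = solve_continuous_lyapunov(Am, -np.outer(bm, bm))   # small dense solve
    return V @ Y @ V.T

n = 40
A = -(lap1d(n) + np.eye(n))                      # stable, symmetric
b = np.sin(np.arange(n) * 0.3)
X_exact = solve_continuous_lyapunov(A, -np.outer(b, b))
err = [np.linalg.norm(krylov_lyapunov(A, b, m) - X_exact) for m in (4, 8, 12)]
```

The cost the thesis targets is visible in the inner loop: every new vector is orthogonalized against all previous ones, which is exactly the work and storage that a restart scheme avoids.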