High-order adaptive time stepping for vesicle suspensions with viscosity contrast
We construct a high-order adaptive time stepping scheme for vesicle
suspensions with viscosity contrast. The high-order accuracy is achieved using
a spectral deferred correction (SDC) method, and adaptivity is achieved by
estimating the local truncation error from the numerical error in quantities that
are physically conserved. Numerical examples demonstrate that our method can handle
suspensions with vesicles that are tumbling, tank-treading, or both. Moreover,
we demonstrate that a user-prescribed tolerance can be automatically achieved
for simulations with long time horizons.
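As a rough illustration of the adaptivity described above (a minimal sketch, not the authors' implementation: the stepper, the invariants, and the controller constants are all assumptions), the step size can be rescaled from the drift of quantities that the continuous dynamics keep constant, such as vesicle area and arc length:

import numpy as np

def adaptive_march(y, t_final, dt, tol, step, invariants, order=2, safety=0.9):
    """March to t_final, choosing dt so the invariant drift per step stays below tol.

    step(y, dt) advances the state by one (high-order) time step and
    invariants(y) returns the physically conserved quantities; both are
    hypothetical callables supplied by the application.
    """
    t = 0.0
    while t < t_final:
        dt = min(dt, t_final - t)
        y_new = step(y, dt)
        # Local error estimate: relative drift of the conserved quantities.
        err = np.max(np.abs(invariants(y_new) - invariants(y)) /
                     np.abs(invariants(y)))
        if err <= tol:                       # accept the step
            y, t = y_new, t + dt
        # Standard controller: rescale dt from the error/tolerance ratio.
        dt *= safety * (tol / max(err, 1e-16)) ** (1.0 / (order + 1))
    return y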
An inexact Newton-Krylov algorithm for constrained diffeomorphic image registration
We propose numerical algorithms for solving large deformation diffeomorphic
image registration problems. We formulate the nonrigid image registration
problem as a problem of optimal control. This leads to an infinite-dimensional
partial differential equation (PDE) constrained optimization problem.
The PDE constraint consists, in its simplest form, of a hyperbolic transport
equation for the evolution of the image intensity. The control variable is the
velocity field. Tikhonov regularization on the control ensures well-posedness.
We consider standard smoothness regularization based on $H^1$- or
$H^2$-seminorms. We augment this regularization scheme with a constraint on the
divergence of the velocity field rendering the deformation incompressible and
thus ensuring that the determinant of the deformation gradient is equal to one,
up to the numerical error.
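In schematic form (our notation, not quoted from the paper; the squared-$L^2$ mismatch, the unit time horizon, and the symbols $m_T$, $m_R$, $\beta$ are assumptions made for illustration), the problem described above reads

\[
\min_{v}\;\; \frac{1}{2}\int_\Omega \big(m(x,1) - m_R(x)\big)^2\,\mathrm{d}x
\;+\; \frac{\beta}{2}\,\|v\|_{\mathcal{V}}^2
\quad\text{subject to}\quad
\partial_t m + v\cdot\nabla m = 0,\quad m(x,0) = m_T(x),
\]

where $m_T$ and $m_R$ are the template and reference images, $m$ is the transported image intensity, $\beta > 0$ is the regularization parameter, and $\|\cdot\|_{\mathcal{V}}$ is the chosen $H^1$- or $H^2$-seminorm; the optional constraint $\nabla\cdot v = 0$ enforces incompressibility of the deformation.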
We use a Fourier pseudospectral discretization in space and a Chebyshev
pseudospectral discretization in time. We use a preconditioned, globalized,
matrix-free, inexact Newton-Krylov method for numerical optimization. A
parameter continuation is designed to estimate an optimal regularization
parameter. Regularity is ensured by controlling the geometric properties of the
deformation field. Overall, we arrive at a black-box solver. We study spectral
properties of the Hessian, grid convergence, numerical accuracy, computational
efficiency, and deformation regularity of our scheme. We compare the designed
Newton-Krylov methods with a globalized preconditioned gradient descent. We
study the influence of a varying number of unknowns in time.
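To make the outer/inner structure of such a solver concrete, the following is a minimal matrix-free inexact Newton-Krylov (Newton-CG) sketch. It illustrates the general idea only, not the registration solver itself; grad and hessvec are hypothetical callables standing in for the reduced gradient and Hessian-vector product, and the globalization is a crude stand-in for a proper line search.

import numpy as np

def newton_cg(x, grad, hessvec, newton_tol=1e-6, max_newton=50):
    for _ in range(max_newton):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < newton_tol:
            break
        # Inexact (truncated) CG solve of H p = -g; the forcing term ties the
        # inner tolerance to the outer residual.
        eta = min(0.5, np.sqrt(gnorm))
        p = np.zeros_like(x)
        r = -g.copy()
        d = r.copy()
        for _ in range(x.size):
            Hd = hessvec(x, d)
            curv = d @ Hd
            if curv <= 0.0:              # negative curvature: fall back to steepest descent
                if not p.any():
                    p = -g
                break
            alpha = (r @ r) / curv
            p = p + alpha * d
            r_new = r - alpha * Hd
            if np.linalg.norm(r_new) <= eta * gnorm:
                break
            d = r_new + ((r_new @ r_new) / (r @ r)) * d
            r = r_new
        # Globalization: crude backtracking on the gradient norm (stand-in for
        # an Armijo line search on the objective).
        step = 1.0
        while np.linalg.norm(grad(x + step * p)) > gnorm and step > 1e-8:
            step *= 0.5
        x = x + step * p
    return x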
The reported results demonstrate excellent numerical accuracy, guaranteed
local deformation regularity, and computational efficiency with an optional
control on local mass conservation. The Newton-Krylov methods clearly
outperform the Picard method if high accuracy of the inversion is required.
Comment: 32 pages; 10 figures; 9 tables.
Far-Field Compression for Fast Kernel Summation Methods in High Dimensions
We consider fast kernel summations in high dimensions: given a large set of
points in $d$ dimensions (with $d \gg 3$) and a pair-potential function (the
{\em kernel} function), we compute a weighted sum of all pairwise kernel
interactions for each point in the set. Direct summation is equivalent to a
(dense) matrix-vector multiplication and scales quadratically with the number
of points. Fast kernel summation algorithms reduce this cost to log-linear or
linear complexity.
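For reference, the quadratic-cost direct sum is just a dense matrix-vector product; the points, weights, and Gaussian bandwidth in this sketch are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)
N, d, h = 2000, 64, 1.0                         # N points in d dimensions
X = rng.standard_normal((N, d))                 # point set
w = rng.standard_normal(N)                      # weights

# Pairwise squared distances, dense N-by-N kernel matrix, and direct sum:
norms = np.sum(X**2, axis=1)
sq_dists = norms[:, None] + norms[None, :] - 2.0 * X @ X.T
K = np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * h**2))   # Gaussian kernel
u = K @ w                                       # O(N^2) work and storage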
Treecodes and Fast Multipole Methods (FMMs) deliver tremendous speedups by
constructing approximate representations of interactions of points that are far
from each other. In algebraic terms, these representations correspond to
low-rank approximations of blocks of the overall interaction matrix. Existing
approaches require an excessive number of kernel evaluations with increasing
dimension $d$ and number of points in the dataset.
To address this issue, we use a randomized algebraic approach in which we
first sample the rows of a block and then construct its approximate, low-rank
interpolative decomposition. We examine the feasibility of this approach
theoretically and experimentally. We provide a new theoretical result showing a
tighter bound on the reconstruction error from uniformly sampling rows than the
existing state-of-the-art. We demonstrate that our sampling approach is
competitive with existing (but prohibitively expensive) methods from the
literature. We also construct kernel matrices for the Laplacian, Gaussian, and
polynomial kernels -- all commonly used in physics and data analysis. We
explore the numerical properties of blocks of these matrices, and show that
they are amenable to our approach. Depending on the data set, our randomized
algorithm can successfully compute low-rank approximations in high dimensions.
We report results for data sets with ambient dimensions from four to 1,000.
Comment: 43 pages, 21 figures.
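A minimal sketch of the sampling idea described above, under our own simplifications (uniform row sampling followed by a column-pivoted QR on the sample; the authors' interpolative decomposition machinery is not reproduced):

import numpy as np
from scipy.linalg import qr

def sampled_column_id(A_block, n_samples, rank, rng):
    # Uniformly sample rows of the block (the only kernel evaluations needed).
    rows = rng.choice(A_block.shape[0], size=n_samples, replace=False)
    S = A_block[rows, :]
    # Column-pivoted QR on the sample selects the "skeleton" columns.
    Q, R, piv = qr(S, mode='economic', pivoting=True)
    skel = piv[:rank]
    # Interpolation matrix: express all columns in terms of the skeleton,
    # using only the sampled rows.
    T = np.linalg.lstsq(S[:, skel], S, rcond=None)[0]
    return skel, T

# Usage (hypothetical far-field block K_block and target rank):
#   skel, T = sampled_column_id(K_block, 4 * rank, rank, np.random.default_rng(0))
#   K_block is then approximated by K_block[:, skel] @ T.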