A Self-learning Algebraic Multigrid Method for Extremal Singular Triplets and Eigenpairs
A self-learning algebraic multigrid method for dominant and minimal singular
triplets and eigenpairs is described. The method consists of two multilevel
phases. In the first, multiplicative phase (setup phase), tentative singular
triplets are calculated along with a multigrid hierarchy of interpolation
operators that approximately fit the tentative singular vectors in a collective
and self-learning manner, using multiplicative update formulas. In the second,
additive phase (solve phase), the tentative singular triplets are improved up
to the desired accuracy by using an additive correction scheme with fixed
interpolation operators, combined with a Ritz update. A suitable generalization
of the singular value decomposition is formulated that applies to the coarse
levels of the multilevel cycles. The proposed algorithm combines and extends
two existing multigrid approaches for symmetric positive definite eigenvalue
problems to the case of dominant and minimal singular triplets. Numerical tests
on model problems from different areas show that the algorithm converges to
high accuracy in a modest number of iterations, and is flexible enough to deal
with a variety of problems due to its self-learning properties.
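As a minimal illustration of the Ritz-update ingredient mentioned in the abstract, the Python sketch below projects a matrix onto approximate left and right subspaces and extracts improved singular triplets from the SVD of the small projected matrix. The function name, dimensions, and random test data are assumptions for illustration only; the multilevel setup phase and the additive correction scheme are not reproduced here.

import numpy as np

def ritz_update_svd(A, U, V):
    """Rayleigh-Ritz style update for approximate singular triplets.

    Given a matrix A and subspaces spanned by the columns of U (left)
    and V (right), project A onto those subspaces, take the SVD of the
    small projected matrix, and lift the result back to the full space.
    """
    # Orthonormalize the current subspace bases.
    Qu, _ = np.linalg.qr(U)
    Qv, _ = np.linalg.qr(V)
    # Small projected problem: B = Qu^T A Qv.
    B = Qu.T @ A @ Qv
    Ub, s, Vbt = np.linalg.svd(B)
    # Lift the small singular vectors back to the full space.
    return Qu @ Ub, s, Qv @ Vbt.T

# Tiny usage example on a random matrix with rough starting subspaces.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 150))
U0 = rng.standard_normal((200, 5))
V0 = rng.standard_normal((150, 5))
U1, s1, V1 = ritz_update_svd(A, U0, V0)
print("approximate singular values of the projected problem:", s1)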
A Nonlinear GMRES Optimization Algorithm for Canonical Tensor Decomposition
A new algorithm is presented for computing a canonical rank-R tensor
approximation that has minimal distance to a given tensor in the Frobenius
norm, where the canonical rank-R tensor consists of the sum of R rank-one
components. Each iteration of the method consists of three steps. In the first
step, a tentative new iterate is generated by a stand-alone one-step process,
for which we use alternating least squares (ALS). In the second step, an
accelerated iterate is generated by a nonlinear generalized minimal residual
(GMRES) approach, recombining previous iterates in an optimal way, and
essentially using the stand-alone one-step process as a preconditioner. In
particular, the nonlinear extension of GMRES is used that was proposed by
Washio and Oosterlee in [ETNA Vol. 15 (2003), pp. 165-185] for nonlinear
partial differential equation problems. In the third step, a line search is
performed for globalization. The resulting nonlinear GMRES (N-GMRES)
optimization algorithm is applied to dense and sparse tensor decomposition test
problems. The numerical tests show that ALS accelerated by N-GMRES may
significantly outperform both stand-alone ALS and a standard nonlinear
conjugate gradient optimization method, especially when highly accurate
stationary points are desired for difficult problems. The proposed N-GMRES
optimization algorithm is based on general concepts and may be applied to other
nonlinear optimization problems.
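The three-step structure can be sketched in code. The Python sketch below is a hedged, generic illustration: it uses a simple gradient-descent step in place of ALS, a toy quadratic instead of a tensor decomposition problem, and a crude backtracking line search for the globalization step. All names and parameters are illustrative assumptions, not the paper's implementation.

import numpy as np

def ngmres_sketch(f, grad, x0, one_step, window=5, iters=50):
    """Minimal sketch of a three-step N-GMRES-style iteration."""
    xs, gs = [x0], [grad(x0)]
    x = x0
    for _ in range(iters):
        # Step 1: tentative iterate from the stand-alone one-step process.
        x_bar = one_step(x)
        g_bar = grad(x_bar)

        # Step 2: nonlinear GMRES acceleration -- recombine previous
        # iterates by minimizing a linearized residual (gradient) norm.
        prev_x = xs[-window:]
        prev_g = gs[-window:]
        # Columns approximate a Jacobian action: J (x_bar - x_i) ~ g_bar - g_i.
        C = np.column_stack([g_bar - gi for gi in prev_g])
        beta, *_ = np.linalg.lstsq(C, -g_bar, rcond=None)
        x_acc = x_bar + sum(b * (x_bar - xi) for b, xi in zip(beta, prev_x))

        # Step 3: simple backtracking line search from x_bar toward x_acc
        # (a stand-in for the globalization step).
        d = x_acc - x_bar
        t = 1.0
        while f(x_bar + t * d) > f(x_bar) and t > 1e-8:
            t *= 0.5
        x = x_bar + t * d

        xs.append(x)
        gs.append(grad(x))
    return x

# Illustrative quadratic test problem (not a tensor decomposition).
A = np.diag(np.linspace(1.0, 100.0, 20))
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
one_step = lambda x: x - 0.01 * grad(x)   # plays the role of ALS here
x_final = ngmres_sketch(f, grad, np.ones(20), one_step)
print("final objective:", f(x_final))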
The influence of societal individualism on a century of tobacco use: modelling the prevalence of smoking
Smoking of tobacco is predicted to cause approximately six million deaths
worldwide in 2014. Responding effectively to this epidemic requires a thorough
understanding of how smoking behaviour is transmitted and modified. Here, we
present a new mathematical model of the social dynamics that cause cigarette
smoking to spread in a population. Our model predicts that more individualistic
societies will show faster adoption and cessation of smoking. Evidence from a
new century-long composite data set on smoking prevalence in 25 countries
supports the model, with direct implications for public health interventions
around the world. Our results suggest that differences in culture between
societies can measurably affect the temporal dynamics of a social spreading
process, and that these effects can be understood via a quantitative
mathematical model matched to observations.
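For illustration only, the Python sketch below integrates a generic three-compartment social-contagion model (never-smokers, smokers, former smokers) with a hypothetical "individualism" parameter that shifts transition rates from peer-driven toward individually driven. It is not the authors' model, equations, or data; it merely shows how such a parameter can speed up both adoption and cessation in a compartmental model.

import numpy as np
from scipy.integrate import solve_ivp

def smoking_rhs(t, y, individualism, adopt=0.8, quit=0.3):
    # y = (never-smokers n, smokers s, former smokers q), as fractions.
    n, s, q = y
    # Higher individualism makes rates depend less on peer prevalence.
    adoption = adopt * ((1 - individualism) * s + individualism) * n
    cessation = quit * ((1 - individualism) * (n + q) + individualism) * s
    return [-adoption, adoption - cessation, cessation]

for a in (0.2, 0.8):  # less vs. more individualistic society
    sol = solve_ivp(smoking_rhs, (0, 100), [0.99, 0.01, 0.0], args=(a,))
    peak = sol.y[1].max()
    print(f"individualism={a}: peak smoking prevalence ~ {peak:.2f}")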
Linear Asymptotic Convergence of Anderson Acceleration: Fixed-Point Analysis
We study the asymptotic convergence of AA($m$), i.e., Anderson acceleration
with window size $m$ for accelerating fixed-point methods $x_{k+1}=q(x_k)$,
$x_k \in \mathbb{R}^n$. Convergence acceleration by AA($m$) has been widely
observed but is not well understood. We consider the case where the fixed-point
iteration function $q(x)$ is differentiable and the convergence of the
fixed-point method itself is root-linear. We identify numerically several
conspicuous properties of AA($m$) convergence: First, AA($m$) sequences
$\{x_k\}$ converge root-linearly, but the root-linear convergence factor
depends strongly on the initial condition. Second, the AA($m$) acceleration
coefficients $\beta^{(k)}$ do not converge but oscillate as $\{x_k\}$ converges
to $x^*$. To shed light on these observations, we write the AA($m$) iteration
as an augmented fixed-point iteration $z_{k+1}=\Psi(z_k)$,
$z_k \in \mathbb{R}^{n(m+1)}$, and analyze the continuity and differentiability
properties of $\Psi(z)$ and $\beta(z)$. We find that the vector of acceleration
coefficients $\beta(z)$ is not continuous at the fixed point $z^*$. However, we
show that, despite the discontinuity of $\beta(z)$, the iteration function
$\Psi(z)$ is Lipschitz continuous and directionally differentiable at $z^*$ for
AA(1), and we generalize this to AA($m$) with $m>1$ for most cases.
Furthermore, we find that $\Psi(z)$ is not differentiable at $z^*$. We then
discuss how these theoretical findings relate to the observed convergence
behaviour of AA($m$). The discontinuity of $\beta(z)$ at $z^*$ allows the
coefficients $\beta^{(k)}$ to oscillate as $\{x_k\}$ converges to $x^*$, and
the non-differentiability of $\Psi(z)$ allows AA($m$) sequences to converge
with root-linear convergence factors that strongly depend on the initial
condition. Additional numerical results illustrate our findings.
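To make the setting concrete, the Python sketch below implements a standard windowed Anderson acceleration AA(m) for a fixed-point map x_{k+1} = q(x_k) and applies it to a hypothetical contractive linear test map (not one of the paper's test problems). The residual-difference formulation used here is one common variant, not necessarily the exact formulation analyzed in the paper.

import numpy as np

def anderson(q, x0, m=2, iters=50, tol=1e-12):
    """Anderson acceleration AA(m) for the fixed-point iteration x <- q(x).

    At each step, with residuals f(x) = q(x) - x, solve the least-squares
    problem  min_gamma || f_k - dF gamma ||  over the last m residual
    differences and set the new iterate to q(x_k) - dQ gamma, where dQ
    holds the corresponding differences of q.
    """
    xs = [np.atleast_1d(np.asarray(x0, dtype=float))]
    qs = [q(xs[0])]
    fs = [qs[0] - xs[0]]
    for k in range(iters):
        mk = min(m, k)
        if mk == 0:
            x_new = qs[-1]                      # plain fixed-point step
        else:
            dF = np.column_stack([fs[-j] - fs[-j - 1] for j in range(mk, 0, -1)])
            dQ = np.column_stack([qs[-j] - qs[-j - 1] for j in range(mk, 0, -1)])
            gamma, *_ = np.linalg.lstsq(dF, fs[-1], rcond=None)
            x_new = qs[-1] - dQ @ gamma
        xs.append(x_new)
        qs.append(q(x_new))
        fs.append(qs[-1] - x_new)
        if np.linalg.norm(fs[-1]) < tol:
            break
    return xs

# Hypothetical contractive test map: x* solves (I - M) x* = b.
M = np.array([[0.9, 0.1], [0.0, 0.8]])
b = np.array([1.0, 2.0])
q = lambda x: M @ x + b
iterates = anderson(q, np.zeros(2), m=1)
residuals = [np.linalg.norm(q(x) - x) for x in iterates]
print("first residual norms:", residuals[:5], "last:", residuals[-1])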