342 research outputs found
Fast Gradient Methods for Uniformly Convex and Weakly Smooth Problems
In this paper, acceleration of gradient methods for convex optimization
problems with weak levels of convexity and smoothness is considered. Starting
from the universal fast gradient method which was designed to be an optimal
method for weakly smooth problems whose gradients are H\"older continuous, its
momentum is modified appropriately so that it can also accommodate uniformly
convex and weakly smooth problems. Unlike existing works, the fast
gradient methods proposed in this paper do not rely on a restarting technique;
instead, they use momentum terms suitably designed to reflect both the uniform convexity
and the weak smoothness information of the target energy function. Both
theoretical and numerical results supporting the superiority of the proposed
methods are presented.
Comment: 28 pages, 8 figures
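The acceleration described above can be illustrated with a generic Nesterov-type fast gradient iteration: a gradient step taken at an extrapolated point, followed by a momentum update. This is only a minimal sketch, not the paper's method: the fixed step size and the classical k/(k+3) momentum schedule below are standard choices for smooth convex problems, whereas the paper designs these quantities from the Hölder-smoothness and uniform-convexity parameters.

```python
import numpy as np

def fast_gradient(grad, x0, step, momentum, n_iters=100):
    """Generic fast gradient method: a gradient step at an extrapolated
    point y, followed by a momentum (extrapolation) update."""
    x_prev = np.asarray(x0, dtype=float)
    y = x_prev.copy()
    for k in range(n_iters):
        x = y - step * grad(y)               # gradient step at extrapolated point
        y = x + momentum(k) * (x - x_prev)   # momentum step with weight beta_k
        x_prev = x
    return x_prev

# Minimize f(x) = 0.5 * ||x||^2 (so grad f(x) = x) with classical momentum.
x_min = fast_gradient(grad=lambda x: x,
                      x0=[5.0, -3.0],
                      step=0.5,
                      momentum=lambda k: k / (k + 3))
```

Swapping in a momentum schedule built from the uniform-convexity and weak-smoothness parameters, as the paper does, changes the convergence rate without any restarting.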
An Optimized Dynamic Mode Decomposition Model Robust to Multiplicative Noise
Dynamic mode decomposition (DMD) is an efficient tool for decomposing
spatio-temporal data into a set of low-dimensional modes, yielding the
oscillation frequencies and the growth rates of physically significant modes.
In this paper, we propose a novel DMD model that can be used for dynamical
systems affected by multiplicative noise. We first derive a maximum a
posteriori (MAP) estimator for the data-based model decomposition of a linear
dynamical system corrupted by certain multiplicative noise. Applying penalty
relaxation to the MAP estimator, we obtain the proposed DMD model whose
epigraphical limits are the MAP estimator and the conventional optimized DMD
model. We also propose an efficient alternating gradient descent method for
solving the proposed DMD model, and analyze its convergence behavior. The
proposed model is demonstrated on both synthetic data and numerically
generated one-dimensional combustor data, and is shown to have superior
reconstruction properties compared to state-of-the-art DMD models. Considering
that multiplicative noise is ubiquitous in numerous dynamical systems, the
proposed DMD model opens up new possibilities for accurate data-based modal
decomposition.
Comment: 35 pages, 10 figures
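For context, one limit of the proposed model is the conventional optimized DMD; a simpler baseline still is plain exact DMD, which fits a low-rank linear operator to snapshot pairs via a truncated SVD. The sketch below is this standard exact DMD, not the paper's noise-robust MAP/penalty formulation; the function name and interface are illustrative.

```python
import numpy as np

def exact_dmd(X, Y, r):
    """Exact DMD: fit a rank-r linear model Y ~ A X from snapshot pairs.

    X, Y : (n, m) snapshot matrices, with Y[:, k] the successor of X[:, k]
    r    : truncation rank
    Returns the DMD eigenvalues (frequencies/growth rates) and modes.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
    A_tilde = U.conj().T @ Y @ V / s        # projection of A onto the POD basis
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ V / s) @ W                 # exact DMD modes
    return eigvals, modes
```

On noise-free data from a linear system, the eigenvalues of the underlying operator are recovered exactly; multiplicative noise biases this estimate, which is what motivates the MAP-based model above.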
On the linear convergence of additive Schwarz methods for the p-Laplacian
We consider additive Schwarz methods for boundary value problems involving
the p-Laplacian. While the existing theoretical estimates for the convergence
rate of additive Schwarz methods for the p-Laplacian are sublinear, the
actual convergence rate observed in numerical experiments is linear. In this
paper, we bridge the gap between these theoretical and numerical results by
analyzing the linear convergence rate of additive Schwarz methods for the
p-Laplacian. In order to estimate the linear convergence rate of the methods,
we develop two essential components. Firstly, we present a new abstract
convergence theory of additive Schwarz methods written in terms of a
quasi-norm. This quasi-norm exhibits behavior similar to the Bregman distance
of the convex energy functional associated with the problem. Secondly, we provide
a quasi-norm version of the Poincaré–Friedrichs inequality, which is
essential for deriving a quasi-norm stable decomposition in a two-level domain
decomposition setting.
Comment: 23 pages, 2 figures
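As a concrete reference point, in the linear case p = 2 (the ordinary Laplacian) the additive Schwarz iteration reduces to summing damped exact local solves of the residual equation over overlapping subdomains. The sketch below covers only this linear special case; the subdomain layout and damping parameter are illustrative, and the nonlinear p-Laplacian setting analyzed in the paper replaces the local solves with local convex minimizations.

```python
import numpy as np

def additive_schwarz(A, b, subdomains, n_iters=300, theta=0.5):
    """One-level additive Schwarz iteration for the linear system A u = b.

    subdomains : list of index arrays; each local residual problem is
                 solved exactly on its subdomain
    theta      : damping applied to the summed local corrections
    """
    u = np.zeros_like(b, dtype=float)
    for _ in range(n_iters):
        r = b - A @ u
        correction = np.zeros_like(u)
        for idx in subdomains:
            A_loc = A[np.ix_(idx, idx)]                  # restricted operator
            correction[idx] += np.linalg.solve(A_loc, r[idx])
        u += theta * correction                          # damped additive update
    return u

# 1D Laplacian (p = 2) on 20 interior nodes with two overlapping subdomains.
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
subdomains = [np.arange(0, 14), np.arange(6, 20)]
u = additive_schwarz(A, b, subdomains)
```

With generous overlap the error contracts by a fixed factor per sweep; this is the linear behavior that the paper establishes, in quasi-norm form, for the genuinely nonlinear p-Laplacian.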