Local Linear Convergence Analysis of Primal-Dual Splitting Methods
In this paper, we study the local linear convergence properties of a
versatile class of Primal-Dual splitting methods for minimizing composite
non-smooth convex optimization problems. Under the assumption that the
non-smooth components of the problem are partly smooth relative to smooth
manifolds, we present a unified local convergence analysis framework for these
methods. More precisely, in our framework we first show that (i) the sequences
generated by Primal-Dual splitting methods identify a pair of primal and dual
smooth manifolds in a finite number of iterations, and then (ii) enter a local
linear convergence regime, which is characterized based on the structure of the
underlying active smooth manifolds. We also show how our results for
Primal-Dual splitting can be specialized to cover existing ones on
Forward-Backward splitting and Douglas-Rachford splitting/ADMM (alternating
direction method of multipliers). Moreover, based on the obtained local
convergence analysis, several practical acceleration techniques are
discussed. To exemplify the usefulness of these results, we consider
several concrete numerical experiments arising from fields including
signal/image processing, inverse problems, and machine learning. The
experiments not only verify the local linear convergence behaviour of
Primal-Dual splitting methods but also confirm the insights on how to
accelerate them in practice.
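The finite-iteration identification of an active manifold described above can be observed directly on a small sparse recovery problem. The following sketch assumes a Chambolle-Pock-style primal-dual iteration applied to min_x 0.5*||Kx - b||^2 + lam*||x||_1 (an illustrative instance, not one of the paper's algorithms or experiments); the support of x, i.e. the active manifold of the l1 term, typically freezes after finitely many iterations, well before the iterates have fully converged:

```python
import numpy as np

# Illustrative sparse recovery instance (not from the paper)
rng = np.random.default_rng(0)
m, n, lam = 30, 60, 0.5
K = rng.standard_normal((m, n))
b = rng.standard_normal(m)

L = np.linalg.norm(K, 2)      # spectral norm of K
tau = sigma = 0.9 / L         # step sizes with tau*sigma*||K||^2 < 1
theta = 1.0                   # over-relaxation parameter

x = np.zeros(n)
y = np.zeros(m)
x_bar = x.copy()
for it in range(500):
    # dual step: prox of sigma*f* for f = 0.5*||. - b||^2
    y = (y + sigma * (K @ x_bar) - sigma * b) / (1 + sigma)
    # primal step: prox of tau*lam*||.||_1 is soft thresholding
    x_new = x - tau * (K.T @ y)
    x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0.0)
    # extrapolation step
    x_bar = x_new + theta * (x_new - x)
    x = x_new

# the support typically stops changing after finitely many iterations:
# this is the identified primal active manifold
support = np.flatnonzero(x)
```

Printing `support` every few iterations shows it stabilizing early, after which the error contracts at a linear rate, matching the two-phase behaviour the abstract describes.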
On I-acceleration convergence of sequences of fuzzy real numbers
In this article we introduce the notion of ideal acceleration convergence of sequences of fuzzy real numbers. We prove a decomposition theorem for ideal acceleration convergence of sequences as well as for subsequence transformations, and study different types of acceleration convergence of fuzzy real-valued sequences.
On Vector Sequence Transforms and Acceleration Techniques
This dissertation is devoted to the acceleration of convergence of vector sequences. This means to produce a replacement sequence from the original sequence with higher rate of convergence.
It is assumed that the sequence is generated from a linear matrix iteration x_{i+1} = Gx_i + k, where G is an n x n square matrix and x_{i+1}, x_i, and k are n x 1 vectors. Acceleration of convergence is obtained when we are able to resolve approximations to low-dimensional invariant subspaces of G which contain large components of the error. When this occurs, simple weighted averages of iterates x_{i+1}, i = 1, 2, ..., k, where k < n, are used to produce iterates which contain approximately no error in those same low-dimensional invariant subspaces. We begin with simple techniques based upon the resolution of a single dominant eigenvalue/eigenvector pair and extend the notion to higher-dimensional invariant subspaces. Discussion is given to using various subspace iteration methods and their convergence. These ideas are again generalized by solving the eigenproblem for a projection of G onto an appropriate subspace. The use of Lanczos-type methods is discussed for establishing these projections.
We produce acceleration techniques based on the process of generalized inversion. The relationship between the minimal polynomial extrapolation technique (MPE) for acceleration of convergence and conjugate gradient type methods is explored. Further acceleration techniques are formed from conjugate gradient type techniques and a generalized inverse Newton's method.
An exposition is given of accelerations based upon generalizations of rational interpolation and Pade approximation. Further acceleration techniques using Sherman-Morrison-Woodbury type formulas are formulated and suggested as a replacement for the E-transform.
We contrast the effect of several extrapolation techniques drawn from the dissertation on a nonsymmetric linear iteration. We pick the Minimal Polynomial Extrapolation (MPE) as a representative of techniques based on orthogonal residuals, the Vector Epsilon Algorithm (VEA) as a representative vector interpolation technique, and a technique formulated in this dissertation based on solving a projected eigenproblem. The results show the projected eigenproblem technique to be superior for certain iterations.
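The weighted-average idea can be made concrete with a minimal MPE sketch in Python/NumPy (an illustrative reconstruction of the standard MPE formula, not code from the dissertation): for a linear iteration x_{i+1} = Gx_i + k with extrapolation width equal to the dimension n, the weighted average reproduces the fixed point essentially exactly, because the minimal polynomial of G has degree at most n:

```python
import numpy as np

# Linear iteration x_{i+1} = G x_i + k (illustrative instance);
# the exact limit is x* = (I - G)^{-1} k.
rng = np.random.default_rng(1)
n = 5
G = 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)   # a contraction w.h.p.
k_vec = rng.standard_normal(n)
x_star = np.linalg.solve(np.eye(n) - G, k_vec)

xs = [np.zeros(n)]
for _ in range(n + 1):                 # generate x_0 .. x_{n+1}
    xs.append(G @ xs[-1] + k_vec)
X = np.array(xs)
U = np.diff(X, axis=0).T               # columns u_j = x_{j+1} - x_j

# MPE: least-squares solve U[:, :n] c ~ -u_n, then normalize (c, 1)
c, *_ = np.linalg.lstsq(U[:, :n], -U[:, n], rcond=None)
gamma = np.append(c, 1.0)
gamma /= gamma.sum()
s = gamma @ X[: n + 1]                 # weighted average of x_0 .. x_n
# with width k = n this recovers x* up to roundoff
```

In practice one uses a width k much smaller than n, trading exactness for the partial error cancellation on dominant invariant subspaces that the abstract describes.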
Abstract Fixpoint Computations with Numerical Acceleration Methods
Static analysis by abstract interpretation aims at automatically proving
properties of computer programs. To do this, an over-approximation of program
semantics, defined as the least fixpoint of a system of semantic equations,
must be computed. To enforce the convergence of this computation, a widening
operator is used, but it may lead to coarse results. We propose a new method to
accelerate the computation of this fixpoint by using standard techniques of
numerical analysis. Our goal is to automatically and dynamically adapt the
widening operator in order to maintain precision.
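The precision loss that motivates this work can be seen in a generic interval-domain sketch (an illustration of standard widening/narrowing, not the paper's numerical method): plain Kleene iteration with the usual interval widening jumps a loop counter's upper bound straight to +infinity, and a single decreasing (narrowing) pass then recovers the precise bound:

```python
import math

# Interval abstract domain: (lo, hi); widening jumps unstable bounds to +/-inf.
def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    lo = a[0] if b[0] >= a[0] else -math.inf
    hi = a[1] if b[1] <= a[1] else math.inf
    return (lo, hi)

# Semantic function for: i = 0; while i < 100: i = i + 1
def F(itv):
    init = (0, 0)
    lo, hi = itv
    if lo >= 100:                      # loop guard filters i < 100
        return init
    body = (min(lo, 99) + 1, min(hi, 99) + 1)
    return join(init, body)

# Kleene iteration with widening stabilizes in a few steps...
x = (0, 0)
for _ in range(3):
    x = widen(x, F(x))
# ...at the coarse post-fixpoint x == (0, inf)

# one decreasing (narrowing) pass recovers the precise invariant
x = F(x)
```

The paper's contribution is, in effect, to replace the blunt jump to infinity with a dynamically chosen bound informed by numerical-analysis extrapolation of the iterates.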
Scalar Levin-Type Sequence Transformations
Sequence transformations are important tools for the convergence acceleration
of slowly convergent scalar sequences or series and for the summation of
divergent series. Transformations that depend not only on the sequence elements
or partial sums but also on an auxiliary sequence of so-called remainder
estimates are of Levin-type if they are linear in the sequence elements s_n and
nonlinear in the remainder estimates ω_n. Known Levin-type sequence transformations are
reviewed and put into a common theoretical framework. It is discussed how such
transformations may be constructed by either a model sequence approach or by
iteration of simple transformations. As illustration, two new sequence
transformations are derived. Common properties and results on convergence
acceleration and stability are given. For important special cases, extensions
of the general results are presented. Also, guidelines for the application of
Levin-type sequence transformations are discussed, and a few numerical examples
are given.
Comment: 59 pages, LaTeX, invited review for J. Comput. Applied Math., abstract shortened.
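For concreteness, here is a small sketch of one classical member of this family, Levin's u-transformation with remainder estimates ω_n = n·a_n, applied to the slowly convergent series Σ 1/m² = π²/6 (an illustrative implementation of the standard textbook formula, not code from the review):

```python
import math

def levin_u(a, k):
    """Levin u-transform estimate from the terms a[0..k] of a series,
    using remainder estimates omega_n = n * a_n (standard convention)."""
    s = [a[0]]                          # partial sums s_1, s_2, ...
    for t in a[1:]:
        s.append(s[-1] + t)
    num = den = 0.0
    for j in range(k + 1):
        w = (j + 1) * a[j]              # omega_{j+1} = (j+1) * a_{j+1}
        c = (-1) ** j * math.comb(k, j) * ((j + 1) / (k + 1)) ** (k - 1)
        num += c * s[j] / w             # numerator: weighted s_n / omega_n
        den += c / w                    # denominator: same weights / omega_n
    return num / den

terms = [1.0 / m ** 2 for m in range(1, 12)]   # first 11 terms of sum 1/m^2
est = levin_u(terms, 10)
# est approximates pi^2/6 far more closely than the raw partial sum
```

Note the structure the abstract refers to: the estimate is a ratio of two sums, linear in the partial sums s_n but nonlinear in the remainder estimates ω_n, which appear in both numerator and denominator.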
Implementation of the Combined Nonlinear-Condensation Transformation
We discuss several applications of the recently proposed combined
nonlinear-condensation transformation (CNCT) for the evaluation of slowly
convergent, nonalternating series. These include certain statistical
distributions which are of importance in linguistics, statistical-mechanics
theory, and biophysics (statistical analysis of DNA sequences). We also discuss
applications of the transformation in experimental mathematics, and we briefly
expand on further applications in theoretical physics. Finally, we discuss a
related Mathematica program for the computation of Lerch's transcendent.
Comment: 23 pages, 1 table, 1 figure (Comput. Phys. Commun., in press).
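The idea behind the CNCT can be sketched in a few lines: Van Wijngaarden's condensation rewrites a nonalternating series Σ a(m) as an alternating one Σ (-1)^j A_j, whose partial sums are then accelerated. The sketch below (an illustration, not the paper's implementation) uses iterated Aitken Δ² as a stand-in for the δ transformation employed in the paper, on the test series Σ 1/m²:

```python
import math

def condensed_term(a, j, kmax=40):
    """Van Wijngaarden condensation: A_j = sum_{k>=0} 2^k * a(2^k * (j+1))."""
    return sum(2.0 ** k * a(2 ** k * (j + 1)) for k in range(kmax))

a = lambda m: 1.0 / m ** 2            # nonalternating test series, sum = pi^2/6

# condensation turns sum a(m) into the alternating series sum (-1)^j A_j
A = [condensed_term(a, j) for j in range(12)]
s = [A[0]]
for j in range(1, 12):
    s.append(s[-1] + (-1) ** j * A[j])

# iterated Aitken delta^2 on the alternating partial sums (a stand-in
# here for the delta transformation used in the paper)
seq = s[:]
for _ in range(4):
    seq = [seq[i] - (seq[i + 1] - seq[i]) ** 2
           / (seq[i + 2] - 2 * seq[i + 1] + seq[i])
           for i in range(len(seq) - 2)]
est = seq[-1]
```

The point of the combination is that the inner condensation sums converge geometrically even when the original series does not alternate, and the outer transformation then exploits the strict sign alternation of the A_j.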