Preconditioned Algorithm for Difference of Convex Functions with applications to Graph Ginzburg-Landau Model
In this work, we propose and study a preconditioned framework with a graph
Ginzburg-Landau functional for image segmentation and data clustering by
parallel computing. Solving nonlocal models is usually challenging due to the
huge computational burden. For the nonconvex and nonlocal variational
functional, we propose several damped Jacobi and generalized Richardson
preconditioners for the large-scale linear systems within a
difference-of-convex-functions algorithm framework. They are efficient for
parallel computing on GPUs and can reduce the computational cost. Our
framework also provides flexible step sizes with a global convergence
guarantee. Numerical experiments show that the proposed algorithms are very
competitive compared to the singular value decomposition based spectral
method.
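A damped Jacobi preconditioner of the kind mentioned in this abstract amounts to the cheap, embarrassingly parallel update x ← x + ω D⁻¹(b − Ax) with D = diag(A). The sketch below illustrates the generic iteration only, not the paper's implementation; the test matrix and the damping factor ω = 0.6 are assumptions made up for the example.

```python
import numpy as np

def damped_jacobi_solve(A, b, omega=0.6, tol=1e-10, max_iter=5000):
    """Solve A x = b with the damped Jacobi (preconditioned Richardson)
    iteration x <- x + omega * D^{-1} (b - A x), where D = diag(A).
    Each sweep is a matrix-vector product plus elementwise work, so it
    parallelizes trivially (e.g. on a GPU)."""
    d = np.diag(A)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x
        x = x + omega * r / d
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
    return x

# Illustrative diagonally dominant SPD system (hypothetical test problem).
n = 50
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
A = A + A.T + n * np.eye(n)       # symmetrize and shift to dominance
b = rng.standard_normal(n)
x = damped_jacobi_solve(A, b)
print(np.linalg.norm(A @ x - b))  # small residual
```

For strongly diagonally dominant systems like this one the iteration converges geometrically; in practice such sweeps are used as preconditioners inside an outer solver rather than run to completion.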
Fast linear algebra is stable
In an earlier paper, we showed that a large class of fast recursive matrix
multiplication algorithms is stable in a normwise sense, and that in fact if
multiplication of $n$-by-$n$ matrices can be done by any algorithm in
$O(n^{\omega + \eta})$ operations for any $\eta > 0$, then it can be done
stably in $O(n^{\omega + \eta})$ operations for any $\eta > 0$. Here we extend
this result to show that essentially all standard linear algebra operations,
including LU decomposition, QR decomposition, linear equation solving, matrix
inversion, solving least squares problems, (generalized) eigenvalue problems
and the singular value decomposition can also be done stably (in a normwise
sense) in $O(n^{\omega + \eta})$ operations.
Comment: 26 pages; final version; to appear in Numerische Mathematik
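As an illustration of the kind of fast recursive multiplication this result applies to, here is a minimal Strassen sketch, which uses $O(n^{\log_2 7})$ operations for $n$ a power of two. The cutoff value is an arbitrary choice for the example, and the printed normwise error is the stability notion the abstract refers to; this is a textbook sketch, not the algorithms analyzed in the paper.

```python
import numpy as np

def strassen(A, B, cutoff=32):
    """Strassen's recursive multiply for n-by-n matrices, n a power of
    two; falls back to ordinary multiplication below `cutoff`."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    # Seven recursive products instead of eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:m, :m] = M1 + M4 - M5 + M7
    C[:m, m:] = M3 + M5
    C[m:, :m] = M2 + M4
    C[m:, m:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((128, 128))
B = rng.standard_normal((128, 128))
err = np.linalg.norm(strassen(A, B) - A @ B) / np.linalg.norm(A @ B)
print(err)  # small normwise relative error
```

The error here is normwise small (but, characteristically for fast algorithms, not componentwise small the way classical multiplication's is).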
Minimizing Communication for Eigenproblems and the Singular Value Decomposition
Algorithms have two costs: arithmetic and communication. The latter
represents the cost of moving data, either between levels of a memory
hierarchy, or between processors over a network. Communication often dominates
arithmetic and represents a rapidly increasing proportion of the total cost, so
we seek algorithms that minimize communication. In \cite{BDHS10}, lower bounds
were presented on the amount of communication required for essentially all
$O(n^3)$-like algorithms for linear algebra, including eigenvalue problems and
the SVD. Conventional algorithms, including those currently implemented in
(Sca)LAPACK, perform asymptotically more communication than these lower bounds
require. In this paper we present parallel and sequential eigenvalue algorithms
(for pencils, nonsymmetric matrices, and symmetric matrices) and SVD algorithms
that do attain these lower bounds, and analyze their convergence and
communication costs.
Comment: 43 pages, 11 figures
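For classical $O(n^3)$ matrix multiplication, the sequential bandwidth lower bound is $\Omega(n^3/\sqrt{M})$ words moved through a fast memory of $M$ words, and a blocked algorithm attains it to within a constant factor. The toy counting model below illustrates that gap; the block-size rule $b = \sqrt{M/3}$ and the problem sizes are assumptions made for the example.

```python
import math

def blocked_matmul_traffic(n, M):
    """Words moved between slow and fast memory by a blocked n x n
    multiply with block size b = sqrt(M/3), chosen so three b x b
    blocks fit in fast memory at once.  Each of the (n/b)^3 block
    products touches three b x b blocks, i.e. 3*b^2 words."""
    b = int(math.sqrt(M / 3))
    blocks = math.ceil(n / b)
    return blocks ** 3 * 3 * b * b

def bandwidth_lower_bound(n, M):
    """The Omega(n^3 / sqrt(M)) lower bound, with the constant dropped."""
    return n ** 3 / math.sqrt(M)

n, M = 4096, 1 << 15   # illustrative: 4096 x 4096 matrices, 32K-word cache
ratio = blocked_matmul_traffic(n, M) / bandwidth_lower_bound(n, M)
print(ratio)           # a small constant: blocking attains the bound
```

The same counting exercise for the naive triple loop gives traffic growing like $n^3$ rather than $n^3/\sqrt{M}$, which is the asymptotic gap the paper's algorithms close for eigenproblems and the SVD.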
An improved Newton iteration for the generalized inverse of a matrix, with applications
The purpose here is to clarify and illustrate the potential for the use of variants of Newton's method for solving problems of practical interest on highly parallel computers. The authors show how to accelerate the method substantially and how to modify it successfully to cope with ill-conditioned matrices. The authors conclude that Newton's method can be of value for some interesting computations, especially in parallel and other computing environments in which matrix products are especially easy to work with.
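The Newton iteration referred to here, in its basic (Newton-Schulz) form for the Moore-Penrose generalized inverse, is X ← X(2I − AX): it is built almost entirely from matrix products, which is why it suits the parallel environments the authors mention. A minimal sketch follows; the scaled starting guess X₀ = Aᵀ/(‖A‖₁‖A‖∞) is a standard convergent choice, not necessarily the authors' accelerated variant.

```python
import numpy as np

def newton_pinv(A, iters=60):
    """Newton-Schulz iteration X <- X (2I - A X) for the Moore-Penrose
    pseudoinverse.  The starting guess X0 = A^T / (||A||_1 ||A||_inf)
    scales A^T so the iteration converges (quadratically near the
    solution); the loop body is nothing but matrix products."""
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    Im = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2 * Im - A @ X)
    return X

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 60))   # rectangular, full row rank w.h.p.
X = newton_pinv(A)
print(np.linalg.norm(X - np.linalg.pinv(A)))  # agrees with np.linalg.pinv
```

The quadratic convergence rate means a few dozen iterations reach working precision; the paper's contribution concerns accelerating this basic scheme and handling ill-conditioned A, which the sketch does not attempt.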