Performance of algebraic multigrid methods for non-symmetric matrices arising in particle methods
Large linear systems with sparse, non-symmetric matrices arise in the
modeling of Markov chains or in the discretization of convection-diffusion
problems. Due to their potential to solve sparse linear systems with an effort
that is linear in the number of unknowns, algebraic multigrid (AMG) methods are
of fundamental interest for such systems. For symmetric positive definite
matrices, fundamental theoretical convergence results are established, and
efficient AMG solvers have been developed. In contrast, for non-symmetric
matrices, theoretical convergence results have been provided only recently. A
property that is sufficient for convergence is that the matrix be an M-matrix.
In this paper, we show how the simulation of incompressible fluid flows with
particle methods leads to large linear systems with sparse, non-symmetric
matrices. In each time step, the Poisson equation is approximated by meshfree
finite differences. While traditional least squares approaches do not guarantee
an M-matrix structure, an approach based on linear optimization yields
optimally sparse M-matrices. For both types of discretization approaches, we
investigate the performance of a classical AMG method as well as an AMLI-type method. In the considered test problems, the M-matrix structure turns out not to be necessary for the convergence of AMG, but difficulties can occur when it is violated. In addition, the matrices obtained by the linear optimization approach result in fast solution times due to their optimal sparsity.
Comment: 16 pages, 7 figures
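The abstract leans on the M-matrix property without stating it; for reference, the standard textbook characterization reads as follows (a general definition, not a formula quoted from the paper):

```latex
% A nonsingular A \in \mathbb{R}^{n \times n} is an M-matrix if
a_{ij} \le 0 \quad \text{for all } i \ne j,
\qquad \text{and} \qquad
A^{-1} \ge 0 \quad \text{entrywise};
% equivalently, A = sI - B with B \ge 0 entrywise and s > \rho(B).
```

Stencil coefficients of the wrong sign violate exactly the off-diagonal condition, which is consistent with the abstract's remark that the least-squares discretization cannot guarantee an M-matrix.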
Using Underapproximations for Sparse Nonnegative Matrix Factorization
Nonnegative Matrix Factorization (NMF) consists in (approximately) factorizing a nonnegative data matrix as the product of two low-rank nonnegative matrices. It
has been successfully applied as a data analysis technique in numerous domains,
e.g., text mining, image processing, microarray data analysis, collaborative
filtering, etc.
We introduce a novel approach to solve NMF problems, based on the use of an underapproximation technique, and show its effectiveness in obtaining sparse solutions. This approach, based on Lagrangian relaxation, allows NMF problems to be solved in a recursive fashion. We also prove that the underapproximation problem is NP-hard for any fixed factorization rank, using a reduction from the maximum edge biclique problem in bipartite graphs.
We test two variants of our underapproximation approach on several standard
image datasets and show that they provide sparse part-based representations
with low reconstruction error. Our results are comparable and sometimes
superior to those obtained by two standard Sparse Nonnegative Matrix
Factorization techniques.
Comment: Version 2 removed the section about convex reformulations, which was not central to the development of our main results; added material to the introduction; added a review of previous related work (section 2.3); completely rewrote the last part (section 4) to provide extensive numerical results supporting our claims. Accepted in J. of Pattern Recognition
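For concreteness, the rank-one underapproximation subproblem that drives such a recursion can be written as below; the notation (data matrix M, factors u, v) is ours, sketched to be consistent with the published nonnegative matrix underapproximation literature rather than quoted from the abstract:

```latex
\min_{u \ge 0,\ v \ge 0} \ \left\| M - u v^{\top} \right\|_F^2
\quad \text{subject to} \quad u v^{\top} \le M \ \text{(entrywise)}.
% The constraint keeps the residual R = M - u v^{\top} nonnegative,
% so the same problem can be posed again on R, extracting one
% (typically sparse) rank-one factor per recursion step.
```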
Generalized power method for sparse principal component analysis
In this paper we develop a new approach to sparse principal component
analysis (sparse PCA). We propose two single-unit and two block optimization
formulations of the sparse PCA problem, aimed at extracting a single sparse
dominant principal component of a data matrix, or more components at once,
respectively. While the initial formulations involve nonconvex functions, and
are therefore computationally intractable, we rewrite them into the form of an
optimization program involving maximization of a convex function on a compact
set. The dimension of the search space is decreased enormously if the data
matrix has many more columns (variables) than rows. We then propose and analyze
a simple gradient method suited for the task. It appears that our algorithm has the best convergence properties when either the objective function or the feasible set is strongly convex, which is the case with our single-unit
formulations and can be enforced in the block case. Finally, we demonstrate
numerically on a set of random and gene expression test problems that our
approach outperforms existing algorithms both in quality of the obtained
solution and in computational speed.
Comment: Submitted
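The "simple gradient method" for maximizing a convex function f over a compact set Q admits a compact generic statement (our notation, inferred from the abstract rather than quoted from the paper):

```latex
x_{k+1} \in \operatorname*{arg\,max}_{y \in Q}
\left\{ f(x_k) + \langle \nabla f(x_k),\, y - x_k \rangle \right\}.
% (A subgradient replaces \nabla f when f is nonsmooth.)
% For Q the unit Euclidean sphere this reduces to
% x_{k+1} = \nabla f(x_k) / \| \nabla f(x_k) \|_2,
% which recovers the classical power method when f(x) = x^\top \Sigma x,
% hence "generalized power method".
```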
Algorithms for solving large sparse systems of simultaneous linear equations on vector processors
Very efficient algorithms for solving large sparse systems of simultaneous linear equations have been developed for serial processing computers. These involve a reordering of matrix rows and columns in order to obtain a near-triangular pattern of nonzero elements. Then an LU factorization is developed to represent the matrix inverse in terms of a sequence of elementary Gaussian eliminations, or pivots. In this paper it is shown how these algorithms are adapted for efficient implementation on vector processors. Results obtained on the CYBER 200 Model 205 are presented for a series of large test problems; these results show the comparative advantages of the triangularization and vector-processing algorithms.
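The CYBER 205 code itself is not shown here; the sketch below illustrates only the serial baseline the paper adapts, i.e. a fill-reducing reordering followed by sparse LU factorization, using SciPy's SuperLU wrapper on a made-up test system:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Small sparse, nonsymmetric test system (illustrative only).
n = 1000
A = sp.diags(
    [np.full(n - 1, -1.0), np.full(n, 4.0), np.full(n - 1, -2.0)],
    offsets=[-1, 0, 1], format="csc",
)
b = np.ones(n)

# Fill-reducing column reordering plus LU factorization: the inverse is
# never formed explicitly, only a sequence of elementary eliminations
# (pivots), as described in the abstract.
lu = splu(A, permc_spec="COLAMD")
x = lu.solve(b)

print(np.linalg.norm(A @ x - b))  # residual check
```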
The scheduling of sparse matrix-vector multiplication on a massively parallel DAP computer
An efficient data structure is presented which supports general unstructured sparse matrix-vector multiplications on a Distributed Array of Processors (DAP). This approach seeks to reduce the inter-processor data movements and organises the operations in batches of massively parallel steps by a heuristic scheduling procedure performed on the host computer.
The resulting data structure is of particular relevance to iterative schemes for solving linear systems. Performance results for matrices taken from well-known Linear Programming (LP) test problems are presented and analysed.
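The DAP data structure itself is not reproduced in the abstract; for reference, the serial CSR matrix-vector product below (CSR is our assumed format, not the paper's) spells out the per-nonzero multiply-accumulate operations that such a scheduling heuristic reorganizes into massively parallel batches:

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Reference y = A @ x with A stored in CSR form.

    Each inner iteration is one multiply-accumulate on one nonzero;
    these are the operations a DAP-style scheduler would batch.
    """
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# 2x2 example: A = [[1, 0], [2, 3]]
data = np.array([1.0, 2.0, 3.0])
indices = np.array([0, 0, 1])
indptr = np.array([0, 1, 3])
print(csr_matvec(data, indices, indptr, np.array([1.0, 1.0])))  # -> [1. 5.]
```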