Scaling Manifold Ranking Based Image Retrieval
Manifold Ranking is a graph-based ranking algorithm that has been successfully applied to image retrieval from multimedia databases. Given a query image, Manifold Ranking computes ranking scores for the images in the database by exploiting the relationships among them, expressed in the form of a graph. Because Manifold Ranking exploits the global structure of the graph, it finds intuitive results significantly more effectively than current approaches. Fundamentally, Manifold Ranking requires a matrix inversion to compute ranking scores and therefore needs O(n^3) time, where n is the number of images; it consequently does not scale to databases with large numbers of images. Our solution, Mogul, is based on two ideas: (1) it computes ranking scores efficiently using sparse matrices, and (2) it skips unnecessary score computations by estimating upper-bounding scores. Together, these ideas reduce the time complexity from the O(n^3) of the inverse-matrix approach to O(n). Experiments show that Mogul is much faster and gives significantly better retrieval quality than a state-of-the-art approximation approach.
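As a concrete sketch of the O(n^3) baseline the abstract refers to, the classical manifold-ranking closed form computes scores f = (I - alpha*S)^{-1} y from a symmetrically normalized affinity matrix S and a query indicator vector y. The toy 5-image graph and the value of alpha below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Toy affinity graph over 5 "images" (symmetric 0/1 weights; illustrative only).
W = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

alpha = 0.9                               # propagation weight, 0 < alpha < 1
D = np.diag(W.sum(axis=1))                # degree matrix
S = np.linalg.inv(np.sqrt(D)) @ W @ np.linalg.inv(np.sqrt(D))

y = np.zeros(5)
y[0] = 1.0                                # query indicator: image 0 is the query

# Closed-form ranking scores: the inverse-matrix computation that costs O(n^3).
f = np.linalg.solve(np.eye(5) - alpha * S, y)
ranking = np.argsort(-f)                  # images sorted by descending score
```

Images close to the query in the graph receive higher scores than distant ones, which is the "global structure" effect the abstract describes; Mogul's contribution is avoiding the dense solve above.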
Parallel accelerated cyclic reduction preconditioner for three-dimensional elliptic PDEs with variable coefficients
We present a robust and scalable preconditioner for the solution of
large-scale linear systems that arise from the discretization of elliptic PDEs
amenable to rank compression. The preconditioner is based on hierarchical
low-rank approximations and the cyclic reduction method. The setup and
application phases of the preconditioner achieve log-linear complexity in
memory footprint and number of operations, and numerical experiments exhibit
good weak and strong scalability at large processor counts in a distributed
memory environment. Numerical experiments with linear systems that feature
symmetry and nonsymmetry, definiteness and indefiniteness, constant and
variable coefficients demonstrate the preconditioner applicability and
robustness. Furthermore, it is possible to control the number of iterations via
the accuracy threshold of the hierarchical matrix approximations and their
arithmetic operations, and the tuning of the admissibility condition parameter.
Together, these parameters allow for optimization of the memory requirements
and performance of the preconditioner.
Comment: 24 pages, Elsevier Journal of Computational and Applied Mathematics, Dec 201
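The core step of cyclic reduction is eliminating every other unknown of a tridiagonal (or block-tridiagonal) system via a Schur complement, which yields a smaller system of the same form on the remaining unknowns. A minimal one-level sketch on a 1-D Laplacian (a stand-in for the paper's 3-D elliptic discretizations; the matrix and right-hand side are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 7                                     # odd size so the halving is clean

# Model tridiagonal SPD matrix: the 1-D discrete Laplacian.
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = rng.standard_normal(n)

even = np.arange(0, n, 2)                 # unknowns kept at the next level
odd = np.arange(1, n, 2)                  # unknowns eliminated at this level

# Reordered blocks of [A_oo A_oe; A_eo A_ee] [x_o; x_e] = [b_o; b_e].
A_oo = A[np.ix_(odd, odd)]                # diagonal, hence trivially invertible
A_oe = A[np.ix_(odd, even)]
A_eo = A[np.ix_(even, odd)]
A_ee = A[np.ix_(even, even)]

# Schur complement: a smaller tridiagonal system on the even unknowns.
S = A_ee - A_eo @ np.linalg.solve(A_oo, A_oe)
rhs = b[even] - A_eo @ np.linalg.solve(A_oo, b[odd])

x = np.empty(n)
x[even] = np.linalg.solve(S, rhs)                           # coarse solve
x[odd] = np.linalg.solve(A_oo, b[odd] - A_oe @ x[even])     # back-substitution
```

Applied recursively, this halves the problem at each level; the paper's contribution is compressing the increasingly dense Schur complements with hierarchical low-rank approximations to keep the whole process log-linear.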
Preconditioning of wavelet BEM by the incomplete Cholesky factorization
The present paper is dedicated to the preconditioning of boundary element matrices given in wavelet coordinates. We investigate the incomplete Cholesky factorization (ICF) for a pattern which also includes the coefficients of all off-diagonal bands associated with the level-level interactions. The pattern is chosen in such a way that the ICF is computable in log-linear complexity. Numerical experiments are performed to quantify the effects of the proposed preconditioning.
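An incomplete Cholesky factorization runs the usual Cholesky recurrence but only fills entries inside a prescribed sparsity pattern, discarding everything else. A minimal dense-storage sketch of this pattern-restricted factorization (the SPD test matrix is an illustrative assumption; breakdown handling is omitted):

```python
import numpy as np

def icf(A, pattern):
    """Incomplete Cholesky: L @ L.T approximates A, with L restricted
    to `pattern`. Assumes A is SPD and the factorization does not break down."""
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        # Diagonal entry from the standard Cholesky recurrence.
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        for i in range(j + 1, n):
            if pattern[i, j]:             # fill only inside the allowed pattern
                L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

# Tridiagonal SPD example with the zero-fill pattern IC(0).
A = 2 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)
L = icf(A, A != 0)
```

For a tridiagonal matrix the exact Cholesky factor causes no fill-in, so IC(0) reproduces it exactly; for the paper's wavelet-compressed BEM matrices the pattern is instead enlarged to cover the off-diagonal level-interaction bands while staying log-linear.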
A fast direct solver for nonlocal operators in wavelet coordinates
In this article, we consider fast direct solvers for nonlocal operators. The
pivotal idea is to combine a wavelet representation of the system matrix,
yielding a quasi-sparse matrix, with the nested dissection ordering scheme. The
latter drastically reduces the fill-in during the factorization of the system
matrix by means of a Cholesky decomposition or an LU decomposition,
respectively. This way, we end up with the exact inverse of the compressed
system matrix with only a moderate increase of the number of nonzero entries in
the matrix.
To illustrate the efficacy of the approach, we conduct numerical experiments
for different highly relevant applications of nonlocal operators: We consider
(i) the direct solution of boundary integral equations in three spatial
dimensions, issuing from the polarizable continuum model, (ii) a parabolic
problem for the fractional Laplacian in integral form and (iii) the fast
simulation of Gaussian random fields.
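Nested dissection reduces fill-in by numbering separator vertices last, so that elimination never couples the subdomains they separate. The classic arrowhead example below (an illustrative toy, not the paper's wavelet setting) shows how the elimination order alone decides between complete fill-in and none at all:

```python
import numpy as np

n = 8
# "Arrowhead" SPD matrix: vertex 0 is a hub connected to all other vertices.
A = n * np.eye(n)
A[0, 1:] = A[1:, 0] = 1.0

def fill_in(M):
    """Strictly-lower nonzeros of the Cholesky factor outside M's own pattern."""
    L = np.linalg.cholesky(M)
    return int(np.sum((np.abs(L) > 1e-12) & (np.tril(M) == 0)))

# Natural order eliminates the hub first: its Schur complement is fully dense.
# A dissection-style order numbers the separator (the hub) last: zero fill-in.
perm = list(range(1, n)) + [0]
P = np.eye(n)[perm]
print(fill_in(A), fill_in(P @ A @ P.T))
```

The paper combines the same ordering idea with a wavelet representation that makes the system matrix quasi-sparse in the first place, so the factored inverse stays moderate in size.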
kernlab - An S4 Package for Kernel Methods in R
kernlab is an extensible package for kernel-based machine learning methods in R. It takes advantage of R's new S4 object model and provides a framework for creating and using kernel-based algorithms. The package contains dot product primitives (kernels), implementations of support vector machines and the relevance vector machine, Gaussian processes, a ranking algorithm, kernel PCA, kernel CCA, and a spectral clustering algorithm. Moreover, it provides a general-purpose quadratic programming solver and an incomplete Cholesky decomposition method.
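kernlab itself is an R package; as a language-agnostic sketch of one of the algorithms it bundles, the following shows kernel PCA with an RBF kernel in Python (the data, the bandwidth sigma, and the two-component choice are illustrative assumptions, not kernlab's defaults):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))          # 20 samples, 3 features (toy data)

# RBF (Gaussian) kernel matrix: the "dot product primitive".
sigma = 1.0
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * sigma**2))

# Center the kernel matrix in feature space.
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J

# Kernel PCA: eigendecomposition of the centered kernel matrix; the
# projections are eigenvectors scaled by the square roots of the eigenvalues.
vals, vecs = np.linalg.eigh(Kc)           # ascending eigenvalues
top = vecs[:, -2:][:, ::-1] * np.sqrt(np.maximum(vals[-2:][::-1], 0))
```

Swapping in a different kernel function is the only change needed for other kernels, which is precisely the extensibility point the kernlab design makes with its S4 kernel classes.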