
    Fast matrix computations for pair-wise and column-wise commute times and Katz scores

    We first explore methods for approximating the commute time and Katz score between a pair of nodes. These methods are based on the matrices, moments, and quadrature approach developed in the numerical linear algebra community. They rely on the Lanczos process and provide upper and lower bounds on an estimate of the pair-wise scores. We also explore methods to approximate the commute times and Katz scores from a node to all other nodes in the graph. Here, our approach for the commute times is based on a variation of the conjugate gradient algorithm, and it provides an estimate of all the diagonals of the inverse of a matrix. Our technique for the Katz scores exploits an empirical localization property of the Katz matrix. We adapt algorithms used for personalized PageRank computation to these Katz scores and show theoretically that this approach is convergent. We evaluate these methods on 17 real-world graphs ranging in size from 1,000 to 1,000,000 nodes. Our results show that our pair-wise commute time method and column-wise Katz algorithm both have attractive theoretical properties and empirical performance. Comment: 35 pages; journal version of http://dx.doi.org/10.1007/978-3-642-18009-5_13, which has been submitted for publication. See http://www.cs.purdue.edu/homes/dgleich/publications/2011/codes/fast-katz/ for supplemental code.
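    As an illustration of the column-wise Katz computation, here is a minimal Python sketch (not the paper's Lanczos/quadrature bounds or its push-style adaptation of personalized PageRank) that approximates the Katz scores from one source node to all others by truncating the Neumann series k = sum over j >= 1 of alpha^j A^j e_source, which converges when alpha is below the reciprocal of the largest eigenvalue of A. The toy graph, damping value, and truncation length are illustrative assumptions.

import numpy as np
import scipy.sparse as sp

def katz_column(A, source, alpha, num_terms=50):
    """Approximate Katz scores from `source` to every node of the sparse adjacency matrix A."""
    v = np.zeros(A.shape[0])
    v[source] = 1.0
    scores = np.zeros(A.shape[0])
    term = v
    for _ in range(num_terms):
        term = alpha * (A @ term)   # next term alpha^j * A^j * e_source
        scores += term
    return scores

# Toy usage on a 4-node undirected graph.
A = sp.csr_matrix(np.array([[0, 1, 1, 0],
                            [1, 0, 1, 0],
                            [1, 1, 0, 1],
                            [0, 0, 1, 0]], dtype=float))
print(katz_column(A, source=0, alpha=0.1))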

    In-network Sparsity-regularized Rank Minimization: Algorithms and Applications

    Given a limited number of entries from the superposition of a low-rank matrix plus the product of a known fat compression matrix times a sparse matrix, recovery of the low-rank and sparse components is a fundamental task subsuming compressed sensing, matrix completion, and principal components pursuit. This paper develops algorithms for distributed sparsity-regularized rank minimization over networks, when the nuclear norm and $\ell_1$-norm are used as surrogates to the rank and number of nonzero entries of the sought matrices, respectively. While nuclear-norm minimization has well-documented merits when centralized processing is viable, non-separability of the singular-value sum challenges its distributed minimization. To overcome this limitation, an alternative characterization of the nuclear norm is adopted which leads to a separable, yet non-convex, cost minimized via the alternating-direction method of multipliers. The novel distributed iterations entail reduced-complexity per-node tasks and affordable message passing among single-hop neighbors. Interestingly, upon convergence the distributed (non-convex) estimator provably attains the global optimum of its centralized counterpart, regardless of initialization. Several application domains are outlined to highlight the generality and impact of the proposed framework. These include unveiling traffic anomalies in backbone networks, predicting network-wide path latencies, and mapping the RF ambiance using wireless cognitive radios. Simulations with synthetic and real network data corroborate the convergence of the novel distributed algorithm and its centralized performance guarantees. Comment: 30 pages; submitted for publication in the IEEE Transactions on Signal Processing.
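    The separable characterization mentioned above is the bilinear bound $\|X\|_* = \min_{X = P Q^T} \tfrac{1}{2}(\|P\|_F^2 + \|Q\|_F^2)$. The Python sketch below applies that surrogate to plain centralized matrix completion via alternating ridge-regression updates of the factors; it only illustrates the surrogate, not the paper's distributed ADMM iterations, and the rank, regularization weight, and random initialisation are illustrative assumptions.

import numpy as np

def complete_low_rank(Y, mask, rank=5, lam=0.1, iters=100, seed=0):
    """Fill in Y (observed where mask is True) with a rank-`rank` estimate P @ Q.T."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    P = rng.standard_normal((m, rank))
    Q = rng.standard_normal((n, rank))
    for _ in range(iters):
        for i in range(m):                       # ridge update of row i of P from the observed row entries
            idx = np.flatnonzero(mask[i])
            if idx.size:
                Qi = Q[idx]
                P[i] = np.linalg.solve(Qi.T @ Qi + lam * np.eye(rank), Qi.T @ Y[i, idx])
        for j in range(n):                       # ridge update of row j of Q from the observed column entries
            idx = np.flatnonzero(mask[:, j])
            if idx.size:
                Pj = P[idx]
                Q[j] = np.linalg.solve(Pj.T @ Pj + lam * np.eye(rank), Pj.T @ Y[idx, j])
    return P @ Q.T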

    Self consistent bathymetric mapping from robotic vehicles in the deep ocean

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2005. Obtaining accurate and repeatable navigation for robotic vehicles in the deep ocean is difficult and consequently a limiting factor when constructing vehicle-based bathymetric maps. This thesis presents a methodology to produce self-consistent maps and simultaneously improve vehicle position estimation by exploiting accurate local navigation and utilizing terrain-relative measurements. It is common for errors in the vehicle position estimate to far exceed the errors associated with the acoustic range sensor. This disparity creates inconsistency when an area is imaged multiple times and causes artifacts that distort map integrity. Our technique utilizes small terrain "submaps" that can be pairwise registered and used to additionally constrain the vehicle position estimates in accordance with the actual bottom topography. A delayed-state Kalman filter is used to incorporate these submap registrations as relative position measurements between previously visited vehicle locations. The archiving of previous positions in the filter state vector allows for continual adjustment of the submap locations. The terrain registration is accomplished using a two-dimensional correlation and a six-degree-of-freedom point cloud alignment method tailored for bathymetric data. The complete bathymetric map is then created from the union of all submaps that have been aligned in a consistent manner. Experimental results from the fully automated processing of a multibeam survey over the TAG hydrothermal structure at the Mid-Atlantic Ridge are presented to validate the proposed method. This work was funded by the CenSSIS ERC of the National Science Foundation under grant EEC-9986821 and in part by the Woods Hole Oceanographic Institution through a grant from the Penzance Foundation.
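    As a small illustration of the two-dimensional correlation step, the Python sketch below estimates the horizontal offset between two overlapping gridded depth submaps from the peak of their cross-correlation; the gridding, de-meaning, and treatment of unmapped cells are illustrative assumptions, and the thesis's full pipeline additionally performs a six-degree-of-freedom point cloud alignment and feeds the result into a delayed-state Kalman filter.

import numpy as np
from scipy.signal import correlate2d

def submap_offset(submap_a, submap_b):
    """Estimate the integer (row, col) offset between two overlapping gridded depth submaps."""
    a = np.nan_to_num(submap_a - np.nanmean(submap_a))   # de-mean; treat unmapped (NaN) cells as zero
    b = np.nan_to_num(submap_b - np.nanmean(submap_b))
    corr = correlate2d(a, b, mode="full")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Sign convention follows scipy's 'full' correlation indexing.
    return peak[0] - (b.shape[0] - 1), peak[1] - (b.shape[1] - 1)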

    3D shape matching and registration: a probabilistic perspective

    Dense correspondence is a key problem in computer vision and medical image analysis, with applications in registration and shape analysis. In this thesis, we develop a technique to recover dense correspondences between the surfaces of neuroanatomical objects over heterogeneous populations of individuals. We recover dense correspondences based on 3D shape matching. In this thesis, the 3D shape matching problem is formulated under the framework of Markov Random Fields (MRFs). We represent the surfaces of neuroanatomical objects as genus-zero voxel-based meshes. The surface meshes are projected into a Markov random field space. The projection carries both geometric and topological information, in terms of Gaussian curvature and mesh neighbourhood, from the original space to the random field space. Gaussian curvature is projected to the nodes of the MRF, and the mesh neighbourhood structure is projected to the edges. 3D shape matching between two surface meshes is then performed by solving an energy function minimisation problem formulated with MRFs. The outcome of the 3D shape matching is dense point-to-point correspondences. However, the minimisation of the energy function is NP-hard. In this thesis, we use belief propagation to perform the probabilistic inference for 3D shape matching. A sparse update loopy belief propagation algorithm adapted to 3D shape matching is proposed to obtain an approximate global solution for the 3D shape matching problem. The sparse update loopy belief propagation algorithm demonstrates significant efficiency gains compared to standard belief propagation. The computational complexity and convergence properties of the sparse update loopy belief propagation algorithm are also analysed in the thesis. We also investigate randomised algorithms to minimise the energy function. In order to enhance the shape matching rate and increase the inlier support set, we propose a novel clamping technique. The clamping technique is realised by combining the loopy belief propagation message updating rule with feedback from 3D rigid-body registration. By using this clamping technique, the correct shape matching rate is increased significantly. Finally, we investigate 3D shape registration techniques based on the 3D shape matching result. Based on the point-to-point dense correspondences obtained from the 3D shape matching, a three-point transformation estimation technique is combined with the RANdom SAmple Consensus (RANSAC) algorithm to obtain the inlier support set. The global registration approach is purely dependent on point-wise correspondences between two meshed surfaces. It has the advantage that the need for orientation initialisation is eliminated and that it applies to all shapes of spherical topology. The experiments compare our MRF-based 3D registration approach with a state-of-the-art registration algorithm, the first-order ellipsoid template, and show dense correspondences for pairs of hippocampi from two different data sets, each of around 20 healthy individuals aged 60 and over.
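    To make the inference step concrete, here is a minimal Python sketch of generic min-sum loopy belief propagation on a pairwise MRF, where each node is a mesh vertex and each label a candidate correspondence; the shared pairwise cost, the simple message schedule, and the absence of the thesis's sparse-update and clamping mechanisms are illustrative simplifications.

import numpy as np

def loopy_bp_labels(unary, edges, pairwise, iters=20):
    """Min-sum loopy BP.  unary: (n_nodes, n_labels) costs; edges: list of (i, j) pairs;
    pairwise: (n_labels, n_labels) cost shared by every edge."""
    n_nodes, n_labels = unary.shape
    directed = [(i, j) for (i, j) in edges] + [(j, i) for (i, j) in edges]
    msgs = {e: np.zeros(n_labels) for e in directed}
    for _ in range(iters):
        for (i, j) in directed:
            incoming = np.zeros(n_labels)
            for (k, t) in directed:              # messages arriving at i from all neighbours except j
                if t == i and k != j:
                    incoming += msgs[(k, i)]
            # m_{i->j}(x_j) = min over x_i of [ unary_i(x_i) + incoming(x_i) + pairwise(x_i, x_j) ]
            msg = np.min((unary[i] + incoming)[:, None] + pairwise, axis=0)
            msgs[(i, j)] = msg - msg.min()       # normalise to keep message values bounded
    beliefs = unary.astype(float).copy()
    for (i, j) in directed:
        beliefs[j] += msgs[(i, j)]
    return np.argmin(beliefs, axis=1)            # one label (candidate correspondence) per mesh vertex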

    Sparse reduced-rank regression for imaging genetics studies: models and applications

    We present a novel statistical technique: the sparse reduced-rank regression (sRRR) model, a strategy for multivariate modelling of high-dimensional imaging responses and genetic predictors. By adopting penalisation techniques, the model is able to enforce sparsity in the regression coefficients, identifying subsets of genetic markers that best explain the variability observed in subsets of the phenotypes. To properly exploit the rich structure present in each of the imaging and genetics domains, we additionally propose the use of several structured penalties within the sRRR model. Using simulation procedures that accurately reflect realistic imaging genetics data, we present detailed evaluations of the sRRR method in comparison with the more traditional univariate linear modelling approach. In all settings considered, we show that sRRR possesses greater power to detect deleterious genetic variants. Moreover, using a simple genetic model, we demonstrate the potential benefits, in terms of statistical power, of carrying out voxel-wise searches as opposed to extracting averages over regions of interest in the brain. Since this entails the use of phenotypic vectors of enormous dimensionality, we suggest the use of a sparse classification model as a de-noising step prior to the imaging genetics study. Furthermore, we present the application of a data re-sampling technique within the sRRR model for model selection. Using this approach we are able to rank the genetic markers in order of importance of association to the phenotypes, and similarly rank the phenotypes in order of importance to the genetic markers. Finally, we illustrate the practical application of the proposed statistical models on three real imaging genetics datasets and highlight some potential associations.
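    A minimal rank-one Python sketch of the idea follows, assuming the formulation Y ≈ X b aᵀ with an $\ell_1$ penalty on the genetic coefficient vector b (the full sRRR model above uses higher ranks and structured penalties); the initialisation, penalty weight, and use of scikit-learn's Lasso for the sparse update are illustrative choices.

import numpy as np
from sklearn.linear_model import Lasso

def srrr_rank1(X, Y, lam=0.1, iters=50):
    """X: (n, p) genetic predictors, Y: (n, q) imaging phenotypes.
    Returns a sparse coefficient vector b (p,) and a phenotype loading a (q,)."""
    a = np.linalg.svd(Y, full_matrices=False)[2][0]      # initialise a from the leading phenotype direction
    lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        b = lasso.fit(X, Y @ a).coef_                    # sparse regression of the projected phenotype on X
        v = Y.T @ (X @ b)
        if np.linalg.norm(v) == 0:                       # the penalty removed every predictor
            break
        a = v / np.linalg.norm(v)                        # update the unit-norm phenotype loading
    return b, a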

    CUR Decompositions, Similarity Matrices, and Subspace Clustering

    A general framework for solving the subspace clustering problem using the CUR decomposition is presented. The CUR decomposition provides a natural way to construct similarity matrices for data that come from a union of unknown subspaces $\mathscr{U}=\bigcup_{i=1}^{M} S_i$. The similarity matrices thus constructed give the exact clustering in the noise-free case. Additionally, this decomposition gives rise to many distinct similarity matrices from a given set of data, which allow enough flexibility to perform accurate clustering of noisy data. We also show that two known methods for subspace clustering can be derived from the CUR decomposition. An algorithm based on the theoretical construction of similarity matrices is presented, and experiments on synthetic and real data test the method. Additionally, an adaptation of our CUR-based similarity matrices is utilized to provide a heuristic algorithm for subspace clustering; this algorithm yields the best overall performance to date for clustering the Hopkins155 motion segmentation dataset. Comment: Approximately 30 pages. The current version contains an improved algorithm and numerical experiments relative to the previous version.
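    For orientation, the Python sketch below builds a similarity matrix from a low-rank factorization of the data (the classical shape-interaction construction from the top right singular vectors) and feeds it to spectral clustering; the paper's construction instead draws its factors from a CUR decomposition, i.e. from actual columns and rows of the data, so this is only a simplified stand-in under that assumption, with the rank and cluster count supplied by the user.

import numpy as np
from sklearn.cluster import SpectralClustering

def subspace_cluster(W, rank, n_clusters):
    """W: (d, N) data whose columns are drawn from a union of subspaces."""
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    Vr = Vt[:rank]                                   # top right singular vectors, shape (rank, N)
    S = np.abs(Vr.T @ Vr)                            # similarity between data points (columns of W)
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(S)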