
    Embedding Graphs under Centrality Constraints for Network Visualization

    Visual rendering of graphs is a key task in the mapping of complex network data. Although most graph drawing algorithms emphasize aesthetic appeal, certain applications such as travel-time maps place more importance on visualization of structural network properties. The present paper advocates two graph embedding approaches with centrality considerations to comply with node hierarchy. The problem is formulated first as one of constrained multi-dimensional scaling (MDS), and it is solved via block coordinate descent iterations with successive approximations and guaranteed convergence to a KKT point. In addition, a regularization term enforcing graph smoothness is incorporated with the goal of reducing edge crossings. A second approach leverages the locally-linear embedding (LLE) algorithm, which assumes that the graph encodes data sampled from a low-dimensional manifold. Closed-form solutions to the resulting centrality-constrained optimization problems are determined, yielding meaningful embeddings. Experimental results demonstrate the efficacy of both approaches, especially for visualizing large networks on the order of thousands of nodes. Comment: Submitted to IEEE Transactions on Visualization and Computer Graphics.
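    The paper's constrained-MDS formulation is not reproduced here, but the idea of combining a distance-preserving embedding with a centrality constraint can be sketched as follows: classical MDS on shortest-path distances produces 2-D coordinates, and each node's radius from the origin is then rescaled so that higher-centrality (here, higher-degree) nodes sit nearer the center. The toy graph and the radius schedule `1/(1+deg)` are invented for illustration; the actual algorithm uses block coordinate descent on a constrained stress objective.

    ```python
    import numpy as np

    # Toy graph: adjacency matrix of a 6-node network (hypothetical example).
    A = np.array([
        [0, 1, 1, 1, 0, 0],
        [1, 0, 1, 0, 1, 0],
        [1, 1, 0, 0, 0, 1],
        [1, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0],
        [0, 0, 1, 0, 0, 0],
    ], dtype=float)
    n = A.shape[0]

    # All-pairs shortest paths (Floyd-Warshall) give the dissimilarities for MDS.
    D = np.where(A > 0, 1.0, np.inf)
    np.fill_diagonal(D, 0.0)
    for k in range(n):
        D = np.minimum(D, D[:, [k]] + D[[k], :])

    # Classical MDS: double-center squared distances, take the top-2 eigenvectors.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)                      # eigenvalues in ascending order
    X = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))  # n x 2 embedding

    # Centrality constraint (simplified): rescale each node's radius so that
    # higher-degree nodes sit closer to the origin, preserving angles.
    deg = A.sum(axis=1)
    target_r = 1.0 / (1.0 + deg)                  # hypothetical radius schedule
    r = np.linalg.norm(X, axis=1)
    X_c = X * (target_r / np.maximum(r, 1e-12))[:, None]

    print(np.round(X_c, 3))
    ```

    The rescaling only illustrates the constraint's effect; the paper instead optimizes the stress and the centrality constraint jointly, which is what yields convergence guarantees to a KKT point.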

    Fast Graph Laplacian regularized kernel learning via semidefinite-quadratic-linear programming.

    Wu, Xiaoming. Thesis (M.Phil.), Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 30-34). Abstracts in English and Chinese.
    Contents: Abstract (p. i); Acknowledgement (p. iv); Chapter 1, Introduction (p. 1); Chapter 2, Preliminaries (p. 4), covering Kernel Learning Theory (Positive Semidefinite Kernel, The Reproducing Kernel Map, Kernel Tricks), Spectral Graph Theory (Graph Laplacian, Eigenvectors of Graph Laplacian), and Convex Optimization (From Linear to Conic Programming, Second-Order Cone Programming, Semidefinite Programming); Chapter 3, Fast Graph Laplacian Regularized Kernel Learning (p. 14), covering The Problems (MVU, PCP, Low-Rank Approximation: from SDP to QSDP), Previous Approach: from QSDP to SDP, Our Formulation: from QSDP to SQLP, and Experimental Results; Chapter 4, Conclusion (p. 28); Bibliography (p. 3).
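    The thesis builds on the graph Laplacian and its eigenvectors, so a minimal sketch of those standard definitions may help: for an adjacency matrix A and degree matrix D, the unnormalized Laplacian is L = D - A, a positive semidefinite matrix whose low-order eigenvectors form the smooth basis used in Laplacian-regularized kernel learning. The path graph below is an invented example; the SQLP formulation itself is not shown.

    ```python
    import numpy as np

    # Adjacency matrix of a small path graph 0-1-2-3 (illustrative example).
    A = np.array([
        [0, 1, 0, 0],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [0, 0, 1, 0],
    ], dtype=float)

    # Unnormalized graph Laplacian: L = D - A, with D the diagonal degree matrix.
    D = np.diag(A.sum(axis=1))
    L = D - A

    # L is positive semidefinite; its smallest eigenvalue is 0, with the constant
    # vector as eigenvector on a connected graph. The low eigenvectors are the
    # "smooth" basis used to parameterize the kernel in Laplacian-regularized
    # learning problems such as MVU and PCP.
    eigvals, eigvecs = np.linalg.eigh(L)
    print(np.round(eigvals, 4))   # first entry ~ 0 for a connected graph
    ```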

    Modeling outcomes of soccer matches

    We compare various extensions of the Bradley-Terry model and a hierarchical Poisson log-linear model in terms of their performance in predicting the outcome of soccer matches (win, draw, or loss). The parameters of the Bradley-Terry extensions are estimated by maximizing the log-likelihood, or an appropriately penalized version of it, while the posterior densities of the parameters of the hierarchical Poisson log-linear model are approximated using integrated nested Laplace approximations. The prediction performance of the various modeling approaches is assessed using a novel, context-specific framework for temporal validation that is found to deliver accurate estimates of the test error. The direct modeling of outcomes via the various Bradley-Terry extensions and the modeling of match scores using the hierarchical Poisson log-linear model demonstrate similar behavior in terms of predictive performance.
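    The basic Bradley-Terry model underlying these extensions can be sketched in a few lines: each team i has a strength s_i, the probability that i beats j is exp(s_i)/(exp(s_i)+exp(s_j)), and the strengths are fit by maximizing the log-likelihood. The match data and the small ridge penalty below are invented for illustration; the paper's extensions additionally handle draws and use more careful penalization.

    ```python
    import numpy as np

    # Toy match results: (winner, loser) pairs for 3 teams (hypothetical data).
    matches = [(0, 1), (0, 2), (1, 2), (0, 1), (2, 1)]
    n_teams = 3

    # Bradley-Terry: P(i beats j) = exp(s_i) / (exp(s_i) + exp(s_j)).
    # Fit strengths s by gradient ascent on a mildly penalized log-likelihood.
    s = np.zeros(n_teams)
    lr = 0.1
    for _ in range(2000):
        grad = np.zeros(n_teams)
        for w, l in matches:
            p = 1.0 / (1.0 + np.exp(s[l] - s[w]))  # P(winner beats loser)
            grad[w] += 1 - p                        # winner's score gradient
            grad[l] -= 1 - p                        # loser's score gradient
        grad -= s / 100.0      # mild ridge penalty keeps the strengths identifiable
        s += lr * grad

    def p_win(i, j):
        """Predicted probability that team i beats team j."""
        return 1.0 / (1.0 + np.exp(s[j] - s[i]))

    print(np.round(s, 3), round(p_win(0, 1), 3))
    ```

    The penalty plays the same identifiability role as the penalized likelihoods mentioned in the abstract: without it, strengths are only determined up to an additive constant and an undefeated team's strength diverges.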

    Data-Driven Shape Analysis and Processing

    Data-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven approaches is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they are able to learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques, and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, through reviewing the literature and relating the existing works with both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing. Comment: 10 pages, 19 figures.

    kLog: A Language for Logical and Relational Learning with Kernels

    We introduce kLog, a novel approach to statistical relational learning. Unlike standard approaches, kLog does not represent a probability distribution directly. It is rather a language to perform kernel-based learning on expressive logical and relational representations. kLog allows users to specify learning problems declaratively. It builds on simple but powerful concepts: learning from interpretations, entity/relationship data modeling, logic programming, and deductive databases. Access by the kernel to the rich representation is mediated by a technique we call graphicalization: the relational representation is first transformed into a graph, in particular a grounded entity/relationship diagram. Subsequently, a choice of graph kernel defines the feature space. kLog supports mixed numerical and symbolic data, as well as background knowledge in the form of Prolog or Datalog programs, as in inductive logic programming systems. The kLog framework can be applied to tackle the same range of tasks that has made statistical relational learning so popular, including classification, regression, multitask learning, and collective classification. We also report on empirical comparisons, showing that kLog can be either more accurate, or much faster at the same level of accuracy, than Tilde and Alchemy. kLog is GPLv3 licensed and is available at http://klog.dinfo.unifi.it along with tutorials.
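    The graphicalization step can be illustrated with a minimal sketch: ground relational facts are turned into a bipartite graph whose nodes are the entities plus one node per ground relationship tuple, with edges linking each tuple node to its argument entities. The entities, relation names, and facts below are invented for illustration and do not reflect kLog's actual syntax or implementation.

    ```python
    # Hypothetical ground facts: entities and relationship tuples.
    entities = ["alice", "bob", "paper1"]
    relations = [("author_of", "alice", "paper1"),
                 ("author_of", "bob", "paper1"),
                 ("coauthor", "alice", "bob")]

    # Graphicalization sketch: one node per entity, one node per ground tuple,
    # and an edge from each tuple node to each of its arguments.
    nodes = set(entities)
    edges = set()
    for rel in relations:
        rel_node = f"{rel[0]}({','.join(rel[1:])})"  # one node per ground tuple
        nodes.add(rel_node)
        for arg in rel[1:]:
            edges.add((rel_node, arg))               # link tuple to its arguments

    print(len(nodes), len(edges))  # the graph a graph kernel can now operate on
    ```

    Once the relational data is in this graph form, any choice of graph kernel (as the abstract notes) induces the feature space used for learning.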