
    Labeled Interleaving Distance for Reeb Graphs

    Full text link
    Merge trees, contour trees, and Reeb graphs are graph-based topological descriptors that capture topological changes in the (sub)level sets of scalar fields. Comparing scalar fields via their topological descriptors has many applications in topological data analysis and in the visualization of scientific data. Recently, Munch and Stefanou introduced a labeled interleaving distance for comparing two labeled merge trees, which enjoys a number of theoretical and algorithmic properties; in particular, the labeled interleaving distance between merge trees can be computed in polynomial time. In this work, we define the labeled interleaving distance for labeled Reeb graphs. We then prove that the (ordinary) interleaving distance between Reeb graphs equals the minimum of the labeled interleaving distance over all labelings. We also provide an efficient algorithm for computing the labeled interleaving distance between two labeled contour trees (special types of Reeb graphs that arise from simply connected domains). In the case of merge trees, the notion of the labeled interleaving distance was used by Gasparovic et al. to prove that the (ordinary) interleaving distance on the set of (unlabeled) merge trees is intrinsic. As our final contribution, we present counterexamples showing that, on the contrary, the (ordinary) interleaving distance on (unlabeled) Reeb graphs (and contour trees) is not intrinsic. It turns out that, under mild conditions on the labelings, the labeled interleaving distance is a metric on isomorphism classes of Reeb graphs, analogous to the ordinary interleaving distance. This provides new metrics on large classes of Reeb graphs.
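    In the merge-tree case, the labeled distance of Munch and Stefanou can be phrased as an l-infinity comparison of the matrices of pairwise lowest-common-ancestor heights induced by the labeling, which is what makes it computable in polynomial time. Below is a minimal Python sketch of that merge-tree case only, assuming each tree is given as a child-to-parent map (root mapped to itself) and a node-height map; the Reeb-graph and contour-tree algorithms of the paper are more involved.

```python
import itertools

def lca_height(parent, height, u, v):
    """Height of the lowest common ancestor of labels u and v in a merge tree
    given as a child -> parent map (root mapped to itself) and a height map."""
    ancestors = set()
    while True:
        ancestors.add(u)
        if parent[u] == u:
            break
        u = parent[u]
    while v not in ancestors:
        v = parent[v]
    return height[v]

def labeled_interleaving_distance(tree1, tree2, labels):
    """l-infinity distance between the induced LCA-height matrices of two
    labeled merge trees; each tree is a (parent_map, height_map) pair and
    `labels` are node ids present in both trees."""
    (p1, h1), (p2, h2) = tree1, tree2
    return max(
        abs(lca_height(p1, h1, a, b) - lca_height(p2, h2, a, b))
        for a, b in itertools.combinations_with_replacement(labels, 2)
    )
```

    Minimizing this labeled quantity over all labelings is exactly the kind of expression the paper's equality relates to the ordinary interleaving distance, there in the more general Reeb-graph setting.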

    Principal Geodesic Analysis of Merge Trees (and Persistence Diagrams)

    Full text link
    This paper presents a computational framework for the Principal Geodesic Analysis of merge trees (MT-PGA), a novel adaptation of the celebrated Principal Component Analysis (PCA) framework [87] to the Wasserstein metric space of merge trees [92]. We formulate MT-PGA computation as a constrained optimization problem, aiming to adjust a basis of orthogonal geodesic axes while minimizing a fitting energy. We introduce an efficient, iterative algorithm which exploits shared-memory parallelism, as well as an analytic expression of the fitting energy gradient, to ensure fast iterations. Our approach also trivially extends to extremum persistence diagrams. Extensive experiments on public ensembles demonstrate the efficiency of our approach, with MT-PGA computations on the order of minutes for the largest examples. We show the utility of our contributions by extending two typical PCA applications to merge trees. First, we apply MT-PGA to data reduction and reliably compress merge trees by concisely representing them by their first coordinates in the MT-PGA basis. Second, we present a dimensionality reduction framework exploiting the first two directions of the MT-PGA basis to generate two-dimensional layouts of the ensemble. We augment these layouts with persistence correlation views, enabling global and local visual inspection of the feature variability in the ensemble. In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a lightweight C++ implementation that can be used to reproduce our results.
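    As a Euclidean intuition pump for the constrained optimization described above, the following toy sketch fits an orthogonal basis of axes that minimizes the fitting (reconstruction) energy, here via power iteration with deflation; in the paper the analogous procedure runs in the Wasserstein metric space of merge trees, with geodesic axes in place of straight lines. All names and sizes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fit_principal_axes(X, n_axes=2, iters=200):
    """Euclidean toy analog of MT-PGA: iteratively fit orthogonal axes that
    minimize the fitting energy (equivalently, maximize explained variance),
    via power iteration with deflation."""
    X = X - X.mean(axis=0)            # center the ensemble on its barycenter
    R, axes = X.copy(), []
    rng = np.random.default_rng(0)
    for _ in range(n_axes):
        v = rng.normal(size=X.shape[1])
        for _ in range(iters):
            v = R.T @ (R @ v)         # step toward the dominant residual axis
            for a in axes:            # keep the basis orthogonal
                v -= (v @ a) * a
            v /= np.linalg.norm(v)
        axes.append(v)
        R -= np.outer(R @ v, v)       # deflate: remove the fitted axis
    return np.array(axes)

# Data reduction in the spirit of the paper's first application: represent
# each ensemble member by its first coordinates in the fitted basis.
X = np.random.default_rng(1).normal(size=(50, 10))
B = fit_principal_axes(X, n_axes=2)
coords = (X - X.mean(axis=0)) @ B.T   # two coordinates per ensemble member
```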

    Flow-based Influence Graph Visual Summarization

    Full text link
    Visually mining a large influence graph is appealing yet challenging. People are amazed by pictures of news-spreading graphs on Twitter and engaged by hidden citation networks in academia, yet they are often troubled by the poor readability of the underlying visualization. Existing summarization methods enhance the graph visualization with blocked views, but have an adverse effect on the latent influence structure. How can we visually summarize a large graph so as to maximize influence flows? In particular, how can we illustrate the impact of an individual node through the summarization? Can we maintain the appealing graph metaphor while preserving both the overall influence pattern and fine readability? To answer these questions, we first formally define the influence graph summarization problem. Second, we propose an end-to-end framework to solve the new problem. Our method can not only highlight the flow-based influence patterns in the visual summarization, but also inherently support rich graph attributes. Last, we present a theoretical analysis and report our experimental results. Both lines of evidence demonstrate that our framework effectively approximates the proposed influence graph summarization objective while outperforming previous methods in a typical scenario of visually mining academic citation networks.
    Comment: to appear in IEEE International Conference on Data Mining (ICDM), Shenzhen, China, December 201
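    A toy version of the flow-aggregation idea, assuming the influence graph is a weighted NetworkX DiGraph and that a node clustering is already available; these are illustrative simplifications, since the paper instead optimizes a formal flow-based summarization objective.

```python
import networkx as nx

def summarize_influence_graph(G, cluster_of):
    """Collapse a large influence DiGraph into a small summary graph whose
    edge weights aggregate the influence flow between clusters."""
    S = nx.DiGraph()
    for u, v, data in G.edges(data=True):
        cu, cv = cluster_of[u], cluster_of[v]
        if cu == cv:
            continue                      # intra-cluster flow is absorbed
        w = data.get("weight", 1.0)
        if S.has_edge(cu, cv):
            S[cu][cv]["weight"] += w      # accumulate inter-cluster flow
        else:
            S.add_edge(cu, cv, weight=w)
    return S
```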

    Taming Horizontal Instability in Merge Trees: On the Computation of a Comprehensive Deformation-based Edit Distance

    Full text link
    Comparative analysis of scalar fields in scientific visualization often involves distance functions on topological abstractions. This paper focuses on the merge tree abstraction (representing the nesting of sub- or superlevel sets) and proposes applying the unconstrained deformation-based edit distance. Previous approaches on merge trees often suffer from instability: small perturbations in the data can lead to large distances between the abstractions. While some existing methods can handle so-called vertical instability, the unconstrained deformation-based edit distance addresses both vertical and horizontal instabilities, the latter also called saddle swaps. We establish that computing this distance is NP-complete and provide an integer linear program formulation for its computation. Experimental results on the TOSCA shape-matching ensemble provide evidence for the stability of the proposed distance. We thereby showcase the potential of handling saddle swaps for the comparison of scalar fields through merge trees.
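    To make the integer-linear-program angle concrete, here is a hedged skeleton in Python using PuLP: binary variables match nodes across the two trees, each node is matched at most once, and unmatched nodes pay a deletion cost. The paper's actual formulation additionally encodes the deformation constraints that make the matching respect merge-tree structure; those are omitted here, so this is an illustrative relaxation, not the proposed distance.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

def edit_distance_ilp(costs, del1, del2):
    """Assignment-style ILP skeleton for a tree edit distance:
    costs[i][j] is the relabel cost, del1/del2 are per-node deletion costs.
    Structural (ancestry/deformation) constraints are deliberately omitted."""
    n, m = len(del1), len(del2)
    prob = LpProblem("edit_distance", LpMinimize)
    x = {(i, j): LpVariable(f"x_{i}_{j}", cat=LpBinary)
         for i in range(n) for j in range(m)}
    # objective: relabel cost for matched pairs + deletion cost for the rest
    prob += (lpSum(costs[i][j] * x[i, j] for i in range(n) for j in range(m))
             + lpSum(del1[i] * (1 - lpSum(x[i, j] for j in range(m)))
                     for i in range(n))
             + lpSum(del2[j] * (1 - lpSum(x[i, j] for i in range(n)))
                     for j in range(m)))
    for i in range(n):                    # each node of tree 1 matched <= once
        prob += lpSum(x[i, j] for j in range(m)) <= 1
    for j in range(m):                    # each node of tree 2 matched <= once
        prob += lpSum(x[i, j] for i in range(n)) <= 1
    prob.solve()
    return prob.objective.value()
```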

    Data complexity measured by principal graphs

    Full text link
    How can one measure the complexity of a finite set of vectors embedded in a multidimensional space? This is a non-trivial question which can be approached in many different ways. Here we suggest a set of data complexity measures using universal approximators, principal cubic complexes. Principal cubic complexes generalise the notion of principal manifolds for datasets with non-trivial topologies. The type of a principal cubic complex is determined by its dimension and a grammar of elementary graph transformations; the simplest grammar produces principal trees. We introduce three natural types of data complexity: 1) geometric (deviation of the data's approximator from some "idealized" configuration, such as deviation from harmonicity); 2) structural (how many elements of a principal graph are needed to approximate the data); and 3) construction complexity (how many applications of elementary graph transformations are needed to construct the principal object starting from the simplest one). We compute these measures for several simulated and real-life data distributions and show them in "accuracy-complexity" plots, helping to optimize the accuracy/complexity ratio. We discuss various issues connected with measuring data complexity. Software for computing data complexity measures from principal cubic complexes is provided as well.
    Comment: Computers and Mathematics with Applications, in press
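    As an illustration of the structural complexity measure (how many graph elements are needed to reach a given approximation accuracy), here is a hedged Python sketch that substitutes k-means centroids for principal-graph nodes; the `target_mse` threshold and the k-means stand-in are assumptions for illustration, not the paper's construction.

```python
import numpy as np
from sklearn.cluster import KMeans

def structural_complexity(X, target_mse, max_nodes=100):
    """Smallest number of nodes needed to approximate the data within a given
    accuracy -- a stand-in for structural complexity, with k-means centroids
    playing the role of principal-graph nodes."""
    for k in range(1, max_nodes + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        mse = km.inertia_ / len(X)        # mean squared approximation error
        if mse <= target_mse:
            return k, mse                 # a point on the accuracy-complexity plot
    return max_nodes, mse
```

    Sweeping `target_mse` and plotting the returned pairs reproduces the shape of the "accuracy-complexity" plots the abstract refers to.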

    Customizable tubular model for n-furcating blood vessels and its application to 3D reconstruction of the cerebrovascular system

    Get PDF
    Understanding the 3D cerebral vascular network is one of the pressing issues affecting the diagnosis of various systemic disorders and is helpful in clinical therapeutic strategies. Unfortunately, the existing software on radiological workstations does not meet the expectations of radiologists, who require a computerized system for detailed, quantitative analysis of the human cerebrovascular system in 3D and a standardized geometric description of its components. In this study, we show a method that uses 3D image data from contrast-enhanced magnetic resonance imaging to create a geometrical reconstruction of the vessels and a parametric description of the reconstructed vessel segments. First, the method isolates the vascular system using controlled morphological growing and performs skeleton extraction and optimization. Then, around the optimized skeleton branches, it creates tubular objects optimized for quality and accuracy of matching with the originally isolated vascular data. Finally, it optimizes the joints on n-furcating vessel segments. As a result, the algorithm gives a complete description of the shape and spatial position of each cerebrovascular system segment, as well as its position relative to other segments and to other anatomical structures. Our method is highly customizable and in principle allows reconstructing vascular structures from any 2D or 3D data. The algorithm addresses shortcomings of currently available methods, including failures to reconstruct the vessel mesh near junctions, and is free of mesh collisions in high-curvature vessels. It also introduces a number of optimizations in the vessel skeletonization, leading to a smoother and more accurate model of the vessel network. We have tested the method on 20 datasets from the public magnetic resonance angiography image database and show that it allows for repeatable and robust segmentation of the vessel network and for computing vascular lateralization indices.
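    A minimal sketch of the tubular-object step: one ring of vertices per skeleton point, oriented by the local tangent. The function name and the frame construction are illustrative assumptions; the paper additionally optimizes the tubes against the isolated vascular data and handles the joints at n-furcations.

```python
import numpy as np

def tube_around_skeleton(centerline, radii, n_sides=16):
    """Build tubular cross-sections around one vessel skeleton branch:
    centerline is an (n, 3) array of points, radii gives one radius per point.
    Returns an (n, n_sides, 3) grid of ring vertices."""
    centerline = np.asarray(centerline, dtype=float)
    tangents = np.gradient(centerline, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    rings = []
    for p, t, r in zip(centerline, tangents, radii):
        # seed the local frame with any vector not parallel to the tangent
        a = np.array([1.0, 0, 0]) if abs(t[0]) < 0.9 else np.array([0, 1.0, 0])
        u = np.cross(t, a); u /= np.linalg.norm(u)
        v = np.cross(t, u)
        angles = np.linspace(0, 2 * np.pi, n_sides, endpoint=False)
        ring = p + r * (np.outer(np.cos(angles), u) + np.outer(np.sin(angles), v))
        rings.append(ring)
    return np.stack(rings)
```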

    Depth from Monocular Images using a Semi-Parallel Deep Neural Network (SPDNN) Hybrid Architecture

    Get PDF
    Deep neural networks have been applied to a wide range of problems in recent years. In this work, a Convolutional Neural Network (CNN) is applied to the problem of determining depth from a single camera image (monocular depth). Eight different networks are designed to perform depth estimation, each of them suited to a particular feature level; networks with different pooling sizes capture different feature levels. After designing a set of networks, these models may be combined into a single network topology using graph optimization techniques. This "Semi-Parallel Deep Neural Network (SPDNN)" eliminates duplicated common network layers and can be further optimized by retraining to achieve an improved model compared to the individual topologies. In this study, four SPDNN models are trained and evaluated in two stages on the KITTI dataset. The ground truth images in the first part of the experiment are provided by the benchmark, while for the second part the ground truth images are the depth map results from applying a state-of-the-art stereo matching method. The results of this evaluation demonstrate that using post-processing techniques to refine the target of the network increases the accuracy of depth estimation on individual mono images. The second evaluation shows that using segmentation data alongside the original data as input can improve the depth estimation results to a point where performance is comparable with stereo depth estimation. The computational time is also discussed in this study.
    Comment: 44 pages, 25 figures
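    A toy PyTorch sketch of the semi-parallel idea: layers that the individual networks would duplicate are merged into one shared stem, while branches with different pooling sizes (different feature levels) stay parallel and are fused at the end. All layer sizes are illustrative, not those of the paper.

```python
import torch
import torch.nn as nn

class TinySPDNN(nn.Module):
    """Shared stem replaces layers the per-level networks would duplicate;
    parallel branches with different pooling sizes cover different feature
    levels and are fused into a single depth map."""
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(            # merged duplicated layers
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.branches = nn.ModuleList([          # one branch per feature level
            nn.Sequential(nn.MaxPool2d(k), nn.Conv2d(16, 8, 3, padding=1),
                          nn.ReLU(), nn.Upsample(scale_factor=k))
            for k in (2, 4, 8)
        ])
        self.head = nn.Conv2d(8 * 3, 1, 1)       # fuse branches into depth

    def forward(self, x):
        f = self.shared(x)
        return self.head(torch.cat([b(f) for b in self.branches], dim=1))

depth = TinySPDNN()(torch.randn(1, 3, 64, 64))   # (1, 1, 64, 64) depth map
```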