16,992 research outputs found

    A predictive approach for a real-time remote visualization of large meshes

    Get PDF
    Remote access to large meshes has been the subject of study for several years. In this paper we propose a contribution to the problem of remote mesh viewing, working on triangular meshes. After a review of existing remote-viewing methods, we propose a visualization approach based on a client-server architecture in which almost all operations are performed on the server. Our approach includes three main steps. The first step partitions the original mesh, generating several fragments small enough to fit within the assumed Transmission Control Protocol (TCP) window size of the network. The second step, called pre-simplification of the partitioned mesh, generates simplified models of the fragments at different levels of detail; it aims to accelerate the visualization process when a client (which we also call the remote user) requests the visualization of a specific area of interest. The final step is the actual visualization of the area that interests the client, who can view the area of interest more accurately and the out-of-context areas less accurately. In this step, reconstructing the object while taking the connectivity of fragments into account before simplifying a fragment is necessary. (Pestiv-3D project)
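
    The abstract above outlines a partition / pre-simplify / serve-by-area-of-interest pipeline. The following is a minimal, hypothetical sketch of that kind of server-side pipeline; all names (Fragment, partition_mesh, pre_simplify, serve_view) and the byte budget are invented for illustration, and the placeholder decimation does not reproduce the paper's simplification algorithms.

    # Hypothetical sketch of a partition -> pre-simplify -> serve pipeline.
    from dataclasses import dataclass, field

    MAX_FRAGMENT_BYTES = 64 * 1024      # assumed per-fragment transfer budget

    @dataclass
    class Fragment:
        triangles: list                              # triangles of the original mesh
        lods: dict = field(default_factory=dict)     # detail ratio -> simplified triangles

    def partition_mesh(triangles, budget=MAX_FRAGMENT_BYTES, bytes_per_tri=36):
        """Split the triangle list into fragments that fit the transfer budget."""
        per_fragment = max(1, budget // bytes_per_tri)
        return [Fragment(triangles[i:i + per_fragment])
                for i in range(0, len(triangles), per_fragment)]

    def pre_simplify(fragments, ratios=(1.0, 0.5, 0.1)):
        """Precompute coarser versions of each fragment (placeholder decimation)."""
        for frag in fragments:
            for ratio in ratios:
                keep = max(1, int(len(frag.triangles) * ratio))
                frag.lods[ratio] = frag.triangles[:keep]   # stand-in for real decimation
        return fragments

    def serve_view(fragments, in_area_of_interest):
        """Send high detail inside the area of interest, low detail elsewhere."""
        return [frag.lods[1.0] if in_area_of_interest(frag) else frag.lods[0.1]
                for frag in fragments]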

    A virtual workspace for hybrid multidimensional scaling algorithms

    Get PDF
    In visualising multidimensional data, it is well known that different types of data require different algorithms to process them. Data sets might be distinguished according to volume, variable types and distribution, and each of these characteristics imposes constraints upon the choice of applicable algorithms for their visualisation. Previous work has shown that a hybrid algorithmic approach can be successful in addressing the impact of data volume on the feasibility of multidimensional scaling (MDS). This suggests that hybrid combinations of appropriate algorithms might also successfully address other characteristics of data. This paper presents a system and framework in which a user can easily explore hybrid algorithms and the data flowing through them. Visual programming and a novel algorithmic architecture let the user semi-automatically define data flows and the co-ordination of multiple views.
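
    One well-known way to hybridise MDS for large data volumes is to run a full-quality layout on a small random sample and then place the remaining points by interpolation against their nearest sampled neighbours. The sketch below illustrates that general idea only; it assumes NumPy and scikit-learn are available and is not the specific algorithmic combination or architecture of the workspace described above.

    # Hypothetical hybrid MDS sketch: full MDS on a sample, interpolation for the rest.
    import numpy as np
    from sklearn.manifold import MDS

    def hybrid_mds(data, sample_size=500, k=5, seed=0):
        data = np.asarray(data, dtype=float)
        rng = np.random.default_rng(seed)
        n = len(data)
        sample_idx = rng.choice(n, size=min(sample_size, n), replace=False)

        # Expensive, high-quality layout on the sample only.
        sample_pos = MDS(n_components=2, random_state=seed).fit_transform(data[sample_idx])

        # Cheap placement of every remaining point: distance-weighted average of
        # the 2D positions of its k nearest sampled neighbours.
        pos = np.zeros((n, 2))
        pos[sample_idx] = sample_pos
        for i in np.setdiff1d(np.arange(n), sample_idx):
            d = np.linalg.norm(data[sample_idx] - data[i], axis=1)
            nearest = np.argsort(d)[:k]
            w = 1.0 / (d[nearest] + 1e-9)
            pos[i] = (sample_pos[nearest] * w[:, None]).sum(axis=0) / w.sum()
        return pos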

    A visual workspace for constructing hybrid MDS algorithms and coordinating multiple views

    Get PDF
    Data can be distinguished according to volume, variable types and distribution, and each of these characteristics imposes constraints upon the choice of applicable algorithms for their visualisation. This has led to an abundance of often disparate algorithmic techniques. Previous work has shown that a hybrid algorithmic approach can be successful in addressing the impact of data volume on the feasibility of multidimensional scaling (MDS). This paper presents a system and framework in which a user can easily explore algorithms as well as their hybrid conjunctions and the data flowing through them. Visual programming and a novel algorithmic architecture let the user semi-automatically define data flows and the co-ordination of multiple views of algorithmic and visualisation components. We propose that our approach has two main benefits: significant improvements in run times of MDS algorithms can be achieved, and intermediate views of the data and the visualisation program structure can provide greater insight and control over the visualisation process.
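
    The coordination of multiple views mentioned above is commonly realised with a publish/subscribe model: each algorithmic stage publishes its intermediate output, and every registered view refreshes itself from it. The tiny sketch below is an invented illustration of that pattern, not code from the workspace; class and method names (CoordinationModel, publish, refresh) are hypothetical.

    # Hypothetical coordination-of-views sketch (publish/subscribe).
    class CoordinationModel:
        def __init__(self):
            self._views = []
            self._stages = {}          # stage name -> latest intermediate data

        def register_view(self, view):
            self._views.append(view)

        def publish(self, stage, data):
            """Called by an algorithmic component when it produces intermediate output."""
            self._stages[stage] = data
            for view in self._views:
                view.refresh(stage, data)

    class ScatterplotView:
        def refresh(self, stage, data):
            print(f"scatterplot: redrawing {len(data)} points from stage '{stage}'")

    model = CoordinationModel()
    model.register_view(ScatterplotView())
    model.publish("sample-layout", [(0.1, 0.2), (0.3, 0.4)])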

    The Topology ToolKit

    Full text link
    This system paper presents the Topology ToolKit (TTK), a software platform designed for topological data analysis in scientific visualization. TTK provides a unified, generic, efficient, and robust implementation of key algorithms for the topological analysis of scalar data, including: critical points, integral lines, persistence diagrams, persistence curves, merge trees, contour trees, Morse-Smale complexes, fiber surfaces, continuous scatterplots, Jacobi sets, Reeb spaces, and more. TTK is easily accessible to end users due to a tight integration with ParaView. It is also easily accessible to developers through a variety of bindings (Python, VTK/C++) for fast prototyping, or through direct, dependency-free C++, to ease integration into pre-existing complex systems. While developing TTK, we faced several algorithmic and software engineering challenges, which we document in this paper. In particular, we present an algorithm for the construction of a discrete gradient that complies with the critical points extracted in the piecewise-linear setting. This algorithm guarantees combinatorial consistency across the topological abstractions supported by TTK and, importantly, a unified implementation of topological data simplification for multi-scale exploration and analysis. We also present a cached triangulation data structure that supports time-efficient and generic traversals, which self-adjusts its memory usage on demand for input simplicial meshes and which implicitly emulates a triangulation for regular grids with no memory overhead. Finally, we describe an original software architecture, which guarantees memory-efficient and direct access to TTK features, while still providing researchers with powerful and easy bindings and extensions. TTK is open source (BSD license) and its code, online documentation and video tutorials are available on TTK's website.
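
    To make the notion of piecewise-linear critical points mentioned above concrete, the sketch below classifies the interior vertices of a 2D scalar grid by counting sign changes around each vertex's link (a textbook lower/upper-link test). It is an illustration only, not TTK code, and it does not implement TTK's discrete gradient construction; the field is assumed to have no exactly equal neighbouring values (the example uses a tiny linear term to break ties).

    # Illustrative classification of piecewise-linear critical points on a 2D grid.
    import numpy as np

    # Link of an interior vertex in the Freudenthal triangulation, in cyclic order.
    LINK = [(1, 0), (1, 1), (0, 1), (-1, 0), (-1, -1), (0, -1)]

    def classify(field):
        """Return {(i, j): 'minimum' | 'maximum' | 'saddle'} for interior vertices."""
        field = np.asarray(field, dtype=float)
        out = {}
        for i in range(1, field.shape[0] - 1):
            for j in range(1, field.shape[1] - 1):
                lower = [field[i + di, j + dj] < field[i, j] for di, dj in LINK]
                # Count sign changes around the cyclic link.
                changes = sum(lower[k] != lower[(k + 1) % len(LINK)] for k in range(len(LINK)))
                if not any(lower):
                    out[(i, j)] = "minimum"
                elif all(lower):
                    out[(i, j)] = "maximum"
                elif changes >= 4:      # at least two lower-link components
                    out[(i, j)] = "saddle"
        return out

    # Example: f(x, y) = x^2 - y^2, sampled on a 9x9 grid.
    x, y = np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9), indexing="ij")
    print(classify(x * x - y * y + 1e-6 * x))   # expected: a saddle at the grid centre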

    Improved Optimal and Approximate Power Graph Compression for Clearer Visualisation of Dense Graphs

    Full text link
    Drawings of highly connected (dense) graphs can be very difficult to read. Power Graph Analysis offers an alternative way to draw a graph, in which sets of nodes with common neighbours are shown grouped into modules. An edge connected to the module then implies a connection to each member of the module. Thus, the entire graph may be represented with much less clutter and without loss of detail. A recent experimental study has shown that such lossless compression of dense graphs makes it easier to follow paths. However, computing optimal power graphs is difficult. In this paper, we show that computing the optimal power graph with only one module is NP-hard, and therefore likely NP-hard in the general case. We give an ILP model for power graph computation and discuss why ILP and CP techniques are poorly suited to the problem. Instead, we are able to find optimal solutions much more quickly using a custom search method. We also show how to restrict this type of search to allow only limited back-tracking, providing a heuristic that has better speed and better results than previously known heuristics. Comment: Extended technical report accompanying the PacificVis 2013 paper of the same name.
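
    The core compression idea described above can be made concrete with a deliberately naive grouping step: nodes with identical neighbourhoods form a module, and one power edge per shared neighbour replaces the module members' individual edges. The sketch below is only that naive illustration; it is not the ILP model or the custom search method of the paper, and the function name is invented.

    # Naive power-graph-style grouping: merge nodes with identical neighbourhoods.
    from collections import defaultdict

    def find_trivial_modules(edges):
        """edges: iterable of (u, v) pairs of an undirected graph."""
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)

        # Nodes whose neighbourhoods coincide are candidates for one module.
        modules = defaultdict(list)
        for node, nbrs in adj.items():
            modules[frozenset(nbrs - {node})].append(node)

        for nbrs, members in modules.items():
            if len(members) > 1:
                # One power edge per shared neighbour replaces len(members) edges.
                saved = (len(members) - 1) * len(nbrs)
                print(f"module {sorted(members)}: shared neighbours {sorted(nbrs)}, "
                      f"saves {saved} edges if applied alone")

    # Example: the biclique {a, b} x {x, y, z} yields two candidate modules.
    find_trivial_modules([("a", n) for n in "xyz"] + [("b", n) for n in "xyz"])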