    The Topology ToolKit

    This system paper presents the Topology ToolKit (TTK), a software platform designed for topological data analysis in scientific visualization. TTK provides a unified, generic, efficient, and robust implementation of key algorithms for the topological analysis of scalar data, including: critical points, integral lines, persistence diagrams, persistence curves, merge trees, contour trees, Morse-Smale complexes, fiber surfaces, continuous scatterplots, Jacobi sets, Reeb spaces, and more. TTK is easily accessible to end users thanks to a tight integration with ParaView. It is also easily accessible to developers through a variety of bindings (Python, VTK/C++) for fast prototyping, or through direct, dependency-free C++ to ease integration into pre-existing complex systems. While developing TTK, we faced several algorithmic and software engineering challenges, which we document in this paper. In particular, we present an algorithm for the construction of a discrete gradient that complies with the critical points extracted in the piecewise-linear setting. This algorithm guarantees combinatorial consistency across the topological abstractions supported by TTK and, importantly, enables a unified implementation of topological data simplification for multi-scale exploration and analysis. We also present a cached triangulation data structure that supports time-efficient and generic traversals, self-adjusts its memory usage on demand for input simplicial meshes, and implicitly emulates a triangulation for regular grids with no memory overhead. Finally, we describe an original software architecture that guarantees memory-efficient, direct access to TTK features while still offering researchers powerful and easy bindings and extensions. TTK is open source (BSD license), and its code, online documentation, and video tutorials are available on TTK's website.
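
    As a concrete illustration of the end-user accessibility described above, here is a minimal sketch of driving TTK from ParaView's Python scripting interface. The filter name TTKPersistenceDiagram, the ScalarField property, and the file names are assumptions based on the description above rather than verified API; TTK's documentation gives the exact names.

        # Minimal sketch, assuming TTK is installed as a ParaView plugin and that
        # a persistence-diagram filter is exposed under a TTK-prefixed name.
        from paraview.simple import *  # ParaView's Python scripting interface

        # Load a scalar field defined on a regular grid or a simplicial mesh.
        data = OpenDataFile("scalar_field.vti")

        # Compute the persistence diagram of the chosen point-data array
        # (filter and property names are assumed, see the note above).
        diagram = TTKPersistenceDiagram(Input=data)
        diagram.ScalarField = ["POINTS", "data"]

        # Save the diagram for later inspection or persistence-driven simplification.
        SaveData("persistence_diagram.vtu", proxy=diagram)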

    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions with great functional overlap. However, there is little system interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper makes two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to better understand the whole rather than just its parts. The second is a reference domain model for discussing and describing solutions: the Analysis Domain Model.

    Advancing Creative Visual Thinking with Constructive Function-based Modelling.

    Modern education technologies must reflect the realities of the modern digital age. The juxtaposition of real and synthetic (computer-generated) worlds, as well as a greater emphasis on the visual dimension, are especially important characteristics that have to be taken into account in learning and teaching. We describe the ways in which an approach to constructive shape modelling can be used to advance creative visual thinking in artistic and technical education. This approach assumes the use of a simple programming language or interactive software tools for creating a shape model, generating its images, and finally fabricating a real object from that model. It can be considered an educational technology suitable not only for children and students but also for researchers, artists, and designers. The corresponding modelling language and software tools are being developed within the international HyperFun Project. These tools are easy to use by students of different ages, specializations, and abilities, and can easily be extended and adapted for various educational purposes in different areas.
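
    To make the constructive, function-based idea concrete, here is a small conceptual sketch in Python rather than in the HyperFun language itself: a shape is defined by a real-valued function that is non-negative inside the solid, and shapes are combined with set-theoretic operations (min/max are used here as a simple stand-in for the R-functions of function representation).

        # Conceptual sketch of constructive function-based (FRep) modelling;
        # not HyperFun syntax, just the underlying idea in plain Python.
        # A shape is a function f(x, y, z) that is >= 0 inside and < 0 outside.

        def sphere(cx, cy, cz, r):
            return lambda x, y, z: r**2 - ((x - cx)**2 + (y - cy)**2 + (z - cz)**2)

        def block(hx, hy, hz):
            # Axis-aligned block centred at the origin with half-sizes hx, hy, hz.
            return lambda x, y, z: min(hx - abs(x), hy - abs(y), hz - abs(z))

        def union(f, g):
            return lambda x, y, z: max(f(x, y, z), g(x, y, z))

        def subtract(f, g):
            return lambda x, y, z: min(f(x, y, z), -g(x, y, z))

        # A block with a spherical bite taken out of one corner.
        shape = subtract(block(1.0, 1.0, 1.0), sphere(1.0, 1.0, 1.0, 0.7))
        print(shape(0.0, 0.0, 0.0) >= 0)  # True: the centre remains inside the solid
        print(shape(0.9, 0.9, 0.9) >= 0)  # False: this corner was removed by the sphere

    Such a model can then be rendered by polygonizing the zero level set of the defining function, or fabricated as a physical object, which is the final step mentioned in the abstract.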

    animation: An R Package for Creating Animations and Demonstrating Statistical Methods

    Animated graphs that demonstrate statistical ideas and methods can both attract interest and aid understanding. In this paper we first discuss how animations can be related to statistical topics such as iterative algorithms, random simulations, (re)sampling methods, and dynamic trends; we then describe the approaches that may be used to create animations and give an overview of the R package animation, including its design, usage, and the statistical topics it covers. With the animation package, we can export the animations produced by R into a variety of formats, such as a web page, a GIF animation, a Flash movie, a PDF document, or an MP4/AVI video, so that users can publish the animations fairly easily. The design of this package is flexible enough to be readily incorporated into web applications; for example, we can generate animations online with Rweb, which means we do not even need R to be installed locally to create animations. We show examples of the use of animations in teaching statistics and in the presentation of statistical reports using Sweave or knitr. In fact, this paper itself was written with the knitr and animation packages, and the animations are embedded in the PDF document, so that readers can watch them in real time as they read (Adobe Reader is required). Animations can add insight and interest to traditional static approaches to teaching statistics and reporting, making statistics a more interesting and appealing subject.
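
    The package itself is written for R; as a purely conceptual parallel (not the animation package's API), the frame-by-frame approach it describes can be sketched in Python with matplotlib: each frame redraws the state of a statistical process, and the sequence is exported to a shareable format such as GIF or MP4.

        # Conceptual parallel in Python (the paper's package is for R): animate a
        # statistical idea frame by frame and export it to a shareable format.
        import numpy as np
        import matplotlib.pyplot as plt
        from matplotlib.animation import FuncAnimation

        rng = np.random.default_rng(0)
        samples = rng.normal(loc=0.0, scale=1.0, size=200)

        fig, ax = plt.subplots()
        line, = ax.plot([], [], lw=2)
        ax.set_xlim(1, len(samples))
        ax.set_ylim(-1.0, 1.0)
        ax.set_xlabel("sample size n")
        ax.set_ylabel("running mean")

        def update(n):
            # Running mean of the first n draws: illustrates the law of large numbers.
            x = np.arange(1, n + 1)
            y = np.cumsum(samples[:n]) / x
            line.set_data(x, y)
            return (line,)

        anim = FuncAnimation(fig, update, frames=range(2, len(samples) + 1), blit=True)
        anim.save("running_mean.gif", writer="pillow", fps=20)  # or an .mp4 via ffmpeg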

    Optimizing Lossy Compression Rate-Distortion from Automatic Online Selection between SZ and ZFP

    With ever-increasing volumes of scientific data produced by HPC applications, significantly reducing data size is critical because of the limited capacity of storage space and potential bottlenecks on I/O or networks when writing/reading or transferring data. SZ and ZFP are the two leading lossy compressors available for scientific data sets. However, their performance is not consistent across different data sets, nor across different fields of the same data set: some fields are better compressed with SZ, while others are better compressed with ZFP. This situation raises the need for an automatic online (during compression) selection between SZ and ZFP with minimal overhead. In this paper, the automatic selection optimizes the rate-distortion, an important statistical quality metric based on the signal-to-noise ratio. To optimize for rate-distortion, we investigate the principles of SZ and ZFP. We then propose an efficient online, low-overhead selection algorithm that accurately predicts the compression quality of the two compressors in the early processing stages and selects the best-fit compressor for each data field. We implement the selection algorithm in an open-source library and evaluate the effectiveness of our proposed solution against plain SZ and ZFP in a parallel environment with 1,024 cores. Evaluation results on three data sets representing about 100 fields show that our selection algorithm improves the compression ratio by up to 70% at the same level of data distortion, thanks to a very accurate selection (around 99%) of the best-fit compressor with little overhead (less than 7% in the experiments).
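
    A minimal sketch of the per-field selection idea follows, in Python. The two predictor callables are hypothetical stand-ins for the paper's low-overhead estimation models and are not real SZ or ZFP API calls; only the selection logic and the PSNR-based distortion metric are illustrated.

        # Sketch of online, per-field compressor selection: cheaply predict the
        # rate-distortion of each compressor, then run only the predicted winner.
        import numpy as np

        def psnr(original: np.ndarray, reconstructed: np.ndarray) -> float:
            # Peak signal-to-noise ratio, the distortion metric behind rate-distortion.
            mse = float(np.mean((original - reconstructed) ** 2))
            if mse == 0.0:
                return float("inf")
            value_range = float(original.max() - original.min())
            return 20.0 * np.log10(value_range) - 10.0 * np.log10(mse)

        def select_compressor(field, predict_sz, predict_zfp, error_bound):
            # predict_sz / predict_zfp are hypothetical callables standing in for the
            # paper's early-stage prediction models; each returns an estimated PSNR
            # at a comparable bit rate. The higher predicted PSNR wins.
            return "SZ" if predict_sz(field, error_bound) >= predict_zfp(field, error_bound) else "ZFP"

        # Each field of a data set can then be routed independently, e.g.:
        # for name, field in dataset.items():
        #     choice = select_compressor(field, predict_sz, predict_zfp, error_bound=1e-3)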

    OpenAlea: A visual programming and component-based software platform for plant modeling

    As illustrated by the approaches presented during the 5th FSPM workshop (Prusinkiewicz and Hanan 2007, and this issue), the development of functional-structural plant models requires an increasing amount of computer modeling. These models are developed by different teams in various contexts and with different goals. Efficient and flexible computational frameworks are required to increase the interaction between these models, improve their reusability, and make it possible to compare them on identical data sets. In this paper, we present an open-source platform, OpenAlea, that provides a user-friendly environment for modelers as well as advanced deployment methods. OpenAlea allows researchers to build models using a visual programming interface and provides a set of tools and models dedicated to plant modeling. Models and algorithms are embedded in OpenAlea components with well-defined input and output interfaces that can easily be interconnected to form more complex models and to define more macroscopic components. The system architecture is based on a general-purpose, high-level, object-oriented scripting language, Python, widely used in other scientific areas. We briefly present the rationale that underlies the architectural design of this system and illustrate the use of the platform to assemble several heterogeneous model components and to rapidly prototype a complex modeling scenario.
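
    The component idea described above can be sketched generically in Python (this is not the actual OpenAlea API, just an illustration of wiring models with declared input and output ports into a small dataflow):

        # Generic sketch of component-based composition, not OpenAlea's real API.
        class Component:
            # A model wrapped with named input and output ports.

            def __init__(self, name, func, inputs, outputs):
                self.name, self.func = name, func
                self.inputs, self.outputs = inputs, outputs

            def __call__(self, **kwargs):
                # Run the wrapped model and label its results by output port name.
                return dict(zip(self.outputs, self.func(**kwargs)))

        # Two toy "models": one builds a plant axis, the other measures it.
        grow = Component("grow",
                         lambda length, segments: ([length / segments] * segments,),
                         inputs=["length", "segments"], outputs=["axis"])
        measure = Component("measure",
                            lambda axis: (sum(axis),),
                            inputs=["axis"], outputs=["total_length"])

        # Wiring: the "axis" output of grow feeds the "axis" input of measure.
        axis = grow(length=10.0, segments=5)["axis"]
        print(measure(axis=axis)["total_length"])  # 10.0

    In OpenAlea itself, this kind of wiring is done graphically through the visual programming interface, with Python available underneath for scripting and extension.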

    Engaging the Virtual Landscape: Toward an Experiential Approach to Exploring Place Through a Spatial Experience Engine

    The utilization of Geographic Information Systems (GIS) and other geospatial technologies in historical inquiry and the humanities has led to a number of projects that explore digital representations of past landscapes and places as platforms for synthesizing and representing historical and geographic information. Recent advancements in geovisualization, immersive environments, and virtual reality offer the opportunity to generate digital representations of cultural and physical landscapes and to embed those virtual landscapes with information and knowledge from multiple GIS sources. The development of these technologies and their application to historical research have opened up new opportunities to synthesize historical records from disparate sources, represent these sources spatially in digital form, and embed into those spatial representations the qualitative data that is often crucial to historical interpretation.

    This dissertation explores the design and development of a serious game-based virtual engine, the Spatial Experience Engine (SEE), that provides an immersive and interactive platform for an experiential approach to exploring and understanding place. Through a case study focused on the late nineteenth-century urban landscape of Morgantown, West Virginia, the implementation of the SEE discussed in this dissertation demonstrates a compelling platform for building and exploring complex virtual landscapes enhanced with spatialized information and multimedia. The SEE not only provides an alternative approach for scholars exploring the spatial turn in history and a humanistic, experiential analysis of historical places, but its flexibility and extensibility also offer the potential for future implementations to explore a wide range of research questions related to the representation of geographic information within an immersive and interactive virtual landscape.