
    A Testing Environment for Continuous Colormaps

    Many computer science disciplines (e.g., combinatorial optimization, natural language processing, and information retrieval) use standard or established test suites for evaluating algorithms. In visualization, similar approaches have been adopted in some areas (e.g., volume visualization), while user testimonies and empirical studies have been the dominant means of evaluation in most other areas, such as designing colormaps. In this paper, we propose to establish a test suite for evaluating the design of colormaps. With such a suite, users can observe the effects when different continuous colormaps are applied to planar scalar fields that may exhibit various characteristic features, such as jumps, local extrema, ridge or valley lines, different distributions of scalar values, different gradients, different signal frequencies, different levels of noise, and so on. The suite also includes an expansible collection of real-world data sets, including the most popular data for colormap testing in the visualization literature. The test suite has been integrated into a web-based application for creating continuous colormaps (https://ccctool.com/), facilitating close inter-operation between the design and evaluation processes. This new facility complements traditional evaluation methods such as user testimonies and empirical studies.
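    The kind of synthetic test case the abstract describes can be sketched in a few lines of numpy: generate a planar scalar field with characteristic features (a jump, a ridge, local extrema), then push it through a continuous colormap. This is only an illustrative sketch of the idea, not the CCC-Tool's actual test suite; the function names and the three-stop diverging colormap are invented for the example.

```python
import numpy as np

def test_field(n=128):
    """Synthetic planar scalar field with local extrema, a sharp jump,
    and a horizontal ridge -- the kinds of features a colormap test
    suite exposes a candidate colormap to."""
    y, x = np.mgrid[0:n, 0:n] / (n - 1)
    field = np.sin(4 * np.pi * x) * np.cos(4 * np.pi * y)   # local extrema
    field += np.where(x > 0.5, 1.0, 0.0)                    # sharp jump
    field += np.exp(-((y - 0.5) ** 2) * 50)                 # horizontal ridge
    return field

def apply_colormap(field, stops):
    """Map normalized scalars through a piecewise-linear RGB colormap.
    `stops` is a (k, 3) array of RGB control colors, evenly spaced in [0, 1]."""
    f = (field - field.min()) / (field.max() - field.min())  # normalize to [0, 1]
    k = len(stops) - 1
    idx = np.minimum((f * k).astype(int), k - 1)
    t = f * k - idx
    return stops[idx] * (1 - t[..., None]) + stops[idx + 1] * t[..., None]

# A simple blue-white-red diverging colormap as three control colors.
diverging = np.array([[0.0, 0.0, 1.0], [1.0, 1.0, 1.0], [1.0, 0.0, 0.0]])
rgb = apply_colormap(test_field(), diverging)
```

    Comparing how different `stops` arrays render the same feature-rich field is the essence of the proposed evaluation loop.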

    Data Painter: A Tool for Colormap Interaction

    The choice of a mapping from data to color should involve careful consideration in order to maximize the user's understanding of the underlying data. It is desirable for features within the data to be visually separable and identifiable. Current practice involves selecting a mapping from predefined colormaps or coding specific colormaps using software such as MATLAB. The purposes of this paper are to introduce interactive operations for colormaps that enable users to create more visually distinguishable pixel-based visualizations, and to describe our tool, Data Painter, which provides a fast, easy-to-use framework for defining these color mappings. We demonstrate the use of the tool to create colormaps for various application areas and compare it to existing color mapping methods. We present a new objective measure to evaluate their efficacy.
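    The core idea of painting a colormap so that chosen data ranges become visually separable can be sketched as a lookup table in which each user-selected value interval receives its own distinct color. This is a minimal stand-in for the interaction Data Painter provides, not its actual implementation; all names and the example ranges are hypothetical.

```python
import numpy as np

def paint_colormap(ranges, colors, resolution=256):
    """Build a color lookup table where each user-selected value range gets
    its own color. `ranges` is a list of (lo, hi) intervals in [0, 1];
    values outside every range stay a neutral gray."""
    lut = np.full((resolution, 3), 0.5)           # neutral gray background
    xs = np.linspace(0.0, 1.0, resolution)
    for (lo, hi), color in zip(ranges, colors):
        lut[(xs >= lo) & (xs <= hi)] = color
    return lut

# Hypothetical example: make two data features visually separable.
lut = paint_colormap(
    ranges=[(0.0, 0.3), (0.7, 1.0)],
    colors=[np.array([0.0, 0.4, 1.0]), np.array([1.0, 0.5, 0.0])],
)
data = np.random.default_rng(0).random((64, 64))           # toy scalar field
image = lut[np.minimum((data * 255).astype(int), 255)]     # pixel-based rendering
```

    Interactively moving the interval boundaries and re-rendering is the kind of operation the tool makes fast.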

    Evolving time surfaces and tracking mixing indicators for flow visualization

    The complexity of large-scale computational fluid dynamics (CFD) simulations demands powerful tools to investigate the numerical results. To analyze and understand these voluminous results, we need to visualize the 3D flow field. We chose to use a visualization technique called Time Surfaces. A time surface is a set of surfaces swept by an initial seed surface for a given number of timesteps. We use a front tracking approach where the points of an initial surface are advanced in a Lagrangian fashion. To maintain a smooth time surface, our method requires surface refinement operations that either split triangle edges, adjust narrow triangles, or delete small triangles. In the conventional approach of edge splitting, we compute the length of an edge, and split that edge if it has exceeded a certain threshold length. In our new approach, we examine the angle between the two vectors at a given edge. We split the edge if the vectors are diverging from one another. This vector angle criterion enables us to refine an edge before advancing the surface front. Refining a surface prior to advancing it has the effect of minimizing the amount of interpolation error. In addition, unlike the edge length criterion, which yields a triangular mesh with even vertex distribution throughout the surface, the vector angle criterion yields a triangular mesh that has fewer vertices where the vector field is flat and more vertices where the vector field is curved. Motivated by the evaluation and the analysis of flow field mixing quantities, this work explores two types of quantitative measurements. First, we look at Ottino's mixing indicators, which measure the degree of mixing of a fluid by quantifying the rate at which a sample fluid blob stretches in a flow field over a period of time. Using the geometry of the time surfaces we generated, we are able to easily evaluate otherwise complicated mixing quantities. 
    Second, we compute the curvature and torsion of the velocity field itself. Visualizing the distribution and intensity of the curvature and torsion scalar fields enables us to identify regions of strong and low mixing. To better observe these scalar fields, we designed a multi-scale colormap that emphasizes small, medium, and large values simultaneously. We test our time surface method and analyze fluid flow mixing quantities on two CFD datasets: a stirred tank simulation and a BP oil spill simulation.
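    The vector-angle edge-splitting criterion described above reduces to a simple per-edge test. The sketch below is an illustrative reading of that criterion (the function name and the 15-degree threshold are assumptions, not values from the thesis): compare the flow vectors at an edge's two endpoints and flag the edge for splitting when they diverge beyond a threshold angle, so refinement happens before the front is advected.

```python
import numpy as np

def should_split(v0, v1, angle_threshold_deg=15.0):
    """Vector-angle refinement criterion: split an edge when the velocity
    vectors at its two endpoints diverge by more than a threshold angle,
    so the surface is refined *before* the front is advanced."""
    v0, v1 = np.asarray(v0, float), np.asarray(v1, float)
    cosang = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return angle > angle_threshold_deg

parallel = should_split([1.0, 0.0, 0.0], [1.0, 0.05, 0.0])   # nearly parallel flow
diverging = should_split([1.0, 0.0, 0.0], [0.5, 1.0, 0.0])   # strongly diverging flow
```

    Unlike an edge-length test, this criterion is insensitive to how far apart the endpoints are, which is why it concentrates vertices where the vector field curves.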

    Scout: a hardware-accelerated system for quantitatively driven visualization and analysis

    Quantitative techniques for visualization are critical to the successful analysis of both acquired and simulated scientific data. Many visualization techniques rely on indirect mappings, such as transfer functions, to produce the final imagery. In many situations, it is preferable and more powerful to express these mappings as mathematical expressions, or queries, that can then be directly applied to the data. In this paper, we present a hardware-accelerated system that provides such capabilities and exploits current graphics hardware for portions of the computational tasks that would otherwise be executed on the CPU. In our approach, direct programming of the graphics processor using a concise data-parallel language gives scientists the capability to efficiently explore and visualize data sets.
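    The notion of a query as a mathematical expression applied data-parallel across every sample can be illustrated with a tiny CPU stand-in (Scout itself compiles such expressions for the GPU; this sketch, its `run_query` name, and the Python/numpy expression syntax are all assumptions for illustration only).

```python
import numpy as np

def run_query(expr, **fields):
    """Evaluate a query written as a mathematical expression over named
    fields, applied data-parallel across every sample. A CPU stand-in for
    GPU evaluation; the expression syntax here is just Python/numpy."""
    env = {"np": np, **fields}
    return eval(expr, {"__builtins__": {}}, env)

density = np.linspace(0.0, 1.0, 5)
temperature = np.array([300.0, 400.0, 500.0, 600.0, 700.0])

# "Show pressure only where density exceeds 0.5" as a single expression,
# rather than as an indirect transfer-function lookup.
result = run_query("np.where(density > 0.5, density * temperature, 0.0)",
                   density=density, temperature=temperature)
```

    Expressing the mapping as a query makes it directly inspectable and composable, which is the advantage the paper argues for over indirect mappings.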

    Firefly: a browser-based interactive 3D data visualization tool for millions of data points

    We present Firefly, a new browser-based interactive tool for visualizing 3D particle data sets. On a typical personal computer, Firefly can simultaneously render and enable real-time interactions with > ~10 million particles, and can interactively explore datasets with billions of particles using the included custom-built octree render engine. Once created, viewing a Firefly visualization requires no installation and is immediately usable in most modern internet browsers simply by visiting a URL. As a result, a Firefly visualization works out-of-the-box on most devices, including smartphones and tablets. Firefly is primarily developed for researchers to explore their own data, but can also be useful for communicating results to researchers/collaborators and as an effective public outreach tool. Every element of the user interface can be customized and disabled, enabling easy adaptation of the same visualization for different audiences with little additional effort. Creating a new Firefly visualization is simple with the provided Python data pre-processor (PDPP) that translates input data to a Firefly-compatible format and provides helpful methods for hosting instances of Firefly both locally and on the internet. In addition to visualizing the positions of particles, users can visualize vector fields (e.g., velocities) and also filter and color points by scalar fields. We share three examples of Firefly applied to astronomical datasets: 1) the FIRE cosmological zoom-in simulations, 2) the SDSS galaxy catalog, and 3) Gaia DR3. A gallery of additional interactive demos is available at https://alexbgurvi.ch/Firefly. (25 pages, 8 figures; submitted to ApJS.)
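    The per-particle filtering and scalar colormapping described above amounts to a mask plus a normalization step. The sketch below shows the idea in plain numpy; it is not Firefly's actual API (the function name and grayscale ramp are invented for illustration).

```python
import numpy as np

def filter_and_color(positions, scalar, lo, hi):
    """Keep only particles whose scalar value lies in [lo, hi] and assign
    each a grayscale color from the normalized scalar -- the same kind of
    per-particle filter + colormap operation a particle viewer exposes."""
    mask = (scalar >= lo) & (scalar <= hi)
    kept = positions[mask]
    norm = (scalar[mask] - lo) / (hi - lo)          # normalize into [0, 1]
    colors = np.stack([norm, norm, norm], axis=1)   # one RGB triple per particle
    return kept, colors

rng = np.random.default_rng(1)
pos = rng.normal(size=(1000, 3))                          # toy particle positions
vel_mag = np.linalg.norm(rng.normal(size=(1000, 3)), axis=1)  # toy scalar field
kept, colors = filter_and_color(pos, vel_mag, lo=1.0, hi=2.0)
```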

    Doctor of Philosophy

    With the ever-increasing amount of available computing resources and sensing devices, a wide variety of high-dimensional datasets are being produced in numerous fields. The complexity and increasing popularity of these data have led to new challenges and opportunities in visualization. Since most display devices are limited to communication through two-dimensional (2D) images, many visualization methods rely on 2D projections to express high-dimensional information. Such a reduction of dimension leads to an explosion in the number of 2D representations required to visualize high-dimensional spaces, each giving a glimpse of the high-dimensional information. As a result, one of the most important challenges in visualizing high-dimensional datasets is the automatic filtering and summarization of the large exploration space consisting of all 2D projections. In this dissertation, a new type of algorithm is introduced that reduces the exploration space by identifying a small set of projections that capture the intrinsic structure of high-dimensional data. In addition, a general framework for summarizing the structure of quality measures in the space of all linear 2D projections is presented. However, identifying the representative or informative projections is only part of the challenge. Due to the high-dimensional nature of these datasets, obtaining insights and arriving at conclusions based solely on 2D representations are limited and prone to error. How to interpret the inaccuracies and resolve the ambiguity in the 2D projections is the other half of the puzzle. This dissertation introduces projection distortion error measures and interactive manipulation schemes that allow the understanding of high-dimensional structures via data manipulation in 2D projections.
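    A concrete instance of the two halves of the puzzle above: produce one linear 2D projection and attach a distortion score to it. The projection here is plain PCA and the distortion measure is a simple mean change in pairwise distances; both are illustrative stand-ins, not the dissertation's specific algorithms or metrics.

```python
import numpy as np

def pca_project(X):
    """Project high-dimensional points onto the top-2 principal components,
    one point in the space of all linear 2D projections."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

def distortion(X, Y):
    """A simple projection-distortion score (an illustrative stand-in):
    mean absolute change in pairwise distances between the original
    high-dimensional points X and their 2D images Y."""
    dX = np.linalg.norm(X[:, None] - X[None], axis=-1)
    dY = np.linalg.norm(Y[:, None] - Y[None], axis=-1)
    return np.abs(dX - dY).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))   # toy 8-dimensional dataset
Y = pca_project(X)
err = distortion(X, Y)         # how much this projection lies about distances
```

    Ranking many candidate projections by such a score is one way to shrink the exploration space the dissertation describes.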

    Visual Analysis of Variability and Features of Climate Simulation Ensembles

    This PhD thesis is concerned with the visual analysis of time-dependent scalar field ensembles as they occur in climate simulations. Modern climate projections consist of multiple simulation runs (ensemble members) that vary in parameter settings and/or initial values, which leads to variations in the resulting simulation data. The goal of ensemble simulations is to sample the space of possible futures under the given climate model and provide quantitative information about uncertainty in the results. The analysis of such data is challenging because, in addition to the spatiotemporal data, variability also has to be analyzed and communicated. This thesis presents novel techniques for visually analyzing climate simulation ensembles. A central question is how the data can be aggregated with minimal information loss. To address this question, a key technique applied in several places in this work is clustering. The first part of the thesis addresses the challenge of finding clusters in the ensemble simulation data. Various distance metrics lend themselves to the comparison of scalar fields; these are explored theoretically and practically. A visual analytics interface allows the user to interactively explore and compare multiple parameter settings for the clustering and to investigate the resulting clusters, i.e., prototypical climate phenomena. A central contribution here is the development of design principles for analyzing variability in decadal climate simulations, which has led to a visualization system centered around the new Clustering Timeline. This is a variant of a Sankey diagram that utilizes clustering results to communicate climatic states over time, coupled with ensemble member agreement. 
    It can reveal several interesting properties of the dataset, such as how many inherently similar groups the ensemble can be divided into at any given time, whether the ensemble diverges in general, whether there are distinct phases over time, periodicity, or outliers. The Clustering Timeline is also used to compare multiple climate simulation models and assess their performance. The Hierarchical Clustering Timeline is an advanced version of the above. It introduces the concept of a cluster hierarchy that may group the whole dataset, down to the individual static scalar fields, into clusters of various sizes and densities, recording the nesting relationships between them. A further contribution of this work in terms of visualization research is an investigation of ways to practically utilize a hierarchical clustering of time-dependent scalar fields to analyze the data. To this end, a system of different views is proposed, linked through various interaction possibilities. The main advantage of the system is that a dataset can now be inspected at an arbitrary level of detail without having to recompute a clustering with different parameters. Interesting branches of the simulation can be expanded to reveal smaller differences in critical clusters, or folded to show only a coarse representation of the less interesting parts of the dataset. The last building block of the suite of visual analysis methods developed for this thesis aims at a robust, (largely) automatic detection and tracking of certain features in a scalar field ensemble. Techniques are presented that can identify and track super- and sub-level sets, and from these sets I derive “centers of action” that mark the locations of extremal climate phenomena governing the weather (e.g., the Icelandic Low and Azores High). 
    The thesis also presents visual and quantitative techniques to evaluate the temporal change of the positions of these centers; such a displacement would likely manifest in changes in weather. In a preliminary analysis with my collaborators, we indeed observed changes in the loci of the centers of action in a simulation with increased greenhouse gas concentration as compared to pre-industrial concentration levels.
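    The basic building block behind the Clustering Timeline is grouping ensemble members by a field-to-field distance. The sketch below is a deliberately tiny version of that step, assuming the Euclidean distance between flattened scalar fields (one of several metrics such a thesis would compare) and a two-cluster k-means with deterministic farthest-pair seeding; none of this is the thesis's actual code.

```python
import numpy as np

def cluster_fields(fields, iters=10):
    """Two-cluster k-means over flattened scalar fields, using the Euclidean
    field distance. Seeds are the farthest-apart pair of members, which keeps
    this small sketch deterministic. Returns one cluster id per member."""
    X = fields.reshape(len(fields), -1).astype(float)
    D = np.linalg.norm(X[:, None] - X[None], axis=-1)     # pairwise field distances
    i, j = np.unravel_index(D.argmax(), D.shape)          # farthest-pair seeding
    centers = X[[i, j]].copy()
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=-1).argmin(axis=1)
        for c in range(2):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels

# Toy ensemble: six members sampling two clearly distinct climatic states.
rng = np.random.default_rng(1)
warm = 1.0 + 0.05 * rng.normal(size=(3, 8, 8))
cold = -1.0 + 0.05 * rng.normal(size=(3, 8, 8))
labels = cluster_fields(np.concatenate([warm, cold]))
```

    Running such a clustering per timestep and linking the resulting groups over time is what a Clustering Timeline then visualizes.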

    A Fast and Scalable System to Visualize Contour Gradient from Spatio-temporal Data

    Changes in geological processes that span many years may often go unnoticed due to their inherent noise and variability. Natural phenomena such as riverbank erosion, and climate change in general, are invisible to humans unless appropriate measures are taken to analyze the underlying data. Visualization helps the geological sciences generate scientific insights into such long-term geological events. Commonly used approaches such as side-by-side contour plots and spaghetti plots do not provide a clear idea of historical spatial trends. To overcome this challenge, we propose an image-gradient-based approach called ContourDiff. ContourDiff overlays gradient vectors on contour plots to analyze trends of change across spatial regions and the temporal domain. Our approach first aggregates, for each location, its value differences from the neighboring points over the temporal domain, and then creates a vector field representing the prominent changes. Finally, it overlays the vectors (differential trends) along the contour paths, revealing the differential trends that the contour lines (isolines) experienced over time. We designed an interface where users can interact with the generated visualization to reveal changes and trends in geospatial data. We evaluated our system using real-life datasets consisting of millions of data points, where the visualizations were generated in less than a minute in single-threaded execution. We show the potential of the system in detecting subtle changes from almost identical images, and describe implementation challenges, speed-up techniques, and scope for improvements. Our experimental results reveal that ContourDiff can reliably visualize differential trends and provides a new way to explore change patterns in spatiotemporal data. 
    An expert evaluation of our system using real-life WRF (Weather Research and Forecasting) model output reveals the potential of our technique to generate useful insights on the spatio-temporal trends of geospatial variables.
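    The first two stages of the pipeline described above (aggregate per-location temporal differences, then derive a vector field of prominent change) can be sketched directly in numpy. This is a simplified reading of the abstract, not ContourDiff's implementation: the function name, the sum-of-differences aggregate, and the use of the spatial gradient are illustrative assumptions.

```python
import numpy as np

def differential_trend_field(series):
    """Sketch of ContourDiff's first two stages: aggregate, per grid
    location, the value differences over the temporal domain (here the
    summed frame-to-frame differences, i.e. the net change), then take
    the spatial gradient of that aggregate as the vector field that
    would be overlaid along the contour paths."""
    trend = np.diff(series, axis=0).sum(axis=0)     # net change per location
    gy, gx = np.gradient(trend)                     # direction of prominent change
    return trend, gx, gy

# Toy spatio-temporal data: a blob whose peak drifts rightward over time,
# like a slowly migrating geospatial feature.
n, t = 32, 5
y, x = np.mgrid[0:n, 0:n]
series = np.stack([np.exp(-(((x - 10 - 2 * k) ** 2 + (y - 16) ** 2)) / 40.0)
                   for k in range(t)])
trend, gx, gy = differential_trend_field(series)
```

    Positive `trend` values ahead of the moving peak and negative values behind it are exactly the subtle directional changes a side-by-side contour comparison would hide.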