14 research outputs found

    Rainbows Revisited: Modeling Effective Colormap Design for Graphical Inference

    Color mapping is a foundational technique for visualizing scalar data. Prior literature offers guidelines for effective colormap design, such as emphasizing luminance variation while limiting changes in hue. However, empirical studies of color are largely focused on perceptual tasks. This narrow focus inhibits our understanding of how generalizable these guidelines are, particularly to tasks like visual inference that require synthesis and judgment across multiple percepts. Furthermore, the emphasis on traditional ramp designs (e.g., sequential or diverging) may sideline other key metrics or design strategies. We study how a cognitive metric, color name variation, impacts people's ability to make model-based judgments. In two graphical inference experiments, participants saw a series of color-coded scalar fields sampled from different models and assessed the relationships between these models. Contrary to conventional guidelines, participants were more accurate when viewing colormaps that cross a variety of uniquely nameable colors. We modeled participants' performance using this metric and found that it provides a better fit to the experimental data than do existing design principles. Our findings indicate cognitive advantages for colorful maps like rainbow, which exhibit high color categorization, despite their traditionally undesirable perceptual properties. We also found no evidence that color categorization would lead observers to infer false data features. Our results provide empirically grounded metrics for predicting a colormap's performance and suggest alternative guidelines for designing new quantitative colormaps to support inference. The data and materials for this paper are available at: https://osf.io/tck2r/
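
    The idea of color name variation can be sketched with a rough proxy: sample a colormap, assign each sample its nearest named color, and count how many distinct names the ramp crosses. The sketch below uses matplotlib's CSS4 name list and Euclidean RGB distance as crude stand-ins for the color-naming model a study like this would actually use; it is only an illustration of the metric's intuition, not the paper's measure.

```python
# Rough proxy for color-name variation: count how many distinct CSS4 color
# names a colormap passes through. The CSS4 palette and Euclidean RGB distance
# are illustrative assumptions, not the paper's color-naming model.
import numpy as np
from matplotlib import colormaps
from matplotlib.colors import CSS4_COLORS, to_rgb

NAMED = {name: np.array(to_rgb(hexval)) for name, hexval in CSS4_COLORS.items()}

def nearest_name(rgb):
    # CSS4 color name closest to an RGB triple (Euclidean distance in RGB space)
    return min(NAMED, key=lambda n: np.sum((NAMED[n] - rgb) ** 2))

def name_count(cmap_name, samples=256):
    # Number of distinct nearest-named colors the colormap passes through
    cmap = colormaps[cmap_name]
    names = {nearest_name(np.array(cmap(t)[:3])) for t in np.linspace(0, 1, samples)}
    return len(names)

for c in ["viridis", "coolwarm", "jet"]:
    print(c, name_count(c))  # rainbow-like maps tend to cross more name categories
```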

    Evaluating Gradient Perception in Color-Coded Scalar Fields

    Color mapping is a commonly used technique for visualizing scalar fields. While advice exists for choosing effective colormaps, it is unclear whether current guidelines apply equally across task types. We study the perception of gradients and evaluate the effectiveness of three colormaps at depicting gradient magnitudes. In a crowd-sourced experiment, we determine the just-noticeable differences (JNDs) at which participants can reliably compare and judge variations in gradient between two scalar fields. We find that participants exhibited lower JNDs with a diverging (cool-warm) or a spectral (rainbow) scheme, as compared with a monotonic-luminance colormap (viridis). The results support a hypothesis that apparent discontinuities in the color ramp may help viewers discern subtle structural differences in gradient. We discuss these findings and highlight future research directions for colormap evaluation.
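
    The stimulus idea can be illustrated with a small sketch: render two scalar fields whose gradient magnitudes differ by a small amount under the three colormap styles named above. The field definition and the 10% difference are assumptions made for the example, not the paper's actual stimuli or thresholds.

```python
# Sketch of the stimulus idea: two scalar fields with slightly different
# gradient magnitudes, rendered under three colormap styles. The field and
# the 10% difference are illustrative assumptions, not the paper's stimuli.
import numpy as np
import matplotlib.pyplot as plt

def field(gradient_scale, n=128):
    # A smooth scalar field whose overall gradient magnitude scales linearly
    x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    return gradient_scale * (x + 0.5 * np.sin(3 * np.pi * y))

base, comparison = field(1.0), field(1.1)   # ~10% difference in gradient magnitude
fig, axes = plt.subplots(3, 2, figsize=(6, 8))
for row, cmap in zip(axes, ["viridis", "coolwarm", "jet"]):
    for ax, data, label in zip(row, [base, comparison], ["reference", "comparison"]):
        ax.imshow(data, cmap=cmap, origin="lower")
        ax.set_title(f"{cmap}: {label}", fontsize=8)
        ax.axis("off")
plt.tight_layout()
plt.show()
```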

    Dynamic Glyphs: Appropriating Causality Perception in Multivariate Visual Analysis

    We investigate how to co-opt the perception of causality to aid the analysis of multivariate data. We propose Dynamic Glyphs (DyGs), an animated extension to traditional glyphs. DyGs encode data relations through seemingly physical interactions between glyph parts. We hypothesize that this representation gives rise to impressions of causality, enabling observers to reason intuitively about complex, multivariate dynamics. In a crowdsourced experiment, participants' accuracy with DyGs exceeded or was comparable to that of non-animated alternatives. Moreover, participants showed a propensity to infer higher-dimensional relations with DyGs. Our findings suggest that visual causality can be an effective 'channel' for communicating complex data relations that are otherwise difficult to think about. We discuss the implications and highlight future research opportunities.
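
    The underlying perceptual effect can be sketched with a minimal "launching" animation in the style of Michotte: one glyph part approaches another, and on contact the second part moves with a speed tied to a data value. This is a generic illustration of causality perception between glyph parts, under assumed names and values; it is not the paper's DyG encoding.

```python
# Minimal "launching" sketch: part A approaches part B, and B moves on contact
# with a speed proportional to a hypothetical relation strength. A generic
# illustration of perceived causality, not the paper's DyG design.
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

relation_strength = 0.8          # assumed data value in [0, 1]
a = plt.Circle((0.1, 0.5), 0.04, color="steelblue")
b = plt.Circle((0.5, 0.5), 0.04, color="darkorange")

fig, ax = plt.subplots(figsize=(5, 2))
ax.set_xlim(0, 1); ax.set_ylim(0, 1); ax.set_aspect("equal"); ax.axis("off")
ax.add_patch(a); ax.add_patch(b)

def update(frame):
    ax_x, _ = a.center
    bx, _ = b.center
    if ax_x + 0.08 < bx:                      # A approaches B
        a.center = (ax_x + 0.01, 0.5)
    else:                                     # contact: B is "launched"
        b.center = (bx + 0.01 * relation_strength, 0.5)
    return a, b

anim = FuncAnimation(fig, update, frames=120, interval=30, blit=True)
plt.show()
```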

    A Visual Analytics System for Optimizing Communications in Massively Parallel Applications

    Current and future supercomputers have tens of thousands of compute nodes interconnected with high-dimensional networks and complex network topologies for improved performance. Application developers are required to write scalable parallel programs in order to achieve high throughput on these machines. Application performance is largely determined by efficient inter-process communication. A common way to analyze and optimize performance is through profiling parallel codes to identify communication bottlenecks. However, understanding gigabytes of profile data is not a trivial task. In this paper, we present a visual analytics system for identifying the scalability bottlenecks and improving the communication efficiency of massively parallel applications. Visualization methods used in this system are designed to comprehend large-scale and varied communication patterns on thousands of nodes in complex networks such as the 5D torus and the dragonfly. We also present efficient rerouting and remapping algorithms that can be coupled with our interactive visual analytics design for performance optimization. We demonstrate the utility of our system with several case studies using three benchmark applications on two leading supercomputers. The mapping suggestion from our system led to a 38% improvement in hop-bytes for the MiniAMR application on 4,096 MPI processes. This research has been sponsored in part by the U.S. National Science Foundation through grant IIS-1320229, and the U.S. Department of Energy through grants DE-SC0012610 and DE-SC0014917. This research has been funded in part and used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract no. DE-AC02-06CH11357. This work was supported in part by the DOE Office of Science, ASCR, under award numbers 57L38, 57L32, 57L11, 57K50, and 508050.
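
    The hop-bytes metric mentioned above is the sum, over all messages, of message size times the number of network hops between the endpoints' nodes. The sketch below computes it for a torus topology; the torus shape, the rank-to-node mapping, and the traffic pattern are illustrative assumptions, not the system's algorithms or a real machine configuration.

```python
# Hop-bytes for a process-to-node mapping on a torus: sum over messages of
# (bytes sent) * (shortest torus distance between the endpoints' nodes).
# The 5D shape, mapping, and traffic below are illustrative assumptions.
from itertools import product

DIMS = (4, 4, 4, 2, 2)   # a small 5D torus, 256 nodes (not a real machine)

def torus_hops(a, b):
    # Shortest hop count between two torus coordinates (per-dimension wraparound)
    return sum(min(abs(x - y), d - abs(x - y)) for x, y, d in zip(a, b, DIMS))

# Map MPI rank -> torus coordinate (here: a simple row-major placement).
nodes = list(product(*[range(d) for d in DIMS]))
mapping = {rank: nodes[rank % len(nodes)] for rank in range(256)}

def hop_bytes(traffic, mapping):
    # traffic: iterable of (src_rank, dst_rank, bytes_sent)
    return sum(b * torus_hops(mapping[s], mapping[d]) for s, d, b in traffic)

# Example: a ring exchange of 1 MB messages among 256 ranks.
traffic = [(r, (r + 1) % 256, 1_000_000) for r in range(256)]
print(hop_bytes(traffic, mapping))
```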

    Concept-Driven Visual Analytics: An Exploratory Study of Model- and Hypothesis-Based Reasoning with Visualizations

    Visualization tools facilitate exploratory data analysis, but fall short at supporting hypothesis-based reasoning. We conducted an exploratory study to investigate how visualizations might support a concept-driven analysis style, where users can optionally share their hypotheses and conceptual models in natural language, and receive customized plots depicting the fit of their models to the data. We report on how participants leveraged these unique affordances for visual analysis. We found that a majority of participants articulated meaningful models and predictions, utilizing them as entry points to sensemaking. We contribute an abstract typology representing the types of models participants held and externalized as data expectations. Our findings suggest ways for rearchitecting visual analytics tools to better support hypothesis- and model-based reasoning, in addition to their traditional role in exploratory analysis. We discuss the design implications and reflect on the potential benefits and challenges involved. National Science Foundation award #175561.

    Towards Concept-Driven Visual Analytics

    Visualizations of data provide a proven method for analysts to explore and make data-driven discoveries. However, current visualization tools provide only limited support for hypothesis-driven analyses, and often lack capabilities that would allow users to visually test the fit of their conceptual models against the data. This imbalance could bias users to overly rely on exploratory visual analysis as the principal mode of inquiry, which can be detrimental to discovery. To address this gap, we propose a new paradigm for 'concept-driven' visual analysis. In this style of analysis, analysts share their conceptual models and hypotheses with the system. The system then uses those inputs to drive the generation of visualizations, while providing plots and interactions to explore places where models and data disagree. We discuss key characteristics and design considerations for concept-driven visualizations, and report preliminary findings from a formative study. National Science Foundation award #175561.
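
    The notion of plotting where a conceptual model and the data disagree can be sketched in a few lines: overlay the user's hypothesized trend on the observed values and flag points that fall outside a tolerance. The synthetic data, linear model, and tolerance below are assumptions for illustration, not the proposed system's machinery.

```python
# Sketch: overlay a user's hypothesized trend on the data and mark points where
# the two disagree beyond a tolerance. Synthetic data, the linear model, and the
# tolerance are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2.0 * np.sqrt(x) + rng.normal(0, 0.4, x.size)   # observed data

predicted = 0.7 * x                                  # user's model: linear growth
disagree = np.abs(y - predicted) > 1.0               # arbitrary tolerance

plt.scatter(x, y, s=10, color="gray", label="data")
plt.plot(x, predicted, color="crimson", label="user's model")
plt.scatter(x[disagree], y[disagree], s=14, color="crimson", alpha=0.4,
            label="model-data disagreement")
plt.legend()
plt.show()
```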

    Visual (dis)Confirmation: Validating Models and Hypotheses with Visualizations

    Data visualization provides a powerful way for analysts to explore and make data-driven discoveries. However, current visual analytic tools provide only limited support for hypothesis-driven inquiry, as their built-in interactions and workflows are primarily intended for exploratory analysis. Visualization tools notably lack capabilities that would allow users to visually and incrementally test the fit of their conceptual models and provisional hypotheses against the data. This imbalance could bias users to overly rely on exploratory analysis as the principal mode of inquiry, which can be detrimental to discovery. In this paper, we introduce Visual (dis)Confirmation, a tool for conducting confirmatory, hypothesis-driven analyses with visualizations. Users interact by framing hypotheses and data expectations in natural language. The system then selects conceptually relevant data features and automatically generates visualizations to validate the underlying expectations. Distinctively, the resulting visualizations also highlight places where one's mental model disagrees with the data, so as to stimulate reflection. The proposed tool represents a new class of interactive data systems capable of supporting confirmatory visual analysis, and responding more intelligently by spotlighting gaps between one's knowledge and the data. We describe the algorithmic techniques behind this workflow. We also demonstrate the utility of the tool through a case study.
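
    The end-to-end workflow (natural-language expectation, relevant features, a test of fit) can be illustrated with a toy sketch: a regex extracts the two variables and the expected direction from a phrase like "mpg decreases with horsepower", and a rank correlation checks whether the data agrees. The phrasing pattern, variable names, and Spearman test are stand-ins assumed for the example, not the tool's actual language processing or validation logic.

```python
# Toy sketch of the workflow: parse a natural-language expectation, test it with
# a rank correlation, and report (dis)confirmation. The regex pattern and the
# Spearman test are illustrative stand-ins for the tool's actual pipeline.
import re
import numpy as np
from scipy.stats import spearmanr

def parse_expectation(text):
    # Extract (variable, direction, other_variable) from 'X increases/decreases with Y'
    m = re.match(r"(\w+) (increases|decreases) with (\w+)", text.strip().lower())
    if not m:
        raise ValueError("unsupported phrasing")
    return m.group(1), m.group(2), m.group(3)

def check(text, data):
    var, direction, other = parse_expectation(text)
    rho, p = spearmanr(data[var], data[other])
    expected_sign = 1 if direction == "increases" else -1
    confirmed = np.sign(rho) == expected_sign and p < 0.05
    return confirmed, rho, p

# Example with synthetic columns (hypothetical variable names).
rng = np.random.default_rng(1)
horsepower = rng.uniform(60, 300, 200)
data = {"horsepower": horsepower, "mpg": 45 - 0.1 * horsepower + rng.normal(0, 2, 200)}
print(check("mpg decreases with horsepower", data))
```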

    Sequoia: an interactive visual analytics platform for interpretation and feature extraction from nanopore sequencing datasets

    Background: Direct-sequencing technologies, such as Oxford Nanopore's, are delivering long RNA reads with great efficacy and convenience. These technologies afford an ability to detect post-transcriptional modifications at a single-molecule resolution, promising new insights into the functional roles of RNA. However, realizing this potential requires new tools to analyze and explore this type of data. Results: Here, we present Sequoia, a visual analytics tool that allows users to interactively explore nanopore sequences. Sequoia combines a Python-based backend with a multi-view visualization interface, enabling users to import raw nanopore sequencing data in the Fast5 format, cluster sequences based on electric-current similarities, and drill down into signals to identify properties of interest. We demonstrate the application of Sequoia by generating and analyzing ~500k reads from direct RNA sequencing data of the human HeLa cell line. We focus on comparing signal features from m6A and m5C RNA modifications as the first step towards building automated classifiers. We show how, through iterative visual exploration and tuning of dimensionality reduction parameters, we can separate modified RNA sequences from their unmodified counterparts. We also document new, qualitative signal signatures that characterize these modifications from otherwise normal RNA bases, which we were able to discover from the visualization. Conclusions: Sequoia's interactive features complement existing computational approaches in nanopore-based RNA workflows. The insights gleaned through visual analysis should help users in developing rationales, hypotheses, and insights into the dynamic nature of RNA. Sequoia is available at https://github.com/dnonatar/Sequoia
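
    A minimal sketch of the kind of pipeline described above: read raw current signals from a multi-read Fast5 file with h5py, resample each signal to a fixed length, and project with PCA for a 2D view. The 'read_*/Raw/Signal' layout, the input filename, and the resample length are assumptions for the example; Sequoia's own feature extraction, clustering, and interface are richer than this.

```python
# Minimal sketch of a nanopore signal pipeline: load raw current signals from a
# multi-read Fast5 file, resample to a fixed length, and project with PCA.
# The 'read_*/Raw/Signal' layout, 'reads.fast5', and the length of 1000 are
# assumptions; Sequoia's own feature extraction and clustering are richer.
import h5py
import numpy as np
from sklearn.decomposition import PCA

def load_signals(path, max_reads=500):
    # Collect raw signal arrays from a multi-read Fast5 file (assumed layout)
    signals = []
    with h5py.File(path, "r") as f:
        for name in list(f.keys())[:max_reads]:
            if name.startswith("read_") and "Raw/Signal" in f[name]:
                signals.append(np.asarray(f[name]["Raw/Signal"]))
    return signals

def resample(signal, length=1000):
    # Linearly resample a variable-length signal to a fixed length
    xs = np.linspace(0, len(signal) - 1, length)
    return np.interp(xs, np.arange(len(signal)), signal.astype(float))

signals = load_signals("reads.fast5")                     # hypothetical input file
features = np.stack([resample(s) for s in signals])
embedding = PCA(n_components=2).fit_transform(features)   # 2D view for plotting
print(embedding.shape)
```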