Multi-level Visualization of Concurrent and Distributed Computation in Erlang
This paper describes a prototype visualization system
for concurrent and distributed applications programmed
using Erlang, providing two levels of granularity of view. Both
visualizations are animated to show the dynamics of aspects of
the computation.
At the low level, we show the concurrent behaviour of the
Erlang schedulers on a single instance of the Erlang virtual
machine, which we call an Erlang node. Typically there will be
one scheduler per core on a multicore system. Each scheduler
maintains a run queue of processes to execute, and we visualize
the migration of Erlang concurrent processes from one run queue
to another as work is redistributed to fully exploit the hardware.
The schedulers are shown as a graph with a circular layout. Next
to each scheduler we draw a variable length bar indicating the
current size of the run queue for the scheduler.
At the high level, we visualize the distributed aspects of the
system, showing interactions between Erlang nodes as a dynamic
graph drawn with a force model. Specifically, we show message
passing between nodes as edges and lay out nodes according to
their current connections. In addition, we show the grouping
of nodes into "s_groups" using an Euler diagram drawn with
circles.
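The circular scheduler view described above amounts to a small layout computation. The following Python snippet is an illustrative sketch, not the paper's implementation: it assumes the per-scheduler run-queue lengths have already been sampled (e.g., on the Erlang side via `erlang:statistics/1`) and computes a node position on a circle plus a bar length proportional to each queue.

```python
import math

def scheduler_layout(queue_lengths, radius=1.0, max_bar=0.5):
    """Place one node per scheduler on a circle and scale a bar
    next to each node by its current run-queue length.

    queue_lengths -- sampled run-queue sizes, one per scheduler
    (hypothetical input; the real system streams these live).
    """
    n = len(queue_lengths)
    longest = max(queue_lengths) or 1  # avoid division by zero when idle
    layout = []
    for i, qlen in enumerate(queue_lengths):
        angle = 2 * math.pi * i / n          # even spacing around the circle
        x = radius * math.cos(angle)
        y = radius * math.sin(angle)
        bar = max_bar * qlen / longest       # bar length relative to busiest queue
        layout.append((x, y, bar))
    return layout
```

A rendering layer would then draw a node at each `(x, y)` and a bar of the given length beside it, updating as new queue samples arrive.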
CrossCode: Multi-level Visualization of Program Execution
Program visualizations help to form useful mental models of how programs
work, and to reason about and debug code. But these visualizations exist at a fixed
level of abstraction, e.g., line-by-line. In contrast, programmers switch
between many levels of abstraction when inspecting program behavior. Based on
results from a formative study of hand-designed program visualizations, we
designed CrossCode, a web-based program visualization system for JavaScript
that leverages structural cues in syntax, control flow, and data flow to
aggregate and navigate program execution across multiple levels of abstraction.
In an exploratory qualitative study with experts, we found that CrossCode
enabled participants to maintain a strong sense of place in program execution,
was conducive to explaining program behavior, and helped track changes and
updates to the program state.
Comment: 13 pages, 6 figures. Submitted to CHI 2023: Conference on Human
Factors in Computing Systems.
Visual support for the understanding of simulation processes
Current visualization systems are typically based on the concept of interactive post-processing. This decoupling of data visualization from the process of data generation offers a flexible application of visualization tools. It can also lead, however, to information loss in the visualization. Therefore, combining the visualization of the data-generating process with the visualization of the produced data offers significant support for understanding the abstract data sets as well as the underlying process. Due to the application-specific characteristics of data-generating processes, this task requires tailored visualization concepts. In this work, we focus on the application field of simulating biochemical reaction networks as discrete-event systems. These stochastic processes generate multi-run and multivariate time series, which are analyzed and compared on three different process levels: model, experiment, and the level of multi-run simulation data, each associated with a broad range of analysis goals. To meet these challenging characteristics, we present visualization concepts specifically tailored to all three process levels. The foundation of all three visualization concepts is a compact view that relates the multi-run simulation data to the characteristics of the model structure and the experiment. This view provides the visualization at the experiment level. The visualization at the model level coordinates multiple instances of this view for the comparison of experiments. At the level of multi-run simulation data, the view gives an overview of the data, which can be analyzed in detail in time-series views suited to the analysis goals. Although we derive our visualization concepts for one concrete simulation process, the general concept of tailoring visualization concepts to process levels is applicable to the visualization of simulation processes in general.
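The multi-run overview such an approach implies boils down to per-time-step aggregation across runs. As a minimal sketch (names and the choice of mean/min/max statistics are illustrative, not taken from the paper):

```python
def summarize_runs(runs):
    """Aggregate multi-run simulation time series into a per-step
    summary (mean plus min/max range across runs).

    runs -- list of equal-length time series, one per simulation run.
    """
    summary = []
    for values in zip(*runs):  # group the values of all runs per time step
        summary.append({
            "mean": sum(values) / len(values),
            "min": min(values),
            "max": max(values),
        })
    return summary
```

A compact view could then plot the mean as a line with the min/max range as a band, one such view per experiment.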
Evaluation of Multi-Level Cognitive Maps for Supporting Between-Floor Spatial Behavior in Complex Indoor Environments
People often become disoriented when navigating in complex, multi-level buildings. To efficiently find destinations located on different floors, navigators must refer to a globally coherent mental representation of the multi-level environment, which is termed a multi-level cognitive map. However, there is a surprising dearth of research into underlying theories of why integrating multi-level spatial knowledge into a multi-level cognitive map is so challenging and error-prone for humans. This overarching problem is the core motivation of this dissertation.
We address this vexing problem in a two-pronged approach combining study of both basic and applied research questions. Of theoretical interest, we investigate questions about how multi-level built environments are learned and structured in memory. The concept of multi-level cognitive maps and a framework of multi-level cognitive map development are provided. We then conducted a set of empirical experiments to evaluate the effects of several environmental factors on users' development of multi-level cognitive maps. The findings of these studies provide important design guidelines that can be used by architects and help to better understand the research question of why people get lost in buildings. Related to application, we investigate questions about how to design user-friendly visualization interfaces that augment users' capability to form multi-level cognitive maps. An important finding of this dissertation is that increasing visual access with an X-ray-like visualization interface is effective for overcoming the disadvantage of limited visual access in built environments and assists the development of multi-level cognitive maps. These findings provide important human-computer interaction (HCI) guidelines for visualization techniques to be used in future indoor navigation systems.
In sum, this dissertation adopts an interdisciplinary approach, combining theories from the fields of spatial cognition, information visualization, and HCI, to address a long-standing and ubiquitous problem faced by anyone who navigates indoors: why do people get lost inside multi-level buildings? The results provide knowledge generation and explanation at both theoretical and applied levels, and contribute to the growing field of real-time indoor navigation systems.
NASA GES DISC Level 2 Aerosol Analysis and Visualization Services
Overview of NASA GES DISC Level 2 aerosol analysis and visualization services: DQViz (Data Quality Visualization), MAPSS (Multi-sensor Aerosol Products Sampling System), and MAPSS_Explorer (Multi-sensor Aerosol Products Sampling System Explorer).
Using high resolution displays for high resolution cardiac data
The ability to perform fast, accurate, high resolution visualization is fundamental
to improving our understanding of anatomical data. As the volumes of data
increase from improvements in scanning technology, the methods applied to rendering
and visualization must evolve. In this paper we address the interactive display of
data from high resolution MRI scanning of a rabbit heart and subsequent histological
imaging. We describe a visualization environment involving a tiled LCD panel
display wall and associated software which provide an interactive and intuitive user
interface.
The oView software is an OpenGL application which is written for the VRJuggler
environment. This environment abstracts displays and devices away from the
application itself, aiding portability between different systems, from desktop PCs to
multi-tiled display walls. Portability between display walls has been demonstrated
through its use on walls at both Leeds and Oxford Universities. We discuss important
factors to be considered for interactive 2D display of large 3D datasets,
including the use of intuitive input devices and level-of-detail aspects.
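The level-of-detail consideration mentioned above is, at its core, a choice of downsampling factor from the current zoom. A minimal sketch of one common policy (illustrative only; the thresholds and the assumption that `pixels_per_voxel` is known from the viewport are ours, not oView's):

```python
def pick_lod(pixels_per_voxel, levels=(1, 2, 4, 8)):
    """Choose a downsampling factor so that roughly one displayed
    sample maps to at least one screen pixel: when zoomed far out
    (few pixels per voxel), a coarser level is selected.

    levels -- available downsampling factors, finest first.
    """
    for factor in levels:
        # After downsampling by `factor`, one displayed sample covers
        # pixels_per_voxel * factor pixels; accept once that reaches 1.
        if pixels_per_voxel * factor >= 1.0:
            return factor
    return levels[-1]  # fall back to the coarsest available level
```

On a tiled display wall the same policy applies per tile, with `pixels_per_voxel` derived from each tile's portion of the viewport.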
AVA: Towards Autonomous Visualization Agents through Visual Perception-Driven Decision-Making
With recent advances in multi-modal foundation models, the previously
text-only large language models (LLM) have evolved to incorporate visual input,
opening up unprecedented opportunities for various applications in
visualization. Our work explores the utilization of the visual perception
ability of multi-modal LLMs to develop Autonomous Visualization Agents (AVAs)
that can interpret and accomplish user-defined visualization objectives through
natural language. We propose the first framework for the design of AVAs and
present several usage scenarios intended to demonstrate the general
applicability of the proposed paradigm. The addition of visual perception
allows AVAs to act as the virtual visualization assistant for domain experts
who may lack the knowledge or expertise in fine-tuning visualization outputs.
Our preliminary exploration and proof-of-concept agents suggest that this
approach can be widely applicable whenever the choices of appropriate
visualization parameters require the interpretation of previous visual output.
Feedback from unstructured interviews with experts in AI research, medical
visualization, and radiology has been incorporated, highlighting the
practicality and potential of AVAs. Our study indicates that AVAs represent a
general paradigm for designing intelligent visualization systems that can
achieve high-level visualization goals, which pave the way for developing
expert-level visualization agents in the future
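In outline, the perception-driven loop such an agent runs is: render, let a visual critic judge the output against the goal, and apply its suggested parameter changes. The sketch below is a hypothetical skeleton of that loop, not the paper's framework or API; `render` and `critique` are placeholder callables (in a real AVA, `critique` would wrap a multi-modal LLM call).

```python
def refine_visualization(render, critique, params, max_rounds=3):
    """Iteratively render, ask a visual critic whether the goal is
    met, and merge in its suggested parameter changes.

    render   -- callable: params -> image (placeholder)
    critique -- callable: (image, params) -> (goal_met, suggestions)
    """
    for _ in range(max_rounds):
        image = render(params)
        goal_met, suggested = critique(image, params)
        if goal_met:
            break
        params = {**params, **suggested}  # apply the critic's adjustments
    return params
```

With stub callables the loop behaves as expected: a critic that keeps nudging an opacity parameter upward converges once its threshold is met.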