A Study of Mental Maps in Immersive Network Visualization
The visualization of a network influences the quality of the mental map that
the viewer develops to understand the network. In this study, we investigate
the effects of a 3D immersive visualization environment compared to a
traditional 2D desktop environment on the comprehension of a network's
structure. We compare the two visualization environments using three
tasks--interpreting network structure, memorizing a set of nodes, and
identifying the structural changes--commonly used for evaluating the quality of
a mental map in network visualization. The results show that participants were
able to interpret network structure more accurately when viewing the network in
an immersive environment, particularly for larger networks. However, we found
that 2D visualizations performed better than immersive visualization for tasks
that required spatial memory.
Comment: IEEE Pacific Visualization Symposium 202
A Survey on ML4VIS: Applying Machine Learning Advances to Data Visualization
Inspired by the great success of machine learning (ML), researchers have
applied ML techniques to visualizations to achieve a better design,
development, and evaluation of visualizations. This branch of studies, known as
ML4VIS, is gaining increasing research attention in recent years. To
successfully adapt ML techniques for visualizations, a structured understanding
of the integration of ML4VIS is needed. In this paper, we systematically survey
88 ML4VIS studies, aiming to answer two motivating questions: "what
visualization processes can be assisted by ML?" and "how can ML techniques be
used to solve visualization problems?" This survey reveals seven main processes
where the employment of ML techniques can benefit visualizations: Data
Processing4VIS, Data-VIS Mapping, Insight Communication, Style Imitation, VIS
Interaction, VIS Reading, and User Profiling. The seven processes are related
to existing visualization theoretical models in an ML4VIS pipeline, aiming to
illuminate the role of ML-assisted visualization in general
visualizations. Meanwhile, the seven processes are mapped into main learning
tasks in ML to align the capabilities of ML with the needs in visualization.
Current practices and future opportunities of ML4VIS are discussed in the
context of the ML4VIS pipeline and the ML-VIS mapping. While more studies are
still needed in the area of ML4VIS, we hope this paper can provide a
stepping-stone for future exploration. A web-based interactive browser of this
survey is available at https://ml4vis.github.io
Comment: 19 pages, 12 figures, 4 tables
UnProjection: Leveraging Inverse-Projections for Visual Analytics of High-Dimensional Data
Projection techniques are often used to visualize high-dimensional data, allowing users to better understand the overall structure of multi-dimensional spaces on a 2D screen. Although many such methods exist, comparably little work has been done on generalizable methods of inverse-projection, the process of mapping the projected points, or more generally the projection space, back to the original high-dimensional space. In this article, we present NNInv, a deep learning technique with the ability to approximate the inverse of any projection or mapping. NNInv learns to reconstruct high-dimensional data from any arbitrary point on a 2D projection space, giving users the ability to interact with the learned high-dimensional representation in a visual analytics system. We provide an analysis of the parameter space of NNInv and offer guidance in selecting these parameters. We validate the effectiveness of NNInv through a series of quantitative and qualitative analyses. We then demonstrate the method's utility by applying it to three visualization tasks: interactive instance interpolation, classifier agreement, and gradient visualization.
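The core idea described above, learning a regressor that maps 2D projection coordinates back to the original high-dimensional space, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the actual NNInv architecture, training regime, and projection method differ, and scikit-learn's `MLPRegressor` stands in for the paper's deep network.

```python
# Sketch of learning an inverse projection ("unprojection"), assuming a
# simple MLP stand-in for the NNInv network described in the abstract.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

X, _ = load_digits(return_X_y=True)      # 64-dimensional input data
proj = PCA(n_components=2).fit(X)        # any projection method works here
P = proj.transform(X)                    # 2D projected points

# Learn the inverse mapping: 2D projection space -> 64D data space
inv = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500,
                   random_state=0).fit(P, X)

# Reconstruct high-dimensional data from an arbitrary 2D point,
# e.g. a location the user clicks in a visual analytics view
x_hat = inv.predict(P[:1])
print(x_hat.shape)                       # one reconstructed 64D vector
```

Once trained, the regressor can be queried at any point in the 2D plane, not just at projected data points, which is what enables the interpolation and gradient-visualization tasks mentioned in the abstract.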
An HCI-Centric Survey and Taxonomy of Human-Generative-AI Interactions
Generative AI (GenAI) has shown remarkable capabilities in generating diverse
and realistic content across different formats like images, videos, and text.
Human involvement is essential in Generative AI, so the HCI literature has
investigated how to create effective collaborations between humans and GenAI
systems. However, the current literature lacks a comprehensive framework to
better understand Human-GenAI Interactions, as the holistic aspects of
human-centered GenAI systems are rarely analyzed systematically. In this paper,
we present a survey of 291 papers, providing a novel taxonomy and analysis of
Human-GenAI Interactions from both human and GenAI perspectives. The
dimensions of design space include 1) Purposes of Using Generative AI, 2)
Feedback from Models to Users, 3) Control from Users to Models, 4) Levels of
Engagement, 5) Application Domains, and 6) Evaluation Strategies. Our work is
also timely at the current development stage of GenAI, where the Human-GenAI
interaction design is of paramount importance. We also highlight challenges and
opportunities to guide the design of GenAI systems and interactions towards
future human-centered Generative AI applications.