31 research outputs found
A Survey on the Role of Individual Differences on Visual Analytics Interactions: Masters Project Report
There is ample evidence in the visualization community that individual differences matter. These prior works highlight various traits and cognitive abilities that can modulate the use of visualization systems and demonstrate a measurable influence on speed, accuracy, process, and attention. Perhaps the most important implication of this body of work is that we can use individual differences as a mechanism for estimating people's potential to effectively leverage visual interfaces or to identify those people who may struggle. As visual literacy and data fluency continue to become essential skills for our everyday lives, we must embrace the growing need to understand the factors that divide our society, and identify concrete steps for bridging this gap. This paper presents the current understanding of how individual differences interact with visualization use and draws from recent research in the Visualization, Human-Computer Interaction, and Psychology communities. We focus on the specific designs and tasks for which there is concrete evidence of performance divergence due to individual characteristics. The purpose of this paper is to underscore the need to consider individual differences when designing and evaluating visualization systems and to call attention to this critical future research direction.
Mini-VLAT: A Short and Effective Measure of Visualization Literacy
The visualization community regards visualization literacy as a necessary skill. Yet, despite the recent increase in research into visualization literacy by the education and visualization communities, we lack practical and time-effective instruments for the widespread measurement of people's comprehension and interpretation of visual designs. We present Mini-VLAT, a brief but practical visualization literacy test. The Mini-VLAT is a 12-item short form of the 53-item Visualization Literacy Assessment Test (VLAT). The Mini-VLAT is reliable (coefficient omega = 0.72) and correlates strongly with the VLAT. Five visualization experts validated the Mini-VLAT items, yielding an average content validity ratio (CVR) of 0.6. We further validate the Mini-VLAT by demonstrating a strong positive correlation between study participants' Mini-VLAT scores and their aptitude for learning an unfamiliar visualization using a Parallel Coordinate Plot test. Overall, the Mini-VLAT items showed a similar pattern of validity and reliability as the 53-item VLAT. The results show that the Mini-VLAT is a psychometrically sound and practical short measure of visualization literacy.
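For context, the content validity ratio reported above is conventionally computed with Lawshe's formula; the abstract does not state the exact formulation, so the worked example below assumes that standard definition.

\mathrm{CVR} = \frac{n_e - N/2}{N/2}, \qquad \text{e.g. } N = 5,\ n_e = 4 \ \Rightarrow\ \mathrm{CVR} = \frac{4 - 2.5}{2.5} = 0.6

Here n_e is the number of panelists who rate an item as essential and N is the panel size, so an item-level CVR of 0.6 corresponds to four of the five experts endorsing an item.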
What Exactly is an Insight? A Literature Review
Insights are often considered the ideal outcome of visual analysis sessions. However, there is no single definition of what an insight is. Some scholars define insights as correlations, while others define them as hypotheses or aha moments. This lack of a clear definition can make it difficult to build visualization tools that effectively support insight discovery. In this paper, we contribute a comprehensive literature review that maps the landscape of existing insight definitions. We summarize key themes regarding how insight is defined, with the goal of helping readers identify which definitions of insight align closely with their research and tool development goals. Based on our review, we also suggest interesting research directions, such as synthesizing a unified formalism for insight and connecting theories of insight to other critical concepts in visualization research.
The Effects of Mixed-Initiative Visualization Systems on Exploratory Data Analysis
The primary purpose of information visualization is to act as a window between a user and the data. Historically, this has been accomplished via a single-agent framework: the only decision-maker in the relationship between visualization system and analyst is the analyst herself. Yet this framework arose not from first principles, but from necessity. Before this decade, computers were limited in their decision-making capabilities, especially in the face of large, complex datasets and visualization systems. This paper presents the design and evaluation of a mixed-initiative system that aids the user in handling large, complex datasets and dense visualization systems. We evaluate this system with a between-groups, two-by-two study measuring the effects of the mixed-initiative system on user interactions and system usability. We find little to no evidence that the adaptive system designed here has a statistically significant impact on user interactions or system usability. We discuss the implications of this lack of evidence and examine how the data suggest a promising avenue for further research.
Does Interaction Improve Bayesian Reasoning with Visualization?
Interaction enables users to navigate large amounts of data effectively, supports cognitive processing, and expands the range of available data representation methods. However, there have been few attempts to empirically demonstrate whether adding interaction to a static visualization improves its function beyond popular belief. In this paper, we address this gap. We use a classic Bayesian reasoning task as a testbed for evaluating whether allowing users to interact with a static visualization can improve their reasoning. Through two crowdsourced studies, we show that adding interaction to a static Bayesian reasoning visualization does not improve participants' accuracy on a Bayesian reasoning task; in some cases, it can significantly detract from it. Moreover, we demonstrate that the underlying visualization design modulates performance and that people with high versus low spatial ability respond differently to different interaction techniques and underlying base visualizations. Our work suggests that interaction is not as unambiguously good as we often believe; a well-designed static visualization can be as effective as, if not more effective than, an interactive one.
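For readers unfamiliar with the task format, classic Bayesian reasoning problems like the one referenced above ask participants to combine a base rate with hit and false-alarm rates into a posterior probability via Bayes' rule; the abstract does not give the exact problem used, so the numbers below are purely illustrative.

P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)} = \frac{0.8 \times 0.01}{0.8 \times 0.01 + 0.096 \times 0.99} \approx 0.078

People commonly overestimate this posterior by an order of magnitude, and visualizations of such problems, whether static or interactive, are designed to counteract exactly that error.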
Survey on Individual Differences in Visualization
Developments in data visualization research have enabled visualization systems to achieve great general usability and application across a variety of domains. These advancements have improved not only people's understanding of data, but also the general understanding of people themselves and how they interact with visualization systems. In particular, researchers have gradually come to recognize the deficiency of one-size-fits-all visualization interfaces, as well as the significance of individual differences in the use of data visualization systems. Unfortunately, the absence of comprehensive surveys of the existing literature impedes the development of this research. In this paper, we review the research perspectives, as well as the personality traits, cognitive abilities, visualizations, tasks, and measures investigated in the existing literature. We aim to provide a detailed summary of existing scholarship, produce evidence-based reviews, and spur future inquiry.
Balancing Human and Machine Contributions in Human Computation Systems
Many interesting and successful human computation systems leverage the complementary computational strengths of both humans and machines to solve challenging problems. In this chapter, we examine Human Computation as a type of Human-Computer Collaboration: collaboration involving at least one human and at least one computational agent. We discuss recent advances in the open area of function allocation and explore how to balance the contributions of humans and machines in computational systems. We then explore how human-computer collaborative strategies can be used to solve problems that are difficult or computationally infeasible for computers or humans alone.