Polarizing Political Polls: How Visualization Design Choices Can Shape Public Opinion and Increase Political Polarization
While we typically focus on data visualizations as tools for facilitating
cognitive tasks (e.g., learning facts, making decisions), we know relatively
little about their second-order impacts on our opinions, attitudes, and values.
For example, could design or framing choices interact with viewers' social
cognitive biases in ways that promote political polarization? When reporting on
U.S. attitudes toward public policies, it is popular to highlight the gap
between Democrats and Republicans (e.g., with blue vs red connected dot plots).
But these charts may encourage social-normative conformity, influencing
viewers' attitudes to match the divided opinions shown in the visualization. We
conducted three experiments examining visualization framing in the context of
social conformity and polarization. Crowdworkers viewed charts showing
simulated polling results for public policy proposals. We varied framing
(aggregating data as non-partisan "All US Adults," or partisan "Democrat" and
"Republican") and the visualized groups' support levels. Participants then
reported their own support for each policy. We found that participants'
attitudes were significantly biased toward the group attitudes shown in the
stimuli, and that this bias can increase inter-party attitude divergence. These
results demonstrate that data visualizations can induce social conformity and
accelerate political polarization. Choosing to visualize partisan divisions can
divide us further.
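
To make the framing manipulation concrete, the following is a minimal matplotlib sketch contrasting a partisan connected dot plot with a non-partisan aggregate view of the same simulated polling data. The policy names and support levels are invented for illustration and are not the paper's actual stimuli.

```python
# Hypothetical sketch: the same simulated polling data framed as a partisan
# "Democrat vs. Republican" connected dot plot and as a single non-partisan
# "All US Adults" aggregate. All numbers are invented for illustration.
import matplotlib.pyplot as plt

policies = ["Policy A", "Policy B", "Policy C"]
dem_support = [0.72, 0.55, 0.64]   # simulated support among Democrats
rep_support = [0.31, 0.48, 0.31]   # simulated support among Republicans
all_support = [(d + r) / 2 for d, r in zip(dem_support, rep_support)]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3), sharey=True)

# Partisan framing: blue vs. red dots joined by a line highlight the gap.
for y, (d, r) in enumerate(zip(dem_support, rep_support)):
    ax1.plot([d, r], [y, y], color="gray", zorder=1)
ax1.scatter(dem_support, range(len(policies)), color="tab:blue", label="Democrat")
ax1.scatter(rep_support, range(len(policies)), color="tab:red", label="Republican")
ax1.set_title("Partisan framing")
ax1.legend()

# Non-partisan framing: one aggregate dot per policy hides the divide.
ax2.scatter(all_support, range(len(policies)), color="tab:gray", label="All US Adults")
ax2.set_title("Non-partisan framing")
ax2.legend()

for ax in (ax1, ax2):
    ax.set_yticks(range(len(policies)))
    ax.set_yticklabels(policies)
    ax.set_xlim(0, 1)
    ax.set_xlabel("Support")
plt.tight_layout()
plt.show()
```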
How Do Viewers Synthesize Conflicting Information from Data Visualizations?
Scientific knowledge develops through cumulative discoveries that build on,
contradict, contextualize, or correct prior findings. Scientists and
journalists often communicate these incremental findings to lay people through
visualizations and text (e.g., the positive and negative effects of caffeine
intake). Consequently, readers need to integrate diverse and contrasting
evidence from multiple sources to form opinions or make decisions. However, the
underlying mechanism for synthesizing information from multiple visualizations
remains underexplored. To address this knowledge gap, we conducted a series of
four experiments (N = 1166) in which participants synthesized empirical
evidence from a pair of line charts presented sequentially. In Experiment 1, we
administered a baseline condition with charts depicting no specific context
where participants held no strong prior beliefs. To test generalizability, we
introduced real-world scenarios to our visualizations in Experiment 2 and
added accompanying text descriptions, similar to online news articles or blog
posts, in Experiment 3. In all three experiments, we varied the relative
direction and magnitude of line slopes within the chart pairs. We found that
participants tended to weigh the positive slope more when the two charts
depicted relationships in the opposite direction (e.g., one positive slope and
one negative slope). Participants tended to weigh the less steep slope more
when the two charts depicted relationships in the same direction (e.g., both
positive).
Through these experiments, we characterize participants' synthesis behaviors
depending on the relationship between the pieces of information they viewed,
contribute to theories describing the underlying cognitive mechanisms of
information synthesis, and describe design implications for data storytelling.
Comment: 11 pages, 5 figures, to be published in IEEE Transactions on
Visualization and Computer Graphics
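
As a rough illustration of the stimulus design, the sketch below draws a pair of line charts whose slopes differ in direction or in steepness, mirroring the manipulation described above. The slope values are placeholders, not the experiments' actual parameters.

```python
# Hedged sketch of stimulus generation: pairs of line charts whose slopes
# vary in relative direction and magnitude. Slope values are placeholders.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 50)
pairs = {
    "opposite direction": (0.8, -0.8),  # one positive, one negative slope
    "same direction":     (0.8,  0.3),  # both positive, different steepness
}

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
for ax, (label, (s1, s2)) in zip(axes, pairs.items()):
    ax.plot(x, s1 * x, label=f"chart 1 (slope {s1})")
    ax.plot(x, s2 * x, label=f"chart 2 (slope {s2})")
    ax.set_title(label)
    ax.legend()
plt.tight_layout()
plt.show()
```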
Same Data, Diverging Perspectives: The Power of Visualizations to Elicit Competing Interpretations
People routinely rely on data to make decisions, but the process can be
riddled with biases. We show that patterns in data might be noticed first or
more strongly, depending on how the data is visually represented or what the
viewer finds salient. We also demonstrate that viewer interpretation of data is
similar to that of 'ambiguous figures' such that two people looking at the same
data can come to different decisions. In our studies, participants read
visualizations depicting competitions between two entities, where one has a
historical lead (A) but the other has been gaining momentum (B), and predicted
a winner, across two chart types and three annotation approaches. They either
saw the historical lead as salient and predicted that A would win, or saw the
increasing momentum as salient and predicted that B would win. These results
suggest that decisions can be influenced both by how data are presented and by
what patterns people find visually salient.
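
The kind of ambiguous competition chart described above can be sketched with invented numbers, assuming a simple line chart: entity A holds a stable historical lead while entity B accelerates toward it, so either pattern can appear salient.

```python
# Illustrative sketch (invented numbers): A has a historical lead,
# B has momentum. Which pattern is salient drives the predicted winner.
import matplotlib.pyplot as plt

years = [2018, 2019, 2020, 2021, 2022]
a = [60, 61, 61, 62, 63]   # A: large but flat lead
b = [30, 35, 42, 50, 58]   # B: behind, yet accelerating

plt.plot(years, a, marker="o", label="A (historical lead)")
plt.plot(years, b, marker="o", label="B (gaining momentum)")
plt.ylabel("Score")
plt.legend()
plt.title("Who wins next year?")
plt.show()
```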
Biased Average Position Estimates in Line and Bar Graphs: Underestimation, Overestimation, and Perceptual Pull
In visual depictions of data, position (i.e., the vertical height of a line
or a bar) is believed to be the most precise way to encode information compared
to other encodings (e.g., hue). Not only are other encodings less precise than
position, but they can also be prone to systematic biases (e.g., color category
boundaries can distort perceived differences between hues). By comparison,
position's high level of precision may seem to protect it from such biases.
However, across three empirical studies, we show that while position may be a
precise form of data encoding, it can also produce systematic biases in how
values are visually encoded, at least for reports of average position across a
short delay. In displays with a single line or a single set of bars, reports of
average positions were significantly biased, such that line positions were
underestimated and bar positions were overestimated. In displays with multiple
data series (i.e., multiple lines and/or sets of bars), this systematic bias
still persisted. We also observed an effect of "perceptual pull", where the
average position estimate for each series was 'pulled' toward the other. These
findings suggest that, although position may still be the most precise form of
visual data encoding, it can also be systematically biased.
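
A minimal sketch of the bias measure these findings imply, assuming a participant's reported average position is compared against the true mean height of the series; the helper position_bias and the numbers are hypothetical.

```python
# Hypothetical sketch: signed error between a reported average position and
# the true mean. Negative = underestimation (observed for lines),
# positive = overestimation (observed for bars).
import numpy as np

def position_bias(series_values, reported_mean):
    """Reported mean position minus the true mean of the series."""
    return reported_mean - float(np.mean(series_values))

line_heights = [40, 45, 50, 55, 60]        # true mean is 50
print(position_bias(line_heights, 47.5))   # -2.5 -> underestimated line
bar_heights = [40, 45, 50, 55, 60]
print(position_bias(bar_heights, 53.0))    #  3.0 -> overestimated bars
```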
My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning
Machine learning technology has become ubiquitous, but, unfortunately, often
exhibits bias. As a consequence, disparate stakeholders need to interact with
and make informed decisions about using machine learning models in everyday
systems. Visualization technology can support stakeholders in understanding and
evaluating trade-offs between, for example, accuracy and fairness of models.
This paper aims to empirically answer "Can visualization design choices affect
a stakeholder's perception of model bias, trust in a model, and willingness to
adopt a model?" Through a series of controlled, crowd-sourced experiments with
more than 1,500 participants, we identify a set of strategies people follow in
deciding which models to trust. Our results show that men and women prioritize
fairness and performance differently and that visual design choices
significantly affect that prioritization. For example, women trust fairer
models more often than men do, participants value fairness more when it is
explained using text than as a bar chart, and being explicitly told a model is
biased has a bigger impact than showing past biased performance. We test the
generalizability of our results by comparing the effect of multiple textual and
visual design choices and offer potential explanations of the cognitive
mechanisms behind the difference in fairness perception and trust. Our research
guides design considerations to support future work developing visualization
systems for machine learning.
Comment: 11 pages, 6 figures, to appear in IEEE Transactions on Visualization
and Computer Graphics (also in proceedings of IEEE VIS 2023)
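
To illustrate the text-versus-chart contrast the study manipulates, here is a hedged sketch presenting the same accuracy and fairness numbers first as plain text and then as a bar chart; the model names and metric values are invented, not the study's stimuli.

```python
# Hypothetical sketch: identical model statistics shown as text and as bars.
import matplotlib.pyplot as plt

models = {"Model 1": {"accuracy": 0.91, "fairness": 0.62},
          "Model 2": {"accuracy": 0.84, "fairness": 0.88}}

# Textual presentation of the trade-off.
for name, m in models.items():
    print(f"{name}: accuracy {m['accuracy']:.0%}, fairness {m['fairness']:.0%}")

# Bar-chart presentation of the identical values.
fig, ax = plt.subplots(figsize=(5, 3))
x = range(len(models))
ax.bar([i - 0.2 for i in x], [m["accuracy"] for m in models.values()],
       width=0.4, label="accuracy")
ax.bar([i + 0.2 for i in x], [m["fairness"] for m in models.values()],
       width=0.4, label="fairness")
ax.set_xticks(list(x))
ax.set_xticklabels(list(models.keys()))
ax.set_ylim(0, 1)
ax.legend()
plt.tight_layout()
plt.show()
```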