Micro-entries: Encouraging Deeper Evaluation of Mental Models Over Time for Interactive Data Systems
Many interactive data systems combine visual representations of data with
embedded algorithmic support for automation and data exploration. To
effectively support transparent and explainable data systems, it is important
for researchers and designers to know how users understand the system. We
discuss the evaluation of users' mental models of system logic. Mental models
are challenging to capture and analyze. While common evaluation methods aim to
approximate the user's final mental model after a period of system usage, user
understanding continuously evolves as users interact with a system over time.
In this paper, we review many common mental model measurement techniques,
discuss tradeoffs, and recommend methods for deeper, more meaningful evaluation
of mental models when using interactive data analysis and visualization
systems. We present guidelines for evaluating mental models over time that
reveal the evolution of specific model updates and how they may map to the
particular use of interface features and data queries. By asking users to
describe what they know and how they know it, researchers can collect
structured, time-ordered insight into a user's conceptualization process while
also helping guide users to their own discoveries.
Comment: 10 pages, submitted to BELIV 2020 Workshop
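As a concrete illustration of what such structured, time-ordered capture could look like in practice, the sketch below models micro-entries as timestamped records. This is a minimal sketch under our own assumptions; the field names (`belief`, `evidence`, `interface_feature`) are hypothetical, not a schema from the paper.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class MicroEntry:
    """One timestamped, structured statement of user understanding."""
    timestamp: datetime     # when the entry was recorded
    belief: str             # what the user currently believes about the system logic
    evidence: str           # how the user says they know it
    interface_feature: str  # feature or query in use when the entry was made

@dataclass
class MentalModelLog:
    """Time-ordered collection of micro-entries for one participant."""
    participant_id: str
    entries: List[MicroEntry] = field(default_factory=list)

    def record(self, belief: str, evidence: str, interface_feature: str) -> None:
        """Append a new micro-entry stamped with the current UTC time."""
        self.entries.append(MicroEntry(
            timestamp=datetime.now(timezone.utc),
            belief=belief,
            evidence=evidence,
            interface_feature=interface_feature,
        ))
```

Analyzing such entries in timestamp order would then reveal when specific model updates occur and which interface features or queries they co-occur with.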
Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy
Mixed-initiative systems allow users to interactively provide feedback to
potentially improve system performance. Human feedback can correct model errors
and update model parameters to dynamically adapt to changing data.
Additionally, many users desire greater control and the ability to fix
perceived flaws in systems they rely on. However, how the ability to
provide feedback to autonomous systems influences user trust is a largely
unexplored area of research. Our research investigates how the act of providing
feedback can affect users' understanding of an intelligent system and their
perception of its accuracy. We present a controlled experiment using a simulated object detection
system with image data to study the effects of interactive feedback collection
on user impressions. The results show that providing human-in-the-loop feedback
lowered both participants' trust in the system and their perception of system
accuracy, regardless of whether the system accuracy improved in response to
their feedback. These results highlight the importance of considering the
effects of allowing end-user feedback on user trust when designing intelligent
systems.
Comment: Accepted and to appear in the Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (HCOMP) 2020
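As a minimal sketch of the feedback loop this study examines, assume a Wizard-of-Oz detector whose correctness is sampled from a fixed accuracy rather than computed by a real model; the update rule and the `gain_per_correction` parameter below are hypothetical, not the paper's actual experimental manipulation.

```python
import random

def simulated_detection(accuracy: float) -> bool:
    """Stand-in for an object detector: True if the predicted label is correct.

    Correctness is sampled rather than computed by a real model, mimicking
    a controlled (Wizard-of-Oz) study setup.
    """
    return random.random() < accuracy

def run_feedback_session(n_trials: int, accuracy: float,
                         gain_per_correction: float):
    """Simulate a session in which the participant flags detection errors.

    Each correction nudges accuracy upward, modeling a system that
    adapts to human-in-the-loop feedback.
    """
    corrections = 0
    for _ in range(n_trials):
        if not simulated_detection(accuracy):
            corrections += 1  # the user flags the error
            accuracy = min(1.0, accuracy + gain_per_correction)  # system "adapts"
    return corrections, accuracy

# Example: 50 trials starting at 70% accuracy, with a small gain per correction.
corrections, final_accuracy = run_feedback_session(50, 0.70, 0.005)
```

The study's central finding is that merely going through this loop lowered trust and perceived accuracy, even when the underlying accuracy improved.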
The Influence of Visual Provenance Representations on Strategies in a Collaborative Hand-off Data Analysis Scenario
Data analysis tasks rarely occur in isolation. Especially in
intelligence analysis scenarios where different experts contribute knowledge to
a shared understanding, members must communicate how insights develop to
establish common ground among collaborators. The use of provenance to
communicate analytic sensemaking carries promise by describing the interactions
and summarizing the steps taken to reach insights. Yet, no universal guidelines
exist for communicating provenance in different settings. Our work focuses on
how the presentation of provenance information shapes the conclusions
reached and the strategies used by new analysts. In an open-ended, 30-minute,
textual exploration scenario, we qualitatively compare how adding different
types of provenance information (specifically data coverage and interaction
history) affects analysts' confidence in conclusions developed, propensity to
repeat work, filtering of data, identification of relevant information, and
typical investigation strategies. We see that data coverage (i.e., what was
interacted with) provides provenance information without limiting individual
investigation freedom. On the other hand, while interaction history (i.e., when
something was interacted with) does not significantly encourage more mimicry,
it does take more time to understand comfortably, as reflected in less
confident conclusions and less relevant information-gathering behavior. Our
results contribute empirical data towards understanding how provenance
summarizations can influence analysis behaviors.
Comment: to be published in IEEE VIS 2022
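The two provenance representations compared here can be viewed as two projections of the same interaction log: data coverage discards order, while interaction history preserves it. A minimal sketch of that distinction, with hypothetical record fields of our own choosing:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class InteractionEvent:
    """One logged interaction with a document during the analysis session."""
    doc_id: str
    timestamp: float  # seconds since session start

def data_coverage(events: List[InteractionEvent]) -> Dict[str, int]:
    """Data coverage: WHAT was interacted with, ignoring order.

    Counts interactions per document, the kind of aggregate a coverage
    visualization could encode (e.g., as highlighting intensity).
    """
    counts: Dict[str, int] = {}
    for e in events:
        counts[e.doc_id] = counts.get(e.doc_id, 0) + 1
    return counts

def interaction_history(events: List[InteractionEvent]) -> List[InteractionEvent]:
    """Interaction history: WHEN each item was interacted with, in order."""
    return sorted(events, key=lambda e: e.timestamp)
```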
XFake: Explainable Fake News Detector with Visualizations
In this demo paper, we present the XFake system, an explainable fake news
detector that helps end users assess news credibility. To effectively
detect and interpret the fakeness of news items, we jointly consider both
attributes (e.g., speaker) and statements. Specifically, we design three
frameworks: MIMIC for attribute analysis, ATTN for statement semantic
analysis, and PERT for statement linguistic analysis.
Beyond the explanations extracted from the designed frameworks, relevant
supporting examples and visualizations are provided to facilitate
interpretation. Our implemented system is demonstrated on a real-world
dataset crawled from PolitiFact, where thousands of verified political news
items have been collected.
Comment: 4 pages, WebConf'2019 Demo
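The abstract names the three frameworks but not how their outputs are aggregated. Purely to illustrate the joint attribute-and-statement design, a naive combination might look like the sketch below; the stub scores and equal weighting are assumptions of ours, not XFake's actual method.

```python
from typing import Dict

def mimic_attribute_score(attributes: Dict[str, str]) -> float:
    """Stub for MIMIC's attribute analysis (e.g., speaker, party, context)."""
    return 0.5  # a learned model over attribute credibility would go here

def attn_semantic_score(statement: str) -> float:
    """Stub for ATTN's semantic analysis of the statement text."""
    return 0.5  # e.g., an attention-based classifier over the statement

def pert_linguistic_score(statement: str) -> float:
    """Stub for PERT's linguistic analysis (style, word choice, etc.)."""
    return 0.5

def fakeness(statement: str, attributes: Dict[str, str],
             weights=(1/3, 1/3, 1/3)) -> float:
    """Aggregate the three analyses into a single fakeness score in [0, 1].

    Equal weighting is an illustrative assumption; the paper does not
    specify how the frameworks' outputs are combined.
    """
    scores = (mimic_attribute_score(attributes),
              attn_semantic_score(statement),
              pert_linguistic_score(statement))
    return sum(w * s for w, s in zip(weights, scores))
```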