Learning Interpretability for Visualizations using Adapted Cox Models through a User Experiment
In order to be useful, visualizations need to be interpretable. This paper
uses a user-based approach to combine and assess quality measures in order to
better model user preferences. Results show that cluster separability measures
are outperformed by a neighborhood conservation measure, even though the former
are usually considered intuitively representative of user motives. Moreover,
combining measures, as opposed to using a single measure, further improves
prediction performance.
Comment: Presented at NIPS 2016 Workshop on Interpretable Machine Learning in Complex Systems
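The abstract does not spell out how the adapted Cox model combines the quality measures; as a minimal sketch of the general idea, the Python snippet below fits a standard Cox proportional hazards model (via the lifelines library) on synthetic data, with the two kinds of quality measures as covariates, so their learned weights can be compared. The column names, the synthetic data, and the abandonment-time outcome are illustrative assumptions, not the paper's actual setup.

# A minimal sketch (not the paper's adapted model): combine visualization
# quality measures as covariates in a standard Cox proportional hazards model.
# Requires: pip install lifelines pandas numpy
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200  # synthetic "user views a visualization" episodes

# Hypothetical quality measures per visualization (names are assumptions).
df = pd.DataFrame({
    "cluster_separability": rng.normal(size=n),
    "neighborhood_conservation": rng.normal(size=n),
})

# Synthetic outcome: time until the user abandons the visualization,
# constructed here so that neighborhood conservation matters most.
risk = np.exp(-1.0 * df["neighborhood_conservation"] - 0.2 * df["cluster_separability"])
df["duration"] = rng.exponential(scale=1.0 / risk)
df["event"] = 1  # every abandonment is observed in this toy example

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()  # larger |coef| means a stronger contribution of that measure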
Micro-entries: Encouraging Deeper Evaluation of Mental Models Over Time for Interactive Data Systems
Many interactive data systems combine visual representations of data with
embedded algorithmic support for automation and data exploration. To
effectively support transparent and explainable data systems, it is important
for researchers and designers to know how users understand the system. We
discuss the evaluation of users' mental models of system logic. Mental models
are challenging to capture and analyze. While common evaluation methods aim to
approximate the user's final mental model after a period of system usage, user
understanding continuously evolves as users interact with a system over time.
In this paper, we review many common mental model measurement techniques,
discuss tradeoffs, and recommend methods for deeper, more meaningful evaluation
of mental models when using interactive data analysis and visualization
systems. We present guidelines for evaluating mental models over time that
reveal the evolution of specific model updates and how they may map to the
particular use of interface features and data queries. By asking users to
describe what they know and how they know it, researchers can collect
structured, time-ordered insight into a user's conceptualization process while
also helping guide users to their own discoveries.
Comment: 10 pages, submitted to BELIV 2020 Workshop
On intelligible multimodal visual analysis
Analyzing data is becoming an important skill in an increasingly digital world. Yet, many users face knowledge barriers that prevent them from independently conducting their data analyses. To tear down some of these barriers, multimodal interaction for visual analysis has been proposed. Multimodal interaction through speech and touch enables not only experts but also novice users to interact effortlessly with this kind of technology. However, current approaches do not take user differences into account. In fact, whether visual analysis is intelligible ultimately depends on the user.
In order to close this research gap, this dissertation explores how multimodal visual analysis can be personalized. To do so, it takes a holistic view. First, an intelligible task space of visual analysis tasks is defined with its potential for personalization in mind. This task space provides an initial basis for understanding how effective personalization in visual analysis can be approached. Second, empirical analyses of speech commands in visual analysis, as well as of visualizations used in scientific publications, further reveal patterns and structures. These behavior-based findings help to better understand expectations towards multimodal visual analysis. Third, a technical prototype is designed based on these findings, enriching the visual analysis with a persistent dialogue and transparency about the underlying computations. The conducted user studies not only show advantages but also underline the relevance of considering the user's characteristics. Finally, both communication channels – visualizations and dialogue – are personalized. Leveraging linguistic theory and reinforcement learning, the results highlight a positive effect of adjusting to the user. Especially when the user's knowledge is exceeded, personalization helps to improve the user experience.
Overall, this dissertation confirms not only the importance of considering the user's characteristics in multimodal visual analysis, but also provides insights into how an intelligible analysis can be achieved. By understanding the use of input modalities, a system can focus on the user's needs. By understanding preferences regarding the output modalities, the system can better adapt to the user. Combining both directions improves the user experience and contributes towards an intelligible multimodal visual analysis.
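The abstract only names reinforcement learning as the personalization mechanism; purely as an illustrative sketch, not the dissertation's method, the snippet below shows an epsilon-greedy bandit that adapts one output channel, the dialogue's verbosity, from explicit user feedback. The arm names, the reward signal, and the simulated user are assumptions.

# A minimal sketch, not the dissertation's method: an epsilon-greedy bandit
# that adapts dialogue verbosity to a user based on a feedback reward.
import random

class VerbosityBandit:
    def __init__(self, arms=("terse", "standard", "detailed"), epsilon=0.1):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}  # running mean reward per arm

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.arms)          # explore
        return max(self.arms, key=self.values.get)   # exploit the best arm so far

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Simulated interaction loop: this toy user prefers detailed explanations,
# e.g. because their knowledge is exceeded by the analysis.
bandit = VerbosityBandit()
for _ in range(500):
    arm = bandit.choose()
    reward = 1.0 if arm == "detailed" else 0.5 * random.random()
    bandit.update(arm, reward)
print(bandit.values)  # the "detailed" arm should end up with the highest value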
Toward Understanding Visual Perception in Machines with Human Psychophysics
Over the last several years, Deep Learning algorithms have become more and more powerful.
As a result, they are being deployed in increasingly many areas, including ones that can directly affect human lives.
At the same time, regulations like the GDPR or the AI Act are putting the need to better understand these artificial algorithms on legal grounds.
How do these algorithms come to their decisions?
What limits do they have?
And what assumptions do they make?
This thesis presents three publications that deepen our understanding of deep convolutional neural networks (DNNs) for visual perception of static images.
While all of them leverage human psychophysics, they do so in two different ways: either via direct comparison between human and DNN behavioral data or via an evaluation of the helpfulness of an explainability method.
Besides insights on DNNs, these works emphasize good practices:
For comparison studies, we propose a checklist on how to design, conduct, and interpret experiments that compare different systems.
And for explainability methods, our evaluations exemplify that quantitatively testing widespread intuitions can help put their benefits into a realistic perspective.
In the first publication, we test how similar DNNs are to the human visual system, and more specifically to its capabilities and information processing.
Our experiments reveal that DNNs (1) can detect closed contours, (2) perform well on an abstract visual reasoning task, and (3) correctly classify small image crops.
On a methodological level, these experiments illustrate that (1) human bias can influence our interpretation of findings, (2) distinguishing necessary and sufficient mechanisms can be challenging, and (3) the degree to which experimental conditions are aligned between systems can alter the outcome.
In the second and third publications, we evaluate how helpful humans find the explainability method feature visualization.
The purpose of this tool is to grant insights into the features of a DNN.
To measure the general informativeness and causal understanding supported via feature visualizations, we test participants on two different psychophysical tasks.
Our data unveil that humans can indeed understand the inner DNN semantics based on this explainability tool.
However, other visualizations such as natural data set samples also provide useful, and sometimes even more useful, information.
On a methodological level, our work illustrates that human evaluations can adjust our expectations toward explainability methods and that different claims have to be matched by appropriate experiments.
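Feature visualization itself is not described in the abstract; as a rough sketch of the general technique behind such images (activation maximization), the snippet below optimizes an input image so that one channel of a pretrained torchvision network responds strongly. The choice of model, layer, channel, and optimization settings is an illustrative assumption, not the setup used in the thesis.

# A minimal sketch of feature visualization via activation maximization
# (the general technique, not the exact method evaluated in these publications).
# Requires: pip install torch torchvision
import torch
import torchvision.models as models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
layer, channel = model.features[10], 42  # illustrative layer/channel choice

activations = {}
def hook(_module, _inputs, output):
    activations["value"] = output
layer.register_forward_hook(hook)

# Start from noise and follow the gradient that increases the channel's mean activation.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)
for _ in range(200):
    optimizer.zero_grad()
    model(img)
    loss = -activations["value"][0, channel].mean()  # negated so the step maximizes it
    loss.backward()
    optimizer.step()

feature_vis = img.detach().clamp(0, 1)  # crude clipping so the result can be displayed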