36 research outputs found
explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning
We propose a framework for interactive and explainable machine learning that
enables users to (1) understand machine learning models; (2) diagnose model
limitations using different explainable AI methods; as well as (3) refine and
optimize the models. Our framework combines an iterative XAI pipeline with
eight global monitoring and steering mechanisms, including quality monitoring,
provenance tracking, model comparison, and trust building. To operationalize
the framework, we present explAIner, a visual analytics system for interactive
and explainable machine learning that instantiates all phases of the suggested
pipeline within the commonly used TensorBoard environment. We performed a
user study with nine participants across different expertise levels to examine
their perception of our workflow and to collect suggestions for closing the gap
between our system and framework. The evaluation confirms that our tightly
integrated system leads to an informed machine learning process while
disclosing opportunities for further extensions.
Comment: 9 pages paper, 2 pages references, 5 pages supplementary material
(ancillary files)
Extracting Movement-based Topics for Analysis of Space Use
We present a novel approach to analyzing spatio-temporal movement patterns using topic modeling. Our approach represents trajectories as sequences of place visits and moves, applies topic modeling separately to each collection of sequences, and synthesizes the results. This supports the identification of dominant topics for both place visits and moves and the exploration of spatial and temporal patterns of movement, enabling an understanding of space use. The approach is applied to two real-world data sets of car movements in Milan and UK road traffic, demonstrating its ability to uncover meaningful patterns and insights.
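The pipeline described above can be sketched in a few lines: treat each trajectory as a "document" whose "words" are visited places, then apply topic modeling to recover dominant visit topics. This is a minimal illustration using scikit-learn's LDA, not the authors' implementation; the place tokens, corpus, and parameter choices below are invented for demonstration.

```python
# Hedged sketch: trajectories as documents, place visits as words, LDA topics
# as dominant space-use patterns. Data and hyperparameters are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Each trajectory is a sequence of place visits (tokens are place IDs).
trajectories = [
    "home work cafe work home",
    "home work work cafe home",
    "park gym park home gym",
    "gym park gym park home",
]

# Bag-of-visits representation: one row per trajectory, one column per place.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(trajectories)

# Fit a small LDA model; n_components is the assumed number of movement topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # shape: (n_trajectories, n_topics)

# The dominant topic per trajectory supports grouping by space-use pattern;
# the same machinery would be run separately on sequences of moves.
dominant = doc_topics.argmax(axis=1)
```

In practice the vocabulary would come from discretized places (e.g., map-matched points of interest or grid cells), and the per-topic word distributions (`lda.components_`) would be inspected to interpret each topic spatially.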
A Survey on ML4VIS: Applying Machine Learning Advances to Data Visualization
Inspired by the great success of machine learning (ML), researchers have
applied ML techniques to visualizations to achieve a better design,
development, and evaluation of visualizations. This branch of studies, known as
ML4VIS, is gaining increasing research attention in recent years. To
successfully adapt ML techniques for visualizations, a structured understanding
of the integration of ML4VIS is needed. In this paper, we systematically survey
88 ML4VIS studies, aiming to answer two motivating questions: "what
visualization processes can be assisted by ML?" and "how can ML techniques be
used to solve visualization problems?" This survey reveals seven main processes
where the employment of ML techniques can benefit visualizations: Data
Processing4VIS, Data-VIS Mapping, Insight Communication, Style Imitation, VIS
Interaction, VIS Reading, and User Profiling. The seven processes are related
to existing visualization theoretical models in an ML4VIS pipeline, aiming to
illuminate the role of ML-assisted visualization in general
visualizations. Meanwhile, the seven processes are mapped onto the main learning
tasks in ML to align the capabilities of ML with the needs of visualization.
Current practices and future opportunities of ML4VIS are discussed in the
context of the ML4VIS pipeline and the ML-VIS mapping. While more studies are
still needed in the area of ML4VIS, we hope this paper can provide a
stepping-stone for future exploration. A web-based interactive browser of this
survey is available at https://ml4vis.github.io
Comment: 19 pages, 12 figures, 4 tables
The Role of Human Knowledge in Explainable AI
As the performance and complexity of machine learning models have grown significantly in recent years, there has been an increasing need for methodologies to describe their behaviour. This need has mainly arisen from the widespread use of black-box models, i.e., high-performing models whose internal logic is challenging to describe and understand. The machine learning and AI field is therefore facing a new challenge: making models more explainable through appropriate techniques. The final goal of an explainability method is to faithfully describe the behaviour of a (black-box) model to users, who can thereby gain a better understanding of its logic, increasing their trust in and acceptance of the system. Unfortunately, state-of-the-art explainability approaches may not be enough to guarantee that explanations are fully understandable from a human perspective. For this reason, human-in-the-loop methods have been widely employed to enhance and/or evaluate explanations of machine learning models. These approaches focus on collecting human knowledge that AI systems can then employ, or on involving humans to achieve their objectives (e.g., evaluating or improving the system). This article presents a literature overview on collecting and employing human knowledge to improve and evaluate the understandability of machine learning models through human-in-the-loop approaches. Furthermore, a discussion of the challenges, state of the art, and future trends in explainability is also provided.
Tools of Trade of the Next Blue-Collar Job? Antecedents, Design Features, and Outcomes of Interactive Labeling Systems
Supervised machine learning is becoming increasingly popular - and so is the need for annotated training data. Such data often needs to be manually labeled by human workers, a task likely to negatively impact the involved workforce. To alleviate this issue, a new information systems class has emerged - interactive labeling systems. However, this young but rapidly growing field lacks guidance and structure regarding the design of such systems. Against this backdrop, this paper describes antecedents, design features, and outcomes of interactive labeling systems. We perform a systematic literature review, identifying 188 relevant articles. Our results are presented as a morphological box with 14 dimensions, which we evaluate using card sorting. By additionally offering this box as a web-based artifact, we provide actionable guidance for interactive labeling system development for scholars and practitioners. Lastly, we discuss imbalances in the article distribution of our morphological box and suggest future work directions.