DRLViz: Understanding Decisions and Memory in Deep Reinforcement Learning
We present DRLViz, a visual analytics interface for interpreting the internal
memory of an agent (e.g., a robot) trained using deep reinforcement learning.
This memory consists of large temporal vectors updated as the agent moves
through an environment, and it is not trivial to understand because of its
dimensionality, its dependencies on past vectors, spatial/temporal
correlations, and co-correlations between dimensions. The model is often
referred to as a black box, since only inputs (images) and outputs (actions)
are intelligible to humans. Using DRLViz, experts can interpret decisions
through memory-reduction interactions and investigate the role of parts of the
memory when errors occur (e.g., a wrong direction). We report on DRLViz applied
in the context of video game simulators (ViZDoom) for a navigation scenario
with item-gathering tasks. We also report on expert evaluations using DRLViz,
on the applicability of DRLViz to other scenarios and navigation problems
beyond simulation games, and on its contribution to the interpretability and
explainability of black-box models in the field of visual analytics.
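The memory-reduction idea described above can be illustrated with a minimal sketch: projecting a hypothetical T x D trace of recurrent hidden states onto its top principal components, so each timestep becomes a low-dimensional point that can be inspected along the episode timeline. The array shapes and random data here are assumptions for illustration, not DRLViz's actual implementation.

```python
import numpy as np

# Hypothetical recurrent-memory trace: T timesteps x D hidden dimensions,
# standing in for the agent's large temporal memory vectors.
rng = np.random.default_rng(0)
T, D = 200, 64
memory = rng.normal(size=(T, D))

# Center the trace and project it onto its top-2 principal components
# (via SVD), reducing each timestep to a 2-D point for visualization.
centered = memory - memory.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projection = centered @ vt[:2].T  # shape (T, 2)

print(projection.shape)
```

Plotting such a projection colored by timestep is one simple way to surface the temporal structure the abstract refers to.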
ProtoExplorer: Interpretable Forensic Analysis of Deepfake Videos using Prototype Exploration and Refinement
In high-stakes settings, Machine Learning models that can provide predictions
interpretable by humans are crucial. This is even more true with the advent of
complex deep-learning-based models with a huge number of tunable parameters.
Recently, prototype-based methods have emerged as a promising approach to
making deep learning interpretable. We focus in particular on the analysis of
deepfake videos in a forensics context. Although prototype-based methods have
been introduced for the detection of deepfake videos, their use in real-world
scenarios still presents major challenges: prototypes tend to be overly
similar, and interpretability varies between prototypes. This paper proposes a
Visual Analytics process model for prototype learning and, based on it,
presents ProtoExplorer, a Visual Analytics system for the exploration and
refinement of prototype-based deepfake detection models. ProtoExplorer offers
tools for visualizing and temporally filtering prototype-based predictions when
working with video data. It disentangles the complexity of working with
spatio-temporal prototypes, facilitating their visualization. It further
enables the refinement of models by interactively deleting and replacing
prototypes, with the aim of achieving more interpretable and less biased
predictions while preserving detection accuracy. The system was designed with
forensic experts and evaluated in several rounds based on both open-ended
think-aloud sessions and interviews. These sessions confirmed the strength of
our prototype-based exploration of deepfake videos while providing the feedback
needed to continuously improve the system.
Comment: 15 pages, 6 figures
Social media analytics: a survey of techniques, tools and platforms
This paper is written for (social science) researchers seeking to analyze the wealth of social media now available. It presents a comprehensive review of software tools for social networking media, wikis, really simple syndication (RSS) feeds, blogs, newsgroups, chat and news feeds. For completeness, it also includes introductions to social media scraping, storage, data cleaning and sentiment analysis. Although principally a review, the paper also provides a methodology and a critique of social media tools. Analyzing social media, in particular Twitter feeds for sentiment analysis, has become a major research and business activity due to the availability of web-based application programming interfaces (APIs) provided by Twitter, Facebook and news services. This has led to an ‘explosion’ of data services, software tools for scraping and analysis, and social media analytics platforms. It is also a research area undergoing rapid change and evolution due to commercial pressures and the potential for using social media data for computational (social science) research. Using a simple taxonomy, this paper provides a review of leading software tools and explains how to use them to scrape, cleanse and analyze the spectrum of social media. In addition, it discusses the requirements of an experimental computational environment for social media research and presents, as an illustration, the system architecture of a social media (analytics) platform built by University College London. The principal contribution of this paper is to provide an overview (including code fragments) for scientists seeking to utilize social media scraping and analytics either in their research or business. The data retrieval techniques presented in this paper are valid at the time of writing (June 2014), but they are subject to change, since social media data scraping APIs evolve rapidly.
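The cleaning and sentiment-analysis steps mentioned in this abstract can be sketched in a few lines. This is a minimal, self-contained illustration with made-up example posts and tiny word lists; a real pipeline would fetch posts from a platform API and use a full sentiment lexicon or trained classifier.

```python
import re

# Hypothetical raw posts, as might be returned by a social media API.
raw_posts = [
    "Loving the new update! http://example.com @devteam",
    "This release is terrible and buggy :(",
]

# Tiny illustrative word lists for lexicon-based sentiment scoring.
POSITIVE = {"loving", "love", "great", "good"}
NEGATIVE = {"terrible", "buggy", "bad", "awful"}

def clean(post):
    """Strip URLs and @-mentions, then lowercase the remaining text."""
    post = re.sub(r"https?://\S+", "", post)
    post = re.sub(r"@\w+", "", post)
    return post.lower().strip()

def sentiment(post):
    """Return positive-minus-negative word count for a cleaned post."""
    words = re.findall(r"[a-z]+", clean(post))
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

for p in raw_posts:
    print(repr(clean(p)), sentiment(p))
```

Separating cleaning from scoring mirrors the scrape-cleanse-analyze stages the survey describes, so each stage can be swapped out independently.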