Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations
Post-hoc explanations of machine learning models are crucial for people to
understand and act on algorithmic predictions. An intriguing class of
explanations is through counterfactuals, hypothetical examples that show people
how to obtain a different prediction. We posit that effective counterfactual
explanations should satisfy two properties: feasibility of the counterfactual
actions given user context and constraints, and diversity among the
counterfactuals presented. To this end, we propose a framework for generating
and evaluating a diverse set of counterfactual explanations based on
determinantal point processes. To evaluate the actionability of
counterfactuals, we provide metrics that enable comparison of
counterfactual-based methods to other local explanation methods. We further
address necessary tradeoffs and point to causal implications in optimizing for
counterfactuals. Our experiments on four real-world datasets show that our
framework can generate a set of counterfactuals that are diverse and well
approximate local decision boundaries, outperforming prior approaches to
generating diverse counterfactuals. We provide an implementation of the
framework at https://github.com/microsoft/DiCE.
Comment: 13 pages
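The diversity criterion described above can be illustrated with a toy sketch in plain Python. This is not the DiCE implementation: the similarity kernel, the exhaustive subset search, and the candidate points are all simplifying assumptions made for illustration; a determinantal point process scores a subset of counterfactuals by the determinant of a pairwise similarity kernel, which is larger when the points are spread apart.

```python
import itertools
import math

def determinant(m):
    """Determinant via recursive Laplace expansion; fine for small kernels."""
    n = len(m)
    if n == 1:
        return m[0][0]
    det = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        det += ((-1) ** j) * m[0][j] * determinant(minor)
    return det

def dpp_diversity(cf_subset):
    """det(K) with K_ij = 1 / (1 + dist(c_i, c_j)); larger means more diverse."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    K = [[1.0 / (1.0 + dist(a, b)) for b in cf_subset] for a in cf_subset]
    return determinant(K)

def pick_diverse(candidates, k):
    """Exhaustively pick the k candidates maximizing the DPP determinant."""
    return max(itertools.combinations(candidates, k), key=dpp_diversity)
```

Near-duplicate candidates produce nearly identical kernel rows, so the determinant collapses toward zero and such subsets lose to well-separated ones.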
Crowdsourcing Linked Data on listening experiences through reuse and enhancement of library data
Research has approached the practice of musical reception in a multitude of ways, such as the analysis of professional critique, sales figures and psychological processes activated by the act of listening. Studies in the Humanities, on the other hand, have been hindered by the lack of structured evidence of actual experiences of listening as reported by the listeners themselves, a concern that has been voiced since the early Web era. It was however assumed that such evidence existed, albeit in pure textual form, but could not be leveraged until it was digitised and aggregated. The Listening Experience Database (LED) responds to this research need by providing a centralised hub for evidence of listening in the literature. Not only does LED support search and reuse across nearly 10,000 records, but it also provides machine-readable structured data of the knowledge around the contexts of listening. To take advantage of the mass of formal knowledge that already exists on the Web concerning these contexts, the entire framework adopts Linked Data principles and technologies. This also allows LED to directly reuse open data from the British Library for the source documentation that is already published. Reused data are re-published as open data with enhancements obtained by extending the model of the original data, such as the partitioning of published books and collections into individual stand-alone documents. The database was populated through crowdsourcing and seamlessly incorporates data reuse from the very early data entry phases. As the sources of the evidence often contain vague, fragmentary, or uncertain information, facilities were put in place to generate structured data out of such fuzziness.
Alongside elaborating on these functionalities, this article provides insights into the most recent features of the latest instalment of the dataset and portal, such as the interlinking with the MusicBrainz database, the relaxation of geographical input constraints through text mining, and the plotting of key locations in an interactive geographical browser.
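The Linked Data modelling at the heart of LED can be caricatured with a toy triple store in plain Python. All identifiers below are invented placeholders, not actual LED, MusicBrainz, or British Library URIs; the point is only that subject-predicate-object triples make records both reusable and queryable by pattern.

```python
# Hypothetical LED-style records as (subject, predicate, object) triples.
# All URIs/prefixes here are made up for illustration.
triples = [
    ("led:experience/1", "led:listener", "person:ExampleListener"),
    ("led:experience/1", "led:heard", "mb:work/example-id"),   # MusicBrainz-style link
    ("led:experience/1", "led:evidencedBy", "bl:document/42"), # library-catalogue link
    ("bl:document/42", "dc:title", "An Example Diary"),
]

def query(triples, s=None, p=None, o=None):
    """Return triples matching a simple (s, p, o) pattern; None is a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]
```

Because every statement is an independent triple, data reused from an external catalogue and crowdsourced enhancements can coexist in the same graph and be queried uniformly.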
CERN openlab Whitepaper on Future IT Challenges in Scientific Research
This whitepaper describes the major IT challenges in scientific research at CERN and several other European and international research laboratories and projects. Each challenge is exemplified through a set of concrete use cases drawn from the requirements of large-scale scientific programs. The paper is based on contributions from many researchers and IT experts of the participating laboratories and also input from the existing CERN openlab industrial sponsors. The views expressed in this document are those of the individual contributors and do not necessarily reflect the view of their organisations and/or affiliates.
Explaining Black-Box Models through Counterfactuals
We present CounterfactualExplanations.jl: a package for generating
Counterfactual Explanations (CE) and Algorithmic Recourse (AR) for black-box
models in Julia. CE explain how inputs into a model need to change to yield
specific model predictions. Explanations that involve realistic and actionable
changes can be used to provide AR: a set of proposed actions for individuals to
change an undesirable outcome for the better. In this article, we discuss the
usefulness of CE for Explainable Artificial Intelligence and demonstrate the
functionality of our package. The package is straightforward to use and
designed with a focus on customization and extensibility. We envision it to one
day be the go-to place for explaining arbitrary predictive models in Julia
through a diverse suite of counterfactual generators.
Comment: 13 pages, 9 figures, originally published in The Proceedings of the JuliaCon Conferences (JCON)
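The kind of search a counterfactual generator performs can be sketched generically. This is a toy hill-climbing illustration in Python, not the package's actual Julia API; the black-box `score` function, the fixed step size, and the stopping rule are all assumptions made for illustration.

```python
def find_counterfactual(score, x, threshold=0.5, step=0.05, max_iter=500):
    """Nudge one feature at a time until the black-box model's positive-class
    score crosses the threshold; the final x is the counterfactual input."""
    x = list(x)
    for _ in range(max_iter):
        if score(x) >= threshold:
            return x
        best = None
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                if best is None or score(y) > score(best):
                    best = y
        x = best
    return None  # no counterfactual found within the budget
```

The difference between the returned point and the original input is exactly the "set of proposed actions" that algorithmic recourse communicates to the individual.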
Object-Proposal Evaluation Protocol is 'Gameable'
Object proposals have quickly become the de-facto pre-processing step in a
number of vision pipelines (for object detection, object discovery, and other
tasks). Their performance is usually evaluated on partially annotated datasets.
In this paper, we argue that the choice of using a partially annotated dataset
for evaluation of object proposals is problematic -- as we demonstrate via a
thought experiment, the evaluation protocol is 'gameable', in the sense that
progress under this protocol does not necessarily correspond to a "better"
category independent object proposal algorithm.
To alleviate this problem, we: (1) Introduce a nearly-fully annotated version
of PASCAL VOC dataset, which serves as a test-bed to check if object proposal
techniques are overfitting to a particular list of categories. (2) Perform an
exhaustive evaluation of object proposal methods on our introduced nearly-fully
annotated PASCAL dataset and perform cross-dataset generalization experiments;
and (3) Introduce a diagnostic experiment to detect the bias capacity in an
object proposal algorithm. This tool circumvents the need to collect a densely
annotated dataset, which can be expensive and cumbersome to collect. Finally,
we plan to release an easy-to-use toolbox which combines various publicly
available implementations of object proposal algorithms which standardizes the
proposal generation and evaluation so that new methods can be added and
evaluated on different datasets. We hope that the results presented in the
paper will motivate the community to test the category independence of various
object proposal methods by carefully choosing the evaluation protocol.
Comment: 15 pages, 11 figures, 4 tables
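The recall-based protocol the paper critiques can be sketched in a few lines. This is a minimal illustration, not the authors' toolbox code; the box format and the 0.5 IoU threshold are conventional assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def recall(ground_truth, proposals, thresh=0.5):
    """Fraction of annotated objects covered by at least one proposal.
    Objects missing from a partially annotated dataset never count,
    which is what makes the protocol gameable."""
    hit = sum(1 for g in ground_truth
              if any(iou(g, p) >= thresh for p in proposals))
    return hit / float(len(ground_truth))
```

A proposal method tuned to the annotated categories can score perfect recall here while ignoring every unannotated object in the image, which is the thought experiment's point.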
Towards Exascale Scientific Metadata Management
Advances in technology and computing hardware are enabling scientists from
all areas of science to produce massive amounts of data using large-scale
simulations or observational facilities. In this era of data deluge, effective
coordination between the data production and the analysis phases hinges on the
availability of metadata that describe the scientific datasets. Existing
workflow engines have been capturing a limited form of metadata to provide
provenance information about the identity and lineage of the data. However,
much of the data produced by simulations, experiments, and analyses still need
to be annotated manually in an ad hoc manner by domain scientists. Systematic
and transparent acquisition of rich metadata becomes a crucial prerequisite to
sustain and accelerate the pace of scientific innovation. Yet, ubiquitous and
domain-agnostic metadata management infrastructure that can meet the demands of
extreme-scale science is notable by its absence.
To address this gap in scientific data management research and practice, we
present our vision for an integrated approach that (1) automatically captures
and manipulates information-rich metadata while the data is being produced or
analyzed and (2) stores metadata within each dataset to permeate
metadata-oblivious processes and to query metadata through established and
standardized data access interfaces. We motivate the need for the proposed
integrated approach using applications from plasma physics, climate modeling
and neuroscience, and then discuss research challenges and possible solutions
- …
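The second ingredient of the vision, metadata stored within each dataset and queried through standard access interfaces, can be caricatured in a toy sketch. This is not the authors' system; the JSON layout and the key names are invented for illustration.

```python
import json

def write_dataset(path, data, metadata):
    """Store provenance metadata inside the dataset file itself, so any
    JSON-capable reader can inspect it without a separate catalog."""
    with open(path, "w") as f:
        json.dump({"metadata": metadata, "data": data}, f)

def query_metadata(path, key):
    """Read back a metadata field through the same standard file interface."""
    with open(path) as f:
        return json.load(f)["metadata"].get(key)
```

Because the metadata travels inside the file, even metadata-oblivious tools that copy or move the dataset preserve it, which is the "permeation" property the abstract argues for.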