Learning Interpretable Models Using an Oracle
We look at a specific aspect of model interpretability: models often need to
be constrained in size for them to be considered interpretable. But smaller
models also tend to have high bias. This suggests a trade-off between
interpretability and accuracy. Our work addresses this by: (a) showing that
learning a training distribution (often different from the test distribution)
can often increase the accuracy of small models, and therefore may be used as a
strategy to compensate for their small size, and (b) providing a model-agnostic
algorithm to learn such training distributions. We pose the distribution
learning problem as one of optimizing parameters for an Infinite Beta Mixture
Model based on a Dirichlet Process, so that the held-out accuracy of a model
trained on a sample from this distribution is maximized. To make computation
tractable, we project the training data onto one dimension: prediction
uncertainty scores as provided by a highly accurate oracle model. A Bayesian
Optimizer is used for learning the parameters. Empirical results using multiple
real-world datasets, various oracles, and interpretable models with different
notions of model size are presented. We observe significant relative
improvements in F1-score in most cases, occasionally seeing improvements
greater than 100% over baselines. Additionally, we show that the proposed
algorithm provides the following benefits: (a) it is a framework that allows
flexibility in implementation, (b) it can be used across feature spaces; for
example, the text classification accuracy of a Decision Tree that uses
character n-grams improves when the oracle is a Gated Recurrent Unit that takes
a sequence of characters as input, (c) it can be used to train models with a
non-differentiable training loss, e.g., Decision Trees, and (d) reasonable
defaults exist for most parameters, which makes the algorithm convenient to
use.
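To make the procedure concrete, here is a minimal sketch (not the authors' code) of the oracle-guided sampling loop under simplifying assumptions: a single Beta component stands in for the paper's Dirichlet-Process mixture of Betas, scikit-optimize's gp_minimize stands in for the Bayesian Optimizer, the oracle is assumed to expose predict_proba, and the variables oracle, X_tr, y_tr, X_val, y_val are hypothetical.

```python
# Sketch only: oracle-uncertainty-guided sampling for training a small model.
# A single Beta density replaces the paper's Dirichlet-Process Beta mixture;
# gp_minimize replaces the Bayesian Optimizer used in the paper.
import numpy as np
from scipy.stats import beta
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score
from skopt import gp_minimize

def oracle_uncertainty(oracle, X):
    """Per-example prediction entropy from the oracle, rescaled to (0, 1)."""
    p = np.clip(oracle.predict_proba(X), 1e-9, 1.0)
    h = -(p * np.log(p)).sum(axis=1)
    return (h - h.min()) / (h.max() - h.min() + 1e-9)

def sample_and_score(params, X_tr, y_tr, X_val, y_val, u,
                     n_sample=2000, depth=3):
    """Sample a training set from a Beta density over oracle uncertainty,
    fit a small model, and return the (negated) held-out macro F1."""
    a, b = params
    w = beta.pdf(np.clip(u, 1e-3, 1 - 1e-3), a, b)
    w = w / w.sum()
    idx = np.random.choice(len(X_tr), size=n_sample, replace=True, p=w)
    small = DecisionTreeClassifier(max_depth=depth).fit(X_tr[idx], y_tr[idx])
    return -f1_score(y_val, small.predict(X_val), average="macro")

# Hypothetical usage (oracle, X_tr, y_tr, X_val, y_val assumed to exist):
# u = oracle_uncertainty(oracle, X_tr)
# res = gp_minimize(lambda p: sample_and_score(p, X_tr, y_tr, X_val, y_val, u),
#                   dimensions=[(0.5, 10.0), (0.5, 10.0)], n_calls=30)
```

The negation of the F1-score is only because gp_minimize minimizes its objective; the shape of the Beta density (parameters a, b) is what the optimizer tunes, which mirrors the abstract's idea of learning a training distribution over the one-dimensional uncertainty projection.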
Abduction-Based Explanations for Machine Learning Models
The growing range of applications of Machine Learning (ML) in a multitude of
settings motivates the need to compute small explanations for the predictions
made. Small explanations are generally accepted as easier for human decision
makers to understand. Most earlier work on computing explanations is based on
heuristic approaches, providing no guarantees of quality in terms of how close
such solutions are to cardinality- or subset-minimal explanations. This paper
develops a constraint-agnostic solution for computing explanations for any ML
model. The proposed solution exploits abductive reasoning, and imposes the
requirement that the ML model can be represented as sets of constraints using
some target constraint reasoning system for which the decision problem can be
answered with some oracle. The experimental results, obtained on well-known
datasets, validate the scalability of the proposed approach as well as the
quality of the computed solutions.
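A common way to realize such an abductive scheme is a deletion-based loop that shrinks a full instance down to a subset-minimal explanation. The sketch below is an illustration under that reading, not the paper's implementation; the entails callback is a hypothetical stand-in for the constraint-reasoning oracle (e.g., an SMT/SAT/MILP query over the model's constraint encoding).

```python
# Sketch only: deletion-based extraction of a subset-minimal abductive
# explanation. `entails(partial)` is a hypothetical oracle callback returning
# True iff the model's constraint encoding, together with the given partial
# feature assignment, forces the predicted class.
from typing import Callable, Dict, Hashable

def minimal_explanation(instance: Dict[Hashable, object],
                        entails: Callable[[Dict[Hashable, object]], bool]
                        ) -> Dict[Hashable, object]:
    """Drop features one by one, keeping only those needed for entailment."""
    assert entails(instance), "the full instance must entail its prediction"
    expl = dict(instance)
    for feat in list(instance):
        value = expl.pop(feat)      # tentatively free this feature
        if not entails(expl):       # prediction no longer forced?
            expl[feat] = value      # then the feature is necessary
    return expl                     # subset-minimal w.r.t. the oracle
```

Each feature triggers one oracle call, so the loop is linear in the number of features; the quality guarantee (subset-minimality) comes from the oracle answering the entailment decision problem exactly, which is what the abstract's constraint-representation requirement provides.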
Conditional Similarity Networks
What makes images similar? To measure the similarity between images, they are
typically embedded in a feature-vector space, in which their distance preserves
the relative dissimilarity. However, when learning such similarity embeddings,
the simplifying assumption is commonly made that images are only compared via
one unique measure of similarity. A main reason for this is that contradicting
notions of similarity cannot be captured in a single space. To address this
shortcoming, we propose Conditional Similarity Networks (CSNs) that learn
embeddings differentiated into semantically distinct subspaces that capture the
different notions of similarities. CSNs jointly learn a disentangled embedding
where features for different similarities are encoded in separate dimensions as
well as masks that select and reweight relevant dimensions to induce a subspace
that encodes a specific similarity notion. We show that our approach learns
interpretable image representations with visually relevant semantic subspaces.
Further, when evaluating on triplet questions from multiple similarity notions,
our model even outperforms the accuracy obtained by training individual
specialized networks for each notion separately.
Comment: CVPR 2017
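One way to read the architecture is the following hedged PyTorch sketch (not the authors' code): a shared backbone produces the disentangled embedding, one learnable non-negative mask per similarity notion reweights its dimensions, and a triplet margin loss is applied in the masked subspace. The backbone module, the notion-index tensor c, and all dimensions are assumptions for illustration.

```python
# Sketch only: masked conditional embedding with a per-notion triplet loss.
# `backbone` is any image encoder returning an (batch, embed_dim) tensor;
# `c` is a LongTensor of per-example similarity-notion indices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalSimilarityNet(nn.Module):
    def __init__(self, backbone: nn.Module, embed_dim: int, n_notions: int):
        super().__init__()
        self.backbone = backbone                                  # shared encoder
        self.masks = nn.Parameter(torch.rand(n_notions, embed_dim))  # one mask per notion

    def forward(self, x: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        z = self.backbone(x)            # disentangled embedding
        m = F.relu(self.masks[c])       # non-negative mask selects/reweights dims
        return z * m                    # projection into the notion's subspace

def csn_triplet_loss(net, anchor, pos, neg, c, margin=0.2):
    """Triplet margin loss computed in the subspace of notion c."""
    a, p, n = net(anchor, c), net(pos, c), net(neg, c)
    return F.triplet_margin_loss(a, p, n, margin=margin)
```

Because the masks are learned jointly with the embedding, dimensions relevant to one notion can be suppressed for another, which is the mechanism the abstract describes for keeping contradicting similarity notions in separate, interpretable subspaces.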