Model Cards for Model Reporting
Trained machine learning models are increasingly used to perform high-impact
tasks in areas such as law enforcement, medicine, education, and employment. In
order to clarify the intended use cases of machine learning models and minimize
their usage in contexts for which they are not well suited, we recommend that
released models be accompanied by documentation detailing their performance
characteristics. In this paper, we propose a framework that we call model
cards, to encourage such transparent model reporting. Model cards are short
documents accompanying trained machine learning models that provide benchmarked
evaluation in a variety of conditions, such as across different cultural,
demographic, or phenotypic groups (e.g., race, geographic location, sex,
Fitzpatrick skin type) and intersectional groups (e.g., age and race, or sex
and Fitzpatrick skin type) that are relevant to the intended application
domains. Model cards also disclose the context in which models are intended to
be used, details of the performance evaluation procedures, and other relevant
information. While we focus primarily on human-centered machine learning models
in the application fields of computer vision and natural language processing,
this framework can be used to document any trained machine learning model. To
solidify the concept, we provide cards for two supervised models: one trained
to detect smiling faces in images, and one trained to detect toxic comments in
text. We propose model cards as a step towards the responsible democratization
of machine learning and related AI technology, increasing transparency into how
well AI technology works. We hope this work encourages those releasing trained
machine learning models to accompany model releases with similar detailed
evaluation numbers and other relevant documentation.
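
To make the idea concrete, here is a minimal sketch of a model card as a plain Python data structure. The keys mirror the section headings the paper proposes (model details, intended use, factors, metrics, evaluation data, training data, quantitative analyses, ethical considerations, caveats and recommendations); every concrete value below is hypothetical, not taken from the paper's example cards.

# Hypothetical sketch of a model card for the smiling-detection example.
# Section names follow the paper; all values are illustrative placeholders.
smiling_detector_card = {
    "model_details": "Convolutional binary classifier, smiling vs. not smiling.",
    "intended_use": "Consented photo collections; not for surveillance or "
                    "emotion inference.",
    "factors": ["Fitzpatrick skin type", "age group", "sex",
                "intersections such as sex and skin type"],
    "metrics": ["false positive rate", "false negative rate"],
    "evaluation_data": "Held-out benchmark annotated for the factors above.",
    "training_data": "Public face dataset; see its own documentation.",
    "quantitative_analyses": {},  # disaggregated results per group go here
    "ethical_considerations": "Error rates can differ across groups; report "
                              "them disaggregated rather than as one aggregate.",
    "caveats_and_recommendations": "Re-evaluate before any new deployment context.",
}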
Controlling for Unobserved Confounds in Classification Using Correlational Constraints
As statistical classifiers become integrated into real-world applications, it
is important to consider not only their accuracy but also their robustness to
changes in the data distribution. In this paper, we consider the case where
there is an unobserved confounding variable $z$ that influences both the
features $x$ and the class variable $y$. When the influence of $z$
changes from training to testing data, we find that the classifier accuracy can
degrade rapidly. In our approach, we assume that we can predict the value of
$z$ at training time with some error. The prediction for $z$ is then fed to
Pearl's back-door adjustment to build our model. Because of the attenuation
bias caused by measurement error in $z$, standard approaches to controlling for
$z$ are ineffective. In response, we propose a method to properly control for
the influence of $z$ by first estimating its relationship with the class
variable $y$, then updating predictions for $z$ to match that estimated
relationship. By adjusting the influence of $z$, we show that we can build a
model that exceeds competing baselines on accuracy as well as on robustness
over a range of confounding relationships.
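
For concreteness, here is a minimal Python sketch (using scikit-learn) of the basic back-door adjustment the abstract refers to: a classifier is fit on the features augmented with the predicted confounder, and test-time predictions average over the confounder's values instead of conditioning on whatever value it takes at test time. This shows only the plain adjustment; the paper's correction for measurement error in $z$ is not reproduced, and all names here are illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_backdoor(X, y, z_hat):
    # Fit p(y | x, z), using the (noisy) confounder prediction z_hat
    # as one extra feature column. X: (n, d) array, y and z_hat: (n,) arrays.
    Xz = np.hstack([X, z_hat.reshape(-1, 1)])
    model = LogisticRegression(max_iter=1000).fit(Xz, y)
    p_z1 = z_hat.mean()  # empirical p(z = 1) for a binary confounder
    return model, p_z1

def predict_backdoor(model, p_z1, X):
    # Back-door adjustment: p(y | do(x)) = sum over z of p(y | x, z) p(z).
    n = X.shape[0]
    p_y_z0 = model.predict_proba(np.hstack([X, np.zeros((n, 1))]))[:, 1]
    p_y_z1 = model.predict_proba(np.hstack([X, np.ones((n, 1))]))[:, 1]
    return (1 - p_z1) * p_y_z0 + p_z1 * p_y_z1

Because the adjusted prediction no longer tracks the test-time association between $z$ and $y$, it is less sensitive to that association shifting between training and testing.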
A Baseline for Shapley Values in MLPs: from Missingness to Neutrality
In many machine learning applications, being able to explain a prediction is
as important as having a model that performs well. Deep neural networks have
recently gained momentum on the basis of their accuracy, but they are often
criticised as black boxes. Many authors have focused on proposing
methods to explain their predictions. Among these explainability methods,
feature attribution methods have been favoured for their strong theoretical
foundation: the Shapley value. A limitation of the Shapley value is the need to
define a baseline (aka reference point) representing the missingness of a
feature. In this paper, we present a method to choose a baseline based on a
neutrality value: a threshold set by decision makers, whose choices depend on
whether the model's output falls below or above it. Based on this concept, we
theoretically justify these neutral baselines and
find a way to identify them for MLPs. Then, on a binary classification task
with a synthetic dataset and a dataset from the financial domain, we
experimentally demonstrate that the proposed baselines outperform standard
ways of choosing them in terms of local explainability power.
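
To make the role of the baseline explicit, here is a minimal exact Shapley-value computation in Python. It is exponential in the number of features, so it is only a reference implementation for small inputs; `model` is any callable mapping a feature vector to a scalar, and the paper's proposal amounts to passing the neutral point as `baseline` instead of, say, an all-zeros or mean vector. Names and setup are illustrative, not the paper's code.

import itertools
import math
import numpy as np

def shapley_values(model, x, baseline):
    # Exact Shapley values of model(x) relative to a chosen baseline.
    # x and baseline are 1-D numpy arrays of equal length. Features outside
    # a coalition are set to their baseline value, i.e. the baseline encodes
    # "missingness". Exponential in len(x), so only for small inputs.
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for size in range(d):
            for S in itertools.combinations(others, size):
                S = list(S)
                w = (math.factorial(len(S)) * math.factorial(d - len(S) - 1)
                     / math.factorial(d))
                x_without = baseline.copy()
                x_without[S] = x[S]       # coalition S keeps its true values
                x_with = x_without.copy()
                x_with[i] = x[i]          # add feature i to the coalition
                phi[i] += w * (model(x_with) - model(x_without))
    return phi

By the efficiency property, these attributions sum to model(x) minus the model's output at the baseline; with a neutral baseline, each attribution can thus be read as a push above or below the decision maker's neutrality value.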