Is this model reliable for everyone? Testing for strong calibration
In a well-calibrated risk prediction model, the average predicted probability
is close to the true event rate for any given subgroup. Such models are
reliable across heterogeneous populations and satisfy strong notions of
algorithmic fairness. However, the task of auditing a model for strong
calibration is well-known to be difficult -- particularly for machine learning
(ML) algorithms -- due to the sheer number of potential subgroups. As such,
common practice is to only assess calibration with respect to a few predefined
subgroups. Recent developments in goodness-of-fit testing offer potential
solutions but are not designed for settings with weak signal or where the
poorly calibrated subgroup is small, as they either overly subdivide the data
or fail to divide the data at all. We introduce a new testing procedure based
on the following insight: if we can reorder observations by their expected
residuals, there should be a change in the association between the predicted
and observed residuals along this sequence if a poorly calibrated subgroup
exists. This lets us reframe the problem of calibration testing into one of
changepoint detection, for which powerful methods already exist. We begin by
introducing a sample-splitting procedure where a portion of the data is used to
train a suite of candidate models for predicting the residual, and the
remaining data are used to perform a score-based cumulative sum (CUSUM) test.
To further improve power, we then extend this adaptive CUSUM test to
incorporate cross-validation, while maintaining Type I error control under
minimal assumptions. Compared to existing methods, the proposed procedure
consistently achieved higher power in simulation studies and more than doubled
the power when auditing a mortality risk prediction model.
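The reorder-then-detect idea can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the toy data, and the permutation null are assumptions of this sketch, whereas the paper develops score-based CUSUM tests with sample splitting and cross-validation.

```python
import numpy as np

def cusum_statistic(resid):
    """Max absolute standardized partial sum of centered residuals."""
    partial_sums = np.cumsum(resid - resid.mean())
    return np.max(np.abs(partial_sums)) / (resid.std(ddof=1) * np.sqrt(len(resid)))

def calibration_cusum_test(pred_resid, obs_resid, n_perm=1000, seed=1):
    """Order observations by their predicted residual, then test for a
    changepoint in the observed residuals along that ordering. Under good
    calibration the ordering carries no signal, so a permutation null is valid."""
    rng = np.random.default_rng(seed)
    order = np.argsort(pred_resid)
    stat = cusum_statistic(obs_resid[order])
    null = np.array([cusum_statistic(rng.permutation(obs_resid))
                     for _ in range(n_perm)])
    p_value = (1 + np.sum(null >= stat)) / (1 + n_perm)
    return stat, p_value

# Toy audit: the model predicts a flat risk of 0.2 and thereby misses a
# small high-risk subgroup (x > 1, roughly 16% of observations).
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = rng.binomial(1, 0.2 + 0.4 * (x > 1))   # true risk jumps in the subgroup
obs_resid = y - 0.2                         # observed minus predicted risk
pred_resid = 0.4 * (x > 1)                  # stand-in for a trained residual model
stat, p_value = calibration_cusum_test(pred_resid, obs_resid)
```

Sorting by the predicted residual pushes the miscalibrated subgroup to one end of the sequence, so the cumulative sum of observed residuals drifts away from zero there, which is exactly what a CUSUM statistic detects.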
Early changes in diaphragmatic function evaluated using ultrasound in cardiac surgery patients: a cohort study.
Little is known about the evolution of diaphragmatic function in the early post-cardiac surgery period. The main purpose of this work is to describe its evolution using ultrasound measurements of muscular excursion and thickening fraction (TF). Single-center prospective study of 79 consecutive uncomplicated elective cardiac surgery patients, using motion-mode during quiet unassisted breathing. Excursion and TF were measured sequentially for each patient [pre-operative (D1), 1 day (D2) and 5 days (D3) after surgery]. Pre-operative medians for right and left hemidiaphragmatic excursion were 1.8 (IQR 1.6 to 2.1) cm and 1.7 (1.4 to 2.0) cm, respectively. Pre-operative median right and left thickening fractions were 28 (19 to 36)% and 33 (22 to 51)%, respectively. At D2, there was a reduction in both excursion (right: 1.5 (1.1 to 1.8) cm, p < 0.001; left: 1.5 (1.1 to 1.8) cm, p = 0.003) and thickening fraction (right: 20 (15 to 34)%, p = 0.021; left: 24 (17 to 39)%, p = 0.002), followed by a return to pre-operative values at D3. A moderate positive correlation was found between excursion and thickening fraction (Spearman's rho 0.518 for the right and 0.548 for the left hemidiaphragm, p < 0.001). Interobserver reliability yielded a bias below 0.1 cm with limits of agreement (LOA) of ±0.3 cm for excursion, and -2% with LOA of ±21% for thickening fraction. After cardiac surgery, the evolution of diaphragmatic function is characterized by a transient impairment followed by a quick recovery. Although ultrasound diaphragmatic excursion and thickening fraction are correlated, excursion seems to be the more feasible and reproducible method in this population.
A Brief Tutorial on Sample Size Calculations for Fairness Audits
In fairness audits, a standard objective is to detect whether a given
algorithm performs substantially differently between subgroups. Properly
powering the statistical analysis of such audits is crucial for obtaining
informative fairness assessments, as it ensures a high probability of detecting
unfairness when it exists. However, limited guidance is available on the amount
of data needed for a fairness audit: directly applicable results for commonly
used fairness metrics are lacking, as is any treatment of unequal subgroup
sample sizes. In this tutorial, we address
these issues by providing guidance on how to determine the required subgroup
sample sizes to maximize the statistical power of hypothesis tests for
detecting unfairness. Our findings are applicable to audits of binary
classification models and multiple fairness metrics derived as summaries of the
confusion matrix. Furthermore, we discuss other aspects of audit study designs
that can increase the reliability of audit results.

Comment: 4 pages, 1 figure, 1 table; Workshop on Regulatable Machine Learning
at the 37th Conference on Neural Information Processing Systems
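The core calculation such a tutorial covers can be sketched with the classical normal-approximation sample-size formula for comparing two proportions, which handles unequal subgroup sizes via an allocation ratio. This is a sketch under stated assumptions: `subgroup_sizes` and the TPR example are illustrative names and numbers, not taken from the paper.

```python
from math import ceil, sqrt
from statistics import NormalDist

def subgroup_sizes(p1, p2, ratio=1.0, alpha=0.05, power=0.80):
    """Per-subgroup sample sizes for a two-sided two-proportion z-test of
    a rate-based fairness metric (e.g. TPR), with n2 = ratio * n1.
    Classical normal-approximation formula for unequal group sizes."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)     # critical value
    z_b = NormalDist().inv_cdf(power)             # power term
    p_bar = (p1 + ratio * p2) / (1 + ratio)       # pooled proportion
    n1 = (z_a * sqrt((1 + 1 / ratio) * p_bar * (1 - p_bar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2) / ratio)) ** 2 / (p1 - p2) ** 2
    return ceil(n1), ceil(ratio * n1)

# Audit goal: detect a 10-point TPR gap (0.75 vs 0.85) when the minority
# subgroup is a quarter the size of the majority subgroup.
n_minority, n_majority = subgroup_sizes(0.75, 0.85, ratio=4.0)
```

The returned counts are positives per subgroup (TPR is estimated on positives only), which is why audits of rare-outcome subgroups often need far more raw data than a naive total-sample calculation suggests.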
Expert-Augmented Machine Learning
Machine Learning is proving invaluable across disciplines. However, its
success is often limited by the quality and quantity of available data, while
its adoption is limited by the level of trust that models afford users. Human vs. machine
performance is commonly compared empirically to decide whether a certain task
should be performed by a computer or an expert. In reality, the optimal
learning strategy may involve combining the complementary strengths of humans
and machines. Here we present Expert-Augmented Machine Learning (EAML), an automated
method that guides the extraction of expert knowledge and its integration into
machine-learned models. We use a large dataset of intensive care patient data
to predict mortality and show that we can extract expert knowledge using an
online platform, help reveal hidden confounders, improve generalizability on a
different population and learn using less data. EAML presents a novel framework
for high-performance, dependable machine learning in critical applications.
Expert-augmented machine learning.
Machine learning is proving invaluable across disciplines. However, its success is often limited by the quality and quantity of available data, while its adoption is limited by the level of trust afforded by given models. Human vs. machine performance is commonly compared empirically to decide whether a certain task should be performed by a computer or an expert. In reality, the optimal learning strategy may involve combining the complementary strengths of humans and machines. Here, we present expert-augmented machine learning (EAML), an automated method that guides the extraction of expert knowledge and its integration into machine-learned models. We used a large dataset of intensive-care patient data to derive 126 decision rules that predict hospital mortality. Using an online platform, we asked 15 clinicians to assess the relative risk of the subpopulation defined by each rule compared to the total sample. We compared the clinician-assessed risk to the empirical risk and found that, while clinicians agreed with the data in most cases, there were notable exceptions where they overestimated or underestimated the true risk. Studying the rules with greatest disagreement, we identified problems with the training data, including one miscoded variable and one hidden confounder. Filtering the rules based on the extent of disagreement between clinician-assessed risk and empirical risk, we improved performance on out-of-sample data and were able to train with less data. EAML provides a platform for automated creation of problem-specific priors, which help build robust and dependable machine-learning models in critical applications.
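The filtering step described above, keeping rules where expert-assessed and empirical risk agree and flagging the rest for inspection, can be sketched as follows. All names and relative-risk values here are hypothetical illustrations; the paper's actual pipeline derives the rules from data and collects expert ratings on an online platform.

```python
def filter_rules(rules, expert_rr, empirical_rr, max_gap=1.0):
    """Keep rules where expert-assessed and empirical relative risk agree
    within max_gap; large gaps flag possible data problems (miscoded
    variables, hidden confounders) or genuinely surprising associations."""
    kept, flagged = [], []
    for rule in rules:
        gap = abs(expert_rr[rule] - empirical_rr[rule])
        (kept if gap <= max_gap else flagged).append(rule)
    return kept, flagged

# Hypothetical decision rules with expert vs. empirical relative risk.
expert    = {"age>75 & lactate>4.0": 3.0, "admit_source=clinic": 1.1}
empirical = {"age>75 & lactate>4.0": 3.2, "admit_source=clinic": 4.5}
kept, flagged = filter_rules(list(expert), expert, empirical)
```

A flagged rule like the second one, where the data imply a risk far above what clinicians find plausible, is exactly the kind of candidate for a miscoded variable or hidden confounder that the abstract describes.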