
    Do Invariances in Deep Neural Networks Align with Human Perception?

    An evaluation criterion for safe and trustworthy deep learning is how well the invariances captured by representations of deep neural networks (DNNs) are shared with humans. We identify challenges in measuring these invariances. Prior works used gradient-based methods to generate identically represented inputs (IRIs), i.e., inputs which have identical representations (on a given layer) of a neural network, and thus capture invariances of a given network. One necessary criterion for a network's invariances to align with human perception is for its IRIs to look “similar” to humans. Prior works, however, have mixed takeaways; some argue that later layers of DNNs do not learn human-like invariances, yet others seem to indicate otherwise. We argue that the loss function used to generate IRIs can heavily affect takeaways about the invariances of the network and is the primary reason for these conflicting findings. We propose an adversarial regularizer on the IRI-generation loss that finds IRIs that make any model appear to have very little shared invariance with humans. Based on this evidence, we argue that there is scope for improving models to have human-like invariances, and further, that for meaningful comparisons between models one should use IRIs generated using the regularizer-free loss. We then conduct an in-depth investigation of how different components (e.g., architectures, training losses, data augmentations) of the deep learning pipeline contribute to learning models that have good alignment with humans. We find that architectures with residual connections trained using a (self-supervised) contrastive loss with ℓp-ball adversarial data augmentation tend to learn invariances that are most aligned with humans. Code: github.com/nvedant07/Human-NN-Alignment. We strongly recommend reading the arxiv version of this paper: https://arxiv.org/abs/2111.14726
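    A minimal PyTorch sketch of the gradient-based IRI generation the abstract describes, assuming a callable model_layer that returns the representation at the layer of interest; the optimizer settings and the MSE objective here are illustrative stand-ins for the regularizer-free loss, and the paper's adversarial regularizer (which would add a term pushing the IRI to look unlike the target) is omitted.

```python
import torch

def generate_iri(model_layer, x_target, x_seed, steps=500, lr=0.1):
    """Optimize x_seed so that model_layer(x) matches model_layer(x_target)."""
    with torch.no_grad():
        target_rep = model_layer(x_target)   # representation to reproduce
    x = x_seed.clone().requires_grad_(True)  # start from a different image
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model_layer(x), target_rep)
        loss.backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)              # keep x a valid image
    return x.detach()
```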

    A unified approach to quantifying algorithmic unfairness: Measuring individual & group unfairness via inequality indices

    Discrimination via algorithmic decision making has received considerable attention. Prior work largely focuses on defining conditions for fairness, but does not define satisfactory measures of algorithmic unfairness. In this paper, we focus on the following question: Given two unfair algorithms, how should we determine which of the two is more unfair? Our core idea is to use existing inequality indices from economics to measure how unequally the outcomes of an algorithm benefit different individuals or groups in a population. Our work offers a justified and general framework to compare and contrast the (un)fairness of algorithmic predictors. This unifying approach enables us to quantify unfairness both at the individual and the group level. Further, our work reveals overlooked tradeoffs between different fairness notions: using our proposed measures, the overall individual-level unfairness of an algorithm can be decomposed into a between-group and a within-group component. Earlier methods are typically designed to tackle only between-group unfairness, which may be justified for legal or other reasons. However, we demonstrate that minimizing exclusively the between-group component may, in fact, increase the within-group component, and hence the overall unfairness. We characterize and illustrate the tradeoffs between our measures of (un)fairness and the prediction accuracy.
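    A worked sketch of the core quantity, under stated assumptions: the generalized entropy index over individual benefits (the paper uses b_i = yhat_i - y_i + 1) and its standard decomposition into between-group and within-group components; the toy data are invented for illustration.

```python
import numpy as np

def generalized_entropy(b, alpha=2):
    """Generalized entropy index GE(alpha) of a benefit vector b."""
    b = np.asarray(b, dtype=float)
    mu = b.mean()
    return ((b / mu) ** alpha - 1).sum() / (len(b) * alpha * (alpha - 1))

def decompose(b, groups, alpha=2):
    """Split overall GE(alpha) into between-group and within-group parts."""
    b, groups = np.asarray(b, dtype=float), np.asarray(groups)
    n, mu = len(b), b.mean()
    # Between-group term: replace every benefit by its group's mean.
    smoothed = np.concatenate([np.full((groups == g).sum(), b[groups == g].mean())
                               for g in np.unique(groups)])
    between = generalized_entropy(smoothed, alpha)
    within = sum(((groups == g).sum() / n) * (b[groups == g].mean() / mu) ** alpha
                 * generalized_entropy(b[groups == g], alpha)
                 for g in np.unique(groups))
    return between, within

# Toy example with benefit b_i = yhat_i - y_i + 1, as in the paper.
y, yhat = np.array([1, 0, 1, 0, 1, 0]), np.array([1, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 1, 1, 1])
b = yhat - y + 1
print(generalized_entropy(b), sum(decompose(b, groups)))  # the two agree
```

    Driving only the between-group term to zero (as parity-style methods do) leaves the within-group term free to grow, which is the tradeoff the abstract highlights.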

    Determinants of recovery from post-COVID-19 dyspnoea: analysis of UK prospective cohorts of hospitalised COVID-19 patients and community-based controls

    Background The risk factors for recovery from COVID-19 dyspnoea are poorly understood. We investigated determinants of recovery from dyspnoea in adults with COVID-19 and compared these to determinants of recovery from non-COVID-19 dyspnoea. Methods We used data from two prospective cohort studies: PHOSP-COVID (patients hospitalised between March 2020 and April 2021 with COVID-19) and COVIDENCE UK (community cohort studied over the same time period). PHOSP-COVID data were collected during hospitalisation and at 5-month and 1-year follow-up visits. COVIDENCE UK data were obtained through baseline and monthly online questionnaires. Dyspnoea was measured in both cohorts with the Medical Research Council Dyspnoea Scale. We used multivariable logistic regression to identify determinants associated with a reduction in dyspnoea between 5-month and 1-year follow-up. Findings We included 990 PHOSP-COVID and 3309 COVIDENCE UK participants. We observed higher odds of improvement between 5-month and 1-year follow-up among PHOSP-COVID participants who were younger (odds ratio 1.02 per year, 95% CI 1.01–1.03), male (1.54, 1.16–2.04), neither obese nor severely obese (1.82, 1.06–3.13 and 4.19, 2.14–8.19, respectively), had no pre-existing anxiety or depression (1.56, 1.09–2.22) or cardiovascular disease (1.33, 1.00–1.79), and shorter hospital admission (1.01 per day, 1.00–1.02). Similar associations were found in those recovering from non-COVID-19 dyspnoea, excluding age (and length of hospital admission). Interpretation Factors associated with dyspnoea recovery at 1 year post-discharge among patients hospitalised with COVID-19 were similar to those among community controls without COVID-19. Funding PHOSP-COVID is supported by a grant from the MRC-UK Research and Innovation and the Department of Health and Social Care through the National Institute for Health Research (NIHR) rapid response panel to tackle COVID-19. The views expressed in the publication are those of the author(s) and not necessarily those of the National Health Service (NHS), the NIHR or the Department of Health and Social Care. COVIDENCE UK is supported by the UK Research and Innovation, the National Institute for Health Research, and Barts Charity. The views expressed are those of the authors and not necessarily those of the funders.
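    The analysis pipeline is multivariable logistic regression reported as odds ratios with 95% CIs. A minimal sketch of that pipeline with statsmodels, hedged: the file and column names below are hypothetical stand-ins, not the study's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("phosp_covid_5m_1y.csv")         # hypothetical analysis dataset
X = sm.add_constant(df[["age", "male", "obese", "anxiety_or_depression",
                        "cardiovascular_disease", "admission_days"]])
fit = sm.Logit(df["dyspnoea_improved"], X).fit()  # 1 = improved by 1 year
ci = fit.conf_int()                               # columns: lower, upper bound
ors = pd.DataFrame({"OR": np.exp(fit.params),
                    "CI_low": np.exp(ci[0]), "CI_high": np.exp(ci[1])})
print(ors.drop("const"))                          # odds ratios with 95% CIs
```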

    Cohort Profile: Post-Hospitalisation COVID-19 (PHOSP-COVID) study


    Clinical characteristics with inflammation profiling of long COVID and association with 1-year recovery following hospitalisation in the UK: a prospective observational study

    Background No effective pharmacological or non-pharmacological interventions exist for patients with long COVID. We aimed to describe recovery 1 year after hospital discharge for COVID-19, identify factors associated with patient-perceived recovery, and identify potential therapeutic targets by describing the underlying inflammatory profiles of the previously described recovery clusters at 5 months after hospital discharge. Methods The Post-hospitalisation COVID-19 study (PHOSP-COVID) is a prospective, longitudinal cohort study recruiting adults (aged ≥18 years) discharged from hospital with COVID-19 across the UK. Recovery was assessed using patient-reported outcome measures, physical performance, and organ function at 5 months and 1 year after hospital discharge, and stratified by both patient-perceived recovery and recovery cluster. Hierarchical logistic regression modelling was performed for patient-perceived recovery at 1 year. Cluster analysis was done using the clustering large applications k-medoids approach using clinical outcomes at 5 months. Inflammatory protein profiling was analysed from plasma at the 5-month visit. This study is registered on the ISRCTN Registry, ISRCTN10980107, and recruitment is ongoing. Findings 2320 participants discharged from hospital between March 7, 2020, and April 18, 2021, were assessed at 5 months after discharge and 807 (32·7%) participants completed both the 5-month and 1-year visits. 279 (35·6%) of these 807 patients were women and 505 (64·4%) were men, with a mean age of 58·7 (SD 12·5) years, and 224 (27·8%) had received invasive mechanical ventilation (WHO class 7–9). The proportion of patients reporting full recovery was unchanged between 5 months (501 [25·5%] of 1965) and 1 year (232 [28·9%] of 804). Factors associated with being less likely to report full recovery at 1 year were female sex (odds ratio 0·68 [95% CI 0·46–0·99]), obesity (0·50 [0·34–0·74]) and invasive mechanical ventilation (0·42 [0·23–0·76]). Cluster analysis (n=1636) corroborated the previously reported four clusters: very severe, severe, moderate with cognitive impairment, and mild, relating to the severity of physical health, mental health, and cognitive impairment at 5 months. We found increased inflammatory mediators of tissue damage and repair in both the very severe and the moderate with cognitive impairment clusters compared with the mild cluster, including IL-6 concentration, which was increased in both comparisons (n=626 participants). We found a substantial deficit in median EQ-5D-5L utility index from before COVID-19 (retrospective assessment; 0·88 [IQR 0·74–1·00]) to 5 months (0·74 [0·64–0·88]) and 1 year (0·75 [0·62–0·88]), with minimal improvements across all outcome measures at 1 year after discharge in the whole cohort and within each of the four clusters. Interpretation The sequelae of a hospital admission with COVID-19 were substantial 1 year after discharge across a range of health domains, with the minority in our cohort feeling fully recovered. Patient-perceived health-related quality of life was reduced at 1 year compared with before hospital admission. Systemic inflammation and obesity are potential treatable traits that warrant further investigation in clinical trials. Funding UK Research and Innovation and National Institute for Health Research.
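    A minimal sketch of the k-medoids clustering step, assuming a table of standardized 5-month outcome scores. scikit-learn-extra's KMedoids is used here as a stand-in for the CLARA (clustering large applications) procedure, which runs PAM on subsamples; the file and column names are hypothetical.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn_extra.cluster import KMedoids

df = pd.read_csv("phosp_5month_outcomes.csv")     # hypothetical outcome table
X = StandardScaler().fit_transform(
    df[["physical_health", "mental_health", "cognition"]])
km = KMedoids(n_clusters=4, random_state=0).fit(X)
df["cluster"] = km.labels_   # e.g. very severe / severe / moderate+cognitive / mild
```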

    Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning

    With widespread use of machine learning methods in numerous domains involving humans, several studies have raised questions about the potential for unfairness towards certain individuals or groups. A number of recent works have proposed methods to measure and eliminate unfairness from machine learning models. However, most of this work has focused on only one dimension of fair decision making: distributive fairness, i.e., the fairness of the decision outcomes. In this work, we leverage the rich literature on organizational justice and focus on another dimension of fair decision making: procedural fairness, i.e., the fairness of the decision making process. We propose measures for procedural fairness that consider the input features used in the decision process, and evaluate the moral judgments of humans regarding the use of these features. We operationalize these measures on two real-world datasets using human surveys on the Amazon Mechanical Turk (AMT) platform, demonstrating that our measures capture important properties of procedurally fair decision making. We provide fast submodular mechanisms to optimize the tradeoff between procedural fairness and prediction accuracy. On our datasets, we observe empirically that procedural fairness may be achieved with little cost to outcome fairness, but that some loss of accuracy is unavoidable.
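    A hedged sketch of the greedy selection such submodular mechanisms typically rely on: features are added one at a time by marginal gain on a combined objective. The fairness_score and accuracy_of helpers are hypothetical stand-ins for the paper's procedural-fairness measures and accuracy estimates.

```python
def greedy_select(features, k, fairness_score, accuracy_of, lam=0.5):
    """Greedily add the feature with the largest marginal combined gain."""
    selected = []
    for _ in range(k):
        def gain(f):
            candidate = selected + [f]
            return (lam * fairness_score(candidate)
                    + (1 - lam) * accuracy_of(candidate))
        best = max((f for f in features if f not in selected), key=gain)
        selected.append(best)
    return selected
```

    When the combined objective is monotone and submodular, this greedy rule carries the classic (1 - 1/e) approximation guarantee, which is presumably what makes such mechanisms both fast and principled.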

    Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction

    As algorithms are increasingly used to make important decisions that affect human lives, ranging from social benefit assignment to predicting risk of criminal recidivism, concerns have been raised about the fairness of algorithmic decision making. Most prior works on algorithmic fairness normatively prescribe how fair decisions ought to be made. In contrast, here, we descriptively survey users for how they perceive and reason about fairness in algorithmic decision making. A key contribution of this work is the framework we propose to understand why people perceive certain features as fair or unfair to be used in algorithms. Our framework identifies eight properties of features, such as relevance, volitionality, and reliability, as latent considerations that inform people's moral judgments about the fairness of feature use in decision-making algorithms. We validate our framework through a series of scenario-based surveys with 576 people. We find that, based on a person's assessment of the eight latent properties of a feature in our exemplar scenario, we can accurately (>85%) predict if the person will judge the use of the feature as fair. Our findings have important implications. At a high level, we show that people's unfairness concerns are multi-dimensional and argue that future studies need to address unfairness concerns beyond discrimination. At a low level, we find considerable disagreements in people's fairness judgments. We identify root causes of the disagreements, and note possible pathways to resolve them.
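    A minimal, hypothetical sketch of the prediction task the abstract reports: classifying a respondent's fair/unfair judgment from their ratings of the eight latent properties. The column and property names below are assumptions for illustration, not the paper's released data or exact model.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("fairness_judgments.csv")         # hypothetical survey responses
props = ["relevance", "volitionality", "reliability", "privacy",
         "causes_outcome", "causes_vicious_cycle", "causes_disparity",
         "caused_by_sensitive_feature"]             # assumed property columns
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, df[props], df["judged_fair"], cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")       # paper reports >85%
```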

    On Fairness, Diversity and Randomness in Algorithmic Decision Making

    Consider a binary decision making process where a single machine learning classifier replaces a multitude of humans. We raise questions about the resulting loss of diversity in the decision making process. We study the potential benefits of using random classifier ensembles instead of a single classifier in the context of fairness-aware learning and demonstrate various attractive properties: (i) an ensemble of fair classifiers is guaranteed to be fair, for several different measures of fairness, (ii) an ensemble of unfair classifiers can still achieve fair outcomes, and (iii) an ensemble of classifiers can achieve better accuracy-fairness trade-offs than a single classifier. Finally, we introduce notions of distributional fairness to characterize further potential benefits of random classifier ensembles.
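    A small sketch of the random-ensemble idea, with hypothetical helpers: each instance is routed to a uniformly random member, so any group's acceptance rate under the ensemble is, in expectation, the average of the members' rates. Property (i) then follows for demographic parity, since averaging equal per-group rates preserves their equality.

```python
import numpy as np

def ensemble_predict(classifiers, X, seed=0):
    """Route each instance to a uniformly random member classifier."""
    rng = np.random.default_rng(seed)
    picks = rng.integers(len(classifiers), size=len(X))
    return np.array([classifiers[k].predict(X[i:i + 1])[0]
                     for i, k in enumerate(picks)])

def acceptance_rate(y_pred, group_mask):
    """P(positive decision | group); equal across groups under parity."""
    return y_pred[group_mask].mean()
```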

    From Parity to Preference-based Notions of Fairness in Classification

    The adoption of automated, data-driven decision making in an ever-expanding range of applications has raised concerns about its potential unfairness towards certain social groups. In this context, a number of recent studies have focused on defining, detecting, and removing unfairness from data-driven decision systems. However, the existing notions of fairness, based on parity (equality) in treatment or outcomes for different social groups, tend to be quite stringent, limiting the overall decision making accuracy. In this paper, we draw inspiration from the fair-division and envy-freeness literature in economics and game theory and propose preference-based notions of fairness: given the choice between various sets of decision treatments or outcomes, any group of users would collectively prefer its treatment or outcomes, regardless of the (dis)parity as compared to the other groups. Then, we introduce tractable proxies to design margin-based classifiers that satisfy these preference-based notions of fairness. Finally, we experiment with a variety of synthetic and real-world datasets and show that preference-based fairness allows for greater decision accuracy than parity-based fairness.
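    A hedged sketch of checking one preference-based notion in the envy-freeness spirit: each group should gain at least as much from its own decision rule as it would from any other group's. Acceptance rate is used as a simple benefit proxy, and the helper names and data layout are hypothetical, not the paper's exact formulation.

```python
def group_benefit(clf, X_group):
    """Fraction of the group receiving the positive decision."""
    return clf.predict(X_group).mean()

def satisfies_preferred_treatment(clfs_by_group, X_by_group):
    """True if no group would rather be judged by another group's rule."""
    return all(
        group_benefit(clfs_by_group[g], X_by_group[g]) >=
        group_benefit(clfs_by_group[h], X_by_group[g])
        for g in clfs_by_group for h in clfs_by_group)
```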

    An Empirical Study on Learning Fairness Metrics for COMPAS Data with Human Supervision

    The notion of individual fairness requires that similar people receive similar treatment. However, this is hard to achieve in practice, since it is difficult to specify the appropriate similarity metric. In this work, we attempt to learn such a similarity metric from human-annotated data. We gather a new dataset of human judgments on a criminal recidivism prediction (COMPAS) task. Assuming that the human supervision obeys the principle of individual fairness, we leverage prior work on metric learning, evaluate the performance of several metric learning methods on our dataset, and show that the learned metrics outperform the Euclidean and precision metrics under various criteria. We do not claim to directly learn a similarity metric that satisfies individual fairness; rather, we provide an empirical study of how such a metric can be derived from human supervision, which future work can use as a tool to understand human supervision.
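    A minimal sketch of one way to learn such a metric from human-labelled pairs, using the metric-learn package's MMC learner as an example; the data files are hypothetical, and the paper evaluates several metric-learning methods rather than this one specifically.

```python
import numpy as np
from metric_learn import MMC

pairs = np.load("compas_pairs.npy")     # (n_pairs, 2, n_features), hypothetical
labels = np.load("pair_labels.npy")     # +1 = judged similar, -1 = dissimilar
mmc = MMC().fit(pairs, labels)
M = mmc.get_mahalanobis_matrix()        # learned PSD matrix
diff = pairs[0, 0] - pairs[0, 1]
print(np.sqrt(diff @ M @ diff))         # distance under the learned metric
```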