
    A Discussion of Discrimination and Fairness in Insurance Pricing

    Indirect discrimination is an issue of major concern in algorithmic models. This is particularly the case in insurance pricing, where protected policyholder characteristics are not allowed to be used for pricing. Simply disregarding protected policyholder information is not an appropriate solution, because it still allows the protected characteristics to be inferred from the non-protected ones. This leads to so-called proxy or indirect discrimination. Though proxy discrimination is qualitatively different from the group fairness concepts in machine learning, these group fairness concepts have been proposed to 'smooth out' the impact of protected characteristics in the calculation of insurance prices. The purpose of this note is to share some thoughts about group fairness concepts in light of insurance pricing and to discuss their implications. We present a statistical model that is free of proxy discrimination and thus unproblematic from an insurance pricing point of view. However, we find that the canonical price in this statistical model does not satisfy any of the three most popular group fairness axioms. This seems puzzling, and we welcome feedback on our example and on the usefulness of these group fairness axioms for non-discriminatory insurance pricing.
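
    The three group fairness axioms referenced here are usually taken to be independence (demographic parity), separation (equalized odds), and sufficiency (predictive parity). The sketch below shows one minimal way to check them empirically for a binarized price, assuming that reading of the axioms; it is illustrative only and not taken from the paper, and all variable names are made up.

    import numpy as np
    import pandas as pd

    def fairness_gaps(price, target, group, threshold):
        """Crude empirical checks of the three group fairness criteria."""
        df = pd.DataFrame({"p": (price >= threshold).astype(int),
                           "y": target, "d": group})

        # Independence: P(p = 1 | d) should not depend on the group d.
        independence = df.groupby("d")["p"].mean()

        # Separation: P(p = 1 | y, d) should not depend on d within each outcome y.
        separation = df.groupby(["y", "d"])["p"].mean()

        # Sufficiency: P(y = 1 | p, d) should not depend on d within each decision p.
        sufficiency = df.groupby(["p", "d"])["y"].mean()

        return independence, separation, sufficiency

    Informally, an axiom is satisfied when the corresponding group-wise rates agree across the protected groups (up to sampling noise).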

    Fairness Under Demographic Scarce Regime

    Most existing works on fairness assume the model has full access to demographic information. However, there are scenarios where demographic information is only partially available, because a record was not maintained throughout data collection or for privacy reasons. This setting is known as the demographic scarce regime. Prior research has shown that training an attribute classifier to replace the missing sensitive attributes (a proxy) can still improve fairness. However, using proxy sensitive attributes worsens the fairness-accuracy trade-off compared to using the true sensitive attributes. To address this limitation, we propose a framework to build attribute classifiers that achieve better fairness-accuracy trade-offs. Our method introduces uncertainty awareness in the attribute classifier and enforces fairness on samples whose demographic information is inferred with the lowest uncertainty. We show empirically that enforcing fairness constraints on samples with uncertain sensitive attributes is detrimental to both fairness and accuracy. Our experiments on two datasets show that the proposed framework yields models with significantly better fairness-accuracy trade-offs than classic attribute classifiers. Surprisingly, our framework even outperforms models trained with constraints on the true sensitive attributes.
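
    One simple way to realize the "lowest uncertainty" idea, sketched under the assumption that uncertainty is measured by the attribute classifier's predictive entropy (the paper's actual mechanism may differ; names are illustrative):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def confident_proxy_attributes(X_lab, a_lab, X_unlab, keep_frac=0.5):
        """Infer proxy sensitive attributes, keeping only low-uncertainty samples."""
        clf = LogisticRegression(max_iter=1000).fit(X_lab, a_lab)
        proba = clf.predict_proba(X_unlab)

        # Predictive entropy as the uncertainty score (lower = more confident).
        entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
        keep = np.argsort(entropy)[: int(keep_frac * len(entropy))]

        # Fairness constraints would then be enforced only on the `keep` indices.
        return keep, proba.argmax(axis=1)[keep]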

    Fair Visual Recognition via Intervention with Proxy Features

    Deep learning models often learn to make predictions that rely on sensitive social attributes such as gender and race, which poses significant fairness risks, especially in societal applications, e.g., hiring, banking, and criminal justice. Existing work tackles this issue by minimizing the information about social attributes contained in models. However, the high correlation between the target task and social attributes makes bias mitigation incompatible with target-task accuracy. Recalling that model bias arises because learning features related to the bias attributes (i.e., bias features) helps target-task optimization, we explore the following research question: can we leverage proxy features to replace the role of bias features in target-task optimization for debiasing? To this end, we propose Proxy Debiasing, which first transfers the target task's learning of bias information from bias features to artificial proxy features, and then employs causal intervention to eliminate the proxy features at inference. The key idea of Proxy Debiasing is to design controllable proxy features that, on the one hand, stand in for bias features in contributing to the target task during training and, on the other hand, are easily removed by intervention during inference. This guarantees the elimination of bias features without affecting the target information, thus addressing the fairness-accuracy paradox of previous debiasing solutions. We apply Proxy Debiasing to several benchmark datasets and achieve significant improvements over state-of-the-art debiasing methods in both accuracy and fairness.
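
    A rough sketch of the train/inference asymmetry described above, under the simplifying assumption that the proxy is an extra input that is fixed to a neutral constant at inference time; the paper's actual architecture and intervention are more involved, and all names here are hypothetical.

    import torch
    import torch.nn as nn

    class ProxyHead(nn.Module):
        """Classifier head fed with a controllable proxy feature."""

        def __init__(self, feat_dim, proxy_dim, n_classes):
            super().__init__()
            self.fc = nn.Linear(feat_dim + proxy_dim, n_classes)
            # Neutral proxy used at inference, playing the role of an intervention.
            self.register_buffer("neutral_proxy", torch.zeros(proxy_dim))

        def forward(self, features, proxy=None):
            if proxy is None:  # inference: fix the proxy so it cannot carry bias
                proxy = self.neutral_proxy.expand(features.size(0), -1)
            return self.fc(torch.cat([features, proxy], dim=1))

    During training, a learned proxy carrying the bias information is passed explicitly; at inference, calling the head without a proxy applies the constant, intervention-style substitute.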

    Microfoundations of Social Capital

    We show that the standard trust question routinely used in social capital research is importantly related to cooperation behavior and we provide evidence on the microfoundation of this relation. We run a large-scale public goods experiment over the internet in Denmark using a design that enables us to disentangle preferences for cooperation from beliefs about others’ cooperation. We find that the standard trust question is a proxy for cooperation preferences rather than beliefs about others’ cooperation. Moreover, we show that the “fairness question”, a recently proposed alternative to the standard trust question, is also related to cooperation behavior but operates through beliefs rather than preferences.
    Keywords: Social capital, Trust, Fairness, Public goods, Cooperation, Experiment

    The Impact of Explanations on Fairness in Human-AI Decision-Making: Protected vs Proxy Features

    AI systems have been known to amplify biases in real-world data. Explanations may help human-AI teams address these biases for fairer decision-making. Typically, explanations focus on salient input features. If a model is biased against some protected group, explanations may include features that demonstrate this bias; but when biases are realized through proxy features, the relationship between the proxy feature and the protected one may be less clear to a human. In this work, we study the effect of the presence of protected and proxy features on participants' perception of model fairness and their ability to improve demographic parity over an AI alone. Further, we examine how different treatments -- explanations, model bias disclosure, and proxy correlation disclosure -- affect fairness perception and parity. We find that explanations help people detect direct biases but not indirect biases. Additionally, regardless of bias type, explanations tend to increase agreement with model biases. Disclosures can help mitigate this effect for indirect biases, improving both unfairness recognition and decision-making fairness. We hope that our findings can help guide further research into advancing explanations in support of fair human-AI decision-making.

    Testing Fair Wage Theory

    Fairness considerations often are invoked to explain wage differences that appear unrelated to worker characteristics or job conditions, but non-experimental tests of fair wage models are rare and weak because of the limits of available market-generated data. In particular, such data rarely permit researchers to (a) identify suitable reference points that employees and employers might use in determining what is fair and (b) control for employees’ marginal output and its value. This study utilizes a unique dataset from the baseball labor market that solves both problems. We find no support for fair wage theory in this market. We also find that fairness premia can be illusory: Wages appear to be adjusted upward for reasons of fairness in regressions that control for variation in individuals’ physical output, but such premia evaporate when the value of that output (which can be market- or firm-specific) is held constant. This suggests that avoiding proxy measures of workers’ marginal revenue products in wage studies might reduce the number of labor market "anomalies" economists must resolve.
    Keywords: fairness, efficiency wages, wage differentials
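
    The identification strategy amounts to swapping the productivity control. A sketch of that comparison with statsmodels, using hypothetical column names (log_wage, ref_gap, physical_output, output_value) rather than the study's actual variables:

    import statsmodels.formula.api as smf

    def compare_specifications(df):
        """Fairness coefficient with a physical-output vs. an output-value control."""
        # Spec 1: control for physical output only (where a fairness premium shows up).
        m1 = smf.ols("log_wage ~ ref_gap + physical_output", data=df).fit()
        # Spec 2: hold the value of that output (marginal revenue product) constant.
        m2 = smf.ols("log_wage ~ ref_gap + output_value", data=df).fit()
        return m1.params["ref_gap"], m2.params["ref_gap"]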

    Estimating and Controlling for Fairness via Sensitive Attribute Predictors

    The responsible use of machine learning tools in real-world high-stakes decision making demands that we audit and control for potential biases against underrepresented groups. This process naturally requires access to the sensitive attribute one wishes to control for, such as demographics, gender, or other potentially sensitive features. Unfortunately, this information is often unavailable. In this work we demonstrate that one can still reliably estimate, and ultimately control for, fairness by using proxy sensitive attributes derived from a sensitive attribute predictor. Specifically, we first show that, with just a little knowledge of the complete data distribution, one may use a sensitive attribute predictor to obtain bounds on the classifier's true fairness metric. Second, we demonstrate how one can provably control a classifier's worst-case fairness violation with respect to the true sensitive attribute by controlling for fairness with respect to the proxy sensitive attribute. Our results hold under assumptions that are significantly milder than those of previous works, and we illustrate them with experiments on synthetic and real datasets.
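
    The estimation step can be as simple as computing a group fairness metric with the predicted attribute in place of the true one; the paper's contribution is the bounds relating this proxy-based estimate to the true, unobserved metric, which are not reproduced here. An illustrative demographic-parity example:

    import numpy as np

    def dp_gap(y_hat, attr):
        """Largest difference in positive-prediction rate across attribute groups."""
        rates = [y_hat[attr == g].mean() for g in np.unique(attr)]
        return max(rates) - min(rates)

    # e.g. gap_est = dp_gap(y_hat, attr_clf.predict(X))  # proxy-based estimate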

    Model Debiasing via Gradient-based Explanation on Representation

    Machine learning systems can produce results that are biased against certain demographic groups, which is known as the fairness problem. Recent approaches tackle this problem by learning a latent code (i.e., representation) through disentangled representation learning and then discarding the latent-code dimensions correlated with sensitive attributes (e.g., gender). However, these approaches may suffer from incomplete disentanglement and overlook proxy attributes (proxies for sensitive attributes) when processing real-world data, especially unstructured data, causing degraded fairness and a loss of useful information for downstream tasks. In this paper, we propose a novel fairness framework that performs debiasing with regard to both sensitive attributes and proxy attributes, and that boosts the prediction performance of downstream task models without requiring complete disentanglement. The main idea is, first, to leverage gradient-based explanation to find two model focuses, one for predicting sensitive attributes and the other for predicting downstream task labels, and, second, to use them to perturb the latent code that guides the training of downstream task models towards the fairness and utility goals. We show empirically that our framework works with both disentangled and non-disentangled representation learning methods and achieves a better fairness-accuracy trade-off on unstructured and structured datasets than previous state-of-the-art approaches.
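
    One way to picture the perturbation step, assuming the two "focuses" are gradients of a task head and an attribute head with respect to the latent code; the paper's exact rule may differ, and this is only a heuristic sketch with made-up names.

    import torch
    import torch.nn.functional as F

    def perturb_latent(z, task_head, attr_head, y_task, a_attr, alpha=0.1):
        """Nudge the latent code where the attribute head is sensitive but the task head is not."""
        z = z.detach().requires_grad_(True)
        g_task = torch.autograd.grad(F.cross_entropy(task_head(z), y_task), z)[0]
        g_attr = torch.autograd.grad(F.cross_entropy(attr_head(z), a_attr), z)[0]

        # Keep dimensions the task relies on; push along dimensions that mainly
        # drive the sensitive-attribute prediction (ascent on the attribute loss).
        mask = (g_attr.abs() > g_task.abs()).float()
        return (z + alpha * mask * g_attr).detach()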