
    Learning Certified Individually Fair Representations

    Fair representation learning provides an effective way of enforcing fairness constraints without compromising utility for downstream users. A desirable family of such fairness constraints, each requiring similar treatment for similar individuals, is known as individual fairness. In this work, we introduce the first method that enables data consumers to obtain certificates of individual fairness for existing and new data points. The key idea is to map similar individuals to close latent representations and leverage this latent proximity to certify individual fairness. That is, our method enables the data producer to learn and certify a representation where, for a given data point, all similar individuals lie within $\ell_\infty$-distance at most $\epsilon$, thus allowing data consumers to certify individual fairness by proving $\epsilon$-robustness of their classifier. Our experimental evaluation on five real-world datasets and several fairness constraints demonstrates the expressivity and scalability of our approach. Comment: Conference Paper at NeurIPS 202
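
    How the two guarantees compose can be illustrated with a minimal sketch (not the authors' code): if the producer certifies that every individual similar to x is mapped within $\ell_\infty$-distance $\epsilon$ of x's latent point, the consumer only needs a certified $\epsilon$-robustness radius for the downstream classifier around that point. The names `encode`, `classify`, and `certified_linf_radius` below are hypothetical placeholders.

```python
# Minimal sketch (assumed interfaces, not the paper's code) of how the producer's
# latent-proximity certificate and the consumer's robustness certificate compose.

def certify_individual_fairness(x, encode, classify, certified_linf_radius, eps):
    """Certify that all individuals similar to x receive the same prediction.

    Producer guarantee: every individual similar to x is mapped within
    l_inf-distance eps of encode(x). Consumer obligation: show the classifier
    keeps its prediction within that radius around encode(x).
    """
    z = encode(x)                       # latent representation of x
    pred = classify(z)                  # downstream prediction at z
    radius = certified_linf_radius(z)   # certified robustness radius around z
    is_fair = radius >= eps             # radius covers all similar individuals
    return pred, is_fair
```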

    Latent Space Smoothing for Individually Fair Representations

    Fair representation learning encodes user data to ensure fairness and utility, regardless of the downstream application. However, learning individually fair representations, i.e., guaranteeing that similar individuals are treated similarly, remains challenging in high-dimensional settings such as computer vision. In this work, we introduce LASSI, the first representation learning method for certifying individual fairness of high-dimensional data. Our key insight is to leverage recent advances in generative modeling to capture the set of similar individuals in the generative latent space. This allows learning individually fair representations, where similar individuals are mapped close together, by using adversarial training to minimize the distance between their representations. Finally, we employ randomized smoothing to provably map similar individuals close together, in turn ensuring that local robustness verification of the downstream application results in end-to-end fairness certification. Our experimental evaluation on challenging real-world image data demonstrates that our method increases certified individual fairness by up to 60%, without significantly affecting task utility.
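
    The randomized-smoothing ingredient can be sketched as follows (a simplified illustration under assumed interfaces, not LASSI's actual API): the downstream prediction is taken as a majority vote over Gaussian perturbations of the latent code, which is what makes the mapping provably stable.

```python
# Hedged sketch of smoothing over the latent space: majority vote over Gaussian
# perturbations of the (fair) latent code. `encode`, `classify`, and `sigma`
# are illustrative placeholders.
import numpy as np

def smoothed_prediction(x, encode, classify, sigma=0.25, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    z = encode(x)                                   # fair latent representation
    votes = {}
    for _ in range(n_samples):
        z_noisy = z + rng.normal(0.0, sigma, size=z.shape)  # smoothing noise
        label = classify(z_noisy)
        votes[label] = votes.get(label, 0) + 1
    # The majority label is the smoothed prediction; in the full method the vote
    # margin additionally yields a certified robustness radius.
    return max(votes, key=votes.get)
```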

    Certifying and removing disparate impact

    What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender, religious practice) and an explicit description of the process. When the process is implemented using computers, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the algorithm, we propose making inferences based on the data the algorithm uses. We make four contributions to this problem. First, we link the legal notion of disparate impact to a measure of classification accuracy that, while known, has received relatively little attention. Second, we propose a test for disparate impact based on analyzing the information leakage of the protected class from the other data attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny. Comment: Extended version of paper accepted at the 2015 ACM SIGKDD Conference on Knowledge Discovery and Data Mining
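
    As a point of reference for the legal notion above, disparate impact is commonly quantified by the "80% rule", which compares positive-outcome rates across groups. The snippet below is a minimal illustrative computation of that ratio (column names and data are hypothetical, not tied to the paper's datasets or its leakage-based test).

```python
# Illustrative disparate-impact ratio behind the "80% rule"; values below 0.8
# are typically flagged. Column names here are hypothetical.
import pandas as pd

def disparate_impact_ratio(df, group_col="protected", outcome_col="hired"):
    """P(positive outcome | disadvantaged group) / P(positive outcome | advantaged group)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

data = pd.DataFrame({
    "protected": [0, 0, 0, 0, 1, 1, 1, 1],
    "hired":     [1, 1, 1, 0, 1, 0, 0, 0],
})
print(disparate_impact_ratio(data))  # 0.25 / 0.75 -> about 0.33, flagged under the 80% rule
```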

    Direct certification of a class of quantum simulations

    One of the main challenges in the field of quantum simulation and computation is to identify ways to certify the correct functioning of a device when an efficient classical simulation is not available. Important cases are situations in which one cannot classically calculate local expectation values of state preparations efficiently. In this work, we develop weak-membership formulations of the certification of ground state preparations. We provide a non-interactive protocol for certifying ground states of frustration-free Hamiltonians based on simple energy measurements of local Hamiltonian terms. This certification protocol can be applied to classically intractable analog quantum simulations: for example, using Feynman-Kitaev Hamiltonians, one can encode universal quantum computation in such ground states. Moreover, our certification protocol is applicable to ground-state encodings of IQP circuits for demonstrations of quantum supremacy. These can be certified efficiently when the error is polynomially bounded. Comment: 10 pages, corrected a small error in Eqs. (2) and (5)
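
    The energy-measurement idea can be caricatured in a few lines (a toy sketch under assumed interfaces, not the paper's protocol or bounds): for a frustration-free Hamiltonian H = sum_i h_i with ground energy zero, accept the preparation only if the summed estimates of the local energies fall below a threshold.

```python
# Toy weak-membership style acceptance test based on local energy measurements.
# `measure_local_energy` and `threshold` are illustrative placeholders.
def certify_ground_state(measure_local_energy, num_terms, threshold, shots=10_000):
    """Accept iff the total estimated energy is consistent with a ground state."""
    total_energy = 0.0
    for i in range(num_terms):
        # Each local term h_i >= 0 (frustration-free), so its expectation can be
        # estimated from simple measurements on the few qubits it acts on.
        total_energy += measure_local_energy(i, shots)
    return total_energy <= threshold
```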

    Individual Fairness in Bayesian Neural Networks

    We study Individual Fairness (IF) for Bayesian neural networks (BNNs). Specifically, we consider the $\epsilon$-$\delta$-individual fairness notion, which requires that, for any pair of input points that are $\epsilon$-similar according to a given similarity metric, the outputs of the BNN are within a given tolerance $\delta > 0$. We leverage bounds on statistical sampling over the input space and the relationship between adversarial robustness and individual fairness to derive a framework for the systematic estimation of $\epsilon$-$\delta$-IF, designing Fair-FGSM and Fair-PGD as global, fairness-aware extensions to gradient-based attacks for BNNs. We empirically study the IF of a variety of approximately inferred BNNs with different architectures on fairness benchmarks, and compare against deterministic models learnt using frequentist techniques. Interestingly, we find that BNNs trained by means of approximate Bayesian inference consistently tend to be markedly more individually fair than their deterministic counterparts.
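
    An FGSM-style fairness check in the spirit of the above can be sketched as follows (a rough illustration assuming, for simplicity, an $\ell_\infty$ similarity metric and a scalar posterior-averaged output; `bnn_predict` and `grad_wrt_input` are hypothetical helpers, not the paper's API): take one signed-gradient step inside the $\epsilon$-similarity ball and test whether the output moves by more than $\delta$.

```python
# Rough sketch of a fairness-aware FGSM step: search for an eps-similar input
# that changes the posterior-averaged output by more than delta.
import numpy as np

def fair_fgsm_check(x, bnn_predict, grad_wrt_input, eps, delta):
    y = bnn_predict(x)                        # posterior-averaged output at x
    g = grad_wrt_input(x)                     # gradient of that output w.r.t. x
    x_adv = x + eps * np.sign(g)              # one FGSM step inside the eps-ball
    y_adv = bnn_predict(x_adv)
    diff = float(np.max(np.abs(y_adv - y)))   # worst-case change in the output
    return diff <= delta                      # True = no eps-delta-IF violation found
```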

    In Re: LifeUSA Holding, Inc.

    USDC for the Eastern District of Pennsylvania