
    Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning

    Machine learning is being integrated into a growing number of critical systems with far-reaching impacts on society. Unexpected behaviour and unfair decision processes are coming under increasing scrutiny, both because of this widespread use and on theoretical grounds. Individuals, as well as organisations, notice, test, and criticize unfair results to hold model designers and deployers accountable. We offer a framework that assists these groups in mitigating unfair representations stemming from the training datasets. Our framework relies on two inter-operating adversaries to improve fairness. First, a model is trained with the goal of preventing an adversary from guessing the values of protected attributes while limiting utility losses. This first step optimizes the model's parameters for fairness. Second, the framework leverages evasion attacks from adversarial machine learning to generate new examples that will be misclassified. These new examples are then used to retrain and improve the model in the first step. These two steps are applied iteratively until a significant improvement in fairness is obtained. We evaluated our framework on well-studied datasets from the fairness literature, including COMPAS, where it can surpass other approaches concerning demographic parity, equality of opportunity, and the model's utility. We also illustrate our findings on the subtle difficulties of mitigating unfairness and highlight how our framework can assist model designers. Comment: 15 pages, 3 figures, 1 table
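
    A minimal sketch of the two-adversary loop the abstract describes, assuming a PyTorch-style setup: a predictor is trained against an adversary that tries to recover the protected attribute, and evasion examples (here an FGSM-style attack, one possible instantiation of the paper's evasion step) are folded back into the training set each round. The architectures, the loss weight lam, and the attack parameters are illustrative assumptions, not the paper's exact method.

    import torch
    import torch.nn as nn

    def fgsm_evasion(model, x, y, eps=0.1):
        # Perturb inputs in the direction that increases the task loss so the
        # current model misclassifies them (FGSM-style stand-in attack).
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.binary_cross_entropy_with_logits(
            model(x_adv).squeeze(1), y)
        loss.backward()
        return (x_adv + eps * x_adv.grad.sign()).detach()

    def train_round(predictor, adversary, x, y, s, lam=1.0, steps=200, lr=1e-2):
        # Step 1: fit the task labels y while preventing the adversary from
        # guessing the protected attribute s from the predictor's output.
        opt_p = torch.optim.Adam(predictor.parameters(), lr=lr)
        opt_a = torch.optim.Adam(adversary.parameters(), lr=lr)
        bce = nn.BCEWithLogitsLoss()
        for _ in range(steps):
            with torch.no_grad():              # adversary update (guess s)
                p = torch.sigmoid(predictor(x))
            opt_a.zero_grad()
            bce(adversary(p).squeeze(1), s).backward()
            opt_a.step()
            opt_p.zero_grad()                  # predictor update (fair + useful)
            logits = predictor(x).squeeze(1)
            p = torch.sigmoid(logits).unsqueeze(1)
            (bce(logits, y) - lam * bce(adversary(p).squeeze(1), s)).backward()
            opt_p.step()

    def ethical_adversaries(x, y, s, rounds=5):
        predictor = nn.Sequential(nn.Linear(x.shape[1], 16), nn.ReLU(),
                                  nn.Linear(16, 1))
        adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
        for _ in range(rounds):
            train_round(predictor, adversary, x, y, s)
            # Step 2: add evasion examples the current model misclassifies;
            # the next round retrains on the enlarged set.
            x = torch.cat([x, fgsm_evasion(predictor, x, y)])
            y, s = torch.cat([y, y]), torch.cat([s, s])
        return predictor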

    Some HCI Priorities for GDPR-Compliant Machine Learning

    In this short paper, we consider the roles of HCI in enabling the better governance of consequential machine learning systems using the rights and obligations laid out in the recent 2016 EU General Data Protection Regulation (GDPR), a law which involves heavy interaction with people and systems. Focussing on those areas that relate to algorithmic systems in society, we propose roles for HCI in legal contexts in relation to fairness, bias and discrimination; data protection by design; data protection impact assessments; transparency and explanations; the mitigation and understanding of automation bias; and the communication of envisaged consequences of processing. Comment: 8 pages, 0 figures, The General Data Protection Regulation: An Opportunity for the CHI Community? (CHI-GDPR 2018), Workshop at ACM CHI'18, 22 April 2018, Montreal, Canada

    Investigating the Gender Pronoun Gap in Wikipedia

    In recent years there have been many studies investigating gender biases in the content and editorial process of Wikipedia. In addition to creating a distorted account of knowledge, biases in Wikipedia and similar corpora have especially harmful downstream effects, as they are often used in Artificial Intelligence and Machine Learning applications. As a result, many of the algorithms that are deployed in production “learn” the same biases inherent in the data that they churn through. It is therefore increasingly important to develop quantitative metrics to measure bias. In this study we propose a simple metric, the Gendered Pronoun Gap, which measures the ratio of the occurrences of the pronoun “he” to the pronoun “she”. We use this metric to investigate the distribution of the Gendered Pronoun Gap in two Wikipedia corpora prepared by Machine Learning companies for developing and benchmarking algorithms. Our results suggest that the way these datasets have been produced introduces different types of gender biases that can potentially distort the learning process for Machine Learning algorithms. We stress that while a single metric is not sufficient to completely capture the rich nuances of bias, we suggest that the Gendered Pronoun Gap can be used as one of many metrics.
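
    A minimal sketch of the proposed metric, assuming whole-word, case-insensitive counting over a plain-text corpus; the tokenisation and the guard against a zero denominator are illustrative choices, not necessarily the paper's.

    import re
    from collections import Counter

    def gendered_pronoun_gap(text: str) -> float:
        # Ratio of occurrences of "he" to "she"; values above 1 mean
        # "he" appears more often in the corpus.
        counts = Counter(re.findall(r"[a-z']+", text.lower()))
        return counts["he"] / max(counts["she"], 1)  # avoid division by zero

    print(gendered_pronoun_gap("He said she would call, but he never answered."))
    # prints 2.0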