
    Discrimination-aware classification

    Classifier construction is one of the most researched topics within the data mining and machine learning communities; literally thousands of algorithms have been proposed. The quality of the learned models, however, depends critically on the quality of the training data. No matter which classifier inducer is applied, if the training data is incorrect, poor models will result. In this thesis, we study cases in which the input data is discriminatory and we are supposed to learn a classifier that optimizes accuracy but does not discriminate in its predictions. Such situations occur naturally as artifacts of the data collection process: when the training data is collected from different sources with different labeling criteria, when the data is generated by a biased decision process, or when the sensitive attribute, e.g., gender, serves as a proxy for unobserved features. In many situations, a classifier that detects and uses racial or gender discrimination is undesirable for legal reasons. The concept of discrimination is illustrated by the following example. Throughout the years, an employment bureau recorded various parameters of job candidates. Based on these parameters, the company wants to learn a model for partially automating the matchmaking between a job and a job candidate. A match is labeled as successful if the company hires the applicant. It turns out, however, that the historical data is biased: for higher board functions, Caucasian males have been systematically favored. A model learned directly on this data will learn this discriminatory behavior and apply it to future predictions. From an ethical and legal point of view, it is of course unacceptable to deploy a model that discriminates in this way. Our proposed solutions to the discrimination problem fall into two broad categories. First, we propose pre-processing methods that remove the discrimination from the training dataset. Second, we address the discrimination problem by pushing non-discrimination constraints directly into the classification models and by post-processing the built models. We further study the discrimination-aware classification paradigm in the presence of explanatory attributes that are correlated with the sensitive attribute; e.g., low income may be explained by a low education level. In such a case, as we show, not all discrimination can be considered bad. Therefore, we introduce a new way of measuring discrimination that explicitly splits it into explainable and bad discrimination, and we propose methods to remove the bad discrimination only. We evaluated our discrimination-aware methods on real-world datasets. In our experiments, our methods show promising results and clearly outperform traditional classification models with respect to the accuracy-discrimination trade-off. To conclude, we believe that discrimination-aware classification is a new and exciting area of research addressing a societally relevant problem.
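
    As a minimal sketch of the kind of group-level discrimination measure discussed above, the snippet below computes the gap in positive-decision rates between a favored and a deprived group (statistical parity). The column names, toy data, and the choice of this particular measure are illustrative assumptions, not the thesis's exact formulation, which further splits discrimination into explainable and bad parts.

```python
import pandas as pd

def discrimination_score(df, sensitive, favored, label, positive=1):
    """Gap in positive-outcome rates between the favored and the deprived
    group (statistical parity difference). Illustrative helper only."""
    favored_rate = (df.loc[df[sensitive] == favored, label] == positive).mean()
    deprived_rate = (df.loc[df[sensitive] != favored, label] == positive).mean()
    return favored_rate - deprived_rate

# Hypothetical hiring records in the spirit of the employment-bureau example
data = pd.DataFrame({
    "gender": ["male", "male", "male", "female", "female", "female"],
    "hired":  [1, 1, 0, 1, 0, 0],
})
print(discrimination_score(data, "gender", "male", "hired"))  # 2/3 - 1/3 ≈ 0.33
```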

    Fairness-enhancing interventions in stream classification

    The widespread use of automated data-driven decision support systems has raised many concerns regarding the accountability and fairness of the employed models in the absence of human supervision. Existing fairness-aware approaches tackle fairness as a batch learning problem and aim at learning a fair model which can then be applied to future instances of the problem. In many applications, however, the data comes sequentially and its characteristics might evolve with time. In such a setting, it is counter-intuitive to "fix" a (fair) model over the data stream, as changes in the data might incur changes in the underlying model, thereby affecting its fairness. In this work, we propose fairness-enhancing interventions that modify the input data so that the outcome of any stream classifier applied to that data will be fair. Experiments on real and synthetic data show that our approach achieves good predictive performance and low discrimination scores over the course of the stream. Comment: 15 pages, 7 figures. To appear in the proceedings of the 30th International Conference on Database and Expert Systems Applications, Linz, Austria, August 26-29, 2019.
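
    As a rough illustration of the kind of stream intervention described above, the sketch below adapts batch "reweighing" to a sliding window: each arriving instance receives a weight that corrects for the dependence between the sensitive attribute and the label observed in the recent window, and the weighted instance can then be passed to any weight-aware stream classifier. The window size, group encoding, and reweighing formula are assumptions for illustration, not the paper's exact intervention.

```python
from collections import Counter, deque

class StreamReweigher:
    """Sliding-window reweighing sketch: weight = count of the (group, label)
    pair expected under independence divided by its observed count, computed
    over the most recent `window` instances. Illustrative only."""

    def __init__(self, window=1000):
        self.recent = deque(maxlen=window)  # recent (group, label) pairs

    def weight(self, group, label):
        self.recent.append((group, label))
        n = len(self.recent)
        counts = Counter(self.recent)
        n_group = sum(c for (g, _), c in counts.items() if g == group)
        n_label = sum(c for (_, y), c in counts.items() if y == label)
        observed = counts[(group, label)]
        expected = n_group * n_label / n  # count expected if group and label were independent
        return expected / observed if observed else 1.0

rw = StreamReweigher(window=100)
stream = [("f", 0), ("m", 1), ("f", 0), ("m", 1), ("f", 1)]
for g, y in stream:
    print(g, y, round(rw.weight(g, y), 2))  # pass (x, y, weight) to the stream learner
```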

    Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment

    Automated data-driven decision making systems are increasingly being used to assist, or even replace, humans in many settings. These systems function by learning from historical decisions, often taken by humans. In order to maximize the utility of these systems (or, classifiers), their training involves minimizing the errors (or, misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., misclassification rates for females are higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real-world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy. Comment: To appear in Proceedings of the 26th International World Wide Web Conference (WWW), 2017. Code available at: https://github.com/mbilalzafar/fair-classificatio
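
    A small helper in the spirit of the measures described above: disparate mistreatment quantified as the gap in false-positive and false-negative rates between two groups. This is an illustrative evaluation-time check, not the convex-concave training constraint the paper incorporates into decision-boundary classifiers; the 0/1 group encoding is an assumption.

```python
import numpy as np

def disparate_mistreatment(y_true, y_pred, sensitive):
    """Gaps in false-positive and false-negative rates between the two
    groups encoded 0/1 in `sensitive`. Illustrative evaluation helper."""
    y_true, y_pred, sensitive = map(np.asarray, (y_true, y_pred, sensitive))
    gaps = {}
    for name, true_class in (("FPR", 0), ("FNR", 1)):
        rates = []
        for g in (0, 1):
            idx = (sensitive == g) & (y_true == true_class)
            rates.append((y_pred[idx] != y_true[idx]).mean())
        gaps[name + "_gap"] = float(abs(rates[0] - rates[1]))
    return gaps

# Toy example: group 1 receives all the errors, group 0 none
y_true    = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred    = [1, 1, 0, 0, 0, 0, 1, 1]
sensitive = [0, 0, 1, 1, 0, 0, 1, 1]
print(disparate_mistreatment(y_true, y_pred, sensitive))
# {'FPR_gap': 1.0, 'FNR_gap': 1.0}
```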

    Classifying Socially Sensitive Data Without Discrimination: An Analysis of a Crime Suspect Dataset

    Fairness in Algorithmic Decision Making: An Excursion Through the Lens of Causality

    As virtually all aspects of our lives are increasingly impacted by algorithmic decision making systems, it is incumbent upon us as a society to ensure such systems do not become instruments of unfair discrimination on the basis of gender, race, ethnicity, religion, etc. We consider the problem of determining whether the decisions made by such systems are discriminatory, through the lens of causal models. We introduce two definitions of group fairness grounded in causality: fair on average causal effect (FACE), and fair on average causal effect on the treated (FACT). We use the Rubin-Neyman potential outcomes framework for the analysis of cause-effect relationships to robustly estimate FACE and FACT. We demonstrate the effectiveness of our proposed approach on synthetic data. Our analyses of two real-world data sets, the Adult income data set from the UCI repository (with gender as the protected attribute) and the NYC Stop and Frisk data set (with race as the protected attribute), show that the evidence of discrimination obtained by FACE and FACT, or lack thereof, is often in agreement with the findings from other studies. We further show that FACT, being somewhat more nuanced compared to FACE, can yield findings of discrimination that differ from those obtained using FACE. Comment: 7 pages, 2 figures, 2 tables. To appear in Proceedings of the International Conference on World Wide Web (WWW), 2019.
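
    For readers who want the two criteria spelled out, here is a compact restatement in standard Rubin-Neyman potential-outcomes notation. The notation and the zero-effect form are a paraphrase under the usual conventions; the paper's formal definitions may differ in detail.

```latex
% Y_i(a) denotes the potential outcome of individual i if their protected
% attribute were set to a (a = 1: protected group, a = 0: otherwise).
% Paraphrased in standard potential-outcomes notation; details may differ
% from the paper's formal definitions.
\[
\text{FACE (fair on average causal effect):}\quad
\mathbb{E}\bigl[\,Y_i(1) - Y_i(0)\,\bigr] = 0
\]
\[
\text{FACT (fair on average causal effect on the treated):}\quad
\mathbb{E}\bigl[\,Y_i(1) - Y_i(0) \mid A_i = 1\,\bigr] = 0
\]
```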

    Theorizing technologically mediated policing in smart cities: an ethnographic approach to sensing infrastructures in security practices

    Smart digital infrastructures predicated on myriads of sensors distributed in the environment are often rendered as key to contemporary urban security governance, promising to detect risky or suspicious entities before or while a criminal event takes place. At the same time, they often involve surveillance of urban environments, and thus not only criminals but also large groups of people and entities unrelated to criminal phenomena can end up under close inspection. This chapter makes its contribution on two levels. First, it offers a theoretical framework for the research and conceptualization of the role of sensing infrastructures in urban security practices. It shows how insights from Philosophy of Technology and Science and Technology Studies can produce a nuanced understanding of the role of digital technologies in security practices, beyond standard conceptualizations of technology. Moreover, the chapter proposes a geological approach to enrich our repertoire of imagining and researching smart urban ecosystems. Second, the chapter contributes to a higher level of transparency of these practices by presenting the results of ethnographic research performed in a set of police organizations that employ sensing infrastructures and algorithmic profiling in their practices. The chapter draws empirically on research performed in the Dutch police, at both municipal and national levels, with some additional material gathered in a constabulary in England. In these organizations, resource allocation decisions are often predicated on automated number plate recognition technology that processes data from an array of smart cameras distributed in the environment. Taken together, these contributions highlight a set of normative issues with implications for the effectiveness and legitimacy of urban security (surveillance) practices in smart environments.

    Classification with no discrimination by preferential sampling

    Classification without discrimination is a new area of research. Kamiran and Calders (2009) introduced the idea of Classification with No Discrimination (CND) and proposed a solution based on "massaging" the data to remove the discrimination from it with the least possible changes. In this paper, we propose a new solution to the CND problem by introducing a sampling scheme that makes the data discrimination-free instead of relabeling the dataset. On the resulting non-discriminatory dataset we then learn a classifier. This new method is not only less intrusive than "massaging" but also outperforms the "reweighing" approach of Calders et al. (2009). The proposed method has been implemented, and experiments on the Census Income dataset show promising results: in all experiments our method performs on par with the state-of-the-art non-discriminatory techniques.
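
    A rough reconstruction of the preferential-sampling idea, for readers who want to see the mechanics: each (sensitive-value, class-label) partition of the data is resized to the size it would have if the sensitive attribute and the class were independent, and the objects closest to a ranker's decision boundary are the first to be duplicated or removed. Function and column names, the 0.5 boundary, and the resizing details are illustrative assumptions rather than the authors' exact algorithm.

```python
import numpy as np
import pandas as pd

def preferential_sample(df, sensitive, label, scores):
    """Resize each (sensitive, label) partition to its expected size under
    independence, duplicating/removing borderline objects first. Sketch only."""
    df = df.assign(_score=np.asarray(scores, dtype=float))
    n = len(df)
    parts = []
    for (g, c), part in df.groupby([sensitive, label]):
        # size this partition would have if sensitive attribute and label were independent
        expected = int(round((df[sensitive] == g).sum() * (df[label] == c).sum() / n))
        # order objects farthest from the ranker's decision boundary first
        order = (part["_score"] - 0.5).abs().to_numpy().argsort()[::-1]
        part = part.iloc[order]
        if expected <= len(part):
            parts.append(part.head(expected))          # drop borderline objects first
        else:
            extra = part.tail(expected - len(part))    # duplicate borderline objects
            parts.append(pd.concat([part, extra]))
    return pd.concat(parts).drop(columns="_score")

# Toy usage: scores would come from, e.g., a ranker trained on the data;
# the returned DataFrame is then used to train any ordinary classifier.
toy = pd.DataFrame({
    "gender": ["m", "m", "m", "m", "f", "f", "f", "f"],
    "income": [1, 1, 1, 0, 1, 0, 0, 0],
})
scores = [0.9, 0.8, 0.6, 0.4, 0.55, 0.45, 0.3, 0.2]
balanced = preferential_sample(toy, "gender", "income", scores)
print(balanced["income"].groupby(balanced["gender"]).mean())  # equal positive rates
```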

    Discrimination aware classification (Extended abstract)

