
    Framework for Identification and Prevention of Direct and Indirect Discrimination using Data mining

    Data mining is the extraction of useful and important information from huge collections of data. It also carries negative social perceptions, among them potential privacy invasion and potential discrimination. Discrimination is the unequal or unfair treatment of people based on their belonging to a specific group. Automated data collection and data mining techniques such as classification rule mining have made it easier to automate decisions, such as loan granting/denial and insurance premium computation. If the training data sets are biased with respect to discriminatory (sensitive) attributes such as age, gender, race or religion, discriminatory decisions may ensue. For this reason, antidiscrimination techniques, including discrimination discovery, identification and prevention, have been introduced in data mining. Discrimination may be of two types, direct or indirect. Direct discrimination occurs when decisions are taken on the basis of sensitive attributes. Indirect discrimination occurs when decisions are made based on non-sensitive attributes that are strongly correlated with biased sensitive ones. In this paper, we deal with discrimination prevention in data mining and propose new methods applicable to direct or indirect discrimination prevention individually or both at the same time. We discuss how to clean training data sets and transformed data sets in such a way that direct and/or indirect discriminatory decision rules are converted into legitimate (non-discriminatory) classification rules. We also propose new measures and metrics to analyse the utility of the proposed approaches, and we compare these approaches.
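    To make the rule-level notion of direct discrimination concrete, below is a minimal Python sketch of discrimination discovery using the extended lift (elift) measure common in this literature: a rule "sensitive AND context -> outcome" is flagged when its confidence exceeds that of "context -> outcome" by a factor of at least alpha. The toy records, attribute names and threshold are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of direct-discrimination discovery over classification
# rules via extended lift (elift). All data below are hypothetical.

def confidence(records, premise, outcome):
    """conf(premise -> outcome): fraction of records matching `premise`
    that also match `outcome`. Both are dicts of attribute -> value."""
    matches = [r for r in records if all(r.get(k) == v for k, v in premise.items())]
    if not matches:
        return 0.0
    hits = [r for r in matches if all(r.get(k) == v for k, v in outcome.items())]
    return len(hits) / len(matches)

def elift(records, sensitive, context, outcome):
    """elift of (sensitive AND context -> outcome) relative to
    (context -> outcome). A value >= alpha flags the rule as
    potentially directly discriminatory."""
    base = confidence(records, context, outcome)
    if base == 0.0:
        return float("inf")
    return confidence(records, {**sensitive, **context}, outcome) / base

# Toy data set of hypothetical loan decisions.
records = [
    {"gender": "female", "job": "clerk", "loan": "deny"},
    {"gender": "female", "job": "clerk", "loan": "deny"},
    {"gender": "male",   "job": "clerk", "loan": "grant"},
    {"gender": "male",   "job": "clerk", "loan": "deny"},
]

alpha = 1.2  # discrimination threshold, chosen by the analyst
score = elift(records, {"gender": "female"}, {"job": "clerk"}, {"loan": "deny"})
print(f"elift = {score:.2f}; discriminatory: {score >= alpha}")
```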

    Observing and recommending from a social web with biases

    The research question this report addresses is: how, and to what extent, can those directly involved with the design, development and employment of a specific black box algorithm be certain that it is not unlawfully discriminating (directly and/or indirectly) against particular persons with protected characteristics (e.g. gender, race and ethnicity)?
    Comment: Technical Report, University of Southampton, March 201
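    As one illustration of what such an audit could look like in practice, here is a minimal Python sketch that screens a black box's recorded decisions for indirect discrimination using the common "four-fifths" disparate impact ratio. The decision data, group labels and threshold are hypothetical, and the report itself does not prescribe this particular check.

```python
# Minimal sketch of a disparate-impact screen over black-box decisions.
# Decisions and protected-attribute labels below are invented examples.

def disparate_impact(decisions, groups, protected_value, favourable="grant"):
    """Ratio of favourable-outcome rates: protected group vs. the rest.
    Values below ~0.8 are commonly treated as a red flag."""
    prot = [d for d, g in zip(decisions, groups) if g == protected_value]
    rest = [d for d, g in zip(decisions, groups) if g != protected_value]
    rate = lambda ds: sum(d == favourable for d in ds) / len(ds) if ds else 0.0
    rest_rate = rate(rest)
    return rate(prot) / rest_rate if rest_rate else float("inf")

# Hypothetical audit log: decisions recorded alongside each
# applicant's protected attribute.
decisions = ["grant", "deny", "deny", "grant", "grant", "deny"]
groups    = ["f",     "f",    "f",    "m",     "m",     "m"]
ratio = disparate_impact(decisions, groups, protected_value="f")
print(f"disparate impact ratio = {ratio:.2f}")  # 0.33/0.67 = 0.50 here
```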

    Multilevel Anti Discrimination and Privacy Preservation Correlativity

    As technology grows rapidly, most organizations need to reveal crucial data, including sensitive information that discloses an individual's identity, during business analytics and service provision. To limit access to such sensitive data, various privacy preservation techniques are applied according to an assumed priority level. Multilevel privacy-preserved, discrimination-free data transmission deals with the interplay of discrimination prevention and privacy preservation. By applying appropriate privacy preservation techniques, discrimination prevention can be accomplished alongside secure transmission of data to different levels of users. From a sociological standpoint, discrimination is the unfair treatment of an individual or group based on their membership in a particular category, so the decision attribute that leads to discrimination needs to be hidden or transformed. A unified discrimination prevention method is available that avoids both direct and indirect discrimination simultaneously. Although discriminatory biases are eliminated, this can cause substantial data loss, which lowers data transmission efficiency; data quality is largely preserved because an encryption technique is included. The proposed system is dynamic in nature and can be implemented in any organization. The experimental evaluation allows us to conclude that the proposed work achieves efficient data transmission without discrimination and with maximum privacy preservation. DOI: 10.17762/ijritcc2321-8169.15070
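    The idea of hiding or transforming a sensitive attribute differently per trust level can be sketched as below. The attribute names, generalization map and level policy are assumptions made for illustration; they are not the paper's actual algorithm, and the encryption step is omitted.

```python
# Minimal sketch of multilevel release of a record: full access,
# generalized sensitive values, or full suppression, by trust level.
# Attribute names and the level policy are hypothetical.

GENERALIZE = {"age": lambda v: f"{(v // 10) * 10}-{(v // 10) * 10 + 9}"}

def release(record, level):
    """Return a view of `record` for a trust `level`:
    3 = full access, 2 = sensitive values generalized, 1 = suppressed."""
    out = dict(record)
    for attr, gen in GENERALIZE.items():
        if attr not in out:
            continue
        if level <= 1:
            out[attr] = "*"             # suppress entirely
        elif level == 2:
            out[attr] = gen(out[attr])  # coarsen to a range
    return out

record = {"name_hash": "a1b2c3", "age": 37, "loan": "grant"}
for lvl in (3, 2, 1):
    print(lvl, release(record, lvl))
```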

    Fairness in Algorithmic Decision Making: An Excursion Through the Lens of Causality

    As virtually all aspects of our lives are increasingly impacted by algorithmic decision making systems, it is incumbent upon us as a society to ensure such systems do not become instruments of unfair discrimination on the basis of gender, race, ethnicity, religion, etc. We consider the problem of determining whether the decisions made by such systems are discriminatory, through the lens of causal models. We introduce two definitions of group fairness grounded in causality: fair on average causal effect (FACE), and fair on average causal effect on the treated (FACT). We use the Rubin-Neyman potential outcomes framework for the analysis of cause-effect relationships to robustly estimate FACE and FACT. We demonstrate the effectiveness of our proposed approach on synthetic data. Our analyses of two real-world data sets, the Adult income data set from the UCI repository (with gender as the protected attribute), and the NYC Stop and Frisk data set (with race as the protected attribute), show that the evidence of discrimination obtained by FACE and FACT, or lack thereof, is often in agreement with the findings from other studies. We further show that FACT, being somewhat more nuanced than FACE, can yield findings of discrimination that differ from those obtained using FACE.
    Comment: 7 pages, 2 figures, 2 tables. To appear in Proceedings of the International Conference on World Wide Web (WWW), 201
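    A minimal sketch of a FACE-style estimate on synthetic data follows, using regression standardization as one simple potential-outcomes estimator: fit an outcome model within each group, then average the difference of predicted potential outcomes. The data-generating process and model are invented for illustration; the paper's own estimation procedure may differ.

```python
# Minimal sketch of estimating an average causal effect of a protected
# attribute on a decision score, in the spirit of FACE. Synthetic data.

import numpy as np

rng = np.random.default_rng(0)
n = 2000
a = rng.integers(0, 2, n)                               # protected attribute (0/1)
x = rng.normal(size=n)                                  # confounding covariate
y = 0.5 * x + 0.3 * a + rng.normal(scale=0.1, size=n)   # decision score

def fit_outcome(mask):
    """Least-squares fit of y ~ 1 + x within one arm (a=0 or a=1)."""
    X = np.column_stack([np.ones(mask.sum()), x[mask]])
    beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
    return beta

b1, b0 = fit_outcome(a == 1), fit_outcome(a == 0)
X_all = np.column_stack([np.ones(n), x])
effects = X_all @ b1 - X_all @ b0        # predicted Y(1) - Y(0) per unit
face = np.mean(effects)                  # average over everyone (FACE-style)
fact = np.mean(effects[a == 1])          # average over the treated (FACT-style)
print(f"FACE-style estimate: {face:.3f}, FACT-style estimate: {fact:.3f}")
# Both should be near 0.3, the effect planted in the synthetic data.
```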