17,933 research outputs found

    iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making

    People are increasingly rated and ranked by algorithmic decision-making systems in a growing number of applications, typically based on machine learning. Research on how to incorporate fairness into such tasks has prevalently pursued the paradigm of group fairness: giving adequate success rates to specifically protected groups. In contrast, the alternative paradigm of individual fairness has received relatively little attention, and this paper advances this less-explored direction. The paper introduces a method for probabilistically mapping user records into a low-rank representation that reconciles individual fairness with the utility of classifiers and rankings in downstream applications. Our notion of individual fairness requires that users who are similar in all task-relevant attributes, such as job qualification, while disregarding all potentially discriminating attributes, such as gender, should have similar outcomes. We demonstrate the versatility of our method by applying it to classification and learning-to-rank tasks on a variety of real-world datasets. Our experiments show substantial improvements over the best prior work for this setting.
    Comment: Accepted at ICDE 2019. Please cite the ICDE 2019 proceedings version.
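
    The abstract's fairness notion and objective can be made concrete. Below is a minimal NumPy sketch of an iFair-style loss, assuming a prototype-based mapping in which records are softly assigned to K learned prototypes; the names (ifair_loss, prototypes, protected_idx, lam) and all hyperparameters are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def ifair_loss(X, prototypes, protected_idx, lam=1.0):
        """X: (n, d) user records; prototypes: (K, d) learned vectors."""
        # Probabilistic assignment of each record to the prototypes,
        # yielding a low-rank reconstruction X_hat.
        dists = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
        U = softmax(-dists, axis=1)        # (n, K) membership probabilities
        X_hat = U @ prototypes             # (n, d) low-rank representation

        # Utility term: the representation should preserve the records.
        l_util = ((X - X_hat) ** 2).mean()

        # Individual-fairness term: pairwise distances in the learned
        # representation should match distances computed on task-relevant
        # (non-protected) attributes only.
        mask = np.ones(X.shape[1], dtype=bool)
        mask[protected_idx] = False
        Xr = X[:, mask]
        d_star = np.linalg.norm(Xr[:, None, :] - Xr[None, :, :], axis=-1)
        d_hat = np.linalg.norm(X_hat[:, None, :] - X_hat[None, :, :], axis=-1)
        l_fair = ((d_hat - d_star) ** 2).mean()

        return l_util + lam * l_fair

    # Example: 50 records with 6 attributes, the last one protected.
    rng = np.random.default_rng(0)
    print(ifair_loss(rng.normal(size=(50, 6)),
                     rng.normal(size=(5, 6)), protected_idx=[5]))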


    Attribute-Based Access Control Policy Generation Approach from Access Logs Based on CatBoost

    Attribute-based access control (ABAC) offers higher flexibility and better scalability than traditional access control and can be used for fine-grained access control in large-scale information systems. Although ABAC can express dynamic, complex access control policies, defining them manually is costly, tedious, and error-prone, so it is worth studying how to construct an ABAC policy efficiently and accurately. This paper proposes an ABAC policy generation approach based on the CatBoost algorithm that automatically learns policies from historical access logs. First, we perform a weighted reconstruction of the attributes for the policy to be mined. Second, we provide an ABAC rule extraction algorithm, a rule pruning algorithm, and a rule optimization algorithm, of which the latter two improve the accuracy of the generated policies. In addition, we present a new policy quality indicator that measures both the accuracy and the simplicity of the generated policies. Finally, experimental results validate the feasibility and effectiveness of the approach.
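
    As a rough illustration of the pipeline shape described above (fit a CatBoost classifier on historical access logs, then derive candidate permit rules from attribute combinations the model accepts with high confidence), here is a hedged Python sketch. The toy log, the column names, and the 0.95 confidence threshold are assumptions; the paper's actual rule extraction, pruning, and optimization algorithms are more involved than this simple filter.

    import pandas as pd
    from catboost import CatBoostClassifier

    # Toy access log: subject/resource/environment attributes plus the
    # recorded decision (1 = permit, 0 = deny). Purely illustrative.
    logs = pd.DataFrame({
        "role":     ["doctor", "nurse", "doctor", "intern"],
        "resource": ["record", "record", "lab",    "record"],
        "time":     ["day",    "night",  "day",    "night"],
        "decision": [1, 1, 1, 0],
    })
    features = ["role", "resource", "time"]

    model = CatBoostClassifier(iterations=200, depth=4, verbose=False)
    model.fit(logs[features], logs["decision"], cat_features=features)

    # Candidate rules: distinct attribute tuples the model permits with
    # high confidence; pruning and optimization would then merge and
    # simplify them into a compact policy.
    candidates = logs[features].drop_duplicates()
    permit_prob = model.predict_proba(candidates)[:, 1]
    print(candidates[permit_prob > 0.95])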