2,891 research outputs found

    Fair Inputs and Fair Outputs: The Incompatibility of Fairness in Privacy and Accuracy

    Fairness concerns about algorithmic decision-making systems have mainly focused on the outputs (e.g., the accuracy of a classifier across individuals or groups). However, one may additionally be concerned with fairness in the inputs. In this paper, we propose and formulate two properties regarding the inputs of (the features used by) a classifier. In particular, we claim that fair privacy (whether individuals are all asked to reveal the same information) and need-to-know (whether users are only asked for the minimal information required for the task at hand) are desirable properties of a decision system. We explore the interaction between these properties and fairness in the outputs (fair prediction accuracy). We show that for an optimal classifier these three properties are in general incompatible, and we explain what common properties of data make them incompatible. Finally, we provide an algorithm to verify whether the trade-off between the three properties exists in a given dataset, and use it to show that this trade-off is common in real data.
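    One of the three properties above, fair prediction accuracy, can be illustrated with a simple per-group accuracy check. The sketch below (Python, with made-up toy arrays) computes accuracy for each group and the gap between groups; it is only an illustration of the output-side notion, not the verification algorithm from the paper.

    # Illustrative only: per-group accuracy check for the "fair outputs" notion.
    # The arrays below are made up; this is not the paper's trade-off
    # verification algorithm.
    import numpy as np

    def group_accuracies(y_true, y_pred, group):
        """Return prediction accuracy for each value of the group attribute."""
        return {g: float(np.mean(y_true[group == g] == y_pred[group == g]))
                for g in np.unique(group)}

    # Toy example: the classifier is perfectly accurate on group 0 but not group 1.
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 0])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    accs = group_accuracies(y_true, y_pred, group)
    gap = max(accs.values()) - min(accs.values())
    print(accs, "accuracy gap:", gap)  # a large gap signals unfair outputs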

    Observing and recommending from a social web with biases

    The research question this report addresses is: how, and to what extent, can those directly involved with the design, development and employment of a specific black box algorithm be certain that it is not unlawfully discriminating (directly and/or indirectly) against particular persons with protected characteristics (e.g. gender, race and ethnicity)? Comment: Technical Report, University of Southampton, March 201

    A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability

    Graph Neural Networks (GNNs) have developed rapidly in recent years. Due to their great ability to model graph-structured data, GNNs are widely used in various applications, including high-stakes scenarios such as financial analysis, traffic prediction, and drug discovery. Despite their great potential to benefit humans in the real world, recent studies show that GNNs can leak private information, are vulnerable to adversarial attacks, can inherit and magnify societal bias from training data, and lack interpretability, all of which risk causing unintentional harm to users and society. For example, existing work demonstrates that attackers can fool GNNs into giving the outcome they desire with unnoticeable perturbations to the training graph. GNNs trained on social networks may embed discrimination in their decision process, strengthening undesirable societal bias. Consequently, trustworthy GNNs in various aspects are emerging to prevent harm from GNN models and increase users' trust in GNNs. In this paper, we give a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability. For each aspect, we give a taxonomy of the related methods and formulate general frameworks for the multiple categories of trustworthy GNNs. We also discuss future research directions for each aspect and connections between these aspects to help achieve trustworthiness.
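    As background on how GNNs model graph-structured data, the sketch below (Python/NumPy, with made-up toy values) applies a single graph-convolution layer that aggregates each node's neighbourhood before a linear transform and ReLU; it is a generic illustration and not code from the survey.

    # Illustrative only: one graph-convolution layer H = ReLU(D^-1/2 (A+I) D^-1/2 X W)
    # on a toy 4-node graph. All values are made up; this is not code from the survey.
    import numpy as np

    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 1],
                  [0, 1, 0, 1],
                  [0, 1, 1, 0]], dtype=float)  # toy adjacency matrix
    X = np.random.rand(4, 3)                   # toy node features (4 nodes, 3 features)
    W = np.random.rand(3, 2)                   # toy layer weights (3 -> 2 dimensions)

    A_hat = A + np.eye(4)                            # add self-loops
    D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)  # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt         # symmetric normalisation

    H = np.maximum(0, A_norm @ X @ W)  # aggregate neighbours, transform, ReLU
    print(H.shape)                     # (4, 2): new node embeddings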

    Geographies of HIV/AIDS in Bangladesh: Vulnerability, Stigma and Place.

    EThOS - Electronic Theses Online Service, United Kingdom

    Living with HIV/AIDS: turning points, transitions and transformations in the lives of women from Bombay and Edinburgh

    EThOS - Electronic Theses Online Service, United Kingdom

    Exacerbating Algorithmic Bias through Fairness Attacks

    Algorithmic fairness has attracted significant attention in recent years, with many quantitative measures suggested for characterizing the fairness of different machine learning algorithms. Despite this interest, the robustness of those fairness measures with respect to an intentional adversarial attack has not been properly addressed. Indeed, most adversarial machine learning has focused on the impact of malicious attacks on the accuracy of the system, without any regard to the system's fairness. We propose new types of data poisoning attacks in which an adversary intentionally targets the fairness of a system. Specifically, we propose two families of attacks that target fairness measures. In the anchoring attack, we skew the decision boundary by placing poisoned points near specific target points to bias the outcome. In the influence attack on fairness, we aim to maximize the covariance between the sensitive attributes and the decision outcome, thereby affecting the fairness of the model. We conduct extensive experiments that indicate the effectiveness of our proposed attacks.
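    The influence attack described above is said to target the covariance between the sensitive attribute and the decision outcome. The sketch below (Python, with made-up toy arrays) computes that quantity so the objective is concrete; it is only an illustration, not the authors' attack implementation.

    # Illustrative only: the quantity the influence attack on fairness targets,
    # namely the covariance between a binary sensitive attribute and the decision
    # outcome. Data are made up; this is not the authors' attack code.
    import numpy as np

    def sensitive_outcome_covariance(sensitive, decisions):
        """Empirical covariance between a sensitive attribute and predictions."""
        s = np.asarray(sensitive, dtype=float)
        d = np.asarray(decisions, dtype=float)
        return float(np.mean((s - s.mean()) * (d - d.mean())))

    # Toy example: decisions that track the sensitive attribute give a large
    # covariance (unfair); decisions independent of it give a value near zero.
    sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    biased    = np.array([0, 0, 0, 1, 1, 1, 1, 1])
    balanced  = np.array([0, 1, 0, 1, 0, 1, 0, 1])

    print(sensitive_outcome_covariance(sensitive, biased))    # ~0.19
    print(sensitive_outcome_covariance(sensitive, balanced))  # 0.0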

    Health communication theories: Implications for HIV reporting in Asia and the Pacific

    This paper focuses on the expanding HIV (Human Immunodeficiency Virus) epidemic in parts of Asia and the Pacific region and recommends the adoption of insights from particular health communication theories. The author argues that these paradigms can assist in broadening the current scope and content of HIV reporting. One theory in particular, Social Change Communication (SCC), challenges the media to extend the framing of HIV from primarily a health story to one that is linked to broader socio-economic, cultural and political factors. Asian and Pacific countries with an emerging or expanding HIV epidemic need to recognise a common reality when reporting on the disease: the complexity and interconnectedness of the web of issues into which the HIV pandemic is woven.