
    Framework for Trustworthy AI in the Health Sector

    The European Commission defines Trustworthy AI as lawful, ethical, and robust. The ethical component and its technical methods are the main focus of this research. Accordingly, the initial research goal is to create a methodology for evaluating datasets for ML modelling against ethical principles in the healthcare domain. Ethical risk assessment helps ensure compliance with principles such as privacy, fairness, safety, and transparency, which are especially important in the health sector. At the same time, risks must be evaluated with respect to AI model performance and possible risk-mitigation scenarios. Ethical risk mitigation techniques involve data modification and the elimination of private information from datasets, both of which directly influence AI modelling. Such techniques should therefore be selected carefully depending on domain and context. In this research work, we present an analysis of these techniques.
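    One such data-modification mitigation can be sketched in a few lines: remove direct identifiers and coarsen a quasi-identifier before modelling. This is a minimal illustration only; the column names (patient_name, ssn, age) are hypothetical, not taken from the paper.

```python
# Minimal sketch of the data-modification mitigation described above:
# drop direct identifiers and coarsen a quasi-identifier before modelling.
# Column names are hypothetical, not taken from the paper.
import pandas as pd

def strip_private_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Remove direct identifiers and coarsen quasi-identifiers."""
    # Columns that directly identify a patient are removed outright.
    df = df.drop(columns=["patient_name", "ssn"], errors="ignore")
    # Exact age is a quasi-identifier; 10-year bands reduce
    # re-identification risk while keeping the signal usable.
    if "age" in df.columns:
        df["age_band"] = pd.cut(df["age"], bins=range(0, 110, 10))
        df = df.drop(columns=["age"])
    return df

records = pd.DataFrame({
    "patient_name": ["A. Smith", "B. Jones"],
    "age": [34, 71],
    "diagnosis": ["flu", "copd"],
})
print(strip_private_columns(records))
```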

    Privacy and Fairness in Recommender Systems via Adversarial Training of User Representations

    Latent factor models for recommender systems represent users and items as low-dimensional vectors. Privacy risks of such systems have previously been studied mostly in the context of recovering personal information, in the form of usage records, from the training data. However, the user representations themselves may be combined with external data to recover private user information such as gender and age. In this paper we show that user vectors calculated by a common recommender system can be exploited in this way. We propose a privacy-adversarial framework to eliminate such leakage of private information, and study the trade-off between recommender performance and leakage both theoretically and empirically using a benchmark dataset. An advantage of the proposed method is that it also helps guarantee fairness of results, since all implicit knowledge of a set of attributes is scrubbed from the representations used by the model and thus cannot enter into the decision making. We discuss further applications of this method towards the generation of deeper and more insightful recommendations.
    Comment: International Conference on Pattern Recognition and Method
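    A gradient-reversal layer is one standard way to implement this kind of adversarial scrubbing, and gives a feel for the idea. The sketch below is an assumption-laden illustration: the paper's exact architecture may differ, and all dimensions, layer choices, and the use of gender as the private attribute are placeholders.

```python
# Sketch of privacy-adversarial training with a gradient-reversal layer
# in PyTorch. All dimensions, layer choices, and the 'gender' attribute
# are illustrative assumptions; the paper's architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

n_users, dim = 1000, 32
user_emb = nn.Embedding(n_users, dim)   # latent user factors
rec_head = nn.Linear(dim, 1)            # e.g. rating prediction
adversary = nn.Linear(dim, 2)           # tries to predict the private attribute

users = torch.randint(0, n_users, (64,))
ratings = torch.rand(64, 1)
genders = torch.randint(0, 2, (64,))    # private attribute to scrub

u = user_emb(users)
rec_loss = F.mse_loss(rec_head(u), ratings)
# The adversary learns to recover gender, but the reversed gradient pushes
# the embeddings to make that recovery impossible.
adv_loss = F.cross_entropy(adversary(GradReverse.apply(u)), genders)
(rec_loss + adv_loss).backward()
```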

    Generative Adversarial Networks for Mitigating Biases in Machine Learning Systems

    In this paper, we propose a new framework for mitigating biases in machine learning systems. The problem with existing mitigation approaches is that they are model-oriented: they focus on tuning the training algorithms to produce fair results, while overlooking the fact that the training data itself can be the main reason for biased outcomes. Technically speaking, such model-based approaches have two essential limitations: 1) mitigation cannot be achieved without degrading the accuracy of the machine learning models, and 2) when the training data are largely biased, training time automatically increases in the search for learning parameters that produce fair results. To address these shortcomings, we propose a new framework that can largely mitigate the biases and discrimination in machine learning systems while at the same time enhancing their prediction accuracy. The proposed framework is based on conditional Generative Adversarial Networks (cGANs), which are used to generate new synthetic, fair data with selective properties from the original data. We also propose a framework for analysing data biases, which is important for understanding the amount and type of data that need to be synthetically sampled and labelled for each population group. Experimental results show that the proposed solution can efficiently mitigate different types of biases while enhancing the prediction accuracy of the underlying machine learning model.
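    A compact sketch may help make the cGAN component concrete: the generator is conditioned on a group label, so under-represented groups can be over-sampled with synthetic records. Layer sizes, names, and the omitted adversarial training loop are assumptions, not the authors' implementation.

```python
# Compact cGAN sketch: the generator is conditioned on a group label, so
# under-represented groups can be over-sampled with synthetic records.
# Sizes and names are assumptions; the adversarial training loop is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_features, n_groups, z_dim = 8, 2, 16

generator = nn.Sequential(
    nn.Linear(z_dim + n_groups, 64), nn.ReLU(),
    nn.Linear(64, n_features),
)
# The discriminator scores (record, group) pairs as real or fake and would
# be trained adversarially against the generator.
discriminator = nn.Sequential(
    nn.Linear(n_features + n_groups, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

def sample_synthetic(group_id: int, n: int) -> torch.Tensor:
    """Draw n synthetic records conditioned on one population group."""
    z = torch.randn(n, z_dim)
    cond = F.one_hot(torch.full((n,), group_id), n_groups).float()
    return generator(torch.cat([z, cond], dim=1))

# e.g. rebalance a biased dataset with extra minority-group rows
extra_rows = sample_synthetic(group_id=1, n=100)
```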

    A survey of fairness in classification-based machine learning

    Abstract. As the usage and impact of machine learning applications increase, it is increasingly important to ensure that the systems in use benefit their users and the larger society around them. One step towards this is limiting the unfairness an algorithm might exhibit. Existing machine learning applications have at times proved disadvantageous to certain minorities; to combat this, we need to define what fairness means and how it can be increased in machine learning applications. This survey is conducted as a literature review with the goal of presenting an overview of fairness in classification-based machine learning. It briefly motivates fairness through its philosophical background and examples of unfairness, then covers the most popular fairness definitions in machine learning. The paper then lists some of the most important methods for restricting unfairness, splitting them into pre-, in-, and post-processing methods.
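    Among the fairness definitions such surveys cover, demographic parity (equal positive-prediction rates across groups) is one of the most common, and it is straightforward to check. The arrays below are illustrative, not drawn from the survey.

```python
# Demographic parity, one of the popular fairness definitions surveyed:
# positive-prediction rates should match across groups.
# The arrays below are illustrative.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # classifier decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute
print(demographic_parity_gap(y_pred, group))  # 0.75 vs 0.25 -> 0.5
```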