
    Framework for Trustworthy AI in the Health Sector

    The European Commission specifies that Trustworthy AI should be lawful, ethical and robust. The ethical component and its technical methods are the main focus of this research. Accordingly, the initial research goal is to create a methodology for evaluating datasets for ML modelling against ethical principles in the healthcare domain. Ethical risk assessment helps ensure compliance with principles such as privacy, fairness, safety and transparency, which are especially important for the healthcare sector. At the same time, risks must be evaluated with respect to AI model performance and possible risk-mitigation scenarios. Ethical risk mitigation techniques involve data modification, such as the elimination of private information from datasets, which directly influences AI modelling; these techniques should therefore be carefully selected depending on domain and context. In this research work, we present an analysis of these techniques.
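    As an illustration of the data-modification mitigation mentioned in this abstract, the following is a minimal sketch (not from the paper) of removing direct identifiers and other private attributes from a dataset before ML modelling; the column names and the pandas-based helper are hypothetical.

    # Minimal sketch of one data-modification mitigation: dropping private
    # columns from a dataset before it is used for ML modelling.
    # Column names are hypothetical examples, not the paper's data.
    import pandas as pd

    # Hypothetical patient-level dataset.
    records = pd.DataFrame({
        "patient_id":     ["p1", "p2", "p3"],
        "postcode":       ["D01", "D02", "D04"],
        "age":            [54, 61, 47],
        "blood_pressure": [130, 145, 120],
        "diagnosis":      [1, 0, 1],
    })

    # Attributes treated as private / ethically risky in this illustration.
    PRIVATE_COLUMNS = ["patient_id", "postcode"]

    def mitigate_privacy_risk(df: pd.DataFrame, private_cols) -> pd.DataFrame:
        """Return a copy of the dataset with the listed private columns removed."""
        return df.drop(columns=[c for c in private_cols if c in df.columns])

    cleaned = mitigate_privacy_risk(records, PRIVATE_COLUMNS)
    print(cleaned.columns.tolist())   # ['age', 'blood_pressure', 'diagnosis']

    Whether such column removal is an acceptable mitigation depends on the domain and context, which is exactly the trade-off the abstract highlights: modifying the data reduces ethical risk but can also affect model performance.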

    Re-formalization of Individual Fairness

    The notion of individual fairness is a formalization of the ethical principle "Treat like cases alike," which has been argued for since Aristotle. In the fairness-aware machine learning context, Dwork et al. first formalized the notion: a pair of data points that are similar in an unfair space should be mapped to similar positions in a fair space. We propose to re-formalize individual fairness as statistical independence conditioned on individuals. This re-formalization has the following merits. First, our formalization is compatible with that of Dwork et al. Second, it enables individual fairness to be combined with the fairness notions of equalized odds and sufficiency, as well as statistical parity. Third, whereas their formalization implicitly assumes a pre-process approach to making fair predictions, ours is also applicable to in-process and post-process approaches.
    Comment: Published at the 6th FAccTRec Workshop: Responsible Recommendation
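    For context, the following is a minimal sketch of the Lipschitz-style reading of Dwork et al.'s formalization that this paper re-formalizes: a predictor is individually fair when similar individuals receive similar outputs. The distance functions, toy data, and the violation-counting helper are illustrative assumptions, not the authors' code.

    # Minimal sketch: count pairs that violate the Lipschitz-style condition
    # "output distance should not exceed input distance".
    import numpy as np

    def individual_fairness_violations(X, scores, input_dist, output_dist):
        """Count pairs (i, j) whose prediction distance exceeds their input distance."""
        n = len(X)
        violations = 0
        for i in range(n):
            for j in range(i + 1, n):
                if output_dist(scores[i], scores[j]) > input_dist(X[i], X[j]):
                    violations += 1
        return violations

    # Toy data: two nearly identical individuals and one dissimilar individual,
    # with model scores in [0, 1].
    X = np.array([[0.10, 0.20], [0.12, 0.19], [0.90, 0.80]])
    scores = np.array([0.30, 0.70, 0.95])

    d_input = lambda a, b: float(np.linalg.norm(a - b))   # similarity of individuals
    d_output = lambda a, b: abs(float(a - b))             # similarity of predictions

    # The near-identical pair (rows 0 and 1) receives very different scores,
    # so one violation is reported.
    print(individual_fairness_violations(X, scores, d_input, d_output))

    The paper's contribution is to recast this pairwise-similarity view as conditional statistical independence, which is what allows it to be combined with group notions such as equalized odds, sufficiency, and statistical parity.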

    Achieving non-discrimination in prediction

    Discrimination-aware classification is receiving increasing attention in data science. Pre-process methods for constructing a discrimination-free classifier first remove discrimination from the training data and then learn the classifier from the cleaned data. However, they lack a theoretical guarantee about the discrimination that may remain when the classifier is deployed for prediction. In this paper, we fill this gap by mathematically bounding the probability that the discrimination in prediction falls within a given interval, in terms of the training data and the classifier. We adopt a causal model of the data generation mechanism and formally define discrimination in the population, in a dataset, and in prediction. We obtain two important theoretical results: (1) discrimination in prediction can still exist even if discrimination in the training data is completely removed; and (2) not all pre-process methods can ensure non-discrimination in prediction, even though they achieve non-discrimination in the modified training data. Based on these results, we develop a two-phase framework for constructing a discrimination-free classifier with a theoretical guarantee. Experiments confirm the theoretical results and show the effectiveness of the two-phase framework.
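    The following is a minimal sketch (not the authors' experiments) of the first theoretical result: training labels can be made discrimination-free, as measured here by a simple risk difference, while a classifier trained on a correlated proxy feature still discriminates in prediction. The toy data, the label "massaging" loop, and the scikit-learn model are illustrative assumptions.

    # Minimal sketch: zero discrimination in training labels does not imply
    # zero discrimination in the classifier's predictions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def risk_difference(y, s):
        """P(y=1 | s=1) - P(y=1 | s=0): a simple statistical-parity measure."""
        return y[s == 1].mean() - y[s == 0].mean()

    rng = np.random.default_rng(0)
    n = 4000
    s = rng.integers(0, 2, n)                    # sensitive attribute
    x = s + rng.normal(scale=1.0, size=n)        # proxy feature correlated with s
    y = (x + rng.normal(scale=1.0, size=n) > 0.5).astype(int)

    # Crude stand-in for a pre-process method: flip random labels until the
    # positive rates of the two groups match in the training data.
    while risk_difference(y, s) > 0.01:
        y[rng.choice(np.flatnonzero((s == 1) & (y == 1)))] = 0
        y[rng.choice(np.flatnonzero((s == 0) & (y == 0)))] = 1

    print("discrimination in training data:", risk_difference(y, s))

    # Train only on the proxy feature; predictions still differ across groups
    # because x carries information about s.
    clf = LogisticRegression().fit(x.reshape(-1, 1), y)
    y_hat = clf.predict(x.reshape(-1, 1))
    print("discrimination in prediction:   ", risk_difference(y_hat, s))

    The paper's two-phase framework is aimed at closing exactly this gap, by bounding and controlling the discrimination that survives into prediction rather than only cleaning the training data.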