3,656 research outputs found

    A New Kind of Data Science: The Need for Ethical Analytics

    Ethics can no longer be regarded as an add-on in data science and analytics. This paper argues for the necessity of formalizing a new, practically oriented sub-discipline of AI ethics by outlining the needs, highlighting the shortcomings of current approaches, and providing a framework for ethical analytics: the study of the ethical issues surrounding the development, deployment, and/or dissemination of ML/AI systems and data science research, together with the development of tools and procedures to mitigate ethical harms. While data science and machine learning are concerned primarily with data from start to finish, ethical analytics is concerned primarily with people: moral agents, the groups and societies they comprise, and the world they inhabit. Ethical analytics should be seen as complementary to the more techno-abstracted analytic disciplines, interfacing with the nuanced ethical issues that stem from ill-defined, vague, or socially relative normative concepts. It studies the issues that arise in this holistic sociotechnical environment, and it seeks to develop concrete solutions or interventions where possible, from mathematics and algorithms to procedures and protocols.

    A Review of Bias and Fairness in Artificial Intelligence

    Automating decision systems has led to hidden biases in the use of artificial intelligence (AI). Consequently, explaining these decisions and identifying responsibilities has become a challenge, and a new field of research on algorithmic fairness has emerged. In this area, detecting biases and mitigating them is essential to ensure fair and discrimination-free decisions. This paper contributes: (1) a categorization of biases and how they are associated with the different phases of an AI model's development (including the data-generation phase); (2) a review of the fairness metrics used to audit data and the AI models trained on them (including model-agnostic metrics when focusing on fairness); and (3) a novel taxonomy of procedures to mitigate bias in the different phases of an AI model's development (pre-processing, training, and post-processing), together with transversal actions that help produce fairer models.
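
    As a concrete illustration of the kind of fairness metric such audits rely on, the following minimal Python sketch computes the demographic parity difference, the gap in positive-prediction rates between two groups; the function name and toy data are illustrative, not taken from the paper.

    import numpy as np

    def demographic_parity_difference(y_pred, group):
        """Absolute gap in positive-prediction rates between two groups.
        A value near 0 means both groups receive positive predictions
        at roughly the same rate under this (single) fairness criterion."""
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        rate_0 = y_pred[group == 0].mean()  # positive rate in group 0
        rate_1 = y_pred[group == 1].mean()  # positive rate in group 1
        return abs(rate_0 - rate_1)

    # Toy audit: ten predictions, five individuals per protected group.
    preds  = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]
    groups = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
    print(demographic_parity_difference(preds, groups))  # 0.6 - 0.2 = 0.4

    Note that this is only one of many fairness criteria a full audit would consider; metrics such as equalized odds additionally condition on the true labels.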

    cii Student Papers - 2022

    In this collection of papers, we, the Research Group Critical Information Infrastructures (cii) at the Karlsruhe Institute of Technology, present eight selected student research articles contributing to the design, development, and evaluation of critical information infrastructures. During our courses, students mostly work in groups on problems and issues related to sociotechnical challenges in the realm of (critical) information systems. The papers stem from five different cii courses, namely Emerging Trends in Internet Technologies, Emerging Trends in Digital Health, Digital Health, Critical Information Infrastructures, and Selected Issues on Critical Information Infrastructures: Collaborative Development of Innovative Teaching Concepts, held in the summer term of 2021 and the winter term of 2021/2022.

    Secure and robust machine learning for healthcare: A survey

    Recent years have witnessed widespread adoption of machine learning (ML) and deep learning (DL) techniques due to their superior performance in a variety of healthcare applications, ranging from the prediction of cardiac arrest from one-dimensional heart signals to computer-aided diagnosis (CADx) using multi-dimensional medical images. Notwithstanding this impressive performance, lingering doubts remain about the robustness of ML/DL in healthcare settings, which are traditionally considered quite challenging due to the myriad security and privacy issues involved, especially in light of recent results showing that ML/DL models are vulnerable to adversarial attacks. In this paper, we present an overview of various healthcare application areas that leverage such techniques from a security and privacy point of view, along with the associated challenges. In addition, we present potential methods to ensure secure and privacy-preserving ML for healthcare applications. Finally, we provide insight into current research challenges and promising directions for future research.
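
    To make the adversarial-attack threat the survey refers to concrete, here is a minimal sketch of the well-known Fast Gradient Sign Method (FGSM) applied to a hand-built logistic-regression "diagnostic" model; the weights, input, and epsilon are toy values chosen for illustration, not taken from the paper.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fgsm_perturb(x, y, w, b, eps):
        # Gradient of the cross-entropy loss w.r.t. the input x for a
        # logistic-regression model p = sigmoid(w.x + b) is (p - y) * w.
        grad_x = (sigmoid(w @ x + b) - y) * w
        # Step by eps in the sign of the gradient: the direction that
        # most increases the loss under an L-infinity budget.
        return x + eps * np.sign(grad_x)

    w, b = np.array([2.0, -1.0]), 0.0   # toy "diagnostic" model weights
    x, y = np.array([0.5, 0.2]), 1.0    # clean input with true label 1
    print(sigmoid(w @ x + b))           # clean prediction ~0.69 (correct)
    x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
    print(sigmoid(w @ x_adv + b))       # perturbed prediction ~0.33 (flipped)

    A small, visually insignificant perturbation of the input flips the model's decision, which is exactly the failure mode that is so worrying for safety-critical healthcare deployments.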

    Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

    In the last few years, Artificial Intelligence (AI) has achieved notable momentum that, if harnessed appropriately, may deliver the best of expectations across many application sectors. For this to happen in the near future, the entire Machine Learning community must confront the barrier of explainability, an inherent problem of the latest sub-symbolic techniques (e.g., ensembles or Deep Neural Networks) that was not present in the previous wave of AI (namely, expert systems and rule-based models). The paradigms addressing this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial requirement for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions in the field of XAI, including an outlook on what is yet to be achieved. For this purpose, we first summarize previous efforts to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability, and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without prior bias against its lack of interpretability.
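
    As one concrete example of the post-hoc, model-agnostic explanation techniques such taxonomies cover, the sketch below implements permutation feature importance: shuffle one feature at a time and measure how much the model's score degrades. The toy black-box model and accuracy metric are illustrative only, not from the article.

    import numpy as np

    def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
        # Model-agnostic explanation: shuffle one feature at a time and
        # measure how much the score drops; a bigger drop means the
        # feature mattered more to the model's predictions.
        rng = np.random.default_rng(seed)
        baseline = metric(y, model(X))
        importances = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            drops = []
            for _ in range(n_repeats):
                Xp = X.copy()
                rng.shuffle(Xp[:, j])          # destroy feature j's information
                drops.append(baseline - metric(y, model(Xp)))
            importances[j] = np.mean(drops)
        return importances

    # Toy black box whose output depends only on feature 0.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] > 0).astype(int)
    model = lambda data: (data[:, 0] > 0).astype(int)
    accuracy = lambda y_true, y_pred: float(np.mean(y_true == y_pred))
    print(permutation_importance(model, X, y, accuracy))
    # e.g. [~0.5, 0.0, 0.0]: only feature 0 matters to this model.

    Because it needs only the model's predictions, this kind of technique applies equally to the sub-symbolic models (ensembles, Deep Neural Networks) whose opacity motivates XAI in the first place.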

    Understanding and Managing Non-functional Requirements for Machine Learning Systems

    Background: Machine Learning (ML) systems learn from big data and solve a wide range of prediction and decision-making problems that would be difficult to solve with traditional systems. However, the increasing use of ML in complex and safety-critical systems has raised concerns about quality requirements, defined as non-functional requirements (NFRs). Many NFRs, such as fairness, transparency, explainability, and safety, are critical to the success and acceptance of ML systems. Yet many NFRs for ML systems are not well understood (e.g., maintainability); some known NFRs may become more important (e.g., fairness), while others may become irrelevant in the ML context (e.g., modularity); some new NFRs may come into play (e.g., retrainability); and defining and measuring NFRs in ML systems remains a challenging task.

    Objective: The research project focuses on addressing and managing issues related to NFRs for ML systems. The objective is to identify current practices and challenges related to NFRs in an ML context, and to develop solutions to manage NFRs for ML systems.

    Method: We use design science as the basis of the research method. We carried out several empirical studies, including interviews, a survey, and part of a systematic mapping study, to collect data and explore the problem space. To gain in-depth insights into the collected data, we performed thematic analysis on the qualitative data and used descriptive statistics to analyze the quantitative data. We are working toward proposing a quality framework as an artifact to identify, define, specify, and manage NFRs for ML systems.

    Findings: We found that NFRs are crucial and play an important role in the success of ML systems. However, there is a research gap in this area, and managing NFRs for ML systems is challenging. To address the research objectives, we have identified important NFRs for ML systems as well as challenges related to NFRs and their measurement. We also identified a preliminary scope for NFR definition and measurement, and RE-related challenges, in different example contexts.

    Conclusion: Although NFRs are very important for ML systems, it is complex and difficult to define, allocate, specify, and measure them, and industry and research currently lack specific, well-organized solutions for managing NFRs for ML systems because of unintended bias, the non-deterministic behavior of ML, and the expense and time required for exhaustive testing. We are currently developing a quality framework to manage NFRs (e.g., identifying important NFRs, and scoping and measuring them) in the ML systems development process.
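
    To illustrate what "specifying and measuring an NFR" could look like in practice, here is a hypothetical Python sketch in which each NFR is paired with a metric and an acceptance threshold so that the requirement becomes machine-checkable. The class, metric values, and thresholds are invented for illustration and are not part of the framework the abstract proposes.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class NFR:
        name: str
        measure: Callable[[], float]   # how the quality is quantified
        threshold: float               # the acceptable bound
        higher_is_better: bool = True

        def satisfied(self) -> bool:
            value = self.measure()
            if self.higher_is_better:
                return value >= self.threshold
            return value <= self.threshold

    # Illustrative checks; in practice each measure would be computed
    # on a held-out evaluation set rather than hard-coded.
    nfrs = [
        NFR("accuracy", measure=lambda: 0.93, threshold=0.90),
        NFR("demographic parity gap", measure=lambda: 0.07,
            threshold=0.10, higher_is_better=False),
    ]
    for nfr in nfrs:
        print(f"{nfr.name}: {'PASS' if nfr.satisfied() else 'FAIL'}")

    Framing NFRs this way makes them testable in a CI pipeline, though it sidesteps the harder problem the abstract highlights: choosing metrics and thresholds that validly capture qualities like fairness in the first place.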

    Ethics-based AI auditing core drivers and dimensions: A systematic literature review

    This thesis provides a systematic literature review (SLR) of ethics-based AI auditing research. The review's main goals are to report the current status of the academic literature on AI auditing and to present findings addressing the review objectives. The review covered 50 articles on ethics-based AI auditing. The findings indicate that the AI auditing field is still new and growing: most of the studies were conference papers published in 2019 or 2020, and the field was broad and unorganized, so there was a clear demand for an SLR. Based on the findings, fairness, transparency, non-maleficence, and responsibility are the most important principles for ethics-based AI auditing. Other commonly identified principles were privacy, beneficence, freedom and autonomy, and trust. These principles were interpreted as belonging to either drivers or dimensions, depending on whether the principle is audited directly or whether achieving it is a desired outcome of the audit. The findings also suggest that external AI auditing leads the ethics-based AI auditing discussion: the majority of the papers dealt specifically with external AI auditing. The most important stakeholders were identified as researchers, developers and deployers, regulators, auditors, users, and individuals and companies. The roles of these stakeholders varied depending on whether they were proposed to conduct AI audits or whether they were beneficiaries of the audits.