
    CEPS Task Force on Artificial Intelligence and Cybersecurity: Technology, Governance and Policy Challenges. Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report, 22 January 2020

    The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing on both AI for cybersecurity and cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society. The Task Force is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and cybersecurity for AI; the debate on the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need for sharing information on threats and how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe. As part of these activities, this report aims to assess the High-Level Expert Group (HLEG) on AI Ethics Guidelines for Trustworthy AI, presented on April 8, 2019. In particular, this report analyses and makes suggestions on the Trustworthy AI Assessment List (Pilot version), a non-exhaustive list aimed at helping the public and private sectors operationalise Trustworthy AI. The list is composed of 131 items intended to guide AI designers and developers throughout the process of design, development, and deployment of AI, although it is not intended as guidance for ensuring compliance with applicable laws. The list is in its piloting phase and is currently undergoing a revision that will be finalised in early 2020. This report aims to contribute to this revision, addressing in particular the interplay between AI and cybersecurity.
This evaluation has been made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. GDPR, EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough to allow clear and easy measurement and implementation by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry. The HLEG is a diverse group, with more than 50 members representing different stakeholders, such as think tanks, academia, EU Agencies, civil society, and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall, in that more than 450 stakeholders have signed up and are contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).

    Meaningful XAI Based on User-Centric Design Methodology

    This report explores the concept of explainability in AI-based systems, distinguishing between "local" and "global" explanations. "Local" explanations refer to specific algorithmic outputs in their operational context, while "global" explanations address the behaviour of the system as a whole. The report emphasises the need to tailor explanations to users and tasks, acknowledging that explanations are not universal solutions and can have unintended consequences. Two use cases illustrate the application of explainability techniques: an educational recommender system, and explainable AI for scientific discoveries. The report discusses the subjective nature of meaningfulness in explanations and proposes cognitive metrics for its evaluation. It concludes with recommendations, including the inclusion of "local" explainability guidelines in the EU AI proposal, the adoption of a user-centric design methodology, and the harmonisation of explainable AI requirements across EU legislation and case law. Overall, this report examines the framework and use cases surrounding explainability in AI-based systems, emphasising the need for both "local" and "global" explanations tailored to the users of AI-based systems and their tasks
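    The local/global distinction can be illustrated with a minimal sketch (not taken from the report; the linear model, feature names, and student data below are purely hypothetical): a "local" explanation attributes a single prediction to its input features, while a "global" explanation aggregates such attributions over a whole dataset.

```python
# Illustrative sketch of local vs. global explanations for a toy
# linear "educational recommender" score. All names are hypothetical.

weights = {"grade_avg": 0.6, "time_on_task": 0.3, "quiz_score": 0.1}

def predict(x):
    # Toy model: a weighted sum of the input features.
    return sum(weights[f] * x[f] for f in weights)

def local_explanation(x):
    # "Local": per-feature contribution to one specific prediction.
    return {f: weights[f] * x[f] for f in weights}

def global_explanation(dataset):
    # "Global": mean absolute contribution of each feature across a dataset,
    # summarising the behaviour of the model as a whole.
    n = len(dataset)
    return {f: sum(abs(weights[f] * x[f]) for x in dataset) / n for f in weights}

students = [
    {"grade_avg": 0.8, "time_on_task": 0.5, "quiz_score": 0.9},
    {"grade_avg": 0.4, "time_on_task": 0.9, "quiz_score": 0.2},
]
print(local_explanation(students[0]))   # why THIS student got THIS score
print(global_explanation(students))     # which features drive the model overall
```

    The sketch also shows why the report argues explanations must be tailored: the local output answers a learner's question about one recommendation, while the global output answers an auditor's question about the system as a whole.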

    Why it Remains Challenging to Assess Artificial Intelligence

    Artificial Intelligence (AI) assessment to mitigate risks arising from biased, unreliable, or regulatory non-compliant systems remains an open challenge for researchers, policymakers, and organizations across industries. Because research on AI is scattered across disciplines, there is no clear overview of the challenges that must be overcome to move AI assessment forward. In this study, we synthesize existing research on AI assessment using a descriptive literature review. Our study reveals seven challenges across three main categories: ethical implications, regulatory gaps, and technical limitations. This study contributes to a better understanding of the challenges in AI assessment so that AI researchers and practitioners can resolve them and move AI assessment forward


    Metrics, Explainability and the European AI Act Proposal

    On 21 April 2021, the European Commission proposed the first legal framework on Artificial Intelligence (AI) to address the risks posed by this emerging form of computation. The Commission proposed a Regulation known as the AI Act. The proposed AI Act covers not only machine learning but also expert systems and statistical models that have long been in place. Under the proposed AI Act, new obligations are set to ensure transparency, lawfulness, and fairness. Their goal is to establish mechanisms that ensure quality at launch and throughout the whole life cycle of AI-based systems, thus providing the legal certainty that encourages innovation and investment in AI systems while preserving fundamental rights and values. A standardisation process is ongoing: several entities (e.g., ISO) and scholars are discussing how to design systems that comply with the forthcoming Act, and explainability metrics play a significant role in this discussion. Specifically, the AI Act sets new minimum requirements of explicability (transparency and explainability) for the AI systems labelled as “high-risk” in Annex III. These requirements call for technical explanations that convey the right amount of information in a meaningful way. This paper investigates how such technical explanations can be deemed to meet the minimum requirements set by the law and expected by society. To answer this question, we propose an analysis of the AI Act, aiming to understand (1) what specific explicability obligations are set and who must comply with them, and (2) whether any metric for measuring the degree of compliance of such explanatory documentation could be designed. Moreover, by envisaging the legal (or ethical) requirements that such a metric should possess, we discuss how to implement them in a practical way.
More precisely, drawing inspiration from recent advances in the theory of explanations, our analysis proposes that metrics measuring the kind of explainability endorsed by the proposed AI Act should be risk-focused, model-agnostic, goal-aware, intelligible, and accessible. We then discuss the extent to which these requirements are met by the metrics currently under discussion