
    CEPS Task Force on Artificial Intelligence and Cybersecurity: Technology, Governance and Policy Challenges. Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report, 22 January 2020

    The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to draw attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing on both AI for cybersecurity and cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society. It is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and of cybersecurity for AI; the debate on the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need to share information on threats and to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe. As part of these activities, this report aims to assess the High-Level Expert Group (HLEG) on AI Ethics Guidelines for Trustworthy AI, presented on April 8, 2019. In particular, the report analyses and makes suggestions on the Trustworthy AI Assessment List (pilot version), a non-exhaustive list intended to help the public and private sectors operationalise Trustworthy AI. The list comprises 131 items meant to guide AI designers and developers throughout the design, development and deployment of AI, although it is not intended as guidance for ensuring compliance with applicable laws. The list is in its piloting phase and is currently undergoing a revision that will be finalised in early 2020. This report seeks to contribute to that revision by addressing, in particular, the interplay between AI and cybersecurity. The evaluation was made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. the GDPR, the EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they recognise that attacks on AI are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough in terms of clear and easy measurement and of implementation by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry. The HLEG is a diverse group of more than 50 members representing different stakeholders, such as think tanks, academia, EU agencies, civil society and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall, with more than 450 stakeholders having signed up and contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).

    Ethics-based AI auditing core drivers and dimensions: A systematic literature review

    This thesis provides a systematic literature review (SLR) of ethics-based AI auditing research. The review's main goals are to report the current status of the AI auditing academic literature and to present findings addressing the review objectives. The review incorporated 50 articles on ethics-based AI auditing. The SLR findings indicate that the AI auditing field is still new and growing: most of the studies were conference proceedings published in either 2019 or 2020. There was therefore demand for an SLR, as the AI auditing field was wide and unorganized. Based on the SLR findings, fairness, transparency, non-maleficence and responsibility are the most important principles for ethics-based AI auditing. Other commonly identified principles were privacy, beneficence, freedom and autonomy, and trust. These principles were interpreted as belonging either to drivers or to dimensions, depending on whether a principle is audited directly or whether achieving it is a desired outcome of the audit. The findings also suggest that external AI auditing leads the ethics-based AI auditing discussion, with the majority of the papers dealing specifically with external AI auditing. The most important stakeholders were identified as researchers, developers and deployers, regulators, auditors, users, and individuals and companies. The roles of these stakeholders varied depending on whether they are proposed to conduct AI audits or whether they are beneficiaries of them.

    Algorithmic Auditing: Chasing AI Accountability

    Calls for audits to expose and mitigate harms related to algorithmic decision systems are proliferating, and audit provisions are coming into force, notably in the E.U. Digital Services Act. In response to these growing concerns, research organizations working on technology accountability have called for ethics and/or human rights auditing of algorithms, and an Artificial Intelligence (AI) audit industry is rapidly developing, signified by the consulting giants KPMG and Deloitte marketing their services. Algorithmic audits are a way to increase accountability for social media companies and to improve the governance of AI systems more generally. They can be elements of industry codes, prerequisites for liability immunity, or new regulatory requirements. Even when not expressly prescribed, audits may be predicates for enforcing data-related consumer protection law, or what U.S. Federal Trade Commissioner Rebecca Slaughter calls “algorithmic justice.” The desire for audits reflects a growing sense that algorithms play an important, yet opaque, role in the decisions that shape people’s life chances, as well as a recognition that audits have been uniquely helpful in advancing our understanding of the concrete consequences of algorithms in the wild and in assessing their likely impacts.

    Systematizing Audit in Algorithmic Recruitment

    Business psychologists study and assess relevant individual differences, such as intelligence and personality, in the context of work. Such studies have informed the development of artificial intelligence (AI) systems designed to measure individual differences. This has been capitalized on by companies that have developed AI-driven recruitment solutions, which include aggregation of appropriate candidates (Hiretual), interviewing through a chatbot (Paradox), video interview assessment (MyInterview), and CV analysis (Textio), as well as estimation of psychometric characteristics through image-based (Traitify) and game-based assessments (HireVue) and video interviews (Cammio). However, driven by the concern that such high-impact technology must be used responsibly, given the potential for the algorithms used by these tools to produce unfair hiring outcomes, there is an active effort to provide mechanisms of governance for such automation. In this article, we apply a systematic algorithm audit framework in the context of the ethically critical industry of algorithmic recruitment systems, exploring how audit assessments of AI-driven systems can be used to assure that such systems are deployed responsibly, fairly and in a well-governed manner. We outline sources of risk in the use of algorithmic hiring tools, suggest the most appropriate opportunities for audits to take place, recommend ways to measure bias in algorithms, and discuss the transparency of algorithms.
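    The abstract above mentions recommending ways to measure bias in algorithmic hiring, but does not reproduce the paper's method. As a minimal, illustrative sketch (not taken from the paper), one widely used screening measure is the adverse-impact ratio, i.e. the "four-fifths rule" comparing selection rates across candidate groups; the group labels, sample data and 0.8 threshold below are assumptions for illustration only.

    from collections import Counter

    def selection_rates(decisions):
        """Per-group selection rates from (group, hired) pairs."""
        totals, hires = Counter(), Counter()
        for group, hired in decisions:
            totals[group] += 1
            hires[group] += int(hired)
        return {g: hires[g] / totals[g] for g in totals}

    def adverse_impact_ratio(decisions, reference_group):
        """Ratio of each group's selection rate to the reference group's rate.

        Ratios below 0.8 are commonly flagged under the "four-fifths rule".
        """
        rates = selection_rates(decisions)
        ref = rates[reference_group]
        return {g: (r / ref if ref else float("nan")) for g, r in rates.items()}

    # Hypothetical audit data: (group, hired) outcomes from a screening stage.
    sample = [("A", True), ("A", False), ("A", True),
              ("B", False), ("B", True), ("B", False)]
    print(adverse_impact_ratio(sample, reference_group="A"))  # e.g. {'A': 1.0, 'B': 0.5}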

    Predictive Algorithms in Justice Systems and the Limits of Tech-Reformism

    Data-driven digital technologies are playing a pivotal role in shaping the global landscape of criminal justice across several jurisdictions. Predictive algorithms, in particular, now inform decision making at almost all levels of the criminal justice process. As the algorithms continue to proliferate, a fast-growing multidisciplinary scholarship has emerged to challenge their logics and highlight their capacity to perpetuate historical biases. Drawing on insights distilled from critical algorithm studies and digital sociology scholarship, this paper outlines the limits of prevailing tech-reformist remedies. The paper also builds on the interstices between the two bodies of scholarship to make a case for a broader structural framework for understanding the conduits of algorithmic bias.

    Auditing large language models: a three-layered approach

    The emergence of large language models (LLMs) represents a major advance in artificial intelligence (AI) research. However, the widespread use of LLMs is also coupled with significant ethical and social challenges. Previous research has pointed towards auditing as a promising governance mechanism to help ensure that AI systems are designed and deployed in ways that are ethical, legal, and technically robust. However, existing auditing procedures fail to address the governance challenges posed by LLMs, which are adaptable to a wide range of downstream tasks. To help bridge that gap, we offer three contributions in this article. First, we establish the need to develop new auditing procedures that capture the risks posed by LLMs by analysing the affordances and constraints of existing auditing procedures. Second, we outline a blueprint to audit LLMs in feasible and effective ways by drawing on best practices from IT governance and system engineering. Specifically, we propose a three-layered approach, whereby governance audits, model audits, and application audits complement and inform each other. Finally, we discuss the limitations not only of our three-layered approach but also of the prospect of auditing LLMs at all. Ultimately, this article seeks to expand the methodological toolkit available to technology providers and policymakers who wish to analyse and evaluate LLMs from technical, ethical, and legal perspectives.