
    A Layered Model for AI Governance


    Auditing large language models: a three-layered approach

    The emergence of large language models (LLMs) represents a major advance in artificial intelligence (AI) research. However, the widespread use of LLMs is also coupled with significant ethical and social challenges. Previous research has pointed towards auditing as a promising governance mechanism to help ensure that AI systems are designed and deployed in ways that are ethical, legal, and technically robust. However, existing auditing procedures fail to address the governance challenges posed by LLMs, which are adaptable to a wide range of downstream tasks. To help bridge that gap, we offer three contributions in this article. First, we establish the need to develop new auditing procedures that capture the risks posed by LLMs by analysing the affordances and constraints of existing auditing procedures. Second, we outline a blueprint to audit LLMs in feasible and effective ways by drawing on best practices from IT governance and system engineering. Specifically, we propose a three-layered approach, whereby governance audits, model audits, and application audits complement and inform each other. Finally, we discuss the limitations not only of our three-layered approach but also of the prospect of auditing LLMs at all. Ultimately, this article seeks to expand the methodological toolkit available to technology providers and policymakers who wish to analyse and evaluate LLMs from technical, ethical, and legal perspectives. Comment: Preprint, 29 pages, 2 figures

    International Experiences with Decentralisation

    Effective decentralisation requires the clear assignment of duties and responsibilities (functions), sufficient resources (funds), and the staff (functionaries) needed to carry out public duties at each level of government. The 3Fs, as they are commonly known, are critical to the design of any decentralised system and must be carefully sequenced to ensure its success. This paper looks at how three countries – Bolivia, Switzerland and Uganda – have devolved the 3Fs to local governments. Keywords: decentralization; local governance; federalism

    Music 2025: The Music Data Dilemma: issues facing the music industry in improving data management

    © Crown Copyright 2019. 'Music 2025' investigates the infrastructure issues around the management of digital data in an increasingly stream-driven industry. The findings are the culmination of over 50 interviews with high-profile music industry representatives across the sector and reflect key issues as well as areas of consensus and contrasting views. The findings reveal that, whilst there are great examples of data initiatives across the value chain, there are opportunities to improve efficiency and interoperability.

    Algorithmic Impact Assessments Under the GDPR: Producing Multi-Layered Explanations

    Policy-makers, scholars, and commentators are increasingly concerned with the risks of using profiling algorithms and automated decision-making. The EU's General Data Protection Regulation (GDPR) has tried to address these concerns through an array of regulatory tools. As one of us has argued, the GDPR combines individual rights with systemic governance towards algorithmic accountability. The individual tools are largely geared towards individual "legibility": making the decision-making system understandable to an individual invoking her rights. The systemic governance tools, instead, focus on bringing expertise and oversight into the system as a whole, and rely on the tactics of "collaborative governance," that is, the use of public-private partnerships towards these goals. How these two approaches to transparency and accountability interact remains a largely unexplored question, with much of the legal literature focusing instead on whether there is an individual right to explanation.

    The GDPR contains an array of systemic accountability tools. Of these tools, impact assessments (Art. 35) have recently received particular attention on both sides of the Atlantic as a means of implementing algorithmic accountability at early stages of design, development, and training. The aim of this paper is to address how a Data Protection Impact Assessment (DPIA) links the two faces of the GDPR's approach to algorithmic accountability: individual rights and systemic collaborative governance. We address the relationship between DPIAs and individual transparency rights. We propose, too, that impact assessments link the GDPR's two methods of governing algorithmic decision-making by both providing systemic governance and serving as an important "suitable safeguard" (Art. 22) of individual rights.

    After noting the potential shortcomings of DPIAs, this paper closes with a call — and some suggestions — for a Model Algorithmic Impact Assessment in the context of the GDPR. Our examination of DPIAs suggests that the current focus on the right to explanation is too narrow. We call, instead, for data controllers to consciously use the required DPIA process to produce what we call "multi-layered explanations" of algorithmic systems. This concept of multi-layered explanations not only more accurately describes what the GDPR is attempting to do, but also normatively better fills potential gaps between the GDPR's two approaches to algorithmic accountability.

    Designing an AI governance framework: From research-based premises to meta-requirements

    The development and increasing use of artificial intelligence (AI), particularly in high-risk application areas, call for attention to the governance of AI systems. Organizations and researchers have proposed AI ethics principles, but translating principles into practice-oriented frameworks has proven difficult. This paper develops meta-requirements for organizational AI governance frameworks to help translate ethical AI principles into practice and align operations with the forthcoming European AI Act. We adopt a design science research approach: we put forward research-based premises and then report the design method employed in an industry-academia research project. Based on these, we present seven meta-requirements for AI governance frameworks. The paper contributes to the IS research on AI governance by collating knowledge into meta-requirements and advancing a design approach to AI governance. The study underscores that governance frameworks need to incorporate the characteristics of AI, its contexts, and the different sources of requirements.