32 research outputs found

    Casting the Dragnet: Communications Data Retention Under the Investigatory Powers Act

    In November 2016 the Investigatory Powers Act ('IPA') received Royal Assent. The Government hailed the IPA as bringing the UK’s surveillance framework into the 21st century and better enabling the security and intelligence agencies to combat terrorism and serious crime. Yet the IPA raises a range of concerns, and its progress through Parliament was marked by sustained opposition from civil liberties groups. One of its most controversial aspects is the bulk communications data retention and disclosure framework in Parts 3 and 4. This article concerns the compatibility of that framework with EU law in light of the CJEU decisions in Digital Rights Ireland ('DRI') and Watson. It begins by briefly providing some background, then broadly sets out the requirements that can be derived from these decisions, before analysing those requirements in more detail in relation to Parts 3 and 4 of the IPA.

    Public Policy and Artificial Intelligence: Vantage Points for Critical Inquiry

    This chapter introduces three key lines of critical inquiry for addressing the relationship between artificial intelligence (AI) and public policy. We provide three vantage points from which to better understand this relationship, in which dominant narratives about AI’s merits for public sector decisions and service provision often clash with real-world experiences of their limitations and illegal effects. First, we critically examine the political drivers for, and significance of, how we define AI and its role and workings in the policy world, and how we demarcate the scope of regulation. Second, we explore the AI/policy relationship by focusing on how it unfolds through specific, but often contradictory and ambivalent, practices that, in different settings, combine meaning, strategic action, technological affordances, and material/digital objects and their effects. Our third vantage point critically assesses how these practices are situated in an uneven political economy of AI technology production, and with what implications for global justice.

    Disclosure by Design: Designing information disclosures to support meaningful transparency and accountability

    Funding Information: We acknowledge the financial support of the UK Engineering and Physical Sciences Research Council (EP/P024394/1 and EP/R033501/1) and Microsoft through the Microsoft Cloud Computing Research Centre.

    Decision Provenance: Harnessing data flow for accountable systems

    Demand is growing for more accountability regarding the technological systems that increasingly occupy our world. However, the complexity of many of these systems — often systems-of-systems — poses accountability challenges. This is because the details and nature of the information flows that interconnect and drive systems, which often occur across technical and organisational boundaries, tend to be invisible or opaque. This paper argues that data provenance methods show much promise as a technical means for increasing the transparency of these interconnected systems. Specifically, given the concerns regarding ever-increasing levels of automated and algorithmic decision-making, and so-called ‘algorithmic systems’ in general, we propose decision provenance as a concept showing much potential. Decision provenance entails using provenance methods to provide information exposing decision pipelines: chains of inputs to, nature of, and the flow-on effects from, the decisions and actions taken (at design and run-time) throughout systems. This paper introduces the concept of decision provenance, and takes an interdisciplinary (tech-legal) exploration into its potential for assisting accountability in algorithmic systems. We also indicate the implementation considerations and areas for research necessary to realise its vision. More generally, we make the case that considerations of data flow are important to discussions of accountability, complementing the community’s considerable focus on algorithmic specifics.
    Funding: EPSRC; Microsoft (donation).
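    The decision pipelines described in this abstract — chains of inputs to, and effects of, decisions taken throughout a system — can be illustrated with a minimal provenance log. The sketch below is not from the paper; all class, entity, and agent names are hypothetical, and it assumes a simple append-only record model rather than any particular provenance standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class ProvenanceRecord:
    """One step in a decision pipeline: what was done, from which inputs."""
    activity: str       # e.g. "score_applicant" (hypothetical step name)
    inputs: List[str]   # identifiers of the data items consumed
    output: str         # identifier of the decision/result produced
    agent: str          # system or organisation responsible for the step
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class DecisionProvenance:
    """Append-only log that can trace a decision back through its input chain."""

    def __init__(self) -> None:
        self.records: List[ProvenanceRecord] = []

    def record(self, activity: str, inputs: List[str],
               output: str, agent: str) -> None:
        self.records.append(ProvenanceRecord(activity, inputs, output, agent))

    def trace(self, output_id: str) -> List[ProvenanceRecord]:
        """Walk backwards from a decision to every step that fed into it."""
        lineage: List[ProvenanceRecord] = []
        frontier = [output_id]
        while frontier:
            current = frontier.pop()
            for r in self.records:
                if r.output == current:
                    lineage.append(r)
                    frontier.extend(r.inputs)
        return lineage

# A toy pipeline spanning organisational boundaries (all names illustrative).
log = DecisionProvenance()
log.record("collect_data", ["form:123"], "profile:abc", "OrgA")
log.record("score_applicant", ["profile:abc"], "score:xyz", "ModelB")
log.record("decide_loan", ["score:xyz"], "decision:42", "OrgC")

# Expose the full chain of steps behind a single decision.
pipeline = log.trace("decision:42")
```

    Tracing `decision:42` here recovers all three steps, including the agent responsible for each, which is the kind of cross-boundary visibility the abstract argues current opaque systems-of-systems lack.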

    From transparency to accountability of intelligent systems: Moving beyond aspirations

    A number of governmental and nongovernmental organizations have made significant efforts to encourage the development of artificial intelligence in line with a series of aspirational concepts such as transparency, interpretability, explainability, and accountability. The difficulty at present, however, is that these concepts exist at a fairly abstract level; for them to have the tangible effects desired, they need to become more concrete and specific. This article undertakes precisely this process of concretisation, mapping how the different concepts interrelate and what each in particular requires in order to move from a high-level aspiration to a detailed and enforceable requirement. We argue that the key concept in this process is accountability, since unless an entity can be held accountable for compliance with the other concepts, and indeed more generally, those concepts cannot do the work required of them. There is a variety of taxonomies of accountability in the literature; at the core of each account, however, appears to be a sense of “answerability”: a need to explain or to give an account. It is this ability to call an entity to account that provides the impetus for each of the other concepts and helps us to understand what each must require.

    Legal Singularity and the Reflexivity of Law
