
    Applying Cost-Benefit to Past Decisions: Was Environmental Protection Ever a Good Idea?

    In this Article, however, we do not mount a critique from outside the technique of cost-benefit analysis. Instead, we examine an argument that proponents of cost-benefit analysis have offered as a linchpin of the case for cost-benefit: that this technique is neither anti- nor pro-regulatory, but rather a neutral tool for evaluating public policy. In making this argument, these proponents have often invoked the use of cost-benefit analysis to support previous regulatory decisions (their favorite example involves the phase down of lead in gasoline, which we shall shortly discuss) as a sign that this technique can be used to support as well as to undermine protective regulation. As we demonstrate, however, cost-benefit analysis would have stood as an obstacle to early regulatory successes. Before turning to the various case studies illustrating this point, we first take a brief look at previous efforts to undertake retrospective cost-benefit analyses of important regulatory achievements.

    Artificial intelligence and UK national security: Policy considerations

    RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security. The research has found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance are needed to ensure the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.

    Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems

    Automated decision making is becoming the norm across large parts of society, which raises interesting liability challenges when human control over technical systems becomes increasingly limited. This article defines "quasi-automation" as the inclusion of humans as a basic rubber-stamping mechanism in an otherwise completely automated decision-making system. Three cases of quasi-automation are examined, where human agency in decision making is currently debatable: self-driving cars, border searches based on passenger name records, and content moderation on social media. While there are specific regulatory mechanisms for purely automated decision making, these mechanisms do not apply if human beings are merely rubber-stamping automated decisions. More broadly, most regulatory mechanisms follow a pattern of binary liability, attempting to regulate either human or machine agency rather than both. This results in regulatory gray areas where the regulatory mechanisms do not apply, harming human rights by preventing meaningful liability for socio-technical decision making. The article concludes by proposing criteria to ensure meaningful agency when humans are included in automated decision-making systems, and relates this to the ongoing debate on enabling human rights in Internet infrastructure.

    Islands of Effective International Adjudication: Constructing an Intellectual Property Rule of Law in the Andean Community

    The Andean Community - a forty-year-old regional integration pact of small developing countries in South America - is widely viewed as a failure. In this Article, we show that the Andean Community has in fact achieved remarkable success within one part of its legal system. The Andean Tribunal of Justice (ATJ) is the world's third most active international court, with over 1400 rulings issued to date. Over 90% of those rulings concern intellectual property (IP). The ATJ has helped to establish IP as a rule of law island in the Andean Community where national judges, administrative officials, and private parties actively participate in regional litigation and conform their behavior to Andean IP rules. In the vast seas surrounding this island, by contrast, Andean rules remain riddled with exceptions, under-enforced, and often circumvented by domestic actors. We explain how the ATJ helped to construct the IP rule of law island and why litigation has not spilled over to other issue areas regulated by the Andean Community. Our analysis makes four broad contributions to international law and international relations scholarship. First, we adopt and apply a broad definition of an effective rule of law, using qualitative and quantitative analysis to explain how the Andean legal system contributes to changing national decision-making in favor of compliance with Andean rules. Our definition and our explanation of the ATJ's contributions to constructing an effective rule of law provide a model that can be replicated elsewhere. Second, we explain how the Andean legal system has helped domestic IP administrative agencies in the region resist pressures for stronger IP protection from national executives, the United States, and American corporations. We emphasize the importance of these agencies rather than domestic judges as key constituencies that have facilitated the emergence of an effective rule of law for IP. As a result of the agencies' actions, Andean IP rules remain more closely tailored to the economic and social needs of developing countries than do the IP rules of the Community's regional neighbors. Third, the reality that the ATJ is effective, but only within a single issue area, makes the Andean experience of broader theoretical interest. We offer an explanation for why Andean legal integration has not extended beyond IP. But our answer suggests avenues for additional research. We note that Andean IP rules are more specific than other areas of Andean law and that most administrative agencies in the region lack the autonomy needed to serve as compliance partners for ATJ rulings. We also find that, outside of IP, the ATJ is unwilling to issue the sort of purposive interpretations that encourage private parties to invoke Andean rules in litigation. The result is a lack of both demand for and supply of ATJ rulings. Fourth, our study of the Andean legal system provides new evidence to assess three competing theories of effective international adjudication - theories that ascribe effectiveness to the design of international legal systems, to the ability of member states to sanction international judges, and to domestic legal and political factors. We also explore the possibility that rule of law islands may be emerging in other treaty-based systems subject to the jurisdiction of international tribunals.

    Building the case for actionable ethics in digital health research supported by artificial intelligence

    The digital revolution is disrupting the ways in which health research is conducted, and subsequently, changing healthcare. Direct-to-consumer wellness products and mobile apps, pervasive sensor technologies and access to social network data offer exciting opportunities for researchers to passively observe and/or track patients ‘in the wild’ and 24/7. The volume of granular personal health data gathered using these technologies is unprecedented, and is increasingly leveraged to inform personalized health promotion and disease treatment interventions. The use of artificial intelligence in the health sector is also increasing. Although rich with potential, the digital health ecosystem presents new ethical challenges for those making decisions about the selection, testing, implementation and evaluation of technologies for use in healthcare. As the ‘Wild West’ of digital health research unfolds, it is important to recognize who is involved, and identify how each party can and should take responsibility to advance the ethical practices of this work. While not a comprehensive review, we describe the landscape, identify gaps to be addressed, and offer recommendations as to how stakeholders can and should take responsibility to advance socially responsible digital health research.

    Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR

    There has been much discussion of the right to explanation in the EU General Data Protection Regulation, and of its existence, merits, and disadvantages. Implementing a right to explanation that opens the black box of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-making systems and their rationale in specific cases is a technically challenging problem. Some explanations may offer little meaningful information to data subjects, raising questions around their value. Explanations of automated decisions need not hinge on the general public understanding how algorithmic systems function. Even though such interpretability is of great importance and should be pursued, explanations can, in principle, be offered without opening the black box. Looking at explanations as a means to help a data subject act rather than merely understand, one could gauge the scope and content of explanations according to the specific goal or action they are intended to support. From the perspective of individuals affected by automated decision-making, we propose three aims for explanations: (1) to inform and help the individual understand why a particular decision was reached, (2) to provide grounds to contest the decision if the outcome is undesired, and (3) to understand what would need to change in order to receive a desired result in the future, based on the current decision-making model. We assess how each of these goals finds support in the GDPR. We suggest data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these three aims. These counterfactual explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
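    The counterfactual idea described in this abstract — the smallest change to an individual's inputs that flips the model's decision — can be illustrated with a toy linear classifier. This is a minimal sketch, not the paper's method: for a linear model the minimal L2 perturbation has a closed form (projection onto the decision boundary), and the "loan" model and its weights below are invented for illustration.

    ```python
    import numpy as np

    def counterfactual_linear(x, w, b, margin=1e-3):
        """Smallest L2 change to x that flips the sign of the linear
        score w.x + b. The minimal perturbation projects x onto the
        decision boundary, then nudges it just past the boundary."""
        score = w @ x + b
        step = (score / (w @ w)) * w              # projection onto the boundary
        nudge = (margin / np.linalg.norm(w)) * w  # cross the boundary slightly
        return x - step - np.sign(score) * nudge

    # hypothetical "loan approval" rule: approve if 2*income - 1*debt - 4 > 0
    w = np.array([2.0, -1.0])
    b = -4.0
    applicant = np.array([1.0, 1.0])   # score = -3, i.e. denied
    cf = counterfactual_linear(applicant, w, b)
    # cf is the nearest applicant profile that would have been approved;
    # the difference (cf - applicant) is the counterfactual explanation.
    print(cf, w @ cf + b)
    ```

    The returned point answers aim (3) of the abstract directly: it tells the individual what minimal change to their features would yield the desired outcome, without exposing anything about the model beyond that one contrastive fact.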