
    "World Community Interest" Approach to Interim Measures on "Robot Weapons": Revisiting the nuclear test cases

    Forty-four years after the ICJ provisional measures orders in the Nuclear Test Cases requesting that France halt atmospheric nuclear weapons testing in the South Pacific, we face another deadly threat from the use and development of "robot weapons". In the 1970s nuclear weapons were seen as the new frontier in state defensive capability, just as "robot weapons" are today. The ICJ set a precedent for using provisional measures to prevent harm caused by new weapons technology. The orders under Article 41 of the ICJ Statute requested France to avoid nuclear tests causing the deposit of radioactive fall-out on Australian and New Zealand territories. The Nuclear Test Cases are instructive on how the Court may deal with new weapons technology, providing clarification on an urgent basis while a case is pending. Although provisional measures were ordered only in relation to atmospheric tests, the ICJ remains the only international judicial body to have ordered the cessation of nuclear tests pending a case. In contrast, individual complaints before the HRC and the ECtHR were refused provisional measures to prevent nuclear tests.

    Strengthening European Union Democratic Accountability Through National and Treaty-Based Pre-Legislative Controls

    This article considers whether greater accountability for EU supranational decision-making can be achieved through a combination of member states’ legislative processes and EU treaty-based mechanisms. The EU is formed by member states’ national consent through treaty ratification and a system of domestic pre-legislative controls on consent (parliamentary approval, public consultation, and referendum) which operates to limit the nature and extent of EU law. Using the UK as an example for comparison with other member states, the article contends that such domestic controls are prerequisites to national incorporation of EU law and strengthen democratic accountability. Consent alone, however, does not provide an adequate basis for accountability of supranational decisions; EU constitutional principles of citizenship, democracy, and political rights illustrate how the EU fulfills a role as protector of rights. The article further argues that the EU’s protector role provides only partial legitimacy and accountability for supranational decisions. Greater legitimacy and accountability derive from national parliaments’ pre-legislative controls under EU law: scrutinizing legislation, monitoring subsidiarity, and exercising veto powers. The article concludes that, if exercised properly, these controls represent powerful accountability mechanisms.

    Kantian Ethics in the Age of Artificial Intelligence and Robotics

    Artificial intelligence and robotics are pervasive in daily life and set to expand to new levels, potentially replacing human decision-making and action. Self-driving cars, home and healthcare robots, and autonomous weapons are some examples. A distinction appears to be emerging between potentially benevolent civilian uses of the technology (e.g. unmanned aerial vehicles delivering medicines) and potentially malevolent military uses (e.g. lethal autonomous weapons killing human combatants). Machine-mediated human interaction challenges the philosophical basis of human existence and ethical conduct. Aside from technical challenges of ensuring ethical conduct in artificial intelligence and robotics, there are moral questions about the desirability of replacing human functions and the human mind with such technology. How will artificial intelligence and robotics engage in moral reasoning in order to act ethically? Is there a need for a new set of moral rules? What happens to human interaction when it is mediated by technology? Should such technology be used to end human life? Who bears responsibility for wrongdoing or harmful conduct by artificial intelligence and robotics? Whilst Kant may be familiar to international lawyers for setting restraints on the use of force and rules for perpetual peace, his foundational work on ethics provides an inclusive moral philosophy for assessing ethical conduct of individuals and states and is thus relevant to discussions on the use and development of artificial intelligence and robotics. His philosophy is inclusive because it incorporates justifications for morals and legitimate responses to immoral conduct, and applies to all human agents irrespective of whether they are wrongdoers, unlawful combatants, or unjust enemies.
Humans are at the centre of rational thinking, action, and norm-creation, so that the rationale for restraints on methods and means of warfare, for example, is based on preserving human dignity as well as ensuring conditions for perpetual peace among states. Unlike utilitarian arguments, which favour the use of autonomous weapons on the basis of cost-benefit reasoning or the potential to save lives, Kantian ethics establish non-consequentialist, deontological rules which are good in themselves to follow and not dependent on expediency or achieving a greater public good. Kantian ethics make two distinct contributions to the debate. First, they provide a human-centric ethical framework whereby human existence and capacity are at the centre of a norm-creating moral philosophy guiding our understanding of moral conduct. Second, the ultimate aim of Kantian ethics is practical philosophy that is relevant and applicable to achieving moral conduct. I will seek to address the moral questions outlined above by exploring how core elements of Kantian ethics relate to the use of artificial intelligence and robotics in the civilian and military spheres. Section 2 sets out and examines core elements of Kantian ethics: the categorical imperative; autonomy of the will; rational beings and rational thinking capacity; and human dignity and humanity as an end in itself. Sections 3-7 consider how these core elements apply to artificial intelligence and robotics, with discussion of fully autonomous and human-machine rule-generating approaches; types of moral reasoning; the difference between ‘human will’ and ‘machine will’; and respecting human dignity.

    The ethical implications of developing and using artificial intelligence and robotics in the civilian and military spheres

    Machine-mediated human interaction challenges the philosophical basis of human existence and ethical conduct. Aside from technical challenges of ensuring ethical conduct in artificial intelligence and robotics, there are moral questions about the desirability of replacing human functions and the human mind with such technology. How will artificial intelligence and robotics engage in moral reasoning in order to act ethically? Is there a need for a new set of moral rules? What happens to human interaction when it is mediated by technology? Should such technology be used to end human life? Who bears responsibility for wrongdoing or harmful conduct by artificial intelligence and robotics? This paper seeks to address some ethical issues surrounding the development and use of artificial intelligence and robotics in the civilian and military spheres. It explores the implications of fully autonomous and human-machine rule-generating approaches, the difference between “human will” and “machine will”, and the difference between machine logic and human judgment.

    The Cosmopolitan “No-Harm” Duty in Warfare: Exposing the Utilitarian Pretence of Universalism

    This article demonstrates that a priori cosmopolitan values of restraint and harm limitation exist to establish a cosmopolitan “no-harm” duty in warfare, predating utilitarianism and permeating modern international humanitarian law. In doing so, the author exposes the atemporal and ahistorical nature of utilitarianism, which introduces chaos and brutality into the international legal system. Part 2 conceptualises the duty as derived from the “no-harm” principle under international environmental law. Part 3 frames the discussion within legal pluralism and cosmopolitan ethics, arguing that divergent legal jurisdictions without an international authority necessitate a “public international sphere” to mediate differences, leading to the creation of norms grounded in strong value commitments. One such norm is the “no-harm” duty in warfare. Part 4 traces the duty to the Stoics, Christianity, Islam, Judaism, African traditional culture, Hinduism, and Confucianism. Parts 5 and 6 explain how the duty manifests in the principles of distinction and proportionality under international humanitarian law.

    Technological innovations and the changing character of warfare: the significance of the 1949 Geneva Conventions seventy years on

    Seventy years after the adoption of the four Geneva Conventions on 12 August 1949, the changing character of warfare is being influenced by, among other things, technological innovations such as artificial intelligence and robotics. States are integrating new technologies into the military sphere for both defensive and offensive capabilities, which affects military doctrines, weaponry, and operational strategies. Under the auspices of the 1980 UN Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems is currently deliberating on the legal and ethical issues regarding autonomous weapons, and on whether new legally binding or non-legally binding rules should be established regarding their use, restriction, or prohibition. In this context, it is worth reviewing the role and significance of Geneva law provisions in relation to technological innovations in methods and means of warfare.