    Think Tank Review Issue 62 December 2018

    The Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research

    Artificial intelligence (AI) represents a technological upheaval with the potential to change human society. Because of its transformative potential, AI is increasingly becoming subject to regulatory initiatives at the global level. Yet, so far, scholarship in political science and international relations has focused more on AI applications than on the emerging architecture of global AI regulation. The purpose of this article is to outline an agenda for research into the global governance of AI. The article distinguishes between two broad perspectives: an empirical approach, aimed at mapping and explaining global AI governance; and a normative approach, aimed at developing and applying standards for appropriate global AI governance. The two approaches offer questions, concepts, and theories that are helpful in gaining an understanding of the emerging global governance of AI. Conversely, exploring AI as a regulatory issue offers a critical opportunity to refine existing general approaches to the study of global governance.

    The Use of Artificial Intelligence in Armed Conflict under International Law

    Artificial Intelligence (AI) is a technological achievement that simulates human intelligence through machines or computer programs. The integration of AI into military operations aims to minimize combatant casualties and enhance effectiveness in warfare. Despite these advantages, concerns arise regarding the proper implementation of AI in armed conflict because of potential security challenges. A significant issue lies in the legal framework governing AI as a comprehensive defense tool. This paper employs a normative juridical research method based on a statutory approach to provide a descriptive analysis and examine the regulatory framework surrounding AI in armed conflict. The results indicate that the absence of comprehensive regulations complicates the accountability framework, making the determination of liability intricate, particularly when AI malfunctions due to substandard quality or improper use. In such cases, accountability may extend to both the creator and the user. The concept of liability for violations in armed conflict is explored under international law, highlighting the implications and responsibilities associated with using AI within legal principles. This paper concludes that AI regulation must be crafted to ensure that usage aligns with established procedures within the framework of international law.

    The weaponization of artificial intelligence (AI) and its implications on the security dilemma between states: could it create a situation similar to mutually assured destruction (MAD)?

    There is no consensus in the IR literature on the possible implications of AI for cyber or nuclear capabilities, or on whether AI would exacerbate, or potentially mitigate, the security dilemma between actors with varying capabilities. This capstone project explores these questions using expert interviews and secondary data. It tackles the issue under study through the most-similar method, in which most of the variables are held similar. The paper argues that the weaponization of AI exacerbates the security dilemma between states because it increases uncertainty. What is particularly problematic about military AI applications, as opposed to other military capabilities, is the declining role of humans. AI can be both productive and counterproductive for policy making, implying the necessity of keeping humans over the loop. Neutralization makes AI deterrence a reasonable strategy for avoiding destructive, disruptive, and manipulative outcomes. As with nuclear capabilities, establishing an AI-MAD structure, regulating the uses of AI, and creating a governing regime for the AI arms race are the best available policies.
    Keywords: Artificial Intelligence, Deterrence, Mutually Assured Destruction, Arms Control

    Full Issue: Spring 2017

    Differential technology development: A responsible innovation principle for navigating technology risks

    Responsible innovation efforts to date have largely focused on shaping individual technologies. However, as demonstrated by the preferential advancement of low-emission technologies, certain technologies reduce risks from other technologies or constitute low-risk substitutes. Governments and other relevant actors may leverage risk-reducing interactions across technology portfolios to mitigate risks beyond climate change. We propose a responsible innovation principle of “differential technology development”, which calls for leveraging risk-reducing interactions between technologies by affecting their relative timing. Thus, it may be beneficial to delay risk-increasing technologies and preferentially advance risk-reducing defensive, safety, or substitute technologies. Implementing differential technology development requires the ability to anticipate or identify impacts and intervene in the relative timing of technologies. We find that both are sometimes viable and that differential technology development may still be usefully applied even late in the diffusion of a harmful technology. A principle of differential technology development may inform government research funding priorities and technology regulation, as well as philanthropic research and development funders and corporate social responsibility measures. Differential technology development may be particularly promising to mitigate potential catastrophic risks from emerging technologies like synthetic biology and artificial intelligence.

    War Algorithms in Modern Deliberative Democracies: Parliamentary Technology Assessment as a Public Conscience Discovery Tool?

    This paper focuses on the intersection of public international law and parliamentary assessment of technologies in the context of discussions on the lethal applications of artificial intelligence. The authors discuss the ‘public conscience requirements’ of the Martens clause as an opportunity to increase the legitimacy of international law by including qualified public opinion in the international law-making process. This is particularly important in the case of controversial technologies such as lethal autonomous weapons systems, which have a fundamental impact on warfare and whose application comes with unprecedented benefits as well as risks for humankind. The authors advocate the actual use of the Parliamentary Technology Assessment (PTA) mechanism as a method based on democratic deliberation and participation, which – especially in times of disinformation and fake news – can provide a reliable source of information and insights for both policy makers and the general public. PTA can also be seen as an institutionalised channel allowing civil society to exercise oversight over disruptive military technologies.

    Towards European Anticipatory Governance for Artificial Intelligence

    This report presents the findings of the Interdisciplinary Research Group “Responsibility: Machine Learning and Artificial Intelligence” of the Berlin-Brandenburg Academy of Sciences and Humanities and the Technology and Global Affairs research area of DGAP. In September 2019, they brought leading experts from research and academia together with policy makers and representatives of standardization authorities and technology organizations to set framework conditions for a European anticipatory governance regime for artificial intelligence (AI).