
    The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

    This report surveys the landscape of potential security threats from malicious uses of AI, and proposes ways to better forecast, prevent, and mitigate these threats. After analyzing the ways in which AI may influence the threat landscape in the digital, physical, and political domains, we make four high-level recommendations for AI researchers and other stakeholders. We also suggest several promising areas for further research that could expand the portfolio of defenses, or make attacks less effective or harder to execute. Finally, we discuss, but do not conclusively resolve, the long-term equilibrium of attackers and defenders.
    Future of Humanity Institute, University of Oxford; Centre for the Study of Existential Risk, University of Cambridge; Center for a New American Security; Electronic Frontier Foundation; OpenAI. The Future of Life Institute is acknowledged as a funder.

    Can apparent bystanders distinctively shape an outcome? Global south countries and global catastrophic risk-focused governance of artificial intelligence

    Increasingly, there is well-grounded concern that through the perpetual scaling-up of computation power and data, current deep learning techniques will create highly capable artificial intelligence that could pursue goals in a manner that is not aligned with human values. In turn, such AI could have the potential of leading to a scenario in which there is serious global-scale damage to human wellbeing. Against this backdrop, a number of researchers and public policy professionals have been developing ideas about how to govern AI in a manner that reduces the chances that it could lead to a global catastrophe. The jurisdictional focus of the vast majority of their assessments so far has been the United States, China, and Europe. That preference seems to reveal an assumption underlying most of the work in this field: that global south countries can play only a marginal role in attempts to govern AI development from a global catastrophic risk-focused perspective. Our paper sets out to undermine this assumption. We argue that global south countries like India and Singapore (and specific coalitions) could in fact be fairly consequential in the global catastrophic risk-focused governance of AI. We support our position with four key claims. Three are constructed out of the current ways in which advanced foundational AI models are built and used, while the fourth is constructed on the strategic roles that global south countries and coalitions have historically played in the design and use of multilateral rules and institutions. As each claim is elaborated, we also suggest ways in which global south countries can play a positive role in designing, strengthening, and operationalizing global catastrophic risk-focused AI governance.

    Catastrophic Risk from Rapid Developments in Artificial Intelligence: what is yet to be addressed and how might New Zealand policymakers respond?

    This article describes important possible scenarios in which rapid advances in artificial intelligence (AI) pose multiple risks, including risks to democracy and of inter-state conflict. In parallel with other countries, New Zealand needs policies to monitor, anticipate, and mitigate global catastrophic and existential risks from advanced new technologies. A dedicated policy capacity could translate emerging research and policy options into the New Zealand context. It could also identify how New Zealand could best contribute to global solutions. It is desirable that the potential benefits of AI are realised while the risks are mitigated to the greatest extent possible.

    International Governance of Civilian AI: A Jurisdictional Certification Approach

    This report describes trade-offs in the design of international governance arrangements for civilian artificial intelligence (AI) and presents one approach in detail. This approach represents the extension of a standards, licensing, and liability regime to the global level. We propose that states establish an International AI Organization (IAIO) to certify state jurisdictions (not firms or AI projects) for compliance with international oversight standards. States can give force to these international standards by adopting regulations prohibiting the import of goods whose supply chains embody AI from non-IAIO-certified jurisdictions. This borrows attributes from models of existing international organizations, such as the International Civil Aviation Organization (ICAO), the International Maritime Organization (IMO), and the Financial Action Task Force (FATF). States can also adopt multilateral controls on the export of AI product inputs, such as specialized hardware, to non-certified jurisdictions. Indeed, both the import and export standards could be required for certification. As international actors reach consensus on the risks of and minimum standards for advanced AI, a jurisdictional certification regime could mitigate a broad range of potential harms, including threats to public safety.

    The Landscape of Artificial Intelligence Ethics: Analysis of Developments, Challenges, and Comparison of Different Markets

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management.
    Artificial Intelligence has become a disruptive force in the everyday lives of billions of people worldwide, and its impact will only increase in the future. Whether it is an algorithm that knows precisely what we want before we are consciously aware of it, or a fully automated, weaponized drone that decides in a fraction of a second whether to carry out a lethal strike, these algorithms are here to stay. Even if the world could come together and ban, for example, algorithm-based weapon systems, many systems would remain that unintentionally harm individuals and whole societies. We must therefore approach AI with ethical considerations in order to mitigate the harm and bias of human design, especially with regard to the data on which these systems are built. Even an algorithm for a simple automated task, such as visual classification, can produce discriminatory results with long-term consequences. This thesis explores the developments and challenges of Artificial Intelligence Ethics in different markets based on specific factors, aims to answer scientific questions, and seeks to raise new ones for future research. The main objectives of this research are to identify measurements and approaches for mitigating the risks that lead to such harmful algorithmic decisions, and to identify global differences in this field.

    Singularity and Coordination Problems: Pandemic Lessons from 2020

    One of the strands of the Transhumanist movement, Singularitarianism, studies the possibility that high-level artificial intelligence may be created in the future, debating ways to ensure that the interaction between human society and advanced artificial intelligence can occur safely and beneficially. But how can we guarantee this safe interaction? Are there any indications that a Singularity may be on the horizon? In trying to answer these questions, we offer a brief introduction to the area of safety research in artificial intelligence. We review some of the current paradigms in the development of autonomous intelligent systems and the evidence that can be used to anticipate the coming of a possible technological Singularity. Finally, we present a reflection on the COVID-19 pandemic, which showed that our biggest problem in managing existential risks is our lack of coordination as a global society.

    Responsible Governance of Artificial Intelligence: An Assessment, Theoretical Framework, and Exploration

    While artificial intelligence (AI) has seen enormous technical progress in recent years, less progress has occurred in understanding the governance issues raised by AI. In this dissertation, I make four contributions to the study and practice of AI governance. First, I connect AI to the literature and practices of responsible research and innovation (RRI) and explore their applicability to AI governance. I focus in particular on AI’s status as a general purpose technology (GPT), and suggest some of the distinctive challenges this poses for RRI, such as the critical importance of publication norms in AI and the need for coordination. Second, I provide an assessment of existing AI governance efforts from an RRI perspective, synthesizing for the first time a wide range of literatures on AI governance and highlighting several limitations of extant efforts. This assessment helps identify areas for methodological exploration. Third, I explore, through several short case studies, the value of three different RRI-inspired methods for making AI governance more anticipatory and reflexive: expert elicitation, scenario planning, and formal modeling. In each case, I explain why these particular methods were deployed, what they produced, and what lessons can be learned for improving the governance of AI in the future. I find that RRI-inspired methods have substantial potential in the context of AI, and early utility for the GPT-oriented perspective on what RRI in AI entails. Finally, I describe several areas for future work that would put RRI in AI on a sounder footing.
    Doctoral Dissertation, Human and Social Dimensions of Science and Technology, 201