
    Friendly Superintelligent AI: All You Need is Love

    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure, long before one arrives, that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge, in part because most of the final goals we could give an AI admit of so-called "perverse instantiations". I propose a novel solution to this puzzle: instruct the AI to love humanity. The proposal is compared with Yudkowsky's Coherent Extrapolated Volition and Bostrom's Moral Modeling proposals.

    Why AI Doomsayers are Like Sceptical Theists and Why it Matters

    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom's Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting are its potential implications. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could amount to either a reductio of the doomsayers' position, or an important and additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.

    Current and Near-Term AI as a Potential Existential Risk Factor

    Full text link
    There is a substantial and ever-growing corpus of evidence and literature exploring the impacts of artificial intelligence (AI) technologies on society, politics, and humanity as a whole. A separate, parallel body of work has explored existential risks to humanity, including but not limited to those stemming from unaligned Artificial General Intelligence (AGI). In this paper, we problematise the notion that current and near-term artificial intelligence technologies have the potential to contribute to existential risk by acting as intermediate risk factors, and that this potential is not limited to the unaligned AGI scenario. We propose the hypothesis that certain already-documented effects of AI can act as existential risk factors, magnifying the likelihood of previously identified sources of existential risk. Moreover, future developments in the coming decade hold the potential to significantly exacerbate these risk factors, even in the absence of artificial general intelligence. Our main contribution is a (non-exhaustive) exposition of potential AI risk factors and the causal relationships between them, focusing on how AI can affect power dynamics and information security. This exposition demonstrates that there exist causal pathways from AI systems to existential risks that do not presuppose hypothetical future AI capabilities.

    Modern Artificial Intelligence: Philosophical Context and Future Consequences

    This paper analyzes the state of modern artificial intelligence research from a philosophical perspective, in order to argue for interdisciplinary cooperation in building an effective AI research paradigm. I assess four different aspects of AI: its ability to perceive the world through pattern recognition, its ability to act in the world through reinforcement learning, its role in technological society as predicted by Heidegger's theory of technology, and its future development into potential superintelligence. By connecting these to an unexamined theory of the human mind underpinning AI research, I seek to show the relationship between our understanding of natural minds and of artificial intelligence, and how each may inform the other. Thus, this paper suggests that an improved understanding of natural minds may inspire AI researchers in their innovations, and that the discoveries made in AI research may in turn weigh for or against the hypotheses of philosophers.

    Keynote: Can We Coexist with Superintelligent Machines?

    Within the next few decades, machine intelligence will match and then surpass human intelligence. Can we share the planet with smarter-than-human machines and survive?

    Godmanhood vs Mangodhood: An Eastern Orthodox Response to Transhumanism

    This article distances the classic Patristic teaching of Eastern Orthodoxy on theosis from the pseudo-religious ideology of transhumanism. By appealing to the Silver Age of Russian theologians a century ago, today's transhumanist vision is dubbed Mangodhood, an idolatrous construction of a technological Tower of Babel. In contrast, the classical Orthodox teaching of deification, or theosis, relies on the spiritual grace of the true God, rendering the true goal of religion to be Godmanhood.