
    Robustness to fundamental uncertainty in AGI alignment

    The AGI alignment problem has a bimodal distribution of outcomes, with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else being equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). We therefore propose responding to points of metaphysical and practical uncertainty associated with the alignment problem by limiting and choosing necessary assumptions so as to reduce the risk of false positives. Herein we explore in detail some of the relevant points of uncertainty on which AGI alignment research hinges and consider how to reduce false positives in response to them.

    A Computational Model and Convergence Theorem for Rumor Dissemination in Social Networks

    The spread of rumors, known as unverified statements of uncertain origin, may cause a tremendous number of social problems. If it were possible to identify the factors that affect the spread of a rumor (such as agents' desires, the trust network, etc.), this could be used to slow down or stop its spread. A computational model that captures rumor features and the way a rumor spreads among a society's members, based on their desires, is therefore needed. Our research centers on the relation between the homogeneity of a society and rumor convergence within it; our results show that the homogeneity of the society is a necessary condition for convergence of the spreading rumor.
    Comment: 29 pages, 7 figures
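    The abstract does not reproduce the paper's model, but the claimed link between homogeneity and convergence can be illustrated with a simple threshold-adoption toy (a Granovetter-style sketch; the thresholds, update rule, and fully-connected topology below are illustrative assumptions, not the paper's actual model):

```python
def spread(thresholds, steps=50):
    """Toy rumor dynamic: agent i adopts the rumor once the overall
    adoption fraction reaches its personal threshold. Fully-connected
    topology, one initial adopter. Returns adoption fraction per step."""
    n = len(thresholds)
    adopted = [False] * n
    adopted[0] = True  # seed the rumor with a single agent
    history = []
    for _ in range(steps):
        frac = sum(adopted) / n
        adopted = [a or frac >= t for a, t in zip(adopted, thresholds)]
        history.append(sum(adopted) / n)
    return history

# Homogeneous society: identical thresholds -> the cascade completes and
# every agent converges to the same belief.
h1 = spread([0.01] * 100)
# Heterogeneous society: a gap in thresholds stalls the cascade, leaving
# the population permanently split.
h2 = spread([0.01] * 10 + [0.5] * 90)

print(h1[-1], h2[-1])
```

    In this toy, the homogeneous population ends fully converged while the heterogeneous one freezes at a partial spread, which is consistent with (though far weaker than) the paper's necessity claim.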

    Ethics of Artificial Intelligence Demarcations

    In this paper we present a set of key demarcations that are particularly important when discussing ethical and societal issues of current AI research and applications. Properly distinguishing issues and concerns related to Artificial General Intelligence versus weak AI, to symbolic versus connectionist AI, and to AI methods, data, and applications is a prerequisite for an informed debate. Such demarcations would not only facilitate much-needed discussions on the ethics of current AI technologies and research; sufficiently establishing them would also enhance knowledge-sharing and support rigor in interdisciplinary research between the technical and social sciences.
    Comment: Proceedings of the Norwegian AI Symposium 2019 (NAIS 2019), Trondheim, Norway

    The Archimedean trap: Why traditional reinforcement learning will probably not yield AGI

    After generalizing the Archimedean property of the real numbers in such a way as to make it applicable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways traditional reinforcement learning could be altered to remove this roadblock.
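    The paper's construction is more general, but a standard illustration of a non-Archimedean preference that no single real-valued reward can encode is lexicographic ordering: the secondary objective matters only as a tiebreaker, yet any fixed real-valued weighting lets a large enough secondary payoff overwhelm the primary one. The outcomes, weight, and helper names below are illustrative, not taken from the paper:

```python
def lex_better(x, y):
    """Lexicographic (non-Archimedean) preference over outcome pairs:
    the first objective strictly dominates the second. Python's tuple
    comparison is itself lexicographic, so this is just x > y."""
    return x > y

def scalar_reward(outcome, w):
    """A real-valued reward that tries to encode the same preference by
    weighting the secondary objective with a small fixed w > 0."""
    a, b = outcome
    return a + w * b

# For ANY fixed w > 0, a large enough secondary payoff flips the ranking:
# the scalar reward prefers y even though x dominates it on the primary
# objective, so no single real-valued reward represents lex_better.
w = 0.001
x = (1, 0)
y = (0, 2000)
print(lex_better(x, y))                           # x is lexicographically preferred
print(scalar_reward(x, w) < scalar_reward(y, w))  # but the scalar reward ranks y higher
```

    Shrinking w only postpones the failure: the counterexample pair can always be rescaled, which is the Archimedean property of the reals at work.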

    Can the g Factor Play a Role in Artificial General Intelligence Research?

    In recent years, a trend in AI research has emerged that pursues human-level, general artificial intelligence (AGI). Although the AGI framework is characterised by different viewpoints on what intelligence is and how to implement it in artificial systems, it conceptualises intelligence as flexible, general-purpose, and capable of self-adapting to different contexts and tasks. Two important questions remain open: a) should AGI projects simulate the biological, neural, and cognitive mechanisms that realise human intelligent behaviour? and b) what is the relationship, if any, between the concept of general intelligence adopted by AGI and that adopted by psychometricians, i.e., the g factor? In this paper, we address these questions and invite researchers in AI to open a discussion on the theoretical conceptions and practical purposes of the AGI approach.