    Human ≠ AGI

    The terms Artificial General Intelligence (AGI) and Human-Level Artificial Intelligence (HLAI) have been used interchangeably to refer to the Holy Grail of Artificial Intelligence (AI) research: the creation of a machine capable of achieving goals in a wide range of environments. However, the widespread implicit assumption that the capabilities of AGI and HLAI are equivalent appears to be unjustified, as humans are not general intelligences. In this paper, we prove this distinction.

    What are the ultimate limits to computational techniques: Verifier theory and unverifiability

    Despite significant developments in proof theory, surprisingly little attention has been devoted to the concept of proof verifiers. In particular, the mathematical community may be interested in studying different types of proof verifiers (people, programs, oracles, communities, superintelligences) as mathematical objects. Such an effort could reveal their properties, their powers and limitations (particularly in human mathematicians), minimum and maximum complexity, as well as self-verification and self-reference issues. We propose an initial classification system for verifiers and provide some rudimentary analysis of solved and open problems in this important domain. Our main contribution is a formal introduction of the notion of unverifiability, for which the paper could serve as a general citation in domains of theorem proving, as well as software and AI verification.
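
    As a rough illustration of treating a verifier as a mathematical object, the sketch below models a verifier as a total function from (statement, purported proof) pairs to accept/reject decisions. This is a minimal assumption of my own, not the paper's classification system, and the toy arithmetic verifier is hypothetical.

    from typing import Callable

    # Minimal sketch (an assumption, not the paper's formalism): a verifier is a
    # total function from (statement, purported proof) to an accept/reject decision.
    Verifier = Callable[[str, str], bool]

    def arithmetic_verifier(statement: str, proof: str) -> bool:
        """Toy verifier: accepts claims of the form 'a+b=c' whose proof is the literal sum."""
        lhs, rhs = statement.split("=")
        a, b = (int(x) for x in lhs.split("+"))
        return proof.strip() == rhs.strip() == str(a + b)

    print(arithmetic_verifier("2+3=5", "5"))  # True: the claim checks out
    print(arithmetic_verifier("2+3=6", "6"))  # False: the claimed sum is wrong

    The different verifier types discussed in the abstract (people, programs, oracles, communities, superintelligences) would then correspond to different classes of such functions, distinguished by what they can decide and at what cost.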

    On the origin of synthetic life: Attribution of output to a particular algorithm

    With unprecedented advances in genetic engineering we are starting to see progressively more original examples of synthetic life. As such organisms become more common it is desirable to gain an ability to distinguish between natural and artificial life forms. In this paper, we address this challenge as a generalized version of Darwin's original problem, which he so brilliantly described in On the Origin of Species. After formalizing the problem of determining the samples' origin, we demonstrate that the problem is in fact unsolvable. In the general case, if the computational resources of the considered originator algorithms are not limited and the priors for such algorithms are known to be equal, both explanations are equally likely. Our results should attract the attention of astrobiologists and scientists interested in developing a more complete theory of life, as well as of AI-Safety researchers.
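
    The equal-likelihood claim above follows directly from Bayes' rule: if two candidate originator algorithms are assigned equal priors and, being computationally unrestricted, can each produce the observed sample equally well, their posteriors are equal. The snippet below is a hypothetical toy model of that arithmetic, not the paper's formalism.

    # Toy Bayesian attribution (an assumption, not the paper's model): which of two
    # originators, "natural" or "synthetic", produced the observed sample?
    def posterior_natural(prior_nat, prior_syn, like_nat, like_syn):
        """Posterior probability that the 'natural' originator produced the sample."""
        evidence = prior_nat * like_nat + prior_syn * like_syn
        return prior_nat * like_nat / evidence

    # Equal priors; with unlimited computational resources either originator could
    # have generated the sample, so the likelihoods are equal as well.
    print(posterior_natural(0.5, 0.5, 1.0, 1.0))  # 0.5 -- both explanations equally likely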

    On the Limits of Recursively Self-Improving AGI

    Self-improving software has been a goal of computer scientists since the founding of the field of Artificial Intelligence. In this work we analyze limits on computation which might restrict recursive self-improvement. We also introduce Convergence Theory, which aims to predict the general behavior of recursively self-improving (RSI) systems.
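
    As a toy illustration of why limits on computation can force convergence rather than unbounded self-improvement, the sketch below (my own assumption, not the paper's Convergence Theory) models a system whose gains shrink as it approaches a hard ceiling; the ceiling value and improvement rate are hypothetical.

    # Toy model (assumption): each self-improvement round closes a fixed fraction of
    # the remaining gap to a hard computational ceiling, so capability converges.
    CEILING = 1000.0  # hypothetical upper bound imposed by limits on computation

    def self_improve(capability, efficiency=0.5):
        """Return the capability after one round of recursive self-improvement."""
        return capability + efficiency * (CEILING - capability)

    capability = 1.0
    for generation in range(20):
        capability = self_improve(capability)
    print(round(capability, 6))  # approaches, but never exceeds, the ceiling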