5 research outputs found

    Unbridled Mental Power

    Hernández-Orallo, J. (2019). Unbridled Mental Power. Nature Physics, 15(1). https://doi.org/10.1038/s41567-018-0388-1

    The Animal-AI Testbed and Competition

    Modern machine learning systems still lack the kind of general intelligence and common-sense reasoning found not only in humans but across the animal kingdom. Many animals are capable of solving seemingly simple tasks such as inferring object location through object persistence and spatial elimination, and of navigating efficiently in novel, out-of-distribution environments. Such tasks are difficult for AI, but they provide a natural stepping stone towards the goal of more complex, human-like general intelligence. The extensive literature on animal cognition provides methodology and experimental paradigms for testing such abilities but, so far, these experiments have not been translated en masse into an AI-friendly setting. We present a new testbed, Animal-AI, first released as part of the Animal-AI Olympics competition at NeurIPS 2019, which is a comprehensive environment and testing paradigm for tasks inspired by animal cognition. In this paper we outline the environment, the testbed, and the results of the competition, and discuss the open challenges in building and testing artificial agents capable of the kind of nonverbal common-sense reasoning found in many non-human animals.

    This work was supported by the Leverhulme Centre for the Future of Intelligence, Leverhulme Trust, under Grant RC-2015-067.

    Crosby, M.; Beyret, B.; Shanahan, M.; Hernández-Orallo, J.; Cheke, L.; Halina, M. (2020). The Animal-AI Testbed and Competition. Proceedings of Machine Learning Research, 123:164-176. http://hdl.handle.net/10251/176140
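
    The testbed is used through the standard episodic agent-environment loop familiar from reinforcement-learning benchmarks. Below is a minimal interaction sketch in Python, assuming a Gym-compatible wrapper around the Animal-AI environment (which is built on Unity ML-Agents); the environment id "AnimalAI-v0" and the wrapper are illustrative assumptions, not the exact package API:

    import gym

    # Hypothetical registered id for a Gym-compatible Animal-AI wrapper.
    env = gym.make("AnimalAI-v0")

    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        # A random agent serves as the usual trivial baseline.
        action = env.action_space.sample()
        obs, reward, done, info = env.step(action)
        total_reward += reward
    env.close()
    print("episode return:", total_reward)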

    AI Generality and Spearman's Law of Diminishing Returns

    Many areas of AI today use benchmarks and competitions with larger and wider sets of tasks. This aims to deter AI systems (and research effort) from specialising in a single task and to encourage them to be prepared to solve previously unseen tasks. It is unclear, however, whether the methods with the best performance are actually those that are most general and, in perspective, whether the trend moves towards more general AI systems. This question has a striking similarity to the analysis of the so-called positive manifold and general factors in the area of human intelligence. In this paper, we first show how the existence of a manifold (positive average pairwise task correlation) can also be analysed in AI, and how this relates to the notion of agent generality, from both the individual and the populational points of view. From the populational perspective, we analyse the following question: is this manifold correlation higher for the most or for the least able group of agents? We contrast this analysis with one of the most controversial issues in human intelligence research, the so-called Spearman's Law of Diminishing Returns (SLODR), which basically states that the relevance of a general factor diminishes for more able human groups. We perform two empirical studies on these issues in AI. We analyse the results of the 2015 General Video Game AI (GVGAI) competition, with games as tasks and "controllers" as agents, and the results of a synthetic setting, with modified elementary cellular automata (ECA) rules as tasks and simple interactive programs as agents. In both cases, we see that SLODR does not appear. The data, and the use of just two scenarios, do not clearly support the reverse either, a Universal Law of Augmenting Returns (ULOAR), but they call for more experiments on this question.

    I thank the anonymous reviewers of ECAI'2016 for their comments on an early version of the experiments shown in Section 4. I am really grateful to Philip J. Bontrager, Ahmed Khalifa, Diego Perez-Liebana and Julian Togelius for providing me with the GVGAI competition data that made Section 3 possible. David Stillwell and Aiden Loe suggested the use of person-fit as a measure of generality. The JAIR reviewers provided very insightful and constructive comments, which have greatly helped to improve the final version of this paper. This work has been partially supported by the EU (FEDER) and Spanish MINECO grant TIN2015-69175-C4-1-R, and by Generalitat Valenciana grants PROMETEOII/2015/013 and PROMETEO/2019/098. I also thank the Future of Life Institute for its support through FLI grant RFP2-152. Part of this work was done while visiting the Leverhulme Centre for the Future of Intelligence, generously funded by the Leverhulme Trust. I also thank the UPV for granting me a sabbatical leave, and the Spanish MECD programme "Salvador de Madariaga" (PRX17/00467) and a BEST grant (BEST/2017/045) from the Generalitat Valenciana for funding another research stay, also at the CFI.

    Hernández-Orallo, J. (2019). AI Generality and Spearman's Law of Diminishing Returns. Journal of Artificial Intelligence Research, 64:529-562. https://doi.org/10.1613/jair.1.11388
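
    To make the populational analysis concrete, the following Python sketch computes the manifold (the average pairwise task correlation) from an agents-by-tasks score matrix and contrasts it between the less and the more able half of the agents, which is the comparison at stake in SLODR. The random matrix and the median split are illustrative assumptions, not the paper's data or exact procedure:

    import numpy as np

    # Illustrative results matrix: rows = agents, columns = tasks,
    # entries = normalised task scores (random here, for the sketch only).
    rng = np.random.default_rng(0)
    results = rng.random((40, 10))

    def average_pairwise_task_correlation(scores):
        # Mean Pearson correlation over all distinct task pairs; a
        # positive value indicates a positive manifold: agents that do
        # well on one task tend to do well on the others.
        corr = np.corrcoef(scores, rowvar=False)   # task-by-task matrix
        upper = np.triu_indices_from(corr, k=1)    # distinct pairs only
        return corr[upper].mean()

    # SLODR-style contrast: split agents by overall ability and compare
    # the manifold within each half. SLODR predicts a weaker manifold
    # in the more able group; ULOAR predicts the opposite.
    order = np.argsort(results.mean(axis=1))
    low, high = results[order[:20]], results[order[20:]]
    print("low-ability manifold: ", average_pairwise_task_correlation(low))
    print("high-ability manifold:", average_pairwise_task_correlation(high))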

    Artificial Superintelligence: Coordination & Strategy

    Attention in the AI safety community has increasingly come to include strategic considerations of coordination between the relevant actors in the fields of AI and AI safety, in addition to the steadily growing work on the technical considerations of building safe AI systems. This shift has several reasons: multiplier effects, pragmatism, and urgency. Given the benefits of coordination between those working towards safe superintelligence, this book surveys promising research in this emerging field of AI safety. On a meta-level, the hope is that this book can serve as a map to inform those working in the field of AI coordination about other promising efforts. While this book focuses on coordination for AI safety, coordination is important to most other known existential risks (e.g., biotechnology risks) and to future, human-made existential risks. Thus, while most coordination strategies in this book are specific to superintelligence, we hope that some insights yield “collateral benefits” for the reduction of other existential risks, by creating an overall civilizational framework that increases robustness, resiliency, and antifragility.