
    Bounding the impact of AGI

    Humans already have a certain level of autonomy, defined here as the capability for voluntary purposive action, and a certain level of rationality, i.e. the capability of reasoning about the consequences of their own actions and those of others. Under the prevailing concept of artificial general intelligences (AGIs), we envision artificial agents whose levels of autonomy and rationality are at least this high, and possibly considerably higher. We use the method of bounds to argue that AGIs meeting these criteria are subject to Gewirth's dialectical argument for the necessity of morality, compelling them to behave in a moral fashion, provided Gewirth's argument can be formally shown to be conclusive. The main practical obstacles to bounding AGIs by means of ethical rationalism are also discussed. © 2014 Taylor & Francis

    Risks of artificial intelligence

    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The book evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals. Contents:
    1. Vincent C. Müller, Editorial: Risks of Artificial Intelligence
    2. Steve Omohundro, Autonomous Technology and the Greater Human Good
    3. Stuart Armstrong, Kaj Sotala and Sean O’Heigeartaigh, The Errors, Insights and Lessons of Famous AI Predictions - and What They Mean for the Future
    4. Ted Goertzel, The Path to More General Artificial Intelligence
    5. Miles Brundage, Limitations and Risks of Machine Ethics
    6. Roman Yampolskiy, Utility Function Security in Artificially Intelligent Agents
    7. Ben Goertzel, GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement
    8. Alexey Potapov and Sergey Rodionov, Universal Empathy and Ethical Bias for Artificial General Intelligence
    9. András Kornai, Bounding the Impact of AGI
    10. Anders Sandberg, Ethics and Impact of Brain Emulations
    11. Daniel Dewey, Long-Term Strategies for Ending Existential Risk from Fast Takeoff
    12. Mark Bishop, The Singularity, or How I Learned to Stop Worrying and Love AI

    Unpredictability of AI

    The young field of AI Safety is still in the process of identifying its challenges and limitations. In this paper, we formally describe one such impossibility result, namely the Unpredictability of AI. We prove that it is impossible to precisely and consistently predict what specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the terminal goals of the system. In conclusion, the impact of Unpredictability on AI Safety is discussed.
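    One schematic way to render the claim (our paraphrase; the notation below is illustrative, not the paper's formal apparatus): for an agent A pursuing terminal goal G, any predictor P of strictly lower intelligence must err on some decision point, where h_t is the interaction history and a_t the action A actually takes at step t:

        $I(P) < I(A) \;\Longrightarrow\; \exists\, t : P(G, h_t) \neq a_t$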

    Harnessing Higher-Order (Meta-)Logic to Represent and Reason with Complex Ethical Theories

    The computer-mechanization of an ambitious explicit ethical theory, Gewirth's Principle of Generic Consistency, is used to showcase an approach for representing and reasoning with ethical theories exhibiting complex logical features such as alethic and deontic modalities, indexicals, and higher-order quantification. Harnessing the high expressive power of Church's type theory as a meta-logic to semantically embed a combination of quantified non-classical logics, our work pushes existing boundaries in knowledge representation and reasoning. We demonstrate that intuitive encodings of complex ethical theories and their automation on the computer are no longer antipodes.
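    The workhorse here is the shallow semantic embedding technique: object-level modal formulas are encoded as predicates on possible worlds inside the expressive meta-logic, so object-logic reasoning reduces to ordinary higher-order reasoning. A minimal sketch of the idea in Lean (our rendering of the general technique; the paper itself works in Church's type theory, and all names below are ours):

        -- Possible worlds and an accessibility relation parameterize the embedding.
        variable {W : Type} (R : W → W → Prop)

        -- An object-level formula is a predicate on worlds.
        def Form (W : Type) := W → Prop

        -- Connectives lift pointwise; the box quantifies over accessible worlds.
        def mImp (φ ψ : Form W) : Form W := fun w => φ w → ψ w
        def mBox (φ : Form W) : Form W := fun w => ∀ v, R w v → φ v

        -- Global validity: truth at every world.
        def valid (φ : Form W) : Prop := ∀ w, φ w

        -- Example: the modal K schema becomes a routine meta-logical theorem.
        example (φ ψ : Form W) :
            valid (mImp (mBox R (mImp φ ψ)) (mImp (mBox R φ) (mBox R ψ))) :=
          fun _ hpq hp v hRv => hpq v hRv (hp v hRv)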

    Optimal Control of Wireless Computing Networks

    Augmented information (AgI) services allow users to consume information that results from the execution of a chain of service functions that process source information to create real-time augmented value. Applications include real-time analysis of remote sensing data, real-time computer vision, personalized video streaming, and augmented reality, among others. We consider the problem of optimal distribution of AgI services over a wireless computing network, in which nodes are equipped with both communication and computing resources. We characterize the wireless computing network capacity region and design a joint flow scheduling and resource allocation algorithm that stabilizes the underlying queuing system while achieving a network cost arbitrarily close to the minimum, with a tradeoff in network delay. Our solution captures the unique chaining and flow scaling aspects of AgI services, while exploiting the use of the broadcast approach coding scheme over the wireless channel.
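    Controllers with these guarantees typically take a Lyapunov drift-plus-penalty form: per slot, each link serves the commodity with the largest positive backlog differential, discounted by a V-weighted cost term. The sketch below in Python is a generic illustration of one such control slot, not the paper's exact algorithm; the names and simplified cost model are assumptions.

        # Generic drift-plus-penalty control slot (illustrative sketch).
        # Q[n][c] is the backlog of commodity c at node n; V trades cost for delay.

        def control_step(Q, links, rates, cost, V):
            """For each link (u, v), pick the commodity with the largest backlog
            differential, then the rate maximizing weight * rate - V * cost."""
            schedule = {}
            for (u, v) in links:
                best_c, best_w = None, 0.0
                for c in Q[u]:
                    w = Q[u][c] - Q[v].get(c, 0.0)  # backpressure weight
                    if w > best_w:
                        best_c, best_w = c, w
                if best_c is None:
                    continue  # no positive differential: keep the link idle
                r = max(rates[(u, v)], key=lambda x: best_w * x - V * cost(u, v, x))
                if best_w * r - V * cost(u, v, r) > 0:
                    schedule[(u, v)] = (best_c, r)
            return schedule

    Larger V pushes the time-average cost closer to the minimum at the price of larger average backlogs, matching the cost-delay tradeoff described above.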

    Exploring AI Safety in Degrees: Generality, Capability and Control

    The landscape of AI safety is frequently explored from different angles: by contrasting specialised AI with general AI (or AGI), by analysing the short-term hazards of systems with limited capabilities against the longer-term risks posed by 'superintelligence', and by conceptualising sophisticated ways of bounding the control an AI system has over its environment and itself (impact, harm to humans, self-harm, containment, etc.). In this position paper we reconsider these three aspects of AI safety as quantitative factors (generality, capability and control), suggesting that by defining metrics for these dimensions, AI risks can be characterised and analysed more precisely. As an example, we illustrate how to define these metrics and their values for some simple agents in a toy scenario within a reinforcement learning setting.
    We thank the anonymous reviewers for their comments. This work was funded by the Future of Life Institute (FLI) under grant RFP2-152, and also supported by the EU (FEDER) and Spanish MINECO under RTI2018-094403-B-C32, and by Generalitat Valenciana under PROMETEO/2019/098.
    Burden, J.; Hernández-Orallo, J. (2020). Exploring AI Safety in Degrees: Generality, Capability and Control. ceur-ws.org. 36-40. http://hdl.handle.net/10251/177484
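    To make the idea of quantitative factors concrete, here is a toy construction of two such metrics in Python (our construction for illustration; the paper's own definitions over reinforcement-learning agents differ): capability as average score over a task battery, generality as the evenness of the score profile.

        import numpy as np

        def capability(scores):
            """Mean normalized score across tasks (each score in [0, 1])."""
            return float(np.mean(scores))

        def generality(scores, eps=1e-12):
            """Normalized entropy of the score profile: near 1.0 when competence
            is spread uniformly across tasks, near 0.0 when concentrated."""
            p = np.asarray(scores, dtype=float)
            p = p / (p.sum() + eps)
            h = -np.sum(p * np.log(p + eps))
            return float(h / np.log(len(p)))

        narrow = [0.95, 0.02, 0.01, 0.02]   # hypothetical specialised agent
        broad = [0.40, 0.35, 0.45, 0.40]    # hypothetical general agent
        print(capability(narrow), generality(narrow))  # ~0.25, ~0.18
        print(capability(broad), generality(broad))    # ~0.40, ~1.00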

    ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots

    We present a new task and dataset, ScreenQA, for screen content understanding via question answering. Existing screen datasets focus either on structure- and component-level understanding, or on much higher-level composite tasks such as navigation and task completion. We attempt to bridge the gap between the two by annotating 80,000+ question-answer pairs over the RICO dataset, in the hope of benchmarking screen reading comprehension capacity.
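    For orientation, a record in such a dataset presumably pairs a RICO screenshot with grounded question-answer annotations; the field names in this Python sketch are our guesses for illustration, not the released schema.

        # Hypothetical shape of one ScreenQA-style record (illustrative only).
        record = {
            "screen_id": "rico_00001",                     # screenshot drawn from RICO
            "question": "What is the battery level shown?",
            "answers": ["82%"],                            # short answers grounded in the UI
        }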

    Editorial: Risks of general artificial intelligence

    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity, so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that should best be avoided or strengthened.