
    What are the ultimate limits to computational techniques: Verifier theory and unverifiability

    Despite significant developments in proof theory, surprisingly little attention has been devoted to the concept of proof verifiers. In particular, the mathematical community may be interested in studying different types of proof verifiers (people, programs, oracles, communities, superintelligences) as mathematical objects. Such an effort could reveal their properties, their powers and limitations (particularly in human mathematicians), their minimum and maximum complexity, as well as issues of self-verification and self-reference. We propose an initial classification system for verifiers and provide a rudimentary analysis of solved and open problems in this important domain. Our main contribution is a formal introduction of the notion of unverifiability, for which this paper could serve as a general citation in the domains of theorem proving as well as software and AI verification.

    Impossibility Results in AI: A Survey

    An impossibility theorem demonstrates that a particular problem, or set of problems, cannot be solved as described in the claim. Such theorems put limits on what it is possible to do concerning artificial intelligence, especially superintelligent AI. As such, these results serve as guidelines, reminders, and warnings to AI safety, AI policy, and governance researchers. They may also enable progress on some long-standing questions by formalizing theories within a constraint-satisfaction framework without committing to any single option. In this paper, we categorize impossibility theorems applicable to the domain of AI into five categories: deduction, indistinguishability, induction, tradeoffs, and intractability. We find that certain theorems are too specific, or rest on implicit assumptions, in ways that limit their application. We also add a new result (theorem) about the unfairness of explainability, the first explainability-related result in the induction category. We conclude that deductive impossibilities rule out 100% guarantees for security. Finally, we offer some ideas with potential in explainability, controllability, value alignment, ethics, and group decision-making, which can be deepened by further investigation.

    Unpredictability of AI

    The young field of AI Safety is still in the process of identifying its challenges and limitations. In this paper, we formally describe one such impossibility result, namely the Unpredictability of AI. We prove that it is impossible to precisely and consistently predict the specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the system's terminal goals. We conclude by discussing the impact of Unpredictability on AI Safety.
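    As a rough sketch of the shape of such a result (our own notation, not necessarily the paper's formalism): a predictor strictly less intelligent than the system cannot anticipate every action the system takes, even when given the system's terminal goal.

```latex
% Hedged formalization (our notation, not necessarily the paper's):
% no predictor P strictly less intelligent than the system S can predict
% every action S takes, even given S's terminal goal G.
\[
\forall P:\ \mathrm{Int}(P) < \mathrm{Int}(S) \;\Longrightarrow\;
\exists\, s:\ P(s, G) \neq a_S(s, G)
\]
% Here s ranges over situations, P(s, G) is the action P predicts in
% situation s, and a_S(s, G) is the action S actually takes to pursue G.
```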

    Strong Designated Verifier Signature Schemes with Undeniable Property and Their Applications

    Most strong designated verifier signature (SDVS) schemes cannot identify the actual generator of a signature when the signer and the designated verifier dispute over it. In other words, most SDVS schemes lack the undeniability property. In this paper, we propose two SDVS schemes that do provide undeniability, namely strong designated verifier signatures with the undeniability property (SDVSUP), called SDVSUP-1 and SDVSUP-2. In both schemes, the signer designates not only a verifier but also an arbiter who can adjudicate a disputed signature. Moreover, the arbiter can perform the judgment procedure alone, without help from the signer or the designated verifier, which increases judgment efficiency and reduces the complexity of signature confirmation. We also demonstrate a real application of our SDVSUP scheme to an electronic bidding system.
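    To make the designated-verifier idea concrete, here is a minimal sketch (ours, not the paper's SDVSUP-1/SDVSUP-2 construction) of the classic Diffie-Hellman-plus-MAC designated verifier signature. Because the MAC key is shared between signer and verifier, the verifier can check signatures but could equally well have forged them, so it cannot convince a third party; that is also precisely the ambiguity the paper's arbiter is designed to resolve.

```python
# Minimal designated-verifier-signature sketch (illustrative only; NOT the
# paper's SDVSUP-1/SDVSUP-2 construction). The designated-verifier property
# comes from using a Diffie-Hellman shared secret as an HMAC key: the
# designated verifier can check tags from the signer, but could also have
# produced any valid tag itself, so it cannot convince a third party.
# Requires the third-party `cryptography` package (pip install cryptography).
import hashlib
import hmac

from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Long-term key pairs for the signer and the designated verifier.
signer_priv = X25519PrivateKey.generate()
verifier_priv = X25519PrivateKey.generate()
signer_pub = signer_priv.public_key()
verifier_pub = verifier_priv.public_key()

def sign(message: bytes) -> bytes:
    """Signer derives the DH shared secret and MACs the message with it."""
    shared = signer_priv.exchange(verifier_pub)  # 32-byte shared secret
    return hmac.new(shared, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Only the designated verifier can recompute the same shared secret."""
    shared = verifier_priv.exchange(signer_pub)  # same secret as the signer's
    expected = hmac.new(shared, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

tag = sign(b"bid: 1200 USD")
assert verify(b"bid: 1200 USD", tag)
assert not verify(b"bid: 9999 USD", tag)
```

    Note that in this toy construction the signer and verifier hold the same MAC key, so neither party can prove to anyone else who actually generated a tag; the paper's contribution is adding an arbiter who can settle exactly this kind of dispute on their own.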

    Unexplainability and Incomprehensibility of Artificial Intelligence

    Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want, and frequently need, to understand how decisions impacting them are made. Similarly, it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions, and that, for the decisions they could explain, people would not understand some of those explanations.