
    Approaching Arithmetic Theories with Finite-State Automata

    The automata-theoretic approach provides an elegant method for deciding linear arithmetic theories. This approach has recently been instrumental in settling long-standing open problems about the complexity of deciding the existential fragments of Büchi arithmetic and linear arithmetic over p-adic fields. In this article, which accompanies an invited talk, we give a high-level exposition of the NP upper bound for existential Büchi arithmetic, obtain some derived results, and further discuss some open problems.
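
    To make the idea concrete, here is a minimal sketch (ours, not the article's construction) of how solutions of a linear constraint form a regular language: reading x, y, and z as synchronized least-significant-bit-first binary words, the solutions of x + y = z are recognized by a two-state deterministic automaton whose state is the carry.

```python
# Minimal sketch of the automata-theoretic view of linear arithmetic:
# the solutions of x + y = z, encoded as least-significant-bit-first
# triples of bits, form a regular language recognized by a DFA whose
# only state is the current carry (0 or 1).

def step(carry, bits):
    """One transition: bits = (x_i, y_i, z_i). Returns the next carry,
    or None if the triple contradicts x + y = z."""
    a, b, d = bits
    s = a + b + carry
    if s % 2 != d:
        return None      # the output bit must equal the sum bit
    return s // 2        # propagate the carry

def accepts(word):
    """Accept iff the sequence of bit-triples encodes a solution."""
    carry = 0
    for bits in word:
        carry = step(carry, bits)
        if carry is None:
            return False
    return carry == 0    # no carry may remain at the end

# 3 + 5 = 8: x = 1100, y = 1010, z = 0001 (least significant bit first)
assert accepts([(1, 1, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)])
```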

    Rapid Recovery for Systems with Scarce Faults

    Our goal is to achieve a high degree of fault tolerance through the control of safety-critical systems. This reduces to solving a game between a malicious environment that injects failures and a controller that tries to establish a correct behavior. We suggest a new control objective for such systems that offers a better balance between complexity and precision: we seek systems that are k-resilient. In order to be k-resilient, a system needs to be able to rapidly recover from a small number, up to k, of local faults infinitely many times, provided that blocks of up to k faults are separated by short recovery periods in which no fault occurs. k-resilience is a simple but powerful abstraction from the precise distribution of local faults, yet much more refined than the traditional objective of maximizing the number of local faults tolerated. We argue why we believe this to be the right level of abstraction for safety-critical systems when local faults are few and far between. We show that the computational complexity of constructing optimal control with respect to resilience is low, and we demonstrate feasibility through an implementation and experimental results. Comment: In Proceedings GandALF 2012, arXiv:1210.202
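
    As an illustration of the game-theoretic setting, the sketch below solves a generic two-player safety game by the standard fixed-point iteration: the environment injects faults, the controller reacts, and the controller wins a play that never leaves the safe set. This is a simplified baseline, not the paper's k-resilience objective.

```python
# Minimal sketch: compute the controller's winning region of a safety
# game by a greatest-fixed-point iteration. Player 'C' (controller)
# needs one successor that stays winning; player 'E' (environment)
# loses a state only if every successor stays winning.

def safety_winning_region(states, edges, owner, safe):
    """states: iterable of states; edges: dict state -> set of successors;
    owner: dict state -> 'C' or 'E'; safe: set of safe states.
    Returns the set of states from which the controller wins."""
    win = set(safe) & set(states)
    changed = True
    while changed:
        changed = False
        for s in list(win):
            succ = edges[s]
            if owner[s] == 'C':
                ok = any(t in win for t in succ)   # C picks one good move
            else:
                ok = all(t in win for t in succ)   # E must not escape
            if not ok:
                win.discard(s)                     # s is losing; shrink
                changed = True
    return win

# Tiny example: from state 1 the environment can force unsafe state 2,
# but the controller can hold the play in state 0 forever.
edges = {0: {0, 1}, 1: {0, 2}, 2: {2}}
owner = {0: 'C', 1: 'E', 2: 'C'}
print(safety_winning_region({0, 1, 2}, edges, owner, safe={0, 1}))  # {0}
```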

    A Component-oriented Framework for Autonomous Agents

    The design of a complex system warrants a compositional methodology, i.e., composing simple components to obtain a larger system that exhibits their collective behavior in a meaningful way. We propose an automaton-based paradigm for the compositional design of such systems, where an action is accompanied by one or more preferences. At run time, these preferences provide a natural fallback mechanism for the component, while at design time they can be used to reason about the behavior of the component in an uncertain physical world. Using structures that tell us how to compose preferences and actions, we can compose formal representations of individual components or agents to obtain a representation of the composed system. We extend Linear Temporal Logic with two unary connectives that reflect the compositional structure of the actions, and show how the logic can be used to diagnose undesired behavior by tracing the falsification of a specification back to one or more culpable components.
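
    The run-time fallback mechanism can be illustrated with a small sketch; the names and data structures here are ours, not the paper's formalism. An action is a preference-ordered list of alternatives, and the component executes the most preferred alternative that the environment currently admits.

```python
# Minimal sketch of preference-based fallback at run time: an action
# is a list of alternatives ordered from most to least preferred.

def execute(action, admissible):
    """action: list of alternatives, most preferred first;
    admissible: predicate saying whether the environment currently
    allows an alternative. Returns the chosen alternative, or None
    if every preference fails (the behavior to be diagnosed)."""
    for alt in action:
        if admissible(alt):
            return alt
    return None

# A robot's 'move' action prefers going forward, falls back to turning.
move = ["forward", "turn_left", "turn_right"]
blocked = {"forward"}                              # the world blocks 'forward'
print(execute(move, lambda a: a not in blocked))   # -> 'turn_left'
```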

    Changing a semantics: opportunism or courage?

    The generalized models for higher-order logics introduced by Leon Henkin, and their multiple offspring over the years, have become a standard tool in many areas of logic. Even so, discussion has persisted about their technical status, and perhaps even their conceptual legitimacy. This paper gives a systematic view of generalized model techniques, discusses what they mean in mathematical and philosophical terms, and presents a few technical themes and results about their role in algebraic representation, calibrating provability, lowering complexity, understanding fixed-point logics, and achieving set-theoretic absoluteness. We also show how thinking about Henkin's approach to the semantics of logical systems in this generality can yield new results, dispelling the impression of ad-hocness. This paper is dedicated to Leon Henkin, a deep logician who has changed the way we all work, while also being an always open, modest, and encouraging colleague and friend. Comment: 27 pages. To appear in: The Life and Work of Leon Henkin: Essays on His Contributions (Studies in Universal Logic), eds. Manzano, M., Sain, I., and Alonso, E., 201
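
    For orientation, the core of Henkin's construction for second-order logic, in standard textbook form (the paper treats the idea in far greater generality), can be stated as follows:

```latex
% Henkin's generalized semantics, standard textbook form: the key move
% is to let set quantifiers range over a designated family of subsets
% rather than the full power set.
A general model is a pair $(M,\mathcal{D})$, where $M$ is a first-order
structure with domain $D$ and $\mathcal{D}\subseteq\mathcal{P}(D)$ is a
family of admissible sets closed under definability (the comprehension
scheme):
\[
  \{\, a \in D \mid (M,\mathcal{D}) \models \varphi(a,\bar b,\bar B) \,\}
  \in \mathcal{D}
  \qquad \text{for all } \bar b \in D^{n},\ \bar B \in \mathcal{D}^{m}.
\]
Quantifiers $\exists X$ range over $\mathcal{D}$ only. Taking
$\mathcal{D}=\mathcal{P}(D)$ recovers the standard semantics, while over
general models one regains completeness, compactness, and
L\"owenheim--Skolem.
```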

    G\"odel Incompleteness and the Black Hole Information Paradox

    Semiclassical reasoning suggests that the process by which an object collapses into a black hole and then evaporates by emitting Hawking radiation may destroy information, a problem often referred to as the black hole information paradox. Further, there seems to be no unique prediction of where the information about the collapsing body is localized. We propose that the latter aspect of the paradox may be a manifestation of an inconsistent self-reference in the semiclassical theory of black hole evolution. This suggests the inadequacy of the semiclassical approach or, at worst, that standard quantum mechanics and general relativity are fundamentally incompatible. One option for resolving the localization aspect of the paradox is to identify the Gödel-like incompleteness that corresponds to an imposition of consistency, and to introduce possibly new physics that supplies this incompleteness. Another option is to modify the theory in such a way as to prohibit self-reference. We discuss various possible scenarios to implement these options, including eternally collapsing objects, black hole remnants, black hole final states, and simple variants of semiclassical quantum gravity. Comment: 14 pages, 2 figures; revised according to journal requirements

    On the Inherent Incompleteness of Scientific Theories

    We examine the question of whether scientific theories can ever be complete. For two closely related reasons, we argue that they cannot. The first reason is the inability to determine what counts as a “valid empirical observation”, a result based on a self-referential Gödel/Tarski-like proof. The second reason is the existence of “meta-empirical” evidence of the inherent incompleteness of observations. These reasons, along with theoretical incompleteness, are intimately connected to the notion of belief and to theses within the philosophy of science: the Quine-Duhem (and underdetermination) thesis and the failure of the observational/theoretical distinction. Some puzzling aspects of these philosophical theses become clearer in light of these connections. Further consequences follow: there can be no absolute measure of the informational content of empirical data, no absolute measure of the entropy of physical systems, and no complete computer simulation of the natural world. The connections with the mathematical theorems of Gödel and Tarski reveal further links between scientific and mathematical incompleteness: computational irreducibility, complexity, infinity, arbitrariness, and self-reference. Finally, we offer suggestions as to where a more rigorous (or formal) “proof” of scientific incompleteness might be found.

    Light On String Solving: Approaches to Efficiently and Correctly Solving String Constraints

    Widespread use of string solvers in the formal analysis of string-heavy programs has led to a growing demand for more efficient and reliable techniques that can be applied in this context, especially to real-world cases. Designing an algorithm for the (generally undecidable) satisfiability problem for systems of string constraints requires a thorough understanding of the structure of constraints present in the targeted cases. We approach this problem from several perspectives. First, we present an algorithm that works by reformulating the satisfiability of bounded word equations as a reachability problem for non-deterministic finite automata. Second, we present a transformation-system-based technique for solving string constraints. Third, we investigate benchmarks presented in the literature that contain regular expression membership predicates, and design a decision procedure for a PSPACE-complete sub-theory. Additionally, we introduce a new benchmarking framework for string solvers and use it to showcase the power of our algorithms in an extensive empirical evaluation over a diverse set of benchmarks.
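
    To fix notation, here is a minimal sketch of the bounded word-equation problem itself, solved by naive enumeration. This is a baseline for illustration only; the thesis's algorithm instead reduces the problem to reachability in non-deterministic finite automata.

```python
# Minimal sketch: decide satisfiability of a bounded word equation by
# enumerating all substitutions up to the length bound (exponential,
# but it defines the problem precisely).

from itertools import product

def solve_bounded(lhs, rhs, variables, alphabet, bound):
    """lhs/rhs: lists mixing constant strings and variable names;
    variables: set of variable names; bound: max length per variable.
    Returns a satisfying substitution, or None if none exists."""
    def words_up_to(n):
        for k in range(n + 1):
            yield from (''.join(w) for w in product(alphabet, repeat=k))

    def instantiate(side, sub):
        return ''.join(sub.get(t, t) for t in side)

    vs = sorted(variables)
    for values in product(list(words_up_to(bound)), repeat=len(vs)):
        sub = dict(zip(vs, values))
        if instantiate(lhs, sub) == instantiate(rhs, sub):
            return sub
    return None

# x . "ab" = "ab" . x has the solution x = "" (among others).
print(solve_bounded(['x', 'ab'], ['ab', 'x'], {'x'}, 'ab', bound=2))
```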