
    Memory effects can make the transmission capability of a communication channel uncomputable

    Most communication channels are subject to noise. One of the goals of Information Theory is to add redundancy to the transmission of information so that the information is transmitted reliably and the amount of information transmitted through the channel is as large as possible. The maximum rate at which reliable transmission is possible is called the capacity. If the channel keeps no memory of its past, the capacity is given by a simple optimization problem and can be computed efficiently. The situation for channels with memory is less clear. Here we show that for channels with memory the capacity cannot be computed to within precision 1/5. Our result holds even if we consider one of the simplest families of such channels (information-stable finite-state machine channels), restrict the input and output of the channel to 4 bits and 1 bit respectively, and allow 6 bits of memory.
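
    The memoryless case mentioned above is classically solved by the Blahut-Arimoto iteration. Here is a minimal Python sketch of it for a discrete memoryless channel, as background only; the channel matrix W and the binary-symmetric-channel example are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def blahut_arimoto(W, iters=200):
        """Capacity (in bits) of a discrete memoryless channel.
        W: |Y| x |X| matrix with W[y, x] = P(y | x).
        A sketch of the classical Blahut-Arimoto iteration, for illustration."""
        n_y, n_x = W.shape
        p = np.full(n_x, 1.0 / n_x)          # input distribution, start uniform

        def info_density(p):
            q = W @ p                        # output distribution q(y)
            with np.errstate(divide="ignore", invalid="ignore"):
                ratio = np.where(W > 0, W / q[:, None], 1.0)
            return np.sum(W * np.log2(ratio), axis=0)   # D(x) = KL(W(.|x) || q)

        for _ in range(iters):
            p = p * np.exp2(info_density(p))  # multiplicative update
            p /= p.sum()
        return float(p @ info_density(p))     # mutual information at p

    # Binary symmetric channel, crossover 0.1: capacity = 1 - H(0.1) ~ 0.531 bits
    W = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
    print(round(blahut_arimoto(W), 3))
    ```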

    Real-Time Synthesis is Hard!

    We study the reactive synthesis problem (RS) for specifications given in Metric Interval Temporal Logic (MITL). RS is known to be undecidable in a very general setting, but only on infinite words; and only the very restrictive BRRS subcase is known to be decidable (see D'Souza et al. and Bouyer et al.). In this paper, we sharpen the decidability border of MITL synthesis. We show that RS is undecidable on finite words too, and present a landscape of restrictions (both on the logic and on the possible controllers) that remain undecidable. On the positive side, we revisit BRRS and introduce an efficient on-the-fly algorithm to solve it.
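
    For intuition, a typical MITL specification (a generic textbook-style example, not one taken from the paper) bounds the delay between an environment request r and a controller response g:

    ```latex
    % Every request r must be answered by a grant g within 5 time units;
    % a controller realizes the formula iff it enforces it against all
    % possible environment behaviours.
    \Box\,(r \rightarrow \Diamond_{[0,5]}\, g)
    ```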

    Verifying nondeterministic probabilistic channel systems against ω-regular linear-time properties

    Lossy channel systems (LCSs) are systems of finite-state automata that communicate via unreliable unbounded FIFO channels. In order to circumvent the undecidability of model checking for nondeterministic LCSs, probabilistic models have been introduced, where it can be decided whether a linear-time property holds almost surely. However, such fully probabilistic systems are not a faithful model of nondeterministic protocols. We study a hybrid model for LCSs where losses of messages are seen as faults occurring with some given probability, and where the internal behavior of the system remains nondeterministic. The semantics is thus in terms of infinite-state Markov decision processes. The purpose of this article is to discuss the decidability of linear-time properties formalized by formulas of linear temporal logic (LTL). Our focus is on the qualitative setting, where one asks, e.g., whether an LTL formula holds almost surely or with zero probability (in case the formula describes the bad behaviors). Surprisingly, it turns out that, in contrast to finite-state Markov decision processes, the satisfaction relation for LTL formulas depends on the type of scheduler chosen to resolve the nondeterminism. While all variants of the qualitative LTL model-checking problem are undecidable for the full class of history-dependent schedulers, the same questions for finite-memory schedulers can be solved algorithmically. However, the restriction to reachability properties and special kinds of recurrent reachability properties yields verification problems that are decidable for the full class of schedulers, which, for this restricted class of properties, are no more powerful than finite-memory schedulers, or even a subclass of them.
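
    To make the hybrid model concrete, here is a minimal Python sketch of the semantics described above; the class name LossyChannel, the loss rate p_loss, and the toy protocol are illustrative inventions, not from the article. Internal choices stay nondeterministic (resolved here by a stand-in scheduler), while each sent message is lost with a fixed fault probability.

    ```python
    import random
    from collections import deque

    class LossyChannel:
        """Unbounded FIFO channel whose sends are subject to probabilistic loss.
        Only the faults are randomized; internal behavior stays nondeterministic."""
        def __init__(self, p_loss):
            self.p_loss = p_loss
            self.queue = deque()

        def send(self, msg):
            # Fault model: the message is lost with probability p_loss,
            # otherwise it is appended to the unbounded FIFO queue.
            if random.random() >= self.p_loss:
                self.queue.append(msg)

        def receive(self):
            return self.queue.popleft() if self.queue else None

    # A scheduler resolves the nondeterminism between enabled actions;
    # random.choice merely stands in for one possible resolution.
    ch = LossyChannel(p_loss=0.1)
    for step in range(5):
        action = random.choice(["send_req", "internal"])
        if action == "send_req":
            ch.send(("req", step))
    print(list(ch.queue))
    ```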

    Cuckoo: a Language for Implementing Memory- and Thread-safe System Services

    This paper is centered on the design of a thread- and memory-safe language, primarily for the compilation of application-specific services for extensible operating systems. We describe various issues that have influenced the design of our language, called Cuckoo, which guarantees the safety of programs with potentially asynchronous flows of control. Comparisons are drawn between Cuckoo and related software-safety techniques, including Cyclone and software-based fault isolation (SFI), and performance results suggest our prototype compiler is capable of generating safe code that executes with low runtime overheads, even before potential code optimizations. Compared to Cyclone, Cuckoo is able to safely guard accesses to memory when programs are multithreaded. Similarly, Cuckoo is capable of enforcing memory safety in situations that are potentially troublesome for techniques such as SFI.

    A Bi-Directional Refinement Algorithm for the Calculus of (Co)Inductive Constructions

    The paper describes the refinement algorithm for the Calculus of (Co)Inductive Constructions (CIC) implemented in the interactive theorem prover Matita. The refinement algorithm is in charge of giving a meaning to the terms, types and proof terms directly written by the user or generated by using tactics, decision procedures or general automation. The terms are written in an "external syntax", meant to be user-friendly, that allows omission of information, untyped binders and a certain liberal use of user-defined sub-typing. The refiner modifies the terms to obtain related well-typed terms in the internal syntax understood by the kernel of the ITP. In particular, it acts as a type-inference algorithm when all the binders are untyped. The proposed algorithm is bi-directional: given a term in external syntax and a type expected for the term, it propagates as much typing information as possible towards the leaves of the term. Traditional mono-directional algorithms, instead, proceed in a bottom-up way, inferring the type of a sub-term and comparing (unifying) it with the type expected by its context only at the end. We propose some novel bi-directional rules for CIC that are particularly effective. Among the benefits of bi-directionality are better error-message reporting and better inference of dependent types. Moreover, thanks to bi-directionality, the coercion system for sub-typing is more effective, and type inference generates simpler unification problems that are more likely to be solved by the inherently incomplete higher-order unification algorithms implemented. Finally, we introduce in the external syntax the notion of a vector of placeholders, which makes it possible to omit an arbitrary number of arguments at once. Vectors of placeholders allow a trivial implementation of implicit arguments and greatly simplify the implementation of primitive and simple tactics.
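
    As an illustration of the general bi-directional discipline (a standard simply-typed sketch with invented encodings, not Matita's CIC refiner): a synthesis mode infers types bottom-up, while a checking mode propagates an expected type towards the leaves, letting binders in checked positions stay untyped.

    ```python
    # Bidirectional typing for a tiny lambda calculus: `infer` synthesizes a
    # type bottom-up, `check` pushes an expected type toward the leaves, so
    # binders in checked positions need no annotation. Illustrative sketch.

    def infer(ctx, term):
        kind = term[0]
        if kind == "var":                      # ("var", name)
            return ctx[term[1]]
        if kind == "app":                      # ("app", fun, arg)
            fty = infer(ctx, term[1])
            assert fty[0] == "arrow", "applying a non-function"
            check(ctx, term[2], fty[1])        # the argument is *checked*
            return fty[2]
        if kind == "ann":                      # ("ann", term, type)
            check(ctx, term[1], term[2])
            return term[2]
        raise TypeError(f"cannot infer {kind}; annotation needed")

    def check(ctx, term, ty):
        if term[0] == "lam":                   # ("lam", name, body): untyped binder
            assert ty[0] == "arrow", "lambda needs an arrow type"
            # the expected type is propagated to the binder and the body
            check({**ctx, term[1]: ty[1]}, term[2], ty[2])
        else:
            assert infer(ctx, term) == ty, "type mismatch"

    # (\x. x) checked against int -> int: the binder type is propagated down.
    check({}, ("lam", "x", ("var", "x")), ("arrow", "int", "int"))
    ```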

    Fuzzy Description Logics with General Concept Inclusions

    Description logics (DLs) are used to represent knowledge of an application domain and provide standard reasoning services to infer consequences of this knowledge. However, classical DLs are not suited to representing vagueness in the description of the knowledge. We consider a combination of DLs and fuzzy logics to address this task. In particular, we consider the t-norm-based semantics for fuzzy DLs introduced by Hájek in 2005. Since then, many tableau algorithms have been developed for reasoning in fuzzy DLs. Another popular approach is to reduce fuzzy ontologies to classical ones and use existing highly optimized classical reasoners to deal with them. However, a systematic study of the computational complexity of the different reasoning problems has so far been missing from the literature on fuzzy DLs. Recently, some of the developed tableau algorithms have been shown to be incorrect in the presence of general concept inclusion axioms (GCIs). In some fuzzy DLs, reasoning with GCIs has even turned out to be undecidable. This work provides a rigorous analysis of the boundary between decidable and undecidable reasoning problems in t-norm-based fuzzy DLs, in particular for GCIs. Existing undecidability proofs are extended to cover large classes of fuzzy DLs, and decidability is shown for most of the remaining logics considered here. Additionally, the computational complexity of reasoning in fuzzy DLs with semantics based on finite lattices is analyzed. For most decidability results, tight complexity bounds can be derived.
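
    For reference, the three fundamental continuous t-norms underlying the Hájek-style semantics mentioned above, together with their residua (which interpret implication), in a small illustrative Python sketch; the fuzzy-GCI reading at the end is a generic example, not one from this work.

    ```python
    # The three fundamental continuous t-norms and their residua, which
    # interpret conjunction and implication over truth degrees in [0, 1].

    def goedel(x, y):        return min(x, y)
    def goedel_impl(x, y):   return 1.0 if x <= y else y

    def product(x, y):       return x * y
    def product_impl(x, y):  return 1.0 if x <= y else y / x

    def lukasiewicz(x, y):       return max(0.0, x + y - 1.0)
    def lukasiewicz_impl(x, y):  return min(1.0, 1.0 - x + y)

    # A fuzzy GCI "C is subsumed by D to degree >= 0.8" is satisfied at an
    # individual with membership degrees c, d iff impl(c, d) >= 0.8:
    c, d = 0.9, 0.75
    print(lukasiewicz_impl(c, d))   # ~0.85 >= 0.8, so the axiom holds here
    ```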

    Matter as Information. Quantum Information as Matter

    Quantum information is discussed as the universal substance of the world. It is interpreted as the generalization of classical information that includes both finite and transfinite ordinal numbers. On the other hand, any wave function, and thus any state of any quantum system, is just one value of quantum information. Information and its generalization as quantum information are considered as quantities of elementary choices. Their units are, correspondingly, a bit and a qubit. The course of time is what generates choices by itself, and thus, in the final analysis, quantum information and every item in the world. The course of time necessarily generates choices in this way: the future is absolutely unorderable in principle, while the past is always well-ordered and thus unchangeable. The present, as the mediation between them, needs the well-ordering theorem, which is equivalent to the axiom of choice. The latter guarantees a choice even among the elements of an infinite set, which is the case for quantum information. Concrete and abstract objects share information as their common base, which is quantum for the former and classical for the latter. The general quantities of matter in physics, mass and energy, can be considered as particular cases of quantum information. The link between choice and abstraction in set theory allows “Hume’s principle” to be interpreted in terms of quantum mechanics as an equivalence of “many” and “much” underlying quantum information. Quantum information as the universal substance of the world calls for the unity of physics and mathematics, rather than that of concrete and abstract objects, and thus, in the final analysis, for a form of quantum neo-Pythagoreanism.

    Eternal Immolation: could a Trinitarian coordinating-concept for Theistic Metaphysics solve the Problems of Theodicy?

    The author contextualizes the Problem of Evil within the Open Theism system, listing its main theses, primarily the logic-of-love defense (and free-will defense) connected to Trinitarian speculation. After evaluating the discussion in analytic philosophy of religion, the focus is on the personal mystery of evil, claiming that, because of mystery and vagueness, the Problem of Evil is undecidable. Recalling other schools of thought (Pareyson: ontology of freedom; Moltmann: dialectical theology; kenotic theology; Original Sin hermeneutics), the author tries to grasp their common insights. One of them is the evident explanatory failure of theodicies, expressed in the antinomian statement ‘God is not innocent’. The author follows these insights, developing the concept of Eternal Immolation (Bulgakov), arguing that, without a proper understanding of its mystery (what it is, and what it is not), theistic theodicy could remain compromised. ‘Eternal Immolation’ is considered a consequence of, or already present in, recent speculations; it stands or falls with the acceptance that these reveal some unresolved points in Christian doctrine. Hence, ‘Eternal Immolation’ becomes a coordinating concept, able to bring together their assumptions: several kinds of kenosis; the ontology of freedom with a logic-of-love defense, strongly linked to a libertarian human freedom; and the acknowledgement of the unresolved mystery of evil.