
    A critique of Tractarian semantics

    This is a critique of the principal claims made within Ludwig Wittgenstein's Tractatus Logico-Philosophicus. It traces the development of his thought from the time he dictated the pre-Tractarian Notes on Logic to Russell up until about 1932, when he began work on the Philosophical Grammar. The influence exercised upon him by Frege, Russell and Moore is considered at length. Chapter One examines Moore's relational theory of judgment, which Wittgenstein apparently accepted upon his arrival at Cambridge in 1911. From Moore, Wittgenstein would inherit one of the fundamental metaphysical theses of the Tractatus, namely, that the world consists of facts rather than things. Wittgenstein's attempt to overcome the relational theory's inability to account for falsehood, negation, and the possibility of truly ascribing false beliefs to others would herald some of the principal theses of Tractarian semantics: that propositional signs must exhibit bipolarity, that a distinction must be drawn between Sinn and Bedeutung, and that a distinction holds between what can be said and what can only be shown. Chapter Two examines how these theses are sharpened by considering the influence of Frege and the manner in which Wittgenstein disposes of Russell's Paradox. Considerable attention is given to the issue of whether Frege is to be interpreted as a semantic Platonist. It is argued that he is not, and that Tractarian semantics shores up the problematic features of Frege's philosophy which make it susceptible to the paradox. From Frege Wittgenstein derives the idea that all representation requires a structured medium. The chapter concludes by considering how this entails the falsehood of semantic Platonism. Chapter Three studies Wittgenstein's argument for logical atomism and gives it a favorable assessment. The influence of Russell's conception of logical analysis is considered. The chapter concludes by showing how Wittgenstein's thesis that there must be simple subsistent objects depends upon the truth of his Grundgedanke, i.e., the claim that the logical constants are not referring terms. Chapter Four examines the argument for the Grundgedanke and defends it against criticism based upon phenomenological considerations for objectifying negativity. It is demonstrated that Wittgenstein's view entails that a distinction must be drawn between propositions possessing sense and those that are senseless but no less a part of our language. Chapter Five examines Wittgenstein's claim that the essence of a proposition consists in a propositional sign's projective relation to the world, and it considers the Tractarian analysis of propositional attitude ascriptions. It is argued that the analysis of these sorts of sentences forms the principal problem with the Tractatus. The chapter includes a discussion of why the Color Exclusion Problem need not be considered problematic for the author of the Tractatus, and it defends the realistic interpretation of the Tractatus given throughout the dissertation against criticisms arising from a consideration of Wittgenstein's remarks on solipsism.

    Logic and Automata

    Mathematical logic and automata theory are two scientific disciplines with a fundamentally close relationship. The authors of Logic and Automata take the occasion of the sixtieth birthday of Wolfgang Thomas to present a tour d'horizon of automata theory and logic. The twenty papers in this volume cover many different facets of logic and automata theory, emphasizing the connections to other disciplines such as games, algorithms, and semigroup theory, as well as discussing current challenges in the field.

    Computer Aided Verification

    The open access two-volume set LNCS 12224 and 12225 constitutes the refereed proceedings of the 32nd International Conference on Computer Aided Verification, CAV 2020, held in Los Angeles, CA, USA, in July 2020.* The 43 full papers presented together with 18 tool papers and 4 case studies were carefully reviewed and selected from 240 submissions. The papers were organized in the following topical sections: Part I: AI verification; blockchain and security; concurrency; hardware verification and decision procedures; and hybrid and dynamic systems. Part II: model checking; software verification; stochastic systems; and synthesis. *The conference was held virtually due to the COVID-19 pandemic.

    $\Sigma_1$ gaps as derived models and correctness of mice

    Assume ZF + AD + $V = L(R)$. Let $[\alpha,\beta]$ be a $\Sigma_1$ gap with $J_\alpha(R)$ admissible. We analyze $J_\beta(R)$ as a natural form of "derived model" of a premouse $P$, where $P$ is found in a generic extension of $V$. In particular, we will have $\mathcal{P}(R) \cap J_\beta(R) = \mathcal{P}(R) \cap D$, and if $J_\beta(R) \models$ "$\Theta$ exists", then $J_\beta(R)$ and $D$ in fact have the same universe. This analysis will be employed in further work, yet to appear, toward a resolution of a conjecture of Rudominer and Steel on the nature of $(L(R))^M$, for $\omega$-small mice $M$. We also establish some preliminary work toward this conjecture in the present paper. Comment: 128 pages.

    Lambda-calculus and formal language theory

    Formal and symbolic approaches have offered computer science many application fields. The rich and fruitful connection between logic, automata and algebra is one such approach. It has been used to model natural languages as well as in program verification. In the mathematics of language it is able to model phenomena ranging from syntax to phonology, while in verification it gives model checking algorithms for a wide family of programs. This thesis extends this approach to the simply typed lambda-calculus by providing a natural extension of recognizability to programs that are representable by simply typed terms. This notion is then applied to both the mathematics of language and program verification. In the case of the mathematics of language, it is used to generalize parsing algorithms and to propose high-level methods to describe languages. Concerning program verification, it is used to describe methods for verifying the behavioral properties of higher-order programs. In both cases, the link that is drawn between finite state methods and denotational semantics provides the means to mix powerful tools coming from the two worlds.
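
    To make the notion of recognizability concrete, here is a minimal Python sketch, not code from the thesis: all names (`Var`, `Lam`, `App`, `denote`, `church`) are illustrative. It evaluates simply typed terms in a finite standard model whose base type has two elements, and decides membership in a set of terms (here, the even Church numerals) purely from the denotation in that finite model, which is the kind of finite-state/denotational bridge described above.

```python
# Minimal sketch: deciding membership in a set of simply typed lambda-terms by
# evaluating them in a finite standard model (base type = the two booleans).
# All names are illustrative, not taken from the thesis.
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    var: str
    body: object

@dataclass
class App:
    fun: object
    arg: object

def denote(term, env=None):
    """Evaluate a closed, well-typed term; values at function types are Python callables."""
    env = env or {}
    if isinstance(term, Var):
        return env[term.name]
    if isinstance(term, Lam):
        return lambda v, t=term, e=dict(env): denote(t.body, {**e, t.var: v})
    if isinstance(term, App):
        return denote(term.fun, env)(denote(term.arg, env))
    raise TypeError(f"not a term: {term!r}")

def church(n):
    """Church numeral \\f. \\x. f (f ... (f x)), a closed term of type (o->o)->o->o."""
    body = Var("x")
    for _ in range(n):
        body = App(Var("f"), body)
    return Lam("f", Lam("x", body))

def recognized_as_even(term):
    """Membership depends only on the denotation in the finite model:
    interpret the base type as booleans and feed negation and False."""
    return denote(term)(lambda b: not b)(False) is False

print([recognized_as_even(church(n)) for n in range(5)])
# [True, False, True, False, True]
```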

    Computer Aided Verification

    This open access two-volume set LNCS 13371 and 13372 constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: Invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: Probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.

    Playing with Trees and Logic

    This document proposes an overview of my research sinc

    The design and implementation of a relational programming system.

    The declarative class of computer languages consists mainly of two paradigms, the logic and the functional. Much research has been devoted in recent years to the integration of the two, with the aim of securing the advantages of both without retaining their disadvantages. To date this research has, arguably, been less fruitful than initially hoped. A large number of composite functional/logical languages have been proposed but have generally been marred by the lack of a firm, cohesive, mathematical basis. More recently new declarative paradigms, equational and constraint languages, have been advocated. These however do not fully encompass those features we perceive as being central to functional and logic languages. The crucial functional features are higher-order definitions, static polymorphic typing, applicative expressions and laziness. The crucial logic features are the ability to reason about both functional and non-functional relationships and to handle computations involving search. This thesis advocates a new declarative paradigm which lies midway between functional and logic languages: the so-called relational paradigm. In a relational language, program and data alike are denoted by relations. All expressions are relations constructed from simpler expressions using operators which form a relational algebra. The impetus for the use of relations in a declarative language comes from observations concerning their connection to functional and logic programming. Relations are mathematically more general than functions, modelling non-functional as well as functional relationships. They also form the basis of many logic languages, for example, Prolog. This thesis proposes a new relational language based entirely on binary relations, named Drusilla. We demonstrate the functional and logic aspects of Drusilla. It retains the higher-order objects and polymorphism found in modern functional languages, but handles non-determinism and models relationships between objects in the manner of a logic language, with the notion of an algorithm being composed of logic and control elements. Different programming styles (functional, logic and relational) are illustrated. However, such expressive power does not come for free; it comes with a high cost of implementation. Two main techniques are used in the necessarily complex language interpreter. A type inference system checks programs to ensure they are meaningful and simultaneously performs automatic representation selection for relations. A symbolic manipulation system transforms programs to improve the efficiency of expressions and to increase the number of possible representations for relations while preserving program meaning.
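
    As an informal illustration of the relational style described above (plain Python, not Drusilla syntax; the operator names and carriers are illustrative), binary relations can be represented as sets of pairs and combined with a few relational-algebra operators. Functions appear as the deterministic special case, while non-determinism, search, and running a relation backwards all fall out of the same machinery.

```python
# Illustrative sketch in plain Python (not Drusilla syntax): programs and data
# as binary relations over finite carriers, combined with a small relational algebra.

def compose(r, s):
    """Relational composition: {(a, c) | (a, b) in r and (b, c) in s}."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def converse(r):
    """Invert a relation by swapping the two sides of every pair."""
    return {(b, a) for (a, b) in r}

def union(r, s):
    """Relational union: non-deterministic choice between two relations."""
    return r | s

def image(r, x):
    """All outputs related to the input x -- possibly none, possibly many."""
    return {b for (a, b) in r if a == x}

# A function as a relation: successor on a small carrier.
succ = {(n, n + 1) for n in range(5)}

# A genuinely non-functional relation: "d divides n" on 1..10.
divides = {(d, n) for n in range(1, 11) for d in range(1, 11) if n % d == 0}

print(image(compose(succ, succ), 3))           # {5}          composing two "functions"
print(image(divides, 3))                       # {3, 6, 9}    many results: search
print(image(converse(succ), 4))                # {3}          running succ backwards
print(image(converse(divides), 6))             # {1, 2, 3, 6} divisors, via the converse
print(image(union(succ, converse(succ)), 3))   # {2, 4}       non-deterministic choice
```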

    Generalized simulation relations with applications in automata theory

    Finite-state automata are a central computational model in computer science, with numerous and diverse applications. In one such application, viz. model-checking, automata over infinite words play a central rôle. In this thesis, we concentrate on Büchi automata (BA), which are arguably the simplest finite-state model recognizing languages of infinite words. Two algorithmic problems are paramount in the theory of automata: language inclusion and automata minimization. They are both PSPACE-complete, thus under standard complexity-theoretic assumptions no deterministic algorithm with worst-case polynomial time can be expected. In this thesis, we develop techniques to tackle these problems. In automata minimization, one seeks the smallest automaton recognizing a given language (“small” means with few states). Despite the PSPACE-hardness of minimization, the size of an automaton can often be reduced substantially by means of quotienting. In quotienting, states deemed equivalent according to a given equivalence are merged together; if this merging operation preserves the language, then the equivalence is said to be Good for Quotienting (GFQ). In general, quotienting cannot achieve exact minimization, but, in practice, it can still offer a very good reduction in size. The central topic of this thesis is the design of GFQ equivalences for Büchi automata. A particularly successful approach to the design of GFQ equivalences is based on simulation relations. Simulation relations are a powerful tool to compare the local behavior of automata. The main contribution of this thesis is to generalize simulations by relaxing locality in three perpendicular ways: by fixing the input word in advance (fixed-word simulations, Ch. 3), by allowing jumps (jumping simulations, Ch. 4), and by using multiple pebbles (multipebble simulations for alternating BA, Ch. 5). In each case, we show that our generalized simulations induce GFQ equivalences. For fixed-word simulation, we argue that it is the coarsest GFQ simulation implying language inclusion, by showing that it subsumes a natural hierarchy of GFQ multipebble simulations. From a theoretical perspective, our study significantly extends the theory of simulations for BA; relaxing locality is a general principle, and it may find useful applications outside automata theory. From a practical perspective, we obtain GFQ equivalences coarser than previously possible. This yields smaller quotient automata, which is beneficial in applications. Finally, we show how simulation relations have recently been applied to significantly optimize exact (exponential) language inclusion algorithms (Ch. 6), thus extending their practical applicability.
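
    For orientation, the sketch below shows the ordinary "local" notion that the thesis generalizes: direct simulation on a Büchi automaton, computed as a greatest fixpoint, followed by quotienting with the induced equivalence (mutually simulating states are merged). It is plain Python on an illustrative toy automaton, not code from the thesis; the fixed-word, jumping, and multipebble simulations discussed above are substantially more involved.

```python
# Minimal sketch: classical direct simulation on a Büchi automaton, computed as a
# greatest fixpoint, and quotienting by the induced equivalence. Toy example only.
from itertools import product

def direct_simulation(states, alphabet, delta, accepting):
    """Return the pairs (p, q) such that q direct-simulates p."""
    # Start from the acceptance constraint, then repeatedly remove pairs whose
    # transitions cannot be matched, until a fixpoint is reached.
    rel = {(p, q) for p, q in product(states, repeat=2)
           if p not in accepting or q in accepting}
    changed = True
    while changed:
        changed = False
        for p, q in list(rel):
            for a in alphabet:
                for p2 in delta.get((p, a), set()):
                    if not any((p2, q2) in rel for q2 in delta.get((q, a), set())):
                        rel.discard((p, q))
                        changed = True
                        break
                if (p, q) not in rel:
                    break
    return rel

def quotient(states, sim):
    """Merge simulation-equivalent states (those that simulate each other)."""
    classes = []
    for s in states:
        for c in classes:
            rep = next(iter(c))
            if (s, rep) in sim and (rep, s) in sim:
                c.add(s)
                break
        else:
            classes.append({s})
    return classes

# Toy automaton over {a}: q1 and q2 are simulation-equivalent and get merged.
states    = {"q0", "q1", "q2"}
alphabet  = {"a"}
delta     = {("q0", "a"): {"q1", "q2"},
             ("q1", "a"): {"q1"},
             ("q2", "a"): {"q2"}}
accepting = {"q1", "q2"}

sim = direct_simulation(states, alphabet, delta, accepting)
print(quotient(states, sim))   # e.g. [{'q0'}, {'q1', 'q2'}]
```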