152 research outputs found

    Constraints in Non-Boolean Contexts

    In high-level constraint modelling languages, constraints can occur in non-Boolean contexts: implicitly, in the form of partial functions, or more explicitly, in the form of constraints on local variables in non-Boolean expressions. Specifications using these facilities are often more succinct. However, these specifications are typically executed on solvers that only support questions in the form of existentially quantified conjunctions of constraints. We show how we can translate expressions with constraints appearing in non-Boolean contexts into conjunctions of ordinary constraints. The translation is clearly structured into constrained type elimination, local variable lifting and partial function elimination. We explain our approach in the context of the modelling language Zinc. An implementation of it is an integral part of our Zinc compiler.
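The flavour of this translation can be pictured with a small sketch. The function name `flatten_div_eq` and its output format are hypothetical, chosen only to illustrate how a partial function application in a Boolean context becomes an existentially quantified conjunction of ordinary constraints with an explicit definedness condition; this is not the Zinc compiler's actual API.

```python
# Illustrative sketch (not the Zinc compiler): eliminate a partial
# function by flattening "x div y = rhs" into a conjunction:
#   exists q:  y != 0  /\  q = x div y  /\  q = rhs
from dataclasses import dataclass
from itertools import count

_fresh = count()

@dataclass
class Flattened:
    fresh_var: str     # the introduced existential variable
    constraints: list  # conjunction of ordinary constraints

def flatten_div_eq(x: str, y: str, rhs: int) -> Flattened:
    """Rewrite 'x div y = rhs' into total, solver-friendly constraints."""
    q = f"q{next(_fresh)}"
    return Flattened(
        fresh_var=q,
        constraints=[
            f"{y} != 0",           # definedness condition for 'div'
            f"{q} = {x} div {y}",  # relational form of the application
            f"{q} = {rhs}",        # the original Boolean context
        ],
    )

print(" /\\ ".join(flatten_div_eq("x", "y", 2).constraints))
# prints: y != 0 /\ q0 = x div y /\ q0 = 2
```

The resulting conjunction contains only total constraints, so it is directly in the form that the abstract says solvers accept.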

    On the mechanisation of the logic of partial functions

    PhD Thesis. It is well known that partial functions arise frequently in formal reasoning about programs. A partial function may not yield a value for every member of its domain. Terms that apply partial functions thus may not denote, and coping with such terms is problematic in two-valued classical logic. A question is raised: how can reasoning about logical formulae that may contain references to terms that fail to denote (partial terms) be conducted formally? Over the years a number of approaches to coping with partial terms have been documented. Some of these approaches attempt to stay within the realm of two-valued classical logic, while others are based on non-classical logics. However, as yet there is no consensus on which approach is the best one to use. A comparison of numerous approaches to coping with partial terms is presented, based upon formal semantic definitions. One approach that has received attention over the years is the Logic of Partial Functions (LPF), the logic underlying the Vienna Development Method. LPF is a non-classical three-valued logic designed to cope with partial terms, where both terms and propositions may fail to denote. Rather than using concrete undefined values, undefinedness is treated as a "gap", that is, the absence of a defined value. LPF is based upon Strong Kleene logic, where the interpretations of the logical operators are extended to cope with truth-value "gaps". Over the years a large body of research and engineering has gone into the development of proof-based tool support for two-valued classical logic. This has created a major obstacle to the adoption of LPF, since such proof support cannot be carried over directly. Presently, there is a lack of direct proof support for LPF.
An aim of this work is to investigate the applicability of mechanised (automated) proof support for reasoning in LPF about logical formulae that may contain references to partial terms. The focus of the investigation is on a basic but fundamental classical proof procedure, resolution, and the associated technique of proof by contradiction. Advanced proof techniques are built on the foundation provided by these basic ones, so examining how they behave in LPF is the natural starting point for investigating proof support for the logic. The work highlights the issues that arise when applying these techniques in LPF and investigates the extent of the modifications needed to carry them over. It thereby provides the essential foundation for research into adapting advanced proof techniques to LPF.
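Since LPF's connectives follow the Strong Kleene tables, the abstract's notion of a truth-value "gap" is easy to sketch concretely. The snippet below is plain Python with `None` standing for a gap; it is an illustration of Strong Kleene semantics, not code from any LPF tool, and it shows why classical laws such as the excluded middle fail when a term does not denote.

```python
# Strong Kleene three-valued connectives; None stands for a
# truth-value "gap" (undefinedness), as in LPF.

def not3(p):
    return None if p is None else not p

def and3(p, q):
    # False dominates: False AND gap = False.
    if p is False or q is False:
        return False
    if p is None or q is None:
        return None
    return True

def or3(p, q):
    # True dominates: True OR gap = True.
    if p is True or q is True:
        return True
    if p is None or q is None:
        return None
    return False

# The classical law of the excluded middle, p OR NOT p, fails
# to hold when p is a gap:
print(or3(None, not3(None)))  # prints: None
```

This gap behaviour is precisely what blocks a naive transfer of classical resolution and proof by contradiction, which rely on every formula being either true or false.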

    Definiteness and determinacy

    This paper distinguishes between definiteness and determinacy. Definiteness is seen as a morphological category which, in English, marks a (weak) uniqueness presupposition, while determinacy consists in denoting an individual. Definite descriptions are argued to be fundamentally predicative, presupposing uniqueness but not existence, and to acquire existential import through general type-shifting operations that apply not only to definites, but also to indefinites and possessives. Through these shifts, argumental definite descriptions may become either determinate (and thus denote an individual) or indeterminate (functioning as an existential quantifier). The latter option is observed in examples like ‘Anna didn’t give the only invited talk at the conference’, which, on its indeterminate reading, implies that there is nothing in the extension of ‘only invited talk at the conference’. The paper also offers a resolution of the issue of whether possessives are inherently indefinite or definite, suggesting that, like indefinites, they do not mark definiteness lexically, but like definites, they typically yield determinate readings due to a general preference for the shifting operation that produces them.
    We thank Dag Haug, Reinhard Muskens, Luca Crnic, Cleo Condoravdi, Lucas Champollion, Stanley Peters, Roger Levy, Craige Roberts, Bert LeBruyn, Robin Cooper, Hans Kamp, Sebastian Lobner, Francois Recanati, Dan Giberman, Benjamin Schnieder, Rajka Smiljanic, Ede Zimmerman, as well as audiences at SALT 22 in Chicago, IATL 29 in Jerusalem, Going Heim in Connecticut, the Workshop on Bare Nominals and Non-Standard Definites in Utrecht, the University of Cambridge, the University of Gothenburg, the University of Konstanz, New York University, the University of Oxford, Rutgers University, the University of Southern California, Stanford University, and the University of Texas at Austin. Beaver was supported by NSF grants BCS-0952862 and BCS-1452663.
Coppock was supported by Swedish Research Council project 2009-1569 and Riksbankens Jubileumsfond's Pro Futura Scientia program, administered through the Swedish Collegium for Advanced Study.

    The proper treatment of egophoricity in Kathmandu Newari

    We develop a theory of so-called 'conjunct-disjunct marking', also known as 'egophoricity', in Kathmandu Newari. The signature pattern of egophoricity looks a bit like person agreement: in declaratives, a special marker goes on first person verbs, but not second or third person (e.g. 'I drank-EGO too much'). In interrogatives, however, the same marker goes on second person (e.g. 'Did you-EGO drink too much?'). This is called interrogative flip. Egophoric marking also interacts interestingly with the presence of evidential markers, and comes with an implication of knowing self-reference (emphasized in Newari by a restriction to volitional action). Our paper discusses two previous approaches, which we label indexical and evidential, and motivates our own account, which we label egophoric. Along the way, we develop a theory of how de se attitudes are communicated.
    http://eecoppock.info/egophoricity-oup.pdf
    Accepted manuscript

    CASL for CafeOBJ Users

    Casl is an expressive language for the algebraic specification of software requirements, design, and architecture. It has been developed by an open collaborative effort called CoFI (Common Framework Initiative for algebraic specification and development). Casl combines the best features of many previous mainstream algebraic specification languages, and it should provide a focus for future research and development in the use of algebraic techniques, as well as facilitating interoperability of existing and future tools. This paper presents Casl for users of the CafeOBJ framework, focusing on the relationship between the two languages. It first considers those constructs of CafeOBJ that have direct counterparts in Casl, and then (briefly) those that do not. It also motivates various Casl constructs that are not provided by CafeOBJ. Finally, it gives a concise overview of Casl, and illustrates how some CafeOBJ specifications may be expressed in Casl.

    Unifying Theories of Logics with Undefinedness

    A relational approach to the question of how different logics relate formally is described. We consider three three-valued logics, as well as classical and semi-classical logic. A fundamental representation of three-valued predicates is developed in the Unifying Theories of Programming (UTP) framework of Hoare and He. On this foundation, the five logics are encoded semantically as UTP theories. Several fundamental relationships are revealed using theory linking mechanisms, which corroborate results found in the literature, and which have direct applicability to the sound mixing of logics in order to prove facts. The initial development of the fundamental three-valued predicate model, on which the theories are based, is then applied to the novel systems-of-systems specification language CML, in order to reveal proof obligations which bridge a gap that exists between the semantics of CML and the existing semantics of one of its sub-languages, VDM. Finally, a detailed account is given of an envisioned model theory for our proposed structuring, which aims to lift the sentences of the five encoded logics to the second order, allowing them to range over elements of existing UTP theories of computation, such as designs and CSP processes. We explain how this would form a complete treatment of logic interplay expressed entirely inside UTP.

    Relational Expressions for Data Transformation and Computation

    Separate programming models for data transformation (declarative) and computation (procedural) impact programmer ergonomics, code reusability and database efficiency. To eliminate the necessity for two models or paradigms, we propose a small but high-leverage innovation: the introduction of complete relations into the relational database. Complete relations, and the discipline of constraint programming which concerns them, are founded on the same algebra as relational databases. We claim that by synthesising the relational database of Codd and Date with the results of the constraint programming community, the relational model holistically offers programmers a single declarative paradigm for both data transformation and computation, reusable code with computations that are indifferent to what is input and what is output, and efficient applications with the query engine optimising and parallelising all levels of data transformation and computation.
    Comment: 12 pages, 4 tables. To be published in the proceedings of the Shepherding Track of the 2023 Australasian Database Conference, Melbourne (Nov 1-3).
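The claim that complete relations make computations "indifferent to what is input and what is output" can be sketched with a toy example. The `plus` relation and its naive finite-domain enumeration below are illustrative assumptions, not the paper's proposal, which would embed such relations in the query engine itself rather than enumerating in application code.

```python
# Toy "complete relation": plus(a, b, c) meaning a + b = c over a small
# finite domain, queryable in any direction by leaving arguments unbound.
from itertools import product

DOMAIN = range(0, 10)

def plus(a=None, b=None, c=None):
    """Yield every (a, b, c) in the relation consistent with the given
    (possibly partial) bindings; None marks an unknown."""
    for x, y in product(DOMAIN, DOMAIN):
        z = x + y
        if ((a is None or a == x) and
            (b is None or b == y) and
            (c is None or c == z)):
            yield (x, y, z)

# Forward: 2 + 3 = ?
print(next(plus(a=2, b=3)))  # prints: (2, 3, 5)
# Backward: ? + 3 = 5 -- the same relation, different unknowns.
print(next(plus(b=3, c=5)))  # prints: (2, 3, 5)
```

The same relation answers both queries; in the paper's setting, the query engine would choose and optimise the evaluation strategy instead of this brute-force search.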