205 research outputs found

    On the complexity of evaluating multivariate polynomials

    Flexible Coinduction

    Recursive definitions of predicates by means of inference rules are ubiquitous in computer science. They are usually interpreted inductively or coinductively; however, there are situations where neither of these two options provides the expected meaning. In this thesis we propose a flexible form of coinductive interpretation, based on the notion of corules, able to deal with such situations.

    In the first part, we define this flexible coinductive interpretation as a fixed point of the standard inference operator lying between the least and the greatest one, and we provide several equivalent proof-theoretic semantics, combining well-founded and non-well-founded derivations. This flexible interpretation subsumes the standard inductive and coinductive ones and is naturally associated with a proof principle, which smoothly extends the usual coinduction principle.

    In the second part, we focus on the problem of modelling infinite behaviour by a big-step operational semantics, a paradigmatic example where neither induction nor coinduction provides the desired interpretation. In order to be independent of specific examples, we give a general but simple definition of what a big-step semantics is. We then extend it to also include observations, describing the interaction with the environment, thus providing a richer description of the behaviour of programs. In both settings, we show how corules can be successfully adopted to model infinite behaviour, by providing a construction that extends a big-step semantics, which as usual describes only finite computations, to a richer one including infinite computations as well. Finally, relying on these constructions, we provide a proof technique to show soundness of a predicate with respect to a big-step semantics.

    In the third part, we face the problem of providing algorithmic support for corules. To this end, we consider the restriction of the flexible coinductive interpretation to regular derivations, again analysing both proof-theoretic and fixed point semantics and developing proof techniques. Furthermore, we show that this flexible regular interpretation can be equivalently characterised inductively by a cycle detection mechanism, thus obtaining a sound and complete (abstract) (semi-)algorithm to check whether a judgement is derivable. Finally, we apply these results to extend logic programming with coclauses, the analogue of corules, defining declarative and operational semantics and proving that the latter is sound and complete with respect to the regular declarative model, thus obtaining concrete support for flexible coinduction.

    XXXIII Ciclo - Informatica e Ingegneria dei Sistemi / Computer Science and Systems Engineering. Dagnino, Francesco
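
    A standard motivating example from the literature on corules (sketched here in generic notation; the thesis's own formulation may differ) is the judgement maxElem(s, m), stating that m is the maximum element of a possibly infinite list s:

        \[
        \frac{}{\mathit{maxElem}(x{:}[\,],\, x)}
        \qquad
        \frac{\mathit{maxElem}(s,\, m)}{\mathit{maxElem}(x{:}s,\, \max(x, m))}
        \qquad
        \frac{}{\mathit{maxElem}(x{:}s,\, x)}\ \text{(corule)}
        \]

    The inductive interpretation derives nothing for infinite lists, while the coinductive one derives too much: maxElem(1:1:1:..., 42) acquires a circular derivation, since max(1, 42) = 42 lets the judgement justify itself. With the corule, a non-well-founded derivation is accepted only if each judgement in it also has a well-founded derivation that may additionally use the corule as an axiom, which forces the claimed maximum to actually occur in the list.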

    A generic theory of datatypes

    Towards weak bisimilarity on a class of parallel processes.

    A directed labelled graph may be used, at a certain abstraction, to represent a system's behaviour: its nodes are the possible states the system can be in, and its arrows are labelled by the actions required to move from one state to another. Processes are, for our purposes, synonymous with these labelled transition systems. On this view, a well-studied notion of behavioural equivalence is bisimilarity: processes are bisimilar when whatever one can do, the other can match, while maintaining bisimilarity. Weak bisimilarity accommodates a notion of silent or internal action. A natural class of labelled transition systems is given by considering the derivations of commutative context-free grammars in Greibach Normal Form: the Basic Parallel Processes (BPP), introduced by Christensen in his PhD thesis. They represent a simple model of communication-free parallel computation, and for them bisimilarity is PSPACE-complete. Weak bisimilarity is believed to be decidable, but only partial results exist. Non-bisimilarity is trivially semidecidable on BPP (each process has finitely many next states, so the state space can be explored until a mismatch is found); the research effort in proving it fully decidable centred on semideciding the positive case. Conversely, weak bisimilarity has been known to be semidecidable for a decade, but no method for semideciding inequivalence has yet been found - the presence of silent actions allows a process to have infinitely many possible successor states, so simple exploration is no longer possible.

    Weak bisimilarity is defined coinductively, but may be approached, and even reached, by its inductively defined approximants. Game-theoretically, these change the Defender's winning condition from survival for infinitely many turns to survival for κ turns, for an ordinal κ, creating a hierarchy of relations successively closer to full weak bisimilarity. It can be seen that on any set of processes this approximant hierarchy collapses: there will always exist some κ such that the κth approximant coincides with weak bisimilarity. One avenue towards the semidecidability of non-weak-bisimilarity is the decidability of its approximants. It is a long-standing conjecture that on BPP the weak approximant hierarchy collapses at ω × 2. If true, in order to semidecide inequivalence it would suffice to be able to decide the ω + n approximants. Again, there exist only limited results: the finite approximants are known to be decidable, but no progress has been made on the ωth approximant, and thus far the best proven bound on collapse is ω₁^CK (the least non-recursive ordinal). We significantly improve this bound to ω^k × 2 (for a k-variable BPP); a key part of the proof is a novel constructive version of Dickson's Lemma.

    The distances-to-disablings, or DD, functions were invented by Jancar in order to prove the PSPACE-completeness of bisimilarity on BPP. At the end of his paper is a conjecture that weak bisimilarity might be amenable to the theory, a suggestion we have taken up. We generalise and extend the DD functions, widening the subset of BPP on which weak bisimilarity is known to be computable, and creating a new means of testing inequivalence. The thesis ends with two conjectures: first, that our extended DD functions in fact capture weak bisimilarity on full BPP (a corollary of which would be to bring the bound on approximant collapse down to ω × 2); and second, that they are computable, which would enable us to semidecide inequivalence, and hence give us the decidability of weak bisimilarity.
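
    To make the approximant machinery concrete, the following minimal sketch (notation and encoding ours, purely illustrative) computes weak bisimilarity by iterated refinement on a finite labelled transition system, where the hierarchy collapses at a finite stage. BPP are infinite-state, which is precisely why this naive method does not apply to them and the questions above are hard.

        import Data.List (nub)

        type State = Int
        data Label = Tau | Act Char deriving (Eq, Show)
        type LTS   = [(State, Label, State)]

        states :: LTS -> [State]
        states lts = nub (concat [[s, t] | (s, _, t) <- lts])

        -- silent closure: every state reachable by zero or more tau steps
        tauReach :: LTS -> State -> [State]
        tauReach lts s = go [s]
          where
            go xs =
              let xs' = nub (xs ++ [t | (x, Tau, t) <- lts, x `elem` xs])
              in  if xs' == xs then xs else go xs'

        -- weak transitions: s =a=> t is tau* a tau*; s =tau=> t is just tau*
        weak :: LTS -> State -> Label -> [State]
        weak lts s Tau = tauReach lts s
        weak lts s a   = nub [ t' | s' <- tauReach lts s
                                  , (x, b, t) <- lts, x == s', b == a
                                  , t' <- tauReach lts t ]

        -- one approximant step: (s, t) survives if every strong move of
        -- either side is answered by a weak move of the other into rel
        refine :: LTS -> [(State, State)] -> [(State, State)]
        refine lts rel = [ (s, t) | (s, t) <- rel, left s t, right s t ]
          where
            left  s t = and [ any (\t' -> (s', t') `elem` rel) (weak lts t a)
                            | (x, a, s') <- lts, x == s ]
            right s t = and [ any (\s' -> (s', t') `elem` rel) (weak lts s a)
                            | (x, a, t') <- lts, x == t ]

        -- on a finite LTS the approximants stabilise after finitely many steps
        weakBisim :: LTS -> [(State, State)]
        weakBisim lts = go [ (s, t) | s <- sts, t <- sts ]
          where
            sts = states lts
            go rel = let rel' = refine lts rel
                     in  if rel' == rel then rel else go rel'

        -- 0 -tau-> 1 -a-> 2 versus 3 -a-> 4: states 0 and 3 are weakly bisimilar
        main :: IO ()
        main = print ((0, 3) `elem` weakBisim demo)  -- True
          where demo = [(0, Tau, 1), (1, Act 'a', 2), (3, Act 'a', 4)]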

    Deciding the consistency of non-linear real arithmetic constraints with a conflict driven search using cylindrical algebraic coverings

    We present a new algorithm for determining the satisfiability of conjunctions of non-linear polynomial constraints over the reals, which can be used as a theory solver for satisfiability modulo theories (SMT) solving for non-linear real arithmetic. The algorithm is a variant of Cylindrical Algebraic Decomposition (CAD) adapted for satisfiability, where solution candidates (sample points) are constructed incrementally, either until a satisfying sample is found or until sufficiently many samples have been ruled out to conclude unsatisfiability. The choice of samples is guided by the input constraints and previous conflicts. The key idea behind our new approach is to start with a partial sample, demonstrate that it cannot be extended to a full sample, and, from the reasons for this failure, rule out a larger space around the partial sample; these excluded regions build up incrementally into a cylindrical algebraic covering of the space. There are similarities with the incremental variant of CAD, the NLSAT method of Jovanovic and de Moura, and the NuCAD algorithm of Brown; but we present worked examples and experimental results on a preliminary implementation to demonstrate the differences from these, and the benefits of the new approach.
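
    As a deliberately tiny illustration of the sample-and-exclude loop (a one-dimensional toy in our own notation; the real algorithm works over R^n with polynomial root isolation, none of which is attempted here), assume each constraint can report, for a failed sample, an open interval around it that is entirely in conflict. Satisfiability is established by finding an uncovered sample that satisfies every constraint; unsatisfiability is concluded exactly when the learned intervals cover the whole line.

        import Control.Monad (msum)
        import Data.List (sort)

        data Ext = NegInf | Fin Rational | PosInf deriving (Eq, Ord, Show)
        type Ivl = (Ext, Ext)          -- an open interval (lo, hi), lo < hi

        -- Nothing: the sample satisfies the constraint; Just ivl: it fails,
        -- and the whole open interval ivl around the sample is in conflict
        type Constraint = Rational -> Maybe Ivl

        -- a point not covered by a union of open intervals, if one exists;
        -- "reach r" means the open ray (-inf, r) is covered so far
        findSample :: [Ivl] -> Maybe Rational
        findSample ivls = go NegInf (sort ivls)
          where
            go PosInf  _   = Nothing                    -- whole line covered
            go (Fin r) ivs | gapAt (Fin r) ivs = Just r -- r itself is missed
            go NegInf  []  = Just 0                     -- nothing excluded yet
            go NegInf  ((Fin l, _) : _) = Just l        -- open intervals miss l
            go reach   ((_, hi) : rest) = go (max reach hi) rest
            gapAt r ivs = case ivs of
              []            -> True
              ((lo, _) : _) -> lo >= r

        -- the covering loop: sample, check, exclude, repeat
        solve :: [Constraint] -> Either [Ivl] Rational
        solve cs = loop []
          where
            loop excl = case findSample excl of
              Nothing -> Left excl              -- covering complete: unsat
              Just x  -> case msum [c x | c <- cs] of
                Nothing  -> Right x             -- x satisfies everything: sat
                Just ivl -> loop (ivl : excl)   -- learn a conflict region

        -- hand-written "constraints" with their conflict explanations
        geq1, leqm1, sq9 :: Constraint
        geq1  x = if x >= 1   then Nothing else Just (NegInf, Fin 1)
        leqm1 x = if x <= -1  then Nothing else Just (Fin (-1), PosInf)
        sq9   x = if x*x <= 9 then Nothing
                  else Just (if x > 0 then (Fin 3, PosInf) else (NegInf, Fin (-3)))

        main :: IO ()
        main = do
          print (solve [geq1, sq9])    -- Right (1 % 1): a satisfying sample
          print (solve [geq1, leqm1])  -- Left [...]: intervals cover the line

    The toy keeps only the control structure of the covering approach: pick a sample outside the excluded regions, either succeed or learn a new conflict region, and stop when those regions form a covering.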

    Implementations of process synchronisation, and their analysis

    The IO- and OI-hierarchies

    An analysis of recursive procedures in ALGOL 68 with finite modes shows that a denotational semantics of this language can be described at the level of program schemes using a typed λ-calculus with fixed-point operators. In the first part of this paper, we derive classical schematological theorems for the resulting class of level-n schemes. In part two, we investigate the language families obtained by call-by-value and call-by-name interpretation of level-n schemes over the algebra of formal languages. It is proved that differentiating according to the functional level of recursion leads to two infinite hierarchies of recursive languages, the IO- and OI-hierarchies, which can be characterized as canonical extensions of the regular, context-free, and IO- and OI-macro languages, respectively. Sufficient conditions are derived to establish strictness of IO-like hierarchies. Finally, we derive that recursion on higher types induces an infinite hierarchy of control structures, by proving that level-n schemes are strictly less powerful than level-(n+1) schemes.
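
    The classic toy separation of the two substitution disciplines over the algebra of formal languages is the nondeterministic scheme S = F(a + b), F(x) = x · x: under OI (call-by-name) each occurrence of x is resolved independently, while under IO (call-by-value) x is fixed to one word first and then copied. A minimal sketch (encoding ours, for illustration only):

        type Lang = [String]

        fOI :: Lang -> Lang  -- OI / call-by-name: the two copies of x vary independently
        fOI x = [u ++ v | u <- x, v <- x]

        fIO :: Lang -> Lang  -- IO / call-by-value: x is fixed to one word, then copied
        fIO x = [w ++ w | w <- x]

        main :: IO ()
        main = do
          print (fOI ["a", "b"])  -- ["aa","ab","ba","bb"]
          print (fIO ["a", "b"])  -- ["aa","bb"]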

    Constraining lexical phonology: evidence from English vowels

    Standard Generative Phonology is inadequate in at least three respects: it is unable to curtail the abstractness of underlying forms and the complexity of derivations in any principled way; the assumption that related dialects share an identical system of underlying representations leads to an inadequate account of dialect variation; and no coherent model for the incorporation of sound changes into the synchronic grammar is proposed. The purpose of this thesis is to demonstrate that a well-constrained model of Lexical Phonology, which is a generative, derivational successor of the Standard Generative model, need not suffer from these inadequacies. Chapter 1 provides an outline of the development and characteristics of Lexical Phonology and Morphology.

    In Chapters 2 and 3, the model of Lexical Phonology proposed for English by Halle and Mohanan (1985) is revised: the lexical phonology is limited to two levels; substantially more concrete underlying vowel systems are proposed for RP and General American; and radically revised formulations of certain modern English phonological rules, including the Vowel Shift Rule and j-Insertion, are suggested. These constrained analyses and rules are found to be consistent with internal data, and with external evidence from a number of sources, including dialect differences, diachrony, speech errors and psycholinguistic experiments.

    In Chapters 4-6, a third reference accent, Scottish Standard English, is introduced. Chapter 4 outlines the diachronic development and synchronic characteristics of this accent and the related Scots dialects. Chapters 5 and 6 provide a synchronic and diachronic account of the Scottish Vowel Length Rule (SVLR). I argue that SVLR represents a Scots-specific phonologisation of part of a pan-dialectal postlexical lengthening rule; the postlexical rule remains productive in all varieties of English, while SVLR has acquired certain properties of a lexical rule and has been relocated into the lexicon. In becoming lexical, SVLR has neutralised the long/short distinction for Scots vowels, so that synchronically the underlying vowel system of Scots/SSE is organised differently from that of other varieties of English. It is established that a constrained lexicalist model necessitates the recognition of underlying dialect variation; demonstrates a connection of lexical and postlexical rules with two distinct types of sound change; gives an illuminating account of the transition of sound changes to synchronic phonological rules; and permits the characterisation of dialect and language variation as a continuum.

    Coalgebraic modelling of timed processes
