907 research outputs found

    A Direct-Style Effect Notation for Sequential and Parallel Programs

    Modeling sequential and parallel composition of effectful computations has been investigated in a variety of languages for a long time. In particular, the popular do-notation provides a lightweight effect embedding for any instance of a monad, while idiom bracket notation provides an embedding for applicatives. First, while monads force effects to be executed sequentially, ignoring potential for parallelism, applicatives do not support sequential effects; composing sequential with parallel effects remains an open problem. This is all the more pressing because real programs consist of a combination of both sequential and parallel segments. Second, common notations do not support invoking effects in direct style, instead forcing a rigid structure upon the code. In this paper, we propose a mixed applicative/monadic notation that retains parallelism where possible but allows sequentiality where necessary. We leverage a direct-style notation in which sequentiality or parallelism is derived from the structure of the code. We provide a mechanisation of our effectful language in Coq and prove that our compilation approach retains the parallelism of the source program.
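
    The sequential-versus-parallel distinction the abstract draws can be illustrated outside the paper's setting. The sketch below (an analogy only, not the paper's notation; all names are invented) uses Python's asyncio, where a data dependency between awaits forces sequential, monad-like execution, while independent computations can be composed applicative-style with gather:

```python
import asyncio

async def fetch(x):
    # stand-in for an effectful computation
    await asyncio.sleep(0)
    return x * 2

async def sequential():
    a = await fetch(1)     # the next effect consumes `a` ...
    return await fetch(a)  # ... so it must run after it (monadic style)

async def parallel():
    # no data dependency between the two effects, so they may run
    # concurrently (applicative style)
    a, b = await asyncio.gather(fetch(1), fetch(2))
    return a + b

print(asyncio.run(sequential()), asyncio.run(parallel()))
```

    Here the programmer chooses between the two styles explicitly (await vs. gather); the notation proposed in the paper instead derives that choice from the data dependencies of direct-style code.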

    A Core Calculus for Documents

    Passive documents and active programs now widely comingle. Document languages include Turing-complete programming elements, and programming languages include sophisticated document notations. However, there are no formal foundations that model these languages. This matters because the interaction between document and program can be subtle and error-prone. In this paper we describe several such problems, then taxonomize and formalize document languages as levels of a document calculus. We employ the calculus as a foundation for implementing complex features such as reactivity, as well as for proving theorems about the boundary of content and computation. We intend for the document calculus to provide a theoretical basis for new document languages, and to assist designers in cleaning up the unsavory corners of existing languages. Comment: Published at POPL 202
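
    To make the content/computation boundary concrete, here is a toy template evaluator (an invented example, not the paper's document calculus): holes of the form {expr} are program fragments evaluated against an environment, everything else is document content, and "{{" escapes a literal brace. Even this tiny language must make delicate decisions about escaping, which is the kind of unsavory corner the paper formalizes:

```python
import re

def render(template, env):
    # "{{" escapes a literal "{"; "{expr}" evaluates expr against env
    def subst(m):
        if m.group(0) == "{{":
            return "{"
        return str(eval(m.group(1), {}, dict(env)))
    return re.sub(r"\{\{|\{([^{}]+)\}", subst, template)

print(render("Dear {name}, you owe {2*21} euros.", {"name": "Ada"}))
```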

    Towards specification formalisms for data warehouse systems design

    Text in English with abstracts and keywords in English, Afrikaans and Setswana. Several studies have been conducted on formal methods; however, few have used formal methods in the data warehousing area, specifically in system development. Many reasons may be linked to this, such as that few experts know how to use them. Formal methods have been used in software development by way of mathematical notations. Despite the advantages of using formal methods in software development, their application in the data warehousing area has been restricted compared with the use of informal (natural language) and semi-formal notations. This research aims to determine the extent to which formal methods may mitigate the failures that most often occur in the development of data warehouse systems. As part of this research, an enhanced framework was proposed to facilitate the usage of formal methods in the development of such systems. The enhanced framework focuses mainly on the requirements definition, the Unified Modelling Language (UML) constructs, the Star model and formal specification. A medium-sized case study of a data mart was considered to validate the enhanced framework. This dissertation also discusses the object-orientation paradigm and UML notations. The requirements specification of a data warehouse system is presented in natural language and in formal notation to show how a formal specification may be derived from natural language, to UML structures, and thereafter to a Z specification, using an established strategy as a guideline for constructing the Z specification. School of Computing, M. Sc. (Computing)

    Simulation of Turing machines with analytic discrete ODEs: FPTIME and FPSPACE over the reals characterised with discrete ordinary differential equations

    We prove that functions over the reals computable in polynomial time can be characterised using discrete ordinary differential equations (ODEs), also known as finite differences. We also provide a characterisation of functions computable in polynomial space over the reals. In particular, this covers space complexity, whereas existing characterisations were only able to cover time complexity and were restricted to functions over the integers. We prove, furthermore, that no artificial sign or test function is needed, even for time complexity. At a technical level, this is obtained by proving that Turing machines can be simulated with analytic discrete ordinary differential equations. We believe this result opens the way to many applications, as it opens the possibility of programming with ODEs with a well-understood underlying time and space complexity.
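
    As a small, elementary illustration of the finite-difference view (not the paper's construction), the discrete derivative (Δf)(x) = f(x+1) − f(x) plays the role of f′(x); applying Δ repeatedly to a degree-d polynomial yields the constant d!, after which all further differences vanish:

```python
def delta(f):
    # discrete derivative: (delta f)(x) = f(x + 1) - f(x)
    return lambda x: f(x + 1) - f(x)

square = lambda x: x * x
print([delta(square)(x) for x in range(4)])  # 2x + 1 at x = 0..3

cube = lambda x: x ** 3
d3 = delta(delta(delta(cube)))               # third difference of x^3
print([d3(x) for x in range(4)])             # constant 3! = 6
```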

    A Metatheoretic Analysis of Subtype Universes

    Subtype universes were initially introduced as an expressive mechanisation of bounded quantification extending a modern type theory. In this paper, we consider a dependent type theory equipped with coercive subtyping and a generalisation of subtype universes. We prove results regarding the metatheoretic properties of subtype universes, such as consistency and strong normalisation. We analyse the causes of undecidability in bounded quantification, and discuss how coherency impacts the metatheoretic properties of theories implementing bounded quantification. We describe the effects of certain choices of subtyping inference rules on the expressiveness of a type theory, and examine various applications in natural language semantics, programming languages, and mathematics formalisation.

    Meta-ontology fault detection

    Ontology engineering is the field, within knowledge representation, concerned with using logic-based formalisms to represent knowledge, typically in moderately sized knowledge bases called ontologies. How best to develop, use and maintain these ontologies has produced a relatively large body of formal, theoretical and methodological research. One subfield of ontology engineering is ontology debugging, which is concerned with preventing, detecting and repairing errors (or, more generally, pitfalls, bad practices or faults) in ontologies. Owing to the logical nature of ontologies and, in particular, entailment, these faults are often hard to prevent and detect, and have far-reaching consequences. This makes ontology debugging one of the principal challenges to more widespread adoption of ontologies in applications. Another important subfield of ontology engineering is ontology alignment: combining multiple ontologies to produce more powerful results than the simple sum of the parts. Ontology alignment compounds the difficulties of ontology debugging by introducing, propagating and exacerbating faults in ontologies. A relevant feature of the field of ontology debugging is that, because of these challenges, research within it is usually notably constrained in scope, focusing on particular aspects of the problem, on certain subdomains, or on specific methodologies. Similarly, the approaches are often ad hoc and related to other approaches only at a conceptual level. There are no well-established and widely used formalisms, definitions or benchmarks forming a foundation for the field.
    In this thesis, I tackle the problem of ontology debugging from a more abstract point of view than usual, surveying the existing literature, extracting common ideas, and especially focusing on formulating them in a common language and under a common approach. Meta-ontology fault detection is a framework for detecting faults in ontologies that uses semantic fault patterns to express, in a systematic way, schematic entailments that typically indicate faults. The formalism I developed to represent these patterns is called existential second-order query logic (abbreviated ESQ logic). I reformulated a large proportion of the ideas present in existing research into this framework, as patterns in ESQ logic, providing a pattern catalogue. Most of the work during my PhD was spent designing and implementing an algorithm to automatically and effectively detect arbitrary ESQ patterns in arbitrary ontologies. The result is what we call minimal commitment resolution for ESQ logic: an extension of first-order resolution that draws on important ideas from higher-order unification and implements a novel approach to unification problems using dependency graphs. I have proven important theoretical properties of this algorithm, such as its soundness, its termination (in a certain sense and under certain conditions) and its fairness, or completeness, in the enumeration of infinite spaces of solutions. Moreover, I have produced an implementation of minimal commitment resolution for ESQ logic in Haskell that passes all unit tests and produces non-trivial results on small examples. However, attempts to apply the algorithm to examples of a more realistic size have proven unsuccessful, with computation times that exceed our tolerance levels.
    In this thesis I provide details of the challenges faced in this regard, together with other, successful forms of qualitative evaluation of the meta-ontology fault detection approach. I also discuss what I believe are the main causes of the computational-feasibility problems, ideas on how to overcome them, and directions for future work that could use the results of the thesis to help produce foundational formalisms, ideas and approaches to ontology debugging capable of properly combining the existing, constrained research. It is unclear to me whether minimal commitment resolution for ESQ logic can, in its current shape, be implemented efficiently, but I believe that, at the very least, the theoretical and conceptual underpinnings presented in this thesis will be useful for producing more foundational results in the field.
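
    For a flavour of the kind of fault such pattern-based detection targets (a toy invented example, far simpler than ESQ logic and its resolution procedure), consider the classic unsatisfiable-class fault: a class entailed to be a subclass of two declared-disjoint classes. Detecting it requires looking at entailed rather than merely asserted subclass links:

```python
def entailed_superclasses(cls, subclass_of):
    # transitive closure of the asserted subclass edges
    seen, stack = {cls}, [cls]
    while stack:
        for parent in subclass_of.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def unsatisfiable(subclass_of, disjoint):
    # classes entailed to lie under two declared-disjoint classes
    faults = []
    for cls in subclass_of:
        supers = entailed_superclasses(cls, subclass_of)
        if any(a in supers and b in supers for a, b in disjoint):
            faults.append(cls)
    return faults

subclass_of = {"Dolphin": ["Mammal", "Fish"],
               "Mammal": ["Animal"], "Fish": ["Animal"]}
print(unsatisfiable(subclass_of, [("Mammal", "Fish")]))
```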

    Proof-theoretic Semantics for Intuitionistic Multiplicative Linear Logic

    This work is the first exploration of proof-theoretic semantics for a substructural logic. It focuses on the base-extension semantics (B-eS) for intuitionistic multiplicative linear logic (IMLL). The starting point is a review of Sandqvist’s B-eS for intuitionistic propositional logic (IPL), for which we propose an alternative treatment of conjunction that takes the form of the generalized elimination rule for the connective. The resulting semantics is shown to be sound and complete. This motivates our main contribution, a B-eS for IMLL, in which the definitions of the logical constants all take the form of their elimination rule and for which soundness and completeness are established.

    Siegener Beiträge zur Geschichte und Philosophie der Mathematik 2022

    The essays collected in this sixteenth volume of SieB (Siegener Beiträge zur Geschichte und Philosophie der Mathematik) document the plurality of topics, perspectives and methods bearing on the overarching theme of the history and philosophy of mathematics, a plurality that has been a concern of the series in its preceding volumes. The Siegener Beiträge offer a forum for discourse in the philosophy and history of mathematics. Two substantive aspects are central: 1. Philosophy and history of mathematics should fruitfully unsettle one another: without reference to actually existing mathematics and its history, philosophical questioning about mathematics runs empty; without reference to systematic reflection on mathematics, engagement with the history of mathematics becomes blind. 2. History makes an awareness of contingency possible, while philosophical reflection provokes contextualisation. This raises questions about the role of mathematics in the history of science, and also about the societal role of mathematics and its historical conditions. Contents:
    Harald Boehme: Von Theodoros bis Speusippos. Zur Entdeckung des Inkommensurablen sowie der Seiten- und Diagonalzahlen
    Jasmin Özel: Diagrammatisches Denken bei Euklid
    Christian Hugo Hoffmann: Der Hauptsatz in der Ars conjectandi: Interpretationen von Bernoullis Beiträgen zu den Anfängen der mathematischen Wahrscheinlichkeitstheorie
    Jens Lemanski: Schopenhauers Logikdiagramme in den Mathematiklehrbüchern Adolph Diesterwegs
    Dolf Rami: Frege über Merkmale von Begriffen
    Daniel Koenig: Der Raum als Reihenbegriff – Ernst Cassirers Deutung der Geometrieentwicklung des 19. Jahrhunderts
    Renate Tobies: Zum 100-jährigen Jubiläum des Ernst Abbe-Gedächtnispreises
    Štefan Porubský: Štefan Schwarz und die Entstehung der Halbgruppentheorie
    Stephan Berendonk: Ein dialektischer Weg zur Summe der Kubikzahlen
    Felicitas Pielsticker & Ingo Witzke: Devilish prime factorization – fundamental theorem of arithmetic

    From transformational grammar to constraint-based approaches

    Synopsis: This book introduces formal grammar theories that play a role in current linguistic theorizing (Phrase Structure Grammar, Transformational Grammar/Government & Binding, Generalized Phrase Structure Grammar, Lexical Functional Grammar, Categorial Grammar, Head-Driven Phrase Structure Grammar, Construction Grammar, Tree Adjoining Grammar). The key assumptions are explained, and it is shown how each theory treats arguments and adjuncts, the active/passive alternation, local reorderings, verb placement, and fronting of constituents over long distances. The analyses are explained with German as the object language. The second part of the book compares these approaches with respect to their predictions regarding language acquisition and psycholinguistic plausibility. The nativism hypothesis, which assumes that humans possess genetically determined, innate, language-specific knowledge, is critically examined, and alternative models of language acquisition are discussed. The second part then addresses controversial issues of current theory building, such as whether flat or binary-branching structures are more appropriate, whether constructions should be treated on the phrasal or the lexical level, and whether abstract, non-visible entities should play a role in syntactic analyses. It is shown that the analyses suggested in the respective frameworks are often translatable into each other. The book closes with a chapter showing how properties common to all languages, or to certain classes of languages, can be captured. This book is a new edition of http://langsci-press.org/catalog/book/25, http://langsci-press.org/catalog/book/195, http://langsci-press.org/catalog/book/255, and http://langsci-press.org/catalog/book/287. Fifth revised and extended edition.
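
    The phrase-structure-grammar starting point of the book can be illustrated with a miniature context-free grammar and the standard CYK recognition algorithm (grammar and sentence are invented toy examples, in English rather than the book's German):

```python
from itertools import product

# binary-branching rules in Chomsky normal form
rules = {("NP", "VP"): "S", ("V", "NP"): "VP", ("Det", "N"): "NP"}
lexicon = {"the": "Det", "dog": "N", "cat": "N", "sees": "V"}

def recognises(words):
    n = len(words)
    # table[i][j]: set of categories spanning words[i..j]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i].add(lexicon[w])
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):
                for a, b in product(table[i][k], table[k + 1][j]):
                    if (a, b) in rules:
                        table[i][j].add(rules[(a, b)])
    return "S" in table[0][n - 1]

print(recognises("the dog sees the cat".split()))
```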

    A computational framework of human causal generalization

    How do people decide how general a causal relationship is, in terms of the entities or situations it applies to? How can people make these difficult judgments in a fast, efficient way? To address these questions, I designed a novel online experiment interface that systematically measures how people generalize causal relationships, and developed a computational modeling framework that combines program induction (about the hidden causal laws) with non-parametric category inference (about their domains of influence) to account for unique patterns in human causal generalization. In particular, by introducing adaptor grammars to standard Bayesian-symbolic models, this framework formalizes conceptual bootstrapping as a general online inference algorithm that gives rise to compositional causal concepts. Chapter 2 investigates one-shot causal generalization, where I find that participants’ inferences are shaped by the order of the generalization questions they are asked. Chapter 3 looks into few-shot cases and finds an asymmetry in the formation of causal categories: participants preferentially identify causal laws with features of the agent objects rather than of the recipients, but this asymmetry disappears when visual cues to causal agency are challenged. The proposed modeling approach can explain both the generalization-order effect and the causal asymmetry, outperforming a naïve Bayesian account while providing a computationally plausible mechanism for real-world causal generalization. Chapter 4 further extends this framework with adaptor grammars, using a dynamic conceptual repertoire that is enriched over time, allowing the model to cache and later reuse elements of earlier insights. This model predicts systematically different learned concepts when the same evidence is processed in different orders, and across four experiments people’s learning outcomes indeed closely resembled this model’s, differing significantly from those of alternative accounts.