2,090 research outputs found

    A foundation for synthesising programming language semantics

    Programming or scripting languages used in real-world systems are seldom designed with a formal semantics in mind from the outset. Therefore, the first step in developing well-founded analysis tools for these systems is to reverse-engineer a formal semantics, which can take months or years of effort. Could we automate this process, at least partially? Though desirable, automatically reverse-engineering semantics rules from an implementation is very challenging, as found by Krishnamurthi, Lerner and Elberty. They propose automatically learning desugaring translation rules, mapping the language whose semantics we seek to a simplified core version whose semantics are much easier to write. The present thesis contains an analysis of their challenge, as well as the first steps towards a solution. Scaling methods with the size of the language is very difficult due to state space explosion, so this thesis proposes an incremental approach to learning the translation rules. I present a formalisation that both clarifies the informal description of the challenge by Krishnamurthi et al. and re-formulates the problem, shifting the focus to the conditions for incremental learning. The central definition of the new formalisation is the desugaring extension problem, i.e., extending a set of established translation rules by synthesising new ones. In a synthesis algorithm, the choice of search space is important and non-trivial, as it needs to strike a good balance between expressiveness and efficiency. The rest of the thesis focuses on defining search spaces for translation rules via typing rules. Comparing search spaces requires two prerequisites. The first is a series of benchmarks: a set of source and target languages equipped with intended translation rules between them. The second is an enumerative synthesis algorithm for efficiently enumerating typed programs. I show how algebraic enumeration techniques can be applied to enumerating well-typed translation rules, and discuss the properties a type system should have to ensure that typed programs are efficiently enumerable. The thesis presents and empirically evaluates two search spaces. A baseline search space yields the first practical solution to the challenge. The second search space is based on a natural heuristic for translation rules: requiring that each variable be used exactly once. I present a linear type system designed to efficiently enumerate translation rules in which this heuristic is enforced. Through informal analysis and empirical comparison to the baseline, I then show that using linear types can speed up the synthesis of translation rules by an order of magnitude.
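    To make the object of study concrete, the following is a minimal Python sketch of a single desugaring translation rule. The mini-language (a surface "or" plus a core with let/if) and the rule itself are hypothetical illustrations, not the thesis's actual benchmarks; a synthesiser in the thesis's setting would search a space of such rules rather than have them written by hand.

```python
# A minimal sketch of a single desugaring translation rule: one surface
# construct is mapped to core-language constructs. Mini-language and rule
# are hypothetical illustrations only.
from dataclasses import dataclass
from itertools import count

@dataclass
class Or:   # surface construct: e1 || e2
    left: object
    right: object

@dataclass
class Let:  # core construct: let x = e1 in e2
    name: str
    bound: object
    body: object

@dataclass
class If:   # core construct: if c then t else f
    cond: object
    then: object
    other: object

@dataclass
class Var:
    name: str

_fresh = count(1)  # supply of fresh variable names

def desugar(expr):
    """One rule: e1 || e2  ~>  let t = e1 in (if t then t else e2).
    The fresh variable t prevents e1 from being evaluated twice."""
    if isinstance(expr, Or):
        t = f"_t{next(_fresh)}"
        return Let(t, desugar(expr.left),
                   If(Var(t), Var(t), desugar(expr.right)))
    return expr  # variables, literals and core forms pass through unchanged

print(desugar(Or(Var("a"), Var("b"))))
```

    Note that the rule mentions each subterm of the surface construct (expr.left, expr.right) exactly once on the right-hand side, which is plausibly the once-only usage pattern that the abstract's linearity heuristic refers to.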

    Cyclic proof systems for modal fixpoint logics

    This thesis is about cyclic and ill-founded proof systems for modal fixpoint logics, with and without explicit fixpoint quantifiers. Cyclic and ill-founded proof theory allows proofs with infinite branches or paths, as long as they satisfy correctness conditions ensuring the validity of the conclusion. In this dissertation we design several cyclic and ill-founded systems: a cyclic one for the weak Grzegorczyk modal logic K4Grz, based on our explanation of the phenomenon of cyclic companionship; and ill-founded and cyclic ones for the full computation tree logic CTL* and the intuitionistic linear-time temporal logic iLTL. All systems are cut-free, and the cyclic ones for K4Grz and iLTL have fully finitary correctness conditions. Lastly, we use a cyclic system for the modal mu-calculus to obtain a proof of the uniform interpolation property for the logic that differs from the original, automata-based one.
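    For context on why proofs here may be cyclic (standard background, not specific to the thesis's calculi): fixpoint formulas satisfy the unfolding equivalences below, so a naive proof attempt unfolds them forever; cyclic systems instead let a branch close by looping back to an earlier identical sequent, its companion.

```latex
% Standard fixpoint unfolding equivalences in the modal mu-calculus,
% stated here only as background for the abstract above:
\[
  \mu X.\,\varphi \;\equiv\; \varphi[\mu X.\,\varphi / X]
  \qquad
  \nu X.\,\varphi \;\equiv\; \varphi[\nu X.\,\varphi / X]
\]
```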

    Evaluating Symbolic AI as a Tool to Understand Cell Signalling

    The diverse and highly complex nature of modern phosphoproteomics research produces a high volume of data. Chemical phosphoproteomics, especially, is amenable to a variety of analytical approaches. In this thesis we evaluate novel Symbolic AI based algorithms as potential tools in the analysis of cell signalling. Initially we developed a first-order, deductive, logic-based model. This allowed us to identify previously unreported inhibitor-kinase relationships which could offer novel therapeutic targets for further investigation. Following this, we used the probabilistic reasoning of ProbLog to augment the aforementioned Prolog-based model with an intuitively calculated degree of belief. This allowed us to rank previous associations while also further increasing our confidence in already established predictions. Finally, we applied our methodology to a Saccharomyces cerevisiae gene perturbation phosphoproteomics dataset. In this context we were able to confirm the majority of ground truths, i.e. gene deletions, as having taken place as intended. For the remaining deletions, again using a purely symbolic approach, we were able to provide predictions on the rewiring of kinase-based signalling networks following kinase-encoding gene deletions. The explainable, human-readable and white-box nature of this approach was highlighted; however, its brittleness due to missing, inconsistent or conflicting background knowledge was also examined.
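    As a rough illustration of the ProbLog-style degree-of-belief computation mentioned above: each probabilistic fact is an independent coin flip, and a query's probability is the total weight of the possible worlds in which it holds. The facts, rule and probabilities in this brute-force Python sketch are hypothetical stand-ins, not the thesis's actual model.

```python
# Brute-force sketch of ProbLog-style inference by enumerating possible
# worlds. Real ProbLog uses knowledge compilation rather than enumeration.
from itertools import product

facts = [("inhibits(drug1, kinase1)", 0.8),
         ("phosphorylates(kinase1, site1)", 0.6)]

def query_holds(world):
    # Hypothetical rule: signalling through site1 is blocked if kinase1
    # both phosphorylates it and is inhibited by drug1.
    return (world["inhibits(drug1, kinase1)"]
            and world["phosphorylates(kinase1, site1)"])

belief = 0.0
for values in product([True, False], repeat=len(facts)):
    world = {name: v for (name, _), v in zip(facts, values)}
    weight = 1.0
    for (name, p), v in zip(facts, values):
        weight *= p if v else 1 - p   # probability of this world
    if query_holds(world):
        belief += weight

print(f"degree of belief = {belief:.2f}")  # 0.8 * 0.6 = 0.48
```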

    Handling of Past and Future with Phenesthe+

    Writing temporal logic formulae for properties that combine instantaneous events with overlapping temporal phenomena of some duration is difficult in classical temporal logics. To address this issue, in previous work we introduced a new temporal logic with intuitive temporal modalities specifically tailored to the representation of both instantaneous and durative phenomena. We also provided Phenesthe, a complex event processing system based on this logic, which has been applied and tested on a real maritime surveillance scenario. In this work, we extend our temporal logic with two extra modalities to increase its expressive power for handling future formulae. We compare the expressive power of different fragments of our logic with Linear Temporal Logic and dyadic first-order logic. Furthermore, we define correctness criteria for stream processors that use our language. Last but not least, we empirically evaluate the performance of Phenesthe+, our extended implementation, and show that the increased expressive power does not affect efficiency significantly.
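    As background for the past/future distinction discussed above, here is a minimal Python sketch of the classical "since" (past) and "until" (future) operators evaluated over a finite trace. The trace and propositions are invented for illustration; Phenesthe+'s language and semantics are considerably richer than this.

```python
# Minimal evaluation of past ("since") and future ("until") operators
# over a finite trace of states.

def since(trace, i, phi, psi):
    """phi S psi at i: psi held at some j <= i, and phi has held at every
    instant after j up to and including i."""
    for j in range(i, -1, -1):
        if psi(trace[j]):
            return all(phi(trace[k]) for k in range(j + 1, i + 1))
    return False

def until(trace, i, phi, psi):
    """phi U psi at i: psi will hold at some j >= i, and phi holds at
    every instant from i up to (but excluding) j."""
    for j in range(i, len(trace)):
        if psi(trace[j]):
            return all(phi(trace[k]) for k in range(i, j))
    return False

# Toy maritime trace: a vessel moves for two instants, then is in port.
trace = [{"moving": True,  "in_port": False},
         {"moving": True,  "in_port": False},
         {"moving": False, "in_port": True}]
moving  = lambda s: s["moving"]
in_port = lambda s: s["in_port"]

print(until(trace, 0, moving, in_port))  # True: kept moving until in port
print(since(trace, 2, in_port, moving))  # True: in port ever since moving
```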

    Ratio and similitudo: argumentation in conformity with reason in the Dialogus of Petrus Alfonsi

    Unlike the religious-polemical works of earlier authors, which centre on exegetical discussion, Petrus Alfonsi argues in his Dialogus (written around 1110) equally on the basis of auctoritas (the Bible) and of ratio. This contribution discusses how Petrus Alfonsi conceptualises and puts into practice reason-based argumentation. In one passage, Petrus Alfonsi specifies three sources of rational knowledge. This statement is interpreted through a close reading and by drawing on its source, the work Emunoth we-Deoth by the Jewish philosopher Saadia Gaon. Petrus Alfonsi there distinguishes spontaneous knowledge through the senses, deductive argumentation on the basis of generally accepted premises (necessariae rationes), and similitudo, which can be understood as evidence-based argumentation. In the Dialogus, Petrus Alfonsi only rarely argues from premises; time and again one finds argumentation based on observable phenomena. Petrus frequently expounds findings of natural philosophy, which he illustrates with examples from nature. For this procedure he also employs the term similitudo.

    Generalising weighted model counting

    Given a formula in propositional or (finite-domain) first-order logic and some non-negative weights, weighted model counting (WMC) is a function problem that asks to compute the sum of the weights of the models of the formula. Originally used as a flexible way of performing probabilistic inference on graphical models, WMC has found many applications across artificial intelligence (AI), machine learning, and other domains. Areas of AI that rely on WMC include explainable AI, neural-symbolic AI, probabilistic programming, and statistical relational AI. WMC also has applications in bioinformatics, data mining, natural language processing, prognostics, and robotics. In this work, we are interested in revisiting the foundations of WMC and considering generalisations of some of the key definitions in the interest of conceptual clarity and practical efficiency. We begin by developing a measure-theoretic perspective on WMC, which suggests a new and more general way of defining the weights of an instance. This new representation can be as succinct as standard WMC but can also expand as needed to represent less-structured probability distributions. We demonstrate the performance benefits of the new format by developing a novel WMC encoding for Bayesian networks. We then show how existing WMC encodings for Bayesian networks can be transformed into this more general format and what conditions ensure that the transformation is correct (i.e., preserves the answer). Combining the strengths of the more flexible representation with the tricks used in existing encodings yields further efficiency improvements in Bayesian network probabilistic inference.

    Next, we turn our attention to the first-order setting. Here, we argue that the capabilities of practical model counting algorithms are severely limited by their inability to perform arbitrary recursive computations. To enable arbitrary recursion, we relax the restrictions that typically accompany domain recursion and generalise circuits (used to express a solution to a model counting problem) to graphs that are allowed to have cycles. These improvements enable us to find efficient solutions to counting fundamental structures such as injections and bijections that were previously unsolvable by any available algorithm.

    The second strand of this work is concerned with synthetic data generation. Testing algorithms across a wide range of problem instances is crucial to ensure the validity of any claim about one algorithm's superiority over another. However, benchmarks are often limited and fail to reveal differences among the algorithms. First, we show how random instances of probabilistic logic programs (which typically use WMC algorithms for inference) can be generated using constraint programming. We also introduce a new constraint to control the independence structure of the underlying probability distribution and provide a combinatorial argument for the correctness of the constraint model. This model allows us to, for the first time, experimentally investigate inference algorithms on more than just a handful of instances. Second, we introduce a random model for WMC instances with a parameter that influences primal treewidth, the parameter most commonly used to characterise the difficulty of an instance. We show that the easy-hard-easy pattern with respect to clause density is different for algorithms based on dynamic programming and algebraic decision diagrams than for all other solvers. We also demonstrate that all WMC algorithms scale exponentially with respect to primal treewidth, although at differing rates.
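    To pin down the core definition the abstract starts from: WMC sums, over all models of the formula, the product of the weights of the literals each model makes true. The brute-force Python sketch below uses an illustrative formula and weights; real solvers rely on knowledge compilation or dynamic programming rather than enumeration.

```python
# Weighted model counting by exhaustive enumeration of truth assignments.
from itertools import product

variables = ["a", "b"]
clauses = [[("a", True), ("b", True)]]          # CNF formula: (a or b)
weight = {("a", True): 0.3, ("a", False): 0.7,  # weight of each literal
          ("b", True): 0.5, ("b", False): 0.5}

def wmc(variables, clauses, weight):
    total = 0.0
    for values in product([True, False], repeat=len(variables)):
        model = dict(zip(variables, values))
        # Keep only assignments that satisfy every clause.
        if all(any(model[v] == sign for v, sign in clause)
               for clause in clauses):
            w = 1.0
            for v in variables:
                w *= weight[(v, model[v])]
            total += w
    return total

print(wmc(variables, clauses, weight))  # 1 - 0.7 * 0.5 = 0.65
```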

    Comparing Event-B, {log} and Why3 Models of Sparse Sets

    Many representations of sets are available in programming language libraries. This paper focuses on sparse sets, used, e.g., in some constraint solvers to represent the domains of integer variables, which are finite sets of values, as an alternative to range sequences. We propose verified implementations of sparse sets in three deductive formal verification tools, namely Event-B, {log} and Why3. Furthermore, we draw some comparisons regarding specifications and proofs.
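    For readers unfamiliar with the data structure: a sparse set represents a subset of {0, ..., n-1} with two mirrored arrays, giving constant-time membership, insertion, deletion and clearing, which is why constraint solvers like it for domains. The Python sketch below shows the classic unverified version; it is an illustration, not the Event-B, {log} or Why3 models from the paper.

```python
# Classic sparse-set representation for subsets of {0, ..., n-1}.
class SparseSet:
    def __init__(self, n):
        self.dense = [0] * n    # first `size` slots list the members
        self.sparse = [0] * n   # sparse[x] = position of x in `dense`
        self.size = 0

    def __contains__(self, x):
        i = self.sparse[x]
        return i < self.size and self.dense[i] == x

    def add(self, x):
        if x not in self:
            self.dense[self.size] = x
            self.sparse[x] = self.size
            self.size += 1

    def remove(self, x):
        if x in self:
            # Overwrite x with the last member, O(1).
            last = self.dense[self.size - 1]
            i = self.sparse[x]
            self.dense[i], self.sparse[last] = last, i
            self.size -= 1

s = SparseSet(10)
s.add(3); s.add(7); s.remove(3)
print(7 in s, 3 in s)  # True False
```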

    Explanations as Programs in Probabilistic Logic Programming

    The generation of comprehensible explanations is an essential feature of modern artificial intelligence systems. In this work, we consider probabilistic logic programming, an extension of logic programming which can be useful for modelling domains with relational structure and uncertainty. Essentially, a program specifies a probability distribution over possible worlds (i.e., sets of facts). The notion of explanation is typically associated with that of a world, so that one often looks for the most probable world, or for the worlds where the query is true. Unfortunately, such explanations exhibit no causal structure. In particular, the chain of inferences required for a specific prediction (represented by a query) is not shown. In this paper, we propose a novel approach where explanations are represented as programs that are generated from a given query by a number of unfolding-like transformations. Here, the chain of inferences that proves a given query is made explicit. Furthermore, the generated explanations are minimal (i.e., contain no irrelevant information) and can be parameterized w.r.t. a specification of visible predicates, so that the user may hide uninteresting details from explanations.
    Published as: Vidal, G. (2022). Explanations as Programs in Probabilistic Logic Programming. In: Hanus, M., Igarashi, A. (eds) Functional and Logic Programming. FLOPS 2022. Lecture Notes in Computer Science, vol 13215. Springer, Cham. The final authenticated publication is available online at https://doi.org/10.1007/978-3-030-99461-7_1
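    As a rough illustration of the unfolding-like transformations mentioned above, here is a propositional Python sketch: each unfolding step replaces an atom in a goal by the bodies of its matching clauses, so the surviving facts spell out the chain of inferences. The program below is the textbook alarm example, not one from the paper, and real probabilistic logic programs also carry unification and probabilities.

```python
# Propositional unfolding: a program maps each defined atom to a list of
# alternative clause bodies; atoms absent from the map are facts.
program = {
    "alarm": [["burglary"], ["earthquake"]],
    "call":  [["alarm", "awake"]],
}

def unfold(goal, program):
    """Replace the leftmost defined atom in `goal` by each matching body,
    yielding one specialised goal per derivation branch."""
    for i, atom in enumerate(goal):
        if atom in program:
            return [goal[:i] + body + goal[i + 1:]
                    for body in program[atom]]
    return [goal]  # only facts remain: a fully unfolded explanation

step1 = unfold(["call"], program)
step2 = [g for s in step1 for g in unfold(s, program)]
print(step1)  # [['alarm', 'awake']]
print(step2)  # [['burglary', 'awake'], ['earthquake', 'awake']]
```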