
    Towards meta-interpretive learning of programming language semantics

    We introduce a new application for inductive logic programming: learning the semantics of programming languages from example evaluations. In this short paper, we explore a simplified task in this domain using the Metagol meta-interpretive learning system. We highlight the challenging aspects of this scenario, including abstracting over function symbols, nonterminating examples, and learning non-observed predicates, and propose extensions to Metagol that help overcome these challenges and may prove useful in other domains. Comment: ILP 2019, to appear
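    As a rough illustration of the learning task (a Python sketch of my own, not the paper's Prolog/Metagol setup), the snippet below defines a toy expression language, a reference big-step evaluator, and a few (expression, environment, value) triples of the kind a meta-interpretive learner would be asked to generalise into evaluation rules. All names here are illustrative assumptions.

```python
# Hypothetical sketch: producing example evaluations for a toy expression
# language.  An MIL system such as Metagol would be asked to induce the
# evaluation rules from (expression, environment, value) triples like these.
from dataclasses import dataclass
from typing import Union

@dataclass
class Lit:                    # integer literal
    value: int

@dataclass
class Var:                    # variable lookup
    name: str

@dataclass
class Add:                    # addition of two sub-expressions
    left: "Expr"
    right: "Expr"

Expr = Union[Lit, Var, Add]

def evaluate(expr: Expr, env: dict) -> int:
    """Reference big-step evaluator used only to generate example evaluations."""
    if isinstance(expr, Lit):
        return expr.value
    if isinstance(expr, Var):
        return env[expr.name]
    if isinstance(expr, Add):
        return evaluate(expr.left, env) + evaluate(expr.right, env)
    raise ValueError(f"unknown expression: {expr!r}")

# Positive examples of a target predicate eval(Expr, Env, Value):
examples = [
    (Add(Lit(1), Lit(2)), {}, 3),
    (Add(Var("x"), Lit(4)), {"x": 5}, 9),
]
assert all(evaluate(e, env) == v for e, env, v in examples)
```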

    Sciduction: Combining Induction, Deduction, and Structure for Verification and Synthesis

    Even with impressive advances in automated formal methods, certain problems in system verification and synthesis remain challenging. Examples include the verification of quantitative properties of software involving constraints on timing and energy consumption, and the automatic synthesis of systems from specifications. The major challenges include environment modeling, incompleteness in specifications, and the complexity of underlying decision problems. This position paper proposes sciduction, an approach to tackle these challenges by integrating inductive inference, deductive reasoning, and structure hypotheses. Deductive reasoning, which leads from general rules or concepts to conclusions about specific problem instances, includes techniques such as logical inference and constraint solving. Inductive inference, which generalizes from specific instances to yield a concept, includes algorithmic learning from examples. Structure hypotheses are used to define the class of artifacts, such as invariants or program fragments, generated during verification or synthesis. Sciduction constrains inductive and deductive reasoning using structure hypotheses, and actively combines inductive and deductive reasoning: for instance, deductive techniques generate examples for learning, and inductive reasoning is used to guide the deductive engines. We illustrate this approach with three applications: (i) timing analysis of software, (ii) synthesis of loop-free programs, and (iii) controller synthesis for hybrid systems. Some future applications are also discussed.
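    The interplay described above can be pictured as a counterexample-guided loop, sketched below in Python under strong simplifying assumptions: a finite input range and a tiny linear-expression structure hypothesis stand in for an SMT-backed verifier and a richer program space, and none of the names come from the paper.

```python
# Minimal sketch (assumed, not the paper's implementation) of the sciduction
# pattern for synthesising a loop-free program: an inductive learner proposes
# candidates from a structure hypothesis, and a deductive check supplies
# counterexamples that refine the learner's example set.
from itertools import product

def spec(x: int) -> int:
    """Specification the synthesised expression must match."""
    return 2 * x + 1

# Structure hypothesis: expressions of the form a * x + b with small constants.
CANDIDATES = [(a, b) for a, b in product(range(-3, 4), repeat=2)]

def find_counterexample(candidate, inputs):
    """'Deductive' step (a stand-in for an SMT query over all inputs)."""
    a, b = candidate
    for x in inputs:
        if a * x + b != spec(x):
            return x                       # counterexample for the learner
    return None

def synthesise(inputs=range(-10, 11)):
    examples = [0]                         # seed example set
    for a, b in CANDIDATES:
        # Inductive step: only keep candidates consistent with known examples.
        if all(a * x + b == spec(x) for x in examples):
            cex = find_counterexample((a, b), inputs)
            if cex is None:
                return a, b                # verified against all inputs
            examples.append(cex)           # counterexample guides learning
    return None

print(synthesise())                        # -> (2, 1)
```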

    Learning explanatory logical rules in non-linear domains: a neuro-symbolic approach

    Deep neural networks, despite their capabilities, are constrained by the need for large-scale training data, and often fall short in generalisation and interpretability. Inductive logic programming (ILP) presents an intriguing solution with its data-efficient learning of first-order logic rules. However, ILP grapples with challenges, notably the handling of non-linearity in continuous domains. With the ascent of neuro-symbolic ILP, there's a drive to mitigate these challenges, synergising deep learning with relational ILP models to enhance interpretability and create logical decision boundaries. In this research, we introduce a neuro-symbolic ILP framework, grounded in differentiable Neural Logic networks and tailored for non-linear rule extraction in mixed discrete-continuous spaces. Our methodology is a neuro-symbolic approach emphasising the extraction of non-linear functions from mixed-domain data. Our preliminary findings showcase our architecture's capability to identify non-linear functions from continuous data, offering a new perspective in neural-symbolic research and underlining the adaptability of ILP-based frameworks for regression challenges in continuous scenarios.
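    The following is a minimal sketch of the general idea rather than the authors' architecture: learnable threshold predicates over continuous features are combined with a product t-norm acting as a differentiable conjunction, so that a crisp rule can be read off the trained thresholds. The functions and thresholds below are assumptions for illustration only.

```python
# Illustrative sketch of a differentiable logic unit over a mixed domain:
# continuous inputs pass through soft threshold predicates, and a product
# t-norm plays the role of conjunction, approximating a rule such as
#   pos(X) :- f1(X) > t1, flag(X).
import numpy as np

def soft_greater(x, threshold, sharpness=10.0):
    """Fuzzy truth value of the atom  x > threshold."""
    return 1.0 / (1.0 + np.exp(-sharpness * (x - threshold)))

def soft_and(*truth_values):
    """Product t-norm as a differentiable conjunction."""
    return np.prod(truth_values, axis=0)

# Toy mixed domain: feature 1 is continuous, feature 2 is a 0/1 flag.
x1 = np.array([0.2, 0.7, 0.9, 0.4])
x2 = np.array([1.0, 1.0, 0.0, 1.0])

# Candidate rule body with fixed thresholds that training would adjust:
body = soft_and(soft_greater(x1, 0.5), x2)
print(np.round(body, 3))    # high only where x1 > 0.5 and the flag holds
```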

    Structural Resolution with Co-inductive Loop Detection

    In this work we introduce a way to combine co-SLD style loop detection with structural resolution, extending structural resolution with co-induction. In particular, we present the operational semantics of this novel combination, called co-inductive structural resolution, and prove its soundness with respect to the greatest complete Herbrand model. Comment: In Proceedings CoALP-Ty'16, arXiv:1709.0419
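    The loop-detection idea can be illustrated on ground propositional goals, a deliberate simplification that leaves out unification and the term-structure bookkeeping of structural resolution: a goal that reappears among its own ancestors is closed off coinductively instead of diverging. The rules and names below are my own toy example.

```python
# Toy co-SLD style loop detection over ground, propositional goals, where
# each atom has at most one clause.  A goal already among its ancestors is
# treated as coinductively proven rather than expanded forever.
RULES = {
    "bisim": ["bisim"],    # coinductive clause: bisim :- bisim.
    "p": ["q"],            # inductive chain:    p :- q.   q.
    "q": [],
}

def prove(goal, ancestors=()):
    if goal in ancestors:              # loop detected: coinductive success
        return True
    if goal not in RULES:
        return False
    body = RULES[goal]
    return all(prove(sub, ancestors + (goal,)) for sub in body)

print(prove("bisim"))   # True via loop detection; plain SLD would not terminate
print(prove("p"))       # True by an ordinary finite derivation
```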

    Using Inductive Logic Programming to globally approximate Neural Networks for preference learning: challenges and preliminary results

    In this paper we explore the use of Answer Set Programming (ASP), and in particular the state-of-the-art Inductive Logic Programming (ILP) system ILASP, as a method to explain black-box models, e.g. Neural Networks (NNs), when they are used to learn user preferences. To this end, we created a dataset of user preferences over a set of recipes, trained a set of NNs on these data, and performed preliminary experiments that investigate how ILASP can globally approximate these NNs. Since the computational time required to train ILASP on high-dimensional feature spaces is very high, we focused on the problem of making global approximation more scalable. In particular, we experimented with the use of Principal Component Analysis (PCA) to reduce the dimensionality of the dataset while trying to keep our explanations transparent.
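    Below is a hedged sketch of the preprocessing step only: PCA from scikit-learn reduces the recipe feature space, and pairwise preferences are labelled by querying a stand-in for the trained network. The ILASP encoding itself is omitted, and every name in the snippet is an assumption for illustration.

```python
# Dimensionality reduction plus pairwise preference labelling, as a setup for
# inducing an interpretable approximation of a black-box preference model.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
recipes = rng.random((50, 40))       # 50 recipes, 40 (toy) features

def nn_score(x: np.ndarray) -> float:
    """Stand-in for the trained neural network's preference score."""
    return float(x[:5].sum() - x[5:10].sum())

# Reduce dimensionality so the induced logic program stays small and readable.
pca = PCA(n_components=3)
reduced = pca.fit_transform(recipes)

# Pairwise preference examples: recipe i preferred over recipe j when the
# black box scores it higher.  These would become ILASP ordering examples
# expressed over the reduced features.
pairs = [(i, j, nn_score(recipes[i]) > nn_score(recipes[j]))
         for i in range(5) for j in range(i + 1, 5)]
print(reduced.shape, pairs[:3])
```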