97 research outputs found

    The umbilical cord of finite model theory

    Full text link
    Model theory was born and developed as a part of mathematical logic. It has various application domains but is not beholden to any of them. A priori, the research area known as finite model theory would be just a part of model theory, but it didn't turn out that way. There is one application domain -- relational database management -- that finite model theory was beholden to during a substantial early period, when databases provided the motivation and were the main application target for finite model theory. Arguably, finite model theory was motivated even more by complexity theory. But the subject of this paper is how relational database theory influenced finite model theory. This is NOT a scholarly history of the subject with proper credits to all participants. My original intent was to cover just the developments that I witnessed or participated in. The need to make the story coherent forced me to cover some additional developments.
    Comment: To be published in the Logic in Computer Science column of the February 2023 issue of the Bulletin of the European Association for Theoretical Computer Science.

    Text analysis and computers

    Full text link
    Content: Erhard Mergenthaler: Computer-assisted content analysis (3-32); Udo Kelle: Computer-aided qualitative data analysis: an overview (33-63); Christian Mair: Machine-readable text corpora and the linguistic description of languages (64-75); Jürgen Krause: Principles of content analysis for information retrieval systems (76-99); Conference Abstracts (100-131)

    Achieving Highly Reliable Embedded Software: An Empirical Evaluation of Different Approaches

    Full text link

    Editorial

    Get PDF

    Efficient instance and hypothesis space revision in Meta-Interpretive Learning

    Get PDF
    Inductive Logic Programming (ILP) is a form of machine learning. The goal of ILP is to induce hypotheses, as logic programs, that generalise training examples. ILP is characterised by high expressivity, generalisation ability and interpretability. Meta-Interpretive Learning (MIL) is a state-of-the-art sub-field of ILP. However, current MIL approaches have limited efficiency: the sample complexity and learning complexity are, respectively, polynomial and exponential in the number of clauses. My thesis is that improvements in sample and learning complexity can be achieved in MIL through instance and hypothesis space revision. Specifically, we investigate 1) methods that revise the instance space, 2) methods that revise the hypothesis space and 3) methods that revise both the instance and the hypothesis spaces to achieve more efficient MIL. First, we introduce a method for building training sets with active learning in Bayesian MIL: instances are selected by maximising entropy. We demonstrate that this method can reduce the sample complexity and support efficient learning of agent strategies. Second, we introduce a new method for revising the MIL hypothesis space with predicate invention. Our method generates predicates bottom-up from the background knowledge related to the training examples. We demonstrate that this method is complete and can reduce the learning and sample complexity. Finally, we introduce a new MIL system called MIGO for learning optimal two-player game strategies. MIGO learns from playing: its training sets are built from the sequence of actions it chooses. Moreover, MIGO revises its hypothesis space with Dependent Learning: it first solves simpler tasks and can reuse any learned solution for solving more complex tasks. We demonstrate that MIGO significantly outperforms both classical and deep reinforcement learning. The methods presented in this thesis open exciting perspectives for efficiently learning theories with MIL in a wide range of applications including robotics, modelling of agent strategies and game playing.
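    As an illustration of the entropy-based instance selection described above (a minimal sketch, not the thesis's actual implementation, which operates over logic programs), the following Python snippet queries the unlabelled instance whose predicted label is most uncertain under a posterior-weighted set of candidate hypotheses. The names predictive_entropy and select_instance are hypothetical, and hypotheses are simplified to boolean predicates with weights assumed to sum to 1.

        import math

        def predictive_entropy(instance, hypotheses, weights):
            # Posterior probability that the instance is a positive example:
            # the total weight of the hypotheses that accept it
            # (weights are assumed to be normalised).
            p = sum(w for h, w in zip(hypotheses, weights) if h(instance))
            if p <= 0.0 or p >= 1.0:
                return 0.0  # the label is certain under the current posterior
            return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

        def select_instance(pool, hypotheses, weights):
            # Active-learning step: pick the instance whose predicted label
            # has maximum entropy, i.e. the most informative query.
            return max(pool, key=lambda x: predictive_entropy(x, hypotheses, weights))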