
    Stacked structure learning for lifted relational neural networks

    Lifted Relational Neural Networks (LRNNs) describe relational domains using weighted first-order rules which act as templates for constructing feed-forward neural networks. While previous work has shown that using LRNNs can lead to state-of-the-art results in various ILP tasks, these results depended on hand-crafted rules. In this paper, we extend the framework of LRNNs with structure learning, thus enabling a fully automated learning process. Similarly to many ILP methods, our structure learning algorithm proceeds iteratively, searching top-down through the hypothesis space of all possible Horn clauses, considering the predicates that occur in the training examples as well as invented soft concepts entailed by the best weighted rules found so far. In the experiments, we demonstrate the ability to automatically induce useful hierarchical soft concepts leading to deep LRNNs with competitive predictive power.
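The iterative top-down search over Horn clauses described above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the paper's actual algorithm: the names (`refine`, `score`, `learn_structure`, `PREDICATES`) and the toy scoring function are assumptions standing in for training and evaluating a weighted LRNN.

```python
# Hypothetical sketch of top-down structure search: candidate Horn clauses
# are grown one body literal at a time, the best-scoring clause's head is
# kept as an invented "soft concept", and that concept becomes available
# as a predicate in later iterations (yielding a hierarchy of concepts).
# All names and the scoring are illustrative, not the authors' API.

PREDICATES = ["bond", "atom_c", "atom_o"]  # predicates from training examples

def refine(clause):
    """Top-down refinement: extend the clause body by one more literal."""
    head, body = clause
    return [(head, body + [p]) for p in PREDICATES if p not in body]

def score(clause):
    """Stand-in for training the weighted rule and measuring accuracy."""
    _, body = clause
    return len(body)  # toy score: prefer longer (more specific) bodies

def learn_structure(iterations=2, depth=2):
    invented = []  # invented soft concepts, reusable in later searches
    for i in range(iterations):
        head = f"concept_{i}"
        frontier = [(head, [])]
        best = None
        for _ in range(depth):  # bounded top-down search
            frontier = [c for parent in frontier for c in refine(parent)]
            best = max(frontier, key=score)
        invented.append(best)
        PREDICATES.append(head)  # best concept usable by later clauses
    return invented
```

Each call to `learn_structure` returns the best clause found per iteration; because each invented head is appended to `PREDICATES`, later clauses may reference earlier soft concepts, which is what produces the hierarchical, "deep" structure.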

    Learning predictive categories using lifted relational neural networks

    Lifted relational neural networks (LRNNs) are a flexible neural-symbolic framework based on the idea of lifted modelling. In this paper we show how LRNNs can easily be used to declaratively specify and solve learning problems in which latent categories of entities, properties and relations need to be jointly induced.

    Lifted relational neural networks: efficient learning of latent relational structures

    We propose a method to combine the interpretability and expressive power of first-order logic with the effectiveness of neural network learning. In particular, we introduce a lifted framework in which first-order rules are used to describe the structure of a given problem setting. These rules are then used as a template for constructing a number of neural networks, one for each training and testing example. As the different networks corresponding to different examples share their weights, these weights can be efficiently learned using stochastic gradient descent. Our framework provides a flexible way for implementing and combining a wide variety of modelling constructs. In particular, the use of first-order logic allows for a declarative specification of latent relational structures, which can then be efficiently discovered in a given data set using neural network learning. Experiments on 78 relational learning benchmarks clearly demonstrate the effectiveness of the framework.
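The key mechanism here, one grounded network per example with weights shared through the rule template, can be illustrated with a small sketch. The names (`shared_w`, `ground_network`, `rule_1`) and the single-neuron networks are illustrative assumptions, not the framework's real implementation.

```python
import math

# Hypothetical sketch of lifted templating: one weighted rule serves as a
# template, each example is "grounded" into its own small network, and all
# grounded networks read the same shared weight, so a gradient step driven
# by one example updates the behaviour of every network built from the rule.

shared_w = {"rule_1": 0.5}  # weights live in the template, not the networks

def ground_network(example_atoms):
    """Build the grounded network for one example: here, a single sigmoid
    unit aggregating the rule's groundings over the example's atoms."""
    def forward():
        # each atom matching the rule body contributes the shared weight
        activation = sum(shared_w["rule_1"]
                         for a in example_atoms if a.startswith("bond"))
        return 1.0 / (1.0 + math.exp(-activation))  # sigmoid output unit
    return forward

examples = [["bond(a,b)", "atom_c(a)"], ["bond(a,b)", "bond(b,c)"]]
nets = [ground_network(e) for e in examples]
before = [n() for n in nets]
shared_w["rule_1"] = 1.0      # one (mock) SGD update on the shared weight...
after = [n() for n in nets]   # ...changes the output of every grounded network
```

Because the weight is stored once in the template rather than copied into each grounded network, stochastic gradient descent over the per-example networks effectively trains a single lifted model, which is what makes the learning efficient.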

    Learning Relevant Reasoning Patterns with Neuro-Logic Programming

    This thesis demonstrates the capability of an enhanced neuro-logic programming framework to capture diverse artificial intelligence tasks based on different reasoning patterns. The enhanced framework builds on an existing engine called Lifted Relational Neural Networks. We describe common reasoning patterns used in statistical and symbolic methods and demonstrate how each particular pattern may be captured from the perspective of the proposed neuro-logic programming framework. We discuss the patterns in the context of learning and reasoning, and focus more closely on the abilities that arise from combining both approaches. On selected examples from simple game environments, we illustrate how this joint neuro-logic programming approach broadens the scope of existing reasoning patterns through the ability to represent and reason with relational information while keeping the benefits of neural learning.