2,417 research outputs found
SkILL - a Stochastic Inductive Logic Learner
Probabilistic Inductive Logic Programming (PILP) is a relatively unexplored
area of Statistical Relational Learning which extends classic Inductive Logic
Programming (ILP). This work introduces SkILL, a Stochastic Inductive Logic
Learner, which takes probabilistically annotated data and produces First Order
Logic theories. Data in several domains such as medicine and bioinformatics
have an inherent degree of uncertainty that can be used to produce models
closer to reality. SkILL can not only use this type of probabilistic data to
extract non-trivial knowledge from databases, but it also addresses
efficiency issues by introducing a novel, efficient and effective search
strategy to guide the search in PILP environments. The capabilities of SkILL
are demonstrated on three different datasets: (i) a synthetic toy example
used to validate the system, (ii) a probabilistic adaptation of a well-known
biological metabolism application, and (iii) a real-world medical dataset in
the breast cancer domain. Results show that SkILL can perform as well as a
deterministic ILP learner, while also being able to incorporate probabilistic
knowledge that would otherwise not be considered.
Improving Candidate Quality of Probabilistic Logic Models
Many real-world phenomena exhibit both relational structure and uncertainty. Probabilistic Inductive Logic Programming (PILP) uses Inductive Logic Programming (ILP) extended with probabilistic facts to produce meaningful and interpretable models for real-world phenomena. This merge between First Order Logic (FOL) theories and uncertainty makes PILP a well-suited tool for knowledge representation and extraction. However, this flexibility is coupled with a problem (inherited from ILP) of exponential search space growth, so often only a subset of all possible models is explored due to limited resources. Furthermore, the probabilistic evaluation of FOL theories, coming from the underlying probabilistic logic language and its solver, is also computationally demanding. This work introduces a prediction-based pruning strategy, which can reduce the search space based on the probabilistic evaluation of models, and a safe pruning criterion, which guarantees that the optimal model is not pruned away, as well as two alternative, more aggressive criteria that do not provide this guarantee. Experiments performed using three benchmarks from different areas show that prediction pruning is effective in (i) maintaining predictive accuracy for all criteria and experimental settings; (ii) reducing the execution time when using some of the more aggressive criteria, compared to using no pruning; and (iii) selecting better candidate models in limited resource settings, also when compared to using no pruning.
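The prediction-based pruning described in this abstract can be illustrated with a small, generic search sketch. All names below (refine, evaluate, upper_bound) are hypothetical stand-ins, not the paper's actual interface: the point is only that an admissible upper bound gives a safe criterion (the optimal model is never discarded), while scoring children directly is a more aggressive, non-safe criterion.

```python
# Minimal sketch of prediction-based pruning over a hypothesis search space.
# All function names are illustrative assumptions, not the paper's API.

def search_with_prediction_pruning(initial, refine, evaluate, upper_bound,
                                   safe=True, budget=100):
    """Search candidate models; prune children that cannot beat the best.

    With safe=True, children are pruned only when an admissible upper
    bound on their score is no better than the current best, so the
    optimal model is never pruned away.  With safe=False, the predicted
    score itself is used: faster, but without that guarantee.
    """
    best, best_score = initial, evaluate(initial)
    frontier = [initial]
    while frontier and budget > 0:
        candidate = frontier.pop()
        for child in refine(candidate):
            budget -= 1
            bound = upper_bound(child) if safe else evaluate(child)
            if bound <= best_score:
                continue  # pruned: cannot improve on the incumbent
            score = evaluate(child)
            if score > best_score:
                best, best_score = child, score
            frontier.append(child)
    return best, best_score

# Toy demo: "models" are integers, the score peaks at 7.
refine = lambda n: [n + 1] if n < 10 else []
evaluate = lambda n: -(n - 7) ** 2
upper = lambda n: evaluate(n) + 4   # admissible: never underestimates
best, score = search_with_prediction_pruning(0, refine, evaluate, upper)
```

In the toy run above, refinements past the peak are cut off as soon as their optimistic bound falls below the incumbent's score, which is exactly the resource saving the abstract reports.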
Efficient instance and hypothesis space revision in Meta-Interpretive Learning
Inductive Logic Programming (ILP) is a form of Machine Learning. The goal of ILP is to induce hypotheses, as logic programs, that generalise training examples. ILP is characterised by a high expressivity, generalisation ability and interpretability. Meta-Interpretive Learning (MIL) is a state-of-the-art sub-field of ILP. However, current MIL approaches have limited efficiency: the sample and learning complexity respectively are polynomial and exponential in the number of clauses. My thesis is that improvements over the sample and learning complexity can be achieved in MIL through instance and hypothesis space revision. Specifically, we investigate 1) methods that revise the instance space, 2) methods that revise the hypothesis space and 3) methods that revise both the instance and the hypothesis spaces for achieving more efficient MIL.
First, we introduce a method for building training sets with active learning in Bayesian MIL. Instances are selected maximising the entropy. We demonstrate this method can reduce the sample complexity and supports efficient learning of agent strategies. Second, we introduce a new method for revising the MIL hypothesis space with predicate invention. Our method generates predicates bottom-up from the background knowledge related to the training examples. We demonstrate this method is complete and can reduce the learning and sample complexity. Finally, we introduce a new MIL system called MIGO for learning optimal two-player game strategies. MIGO learns from playing: its training sets are built from the sequence of actions it chooses. Moreover, MIGO revises its hypothesis space with Dependent Learning: it first solves simpler tasks and can reuse any learned solution for solving more complex tasks. We demonstrate MIGO significantly outperforms both classical and deep reinforcement learning. The methods presented in this thesis open exciting perspectives for efficiently learning theories with MIL in a wide range of applications including robotics, modelling of agent strategies and game playing.
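The entropy-maximising instance selection mentioned in this abstract can be sketched in a few lines. The interface below is an illustrative assumption (hypotheses as boolean predicates with posterior weights), not the thesis's actual Bayesian MIL machinery: the instance chosen for labelling is the one whose predicted label is most uncertain under the current hypothesis posterior.

```python
import math

# Illustrative sketch of entropy-based active learning: hypotheses are
# boolean predicates, weights are their (unnormalised) posterior mass.
# These names are assumptions for the sketch, not the thesis's API.

def label_entropy(p):
    """Entropy (in bits) of a Bernoulli prediction p = P(label = positive)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def select_instance(pool, hypotheses, weights):
    """Pick the unlabelled instance with maximum predictive entropy.

    p(x) is the weighted fraction of hypotheses that classify x as
    positive; the most contested instance is the most informative.
    """
    total = sum(weights)
    def p(x):
        return sum(w for h, w in zip(hypotheses, weights) if h(x)) / total
    return max(pool, key=lambda x: label_entropy(p(x)))
```

For example, if two equally weighted hypotheses disagree only on one instance in the pool, that instance has predictive probability 0.5, entropy 1 bit, and is selected; instances on which all hypotheses agree carry zero entropy and are never queried.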
GLIB: Efficient Exploration for Relational Model-Based Reinforcement Learning via Goal-Literal Babbling
We address the problem of efficient exploration for transition model learning
in the relational model-based reinforcement learning setting without extrinsic
goals or rewards. Inspired by human curiosity, we propose goal-literal babbling
(GLIB), a simple and general method for exploration in such problems. GLIB
samples relational conjunctive goals that can be understood as specific,
targeted effects that the agent would like to achieve in the world, and plans
to achieve these goals using the transition model being learned. We provide
theoretical guarantees showing that exploration with GLIB will converge almost
surely to the ground truth model. Experimentally, we find GLIB to strongly
outperform existing methods in both prediction and planning on a range of
tasks, encompassing standard PDDL and PPDDL planning benchmarks and a robotic
manipulation task implemented in the PyBullet physics simulator.
Video: https://youtu.be/F6lmrPT6TOY
Code: https://git.io/JIsTB
Comment: AAAI 202
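The goal-literal babbling loop this abstract describes can be sketched generically. The environment and model interfaces below (state, step, plan, update) are hypothetical stand-ins rather than GLIB's actual code: the agent repeatedly samples a small conjunctive goal, plans toward it with the transition model it is currently learning, and updates that model from the transitions it observes.

```python
import random

# Sketch of a goal-literal babbling exploration loop; the env/model
# interfaces are illustrative assumptions, not GLIB's real API.

def glib_explore(env, model, literals, steps=100, goal_size=2, rng=random):
    """Explore without extrinsic reward by babbling conjunctive goals."""
    for _ in range(steps):
        # Babble a goal: a small conjunction of ground literals, i.e. a
        # specific, targeted effect the agent would like to achieve.
        goal = frozenset(rng.sample(literals, goal_size))
        # Plan toward the goal using the model being learned; under a
        # wrong model the planner may fail, which is itself informative.
        plan = model.plan(env.state(), goal)
        if plan is None:
            continue
        for action in plan:
            state = env.state()
            next_state = env.step(action)
            model.update(state, action, next_state)  # learn transitions
    return model
```

The design point the abstract emphasises is that planning with an imperfect model still drives the agent toward its frontier of ignorance, which is why the paper can show almost-sure convergence to the ground-truth model.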
The Five Tribes of Machine-Learning: A Brief Overview
This paper reviews recent advances in automated computer-based learning capabilities. It briefly describes and examines the strengths and weaknesses of the five principal algorithmic approaches to machine-learning, namely: connectionism; evolutionism; Bayesianism; analogism; and symbolism. While each of these approaches can demonstrate some degree of learning, a learning capability that is comparable with human learning is still in its infancy and will likely require the combination of multiple algorithmic approaches. However, the current state reached in machine-learning suggests that Artificial General Intelligence and even Artificial Superintelligence may indeed be eventually feasible.
Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language
A core tension in models of concept learning is that the model must carefully
balance the tractability of inference against the expressivity of the
hypothesis class. Humans, however, can efficiently learn a broad range of
concepts. We introduce a model of inductive learning that seeks to be
human-like in that sense. It implements a Bayesian reasoning process where a
language model first proposes candidate hypotheses expressed in natural
language, which are then re-weighed by a prior and a likelihood. By estimating
the prior from human data, we can predict human judgments on learning problems
involving numbers and sets, spanning concepts that are generative,
discriminative, propositional, and higher-order.
Comment: NeurIPS 2023 ora
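The re-weighing step this abstract describes is ordinary Bayesian scoring: each candidate hypothesis gets posterior mass proportional to prior times likelihood. The sketch below is a minimal illustration under assumed interfaces (set-valued hypotheses and a size-principle likelihood), not the paper's actual model.

```python
# Minimal sketch of re-weighing proposed hypotheses by prior x likelihood.
# Interfaces are illustrative assumptions, not the paper's implementation.

def posterior_over_hypotheses(hypotheses, prior, likelihood, data):
    """Return normalised posterior weights: P(h | D) proportional to P(h) * P(D | h)."""
    scores = [prior(h) * likelihood(data, h) for h in hypotheses]
    z = sum(scores)
    return [s / z for s in scores] if z > 0 else scores
```

For instance, with hypotheses "even numbers" and "multiples of 4" over 1..10, a uniform prior, and a size-principle likelihood (each observation drawn uniformly from the hypothesis's extension), observing the data {4, 8} shifts most posterior mass to the smaller, more specific concept, mirroring the human preference for tight generalisations that the paper models.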