kLog: A Language for Logical and Relational Learning with Kernels
We introduce kLog, a novel approach to statistical relational learning.
Unlike standard approaches, kLog does not represent a probability distribution
directly. It is rather a language to perform kernel-based learning on
expressive logical and relational representations. kLog allows users to specify
learning problems declaratively. It builds on simple but powerful concepts:
learning from interpretations, entity/relationship data modeling, logic
programming, and deductive databases. Access by the kernel to the rich
representation is mediated by a technique we call graphicalization: the
relational representation is first transformed into a graph --- in particular,
a grounded entity/relationship diagram. Subsequently, a choice of graph kernel
defines the feature space. kLog supports mixed numerical and symbolic data, as
well as background knowledge in the form of Prolog or Datalog programs as in
inductive logic programming systems. The kLog framework can be applied to
tackle the same range of tasks that has made statistical relational learning so
popular, including classification, regression, multitask learning, and
collective classification. We also report on empirical comparisons, showing
that kLog can be either more accurate, or much faster at the same level of
accuracy, than Tilde and Alchemy. kLog is GPLv3 licensed and is available at
http://klog.dinfo.unifi.it along with tutorials.
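The graphicalization step described above can be illustrated with a minimal sketch: a tiny relational interpretation (entities plus relationship tuples) is turned into a grounded entity/relationship graph, and a simple node-label-histogram kernel then defines the feature space. All names here are hypothetical toy constructs, not kLog's actual Prolog-based API, and the kernel is far simpler than the graph kernels kLog supports.

```python
# Hypothetical sketch of "graphicalization": build a grounded E/R graph
# from entities and relationship tuples, then compare graphs with a
# trivial label-histogram kernel. Not kLog's real implementation.
from collections import Counter

def graphicalize(entities, relations):
    """Grounded E/R graph: one node per entity and one node per
    relationship tuple, linked to the entities it connects."""
    nodes = {eid: label for eid, label in entities}
    edges = []
    for i, (rel, args) in enumerate(relations):
        rid = f"{rel}#{i}"
        nodes[rid] = rel                       # relationship node
        edges += [(rid, a) for a in args]      # link it to its arguments
    return nodes, edges

def label_histogram_kernel(g1, g2):
    """A very simple graph kernel: dot product of node-label counts."""
    h1, h2 = Counter(g1[0].values()), Counter(g2[0].values())
    return sum(h1[label] * h2[label] for label in h1)

g_a = graphicalize([("a1", "atom"), ("a2", "atom")],
                   [("bond", ("a1", "a2"))])
g_b = graphicalize([("b1", "atom")], [])
print(label_histogram_kernel(g_a, g_b))  # → 2 (two "atom" labels match one)
```

In kLog proper, the choice of graph kernel on this grounded diagram is what determines the feature space; the histogram kernel above stands in for that choice only to keep the sketch short.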
Meta-Learning by the Baldwin Effect
The scope of the Baldwin effect was recently called into question by two
papers that closely examined the seminal work of Hinton and Nowlan. To date,
there has been no demonstration of its necessity in empirically
challenging tasks. Here we show that the Baldwin effect is capable of evolving
few-shot supervised and reinforcement learning mechanisms, by shaping the
hyperparameters and the initial parameters of deep learning algorithms.
Furthermore, it can genetically accommodate strong learning biases on the same
set of problems as MAML ("Model-Agnostic Meta-Learning"), a recent machine
learning algorithm that uses second-order gradients instead of evolution
to learn a set of reference parameters (initial weights) that allow rapid
adaptation to tasks sampled from a distribution. Whilst in simple cases MAML is
more data efficient than the Baldwin effect, the Baldwin effect is more general
in that it does not require gradients to be backpropagated to the reference
parameters or hyperparameters, and permits effectively any number of gradient
updates in the inner loop. The Baldwin effect learns strong learning-dependent
biases, rather than purely genetically accommodating fixed behaviours in a
learning-independent manner.
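The two-loop scheme the abstract describes can be sketched in miniature: an outer evolutionary loop selects initial parameters (the "genes") by how well a few inner-loop gradient steps adapt them to freshly sampled tasks. The quadratic task family, population size, and mutation scale below are toy assumptions for illustration, not the paper's experimental setup.

```python
# Toy sketch of the Baldwin effect as meta-learning: evolution shapes an
# initial parameter, and fitness is measured AFTER inner-loop learning.
# Task family and hyperparameters are illustrative assumptions.
import random

def inner_loop_loss(theta0, target, steps=5, lr=0.3):
    """Adapt theta toward a task-specific target by gradient descent on
    (theta - target)^2, then return the post-learning loss."""
    theta = theta0
    for _ in range(steps):
        theta -= lr * 2 * (theta - target)
    return (theta - target) ** 2

def evolve(pop_size=20, generations=30):
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        tasks = [random.gauss(0.0, 1.0) for _ in range(4)]
        # Fitness = average post-learning loss across the sampled tasks.
        scored = sorted(pop, key=lambda g: sum(inner_loop_loss(g, t)
                                               for t in tasks))
        elite = scored[: pop_size // 2]
        pop = [g + random.gauss(0, 0.5) for g in elite for _ in range(2)]
    return sum(pop) / len(pop)

random.seed(0)
init = evolve()
print(f"evolved initialisation: {init:.2f}")  # clusters near the task mean 0
```

Note that, as the abstract emphasises, nothing here backpropagates gradients to the initial parameter itself: the outer loop only observes post-learning fitness, which is what distinguishes this scheme from MAML's second-order gradient update.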
Reinforcement Learning: A Survey
This paper surveys the field of reinforcement learning from a
computer-science perspective. It is written to be accessible to researchers
familiar with machine learning. Both the historical basis of the field and a
broad selection of current work are summarized. Reinforcement learning is the
problem faced by an agent that learns behavior through trial-and-error
interactions with a dynamic environment. The work described here has a
resemblance to work in psychology, but differs considerably in the details and
in the use of the word ``reinforcement.'' The paper discusses central issues of
reinforcement learning, including trading off exploration and exploitation,
establishing the foundations of the field via Markov decision theory, learning
from delayed reinforcement, constructing empirical models to accelerate
learning, making use of generalization and hierarchy, and coping with hidden
state. It concludes with a survey of some implemented systems and an assessment
of the practical utility of current methods for reinforcement learning.
Comment: See http://www.jair.org/ for any accompanying file
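The exploration/exploitation trade-off that the survey lists first can be illustrated with epsilon-greedy action selection on a multi-armed bandit. This is a generic textbook sketch under assumed reward means, not a system from the survey itself.

```python
# Epsilon-greedy on a toy 3-armed bandit: with probability epsilon we
# explore a random arm; otherwise we exploit the current value estimate.
# Reward means are assumed purely for illustration.
import random

def epsilon_greedy_bandit(true_means, epsilon=0.1, steps=2000):
    q = [0.0] * len(true_means)   # incremental value estimates
    n = [0] * len(true_means)     # pull counts per arm
    for _ in range(steps):
        if random.random() < epsilon:
            a = random.randrange(len(q))              # explore
        else:
            a = max(range(len(q)), key=lambda i: q[i])  # exploit
        reward = random.gauss(true_means[a], 1.0)
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]   # running-mean update
    return q

random.seed(1)
estimates = epsilon_greedy_bandit([0.2, 0.9, 0.5])
best = max(range(3), key=lambda i: estimates[i])
print(best)  # the agent typically identifies arm 1 as best
```

Setting epsilon too low risks locking onto a suboptimal arm early; setting it too high wastes pulls on known-bad arms, which is exactly the tension the survey discusses.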
Learning by stochastic serializations
Complex structures are typical in machine learning. Tailoring learning
algorithms for every structure requires an effort that may be saved by defining
a generic learning procedure adaptive to any complex structure. In this paper,
we propose to map any complex structure onto a generic form, called
serialization, over which we can apply any sequence-based density estimator. We
then show how to transfer the learned density back onto the space of original
structures. To expose the learning procedure to the structural particularities
of the original structures, we take care that the serializations reflect
accurately the structures' properties. Enumerating all serializations is
infeasible, so we propose an effective way to sample representative
serializations that preserve the statistics of the complete set. Our method is
competitive with, or better than, state-of-the-art learning algorithms that
have been specifically designed for given structures.
In addition, since the serialization involves sampling from a combinatorial
process, it provides considerable protection from overfitting, which we clearly
demonstrate in a number of experiments.
Comment: Submission to NeurIPS 201
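The sampling idea above can be sketched on a tiny example: a complex structure (here a small tree) is mapped to sequences by depth-first traversal with randomly shuffled children, drawing samples from the combinatorial set of serializations instead of enumerating it. This is a toy illustration of the serialization concept, not the paper's method or density estimator.

```python
# Toy sketch of stochastic serialization: each sample is one random
# linearization of the same structure, obtained by DFS with shuffled
# child order. The tree and traversal scheme are illustrative assumptions.
import random

def sample_serialization(tree, node="root"):
    """One random serialization: emit node labels depth-first,
    visiting children in a random order."""
    seq = [node]
    children = list(tree.get(node, []))
    random.shuffle(children)
    for child in children:
        seq += sample_serialization(tree, child)
    return seq

tree = {"root": ["a", "b"], "a": ["c", "d"]}
random.seed(2)
samples = {tuple(sample_serialization(tree)) for _ in range(50)}
print(len(samples))  # distinct serializations seen; at most 4 for this tree
```

A sequence-based density estimator would then be trained on such samples; because each structure contributes many distinct serializations, the estimator sees the structure from multiple "views", which is one intuition for the overfitting protection the abstract reports.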