Shift Aggregate Extract Networks
We introduce an architecture based on deep hierarchical decompositions to
learn effective representations of large graphs. Our framework extends classic
R-decompositions used in kernel methods, enabling nested "part-of-part"
relations. Unlike recursive neural networks, which unroll a template on input
graphs directly, we unroll a neural network template over the decomposition
hierarchy, allowing us to deal with the high degree variability that typically
characterizes social network graphs. Deep hierarchical decompositions are also
amenable to domain compression, a technique that reduces both space and time
complexity by exploiting symmetries. We show empirically that our approach is
competitive with current state-of-the-art graph classification methods,
particularly when dealing with social network datasets.
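The shift-aggregate-extract idea can be illustrated with a loose sketch: each object in the decomposition hierarchy obtains a representation by transforming its parts' representations with level-shared weights ("shift"), summing over the bag of parts ("aggregate"), and applying a nonlinearity ("extract"). The hierarchy encoding, weight shapes, and the `saen_representation` helper below are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def saen_representation(node, leaf_features, level_weights):
    """Recursively encode a node of the decomposition hierarchy.

    Leaves are atomic parts of the graph with fixed feature vectors;
    internal nodes shift (transform) each child's representation with
    weights shared per level, aggregate (sum) over the bag of children,
    and extract (apply a nonlinearity)."""
    if not node["children"]:
        return leaf_features[node["id"]]
    W = level_weights[node["level"]]
    aggregated = sum(relu(W @ saen_representation(c, leaf_features, level_weights))
                     for c in node["children"])
    return relu(aggregated)
```

Because the aggregation is a sum over a bag of arbitrary size, the same template unrolls over hierarchies with very different branching factors, which is the point of handling high degree variability.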
Learning and Interpreting Multi-Multi-Instance Learning Networks
We introduce an extension of the multi-instance learning problem where
examples are organized as nested bags of instances (e.g., a document could be
represented as a bag of sentences, which in turn are bags of words). This
framework can be useful in various scenarios, such as text and image
classification, but also supervised learning over graphs. As a further
advantage, multi-multi-instance learning enables a particular way of
interpreting predictions and the decision function. Our approach is based on a
special neural network layer, called bag-layer, whose units aggregate bags of
inputs of arbitrary size. We prove theoretically that the associated class of
functions contains all Boolean functions over sets of sets of instances and we
provide empirical evidence that functions of this kind can actually be learned
on semi-synthetic datasets. We finally present experiments on text
classification, on citation graphs, and social graph data, which show that our
model obtains competitive results with respect to accuracy when compared to
other approaches such as convolutional networks on graphs, while at the same
time it supports a general approach to interpreting the learnt model, as well
as explaining individual predictions.
Comment: JML
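A bag-layer can be sketched in a few lines: apply the same affine map and nonlinearity to every instance in a bag, then aggregate elementwise so the output has a fixed size regardless of bag cardinality; nesting two such layers handles bags of bags. The functions below are an illustrative sketch, not the paper's implementation; the weight shapes and the choice of max aggregation are assumptions.

```python
import numpy as np

def bag_layer(bag, W, b, agg=np.max):
    """Map each instance x in the bag to relu(W @ x + b), then reduce
    elementwise over the bag; the output size does not depend on the
    number of instances."""
    units = np.maximum(np.stack([W @ x + b for x in bag]), 0.0)
    return agg(units, axis=0)

def multi_multi_encode(bag_of_bags, W_in, b_in, W_out, b_out):
    """Two nested bag-layers: encode each inner bag (e.g., a sentence
    as a bag of word vectors), then the outer bag of those encodings
    (the document as a bag of sentences)."""
    inner = [bag_layer(b, W_in, b_in) for b in bag_of_bags]
    return bag_layer(inner, W_out, b_out)
```

With max aggregation, a unit fires if some instance in the bag activates it, which is what makes the learnt units readable as instance-level detectors when interpreting predictions.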
Learning from Structured Data with Kernels, Neural Networks and Logic
Real-world data often have a complex structure that can be naturally represented with graphs or logic. This thesis has four main contributions that address some challenges that arise when learning from structured data.
First, we introduce graph invariant kernels (GIKs), a framework that upgrades the Weisfeiler-Lehman and other graph kernels to effectively exploit high-dimensional and continuous vertex attributes. Graphs are first decomposed into subgraphs. Vertices of the subgraphs are then compared by a kernel that combines the similarity of their labels and the similarity of their structural role using a suitable vertex invariant. By changing this invariant we obtain a family of graph kernels that includes generalizations of Weisfeiler-Lehman, NSPDK, and propagation kernels. We demonstrate empirically that these kernels obtain state-of-the-art results on relational datasets.
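As a concrete illustration of the vertex invariants involved, one-dimensional Weisfeiler-Lehman refinement (repeated label compression over neighbourhood multisets) can be sketched as follows; the data layout (adjacency dict, integer labels) is assumed for illustration and is not the thesis' implementation.

```python
def wl_refine(adj, labels, iterations=2):
    """Weisfeiler-Lehman label refinement sketch: each round replaces a
    vertex label with a fresh integer identifying the signature
    (old label, sorted multiset of neighbour labels)."""
    labels = dict(labels)
    for _ in range(iterations):
        sigs = {v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
                for v in adj}
        compress = {}
        for v in sorted(adj):
            compress.setdefault(sigs[v], len(compress))
        labels = {v: compress[sigs[v]] for v in adj}
    return labels
```

Note that comparing labels across two graphs requires a shared compression table; here each graph is refined independently just to show the mechanism (a triangle stays uniform, a 3-vertex path splits endpoints from the centre).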
Second, we introduce shift aggregate extract networks (SAEN), an architecture based on deep hierarchical decompositions to learn effective representations of large graphs. Our framework extends classic R-decompositions used in kernel methods, enabling nested part-of-part relations. Unlike recursive neural networks, which unroll a template on input graphs directly, we unroll a neural network template over the decomposition hierarchy, allowing us to deal with the high degree variability that characterizes social network graphs. Deep hierarchical decompositions are also amenable to domain compression, a technique that reduces both space and time complexity by exploiting symmetries. We show empirically that our approach is competitive with current state-of-the-art graph classification methods, particularly when dealing with social network datasets.
Third, we introduce kProbLog as a declarative logical language for machine learning. kProbLog is a simple algebraic extension of Prolog in which facts and rules are annotated with semiring labels, allowing algebraic expressions to be combined elegantly with logic programs. We introduce the semantics of kProbLog, its inference algorithm, and its implementation, and we provide convergence guarantees for a fragment of the language. We provide several code examples to illustrate its potential for a wide range of machine learning techniques. In particular, we show encodings of state-of-the-art graph kernels such as Weisfeiler-Lehman graph kernels, propagation kernels, and an instance of GIKs. kProbLog is not limited to kernel methods, however: it can concisely express declarative formulations of tensor-based algorithms such as matrix factorization and energy-based models, and it can exploit semirings of dual numbers to perform automatic differentiation. Furthermore, experiments show that kProbLog is not only of theoretical interest but can also be applied to real-world datasets. At the technical level, kProbLog extends aProbLog (an algebraic Prolog) by allowing multiple semirings to coexist in a single program and by introducing meta-functions for manipulating algebraic values.
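The role of the semiring labels can be illustrated with a tiny interpreter sketch (not kProbLog's actual syntax or inference engine): the semiring's plus combines alternative derivations of a goal and its times combines the premises within one derivation, so swapping semirings turns the same program from Boolean reachability into path counting. The `eval_paths` helper and the semiring tuples are hypothetical names introduced for this illustration.

```python
from functools import reduce

# A semiring as (plus, times, zero, one).
BOOLEAN = (lambda a, b: a or b, lambda a, b: a and b, False, True)
COUNTING = (lambda a, b: a + b, lambda a, b: a * b, 0, 1)

def eval_paths(edges, src, dst, semiring):
    """Evaluate a path(src, dst) query over semiring-labelled edge
    facts in an acyclic graph: plus over alternative outgoing edges
    (alternative clauses), times over the premises edge(v, u) and
    path(u, dst) within one clause."""
    plus, times, zero, one = semiring
    def value(v):
        if v == dst:
            return one
        return reduce(plus, (times(w, value(u)) for u, w in edges.get(v, [])), zero)
    return value(src)
```

On a diamond-shaped graph the counting semiring returns the number of distinct paths, while the Boolean semiring merely reports reachability, from identical program structure.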
Fourth, we provide a mathematical analysis of the preimage problem of the Weisfeiler-Lehman subtree kernel and show how to morph new graphs from graph datasets with a simple technique that employs off-the-shelf solvers. Preliminary results show that this technique is amenable to future constructive machine learning applications such as de novo synthesis of small molecules.