Inference, Learning, and Population Size: Projectivity for SRL Models
A subtle difference between propositional and relational data is that in many
relational models, marginal probabilities depend on the population or domain
size. This paper connects the dependence on population size to the classic
notion of projectivity from statistical theory: Projectivity implies that
relational predictions are robust with respect to changes in domain size. We
discuss projectivity for a number of common SRL systems, and identify syntactic
fragments that are guaranteed to yield projective models. The syntactic
conditions are restrictive, which suggests that projectivity is difficult to
achieve in SRL, and care must be taken when working with different domain
sizes.
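To make the domain-size dependence concrete, the following minimal sketch (a toy exponential-family model constructed purely for illustration, not one of the SRL systems analyzed in the paper) computes the marginal probability of a single ground atom Q(a) in a model whose world weights grow quadratically with the number of true atoms; the marginal shifts as the domain size n grows, so the model is not projective.

# Toy illustration (not from the paper): a non-projective model over one
# unary predicate Q, where a world with k true ground atoms gets weight
# exp(w * k^2). The marginal P(Q(a)=true) then depends on the domain size n,
# which is exactly the behaviour that projectivity rules out.
from math import comb, exp

def marginal_true(n, w=0.1):
    """P(Q(a) = true) in a domain of size n for the toy model."""
    # Worlds where Q(a) is true: choose k-1 of the remaining n-1 atoms to be true.
    num = sum(comb(n - 1, k - 1) * exp(w * k * k) for k in range(1, n + 1))
    # All worlds, grouped by the number k of true atoms.
    den = sum(comb(n, k) * exp(w * k * k) for k in range(0, n + 1))
    return num / den

for n in (2, 5, 10, 20):
    print(f"n={n:2d}  P(Q(a)=true) = {marginal_true(n):.4f}")

A projective model would print the same marginal for every n; here the printed value changes with the domain size.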
Computing Multi-Relational Sufficient Statistics for Large Databases
Databases contain information about which relationships do and do not hold
among entities. To make this information accessible for statistical analysis
requires computing sufficient statistics that combine information from
different database tables. Such statistics may involve any number of positive
and negative relationships. With a naive enumeration approach,
computing sufficient statistics for negative relationships is feasible only for
small databases. We solve this problem with a new dynamic programming algorithm
that performs a virtual join, where the requisite counts are computed without
materializing join tables. Contingency table algebra, a new extension of
relational algebra, facilitates the efficient implementation of this Möbius
virtual join operation. The Möbius Join scales to large datasets
(over 1M tuples) with complex schemas. Empirical evaluation with seven
benchmark datasets showed that information about the presence and absence of
links can be exploited in feature selection, association rule mining, and
Bayesian network learning.
Comment: 11 pages, 8 figures, 8 tables, CIKM'14, November 3-7, 2014, Shanghai,
China.
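The key arithmetic behind avoiding enumeration of negative relationships can be sketched with a small inclusion-exclusion routine (hypothetical helper names, not the paper's contingency table algebra or its implementation): counts over tuples where relationships are absent are derived from positive-only counts, so the complement is never materialized.

# Minimal sketch of the inclusion-exclusion idea underlying a virtual join
# for negative relationships (illustrative code, not the paper's algorithm).
from itertools import combinations

def negative_count(pos_count, relations):
    """Count tuples for which *none* of the given relationships hold.

    pos_count(S) must return the number of tuples for which every
    relationship in the frozenset S holds (the empty set gives the total).
    """
    total = 0
    for r in range(len(relations) + 1):
        for subset in combinations(relations, r):
            # Alternate signs over subsets of the relationships.
            total += (-1) ** r * pos_count(frozenset(subset))
    return total

# Tiny usage example with hand-made positive counts:
counts = {
    frozenset(): 100,                    # all (student, course) pairs
    frozenset({"Registered"}): 30,       # pairs where Registered holds
    frozenset({"TA"}): 10,               # pairs where TA holds
    frozenset({"Registered", "TA"}): 4,  # pairs where both hold
}
print(negative_count(counts.get, ["Registered", "TA"]))  # 100 - 30 - 10 + 4 = 64

Only counts for positive relationship combinations are ever consulted, which is what lets the requisite statistics be computed without building the typically enormous table of non-links.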
Schema Independent Relational Learning
Learning novel concepts and relations from relational databases is an
important problem with many applications in database systems and machine
learning. Relational learning algorithms learn the definition of a new relation
in terms of existing relations in the database. Nevertheless, the same data set
may be represented under different schemas for various reasons, such as
efficiency, data quality, and usability. Unfortunately, the output of current
relational learning algorithms tends to vary quite substantially over the
choice of schema, both in terms of learning accuracy and efficiency. This
variation complicates their off-the-shelf application. In this paper, we
introduce and formalize the property of schema independence of relational
learning algorithms, and study both the theoretical and empirical dependence of
existing algorithms on the common class of (de)composition schema
transformations. We study both sample-based learning algorithms, which learn
from sets of labeled examples, and query-based algorithms, which learn by
asking queries to an oracle. We prove that current relational learning
algorithms are generally not schema independent. For query-based learning
algorithms, we show that the (de)composition transformations influence their
query complexity. We propose Castor, a sample-based relational learning
algorithm that achieves schema independence by leveraging data dependencies. We
support the theoretical results with an empirical study that demonstrates the
schema dependence/independence of several algorithms on existing benchmark and
real-world datasets under (de)compositions.
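A toy example of the (de)composition transformations studied here (illustrative relations, not one of the paper's benchmarks): the same facts stored once as a single composed relation and once as two decomposed relations that join back losslessly. A schema independent learner such as Castor should produce equivalent definitions over either representation.

# Minimal sketch of a vertical (de)composition of a relation (toy data).

# Schema A: one composed relation.
advised_by = [
    # (student, advisor, department)
    ("alice", "prof_x", "cs"),
    ("bob",   "prof_y", "stats"),
]

# Schema B: the same information decomposed on the advisor key.
advises  = [("prof_x", "alice"), ("prof_y", "bob")]
works_in = [("prof_x", "cs"), ("prof_y", "stats")]

def recompose(advises, works_in):
    """Join the decomposed relations back into the composed one."""
    dept = dict(works_in)
    return [(student, advisor, dept[advisor]) for advisor, student in advises]

# The two schemas carry identical information:
assert sorted(recompose(advises, works_in)) == sorted(advised_by)

Because the transformation is information preserving, any divergence in what a learner outputs over Schema A versus Schema B reflects sensitivity to representation rather than to the data itself.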
Transforming Graph Representations for Statistical Relational Learning
Relational data representations have become an increasingly important topic
due to the recent proliferation of network datasets (e.g., social, biological,
information networks) and a corresponding increase in the application of
statistical relational learning (SRL) algorithms to these domains. In this
article, we examine a range of representation issues for graph-based relational
data. Since the choice of relational data representation for the nodes, links,
and features can dramatically affect the capabilities of SRL algorithms, we
survey approaches and opportunities for relational representation
transformation designed to improve the performance of these algorithms. This
leads us to introduce an intuitive taxonomy for data representation
transformations in relational domains that incorporates link transformation and
node transformation as symmetric representation tasks. In particular, the
transformation tasks for both nodes and links include (i) predicting their
existence, (ii) predicting their label or type, (iii) estimating their weight
or importance, and (iv) systematically constructing their relevant features. We
motivate our taxonomy through detailed examples and use it to survey and
compare competing approaches for each of these tasks. We also discuss general
conditions for transforming links, nodes, and features. Finally, we highlight
challenges that remain to be addressed.
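As a concrete illustration of two tasks from the taxonomy (a toy graph, not an example drawn from the article), the sketch below constructs simple structural node features (task iv) and scores candidate links by common neighbors as a naive link-existence predictor (task i).

# Toy graph; plain-Python sketch of node feature construction and link scoring.
from itertools import combinations

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]

# Build an undirected adjacency map.
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

# Node transformation: derive structural features for each node.
node_features = {
    n: {"degree": len(nbrs),
        "triangles": sum(1 for x, y in combinations(nbrs, 2) if y in adj[x])}
    for n, nbrs in adj.items()
}

# Link transformation: score each non-edge by its number of common neighbors.
non_edges = [(u, v) for u, v in combinations(sorted(adj), 2) if v not in adj[u]]
link_scores = {(u, v): len(adj[u] & adj[v]) for u, v in non_edges}

print(node_features)
print(link_scores)  # ('a', 'd') and ('b', 'd') each share the neighbor 'c'

In practice the surveyed systems use far richer feature constructors and link predictors, but the shape of the transformation, augmenting the original graph representation with derived nodes, links, or features before applying an SRL algorithm, is the same.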