Neural Relational Learning Through Semi-Propositionalization of Bottom Clauses
Relational learning can be described as the task of learning first-order logic rules from examples. It has enabled a number of new machine learning applications, e.g. graph mining and link analysis in social networks. The CILP++ system is a neural-symbolic system that can perform efficient relational learning by translating first-order logic knowledge into a neural network. CILP++ relies on BCP, a recently introduced propositionalization algorithm, to perform relational learning. However, efficient knowledge extraction from such networks remains an open issue, and the features generated by BCP have no independent relational description, which prevents sound knowledge extraction. We present a methodology for generating independent propositional features for BCP by using semi-propositionalization of bottom clauses. Empirical results, obtained in comparison with the original version of BCP, show that this approach has comparable accuracy and runtimes, while allowing a proper relational representation of features for knowledge extraction from CILP++ networks.
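BCP turns each example's bottom clause into propositional features; semi-propositionalization additionally keeps each feature relationally meaningful. As a rough illustration of the grouping idea only (the function names, variable convention, and greedy merge below are hypothetical simplifications, not the BCP/CILP++ code), body literals linked through local variables can be merged into one feature:

```python
# Toy sketch of grouping bottom-clause literals in the spirit of
# semi-propositionalization (hypothetical, NOT the BCP/CILP++ implementation).
# Body literals connected through local (non-head) variables are merged into one
# feature, so each feature keeps a self-contained relational description.

def local_vars(literal, head_vars):
    """Literals are (predicate, term, ...); variables are uppercase strings."""
    return {t for t in literal[1:] if t[0].isupper() and t not in head_vars}

def group_literals(body, head_vars):
    """Greedily merge literals that share a local variable into one feature."""
    groups = []
    for lit in body:
        lit_vars = local_vars(lit, head_vars)
        linked = [g for g in groups
                  if any(lit_vars & local_vars(l, head_vars) for l in g)]
        merged = [lit] + [l for g in linked for l in g]
        groups = [g for g in groups if g not in linked] + [merged]
    return groups

# Bottom-clause body for a head such as eastbound(T):
body = [("has_car", "T", "C"), ("closed", "C"),
        ("has_load", "T", "L"), ("triangle", "L")]
print(group_literals(body, head_vars={"T"}))
# Two features: {closed(C), has_car(T,C)} and {triangle(L), has_load(T,L)}
```

Each resulting group can then be mapped to one binary input of the neural network while retaining a first-order description usable for knowledge extraction.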
Category-based Inductive Learning in Shared NeMuS
One of the main objectives of cognitive science is to use abstraction to create models that accurately represent the cognitive processes that constitute learning, such as categorisation. Relational knowledge is important in this task, since it is through the reasoning processes of induction and analogy that the mind creates categories (it later establishes causal relations between them by using induction and abduction), and analogies exemplify crucial properties of relational processing, such as structure-consistent mapping [2]. Given the complexity of the task, no model today has accomplished it completely. The associationist/connectionist approach represents those processes through associations between different pieces of information. This is done by using artificial neural networks. However, it faces a great obstacle: the idea (called propositional fixation) that neural networks could not represent relational knowledge. A recent attempt to tackle symbolic extraction from artificial neural networks was proposed in [1]. The cognitive agent Amao uses a shared Neural Multi-Space (Shared NeMuS) of coded first-order expressions to model the various aspects of logical formulae as separate spaces, with importance vectors of different sizes. Amao [4] uses inverse unification as the generalization mechanism for learning from a set of logically connected expressions of the Herbrand Base (HB). Here we present an experiment that uses this learning mechanism to model a simple version of a train set from Michalski's train problem [3].
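Since inverse unification is the core generalization step here, a minimal sketch of anti-unification (Plotkin's least general generalization) may help picture it; the toy Python version below, with a hypothetical term encoding, is not Amao's Shared NeMuS implementation:

```python
# Minimal anti-unification (least general generalization) sketch
# (hypothetical illustration, not Amao's Shared NeMuS implementation).
# Terms: a plain string is a constant; a tuple ("f", arg, ...) is a compound term.

def anti_unify(t1, t2):
    """Return the LGG of t1 and t2, introducing one variable per disagreement."""
    subst = {}

    def go(a, b):
        if a == b:
            return a
        if (isinstance(a, tuple) and isinstance(b, tuple)
                and a[0] == b[0] and len(a) == len(b)):
            # Same functor and arity: generalize argument-wise.
            return (a[0],) + tuple(go(x, y) for x, y in zip(a[1:], b[1:]))
        # Disagreement pair: reuse the same variable for repeated pairs (Plotkin).
        if (a, b) not in subst:
            subst[(a, b)] = f"X{len(subst) + 1}"
        return subst[(a, b)]

    return go(t1, t2)

# Two ground atoms from the Herbrand Base generalize to a first-order pattern:
print(anti_unify(("has_car", "train1", ("car", "a")),
                 ("has_car", "train2", ("car", "b"))))
# -> ('has_car', 'X1', ('car', 'X2'))
```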
Extracting Relational Triples Based on Graph Recursive Neural Network via Dynamic Feedback Forest Algorithm
Extracting relational triples (subject, predicate, object) from text enables
the transformation of unstructured text data into structured knowledge. The
named entity recognition (NER) and the relation extraction (RE) are two
foundational subtasks in this knowledge generation pipeline. The integration of
subtasks poses a considerable challenge due to their disparate nature. This
paper presents a novel approach that converts the triple extraction task into a
graph labeling problem, capitalizing on the structural information of
dependency parsing and graph recursive neural networks (GRNNs). To integrate
subtasks, this paper proposes a dynamic feedback forest algorithm that connects
the representations of subtasks by inference operations during model training.
Experimental results demonstrate the effectiveness of the proposed method.
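To make the graph-labeling formulation concrete, here is a minimal sketch assuming a shared node encoder over the dependency graph with one output head per subtask; the module and its message-passing scheme are hypothetical simplifications, not the paper's dynamic feedback forest:

```python
# Sketch: triple extraction as labeling a dependency graph (hypothetical
# simplification, not the paper's GRNN/dynamic-feedback-forest code).
# Shared node states feed both subtasks: per-node labels for NER and
# per-node-pair labels for relation extraction.
import torch
import torch.nn as nn

class GraphLabeler(nn.Module):
    def __init__(self, vocab, dim, n_ent_tags, n_rel_tags):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.update = nn.GRUCell(dim, dim)              # recurrent state update
        self.ner_head = nn.Linear(dim, n_ent_tags)      # NER: one label per node
        self.rel_head = nn.Linear(2 * dim, n_rel_tags)  # RE: label per node pair

    def forward(self, tokens, adj, pairs, steps=2):
        h = self.emb(tokens)                 # (n, dim) initial node states
        for _ in range(steps):               # propagate along dependency edges
            msg = adj @ h                    # sum of neighbor states
            h = self.update(msg, h)
        ner_logits = self.ner_head(h)        # (n, n_ent_tags)
        pair_h = torch.cat([h[pairs[:, 0]], h[pairs[:, 1]]], dim=-1)
        rel_logits = self.rel_head(pair_h)   # (n_pairs, n_rel_tags)
        return ner_logits, rel_logits

# tokens: LongTensor (n,); adj: FloatTensor (n, n) from the dependency parse;
# pairs: LongTensor (n_pairs, 2) of candidate (subject, object) node indices.
```

The actual method additionally connects the two heads through inference-time feedback during training, which this sketch omits.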
Graph Neural Networks with Generated Parameters for Relation Extraction
Recently, progress has been made towards improving relational reasoning in
the machine learning field. Among existing models, graph neural networks (GNNs)
are one of the most effective approaches for multi-hop relational reasoning. In
fact, multi-hop relational reasoning is indispensable in many natural language
processing tasks such as relation extraction. In this paper, we propose to
generate the parameters of graph neural networks (GP-GNNs) according to natural
language sentences, which enables GNNs to perform relational reasoning over
unstructured text inputs. We verify GP-GNNs in relation extraction from text.
Experimental results on a human-annotated dataset and two distantly supervised
datasets show that our model achieves significant improvements compared to
baselines. We also perform a qualitative analysis to demonstrate that our model
could discover more accurate relations by multi-hop relational reasoning.
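The core idea, generating propagation parameters from text, can be sketched as follows; the layer below is a hypothetical simplification (an encoder of the span linking two entities emits that edge's transition matrix), not the released GP-GNN code:

```python
# Sketch of generated-parameter message passing (hypothetical simplification,
# not the GP-GNN release): each edge's weight matrix is produced from the text
# connecting the two entities, so propagation is conditioned on the sentence.
import torch
import torch.nn as nn

class GeneratedParamLayer(nn.Module):
    def __init__(self, text_dim, node_dim):
        super().__init__()
        self.node_dim = node_dim
        # Maps an edge's text encoding to a node_dim x node_dim transition matrix.
        self.gen = nn.Linear(text_dim, node_dim * node_dim)

    def forward(self, node_states, edges, edge_text_enc):
        # node_states: (n_nodes, node_dim); edges: list of (src, dst) pairs;
        # edge_text_enc: (n_edges, text_dim) encodings of the connecting spans.
        W = self.gen(edge_text_enc).view(-1, self.node_dim, self.node_dim)
        new_states = node_states.clone()
        for k, (src, dst) in enumerate(edges):
            # Propagate src's state through this edge's sentence-generated matrix.
            new_states[dst] = new_states[dst] + torch.tanh(W[k] @ node_states[src])
        return new_states
```

Stacking such layers lets information flow over multiple hops while every transition stays a function of the input text, which is what enables relational reasoning over unstructured inputs.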
Long-tail Relation Extraction via Knowledge Graph Embeddings and Graph Convolution Networks
We propose a distantly supervised relation extraction approach for
long-tailed, imbalanced data, which is prevalent in real-world settings. Here,
the challenge is to learn accurate "few-shot" models for classes existing at
the tail of the class distribution, for which little data is available.
Inspired by the rich semantic correlations between classes at the long tail and
those at the head, we take advantage of the knowledge from data-rich classes at
the head of the distribution to boost the performance of the data-poor classes
at the tail. First, we propose to leverage implicit relational knowledge among
class labels from knowledge graph embeddings and learn explicit relational
knowledge using graph convolution networks. Second, we integrate that
relational knowledge into relation extraction model by coarse-to-fine
knowledge-aware attention mechanism. We demonstrate our results for a
large-scale benchmark dataset which show that our approach significantly
outperforms other baselines, especially for long-tail relations.Comment: To be published in NAACL 201
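The knowledge-aware attention can be pictured as relation embeddings acting as queries over a bag of sentence encodings; the module below is a hypothetical single-level sketch (the paper applies this coarse-to-fine over the relation hierarchy), not the authors' implementation:

```python
# Sketch: knowledge-aware attention (hypothetical, not the paper's code).
# A relation's KG embedding acts as the query over a bag of sentence encodings,
# so semantically related head classes can inform data-poor tail classes.
import torch
import torch.nn as nn

class KnowledgeAwareAttention(nn.Module):
    def __init__(self, sent_dim, rel_dim):
        super().__init__()
        self.proj = nn.Linear(rel_dim, sent_dim)  # align relation & sentence spaces

    def forward(self, sent_enc, rel_emb):
        # sent_enc: (n_sents, sent_dim) bag of sentences for one entity pair;
        # rel_emb:  (rel_dim,) KG embedding of the candidate relation.
        query = self.proj(rel_emb)           # (sent_dim,)
        scores = sent_enc @ query            # (n_sents,) relevance to the relation
        weights = torch.softmax(scores, dim=0)
        return weights @ sent_enc            # (sent_dim,) weighted bag representation
```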
Interpreting Embedding Models of Knowledge Bases: A Pedagogical Approach
Knowledge bases are employed in a variety of applications from natural
language processing to semantic web search; alas, in practice their usefulness
is hurt by their incompleteness. Embedding models attain state-of-the-art
accuracy in knowledge base completion, but their predictions are notoriously
hard to interpret. In this paper, we adapt "pedagogical approaches" (from the
literature on neural networks) so as to interpret embedding models by
extracting weighted Horn rules from them. We show how pedagogical approaches
have to be adapted to take into account the large-scale, relational aspects of
knowledge bases, and we show their strengths and weaknesses experimentally.
Comment: Presented at the 2018 ICML Workshop on Human Interpretability in
Machine Learning (WHI 2018), Stockholm, Sweden.
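The pedagogical idea treats the embedding model as a black-box teacher whose predictions are mined for rules. Below is a brute-force sketch for small knowledge bases, with hypothetical names and a simple confidence score, not the paper's algorithm:

```python
# Pedagogical extraction sketch (hypothetical, not the paper's method): use the
# embedding model as a teacher that labels triples, then score candidate Horn
# rules "body(X, Y) => head(X, Y)" by their confidence on the teacher's labels.

def extract_weighted_rules(score_triple, entities, relations, threshold=0.5):
    # score_triple(h, r, t) -> float is the black-box embedding model's score.
    # Brute-force enumeration; real KBs need sampling or candidate pruning.
    predicted = {(h, r, t)
                 for h in entities for r in relations for t in entities
                 if score_triple(h, r, t) > threshold}

    rules = []
    for body in relations:
        for head in relations:
            if body == head:
                continue
            support = [(h, t) for (h, r, t) in predicted if r == body]
            if not support:
                continue
            # Confidence: fraction of body-pairs for which the head also holds.
            hits = sum((h, head, t) in predicted for (h, t) in support)
            conf = hits / len(support)
            if conf > 0:
                rules.append((body, head, conf))  # body(X,Y) => head(X,Y)
    return sorted(rules, key=lambda r: -r[2])
```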
Neural Snowball for Few-Shot Relation Learning
Knowledge graphs typically undergo open-ended growth of new relations. This
cannot be well handled by conventional relation extraction methods, which focus
on pre-defined relations with sufficient training data. To address new relations with few-shot
instances, we propose a novel bootstrapping approach, Neural Snowball, to learn
new relations by transferring semantic knowledge about existing relations. More
specifically, we use Relational Siamese Networks (RSN) to learn the metric of
relational similarities between instances based on existing relations and their
labeled data. Afterwards, given a new relation and its few-shot instances, we
use RSN to accumulate reliable instances from unlabeled corpora; these
instances are used to train a relation classifier, which can further identify
new facts of the new relation. The process is conducted iteratively like a
snowball. Experiments show that our model can gather high-quality instances for
better few-shot relation learning and achieves significant improvement compared
to baselines. Code and datasets are released at
https://github.com/thunlp/Neural-Snowball.
Comment: Accepted by AAAI 2020.
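The snowball loop can be sketched as alternating between the metric network and the classifier; everything below (function names, thresholds) is a hypothetical simplification, not the released thunlp code:

```python
# Neural-snowball-style bootstrapping sketch (hypothetical simplification).
# similarity(a, b) -> float stands in for the Relational Siamese Network's
# metric; train_classifier(instances) returns a confidence-scoring callable.

def snowball(seeds, unlabeled, similarity, train_classifier,
             rounds=3, sim_threshold=0.8, clf_threshold=0.9):
    selected = list(seeds)       # few-shot instances of the new relation
    clf = train_classifier(selected)
    for _ in range(rounds):
        # 1) RSN phase: pull in unlabeled instances close to any current seed.
        candidates = [x for x in unlabeled
                      if x not in selected
                      and max(similarity(x, s) for s in selected) > sim_threshold]
        # 2) Classifier phase: keep only candidates the retrained model trusts.
        selected += [x for x in candidates if clf(x) > clf_threshold]
        clf = train_classifier(selected)
    return selected, clf         # grown instance set + final relation classifier
```

Both thresholds control the precision/recall trade-off of each round: looser values grow the snowball faster but risk semantic drift into neighboring relations.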