Metaphor as categorisation: a connectionist implementation
A key issue for models of metaphor comprehension is to explain how, in a metaphorical comparison "A is B", only some features of B are transferred to A. The features of B that are transferred to A depend both on A and on B. This is the central thrust of Black's well-known interaction theory of metaphor comprehension (1979). However, this theory is somewhat abstract, and it is not obvious how it may be implemented in terms of mental representations and processes. In this paper we describe a simple computational model of on-line metaphor comprehension which combines Black's interaction theory with the idea that metaphor comprehension is a type of categorisation process (Glucksberg & Keysar, 1990, 1993). The model is based on a distributed connectionist network depicting semantic memory (McClelland & Rumelhart, 1986). The network learns feature-based information about various concepts. A metaphor is comprehended by applying a representation of the first term A to the network storing knowledge of the second term B, in an attempt to categorise it as an exemplar of B. The output of this network is a representation of A transformed by the knowledge of B. We explain how this process embodies an interaction of knowledge between the two terms of the metaphor, how it accords with the contemporary view that literal and metaphorical comparisons are comprehended by identical mechanisms (Gibbs, 1994), and how it both accounts for existing empirical evidence (Glucksberg, McGlone, & Manfredi, 1997) and generates new predictions. In this model, the distinction between literal and metaphorical language is one of degree, not of kind.
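The categorisation mechanism lends itself to a compact illustration. Below is a minimal sketch, not the authors' implementation, of metaphor comprehension as pattern completion in an auto-associative network: a linear network is trained with the delta rule on feature vectors for exemplars of B, and A's feature vector is then passed through it. The feature names, training data and learning rate are all illustrative assumptions.

```python
import numpy as np

n_features = 8

# Hypothetical binary feature vectors for exemplars of concept B ("shark"):
# [predatory, vicious, fast, finned, aquatic, grey, toothy, cold-blooded]
B_exemplars = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1, 0, 1, 1],
    [1, 1, 1, 1, 1, 1, 0, 1],
], dtype=float)

# Train a linear auto-associator on B's exemplars with the delta rule, so
# the weight matrix comes to encode the correlations among B's features.
W = np.zeros((n_features, n_features))
lr = 0.05
for _ in range(500):
    for x in B_exemplars:
        W += lr * np.outer(x - W @ x, x)

# Comprehend "A is B": present A's feature vector to B's network and read
# off the output, i.e. A transformed by (categorised under) B's knowledge.
A = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=float)  # "my lawyer": predatory, vicious
A_out = W @ A

# Features of B correlated with A's active features are filled in, while
# features of B irrelevant to A stay weak.
print(np.round(A_out, 2))
```

Because the weights encode correlations among B's features, which of B's features get transferred depends on which of A's features are active, which is one way to cash out the interaction the paper describes.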
Multi-task Neural Network for Non-discrete Attribute Prediction in Knowledge Graphs
Many popular knowledge graphs such as Freebase, YAGO or DBpedia maintain a list of non-discrete attributes for each entity. Intuitively, attributes such as height, price or population count can richly characterize entities in knowledge graphs. This additional source of information may help to alleviate the sparsity and incompleteness problems that are prevalent in knowledge graphs. Unfortunately, many state-of-the-art relational learning models ignore this information because handling non-discrete data types is challenging in knowledge graphs, whose facts are inherently binary relations. In this paper, we propose a novel multi-task neural network approach for both encoding and predicting non-discrete attribute information in a relational setting. Specifically, we train a neural network for triplet prediction alongside a separate network for attribute value regression. Via multi-task learning, we are able to learn representations of entities, relations and attributes that encode information about both tasks. Moreover, such attributes are central to many predictive tasks not only as an information source but also as a prediction target, so models that can encode, incorporate and predict this information in a relational learning context are highly attractive. We show that our approach outperforms many state-of-the-art methods on the tasks of relational triplet classification and attribute value prediction.
Comment: Accepted at CIKM 201
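As a rough illustration of the multi-task setup, the following PyTorch sketch shares entity embeddings between a triplet-scoring head and an attribute-regression head and sums their losses. The MLP heads, dimensions and names are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskKG(nn.Module):
    def __init__(self, n_entities, n_relations, n_attributes, dim=64):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)      # shared by both tasks
        self.rel = nn.Embedding(n_relations, dim)
        self.attr = nn.Embedding(n_attributes, dim)
        self.triple_head = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.value_head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def score_triple(self, h, r, t):
        x = torch.cat([self.ent(h), self.rel(r), self.ent(t)], dim=-1)
        return self.triple_head(x).squeeze(-1)        # plausibility logit

    def predict_value(self, e, a):
        x = torch.cat([self.ent(e), self.attr(a)], dim=-1)
        return self.value_head(x).squeeze(-1)         # regressed attribute value

model = MultiTaskKG(n_entities=1000, n_relations=50, n_attributes=10)
bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()

# Joint objective: triplet classification loss plus attribute regression loss,
# with toy ids and a toy target value (e.g. a height of 1.78).
h, r, t = torch.tensor([0]), torch.tensor([3]), torch.tensor([7])
e, a, v = torch.tensor([0]), torch.tensor([2]), torch.tensor([1.78])
loss = bce(model.score_triple(h, r, t), torch.ones(1)) \
     + mse(model.predict_value(e, a), v)
loss.backward()
```

Because `self.ent` is used by both heads, gradients from both tasks shape the same entity representations, which is the core of the multi-task idea.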
Does William Shakespeare REALLY Write Hamlet? Knowledge Representation Learning with Confidence
Knowledge graphs (KGs), which provide essential relational information between entities, have been widely utilized in various knowledge-driven applications. Since human knowledge is vast and still grows and changes rapidly, knowledge construction and update inevitably involve automatic mechanisms with little human supervision, which usually introduce noise and conflicts into KGs. However, most conventional knowledge representation learning methods assume that all triple facts in existing KGs are equally significant and free of noise. To address this problem, we propose a novel confidence-aware knowledge representation learning framework (CKRL), which detects possible noise in KGs while simultaneously learning knowledge representations with confidence. Specifically, we introduce triple confidence into conventional translation-based methods for knowledge representation learning. To make triple confidence more flexible and universal, we utilize only the internal structural information in KGs, and propose three kinds of triple confidence that consider both local and global structural information. In experiments, we evaluate our models on knowledge graph noise detection, knowledge graph completion and triple classification. Experimental results demonstrate that our confidence-aware models achieve significant and consistent improvements on all tasks, confirming the capability of CKRL to model confidence with structural information for both KG noise detection and knowledge representation learning.
Comment: 8 pages
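The confidence mechanism can be sketched on top of a translation-based model such as TransE. In the toy PyTorch snippet below, each training triple carries a confidence score that scales its margin loss, and the confidence of poorly fitting triples decays; the specific update rule is an illustrative stand-in for CKRL's local and global confidence measures, not the paper's formulas.

```python
import torch
import torch.nn.functional as F

n_ent, n_rel, dim, margin = 1000, 50, 64, 1.0
E = torch.nn.Embedding(n_ent, dim)
R = torch.nn.Embedding(n_rel, dim)

def energy(h, r, t):
    # TransE energy ||h + r - t||: lower means more plausible.
    return (E(h) + R(r) - E(t)).norm(p=2, dim=-1)

# One observed triple, one negative sample, and the triple's current confidence.
h, r, t = torch.tensor([1]), torch.tensor([4]), torch.tensor([9])
t_neg = torch.tensor([17])
conf = torch.tensor([0.9])   # triple confidence in [0, 1]

pos, neg = energy(h, r, t), energy(h, r, t_neg)
loss = (conf * F.relu(margin + pos - neg)).sum()   # confidence-weighted margin loss
loss.backward()

# Illustrative confidence update: a triple whose energy stays high relative to
# its negatives looks like noise, so its confidence is decayed over training.
with torch.no_grad():
    conf = torch.clamp(conf - 0.1 * torch.sigmoid(pos - neg), 0.0, 1.0)
```

Low-confidence triples thus contribute less to the embedding gradients, and the learned confidences themselves can be read off as a noise-detection signal.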
Iteratively Learning Embeddings and Rules for Knowledge Graph Reasoning
Reasoning is essential for the development of large knowledge graphs, especially for completion, which aims to infer new triples based on existing ones. Both rules and embeddings can be used for knowledge graph reasoning, and each has its own advantages and difficulties. Rule-based reasoning is accurate and explainable, but rule learning by searching over the graph suffers from inefficiency due to the huge search space. Embedding-based reasoning is more scalable and efficient, as the reasoning is conducted via computation between embeddings, but it has difficulty learning good representations for sparse entities because a good embedding relies heavily on data richness. Based on this observation, in this paper we explore how embedding learning and rule learning can be combined so that each compensates for the other's difficulties with its advantages. We propose IterE, a novel framework that iteratively learns embeddings and rules, in which rules are learned from embeddings with a proper pruning strategy and embeddings are learned from existing triples together with new triples inferred by rules. Evaluations of the embedding quality of IterE show that rules help improve the quality of sparse entity embeddings and their link prediction results. We also evaluate the efficiency of rule learning and the quality of rules from IterE compared with AMIE+, showing that IterE generates high-quality rules more efficiently. Experiments show that embedding learning and rule learning benefit each other during both learning and prediction.
Comment: This paper is accepted by WWW'1
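A toy version of the iteration loop makes the interplay concrete. In the runnable sketch below, the data, the TransE-style fitting objective, the composition-rule scoring and the pruning threshold are all illustrative assumptions rather than IterE's actual components; the point is only the alternation between embedding learning, rule pruning and rule-based triple inference.

```python
import torch

# (head, relation, tail) id triples and one candidate composition rule.
triples = {(0, 0, 1), (1, 1, 2), (3, 0, 4), (4, 1, 5)}
candidate_rules = [((0, 1), 2)]   # hypothesis: r0(x,y) & r1(y,z) => r2(x,z)

n_ent, dim = 6, 16
E = torch.randn(n_ent, dim, requires_grad=True)
R = torch.randn(3, dim, requires_grad=True)

def fit_embeddings(steps=200):
    # TransE-style fit on the current triple set (positives only: a toy objective).
    opt = torch.optim.Adam([E, R], lr=0.05)
    for _ in range(steps):
        loss = sum((E[h] + R[r] - E[t]).norm() for h, r, t in triples)
        opt.zero_grad(); loss.backward(); opt.step()

def rule_score(body, head):
    # Illustrative pruning signal: a composition rule looks plausible if the
    # body relations' embeddings roughly sum to the head relation's embedding.
    return -(R[body[0]] + R[body[1]] - R[head]).norm().item()

for _ in range(3):  # iterate: embeddings -> pruned rules -> inferred triples
    fit_embeddings()
    kept = [rule for rule in candidate_rules if rule_score(*rule) > -2.0]
    for (r1, r2), r3 in kept:
        for h1, ra, t1 in list(triples):
            for h2, rb, t2 in list(triples):
                if ra == r1 and rb == r2 and t1 == h2:
                    triples.add((h1, r3, t2))   # new triple for the next round
```

Each pass enlarges the triple set where rules fire, which in turn gives sparse entities more training signal in the next embedding round.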
Interaction Embeddings for Prediction and Explanation in Knowledge Graphs
Knowledge graph embedding aims to learn distributed representations for entities and relations, and has proven effective in many applications. Crossover interactions, that is, bi-directional effects between entities and relations, help select related information when predicting a new triple, but have not been formally discussed before. In this paper, we propose CrossE, a novel knowledge graph embedding method which explicitly simulates crossover interactions. It not only learns one general embedding for each entity and relation, as most previous methods do, but also generates multiple triple-specific embeddings for both of them, named interaction embeddings. We evaluate the embeddings on typical link prediction tasks and find that CrossE achieves state-of-the-art results on complex and more challenging datasets. Furthermore, we evaluate the embeddings from a new perspective: giving explanations for predicted triples, which is important for real applications. In this work, an explanation for a triple is regarded as a reliable closed path between the head and the tail entity. Compared to other baselines, we show experimentally that CrossE, benefiting from interaction embeddings, is more capable of generating reliable explanations to support its predictions.
Comment: This paper is accepted by WSDM201
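A minimal PyTorch sketch of the interaction-embedding idea follows; the dimensions and the exact combination of terms are simplified assumptions rather than the paper's precise formulation. A relation-specific interaction vector c_r modulates the general head and relation embeddings into triple-specific ones before scoring against the tail.

```python
import torch
import torch.nn as nn

class CrossESketch(nn.Module):
    def __init__(self, n_ent, n_rel, dim=64):
        super().__init__()
        self.E = nn.Embedding(n_ent, dim)
        self.R = nn.Embedding(n_rel, dim)
        self.C = nn.Embedding(n_rel, dim)    # relation-specific interaction vectors
        self.b = nn.Parameter(torch.zeros(dim))

    def score(self, h, r, t):
        c = self.C(r)
        h_i = c * self.E(h)                  # triple-specific head embedding
        r_i = h_i * self.R(r)                # triple-specific relation embedding
        q = torch.tanh(h_i + r_i + self.b)   # combined crossover representation
        return torch.sigmoid((q * self.E(t)).sum(-1))  # plausibility in (0, 1)

model = CrossESketch(n_ent=1000, n_rel=50)
s = model.score(torch.tensor([3]), torch.tensor([1]), torch.tensor([42]))
```

Because h_i and r_i depend on the specific (h, r) pair, the same entity or relation can contribute different information to different triples, which is what distinguishes interaction embeddings from a single general embedding per element.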