Learning Word Representations from Relational Graphs
Attributes of words and relations between two words are central to numerous
tasks in Artificial Intelligence such as knowledge representation, similarity
measurement, and analogy detection. Often when two words share one or more
attributes in common, they are connected by some semantic relations. On the
other hand, if there are numerous semantic relations between two words, we can
expect some of the attributes of one of the words to be inherited by the other.
Motivated by this close connection between attributes and relations, given a
relational graph in which words are interconnected via numerous semantic
relations, we propose a method to learn a latent representation for the
individual words. The proposed method considers not only the co-occurrences of
words as done by existing approaches for word representation learning, but also
the semantic relations in which two words co-occur. To evaluate the accuracy of
the word representations learnt using the proposed method, we use the learnt
word representations to solve semantic word analogy problems. Our experimental
results show that it is possible to learn better word representations by using
semantic relations between words. Comment: AAAI 201
Neural-Symbolic Relational Reasoning on Graph Models: Effective Link Inference and Computation from Knowledge Bases
The recent developments and growing interest in neural-symbolic models have
shown that hybrid approaches can offer richer models for Artificial
Intelligence. The integration of effective relational learning and reasoning
methods is one of the key challenges in this direction, as neural learning and
symbolic reasoning offer complementary characteristics that can benefit the
development of AI systems. Relational labelling or link prediction on knowledge
graphs has become one of the main problems in deep learning-based natural
language processing research. Moreover, other fields which make use of
neural-symbolic techniques may also benefit from such research endeavours.
There have been several efforts towards the identification of missing facts
from existing ones in knowledge graphs. Two lines of research try to predict
knowledge relations between two entities by considering all known facts
connecting them or several paths of facts connecting them. We propose a
neural-symbolic graph neural network which applies learning over all the paths
by feeding the model with the embedding of the minimal subset of the knowledge
graph containing such paths. By learning to produce representations for
entities and facts corresponding to word embeddings, we show how the model can
be trained end-to-end to decode these representations and infer relations
between entities in a multitask approach. Our contribution is two-fold: a
neural-symbolic methodology that leverages relational inference in large
graphs, and a demonstration that such a neural-symbolic model is more
effective than path-based approaches. Comment: Under review: ICANN 202
DeepWalk: Online Learning of Social Representations
We present DeepWalk, a novel approach for learning latent representations of
vertices in a network. These latent representations encode social relations in
a continuous vector space, which is easily exploited by statistical models.
DeepWalk generalizes recent advancements in language modeling and unsupervised
feature learning (or deep learning) from sequences of words to graphs. DeepWalk
uses local information obtained from truncated random walks to learn latent
representations by treating walks as the equivalent of sentences. We
demonstrate DeepWalk's latent representations on several multi-label network
classification tasks for social networks such as BlogCatalog, Flickr, and
YouTube. Our results show that DeepWalk outperforms challenging baselines which
are allowed a global view of the network, especially in the presence of missing
information. DeepWalk's representations can provide scores up to 10%
higher than competing methods when labeled data is sparse. In some experiments,
DeepWalk's representations are able to outperform all baseline methods while
using 60% less training data. DeepWalk is also scalable. It is an online
learning algorithm which builds useful incremental results, and is trivially
parallelizable. These qualities make it suitable for a broad class of real
world applications such as network classification and anomaly detection. Comment: 10 pages, 5 figures, 4 tables
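The walk-generation step the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it generates truncated random walks over a toy adjacency list (the graph, function name, and parameter defaults are all hypothetical); in the full method, each walk would then be fed to a skip-gram language model as if it were a sentence, yielding one embedding per vertex.

```python
import random


def deepwalk_walks(graph, num_walks=10, walk_length=8, seed=0):
    """Generate truncated random walks over `graph` (an adjacency list).

    Each walk is a list of vertices, later treated like a sentence of
    'words' by a skip-gram model. Sketch only; names are hypothetical.
    """
    rng = random.Random(seed)
    walks = []
    nodes = list(graph)
    for _ in range(num_walks):
        rng.shuffle(nodes)  # randomize start order each pass
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = graph[walk[-1]]
                if not neighbors:  # dead end: truncate the walk early
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks


# Toy undirected graph as an adjacency list (hypothetical example).
graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}
walks = deepwalk_walks(graph, num_walks=2, walk_length=5)
# Each walk would then be passed to a skip-gram model (e.g. word2vec)
# to learn a latent representation for every vertex.
```

Because each walk depends only on local neighborhoods, walk generation for different start vertices is independent, which is what makes the approach trivially parallelizable and usable online as the abstract claims.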