Learning Word Representations from Relational Graphs
Attributes of words and relations between two words are central to numerous
tasks in Artificial Intelligence such as knowledge representation, similarity
measurement, and analogy detection. Often when two words share one or more
attributes in common, they are connected by some semantic relations. On the
other hand, if there are numerous semantic relations between two words, we can
expect some of the attributes of one of the words to be inherited by the other.
Motivated by this close connection between attributes and relations, given a
relational graph in which words are interconnected via numerous semantic
relations, we propose a method to learn a latent representation for the
individual words. The proposed method considers not only the co-occurrences of
words as done by existing approaches for word representation learning, but also
the semantic relations in which two words co-occur. To evaluate the accuracy of
the word representations learnt using the proposed method, we use the learnt
word representations to solve semantic word analogy problems. Our experimental
results show that it is possible to learn better word representations by using
semantic relations between words.

Comment: AAAI 201
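The abstract above evaluates learnt word representations on semantic word analogy problems. The abstract does not specify the evaluation protocol, but a common approach is the vector-offset method: answer "a is to b as c is to ?" by finding the vocabulary word closest to b - a + c under cosine similarity. A minimal sketch with a toy, hand-built vocabulary (all embedding values are hypothetical, for illustration only):

```python
import numpy as np

def solve_analogy(a, b, c, embeddings):
    """Answer 'a is to b as c is to ?' via the vector-offset method:
    return the word whose embedding has the highest cosine similarity
    to b - a + c, excluding the three question words themselves."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    best_word, best_sim = None, -np.inf
    for word, vec in embeddings.items():
        if word in (a, b, c):
            continue  # exclude the question words
        sim = np.dot(target, vec) / (np.linalg.norm(target) * np.linalg.norm(vec))
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# Toy 2-dimensional embeddings (hypothetical values)
emb = {
    "king":  np.array([1.0, 1.0]),
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([0.0, 0.1]),
    "queen": np.array([0.1, 1.1]),
}

print(solve_analogy("man", "king", "woman", emb))  # → queen
```

A representation scores well on this benchmark when the offset b - a captures the relation (here, a gender relation) consistently across word pairs.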
A Review on Computing Semantic Similarity of Concepts in Knowledge Graphs
Semantic similarity is a metric defined over a set of documents or terms, in which the distance between items is based on the likeness of their meaning or semantic content, as opposed to syntactic similarity, which can be estimated from their surface representation (e.g. their string format). One drawback of conventional knowledge-based approaches (e.g. path or lch) is that the semantic similarity of any two concepts with the same path length is identical (the uniform distance problem).

We propose a weighted path length (wpath) method that combines both path length and information content (IC) in measuring the semantic similarity between concepts. The IC of two concepts' LCS is used to weight their shortest path length, so that concept pairs with the same path length can receive different semantic similarity scores if they have different LCSs.
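The idea of weighting the shortest path length by the IC of the least common subsumer (LCS) can be sketched as follows. The function name, the tuning parameter `k`, and the exact form sim = 1 / (1 + length * k^IC(LCS)) are a common formulation of this approach, not quoted from the abstract:

```python
def wpath_similarity(path_length, ic_lcs, k=0.8):
    """Weighted path length (wpath) similarity, a sketch of the idea
    described above: the shortest path length between two concepts is
    scaled by k raised to the information content (IC) of their least
    common subsumer (LCS). Pairs with equal path lengths but different
    LCS ICs therefore receive different scores, avoiding the uniform
    distance problem. `k` (0 < k <= 1) is a tuning parameter."""
    return 1.0 / (1.0 + path_length * (k ** ic_lcs))

# Two concept pairs with the same path length (3) but different LCS IC:
general = wpath_similarity(3, ic_lcs=0.2)   # shallow, generic LCS
specific = wpath_similarity(3, ic_lcs=0.9)  # deep, informative LCS
assert specific > general  # a more informative LCS yields higher similarity
```

Because IC grows for more specific concepts and k < 1, a pair sharing a deep, informative LCS is judged more similar than a pair sharing only a generic ancestor, even at identical path lengths.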