
    Leveraging multilingual descriptions for link prediction: Initial experiments

    In most Knowledge Graphs (KGs), textual descriptions of entities are provided in multiple natural languages. Additional information that is not explicitly represented in the structured part of the KG might be available in these textual descriptions. Link prediction models which make use of entity descriptions usually consider only one language. However, descriptions given in multiple languages may provide complementary information which should be taken into consideration for tasks such as link prediction. In this poster paper, the benefits of multilingual embeddings for incorporating multilingual entity descriptions into the task of link prediction in KGs are investigated.
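    As a rough illustration of the idea (not the poster's actual model), the sketch below combines a multilingual description vector, e.g. produced by an off-the-shelf multilingual sentence encoder, with a structural entity embedding inside a DistMult-style link prediction scorer. The dimensions, the fusion by projection and sum, and the choice of scorer are all assumptions made here for illustration.

```python
# A minimal sketch, assuming a DistMult-style scorer and precomputed
# multilingual description vectors; none of these choices are taken
# from the poster itself.
import torch
import torch.nn as nn

class DescriptionFusedDistMult(nn.Module):
    def __init__(self, n_entities, n_relations, dim=200, desc_dim=384):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)   # structural entity embeddings
        self.rel = nn.Embedding(n_relations, dim)  # relation embeddings
        self.proj = nn.Linear(desc_dim, dim)       # maps description vectors into KG space

    def entity_repr(self, idx, desc_vec):
        # Combine structure and text; a simple sum is one of many options.
        return self.ent(idx) + self.proj(desc_vec)

    def score(self, h_idx, r_idx, t_idx, h_desc, t_desc):
        h = self.entity_repr(h_idx, h_desc)
        t = self.entity_repr(t_idx, t_desc)
        r = self.rel(r_idx)
        return (h * r * t).sum(dim=-1)             # DistMult triple score

# Usage with random stand-ins for multilingual description vectors
model = DescriptionFusedDistMult(n_entities=1000, n_relations=50)
h_desc = torch.randn(4, 384)   # e.g. output of a multilingual sentence encoder
t_desc = torch.randn(4, 384)
scores = model.score(torch.tensor([0, 1, 2, 3]), torch.tensor([0, 1, 2, 3]),
                     torch.tensor([4, 5, 6, 7]), h_desc, t_desc)
```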

    Semantic entity enrichment by leveraging multilingual descriptions for link prediction

    Most Knowledge Graphs (KGs) contain textual descriptions of entities in various natural languages. These descriptions provide valuable information that may not be explicitly represented in the structured part of the KG. Based on this fact, some link prediction methods which make use of the information presented in the textual descriptions of entities have been proposed to learn representations of (monolingual) KGs. However, these methods use entity descriptions in only one language and ignore the fact that descriptions given in different languages may provide complementary information and thereby also additional semantics. In this position paper, the problem of effectively leveraging multilingual entity descriptions for the purpose of link prediction in KGs is discussed, along with potential solutions.
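    One way to make the complementary information across languages explicit, sketched below purely as an illustration rather than as the paper's proposal, is to pool an entity's per-language description vectors with learned attention, masking out languages in which no description exists.

```python
# A minimal sketch under assumptions not stated in the abstract:
# attention pooling over language-specific description embeddings.
import torch
import torch.nn as nn

class LanguageAttentionPooling(nn.Module):
    def __init__(self, desc_dim=384):
        super().__init__()
        self.attn = nn.Linear(desc_dim, 1)  # scores each language's description

    def forward(self, desc_per_lang, mask):
        # desc_per_lang: [batch, n_langs, desc_dim]; mask: [batch, n_langs], 1 = description exists
        scores = self.attn(desc_per_lang).squeeze(-1)          # [batch, n_langs]
        scores = scores.masked_fill(mask == 0, float("-inf"))  # ignore missing languages
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)  # [batch, n_langs, 1]
        return (weights * desc_per_lang).sum(dim=1)            # [batch, desc_dim]

pool = LanguageAttentionPooling()
descs = torch.randn(2, 3, 384)               # 2 entities, descriptions in up to 3 languages
mask = torch.tensor([[1, 1, 0], [1, 1, 1]])  # first entity lacks a third-language description
fused = pool(descs, mask)                    # one text vector per entity for downstream scoring
```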

    Leveraging literals for knowledge graph embeddings

    Nowadays, Knowledge Graphs (KGs) have become invaluable for various applications such as named entity recognition, entity linking, and question answering. However, there is a substantial computational and storage cost associated with these KG-based applications. This gives rise to the need to transform high-dimensional KGs into low-dimensional vector spaces, i.e., to learn representations for the KGs. Since a KG represents facts both as interrelations between entities and as attributes of entities, the semantics present in both forms should be preserved when transforming the KG into a vector space. Hence, the main focus of this thesis is to deal with the multimodality and multilinguality of literals when utilizing them for the representation learning of KGs. The other task is to extract benchmark datasets with a high level of difficulty for tasks such as link prediction and triple classification. These datasets can be used for evaluating both kinds of KG embeddings: those that use literals and those that do not.
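    For the literal side of this topic, the sketch below shows one published style of literal incorporation, a LiteralE-like gate that mixes an entity's numeric attribute features into its structural embedding before triple scoring. The dimensions and the exact gating form are illustrative assumptions, not the thesis' own models.

```python
# A minimal, illustrative sketch of gating numeric literals into entity
# embeddings; hyperparameters and the gate design are assumptions.
import torch
import torch.nn as nn

class LiteralGate(nn.Module):
    def __init__(self, dim=200, n_literals=10):
        super().__init__()
        self.w = nn.Linear(dim + n_literals, dim)  # produces the gate
        self.u = nn.Linear(dim + n_literals, dim)  # produces the candidate update

    def forward(self, ent_emb, literals):
        # ent_emb: [batch, dim]; literals: [batch, n_literals], e.g. normalized numeric attributes
        x = torch.cat([ent_emb, literals], dim=-1)
        gate = torch.sigmoid(self.w(x))            # how much literal information to let in
        cand = torch.tanh(self.u(x))               # literal-aware candidate representation
        return gate * cand + (1 - gate) * ent_emb  # gated mixture of structure and literals

gate = LiteralGate()
ent = torch.randn(8, 200)   # structural entity embeddings
lits = torch.randn(8, 10)   # numeric literal features per entity
enriched = gate(ent, lits)  # literal-enriched embeddings for a downstream link prediction scorer
```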