Local Embeddings for Relational Data Integration
Deep learning based techniques have recently been used with promising results
for data integration problems. Some methods directly use pre-trained embeddings
that were trained on a large corpus such as Wikipedia. However, they may not
always be an appropriate choice for enterprise datasets with custom vocabulary.
Other methods adapt techniques from natural language processing to obtain
embeddings for the enterprise's relational data. However, this approach blindly
treats a tuple as a sentence, thus losing a large amount of contextual
information present in the tuple.
We propose algorithms for obtaining local embeddings that are effective for
data integration tasks on relational databases. We make four major
contributions. First, we describe a compact graph-based representation that
allows the specification of a rich set of relationships inherent in the
relational world. Second, we propose how to derive sentences from such a graph
that effectively "describe" the similarity across elements (tokens, attributes,
rows) in the two datasets. The embeddings are learned based on such sentences.
Third, we propose effective optimizations to improve the quality of the learned
embeddings and the performance of integration tasks. Finally, we propose a
diverse collection of criteria to evaluate relational embeddings and perform an
extensive set of experiments validating them against multiple baseline methods.
Our experiments show that our framework, EmbDI, produces meaningful results for
data integration tasks such as schema matching and entity resolution both in
supervised and unsupervised settings.
Comment: Accepted to SIGMOD 2020 as Creating Embeddings of Heterogeneous
Relational Datasets for Data Integration Tasks. Code can be found at
https://gitlab.eurecom.fr/cappuzzo/embd
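As a rough sketch of the approach described above (this is not EmbDI's actual implementation; the graph schema, the toy rows, and the walk parameters are invented for illustration), tokens, attributes, and row identifiers can be modeled as nodes of one graph, and random walks over that graph yield the "sentences" on which a word-embedding model is then trained:

```python
import random
from collections import defaultdict

def build_graph(rows):
    """rows: list of dicts mapping attribute name -> token value."""
    adj = defaultdict(set)
    for i, row in enumerate(rows):
        rid = f"R{i}"  # row-id node
        for attr, tok in row.items():
            adj[rid].add(tok); adj[tok].add(rid)    # row <-> token edge
            adj[attr].add(tok); adj[tok].add(attr)  # attribute <-> token edge
    return adj

def random_walks(adj, n_walks=10, length=6, seed=0):
    """Each walk becomes one training 'sentence' for an embedding model."""
    rng = random.Random(seed)
    nodes = sorted(adj)
    walks = []
    for _ in range(n_walks):
        node = rng.choice(nodes)
        walk = [node]
        for _ in range(length - 1):
            node = rng.choice(sorted(adj[node]))
            walk.append(node)
        walks.append(walk)
    return walks

# two toy rows whose tokens overlap on "paris"
rows = [{"name": "acme", "city": "paris"},
        {"name": "acme_inc", "city": "paris"}]
adj = build_graph(rows)
walks = random_walks(adj)
```

The resulting walks could then be fed to any word2vec-style trainer (e.g. gensim's `Word2Vec(sentences=walks, ...)`), so that elements sharing many graph neighbors end up with similar vectors.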
Vermeidung von Repräsentationsheterogenitäten in realweltlichen Wissensgraphen (Avoiding Representation Heterogeneities in Real-World Knowledge Graphs)
Knowledge graphs are repositories providing factual knowledge about entities. They are a great source of knowledge to support modern AI applications for Web search, question answering, digital assistants, and online shopping. Advances in machine learning techniques and the Web's growth have led to colossal knowledge graphs with billions of facts about hundreds of millions of entities collected from a large variety of sources. While integrating independent knowledge sources promises rich information, it inherently leads to heterogeneities in representation due to a large variety of different conceptualizations. This heterogeneity threatens the overall utility of real-world knowledge graphs, which, due to their sheer size, can hardly be curated manually anymore. Automatic and semi-automatic methods are needed to cope with these vast knowledge repositories. We first address the general topic of representation heterogeneity by surveying the problem throughout various data-intensive fields: databases, ontologies, and knowledge graphs. Different techniques for automatically resolving heterogeneity issues are presented and discussed, and several open problems are identified. Next, we focus on entity heterogeneity. We show that automatic matching techniques may run into quality problems when working in a multi-knowledge-graph scenario due to incorrect transitive identity links. We present four techniques that can significantly improve the quality of arbitrary entity matching tools. Concerning relation heterogeneity, we show that synonymous relations in knowledge graphs pose several difficulties in querying. We therefore resolve these heterogeneities with knowledge graph embeddings and with Horn rule mining; both methods detect synonymous relations in knowledge graphs with high quality. Furthermore, we present a novel technique for avoiding heterogeneity issues at query time using implicit knowledge storage.
We show that large neural language models are a valuable source of knowledge that can be queried similarly to knowledge graphs and that already resolves several heterogeneity issues internally.
German abstract (translated): Knowledge graphs are an important source of entity knowledge. They support many modern AI applications, including web search, automatic question answering, digital assistants, and online shopping. New advances in machine learning and the extraordinary growth of the Web have led to enormous knowledge graphs. These often comprise billions of facts about hundreds of millions of entities, frequently drawn from many different sources. While the integration of independent knowledge sources can yield a great diversity of information, it inherently leads to heterogeneities in knowledge representation. This heterogeneity in the data threatens the practical utility of the knowledge graphs. Because of their size, however, knowledge graphs can no longer be cleaned up manually; automatic and semi-automatic methods are needed instead. In this thesis we address the topic of representation heterogeneity. We classify heterogeneity along several dimensions and explain heterogeneity problems in databases, ontologies, and knowledge graphs. We also give a concise overview of different techniques for automatically resolving heterogeneity problems. In the next chapter we turn to entity heterogeneity. We point out problems that arise in a multi-knowledge-graph scenario due to faulty transitive links. To solve these problems, we present four techniques with which the quality of arbitrary entity alignment tools can be improved significantly. We show that relation heterogeneity in knowledge graphs can lead to problems in query answering. We therefore develop several methods for finding synonymous relations: one works with high-dimensional knowledge graph embeddings, the other with a rule mining approach. Both methods can detect synonymous relations in knowledge graphs with high quality. Furthermore, we present a novel technique for avoiding heterogeneity problems in which we use an implicit knowledge representation. We show that large neural language models are a valuable source of knowledge that can be queried similarly to knowledge graphs. Many of the heterogeneity problems are already resolved within the language model itself, so that querying heterogeneous knowledge graphs becomes possible.
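One of the relation-heterogeneity steps above, detecting synonymous relations with knowledge graph embeddings, can be sketched as a cosine-similarity scan over learned relation vectors. This is only an illustration of the general idea, not the thesis's method; the relation names and vectors below are toy values, not trained embeddings:

```python
import math

# Toy relation vectors; in practice these would come from an embedding
# model trained on the knowledge graph (e.g. TransE-style translations).
relation_vecs = {
    "bornIn":     [0.90, 0.10, 0.00],
    "birthPlace": [0.88, 0.12, 0.02],
    "worksFor":   [0.00, 0.20, 0.95],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def synonym_candidates(vecs, threshold=0.95):
    """Flag relation pairs whose vectors are nearly parallel."""
    names = sorted(vecs)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if cosine(vecs[a], vecs[b]) >= threshold]

candidates = synonym_candidates(relation_vecs)
```

Here `bornIn` and `birthPlace` are flagged as candidate synonyms while `worksFor` is not; a real pipeline would verify candidates against the graph's instance data.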
From Word to Sense Embeddings: A Survey on Vector Representations of Meaning
Over the past years, distributed semantic representations have proved to be
effective and flexible keepers of prior knowledge to be integrated into
downstream applications. This survey focuses on the representation of meaning.
We start from the theoretical background behind word vector space models and
highlight one of their major limitations: the meaning conflation deficiency,
which arises from representing a word with all its possible meanings as a
single vector. Then, we explain how this deficiency can be addressed through a
transition from the word level to the more fine-grained level of word senses
(in its broader acceptation) as a method for modelling unambiguous lexical
meaning. We present a comprehensive overview of the wide range of techniques in
the two main branches of sense representation, i.e., unsupervised and
knowledge-based. Finally, this survey covers the main evaluation procedures and
applications for this type of representation, and provides an analysis of four
of its important aspects: interpretability, sense granularity, adaptability to
different domains, and compositionality.
Comment: 46 pages, 8 figures. Published in the Journal of Artificial
Intelligence Research
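The meaning conflation deficiency can be made concrete with a toy example (the vectors are invented for illustration): a single vector for "bank" that averages its sense vectors ends up equally similar to both the finance sense and the river sense, so neither meaning is modeled unambiguously:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Hypothetical sense vectors for two senses of "bank".
sense_vecs = {
    "bank#finance": [1.0, 0.0],
    "bank#river":   [0.0, 1.0],
}

# A word-level model conflates the senses into one vector,
# here idealized as the average of the sense vectors.
word_vec = [(a + b) / 2 for a, b in zip(*sense_vecs.values())]

# The conflated vector is equidistant from both senses.
sims = {s: cosine(word_vec, v) for s, v in sense_vecs.items()}
```

Sense-level representations sidestep this by keeping `bank#finance` and `bank#river` as separate vectors instead of one blended `bank` vector.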
Multi-behavior Recommendation with SVD Graph Neural Networks
Graph Neural Networks (GNNs) have been extensively employed in the field of
recommender systems, offering users personalized recommendations and yielding
remarkable outcomes. Recently, GNNs incorporating contrastive learning have
demonstrated promising performance in handling the sparse-data problem of
recommender systems. However, existing contrastive learning methods still
have limitations in addressing the cold-start problem and resisting noise
interference, especially for multi-behavior recommendation. To mitigate these
issues, this work proposes a GNN-based multi-behavior recommendation model,
MB-SVD, that utilizes Singular Value Decomposition (SVD) graphs to enhance
model performance. In particular, MB-SVD considers user preferences under
different behaviors, improving recommendation effectiveness while better
addressing the cold-start problem. The model introduces a multi-behavior
contrastive learning paradigm to discern the intricate interconnections among
heterogeneous user behaviors, and generates SVD graphs to automatically
distill crucial multi-behavior self-supervised signals for robust graph
augmentation. Furthermore, the SVD-based framework reduces embedding
dimensions and computational load. Thorough experiments demonstrate the
strong performance of the proposed MB-SVD approach on multi-behavior
recommendation tasks across diverse real-world datasets.
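A minimal sketch of the SVD-graph idea, assuming (as the abstract suggests, but without access to the paper's implementation) that a low-rank reconstruction of the user-item interaction matrix serves as a denoised augmented graph. The toy matrix and the rank-1 power iteration below are illustrative only:

```python
import math

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def norm(x):
    return math.sqrt(sum(v * v for v in x))

def rank1_svd(A, iters=50):
    """Top singular triple (u, sigma, v) via power iteration on A^T A."""
    v = [1.0] * len(A[0])
    At = transpose(A)
    for _ in range(iters):
        v = matvec(At, matvec(A, v))
        n = norm(v)
        v = [x / n for x in v]
    Av = matvec(A, v)
    sigma = norm(Av)
    u = [x / sigma for x in Av]
    return u, sigma, v

# toy user-item interaction matrix (implicit feedback)
A = [[1, 1, 0],
     [1, 1, 0],
     [0, 0, 1]]
u, s, v = rank1_svd(A)

# rank-1 reconstruction: the "SVD graph" used as a denoised augmentation,
# keeping the dominant co-occurrence block and dropping weaker signal
A1 = [[s * ui * vj for vj in v] for ui in u]
```

A real system would keep the top-k singular triples (k > 1) and contrast node representations from the original and reconstructed graphs; the low rank also caps the embedding dimensionality, matching the abstract's efficiency claim.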