Local Embeddings for Relational Data Integration
Deep-learning-based techniques have recently been used with promising results
for data integration problems. Some methods directly use pre-trained embeddings
that were trained on a large corpus such as Wikipedia. However, they may not
always be an appropriate choice for enterprise datasets with custom vocabulary.
Other methods adapt techniques from natural language processing to obtain
embeddings for the enterprise's relational data. However, this approach blindly
treats a tuple as a sentence, thus losing a large amount of contextual
information present in the tuple.
We propose algorithms for obtaining local embeddings that are effective for
data integration tasks on relational databases. We make four major
contributions. First, we describe a compact graph-based representation that
allows the specification of a rich set of relationships inherent in the
relational world. Second, we propose how to derive sentences from such a graph
that effectively "describe" the similarity across elements (tokens, attributes,
rows) in the two datasets. The embeddings are learned based on such sentences.
Third, we propose effective optimizations that improve the quality of the
learned embeddings and the performance of integration tasks. Finally, we propose a
diverse collection of criteria to evaluate relational embeddings and perform an
extensive set of experiments validating them against multiple baseline methods.
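The first two contributions can be illustrated with a rough sketch: tuples are turned into a graph connecting row ids, attribute names, and cell tokens, and random walks over that graph yield the "sentences" fed to a skip-gram model such as word2vec. This is a minimal illustration under assumed conventions (node prefixes, uniform-random walks), not the authors' EmbDI implementation:

```python
import random

def build_graph(rows, attributes):
    # Tripartite graph: row nodes ("idx__i"), attribute nodes ("cid__name"),
    # and token nodes ("tt__value"); edges are stored as an adjacency dict.
    graph = {}
    def add_edge(a, b):
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    for i, row in enumerate(rows):
        rid = f"idx__{i}"
        for attr, value in zip(attributes, row):
            token = f"tt__{value}"
            add_edge(rid, token)          # token appears in this row
            add_edge(f"cid__{attr}", token)  # token belongs to this column
    return graph

def random_walks(graph, n_walks=10, length=5, seed=0):
    # Each walk becomes one "sentence"; a skip-gram model trained on these
    # sentences would produce the local embeddings.
    rng = random.Random(seed)
    nodes = sorted(graph)
    sentences = []
    for _ in range(n_walks):
        node = rng.choice(nodes)
        walk = [node]
        for _ in range(length - 1):
            node = rng.choice(sorted(graph[node]))  # step to a random neighbor
            walk.append(node)
        sentences.append(walk)
    return sentences

rows = [("alice", "paris"), ("bob", "rome")]
graph = build_graph(rows, ["name", "city"])
sentences = random_walks(graph)
```

Tokens that share a row or a column end up near each other in the walks, so the downstream embedding model places them close in vector space.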
Our experiments show that our framework, EmbDI, produces meaningful results for
data integration tasks such as schema matching and entity resolution both in
supervised and unsupervised settings.

Comment: Accepted to SIGMOD 2020 as "Creating Embeddings of Heterogeneous
Relational Datasets for Data Integration Tasks". Code can be found at
https://gitlab.eurecom.fr/cappuzzo/embd