MinoanER: Schema-Agnostic, Non-Iterative, Massively Parallel Resolution of Web Entities
Entity Resolution (ER) aims to identify different descriptions in various
Knowledge Bases (KBs) that refer to the same entity. ER is challenged by the
Variety, Volume and Veracity of entity descriptions published in the Web of
Data. To address them, we propose the MinoanER framework that simultaneously
fulfills full automation, support of highly heterogeneous entities, and massive
parallelization of the ER process. MinoanER leverages a token-based similarity
of entities to define a new metric that derives the similarity of neighboring
entities from the most important relations, as they are indicated only by
statistics. A composite blocking method is employed to capture different
sources of matching evidence from the content, neighbors, or names of entities.
The search space of candidate pairs for comparison is compactly abstracted by a
novel disjunctive blocking graph and processed by a non-iterative, massively
parallel matching algorithm that consists of four generic, schema-agnostic
matching rules that are quite robust with respect to their internal
configuration. We demonstrate that the effectiveness of MinoanER is comparable
to existing ER tools over real KBs exhibiting low Variety, but it outperforms
them significantly when matching KBs with high Variety.
Comment: Presented at EDBT 2019
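The schema-agnostic, token-based candidate generation that MinoanER builds on can be sketched as plain token blocking: any two entities sharing a token in any attribute value become a candidate pair, regardless of schema. This is a minimal illustration of the idea, not the framework's actual API; all function and variable names here are invented for the example.

```python
# Minimal sketch of schema-agnostic token blocking: attribute names are
# ignored, only the tokens of attribute values matter. Illustrative only.
from collections import defaultdict
from itertools import combinations

def token_blocks(entities):
    """Map each token appearing in any attribute value to the set of
    entity ids whose descriptions contain it."""
    blocks = defaultdict(set)
    for eid, attrs in entities.items():
        for value in attrs.values():
            for token in str(value).lower().split():
                blocks[token].add(eid)
    return blocks

def candidate_pairs(entities):
    """Entities sharing at least one token form a candidate pair."""
    pairs = set()
    for ids in token_blocks(entities).values():
        pairs.update(combinations(sorted(ids), 2))
    return pairs

entities = {
    "e1": {"name": "Minoan Palace", "loc": "Knossos Crete"},
    "e2": {"title": "Palace of Knossos", "country": "Greece"},
    "e3": {"name": "Eiffel Tower", "loc": "Paris"},
}
print(candidate_pairs(entities))  # e1/e2 share 'palace' and 'knossos'
```

MinoanER goes further, combining such content evidence with neighbor and name evidence in a disjunctive blocking graph, but the token-sharing step is the common starting point.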
Heterogeneous Entity Matching with Complex Attribute Associations using BERT and Neural Networks
Across various domains, data from different sources such as Baidu Baike and
Wikipedia often manifest in distinct forms. Current entity matching
methodologies predominantly focus on homogeneous data, characterized by
attributes that share the same structure and concise attribute values. However,
this orientation poses challenges in handling data with diverse formats.
Moreover, prevailing approaches aggregate the similarity of attribute values
between corresponding attributes to ascertain entity similarity. Yet, they
often overlook the intricate interrelationships between attributes, where one
attribute may have multiple associations. The simplistic approach of pairwise
attribute comparison fails to harness the wealth of information encapsulated
within entities. To address these challenges, we introduce a novel entity
matching model, dubbed Entity Matching Model for Capturing Complex Attribute
Relationships (EMM-CCAR), built upon pre-trained models. Specifically, this model
transforms the matching task into a sequence matching problem to mitigate the
impact of varying data formats. Moreover, by introducing attention mechanisms,
it identifies complex relationships between attributes, emphasizing the degree
of matching among multiple attributes rather than one-to-one correspondences.
Through the integration of the EMM-CCAR model, we adeptly surmount the
challenges posed by data heterogeneity and intricate attribute
interdependencies. In comparison with the prevalent DER-SSM and Ditto
approaches, our model achieves improvements of approximately 4% and 1% in F1
scores, respectively. This furnishes a robust solution for addressing the
intricacies of attribute complexity in entity matching.
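The "transform the matching task into a sequence matching problem" step can be illustrated with the common serialization convention used by pre-trained matchers such as Ditto: flatten each record into marker-delimited text so that differently structured records share one input format. EMM-CCAR's exact input format is not given in the abstract, so the markers below are an assumption.

```python
# Hedged sketch of serializing a heterogeneous entity pair into one
# sequence for a BERT-style matcher. The [COL]/[VAL]/[SEP] markers follow
# the Ditto-style convention and are assumptions, not EMM-CCAR's spec.
def serialize(entity):
    # Flatten attributes into "[COL] name [VAL] value" spans, so records
    # with different schemas become comparable text.
    return " ".join(f"[COL] {k} [VAL] {v}" for k, v in entity.items())

def pair_sequence(left, right):
    # The joined pair would be fed to a pre-trained encoder for binary
    # match/non-match classification.
    return f"{serialize(left)} [SEP] {serialize(right)}"

a = {"name": "iPhone 13", "storage": "128 GB"}
b = {"title": "Apple iPhone 13 (128GB)"}
print(pair_sequence(a, b))
```

On top of such sequences, EMM-CCAR's contribution is the attention mechanism that scores many-to-many attribute associations rather than only one-to-one attribute pairs.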
Entity reconciliation in big data sources: A systematic mapping study
The entity reconciliation (ER) problem aroused much interest as a research topic in today's Big Data era, full of big and open heterogeneous data sources. This problem arises when relevant information on a topic needs to be obtained using methods based on: (i) identifying records that represent the same real-world entity, and (ii) identifying those records that are similar but do not correspond to the same real-world entity. ER is an operational intelligence process, whereby organizations can unify different and heterogeneous data sources in order to relate possible matches of non-obvious entities. Besides the complexity that the heterogeneity of data sources involves, the large number of records and differences among languages, for instance, must be added. This paper describes a Systematic Mapping Study (SMS) of journal articles, conferences and workshops published from 2010 to 2017 to solve the problem described before, first trying to understand the state of the art, and then identifying any gaps in current research. Eleven digital libraries were analyzed following a systematic, semiautomatic and rigorous process that has resulted in 61 primary studies. They represent a great variety of intelligent proposals that aim to solve ER. The conclusion obtained is that most of the research is based on the operational phase as opposed to the design phase, and most studies have been tested on real-world data sources, where a lot of them are heterogeneous, but just a few apply to industry. There is a clear trend in research techniques based on clustering/blocking and graphs, although the level of automation of the proposals is hardly ever mentioned in the research work.
Ministerio de Economía y Competitividad TIN2013-46928-C3-3-R
Ministerio de Economía y Competitividad TIN2016-76956-C3-2-R
Ministerio de Economía y Competitividad TIN2015-71938-RED
End-to-End Entity Resolution for Big Data: A Survey
One of the most important tasks for improving data quality and the
reliability of data analytics results is Entity Resolution (ER). ER aims to
identify different descriptions that refer to the same real-world entity, and
remains a challenging problem. While previous works have studied specific
aspects of ER (and mostly in traditional settings), in this survey, we provide
for the first time an end-to-end view of modern ER workflows, and of the novel
aspects of entity indexing and matching methods in order to cope with more than
one of the Big Data characteristics simultaneously. We present the basic
concepts, processing steps and execution strategies that have been proposed by
different communities, i.e., database, semantic Web and machine learning, in
order to cope with the loose structuredness, extreme diversity, high speed and
large scale of entity descriptions used by real-world applications. Finally, we
provide a synthetic discussion of the existing approaches, and conclude with a
detailed presentation of open research directions.
STEM: stacked threshold-based entity matching for knowledge base generation
One of the major issues encountered in the generation of knowledge bases is the integration of data coming from
a collection of heterogeneous data sources. A key essential task when integrating data instances is the entity matching. Entity
matching is based on the definition of a similarity measure among entities and on the classification of the entity pair as a match
if the similarity exceeds a certain threshold. This parameter introduces a trade-off between the precision and the recall of the
algorithm, as higher values of the threshold lead to higher precision and lower recall, and lower values lead to higher recall
and lower precision. In this paper, we propose a stacking approach for threshold-based classifiers. It runs several instances of
classifiers corresponding to different thresholds and use their predictions as a feature vector for a supervised learner. We show that
this approach is able to break the trade-off between the precision and recall of the algorithm, increasing both at the same time and
enhancing the overall performance of the algorithm. We also show that this hybrid approach performs better and is less dependent
on the amount of available training data with respect to a supervised learning approach that directly uses properties' similarity
values. In order to test the generality of the claim, we have run experimental tests using two different threshold-based classifiers
on two different data sets. Finally, we show a concrete use case describing the implementation of the proposed approach in the
generation of the 3cixty Nice knowledge base.
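The stacking idea described above, running several threshold-based classifiers and feeding their binary predictions to a second-stage learner, can be sketched in a few lines. The threshold grid and feature layout below are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of STEM-style stacking: each base classifier is the same
# similarity measure with a different threshold, and its binary decision
# becomes one feature for a supervised meta-learner. Thresholds are
# illustrative.
THRESHOLDS = [0.3, 0.5, 0.7, 0.9]

def stacked_features(similarity):
    """One binary feature per base classifier: 1 if the pair's
    similarity exceeds that classifier's threshold."""
    return [int(similarity >= t) for t in THRESHOLDS]

# Each row could then train any supervised learner (e.g. logistic
# regression), letting it combine strict and lenient thresholds instead
# of committing to a single precision/recall trade-off.
X = [stacked_features(s) for s in (0.25, 0.55, 0.95)]
print(X)  # [[0, 0, 0, 0], [1, 1, 0, 0], [1, 1, 1, 1]]
```

Because the meta-learner sees the whole threshold spectrum at once, it is not forced into the precision-for-recall exchange that a single fixed threshold imposes, which is the trade-off the paper claims to break.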
Named Entity Resolution in Personal Knowledge Graphs
Entity Resolution (ER) is the problem of determining when two entities refer
to the same underlying entity. The problem has been studied for over 50 years,
and most recently, has taken on new importance in an era of large,
heterogeneous 'knowledge graphs' published on the Web and used widely in
domains as wide ranging as social media, e-commerce and search. This chapter
will discuss the specific problem of named ER in the context of personal
knowledge graphs (PKGs). We begin with a formal definition of the problem, and
the components necessary for doing high-quality and efficient ER. We also
discuss some challenges that are expected to arise for Web-scale data. Next, we
provide a brief literature review, with a special focus on how existing
techniques can potentially apply to PKGs. We conclude the chapter by covering
some applications, as well as promising directions for future research.
Comment: To appear as a book chapter by the same name in an upcoming (Oct. 2023) book 'Personal Knowledge Graphs (PKGs): Methodology, tools and applications' edited by Tiwari et al.
How does prompt engineering affect ChatGPT performance on unsupervised entity resolution?
Entity Resolution (ER) is the problem of semi-automatically determining when
two entities refer to the same underlying entity, with applications ranging
from healthcare to e-commerce. Traditional ER solutions required considerable
manual expertise, including feature engineering, as well as identification and
curation of training data. In many instances, such techniques are highly
dependent on the domain. With recent advent in large language models (LLMs),
there is an opportunity to make ER much more seamless and domain-independent.
However, it is also well known that LLMs can pose risks, and that the quality
of their outputs can depend on so-called prompt engineering. Unfortunately, a
systematic experimental study on the effects of different prompting methods for
addressing ER, using LLMs like ChatGPT, has been lacking thus far. This paper
aims to address this gap by conducting such a study. Although preliminary in
nature, our results show that prompting can significantly affect the quality of
ER, although it affects some metrics more than others, and can also be dataset
dependent.
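The kinds of prompting variants such a study compares can be illustrated with two simple templates: a bare zero-shot question versus one prefixed with labeled examples. The wording below is hypothetical, not taken from the paper.

```python
# Illustrative zero-shot vs. few-shot prompt builders for pairwise ER.
# Both produce plain strings that would be sent to an LLM; the phrasing
# is an assumption for this sketch.
def zero_shot_prompt(a, b):
    return (f"Do these two records refer to the same real-world entity? "
            f"Answer Yes or No.\nRecord 1: {a}\nRecord 2: {b}")

def few_shot_prompt(a, b, examples):
    # Prepend labeled demonstrations before the actual question.
    shots = "\n".join(
        f"Record 1: {x}\nRecord 2: {y}\nAnswer: {label}"
        for x, y, label in examples
    )
    return shots + "\n" + zero_shot_prompt(a, b)

demo = [("IBM Corp.", "International Business Machines", "Yes")]
print(few_shot_prompt("Apple Inc.", "Apple Computer", demo))
```

Holding the record pair fixed while swapping templates like these is what lets such a study attribute differences in match quality to the prompting method rather than to the data.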
Mining complex trees for hidden fruit : a graph-based computational solution to detect latent criminal networks : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Technology at Massey University, Albany, New Zealand.
The detection of crime is a complex and difficult endeavour. Public and private organisations - focusing on law enforcement, intelligence, and compliance - commonly apply the rational isolated actor approach premised on observability and materiality. This is manifested largely as conducting entity-level risk management sourcing 'leads' from reactive covert human intelligence sources and/or proactive sources by applying simple rules-based models. Focusing on discrete observable and material actors simply ignores that criminal activity exists within a complex system deriving its fundamental structural fabric from the complex interactions between actors - with those most unobservable likely to be both criminally proficient and influential. The graph-based computational solution developed to detect latent criminal networks is a response to the inadequacy of the rational isolated actor approach that ignores the connectedness and complexity of criminality.
The core computational solution, written in the R language, consists of novel entity resolution, link discovery, and knowledge discovery technology. Entity resolution enables the fusion of multiple datasets with high accuracy (mean F-measure of 0.986 versus competitors' 0.872), generating a graph-based expressive view of the problem. Link discovery is comprised of link prediction and link inference, enabling the high-performance detection (accuracy of ~0.8 versus relevant published models' ~0.45) of unobserved relationships such as identity fraud. Knowledge discovery uses the fused graph generated and applies the 'GraphExtract' algorithm to create a set of subgraphs representing latent functional criminal groups, and a mesoscopic graph representing how this set of criminal groups are interconnected. Latent knowledge is generated from a range of metrics including the 'Super-broker' metric and attitude prediction.
The computational solution has been evaluated on a range of datasets that mimic an applied setting, demonstrating a scalable (tested on ~18 million node graphs) and performant (~33 hours runtime on a non-distributed platform) solution that successfully detects relevant latent functional criminal groups in around 90% of cases sampled and enables the contextual understanding of the broader criminal system through the mesoscopic graph and associated metadata. The augmented data assets generated provide a multi-perspective systems view of criminal activity that enables advanced informed decision making across the microscopic-mesoscopic-macroscopic spectrum.
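The knowledge-discovery framing above, extracting candidate groups as subgraphs of a fused entity graph, can be illustrated with the simplest possible stand-in: connected components over resolved-entity links. The thesis's 'GraphExtract' algorithm is more sophisticated than this; the sketch (in Python rather than the thesis's R) only shows the graph-based framing, and all names are illustrative.

```python
# Illustrative stand-in for subgraph-based group discovery: treat each
# connected component of the fused entity graph as a candidate latent
# group. Not the thesis's GraphExtract algorithm.
from collections import defaultdict

def components(edges):
    """Return the connected components of an undirected edge list."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, groups = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:
            n = stack.pop()
            if n in group:
                continue
            group.add(n)
            stack.extend(adj[n] - group)
        seen |= group
        groups.append(group)
    return groups

edges = [("A", "B"), ("B", "C"), ("D", "E")]
print(components(edges))  # two candidate groups: {A, B, C} and {D, E}
```

A mesoscopic view would then treat each such group as a single node and record the cross-group links, which is the roll-up the thesis uses to describe how criminal groups interconnect.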