An Expert-Driven Approach for Collaborative Knowledge Acquisition
Knowledge management is key for any organization. Huge amounts of data are made available to organizations by pervasive technologies such as smart mobile devices. However, the knowledge needed to use such data is still missing, and organizations typically fail to exploit it.
This paper proposes an architectural design that aims at addressing this problem. It focuses on knowledge management for collaborative systems in which complex and multicausal situations are presented to interacting actors over a large geographical area with possibly low connectivity. VIII Workshop Innovación en Sistemas de Software (WISS). Red de Universidades con Carreras en Informática (RedUNCI).
On transformation of query scheduling strategies in distributed and heterogeneous database systems
This work considers the problem of optimal query processing in heterogeneous and distributed database systems. A global query submitted at a local site is decomposed into a number of queries processed at the remote sites. The partial results returned by the queries are integrated at the local site. The paper addresses the problem of optimally scheduling queries to minimize the time spent on data integration of the partial results into the final answer. A global data model defined in this work provides a unified view of the heterogeneous data structures located at the remote sites, and a system of operations is defined to express the complex data integration procedures. This work shows that transforming an entirely simultaneous query processing strategy into a hybrid (simultaneous/sequential) strategy may in some cases lead to significantly faster data integration. We show how to detect such cases, what conditions must be satisfied to transform the schedules, and how to transform the schedules into more efficient ones.
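The gain from a hybrid strategy can be seen in a toy cost model (an illustrative sketch, not the paper's formal model): each remote query returns a partial result of some size at some arrival time, merging a result into the accumulated answer costs the sum of their sizes, and we compare waiting for all results before integrating against merging each result as soon as it arrives.

```python
# Toy cost model (hypothetical, not the paper's formal model):
# a partial result is a pair (arrival_time, size); merging a result of
# size s into an accumulated result of size k costs k + s time units.

def simultaneous(parts):
    """Entirely simultaneous strategy: wait for every partial result,
    then integrate them one after another."""
    start = max(a for a, _ in parts)      # integration starts after the last arrival
    t, size = start, 0
    for _, s in parts:
        t += size + s                     # merge the next result in
        size += s
    return t

def hybrid(parts):
    """Hybrid strategy: merge each partial result as soon as it arrives,
    overlapping integration with the remaining remote processing."""
    t, size = 0, 0
    for a, s in sorted(parts):            # process in arrival order
        t = max(t, a) + size + s          # wait only if the result is not ready yet
        size += s
    return t

parts = [(1, 10), (5, 2), (9, 3)]         # (arrival time, size) per remote query
print(simultaneous(parts), hybrid(parts)) # hybrid finishes earlier here
```

In this example the hybrid schedule finishes at 38 versus 46 for the simultaneous one, because early results are merged while later queries are still running.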
Measuring discord among multidimensional data sources
Data integration is a classical problem in databases, typically decomposed into schema matching, entity matching and record merging. To solve the latter, it is mostly assumed that ground truth can be determined, either as master data or from user feedback. However, in many cases this is not possible, because the merging processes cannot be made accurate enough, and because the data gathering processes in the different sources are simply imperfect and cannot provide high-quality data. Instead of enforcing consistency, we propose to evaluate how concordant or discordant sources are as a measure of trustworthiness (the more discordant the sources, the less we can trust their data). Thus, we define the discord measurement problem, in which, given a set of uncertain raw observations or aggregate results (such as case/hospitalization/death data relevant to COVID-19) and information on the alignment of different data (for example, cases and deaths), we wish to assess whether the different sources are concordant and, if not, measure how discordant they are. The work of Alberto Abelló has been done under project PID2020-117191RB-I00, funded by MCIN/AEI/10.13039/501100011033. The work of James Cheney was supported by ERC Consolidator Grant Skye (grant number 682315).
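A minimal sketch of the idea, using a hypothetical metric rather than the measure the paper defines: two sources report the same aggregate (e.g. case counts per region), and we score their mean relative disagreement over the observations they share.

```python
# Illustrative discord score (hypothetical metric, not the paper's):
# 0.0 means the sources fully agree on shared observations; values
# approaching 1.0 mean maximal disagreement.

def discord(source_a, source_b):
    """Mean relative disagreement over the keys both sources report."""
    shared = source_a.keys() & source_b.keys()
    if not shared:
        return None                            # nothing to compare
    total = 0.0
    for k in shared:
        a, b = source_a[k], source_b[k]
        total += abs(a - b) / max(a, b, 1)     # per-observation disagreement in [0, 1]
    return total / len(shared)

# Hypothetical case counts from two sources for the same regions:
cases_a = {"region1": 100, "region2": 250, "region3": 40}
cases_b = {"region1": 100, "region2": 200, "region3": 80}
print(round(discord(cases_a, cases_b), 3))
```

Under this reading, a high score flags the pair of sources as less trustworthy, without forcing a merged "true" value on the data.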
Believe It or Not: Adding Belief Annotations to Databases
We propose a database model that allows users to annotate data with belief
statements. Our motivation comes from scientific database applications where a
community of users is working together to assemble, revise, and curate a shared
data repository. As the community accumulates knowledge and the database
content evolves over time, it may contain conflicting information and members
can disagree on the information it should store. For example, Alice may believe
that a tuple should be in the database, whereas Bob disagrees. He may also
insert the reason why he thinks Alice believes the tuple should be in the
database, and explain what he thinks the correct tuple should be instead.
We propose a formal model for Belief Databases that interprets users'
annotations as belief statements. These annotations can refer both to the base
data and to other annotations. We give a formal semantics based on a fragment
of multi-agent epistemic logic and define a query language over belief
databases. We then prove a key technical result, stating that every belief
database can be encoded as a canonical Kripke structure. We use this structure
to describe a relational representation of belief databases, and give an
algorithm for translating queries over the belief database into standard
relational queries. Finally, we report early experimental results with our
prototype implementation on synthetic data. Comment: 17 pages, 10 figures
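A toy relational representation of belief annotations can make the model concrete (an illustrative sketch only; the paper's actual encoding goes through a canonical Kripke structure). Each annotation records that an agent believes a tuple is in, or out of, the database, and simple queries can surface disagreements.

```python
# Toy belief annotations (illustrative; not the paper's Kripke encoding).
# Each entry: (agent, believes_in_database, tuple).  Tuples are
# hypothetical curated facts.

beliefs = [
    ("Alice", True,  ("gene42", "activates", "gene7")),
    ("Bob",   False, ("gene42", "activates", "gene7")),
    ("Bob",   True,  ("gene42", "represses", "gene7")),
]

def believed_by(agent, annotations):
    """Tuples the agent positively believes should be in the database."""
    return {t for who, pos, t in annotations if who == agent and pos}

def disagreements(a, b, annotations):
    """Tuples that agent a believes in but agent b explicitly disbelieves."""
    denied_by_b = {t for who, pos, t in annotations if who == b and not pos}
    return believed_by(a, annotations) & denied_by_b

print(disagreements("Alice", "Bob", beliefs))
```

Here Bob both rejects Alice's tuple and records the tuple he thinks is correct instead, mirroring the Alice/Bob scenario in the abstract.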
How and Why is An Answer (Still) Correct? Maintaining Provenance in Dynamic Knowledge Graphs
Knowledge graphs (KGs) have increasingly become the backbone of many critical
knowledge-centric applications. Most large-scale KGs used in practice are
automatically constructed based on an ensemble of extraction techniques applied
over diverse data sources. Therefore, it is important to establish the
provenance of results for a query to determine how these were computed.
Provenance is shown to be useful for assigning confidence scores to the
results, for debugging the KG generation itself, and for providing answer
explanations. In many such applications, certain queries are registered as
standing queries since their answers are needed often. However, KGs keep
continuously changing due to reasons such as changes in the source data,
improvements to the extraction techniques, refinement/enrichment of
information, and so on. This brings us to the issue of efficiently maintaining
the provenance polynomials of complex graph pattern queries for dynamic and
large KGs instead of having to recompute them from scratch each time the KG is
updated. Addressing these issues, we present HUKA, which uses provenance
polynomials for tracking the derivation of query results over knowledge graphs
by encoding the edges involved in generating the answer. More importantly, HUKA
also maintains these provenance polynomials in the face of updates---insertions
as well as deletions of facts---to the underlying KG. Experimental results over
large real-world KGs such as YAGO and DBpedia with various benchmark SPARQL
query workloads reveal that HUKA can be almost 50 times faster than existing
systems for provenance computation on dynamic KGs.
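The idea of provenance polynomials over edge identifiers can be sketched with the standard semiring construction (a simplified illustration; HUKA's actual encoding and maintenance algorithms are more involved, and this sketch ignores exponents within a monomial). A polynomial maps each monomial, the set of edges used jointly in one derivation, to its multiplicity; alternative derivations add, joint use multiplies, and deleting an edge drops every derivation that used it.

```python
# Simplified provenance polynomials over edge identifiers (illustrative;
# not HUKA's internal representation).  A polynomial is a dict mapping
# a monomial (frozenset of edge ids used together) to its multiplicity.

def plus(p, q):
    """Alternative derivations of an answer: polynomial addition."""
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, 0) + c
    return out

def times(p, q):
    """Edges used jointly in one derivation: polynomial multiplication."""
    out = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = m1 | m2
            out[m] = out.get(m, 0) + c1 * c2
    return out

def delete_edge(p, edge):
    """Maintain the polynomial under a KG deletion: every derivation
    that relied on the deleted edge disappears."""
    return {m: c for m, c in p.items() if edge not in m}

e = lambda name: {frozenset([name]): 1}        # provenance of a single edge
# An answer derivable either via the join of e1 and e2, or via e3 alone:
prov = plus(times(e("e1"), e("e2")), e("e3"))
print(delete_edge(prov, "e2"))                 # only the e3 derivation survives
```

Deletion here is a filter over monomials rather than a recomputation from scratch, which is the intuition behind maintaining, rather than rebuilding, provenance on each KG update.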
Dataset search: a survey
Generating value from data requires the ability to find, access and make
sense of datasets. There are many efforts underway to encourage data sharing
and reuse, from scientific publishers asking authors to submit data alongside
manuscripts to data marketplaces, open data portals and data communities.
Google recently beta released a search service for datasets, which allows users
to discover data stored in various online repositories via keyword queries.
These developments foreshadow an emerging research field around dataset search
or retrieval that broadly encompasses frameworks, methods and tools that help
match a user data need against a collection of datasets. Here, we survey the
state of the art of research and commercial systems in dataset retrieval. We
identify what makes dataset search a research field in its own right, with
unique challenges and methods, and highlight open problems. We look at
approaches and implementations from related areas that dataset search draws
upon, including information retrieval, databases, and entity-centric and
tabular search, in order to identify possible paths to resolving these open
problems as well as immediate next steps that will take the field forward.
Comment: 20 pages, 153 references