Dynamic Provenance for SPARQL Update
While the Semantic Web currently can exhibit provenance information by using
the W3C PROV standards, there is a "missing link" in connecting PROV to storing
and querying for dynamic changes to RDF graphs using SPARQL. Solving this
problem would be required for such clear use-cases as the creation of version
control systems for RDF. While some provenance models and annotation techniques
for storing and querying provenance data originally developed with databases or
workflows in mind transfer readily to RDF and SPARQL, these techniques do not
readily adapt to describing changes in dynamic RDF datasets over time. In this
paper we explore how to adapt the dynamic copy-paste provenance model of
Buneman et al. [2] to RDF datasets that change over time in response to SPARQL
updates, how to represent the resulting provenance records themselves as RDF in
a manner compatible with W3C PROV, and how the provenance information can be
defined by reinterpreting SPARQL updates. The primary contribution of this
paper is a semantic framework that enables the semantics of SPARQL Update to be
used as the basis for a 'cut-and-paste' provenance model in a principled
manner.
Comment: Pre-publication version of ISWC 2014 paper.
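To make the idea concrete, here is a minimal sketch, assuming Python with the rdflib library: a SPARQL update is executed, and the update itself is recorded as a W3C PROV activity, itself expressed as RDF. The vocabulary and graph layout are illustrative assumptions, not the paper's formal cut-and-paste model.

```python
# Minimal sketch: execute a SPARQL Update with rdflib and record the change
# as a prov:Activity. Namespaces and properties here are illustrative.
from datetime import datetime, timezone

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import PROV, RDF, XSD

EX = Namespace("http://example.org/")  # hypothetical namespace

data, prov = Graph(), Graph()

update = """
PREFIX ex: <http://example.org/>
INSERT DATA { ex:alice ex:knows ex:bob . }
"""
data.update(update)  # the dynamic change to the RDF graph

# Describe the update itself as a PROV activity, so the provenance
# record is queryable RDF alongside the data it describes.
activity = URIRef("http://example.org/update/1")
prov.add((activity, RDF.type, PROV.Activity))
prov.add((activity, PROV.endedAtTime,
          Literal(datetime.now(timezone.utc).isoformat(),
                  datatype=XSD.dateTime)))
prov.add((activity, EX.updateText, Literal(update)))

print(prov.serialize(format="turtle"))
```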
DEEP: a provenance-aware executable document system
The concept of executable documents is attracting growing interest from both academics and publishers since it is a promising technology for the dissemination of scientific results. Provenance is a kind of metadata that provides a rich description of the derivation history of data products starting from their original sources. It has been used in many different e-Science domains and has shown great potential in enabling reproducibility of scientific results. However, while both executable documents and provenance are aimed at enhancing the dissemination of scientific results, little has been done to explore the integration of both techniques. In this paper, we introduce the design and development of DEEP, an executable document environment that generates scientific results dynamically and interactively, and also records the provenance for these results in the document. In this system, provenance is exposed to users via an interface that provides them with an alternative way of navigating the executable document. In addition, we make use of the provenance to offer a document rollback facility to users and help to manage the system's dynamic resources.
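The rollback facility can be pictured with a small derivation log; the sketch below is a generic illustration with hypothetical names, not DEEP's actual interface: each step that produces a data product is recorded, and rolling back replays a prefix of the log.

```python
# Generic sketch of provenance-backed rollback (hypothetical names, not
# DEEP's API): log each derivation step; roll back by replaying a prefix.
class ProvenanceLog:
    def __init__(self):
        self.steps = []    # (output name, function, inputs)
        self.results = {}  # current data products

    def run(self, name, fn, *inputs):
        # Inputs may be raw values or names of earlier data products.
        value = fn(*[self.results.get(i, i) for i in inputs])
        self.steps.append((name, fn, inputs))
        self.results[name] = value
        return value

    def rollback(self, n):
        """Keep only the first n derivations, recomputing from the log."""
        kept, self.steps, self.results = self.steps[:n], [], {}
        for name, fn, inputs in kept:
            self.run(name, fn, *inputs)

log = ProvenanceLog()
log.run("doubled", lambda x: x * 2, 21)
log.run("shifted", lambda x: x + 1, "doubled")
log.rollback(1)         # undo the second derivation
print(log.results)      # {'doubled': 42}
```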
A Typed Model for Linked Data
The term Linked Data is used to describe ubiquitous and emerging semi-structured data formats on the Web. URIs in Linked Data allow diverse data sources to link to each other, forming a Web of Data. A calculus which models concurrent queries and updates over Linked Data is presented. The calculus exhibits operations essential for declaring rich atomic actions. The operations recover emergent structure in the loosely structured Web of Data. The calculus is executable due to its operational semantics. A light type system ensures that URIs with a distinguished role are used consistently. The main theorem verifies that the light type system and operational semantics work at the same level of granularity, so are compatible. Examples show that a range of existing and emerging standards are captured. Data formats include RDF, named graphs and feeds. The primitives of the calculus model SPARQL Query and the Atom Publishing Protocol. The subtype system is based on RDFS, which improves interoperability. Examples focus on the SPARQL Update proposal, for which a fine-grained operational semantics is developed. Further potential high-level languages for exploiting Linked Data are outlined.
DKA-robo: dynamically updating time-invalid knowledge bases using robots
In this paper we present the DKA-robo framework, where a mobile agent is used to update those statements of a knowledge base that have lost validity in time. Managing the dynamic information of knowledge bases constitutes a key issue in many real-world scenarios, because constantly reevaluating data requires effort in terms of knowledge acquisition and representation. Our solution to this problem is to use RDF and SPARQL to represent and manage the time-validity of information, combined with an agent acting as a mobile sensor which updates the outdated statements in the knowledge base, thereby always guaranteeing time-valid results against user queries. This demo shows the implementation of our approach in the working environment of our research lab, where a robot is used to sense temperature, humidity, wifi signal and number of people on demand, updating the lab knowledge base with time-valid information.
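One simple way to realise that representation, sketched with Python and rdflib (the ex:validUntil vocabulary is an assumption, not necessarily the DKA-robo schema): annotate each observation with an expiry instant, then use a SPARQL FILTER to find the statements the robot should re-sense.

```python
# Sketch of time-validity in RDF plus a SPARQL query for expired facts.
# The ex: vocabulary is hypothetical, not the DKA-robo schema.
from datetime import datetime, timezone

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

EX = Namespace("http://example.org/lab/")
g = Graph()

# Each observation carries the instant at which it stops being valid.
g.add((EX.temp1, EX.value, Literal(21.5)))
g.add((EX.temp1, EX.validUntil,
       Literal("2014-01-01T00:00:00+00:00", datatype=XSD.dateTime)))

now = Literal(datetime.now(timezone.utc).isoformat(),
              datatype=XSD.dateTime)
expired = g.query(
    """
    PREFIX ex: <http://example.org/lab/>
    SELECT ?obs WHERE {
        ?obs ex:validUntil ?t .
        FILTER (?t < ?now)   # validity elapsed: schedule a re-reading
    }
    """,
    initBindings={"now": now},
)
for row in expired:
    print("stale observation:", row.obs)
```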
Local Type Checking for Linked Data Consumers
The Web of Linked Data is the cumulation of over a decade of work by the Web
standards community in their effort to make data more Web-like. We provide an
introduction to the Web of Linked Data from the perspective of a Web developer
that would like to build an application using Linked Data. We identify a
weakness in the development stack as being a lack of domain specific scripting
languages for designing background processes that consume Linked Data. To
address this weakness, we design a scripting language with a simple but
appropriate type system. In our proposed architecture some data is consumed
from sources outside of the control of the system and some data is held
locally. Stronger type assumptions can be made about the local data than
external data, hence our type system mixes static and dynamic typing.
Throughout, we relate our work to the W3C recommendations that drive Linked
Data, so our syntax is accessible to Web developers.
Comment: In Proceedings WWV 2013, arXiv:1308.026
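The paper designs its own scripting language; purely to illustrate the static/dynamic split, here is a Python sketch with hypothetical names: locally held data is trusted as-is, while external Linked Data is checked at runtime as it crosses into the system.

```python
# Illustration of the local/external typing boundary (hypothetical names;
# the paper defines its own language): local data carries stronger
# assumptions, external data is dynamically checked on entry.
from rdflib import Graph, URIRef

def check_external(graph: Graph) -> Graph:
    """Runtime check for data fetched from outside the system's control."""
    for s, p, o in graph:
        if not isinstance(s, URIRef) or not isinstance(p, URIRef):
            raise TypeError(f"ill-typed external triple: {(s, p, o)}")
    return graph

external_src = """
@prefix ex: <http://example.org/> .
ex:berlin ex:population "3645000" .
"""

local = Graph()  # held locally: no runtime checks needed
local += check_external(Graph().parse(data=external_src, format="turtle"))
print(len(local))  # 1
```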
How and Why is An Answer (Still) Correct? Maintaining Provenance in Dynamic Knowledge Graphs
Knowledge graphs (KGs) have increasingly become the backbone of many critical
knowledge-centric applications. Most large-scale KGs used in practice are
automatically constructed based on an ensemble of extraction techniques applied
over diverse data sources. Therefore, it is important to establish the
provenance of results for a query to determine how these were computed.
Provenance is shown to be useful for assigning confidence scores to the
results, for debugging the KG generation itself, and for providing answer
explanations. In many such applications, certain queries are registered as
standing queries since their answers are needed often. However, KGs keep
continuously changing due to reasons such as changes in the source data,
improvements to the extraction techniques, refinement/enrichment of
information, and so on. This brings us to the issue of efficiently maintaining
the provenance polynomials of complex graph pattern queries for dynamic and
large KGs instead of having to recompute them from scratch each time the KG is
updated. Addressing these issues, we present HUKA which uses provenance
polynomials for tracking the derivation of query results over knowledge graphs
by encoding the edges involved in generating the answer. More importantly, HUKA
also maintains these provenance polynomials in the face of updates (insertions as well as deletions of facts) to the underlying KG. Experimental results over large real-world KGs such as YAGO and DBpedia with various benchmark SPARQL query workloads reveal that HUKA can be almost 50 times faster than existing systems for provenance computation on dynamic KGs.
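HUKA's maintenance algorithms are more involved than this, but the core bookkeeping can be sketched as follows (names hypothetical): each answer's provenance polynomial is a set of monomials, each monomial being the set of KG edges that derives the answer; a deletion drops every monomial mentioning the deleted edge, and an answer with no monomials left is no longer derivable.

```python
# Core bookkeeping behind provenance polynomials (hypothetical names; not
# HUKA's actual data structures): answer -> set of monomials, where each
# monomial is the set of KG edges supporting one derivation.
from typing import Dict, FrozenSet, Set, Tuple

Edge = Tuple[str, str, str]          # (subject, predicate, object)
Polynomial = Set[FrozenSet[Edge]]

provenance: Dict[str, Polynomial] = {
    "answer1": {
        frozenset({("a", "knows", "b"), ("b", "livesIn", "c")}),
        frozenset({("a", "worksWith", "b"), ("b", "livesIn", "c")}),
    }
}

def delete_edge(prov: Dict[str, Polynomial], edge: Edge) -> None:
    """On a KG deletion, drop every derivation that used the edge."""
    for answer in list(prov):
        prov[answer] = {m for m in prov[answer] if edge not in m}
        if not prov[answer]:        # no derivation survives
            del prov[answer]

delete_edge(provenance, ("a", "knows", "b"))
print(provenance)  # answer1 survives via its second derivation
```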
On Reasoning with RDF Statements about Statements using Singleton Property Triples
The Singleton Property (SP) approach has been proposed for representing and
querying metadata about RDF triples such as provenance, time, location, and
evidence. In this approach, one singleton property is created to uniquely
represent a relationship in a particular context, which in general generates a
large property hierarchy in the schema. The approach has become the subject of important
questions from Semantic Web practitioners. Can an existing reasoner recognize
the singleton property triples? And how? If the singleton property triples
describe a data triple, then how can a reasoner infer this data triple from the
singleton property triples? Or would the large property hierarchy affect the
reasoners in some way? We address these questions in this paper and present our
study about the reasoning aspects of the singleton properties. We propose a
simple mechanism to enable existing reasoners to recognize the singleton
property triples, as well as to infer the data triples described by the
singleton property triples. We evaluate the effect of the singleton property
triples in the reasoning processes by comparing the performance on RDF datasets
with and without singleton properties. Our evaluation uses the LUBM datasets as a benchmark, together with LUBM-SP datasets derived from LUBM by adding temporal information through singleton properties.
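The mechanism can be made concrete with Python and rdflib (instance data hypothetical; rdf:singletonPropertyOf follows the SP proposal): from (sp singletonPropertyOf p) and (s sp o), materialise the described data triple (s p o).

```python
# Sketch of singleton-property inference: materialise (s p o) from
# (sp rdf:singletonPropertyOf p) and (s sp o). Instance data hypothetical.
from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/")
SP = URIRef("http://www.w3.org/1999/02/22-rdf-syntax-ns#singletonPropertyOf")

g = Graph()
g.add((EX.isMarriedTo_1, SP, EX.isMarriedTo))          # singleton property
g.add((EX.bob, EX.isMarriedTo_1, EX.alice))            # contextual statement
g.add((EX.isMarriedTo_1, EX.validFrom, EX.year1990))   # metadata about it

# Collect first, then add, to avoid mutating the graph while iterating.
inferred = [
    (s, generic, o)
    for sp_prop, _, generic in g.triples((None, SP, None))
    for s, _, o in g.triples((None, sp_prop, None))
]
for triple in inferred:
    g.add(triple)

print((EX.bob, EX.isMarriedTo, EX.alice) in g)  # True: data triple inferred
```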
Update of time-invalid information in Knowledge Bases through Mobile Agents
In this paper, we investigate the use of a mobile, autonomous agent to update knowledge bases containing statements that lose validity with time. This constitutes a key issue in terms of knowledge acquisition and representation, because dynamic data need to be constantly re-evaluated to allow reasoning. We focus on the way to represent the time-validity of statements in a knowledge base, and on the use of a mobile agent to update time-invalid statements while planning for "information freshness" as the main objective. We propose to use Semantic Web standards, namely the RDF model and the SPARQL query language, to represent the time-validity of information and decide how long it will be considered valid. Using such a representation, a plan is created for the agent to update the knowledge, focusing mostly on guaranteeing the time-validity of the information collected. To show the feasibility of our approach and discuss its limitations, we test its implementation on scenarios in the working environment of our research lab, where an autonomous robot is used to sense temperature, humidity, wifi signal and number of people on demand, updating the knowledge base with time-valid information.
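As a toy illustration of the "information freshness" objective (the paper plans actual robot routes; the readings below are hypothetical), the agent can simply refresh the statements whose validity expired longest ago first.

```python
# Toy "freshness first" ordering (hypothetical readings; the real planner
# also accounts for robot travel): most-stale facts are refreshed first.
from datetime import datetime, timedelta

now = datetime(2016, 5, 1, 12, 0)
expired_at = {                     # fact -> instant its validity lapsed
    "temperature@room1": now - timedelta(minutes=30),
    "humidity@room2": now - timedelta(minutes=5),
    "people@lab": now - timedelta(hours=2),
}

plan = sorted(expired_at, key=expired_at.get)  # earliest lapse first
print(plan)  # ['people@lab', 'temperature@room1', 'humidity@room2']
```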
Estimating Fire Weather Indices via Semantic Reasoning over Wireless Sensor Network Data Streams
Wildfires are frequent, devastating events in Australia that regularly cause
significant loss of life and widespread property damage. Fire weather indices
are a widely-adopted method for measuring fire danger and they play a
significant role in issuing bushfire warnings and in anticipating demand for
bushfire management resources. Existing systems that calculate fire weather
indices are limited due to low spatial and temporal resolution. Localized
wireless sensor networks, on the other hand, gather continuous sensor data
measuring variables such as air temperature, relative humidity, rainfall and
wind speed at high resolutions. However, using wireless sensor networks to
estimate fire weather indices is a challenge due to data quality issues, lack
of standard data formats and lack of agreement on thresholds and methods for
calculating fire weather indices. Within the scope of this paper, we propose a
standardized approach to calculating Fire Weather Indices (a.k.a. fire danger
ratings) and overcome a number of the challenges by applying Semantic Web
Technologies to the processing of data streams from a wireless sensor network
deployed in the Springbrook region of South East Queensland. This paper
describes the underlying ontologies, the semantic reasoning and the Semantic
Fire Weather Index (SFWI) system that we have developed to enable domain
experts to specify and adapt rules for calculating Fire Weather Indices. We
also describe the Web-based mapping interface that we have developed, that
enables users to improve their understanding of how fire weather indices vary
over time within a particular region. Finally, we discuss our evaluation results
that indicate that the proposed system outperforms state-of-the-art techniques
in terms of accuracy, precision and query performance.
Comment: 20 pages, 12 figures.
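For a concrete reference point, one widely cited fire danger rating computable from exactly these sensed variables is the McArthur Forest Fire Danger Index in the approximation of Noble et al. (1980); the standalone function below is illustrative only and is not the SFWI rule base, which derives its indices through semantic rules over the sensor streams.

```python
# McArthur Forest Fire Danger Index, Noble et al. (1980) approximation.
# Illustrative only: the SFWI system computes indices via semantic rules.
import math

def ffdi(temperature_c: float, relative_humidity_pct: float,
         wind_speed_kmh: float, drought_factor: float) -> float:
    """Fire danger rating from the usual sensed weather variables."""
    return 2.0 * math.exp(
        -0.450
        + 0.987 * math.log(drought_factor)
        - 0.0345 * relative_humidity_pct
        + 0.0338 * temperature_c
        + 0.0234 * wind_speed_kmh
    )

# Example inputs (hypothetical): a hot, dry, windy afternoon, index ~46.
print(round(ffdi(35.0, 15.0, 30.0, 9.5), 1))
```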
RDF Querying
Reactive Web systems, Web services, and Web-based publish/subscribe systems communicate events as XML messages, and in
many cases require composite event detection: it is not sufficient to react
to single event messages, but events have to be considered in relation to
other events that are received over time.
Emphasizing language design and formal semantics, we describe the
rule-based query language XChangeEQ for detecting composite events.
XChangeEQ is designed to completely cover and integrate the four complementary
querying dimensions: event data, event composition, temporal
relationships, and event accumulation. Semantics are provided as
model and fixpoint theories; while this is an established approach for rule
languages, it has not been applied to event queries before.
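XChangeEQ expresses such queries declaratively in its own rule syntax; to show the kind of composite event its four dimensions cover, here is an imperative Python sketch with hypothetical event names: a 'shipped' event is reported only when it follows an 'ordered' event for the same item within a time window.

```python
# Imperative sketch of a composite event query (hypothetical event names,
# not XChangeEQ syntax): join two events on their data, constrain their
# temporal relationship, and accumulate pending events over the stream.
from typing import Dict, Iterable, Iterator, Tuple

Event = Tuple[str, str, float]  # (kind, item, timestamp in seconds)

def composite(events: Iterable[Event],
              window: float = 60.0) -> Iterator[Event]:
    pending: Dict[str, float] = {}               # event accumulation
    for kind, item, t in events:
        if kind == "ordered":
            pending[item] = t
        elif kind == "shipped" and item in pending:
            if t - pending.pop(item) <= window:  # temporal relationship
                yield ("ordered-then-shipped", item, t)

stream = [("ordered", "book", 0.0), ("ordered", "lamp", 5.0),
          ("shipped", "book", 42.0), ("shipped", "lamp", 90.0)]
print(list(composite(stream)))  # only the book ships within the window
```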
- …