Identification of Design Principles
This report identifies the design principles considered essential for a (possibly new) query
and transformation language for the Web that supports inference. Based on these
design principles, an initial strawman is selected. Scenarios for querying the Semantic Web
illustrate the design principles and their reflection in the initial strawman, i.e., a first draft of
the query language to be designed and implemented by the REWERSE working group I4.
On Reasoning with RDF Statements about Statements using Singleton Property Triples
The Singleton Property (SP) approach has been proposed for representing and
querying metadata about RDF triples such as provenance, time, location, and
evidence. In this approach, one singleton property is created to uniquely
represent a relationship in a particular context, and in general, generates a
large property hierarchy in the schema. It has become the subject of important
questions from Semantic Web practitioners. Can an existing reasoner recognize
the singleton property triples? And how? If the singleton property triples
describe a data triple, then how can a reasoner infer this data triple from the
singleton property triples? Or would the large property hierarchy affect the
reasoners in some way? We address these questions in this paper and present our
study about the reasoning aspects of the singleton properties. We propose a
simple mechanism to enable existing reasoners to recognize the singleton
property triples, as well as to infer the data triples described by the
singleton property triples. We evaluate the effect of the singleton property
triples in the reasoning processes by comparing the performance on RDF datasets
with and without singleton properties. Our evaluation uses as benchmark the
LUBM datasets and the LUBM-SP datasets derived from LUBM with temporal
information added through singleton properties.
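The inference the abstract describes can be sketched concretely: a singleton property is linked to its generic property via rdf:singletonPropertyOf, and any triple that uses the singleton property then entails the corresponding data triple. The following minimal sketch works over plain Python tuples; the names (ex:marriedTo#1, infer_data_triples) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of singleton-property inference over plain triples.
# The naming pattern (ex:marriedTo#1) follows the SP approach, but this
# helper is illustrative, not the paper's mechanism.

SINGLETON_PROPERTY_OF = "rdf:singletonPropertyOf"

def infer_data_triples(triples):
    """Derive (s, p, o) for every (s, sp, o) where sp is a singleton property."""
    # Map each singleton property to the generic property it instantiates.
    generic = {s: o for (s, p, o) in triples if p == SINGLETON_PROPERTY_OF}
    inferred = set()
    for (s, p, o) in triples:
        if p in generic:
            inferred.add((s, generic[p], o))
    return inferred

graph = {
    ("ex:Bob", "ex:marriedTo#1", "ex:Ann"),
    ("ex:marriedTo#1", SINGLETON_PROPERTY_OF, "ex:marriedTo"),
    ("ex:marriedTo#1", "ex:validDuring", "1990-2000"),
}
print(infer_data_triples(graph))  # -> {('ex:Bob', 'ex:marriedTo', 'ex:Ann')}
```

The metadata triple (ex:validDuring) is deliberately left out of the inference: only the data triple described by the singleton property is derived.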
Temporal Reasoning for RDF(S): A Markov Logic based Approach
In this work, we propose a formalism that is suitable to carry out temporal reasoning
for probabilistic knowledge bases. In particular, we focus on detecting
erroneous statements by exploiting temporal relations between facts. To this end, we rely
on RDF(S) and its associated entailment rules, which provide a data representation model as well as basic logical expressiveness. Moreover, we use Allen's interval algebra to express the relations between facts based on their associated temporal information. We carry out reasoning by transforming the statements and constraints into Markov Logic and computing the most probable consistent state (MAP inference)
with respect to the defined constraints. Moreover, we evaluate the proposed approach
in order to demonstrate its practicality and flexibility.
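Allen's interval algebra, used above to relate facts, defines 13 basic relations between two intervals. A minimal sketch of the classification, assuming half-open numeric intervals given as (start, end) pairs (the function name and encoding are illustrative):

```python
# Classify two intervals a = (s1, e1), b = (s2, e2) into one of the
# 13 basic Allen relations. Intervals are assumed well-formed (start < end).

def allen_relation(a, b):
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:  return "before"
    if e2 < s1:  return "after"
    if e1 == s2: return "meets"
    if e2 == s1: return "met-by"
    if s1 == s2 and e1 == e2: return "equal"
    if s1 == s2: return "starts" if e1 < e2 else "started-by"
    if e1 == e2: return "finishes" if s1 > s2 else "finished-by"
    if s2 < s1 and e1 < e2: return "during"
    if s1 < s2 and e2 < e1: return "contains"
    return "overlaps" if s1 < s2 else "overlapped-by"

print(allen_relation((1, 3), (3, 5)))  # meets
print(allen_relation((1, 4), (2, 6)))  # overlaps
```

Constraints over such relations (e.g. a fact's interval must lie "before" another's) are what get translated into Markov Logic formulas in the approach above.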
A Reasoner for Calendric and Temporal Data
Calendric and temporal data are omnipresent in countless
Web and Semantic Web applications and Web services. Calendric and
temporal data are, probably more than any other data, subject to
interpretation, in almost every case depending on some cultural, legal,
professional, and/or locational context. On the current Web, calendric
and temporal data can hardly be interpreted by computers. This article
contributes to the Semantic Web, an endeavor aiming at enhancing
the current Web with well-defined meaning and at enabling computers to
meaningfully process data. The contribution is a reasoner for calendric
and temporal data. This reasoner is part of CaTTS, a type language for
calendar definitions. The reasoner is based on a "theory reasoning" approach
using constraint solving techniques. This reasoner complements
general-purpose "axiomatic reasoning" approaches for the Semantic Web
as widely used with ontology languages like OWL or RDF.
Time-Aware Probabilistic Knowledge Graphs
The emergence of open information extraction as a tool for constructing and expanding knowledge graphs has aided the growth of temporal knowledge graphs such as YAGO, NELL and Wikidata. While YAGO and Wikidata maintain the valid time of facts, NELL records the time point at which a fact is retrieved from some Web corpora. Collectively, these knowledge graphs (KGs) store facts extracted from Wikipedia and other sources. Due to the imprecise nature of the extraction tools used to build and expand KGs, such as NELL, the facts in the KG are weighted (a confidence value representing the correctness of a fact). Additionally, NELL can be considered a transaction-time KG because every fact is associated with its extraction date. On the other hand, YAGO and Wikidata use the valid-time model because they maintain facts together with their validity time (temporal scope). In this paper, we propose a bitemporal model (combining the transaction- and valid-time models) for maintaining and querying bitemporal probabilistic knowledge graphs. We study coalescing and the scalability of marginal and MAP inference. Moreover, we show that the complexity of reasoning tasks in atemporal probabilistic KGs carries over to the bitemporal setting. Finally, we report the evaluation results of the proposed model.
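Coalescing, one of the reasoning tasks studied above, merges assertions of the same fact whose validity intervals overlap or meet, so the fact's temporal scope is stored without redundancy. A minimal sketch assuming closed integer intervals (the function and data shapes are illustrative, not the paper's schema):

```python
# Sketch of temporal coalescing for one fact in a valid-time KG:
# merge (start, end) intervals that overlap or are adjacent.

def coalesce(intervals):
    """Return a minimal, sorted list of merged validity intervals."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:            # overlaps or meets
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Three assertions of the same fact with fragmented validity:
print(coalesce([(1990, 1995), (1995, 2000), (2005, 2010)]))
# [(1990, 2000), (2005, 2010)]
```

In a bitemporal setting the same merge would be applied per transaction-time snapshot; this sketch covers only the valid-time dimension.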
State-of-the-art on evolution and reactivity
This report starts, in Chapter 1, by outlining aspects of querying and updating resources on
the Web and on the Semantic Web, including the development of query and update languages
to be carried out within the Rewerse project.
From this outline, it becomes clear that several existing research areas and topics are of
interest for this work in Rewerse. In the remainder of this report we further present state-of-the-art
surveys in a selection of such areas and topics. More precisely: in Chapter 2 we give
an overview of logics for reasoning about state change and updates; Chapter 3 is devoted to briefly describing existing update languages for the Web, and also for updating logic programs;
in Chapter 4 event-condition-action rules, both in the context of active database systems and
in the context of semistructured data, are surveyed; and in Chapter 5 we give an overview of some relevant rule-based agent frameworks.
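The event-condition-action rules surveyed in Chapter 4 follow a simple pattern: when an event occurs, test a condition and, if it holds, execute an action. A minimal illustrative sketch of a rule dispatcher (the rule, event shape, and names are assumptions, not taken from any surveyed system):

```python
# Minimal event-condition-action (ECA) dispatcher. A rule is a triple
# (event type, condition predicate, action); dispatch fires every rule
# whose event type matches and whose condition holds.

rules = []

def rule(event_type, condition, action):
    rules.append((event_type, condition, action))

def dispatch(event):
    fired = []
    for event_type, condition, action in rules:
        if event["type"] == event_type and condition(event):
            fired.append(action(event))
    return fired

# Hypothetical rule: react when an item's price is updated upward.
rule("update",
     condition=lambda e: e["field"] == "price" and e["new"] > e["old"],
     action=lambda e: f"price of {e['item']} rose to {e['new']}")

print(dispatch({"type": "update", "field": "price",
                "item": "widget", "old": 5, "new": 7}))
# ['price of widget rose to 7']
```

Active database systems attach such rules to update operations, while Web-oriented ECA languages attach them to changes in (semi)structured documents; the control flow is the same.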
Knowledge-infused and Consistent Complex Event Processing over Real-time and Persistent Streams
Emerging applications in Internet of Things (IoT) and Cyber-Physical Systems
(CPS) present novel challenges to Big Data platforms for performing online
analytics. Ubiquitous sensors from IoT deployments are able to generate data
streams at high velocity that include information from a variety of domains
and accumulate to large volumes on disk. Complex Event Processing (CEP) is
recognized as an important real-time computing paradigm for analyzing
continuous data streams. However, existing work on CEP is largely limited to
relational query processing, exposing two distinctive gaps for query
specification and execution: (1) infusing the relational query model with
higher level knowledge semantics, and (2) seamless query evaluation across
temporal spaces that span past, present and future events. Addressing these gaps
enables analytics over data streams with properties from different
disciplines, and helps span the velocity (real-time) and volume (persistent)
dimensions. In this article, we introduce a Knowledge-infused CEP (X-CEP)
framework that provides domain-aware knowledge query constructs along with
temporal operators that allow end-to-end queries to span across real-time and
persistent streams. We translate this query model to efficient query execution
over online and offline data streams, proposing several optimizations to
mitigate the overheads introduced by evaluating semantic predicates and by
accessing high-volume historic data streams. The proposed X-CEP query model and
execution approaches are implemented in our prototype semantic CEP engine,
SCEPter. We validate our query model using domain-aware CEP queries from a
real-world Smart Power Grid application, and experimentally analyze the
benefits of our optimizations for executing these queries, using event streams
from a campus-microgrid IoT deployment.
Comment: 34 pages, 16 figures, accepted in Future Generation Computer Systems, October 27, 201
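The kind of windowed temporal pattern that CEP engines evaluate can be sketched generically: detect when a given number of readings exceed a threshold within a sliding time window. This is an illustrative sketch, not the X-CEP query model or SCEPter API; the event shape, names, and parameters are all assumptions.

```python
# Generic CEP-style pattern over a stream of (timestamp, value) events:
# signal whenever `count` readings above `threshold` fall within the
# last `window` time units. Illustrative only; not the SCEPter engine.

from collections import deque

def detect_surges(events, threshold, count, window):
    """Yield each timestamp at which the surge pattern matches."""
    recent = deque()                       # timestamps of recent high readings
    for t, value in events:
        if value > threshold:
            recent.append(t)
            while recent and recent[0] < t - window:
                recent.popleft()           # expire readings outside the window
            if len(recent) >= count:
                yield t

stream = [(1, 40), (2, 55), (3, 58), (4, 30), (5, 61)]
print(list(detect_surges(stream, threshold=50, count=2, window=3)))
# [3, 5]
```

Running the same pattern over an archived (persistent) stream rather than a live one is exactly the past/present span the article's temporal operators address; the matching logic itself does not change.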