Expressive Stream Reasoning with Laser
An increasing number of use cases require a timely extraction of non-trivial
knowledge from semantically annotated data streams, especially on the Web and
for the Internet of Things (IoT). Often, this extraction requires expressive
reasoning, which is challenging to compute on large streams. We propose Laser,
a new reasoner that supports a pragmatic, non-trivial fragment of the logic
LARS which extends Answer Set Programming (ASP) for streams. At its core, Laser
implements a novel evaluation procedure which annotates formulae to avoid the
re-computation of duplicates at multiple time points. This procedure, combined
with a judicious implementation of the LARS operators, is responsible for
significantly better runtimes than the ones of other state-of-the-art systems
like C-SPARQL and CQELS, or an implementation of LARS which runs on the ASP
solver Clingo. This enables the application of expressive logic-based reasoning
to large streams and opens the door to a wider range of stream reasoning use
cases.
Comment: 19 pages, 5 figures. Extended version of accepted paper at ISWC 201
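The interval-annotation idea behind Laser can be illustrated with a minimal sketch: an atom observed inside a time window is annotated once with the horizon up to which it holds, so later time points only check the stored annotation instead of re-deriving the atom. The class and method names below are illustrative assumptions, not Laser's actual API.

```python
from collections import deque

class WindowDiamond:
    """Toy 'sometime-in-window' operator with interval annotations,
    inspired by (but not taken from) the Laser paper. An atom observed
    at time t in a window of size w is annotated as holding until t+w;
    later queries just consult the annotation."""

    def __init__(self, window_size):
        self.window_size = window_size
        # (atom, horizon) pairs; horizons are non-decreasing because
        # stream events arrive in time order.
        self.annotations = deque()

    def push(self, atom, t):
        # Annotate once: the atom holds in windows up to t + window_size.
        self.annotations.append((atom, t + self.window_size))

    def holds(self, atom, t):
        # Expire annotations whose horizon has passed, then look up.
        while self.annotations and self.annotations[0][1] < t:
            self.annotations.popleft()
        return any(a == atom for a, _ in self.annotations)
```

A usage example, under the same illustrative semantics: an atom pushed at time 1 with window size 3 still holds at time 3, but not at time 5, and no re-derivation happens in between.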
Subset reasoning for event-based systems
In highly dynamic domains such as the Internet of Things (IoT), smart industries, smart manufacturing, pervasive health, and social media, data is generated continuously. By combining this data with background knowledge and performing expressive reasoning over the combination, meaningful decisions can be made. This continuously generated data typically originates from multiple heterogeneous sources. Ontologies are ideal for modeling such domains and facilitate the integration of heterogeneous data with background knowledge, while expressive ontology reasoning makes it possible to infer implicit facts and enables intelligent decision making. The data produced in these domains is often volatile, and time-critical systems, such as IoT Nurse Call systems, require timely processing of it. However, there is still a mismatch between volatile data and expressive ontology reasoning, since the incoming data frequency is often higher than the reasoning time allows. For this reason, we present an approximation technique that extracts a subset of the data to speed up the reasoning process. We demonstrate this technique in a Nurse Call proof of concept, in which the locations of nurses are tracked and the most suited nurse is selected when a patient launches a call, and in an extension of an existing benchmark. We managed to speed up the reasoning process by up to 10 times for small datasets and by more than 1000 times for large datasets.
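The subset-extraction idea can be sketched as a pre-filter that keeps only the facts likely to matter for the decision before the expensive ontology reasoner runs. The function name and the distance-based relevance heuristic below are illustrative assumptions, not the paper's actual algorithm.

```python
import math

def relevant_subset(call_location, nurse_positions, radius):
    """Hedged sketch of subset extraction for a Nurse Call scenario:
    instead of feeding every location fact to a slow expressive
    reasoner, keep only nurses within `radius` of the patient's call.
    The reasoner then runs on this much smaller fact base."""
    cx, cy = call_location
    return {
        nurse: (x, y)
        for nurse, (x, y) in nurse_positions.items()
        if math.hypot(x - cx, y - cy) <= radius
    }
```

Any relevance criterion could play this role; the point is that the approximation bounds the input size of the reasoning step, trading completeness for timeliness.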
The Vadalog System: Datalog-based Reasoning for Knowledge Graphs
Over the past years, there has been a resurgence of Datalog-based systems in
the database community as well as in industry. In this context, it has been
recognized that to handle the complex knowledge-based scenarios encountered
today, such as reasoning over large knowledge graphs, Datalog has to be
extended with features such as existential quantification. Yet, Datalog-based
reasoning in the presence of existential quantification is in general
undecidable. Many efforts have been made to define decidable fragments. Warded
Datalog+/- is a very promising one, as it captures PTIME complexity while
allowing ontological reasoning. Yet so far, no implementation of Warded
Datalog+/- was available. In this paper we present the Vadalog system, a
Datalog-based system for performing complex logic reasoning tasks, such as
those required in advanced knowledge graphs. The Vadalog system is Oxford's
contribution to the VADA research programme, a joint effort of the universities
of Oxford, Manchester and Edinburgh and around 20 industrial partners. As the
main contribution of this paper, we illustrate the first implementation of
Warded Datalog+/-, a high-performance Datalog+/- system utilizing an aggressive
termination control strategy. We also provide a comprehensive experimental
evaluation.
Comment: Extended version of VLDB paper <https://doi.org/10.14778/3213880.3213888>
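The bottom-up fixpoint evaluation at the heart of Datalog-based systems can be sketched in a few lines. The example below evaluates plain Datalog transitive closure and deliberately omits Vadalog's existential quantification and warded termination control, which are the system's actual contributions.

```python
def datalog_fixpoint(edges):
    """Naive bottom-up evaluation of the classic Datalog program
        reachable(X, Y) :- edge(X, Y).
        reachable(X, Z) :- reachable(X, Y), edge(Y, Z).
    Facts are (constant, constant) tuples; rules are applied
    repeatedly until no new fact is derived (the fixpoint)."""
    reachable = set(edges)
    while True:
        derived = {
            (x, z)
            for (x, y) in reachable
            for (y2, z) in edges
            if y == y2
        }
        if derived <= reachable:  # fixpoint reached
            return reachable
        reachable |= derived
```

With existential quantification in rule heads, this loop may never terminate, which is exactly why decidable fragments such as Warded Datalog+/- and aggressive termination control matter.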
Temporal datalog with existential quantification
Existential rules, also known as tuple-generating dependencies (TGDs) or Datalog± rules, are heavily studied in the communities of Knowledge Representation and Reasoning, Semantic Web, and Databases, due to their rich modelling capabilities. In this paper we consider TGDs in the temporal setting, by introducing and studying DatalogMTL∃, an extension of metric temporal Datalog (DatalogMTL) obtained by allowing existential rules in programs. We show that DatalogMTL∃ is undecidable even in the restricted cases of guarded and weakly-acyclic programs. To address this issue we introduce a uniform semantics which, on the one hand, is well suited for modelling temporal knowledge as it prevents unintended value invention and, on the other hand, provides decidability of reasoning; in particular, reasoning becomes 2-ExpSpace-complete for weakly-acyclic programs but remains undecidable for guarded programs. We provide an implementation for the decidable case and demonstrate its practical feasibility. Thus we obtain an expressive, yet decidable, rule language and a system suitable for complex temporal reasoning with existential rules.
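The flavor of metric temporal Datalog can be conveyed with a toy sketch in which temporal facts carry validity intervals and a metric rule shifts them forward in time. This is a hypothetical simplification of plain DatalogMTL, not DatalogMTL∃ or its uniform semantics; all names are illustrative.

```python
def apply_metric_rule(facts, head, body, shift):
    """Toy metric temporal rule application over integer time:
    facts are (predicate, (start, end)) pairs, and the rule says
    'head holds at time t + shift whenever body holds at time t',
    so each matching fact's interval is shifted by `shift`."""
    return {
        (head, (start + shift, end + shift))
        for (pred, (start, end)) in facts
        if pred == body
    }
```

In full DatalogMTL the metric operators constrain intervals in richer ways, and adding existentials on top of this is what makes the reasoning problem undecidable in general.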
Deep Bilateral Learning for Real-Time Image Enhancement
Performance is a critical challenge in mobile image processing. Given a
reference imaging pipeline, or even human-adjusted pairs of images, we seek to
reproduce the enhancements and enable real-time evaluation. For this, we
introduce a new neural network architecture inspired by bilateral grid
processing and local affine color transforms. Using pairs of input/output
images, we train a convolutional neural network to predict the coefficients of
a locally-affine model in bilateral space. Our architecture learns to make
local, global, and content-dependent decisions to approximate the desired image
transformation. At runtime, the neural network consumes a low-resolution
version of the input image, produces a set of affine transformations in
bilateral space, upsamples those transformations in an edge-preserving fashion
using a new slicing node, and then applies those upsampled transformations to
the full-resolution image. Our algorithm processes high-resolution images on a
smartphone in milliseconds, provides a real-time viewfinder at 1080p
resolution, and matches the quality of state-of-the-art approximation
techniques on a large class of image operators. Unlike previous work, our model
is trained off-line from data and therefore does not require access to the
original operator at runtime. This allows our model to learn complex,
scene-dependent transformations for which no reference implementation is
available, such as the photographic edits of a human retoucher.
Comment: 12 pages, 14 figures, Siggraph 201
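The locally-affine model can be sketched as follows: a low-resolution grid stores a 3x4 affine color transform per cell, which is upsampled to full resolution and applied at every pixel. The sketch below uses nearest-neighbour upsampling where the paper uses its edge-aware bilateral slicing node; shapes and names are illustrative assumptions.

```python
import numpy as np

def apply_affine_grid(image, coeffs):
    """Toy version of a locally-affine enhancement model. `image` has
    shape (H, W, 3); `coeffs` has shape (gh, gw, 3, 4), one affine
    transform [A | b] per low-resolution grid cell. Each pixel is
    mapped to its grid cell (nearest neighbour, unlike the paper's
    edge-preserving slicing) and transformed as out = A @ rgb + b,
    written here as one 3x4 matrix times a homogeneous pixel."""
    H, W, _ = image.shape
    gh, gw = coeffs.shape[:2]
    out = np.empty_like(image)
    for i in range(H):
        for j in range(W):
            A = coeffs[i * gh // H, j * gw // W]      # (3, 4) transform
            rgb1 = np.append(image[i, j], 1.0)        # homogeneous pixel
            out[i, j] = A @ rgb1
    return out
```

A per-cell affine transform is cheap to apply at full resolution, which is what lets the network run on a low-resolution input yet enhance a high-resolution image in real time.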