On Reasoning with RDF Statements about Statements using Singleton Property Triples
The Singleton Property (SP) approach has been proposed for representing and
querying metadata about RDF triples such as provenance, time, location, and
evidence. In this approach, one singleton property is created to uniquely
represent a relationship in a particular context, and in general, generates a
large property hierarchy in the schema. It has become the subject of important
questions from Semantic Web practitioners. Can an existing reasoner recognize
the singleton property triples? And how? If the singleton property triples
describe a data triple, then how can a reasoner infer this data triple from the
singleton property triples? Or would the large property hierarchy affect the
reasoners in some way? We address these questions in this paper and present our
study about the reasoning aspects of the singleton properties. We propose a
simple mechanism to enable existing reasoners to recognize the singleton
property triples, as well as to infer the data triples described by the
singleton property triples. We evaluate the effect of the singleton property
triples in the reasoning processes by comparing the performance on RDF datasets
with and without singleton properties. Our evaluation uses as benchmarks the
LUBM datasets and the LUBM-SP datasets, which are derived from LUBM with temporal
information added through singleton properties.
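The inference the questions above ask about can be sketched in a few lines of Python, assuming the SP approach's linking predicate (abbreviated here as `rdf:singletonPropertyOf`) and representing triples as plain tuples; the example data is illustrative.

```python
# Minimal sketch of singleton-property inference. Triples are plain
# (subject, predicate, object) tuples, and the linking predicate's IRI
# is abbreviated; both are simplifications for illustration.

SINGLETON_PROPERTY_OF = "rdf:singletonPropertyOf"

def infer_data_triples(triples):
    """Derive generic data triples from singleton-property triples.

    If (sp, rdf:singletonPropertyOf, p) and (s, sp, o) are both asserted,
    the data triple (s, p, o) is entailed.
    """
    # Map each singleton property to the generic property it specialises.
    generic = {s: o for (s, p, o) in triples if p == SINGLETON_PROPERTY_OF}
    inferred = set()
    for (s, p, o) in triples:
        if p in generic:
            inferred.add((s, generic[p], o))
    return inferred

data = {
    (":BobDylan", ":isMarriedTo#1", ":SaraLownds"),
    (":isMarriedTo#1", SINGLETON_PROPERTY_OF, ":isMarriedTo"),
    (":isMarriedTo#1", ":from", "1965"),
}
# infers the described data triple (":BobDylan", ":isMarriedTo", ":SaraLownds")
inferred = infer_data_triples(data)
```

A real reasoner would express the same rule over the property hierarchy rather than scanning triples, but the entailment itself is this simple.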
Linked Data - the story so far
The term 'Linked Data' refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions: the Web of Data. In this article, the authors present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. They describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.
Dynamic Provenance for SPARQL Update
While the Semantic Web currently can exhibit provenance information by using
the W3C PROV standards, there is a "missing link" in connecting PROV to storing
and querying for dynamic changes to RDF graphs using SPARQL. Solving this
problem is required for clear use cases such as the creation of version
control systems for RDF. While some provenance models and annotation techniques
for storing and querying provenance data originally developed with databases or
workflows in mind transfer readily to RDF and SPARQL, these techniques do not
readily adapt to describing changes in dynamic RDF datasets over time. In this
paper we explore how to adapt the dynamic copy-paste provenance model of
Buneman et al. [2] to RDF datasets that change over time in response to SPARQL
updates, how to represent the resulting provenance records themselves as RDF in
a manner compatible with W3C PROV, and how the provenance information can be
defined by reinterpreting SPARQL updates. The primary contribution of this
paper is a semantic framework that enables the semantics of SPARQL Update to be
used as the basis for a 'cut-and-paste' provenance model in a principled
manner.
Comment: Pre-publication version of ISWC 2014 paper
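The idea of recording provenance for dynamic changes to an RDF graph can be illustrated with a toy update log. This is a hedged sketch, not the paper's formal semantics: the operation names and record fields below are assumptions for the example.

```python
# Illustrative sketch: apply simple INSERT DATA / DELETE DATA operations
# to a triple set while recording a provenance record for each change,
# in the spirit of attaching PROV-style metadata to dynamic RDF updates.

from datetime import datetime, timezone

def apply_update(graph, op, triples, log):
    """Apply an update to `graph` in place and append provenance records.

    Each record notes the operation, the affected triple, and a timestamp.
    """
    now = datetime.now(timezone.utc).isoformat()
    for t in triples:
        if op == "INSERT" and t not in graph:
            graph.add(t)
            log.append({"op": "INSERT", "triple": t, "at": now})
        elif op == "DELETE" and t in graph:
            graph.remove(t)
            log.append({"op": "DELETE", "triple": t, "at": now})

graph, log = set(), []
apply_update(graph, "INSERT", [(":g", ":hasAuthor", ":alice")], log)
apply_update(graph, "DELETE", [(":g", ":hasAuthor", ":alice")], log)
# the graph is back to empty, but the log retains both changes
```

A version-control system for RDF, the use case named above, is essentially this log made queryable and linked to PROV entities and activities.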
A web-based approach to engineering adaptive collaborative applications
Current methods for developing collaborative applications force developers to make
decisions and speculate about the environment in which the application will operate,
the network infrastructure that will be used, and the device types the application
will run on. These decisions and assumptions about the target environment are far
from ideal. Such methods produce collaborative applications that are inflexible, work
only on homogeneous networks and single platforms, require pre-existing knowledge of the
data and information types they use, and have a rigid choice of architecture.
Future collaborative applications, by contrast, must be flexible: they must work
in highly heterogeneous environments and adapt to different networks and a range
of device types. This research investigates the role that the Web and its
various pervasive technologies along with a component-based Grid middleware can
play to address these concerns. The aim is to develop an approach to building adaptive
collaborative applications that can operate on heterogeneous and changing
environments. This work proposes a four-layer model that developers can use to build
adaptive collaborative applications. The four-layer model is populated with Web
technologies such as Scalable Vector Graphics (SVG), the Resource Description
Framework (RDF), the SPARQL Protocol and RDF Query Language (SPARQL), and Gridkit, a
middleware infrastructure based on the Open Overlays concept. The Middleware layer
(the first layer of the four-layer model) addresses network and operating system
heterogeneity, the Group Communication layer enables collaboration and data sharing,
while the Knowledge Representation layer proposes an interoperable RDF data
modelling language and a flexible storage facility with an adaptive architecture for
heterogeneous data storage. Finally, the Presentation and Interaction layer
provides a framework (Oea) for scalable and adaptive user interfaces. The four-layer
model has been successfully used to build a collaborative application, called
Wildfurt, that overcomes the challenges facing collaborative applications. This research has
demonstrated new applications for cutting-edge Web technologies in the area of
building collaborative applications. SVG has been used for developing superior
adaptive and scalable user interfaces that can operate on different device types. RDF
and RDFS have also been used to design and model collaborative applications,
providing a mechanism to define classes, properties, and the relationships between
them. A flexible and adaptable storage facility, able to change its architecture
according to the surrounding environment and requirements, has also been achieved
by combining RDF technology with the Open Overlays middleware, Gridkit.
A General Framework for Representing, Reasoning and Querying with Annotated Semantic Web Data
We describe a generic framework for representing and reasoning with annotated
Semantic Web data, a task of growing importance given the increasing amount of
inconsistent and unreliable metadata on the Web. We formalise the
annotated language, the corresponding deductive system and address the query
answering problem. Previous contributions on specific RDF annotation domains
are encompassed by our unified reasoning formalism as we show by instantiating
it on (i) temporal, (ii) fuzzy, and (iii) provenance annotations. Moreover, we
provide a generic method for combining multiple annotation domains, making it
possible to represent, for example, temporally annotated fuzzy RDF. Furthermore, we address the
development of a query language -- AnQL -- that is inspired by SPARQL,
including several features of SPARQL 1.1 (subqueries, aggregates, assignment,
solution modifiers), along with formal definitions of their semantics.
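One instantiation of such an annotation domain, the fuzzy domain, can be sketched as a small closure computation. The rule shape follows annotated subproperty inference, with ⊗ = min combining premises and ⊕ = max merging alternative derivations; the predicate abbreviation and all data values below are illustrative.

```python
# Sketch of fuzzy-annotated inference: from (s, p, o) : a and
# (p, rdfs:subPropertyOf, q) : b, derive (s, q, o) : min(a, b),
# keeping the maximum degree when a triple is derived more than once.

SUBPROP = "rdfs:subPropertyOf"

def annotated_closure(facts):
    """facts: dict mapping (s, p, o) tuples to fuzzy degrees in [0, 1]."""
    derived = dict(facts)
    changed = True
    while changed:
        changed = False
        # Current subproperty links and their degrees.
        subprops = {(s, o): v for (s, p, o), v in derived.items() if p == SUBPROP}
        for (s, p, o), a in list(derived.items()):
            for (p2, q), b in subprops.items():
                if p == p2:
                    new = min(a, b)                     # conjunction: min
                    if new > derived.get((s, q, o), 0.0):  # merge: max
                        derived[(s, q, o)] = new
                        changed = True
    return derived

facts = {
    (":tom", ":mentors", ":ann"): 0.8,
    (":mentors", SUBPROP, ":knows"): 0.9,
}
# derives (":tom", ":knows", ":ann") with degree min(0.8, 0.9) = 0.8
closed = annotated_closure(facts)
```

Swapping (min, max) for interval intersection/union gives a temporal domain, and pairing the two component-wise is the kind of domain combination the abstract describes.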
Open ebusiness ontology usage: investigating community implementation of goodrelations
The GoodRelations Ontology is experiencing the first stages of mainstream adoption, appealing to a range of enterprises as the eCommerce ontology of choice for promoting their offerings and product catalogues. As adoption increases, so too does the need to critically review and analyse current implementations of the ontology to better assist future usage and uptake. To comprehensively understand the implementation approaches, usage patterns, instance data, and model coverage, data was collected from 105 different web-based sources that have published their business and product-related information using the GoodRelations Ontology. This paper analyses the ontology's usage in terms of data instantiation and conceptual coverage, using SPARQL queries to evaluate quality, usefulness, and inference provisioning. Experimental results highlight that early publishers of structured eCommerce data benefit more because structured data is more readily indexable by search engines, but the lack of available product ontologies and product master datasheets is impeding the creation of a semantically interlinked eCommerce Web.
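The usage-pattern side of such an analysis can be sketched as a simple tally over harvested triples; the triples and the `gr:`-prefixed term names below are invented for illustration, not drawn from the paper's 105 sources.

```python
# Illustrative sketch of a usage-pattern tally: count how often each
# GoodRelations term appears in predicate position across harvested
# triples. Term IRIs are abbreviated with a "gr:" prefix.

from collections import Counter

triples = [
    (":acme", "gr:offers", ":offer1"),
    (":offer1", "gr:hasPriceSpecification", ":price1"),
    (":acme", "gr:offers", ":offer2"),
]

usage = Counter(p for (_, p, _) in triples if p.startswith("gr:"))
# usage ranks "gr:offers" (2 uses) above "gr:hasPriceSpecification" (1 use)
```

The paper's actual analysis works over SPARQL against the collected data; the tally above is just the aggregation step made concrete.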
Programming patterns and development guidelines for Semantic Sensor Grids (SemSorGrid4Env)
The web of Linked Data holds great potential for the creation of semantic applications that can combine self-describing structured data from many sources, including sensor networks. Such applications build upon the success of an earlier generation of 'rapidly developed' applications that utilised RESTful APIs. This deliverable details experience, best practice, and design patterns for developing high-level web-based APIs in support of semantic web applications and mashups for sensor grids. Its main contributions are a proposal for combining Linked Data with RESTful application development, summarised through a set of design principles; and the application of these design principles to Semantic Sensor Grids through the development of a High-Level API for Observations. These are supported by implementations of the High-Level API for Observations in software, and by example semantic mashups that utilise the API.
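As a rough illustration of the high-level API pattern described above, each observation can be exposed as a web resource with its own URI that resolves to a self-describing structured representation. The route shape and field names below are assumptions for the sketch, not the deliverable's actual API.

```python
# Hypothetical sketch: observations as addressable resources whose
# representations carry their own identifiers and property links,
# in the Linked Data + REST style the deliverable advocates.

OBSERVATIONS = {
    "obs/42": {
        "@id": "http://example.org/obs/42",
        "observedProperty": "http://example.org/prop/waveHeight",
        "result": 1.7,
        "unit": "metre",
    },
}

def get_resource(path):
    """Resolve a relative path to its structured representation (or None),
    mimicking a GET against the observations API."""
    return OBSERVATIONS.get(path)

rep = get_resource("obs/42")
# rep is self-describing: it names its own IRI and the property observed
```

The key design principle is that the representation links onward (here via the `observedProperty` IRI) so clients can follow their noses rather than hard-coding the data model.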
CoMMA - Corporate Memory Management through Agents: The CoMMA project final report
This document is the final report of the CoMMA project. It gives an overview of the research activities carried out through the project. First, the general requirements are described through the definition of two scenarios. Then it presents the different technical aspects of the project and the solution that was proposed and implemented.