XSPARQL: Traveling between the XML and RDF worlds - and avoiding the XSLT Pilgrimage
Supporting SPARQL Update Queries in RDF-XML Integration
The Web of Data encourages organizations and companies to publish their data
according to the Linked Data practices and offer SPARQL endpoints. On the other
hand, the dominant standard for information exchange is XML. The SPARQL2XQuery
Framework focuses on the automatic translation of SPARQL queries into XQuery
expressions in order to access XML data across the Web. In this paper, we
outline our ongoing work on supporting update queries in the RDF-XML
integration scenario.
Comment: 13th International Semantic Web Conference (ISWC '14)
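The abstract describes translating SPARQL into XQuery but gives no example. As a purely hypothetical sketch (not the SPARQL2XQuery Framework's actual algorithm, which handles a far more general mapping), the following rewrites a single SPARQL triple pattern into an XQuery FLWOR expression, assuming an invented XML layout where each RDF subject is an `<item>` element and each predicate maps to a child element name:

```python
# Hypothetical sketch only: maps one SPARQL triple pattern (?s <pred> ?o)
# to an XQuery FLWOR expression. The item/child-element layout is an
# assumption for illustration, not the framework's real schema mapping.

def triple_pattern_to_xquery(subject_var: str, predicate: str, object_var: str) -> str:
    """Translate a single triple pattern into a FLWOR expression string."""
    # Crude predicate-to-element mapping: take the local name of the IRI.
    local_name = predicate.rsplit("/", 1)[-1]
    return (
        f"for ${subject_var} in //item\n"
        f"let ${object_var} := ${subject_var}/{local_name}/text()\n"
        f"return ${object_var}"
    )

print(triple_pattern_to_xquery("s", "http://purl.org/dc/terms/title", "o"))
```

A real translator must also handle joins over multiple triple patterns, FILTER expressions, and the configured mapping between the RDF vocabulary and the XML structure; this sketch covers only the single-pattern case.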
Survey over Existing Query and Transformation Languages
A widely acknowledged obstacle for realizing the vision of the Semantic Web is the inability
of many current Semantic Web approaches to cope with data available in such diverging
representation formalisms as XML, RDF, or Topic Maps. A common query language is the first
step to allow transparent access to data in any of these formats. To further the understanding
of the requirements and approaches proposed for query languages in the conventional as well
as the Semantic Web, this report surveys a large number of query languages for accessing
XML, RDF, or Topic Maps. This is the first systematic survey to consider query languages from
all these areas. From the detailed survey of these query languages, a common classification
scheme is derived that is useful for understanding and differentiating languages within and
among all three areas.
A pragmatic approach to semantic repositories benchmarking
The aim of this paper is to benchmark various semantic repositories in order to evaluate their deployment in a commercial image retrieval and browsing application. We adopt a two-phase approach for evaluating the target semantic repositories: analytical parameters such as query language and reasoning support are used to select the pool of target repositories, and practical parameters such as load and query response times are used to select the best match to application requirements. In addition to utilising a widely accepted benchmark for OWL repositories (UOBM), we also use a real-life dataset from the target application, which provides us with the opportunity to consolidate our findings. A distinctive advantage of this benchmarking study is that the essential requirements for the target system, such as semantic expressivity and data scalability, are clearly defined, which allows us to claim a contribution to the benchmarking methodology for this class of applications.
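The practical parameters in the abstract's second phase (load time and query response time) can be pictured with a minimal timing harness. This is an illustrative sketch only; the "repository" below is a stand-in dictionary, not UOBM or any real semantic repository:

```python
# Illustrative sketch: measure load time and mean query response time for a
# candidate repository. The dict-based store is a stand-in for a real system.
import time

def benchmark(load_fn, query_fn, queries, runs=3):
    """Return (load_seconds, mean_query_seconds) for a candidate repository."""
    t0 = time.perf_counter()
    store = load_fn()
    load_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(runs):
        for q in queries:
            query_fn(store, q)
    mean_query = (time.perf_counter() - t0) / (runs * len(queries))
    return load_time, mean_query

# Stand-in repository: subject -> list of (predicate, object) pairs.
triples = [("doc1", "hasTag", "owl"), ("doc2", "hasTag", "rdf")]
load = lambda: {s: [(p, o)] for s, p, o in triples}
query = lambda store, subj: store.get(subj, [])

load_t, query_t = benchmark(load, query, ["doc1", "doc2"])
print(f"load={load_t:.6f}s mean_query={query_t:.6f}s")
```

In the paper's setting the same two numbers would be collected per repository over the UOBM queries and the application dataset, then compared against the requirements fixed in the analytical phase.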
The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation
Background. 
The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community.

Description. 
SADI – Semantic Automated Discovery and Integration – is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services “stack”, SADI services consume and produce instances of OWL Classes following a small number of very straightforward best practices. In addition, we provide codebases that support these best practices, and plug-in tools for popular developer and client software that dramatically simplify the deployment of services by providers, and the discovery and utilization of those services by their consumers.

Conclusions.
SADI Services are fully compliant with, and utilize only, foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user needs, and to automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behavior we have not observed in any other Semantic Web system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in static triple stores, thus facilitating the intersection of Web services and Semantic Web technologies.
Implementation and Deployment of a Library of the High-level Application Programming Interfaces (SemSorGrid4Env)
The high-level API service is designed to support rapid development of thin web applications and mashups beyond the state of the art in GIS, while maintaining compatibility with existing tools and expectations. It provides a fully configurable API, while maintaining a separation of concerns between domain experts, service administrators and mashup developers. It adheres to REST and Linked Data principles, and provides a novel bridge between standards-based (OGC O&M) and Semantic Web approaches. This document discusses the background motivations for the HLAPI (including experiences gained from previously implemented versions), before moving on to specific details of the final implementation, including configuration and deployment instructions, as well as a full tutorial to assist mashup developers with using the exposed observation data.
Discovering Links for Metadata Enrichment on Computer Science Papers
At the very beginning of compiling a bibliography, usually only basic
information about an item, such as its title, authors and publication date, is known.
In order to gather additional information about a specific item, one typically
has to search the library catalog or use a web search engine. This look-up
procedure implies a manual effort for every single item of a bibliography. In
this technical report we present a proof of concept which utilizes Linked Data
technology for the simple enrichment of sparse metadata sets. This is done by
discovering owl:sameAs links between an initial set of computer science
papers and resources from external data sources like DBLP, ACM and the Semantic
Web Conference Corpus. In this report, we demonstrate how the link discovery
tool Silk is used to detect additional information and to enrich an initial set
of records in the computer science domain. The pros and cons of Silk as a link
discovery tool are summarized at the end.
Comment: 22 pages, 4 figures, 7 listings, presented at SWIB1
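Silk itself is driven by declarative link specifications rather than hand-written code; as a hedged, stdlib-only illustration of the underlying idea, the snippet below compares record titles from a local set against a hypothetical external source and emits owl:sameAs links above a similarity threshold (all URIs and titles here are invented examples):

```python
# Hypothetical sketch of title-based link discovery; Silk's real comparison
# operators and link specifications are considerably richer than this.
from difflib import SequenceMatcher

def discover_sameas(local, external, threshold=0.9):
    """Return (local_uri, 'owl:sameAs', external_uri) triples for near-identical titles."""
    links = []
    for l_uri, l_title in local:
        for e_uri, e_title in external:
            # Case-insensitive string similarity in [0, 1].
            ratio = SequenceMatcher(None, l_title.lower(), e_title.lower()).ratio()
            if ratio >= threshold:
                links.append((l_uri, "owl:sameAs", e_uri))
    return links

local = [("ex:paper1", "XSPARQL: Traveling between the XML and RDF Worlds")]
external = [("dblp:conf/x/1", "XSPARQL: traveling between the XML and RDF worlds")]
print(discover_sameas(local, external))
```

The discovered triples can then be added to the sparse metadata records, pointing at the richer descriptions held by sources such as DBLP, ACM, or the Semantic Web Conference Corpus.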