42 research outputs found

    morph-LDP: an R2RML-based Linked Data Platform implementation

    Get PDF
    The W3C Linked Data Platform (LDP) candidate recommendation defines a standard HTTP-based protocol for read/write Linked Data. The W3C R2RML recommendation defines a language to map relational databases (RDBs) to RDF. This paper presents morph-LDP, a novel system that combines these two W3C standardization initiatives to expose relational data as read/write Linked Data for LDP-aware applications, whilst allowing legacy applications to continue using their relational databases.
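The core idea can be illustrated with a minimal sketch (the mapping, URIs, and column names below are invented for illustration and are not morph-LDP's actual API): an R2RML-style triples map turns a relational row into RDF triples that an LDP server could then expose as a read/write resource.

```python
# Hypothetical R2RML-like triples map: a subject URI template plus
# column-to-predicate mappings (a real R2RML mapping is written in Turtle).
TRIPLES_MAP = {
    "subject_template": "http://example.org/employee/{id}",
    "predicate_object_maps": {
        "http://xmlns.com/foaf/0.1/name": "name",
        "http://example.org/ns#department": "dept",
    },
}

def row_to_triples(row, triples_map):
    """Apply the triples map to one relational row, yielding RDF triples."""
    subject = triples_map["subject_template"].format(**row)
    return [
        (subject, predicate, row[column])
        for predicate, column in triples_map["predicate_object_maps"].items()
    ]

row = {"id": 7, "name": "Ada", "dept": "R&D"}
triples = row_to_triples(row, TRIPLES_MAP)
# The triples can then be serialized (e.g. as Turtle) in an LDP resource body;
# a write (HTTP PUT/PATCH) would run the mapping in reverse, back to SQL updates.
```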

    Simplified SPARQL REST API - CRUD on JSON Object Graphs via URI Paths

    Full text link
    Within the Semantic Web community, SPARQL is one of the predominant languages to query and update RDF knowledge. However, the complexity of SPARQL, the underlying graph structure and various encodings are common sources of confusion for Semantic Web novices. In this paper we present a general-purpose approach to convert any given SPARQL endpoint into a simple-to-use REST API. To lower the initial hurdle, we represent the underlying graph as an interlinked view of nested JSON objects that can be traversed by the API path. Comment: 5 pages, 2 figures, ESWC 2018 demo paper.
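The path-traversal idea can be sketched as follows (the path layout and example URIs are illustrative assumptions, not the paper's actual API): each URI path segment after the entry resource is read as a predicate to follow, which chains naturally into a SPARQL 1.1 property path.

```python
def path_to_sparql(path):
    """Map e.g. '/resource/alice/foaf:knows/foaf:name' to a SPARQL query.

    The first segment names the entry resource; the remaining segments
    are joined with '/' into a SPARQL 1.1 property path.
    """
    _, entry, *segments = path.strip("/").split("/")
    subject = f"<http://example.org/{entry}>"
    property_path = "/".join(segments)
    return (
        "PREFIX foaf: <http://xmlns.com/foaf/0.1/>\n"
        f"SELECT ?value WHERE {{ {subject} {property_path} ?value }}"
    )

query = path_to_sparql("/resource/alice/foaf:knows/foaf:name")
```

A REST layer would execute this query against the endpoint and render the bindings as the nested JSON object for that path.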

    Formalisation and experiences of R2RML-based SPARQL to SQL query translation using Morph

    Get PDF
    R2RML is used to specify transformations of data available in relational databases into materialised or virtual RDF datasets. SPARQL queries evaluated against virtual datasets are translated into SQL queries according to the R2RML mappings, so that they can be evaluated over the underlying relational database engines. In this paper we describe an extension of a well-known algorithm for SPARQL-to-SQL translation, originally formalised for RDBMS-backed triple stores, that takes R2RML mappings into account. We present the results of our implementation using queries from a synthetic benchmark and from three real use cases, and show that SPARQL queries can in general be evaluated as fast as the SQL queries that would have been written by SQL experts if no R2RML mappings had been used.
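A much-simplified sketch of the translation idea (not Morph's actual algorithm; the table and column names are invented): the R2RML mapping tells the translator which table and columns back a given predicate, so a triple pattern `?s <pred> ?o` becomes a direct projection over the backing table rather than a self-join over a generic triples table.

```python
# Hypothetical R2RML mapping, reduced to the essentials for one predicate.
MAPPING = {
    "http://xmlns.com/foaf/0.1/name": {
        "table": "EMPLOYEE",
        "subject_col": "ID",
        "object_col": "NAME",
    },
}

def triple_pattern_to_sql(predicate, mapping):
    """Translate the triple pattern  ?s <predicate> ?o  into SQL."""
    m = mapping[predicate]
    # Subject and object variables project the backing columns directly,
    # which is what lets the generated SQL match hand-written queries.
    return f'SELECT {m["subject_col"]}, {m["object_col"]} FROM {m["table"]}'

sql = triple_pattern_to_sql("http://xmlns.com/foaf/0.1/name", MAPPING)
# → 'SELECT ID, NAME FROM EMPLOYEE'
```

The full algorithm must additionally handle joins between triple patterns, filters, and optional patterns, but the mapping lookup above is the step that R2RML awareness adds.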

    Provenance: from long-term preservation to query federation and grid reasoning

    Get PDF

    Supporting Virtualisation Management through an Object Mapping Declarative Language Framework

    Get PDF
    Due to the inevitably vast scale of virtualised cloud computing systems, management of the numerous physical and virtual components that make up their underlying infrastructure may become unwieldy. Many software packages that have historically been installed on desktops and workstations for years are slowly but surely being ported to cloud computing. The virtualisation management problems that are apparent today are only set to worsen as cloud computing systems become ever more pervasive. Backing cloud computing systems are equally elaborate database systems, many platforms of which have made extensive use of distributed computing and virtualisation for years. The more recent emergence of virtualised big data systems, with similarly vast scale problems, has escalated the urgent requirement for creative management of the numerous physical and virtual components. The thesis initially synopsises previous investigatory research concerning these emerging problems and studies the current disposition of virtualisation management, including the associated concepts, strategies and technologies. The thesis then continues to describe the structure and operation of an object mapping declarative language to support the management of these numerous physical and virtual components. The ultimate aim is to develop a Virtualisation Management System (VMS), a software framework that is fully extensible in nature and which combines the rich capability of an imperative assembly with the concise simplicity of a declarative language. It is through this declarative language that human interaction and decision making may be richly yet concisely specified before being converted through object mapping to the comparable imperative assembly for execution. It is also through parsing this declarative language that autonomic algorithms may integrate with and operate the VMS through a suitably defined plug-in mechanism.
The thesis ultimately demonstrates, via scenarios both basic and complex, that the VMS is able to specify, observe, regulate and adapt its virtualisation management domain to the changing disposition of the numerous physical and virtual components that constitute cloud computing and big data systems.
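The object-mapping idea can be sketched in miniature (a toy model; the VMS's actual language and object model are far richer, and the class and keys below are invented): a concise declarative spec is parsed and mapped onto imperative method calls on management objects, so intent stays readable while execution stays programmatic.

```python
class VirtualMachine:
    """A managed object exposing imperative operations."""

    def __init__(self, name):
        self.name, self.state, self.vcpus = name, "stopped", 1

    def start(self):
        self.state = "running"

    def scale(self, vcpus):
        self.vcpus = vcpus

def apply_spec(vm, spec):
    """Object mapping: declarative keys are converted to imperative calls."""
    if spec.get("state") == "running" and vm.state != "running":
        vm.start()
    if "vcpus" in spec:
        vm.scale(spec["vcpus"])

vm = VirtualMachine("web-01")
# Declarative intent in, imperative actions out.
apply_spec(vm, {"state": "running", "vcpus": 4})
```

An autonomic algorithm plugged into such a framework would emit specs like the dictionary above rather than calling the imperative layer directly.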

    Modernization of a legacy system:event streaming with Apache Kafka and Spring Boot

    Get PDF
    Abstract. In this thesis, we will design, implement, and evaluate a brand new replacement, the Watcher, for a legacy system built over two decades ago. The Watcher is able to track changes in our PDM system and notify users of the changes by email or as a push notification using SSE. Functional requirements for the new system come from the legacy system, including the possibility to create subscriptions with a wide range of options to filter out redundant data traffic. The Watcher will also be able to carry out all the operations of its predecessor with increased performance and efficiency. The main focus is on scalability, maintainability, and fault tolerance. The reason for building a new system is mainly the cost of maintainability and further development of the legacy system, as well as features removed due to obsolete technologies. In the literature review, we go through the theory of the technologies related to the project. We create a REST API with Spring Boot for interactions between users and the system, implement a powerful event streaming and processing environment using Apache Kafka, and build a message service responsible for providing information via scheduled emails or SSE. In the end, we use Docker to containerize all the services. In the project design, we present functional as well as technical requirements that we use later on to evaluate the project's success. We also compare the legacy system to the new one using metrics such as speed and ease of the installation process. In the end, we discuss the project's future, including steps before going to production, such as automatic testing, and further development for years to come, such as orchestration.
Modernization of a legacy service: a real-time system using Apache Kafka and Spring Boot. Abstract. In this work we design, implement and evaluate a new system that will replace a legacy system created over two decades ago. This new system, the Watcher, can track changes in our PDM system and notify users of the changes by email and by push notifications. We reuse the functional requirements already defined for the old system, for example creating subscriptions with multiple filters while reducing unnecessary data traffic. The Watcher can carry out all the tasks of the old system and additionally offers better performance and efficiency. The system's main focus is on scalability, maintainability and fault tolerance. In the literature review we go through the theory of the technologies related to the project. Using the Spring Boot framework we implement a REST API through which users can communicate with the system. We also build an efficient environment for data processing and real-time messaging using Apache Kafka. Finally, we create a message service responsible for informing users via a technology called SSE, as well as by sending emails at the time users prefer. Lastly, we place all the services in containers using Docker. In the project design section we present both the functional and the technical requirements, which we later use to evaluate the project's success. We also compare the old and the new system using metrics such as speed and the simplicity of the installation process. At the end we discuss the project's future, including the steps that should be carried out before the system can be taken into production, such as automated testing, and the development of functionality in the years to come.
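The subscription-filter idea described above can be sketched as follows (the field names and subscription format are invented for illustration; the actual system implements this on top of Kafka and Spring Boot): users register filters, and the notifier forwards only the change events that match, cutting redundant data traffic.

```python
def matches(subscription, event):
    """An event matches when every filter key equals the event's value."""
    return all(event.get(k) == v for k, v in subscription["filters"].items())

# Hypothetical subscriptions with filter options.
subscriptions = [
    {"user": "alice@example.com",
     "filters": {"item_type": "part", "action": "released"}},
    {"user": "bob@example.com",
     "filters": {"action": "deleted"}},
]

# A change event as it might arrive from the PDM system's event stream.
event = {"item_type": "part", "action": "released", "item_id": "P-1042"}

# Only matching subscribers are notified (by email or SSE push).
recipients = [s["user"] for s in subscriptions if matches(s, event)]
# → ['alice@example.com']
```

In the real system this filtering would run inside a Kafka consumer, with the message service delivering the result over SSE or scheduled email.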

    Enabling data-intensive biomedical studies

    Get PDF
    The constantly increasing size and complexity of datasets involved in biomedical projects is deeply transforming approaches to their solution. Large-scale studies require specifically designed computational frameworks that are capable of fulfilling many diverse requirements, the most important of which can be summarized in the fundamental properties of scalability, reproducibility and traceability. Although in recent years several new technologies have emerged that help deal with the issues raised by data-intensive research projects, applying them to the construction of a computational solution for the specific problem at hand is far from trivial, as no one-size-fits-all recipe exists for such a task. This work describes a methodology for approaching this new class of studies through several examples of solutions applied to concrete research problems.

    Design of a reference architecture for an IoT sensor network

    Get PDF