8 research outputs found

    Open Data Consumption Through the Generation of Disposable Web APIs

    The ever-growing amount of information in today’s world has led to the publication of more and more open data, i.e., data that is made available on the Web in a free and reusable manner. Open data is considered highly valuable in situational scenarios, in which thematic data is required for a short life cycle by a small group of consumers with specific needs. In this context, data consumers (developers or data scientists) need mechanisms with which to easily assess whether the data is adequate for their purpose. SPARQL endpoints have become very useful for the consumption of open data, but we argue that their steep learning curve hampers open data reuse in situational scenarios. In order to overcome this pitfall, in this paper we coin the term disposable Web APIs as an alternative mechanism for the consumption of open data in situational scenarios. Disposable Web APIs are created on the fly to be used temporarily by a user to consume open data. In this paper, we specifically describe an approach with which to leverage semantic information from data sources so as to automatically generate easy-to-use disposable Web APIs that can be used to access open data in a situational scenario, thus avoiding the complexity and learning curve of SPARQL and the effort of manually processing the data. We have conducted several experiments to discover whether inexperienced users find it easier to use our disposable Web API or a SPARQL endpoint to access open data. The results of the experiments led us to conclude that, in a situational scenario, it is easier and faster to use the Web API than the corresponding SPARQL endpoint to consume open data.

    This work was supported in part by the Access@City coordinated Research Project through the Spanish Ministry of Science, Innovation and Universities under Grant TIN2016-78103-C2-1-R and Grant TIN2016-78103-C2-2-R; in part by the Plataforma intensiva en datos proveedora de servicios inteligentes de movilidad (MoviDA) Project through Rey Juan Carlos University; and in part by the Recolección y publicación de datos abiertos para la reactivación del sector turístico postCOVID-19 (UAPOSTCOVID19-10) Project through the Consejo Social of the University of Alicante. The work of César González-Mora was supported in part by the Generalitat Valenciana, and in part by the European Social Fund under Grant ACIF/2019/044.
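
    As a concrete illustration of the contrast described above, the following sketch compares the two consumption paths in Python. The endpoint URL, the generated API base path, the resource names, and the query vocabulary are invented placeholders, not details taken from the paper.

        import requests

        # Path 1: consuming open data through a SPARQL endpoint. The consumer
        # must know the underlying RDF vocabulary and write the query by hand.
        SPARQL_ENDPOINT = "https://example.org/sparql"  # hypothetical endpoint
        QUERY = """
        PREFIX dbo: <http://dbpedia.org/ontology/>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?name ?population WHERE {
          ?city a dbo:City ;
                rdfs:label ?name ;
                dbo:populationTotal ?population .
        }
        LIMIT 10
        """
        rows = requests.get(
            SPARQL_ENDPOINT,
            params={"query": QUERY},
            headers={"Accept": "application/sparql-results+json"},
            timeout=30,
        ).json()["results"]["bindings"]

        # Path 2: the same data through a disposable Web API generated from the
        # source's semantic annotations: a plain REST call, no SPARQL required.
        API_BASE = "https://example.org/disposable-api"  # hypothetical generated API
        cities = requests.get(f"{API_BASE}/cities", params={"limit": 10}, timeout=30).json()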

    grlc Makes GitHub Taste Like Linked Data APIs


    Using nanopublications as a distributed ledger of digital truth

    With the increase in the volume of research publications, it is very difficult for researchers to keep abreast of all work in their area. Additionally, the claims in classical publications are not machine-readable, making it challenging to retrieve, integrate, and link prior work. Several semantic publishing approaches have been proposed to address these challenges, including Research Object, Executable Paper, Micropublications, and Nanopublications. Nanopublications are a granular way of publishing research-based claims, their associated provenance, and publication information (metadata of the nanopublication) in a machine-readable form. To date, over 10 million nanopublications have been published, covering a wide range of topics, predominantly in the life sciences. Nanopublications are immutable, decentralised/distributed, uniformly structured, granular, and authentic. These features allow nanopublications to be used as a Distributed Ledger of Digital Truth. Such a ledger enables detecting conflicting claims and generating the timeline of discussion on a particular topic. However, the inability to identify all nanopublications related to a given topic prevents existing nanopublications from forming a ledger. In this dissertation, we make the following contributions: (i) Identify quality issues regarding the misuse of authorship properties and link rot, which impact the quality of the digital ledger. We argue that the Nanopub community needs to develop a set of guidelines for publishing nanopublications. (ii) Provide a framework for generating a timeline of discourse over a collection of nanopublications by retrieving and combining nanopublications on a particular topic, thereby providing interoperability between them. (iii) Detect contradictory claims between nanopublications, automatically highlighting the conflicts and providing explanations based on the provenance information in the nanopublications. Through these contributions, we show that nanopublications can form a distributed ledger of digital truth, providing key benefits such as citability, timelines of discourse, and conflict detection, to users of the ledger.
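
    As a minimal sketch of the structure described above, the following Python snippet assembles a nanopublication with rdflib: a head graph linking to an assertion graph (the claim), a provenance graph, and a publication-info graph. The claim, URIs, author, and timestamp are invented for illustration; only the nanopublication schema terms (np:Nanopublication, np:hasAssertion, np:hasProvenance, np:hasPublicationInfo) come from the nanopub model.

        from rdflib import Dataset, Literal, Namespace, URIRef
        from rdflib.namespace import RDF, XSD

        NP = Namespace("http://www.nanopub.org/nschema#")
        PROV = Namespace("http://www.w3.org/ns/prov#")
        EX = Namespace("http://example.org/np1#")  # hypothetical nanopub namespace

        ds = Dataset()
        ds.bind("np", NP)
        ds.bind("prov", PROV)
        ds.bind("ex", EX)

        # Head graph: links the nanopublication to its three content graphs.
        head = ds.graph(EX.Head)
        head.add((EX.nanopub, RDF.type, NP.Nanopublication))
        head.add((EX.nanopub, NP.hasAssertion, EX.assertion))
        head.add((EX.nanopub, NP.hasProvenance, EX.provenance))
        head.add((EX.nanopub, NP.hasPublicationInfo, EX.pubinfo))

        # Assertion graph: the machine-readable claim itself (invented example).
        assertion = ds.graph(EX.assertion)
        assertion.add((EX.geneX, EX.isAssociatedWith, EX.diseaseY))

        # Provenance graph: where the asserted claim came from.
        provenance = ds.graph(EX.provenance)
        provenance.add((EX.assertion, PROV.wasDerivedFrom,
                        URIRef("https://doi.org/10.1234/example")))

        # Publication-info graph: metadata about the nanopublication itself.
        pubinfo = ds.graph(EX.pubinfo)
        pubinfo.add((EX.nanopub, PROV.generatedAtTime,
                     Literal("2024-01-01T00:00:00Z", datatype=XSD.dateTime)))
        pubinfo.add((EX.nanopub, PROV.wasAttributedTo,
                     URIRef("https://orcid.org/0000-0000-0000-0000")))

        print(ds.serialize(format="trig"))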