3,161 research outputs found

    A schema-based P2P network to enable publish-subscribe for multimedia content in open hypermedia systems

    Open Hypermedia Systems (OHS) aim to provide efficient dissemination, adaptation and integration of hyperlinked multimedia resources. Content available in Peer-to-Peer (P2P) networks could add significant value to OHS, provided that the challenges of efficient discovery and prompt delivery of rich, up-to-date content are successfully addressed. This paper proposes an architecture that enables the operation of OHS over a P2P overlay network of OHS servers, based on semantic annotation of (a) peer OHS servers and (b) the multimedia resources that can be obtained through the link services of the OHS. The architecture provides efficient resource discovery. Semantic query-based subscriptions over this P2P network can enable access to up-to-date content, while caching at certain peers enables prompt delivery of multimedia content. Advanced query resolution techniques are employed to match different parts of subscription queries (subqueries). These subscriptions can be shared among different interested peers, thus increasing the efficiency of multimedia content dissemination.
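
    As a rough illustration of the subquery-sharing idea, the sketch below (in Python, with an invented attribute-value vocabulary; none of these names come from the paper) decomposes conjunctive subscriptions into atomic subqueries and identifies the subqueries several subscriptions have in common, so they could be evaluated once and their results shared among interested peers.

```python
# Minimal sketch of subscription/subquery matching for a semantic
# publish-subscribe overlay. All names (SubQuery, Subscription, the
# attribute vocabulary) are illustrative assumptions, not the paper's API.

from dataclasses import dataclass

@dataclass(frozen=True)
class SubQuery:
    """One atomic predicate over a resource's semantic annotations."""
    attribute: str   # e.g. "mediaType"
    value: str       # e.g. "video/mp4"

    def matches(self, annotations: dict) -> bool:
        return annotations.get(self.attribute) == self.value

@dataclass
class Subscription:
    """A conjunctive query, decomposed into shareable subqueries."""
    subqueries: frozenset[SubQuery]

    def matches(self, annotations: dict) -> bool:
        return all(sq.matches(annotations) for sq in self.subqueries)

def shared_subqueries(subs: list[Subscription]) -> set[SubQuery]:
    """Subqueries appearing in more than one subscription can be
    evaluated once and the results shared among interested peers."""
    seen, shared = set(), set()
    for sub in subs:
        for sq in sub.subqueries:
            (shared if sq in seen else seen).add(sq)
    return shared

# Example: two peers subscribe to overlapping content descriptions.
s1 = Subscription(frozenset({SubQuery("mediaType", "video/mp4"),
                             SubQuery("topic", "hypermedia")}))
s2 = Subscription(frozenset({SubQuery("mediaType", "video/mp4"),
                             SubQuery("topic", "p2p")}))
resource = {"mediaType": "video/mp4", "topic": "hypermedia"}
print(s1.matches(resource), s2.matches(resource))  # True False
print(shared_subqueries([s1, s2]))  # the common mediaType subquery
```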

    Decentralized provenance-aware publishing with nanopublications

    Publication and archival of scientific results is still commonly considered the responsibility of classical publishing companies. Classical forms of publishing, however, which center around printed narrative articles, no longer seem well suited to the digital age. In particular, there currently exist no efficient, reliable, and agreed-upon methods for publishing scientific datasets, which have become increasingly important for science. In this article, we propose to design scientific data publishing as a web-based, bottom-up process, without top-down control by central authorities such as publishing companies. Based on a novel combination of existing concepts and technologies, we present a server network to decentrally store and archive data in the form of nanopublications, an RDF-based format for representing scientific data. We show how this approach allows researchers to publish, retrieve, verify, and recombine datasets of nanopublications in a reliable and trustworthy manner, and we argue that this architecture could be used as a low-level data publication layer to serve the Semantic Web in general. Our evaluation of the current network shows that this system is efficient and reliable.
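
    To make the data model concrete, here is a minimal sketch that assembles a nanopublication as named RDF graphs (assertion, provenance, publication info, plus a head graph linking them) using the rdflib library; the example URIs and the asserted statement are invented for illustration.

```python
# A minimal sketch of a nanopublication built as four named RDF graphs
# with rdflib. The nanopub schema namespace is real; everything under
# http://example.org/ is an invented placeholder.

from rdflib import Dataset, Namespace, Literal
from rdflib.namespace import RDF, PROV, XSD

NP = Namespace("http://www.nanopub.org/nschema#")
EX = Namespace("http://example.org/np1#")

ds = Dataset()

# Head graph: links the nanopublication to its three content graphs.
head = ds.graph(EX.head)
head.add((EX.pub, RDF.type, NP.Nanopublication))
head.add((EX.pub, NP.hasAssertion, EX.assertion))
head.add((EX.pub, NP.hasProvenance, EX.provenance))
head.add((EX.pub, NP.hasPublicationInfo, EX.pubinfo))

# Assertion graph: the scientific claim itself (invented example).
assertion = ds.graph(EX.assertion)
assertion.add((EX.geneX, EX.isAssociatedWith, EX.diseaseY))

# Provenance graph: where the assertion came from.
provenance = ds.graph(EX.provenance)
provenance.add((EX.assertion, PROV.wasDerivedFrom, EX.experiment42))

# Publication info graph: metadata about the nanopublication itself.
pubinfo = ds.graph(EX.pubinfo)
pubinfo.add((EX.pub, PROV.generatedAtTime,
             Literal("2016-01-01T00:00:00Z", datatype=XSD.dateTime)))

print(ds.serialize(format="trig"))
```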

    The simplicity project: easing the burden of using complex and heterogeneous ICT devices and services

    As of today, to exploit the variety of different "services", users need to configure each of their devices using different procedures and must explicitly select among heterogeneous access technologies and protocols. In addition, users are authenticated and charged by different means. The lack of implicit human-computer interaction, context awareness and standardisation places an enormous burden of complexity on the shoulders of the final users. The IST-Simplicity project aims at alleviating these problems by: i) automatically creating and customizing a user communication space; ii) adapting services to user terminal characteristics and to user preferences; iii) orchestrating network capabilities. The aim of this paper is to present the technical framework of the IST-Simplicity project; it offers a thorough analysis and qualitative evaluation of the technologies, standards and works in the literature related to the Simplicity system to be developed.
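
    Purely as a hypothetical illustration of point ii), the following sketch selects a content variant based on terminal characteristics; the profile fields, variants and thresholds are invented and do not reflect the Simplicity framework's actual interfaces.

```python
# Illustrative sketch (not the Simplicity framework's API) of adapting
# a service's response to terminal characteristics recorded in a
# user's communication-space profile.

from dataclasses import dataclass

@dataclass
class TerminalProfile:
    screen_width: int      # pixels
    supports_video: bool
    bandwidth_kbps: int

# Hypothetical content variants, richest first, keyed by minimum requirements.
VARIANTS = [
    {"name": "full-video",    "min_width": 800, "video": True,  "min_kbps": 1000},
    {"name": "low-res-video", "min_width": 320, "video": True,  "min_kbps": 300},
    {"name": "text-summary",  "min_width": 0,   "video": False, "min_kbps": 0},
]

def adapt(profile: TerminalProfile) -> str:
    """Return the richest variant the terminal can handle."""
    for v in VARIANTS:
        if (profile.screen_width >= v["min_width"]
                and (not v["video"] or profile.supports_video)
                and profile.bandwidth_kbps >= v["min_kbps"]):
            return v["name"]
    return "text-summary"

print(adapt(TerminalProfile(1024, True, 2000)))  # full-video
print(adapt(TerminalProfile(240, False, 50)))    # text-summary
```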

    Impactful learning: exploring the value of informal learning experiences to improve the learning potential of international research projects.

    Horizon 2020 (H2020) is the largest EU-funded research programme; it supports the mobility of international researchers through secondments to engage in collaborative research activities and to enhance individual and collective research capacity within the EU. This paper explores how an analysis of secondees' informal learning experiences can highlight opportunities for increasing the individual and collective learning capacity of an international partnership and the achievement of project objectives. A thematic analysis method (Miles and Huberman, 1994) was applied to 19 secondees' individual learning reports. The main findings are discussed under three themes elicited from the secondees' informal learning, including living and working in a host country and developing an academic career. The paper outlines practice, policy and research implications for improving the learning potential of international research projects.

    An Interoperable Access Control System based on Self-Sovereign Identities

    The extreme growth of the World Wide Web in the last decade, together with recent scandals related to theft or abusive use of personal information, has left users unsatisfied with their digital identity providers and concerned about their online privacy. Self-Sovereign Identity (SSI) is a new identity management paradigm which gives control over personal information back to its rightful owner: the individual. However, adoption of SSI on the Web is complicated by the high overhead costs for service providers due to the lacking interoperability of the various emerging SSI solutions. In this work, we propose an Access Control System based on Self-Sovereign Identities with a semantically modelled Access Control Logic. Our system relies on the Web Access Control authorization rules used in the Solid project and extends them to additionally express requirements on Verifiable Credentials, i.e., digital credentials adhering to a standardized data model. Moreover, the system achieves interoperability across multiple DID Methods and types of Verifiable Credentials, allowing for incremental extensibility of the supported SSI technologies by design. A Proof-of-Concept prototype is implemented, and its performance as well as multiple system design choices are evaluated: the end-to-end latency of the authorization process is between 2 and 5 seconds depending on the DID Methods used, and could theoretically be reduced to 1.5 to 3 seconds. Evaluating the potential interoperability achieved by the system shows that multiple DID Methods and different types of Verifiable Credentials can be supported. Lastly, multiple approaches for modelling required Verifiable Credentials are compared, and the suitability of the SHACL language for describing the RDF graphs represented by the required Linked Data credentials is shown.
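
    The following sketch illustrates, in the spirit of the paper's SHACL-based approach, how a required Linked Data credential could be described as a SHACL shape and a presented credential graph validated against it using rdflib and pySHACL; the vocabulary and URIs are assumptions, not the system's actual ontology.

```python
# Sketch: validate a presented credential (as an RDF graph) against a
# SHACL shape describing the required Verifiable Credential. The
# ex: vocabulary is invented for illustration.

from rdflib import Graph
from pyshacl import validate

# Required-credential description: any node of type ex:StudentCredential
# must have been issued by the trusted university issuer.
SHAPE_TTL = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/vocab#> .

ex:RequiredStudentCredentialShape
    a sh:NodeShape ;
    sh:targetClass ex:StudentCredential ;
    sh:property [
        sh:path ex:issuer ;
        sh:hasValue ex:TrustedUniversity ;
    ] .
"""

# The RDF graph of a credential presented by the requesting agent.
CREDENTIAL_TTL = """
@prefix ex: <http://example.org/vocab#> .

ex:credential123
    a ex:StudentCredential ;
    ex:issuer ex:TrustedUniversity ;
    ex:holder ex:alice .
"""

shapes = Graph().parse(data=SHAPE_TTL, format="turtle")
credential = Graph().parse(data=CREDENTIAL_TTL, format="turtle")

conforms, _, report = validate(credential, shacl_graph=shapes)
print("access granted" if conforms else f"access denied:\n{report}")
```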

    Transparent Personal Data Processing: The Road Ahead

    The European General Data Protection Regulation defines a set of obligations for personal data controllers and processors. Primary obligations include: obtaining explicit consent from the data subject for the processing of personal data, providing full transparency with respect to the processing, and enabling data rectification and erasure (albeit only in certain circumstances). At the core of any transparency architecture is the logging of events relating to the processing and sharing of personal data. The logs should enable verification that data processors abide by the access and usage control policies that have been associated with the data, based on the data subject's consent and the applicable regulations. In this position paper, we: (i) identify the requirements that need to be satisfied by such a transparency architecture, (ii) examine the suitability of existing logging mechanisms in light of said requirements, and (iii) present a number of open challenges and opportunities.
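
    One building block such a transparency architecture would need is a tamper-evident, append-only event log. The sketch below shows a minimal hash-chained log of processing events in Python; the field names and event vocabulary are illustrative assumptions, not a proposal from the paper.

```python
# Minimal sketch of a hash-chained, append-only log of personal-data
# processing events, so later verification can detect tampering or
# deletion. Field names are illustrative assumptions.

import hashlib, json, time

class ProcessingLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, controller: str, subject: str, action: str, purpose: str):
        entry = {
            "timestamp": time.time(),
            "controller": controller,   # who processed the data
            "subject": subject,         # whose data was processed
            "action": action,           # e.g. "read", "share", "erase"
            "purpose": purpose,         # the consented purpose
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; a modified or dropped entry breaks it."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = ProcessingLog()
log.append("acme-analytics", "alice", "read", "marketing-consent-42")
log.append("acme-analytics", "alice", "share", "marketing-consent-42")
print(log.verify())  # True; mutate any entry and this becomes False
```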

    Theory and Practice of Data Citation

    Citations are the cornerstone of knowledge propagation and the primary means of assessing the quality of research, as well as directing investments in science. Science is increasingly becoming "data-intensive", where large volumes of data are collected and analyzed to discover complex patterns through simulations and experiments, and most scientific reference works have been replaced by online curated datasets. Yet, given a dataset, there is no quantitative, consistent and established way of knowing how it has been used over time, who contributed to its curation, what results have been yielded or what value it has. The development of a theory and practice of data citation is fundamental for considering data as first-class research objects with the same relevance and centrality as traditional scientific products. Many works in recent years have discussed data citation from different viewpoints: illustrating why data citation is needed, defining the principles and outlining recommendations for data citation systems, and providing computational methods for addressing specific issues of data citation. The current panorama is many-faceted, and an overall view that brings together the diverse aspects of this topic is still missing. Therefore, this paper aims to describe the lay of the land for data citation, both from the theoretical (the why and what) and the practical (the how) angle.
    Comment: 24 pages, 2 tables, pre-print accepted in the Journal of the Association for Information Science and Technology (JASIST), 201
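
    As a small practical illustration (not taken from the paper), the sketch below renders a citation string that pins a dataset's version, persistent identifier and access date, which is one of the practical issues a data citation system must handle; the metadata schema and citation style are assumptions.

```python
# Sketch of rendering a data citation that identifies the exact cited
# state of an evolving dataset: version, persistent identifier (DOI),
# and access date. Schema and style are illustrative assumptions.

from dataclasses import dataclass
from datetime import date

@dataclass
class DatasetMetadata:
    authors: list[str]
    title: str
    version: str
    year: int
    repository: str
    doi: str  # persistent identifier

def cite(meta: DatasetMetadata, accessed: date) -> str:
    authors = "; ".join(meta.authors)
    return (f"{authors} ({meta.year}). {meta.title} (Version {meta.version}) "
            f"[Data set]. {meta.repository}. https://doi.org/{meta.doi} "
            f"(accessed {accessed.isoformat()})")

meta = DatasetMetadata(
    authors=["Doe, J.", "Roe, R."],
    title="Example Gene-Disease Associations",
    version="2.1",
    year=2018,
    repository="Example Repository",
    doi="10.1234/example.5678",
)
print(cite(meta, date(2019, 5, 1)))
```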