1,335 research outputs found

    Trusty URIs: Verifiable, Immutable, and Permanent Digital Artifacts for Linked Data

    To make digital resources on the web verifiable, immutable, and permanent, we propose a technique to include cryptographic hash values in URIs. We call them trusty URIs and show how they can be used for approaches such as nanopublications to make not only specific resources but their entire reference trees verifiable. Digital artifacts can be identified not only on the byte level but also on more abstract levels such as RDF graphs, which means that resources keep their hash values even when presented in a different format. Our approach adheres to the core principles of the web, namely openness and decentralized architecture, is fully compatible with existing standards and protocols, and can therefore be used right away. Evaluation of our reference implementations shows that these desired properties are indeed accomplished by our approach, and that it remains practical even for very large files.
    Comment: Small error corrected in the text (table data was correct) on page 13: "All average values are below 0.8s (0.03s for batch mode). Using Java in batch mode even requires only 1ms per file."
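    As a rough illustration of the idea (a minimal sketch, not the authors' reference implementation), the snippet below builds a byte-level trusty URI by hashing the artifact with SHA-256 and appending a URL-safe base64 artifact code prefixed with a module identifier. The module code "FA", the dot separator, and the exact encoding alphabet are defined in the paper and its spec; the details here are illustrative only.

```python
import base64
import hashlib

def trusty_uri(base_uri: str, content: bytes, module: str = "FA") -> str:
    """Append a hash-based artifact code to a URI (illustrative sketch).

    Follows the general trusty URI recipe: SHA-256 over the byte content,
    encoded with a URL-safe base64 alphabet (A-Za-z0-9-_), prefixed with
    a module identifier. Consult the published spec for the exact module
    codes, separators, and character mapping.
    """
    digest = hashlib.sha256(content).digest()
    # URL-safe base64 with padding stripped.
    code = base64.urlsafe_b64encode(digest).decode("ascii").rstrip("=")
    return f"{base_uri}.{module}{code}"

uri = trusty_uri("http://example.org/np1", b"<contents of the artifact>")
print(uri)  # http://example.org/np1.FA<43-character code>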

    Making Digital Artifacts on the Web Verifiable and Reliable

    The current Web has no general mechanisms to make digital artifacts --- such as datasets, code, texts, and images --- verifiable and permanent. For digital artifacts that are supposed to be immutable, there is moreover no commonly accepted method to enforce this immutability. These shortcomings have a serious negative impact on the ability to reproduce the results of processes that rely on Web resources, which in turn heavily affects areas such as science, where reproducibility is important. To solve this problem, we propose trusty URIs containing cryptographic hash values. We show how trusty URIs can be used for the verification of digital artifacts, in a manner that is independent of the serialization format in the case of structured data files such as nanopublications. We demonstrate how the contents of these files become immutable, including dependencies on external digital artifacts, thereby extending the range of verifiability to the entire reference tree. Our approach adheres to the core principles of the Web, namely openness and decentralized architecture, and is fully compatible with existing standards and protocols. Evaluation of our reference implementations shows that these design goals are indeed accomplished by our approach, and that it remains practical even for very large files.
    Comment: Extended version of conference paper: arXiv:1401.577
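    Verification is the mirror image of minting: retrieve the content, recompute the hash, and compare it with the code embedded in the URI. The sketch below assumes the illustrative byte-level scheme from the previous snippet; real implementations must also handle the RDF-graph module, which hashes a normalized graph rather than raw bytes so that the check survives re-serialization.

```python
import base64
import hashlib

def verify_trusty_uri(uri: str, content: bytes) -> bool:
    """Check that content matches the artifact code embedded in the URI.

    Assumes the illustrative ".FA" byte-level scheme: URL-safe base64 of
    the SHA-256 digest, appended after the module identifier.
    """
    _, _, code = uri.rpartition(".FA")
    digest = hashlib.sha256(content).digest()
    expected = base64.urlsafe_b64encode(digest).decode("ascii").rstrip("=")
    return code == expected
```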

    Usage and Performance of REST- and SOAP-Based Web Services (REST ja SOAP pohjaisien web-palveluiden käyttö ja suorituskyky)

    REST and SOAP are web service technologies for solving the message delivery problem. The choice between the two is not obvious, and comparing them is difficult. This thesis carries out that comparison and eases the choice with recommendations. It also investigates REST as a replacement for SOAP in the Seitatech Payment solution. The definitions of SOAP and REST and the usage of both are described, and these definitions are used to compare the technologies at the conceptual and feature level. In addition, practical performance tests of each technology are carried out on a simple test setup built on a web service platform provided by Seitatech, and the results are analysed. The results show REST outperforming SOAP in terms of bandwidth usage and message processing performance. During the tests, performance issues were discovered as message size grew, which indicates parser issues in the Seitatech platform. The comparison yielded the characteristic differences between SOAP and REST. REST is recommended for most common cases, as it is less complex, less burdensome, and easier to develop and use than SOAP. SOAP should only be chosen if particular functionality, such as security options, is required.
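    The bandwidth difference is visible from message framing alone. Below is a toy comparison (not the thesis's actual test messages; the "Pay" operation and field names are invented) of the same request expressed as a REST/JSON body and as a SOAP 1.1 envelope.

```python
# Toy size comparison of equivalent REST/JSON and SOAP/XML payloads.
# Operation and field names are invented for illustration.
import json

rest_body = json.dumps({"account": "FI21", "amount": 10.0, "currency": "EUR"})

soap_body = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Pay xmlns="http://example.org/payment">
      <account>FI21</account>
      <amount>10.0</amount>
      <currency>EUR</currency>
    </Pay>
  </soap:Body>
</soap:Envelope>"""

# The fixed envelope overhead grows further with WS-* headers and nesting.
print(len(rest_body), len(soap_body))
```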

    From Artifacts to Aggregations: Modeling Scientific Life Cycles on the Semantic Web

    In the process of scientific research, many information objects are generated, all of which may remain valuable indefinitely. However, artifacts such as instrument data and associated calibration information may have little value in isolation; their meaning is derived from their relationships to each other. Individual artifacts are best represented as components of a life cycle that is specific to a scientific research domain or project. Current cataloging practices do not describe objects at a sufficient level of granularity, nor do they offer the globally persistent identifiers necessary to discover and manage scholarly products with World Wide Web standards. The Open Archives Initiative's Object Reuse and Exchange data model (OAI-ORE) meets these requirements. We demonstrate a conceptual implementation of OAI-ORE to represent the scientific life cycles of embedded networked sensor applications in seismology and environmental sciences. By establishing relationships between publications, data, and contextual research information, we illustrate how to obtain a richer and more realistic view of scientific practices. That view can facilitate new forms of scientific research and learning. Our analysis is framed by studies of scientific practices in a large, multi-disciplinary, multi-university science and engineering research center, the Center for Embedded Networked Sensing (CENS).
    Comment: 28 pages. To appear in the Journal of the American Society for Information Science and Technology (JASIST)
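    A minimal sketch of the modeling idea (assuming rdflib and invented example URIs, not the paper's actual implementation): an OAI-ORE Aggregation links a publication to its dataset and calibration record, with a Resource Map describing the aggregation.

```python
# Sketch of an OAI-ORE Aggregation with rdflib; URIs are invented.
from rdflib import Graph, Namespace, URIRef, RDF

ORE = Namespace("http://www.openarchives.org/ore/terms/")

g = Graph()
g.bind("ore", ORE)

agg = URIRef("http://example.org/cens/study-42/aggregation")
rem = URIRef("http://example.org/cens/study-42/resource-map")

g.add((agg, RDF.type, ORE.Aggregation))
g.add((rem, ORE.describes, agg))      # the resource map describes...
g.add((agg, ORE.isDescribedBy, rem))  # ...the aggregation, and vice versa

# The aggregation ties together the life-cycle artifacts.
for part in ("paper.pdf", "seismic-data.csv", "calibration.xml"):
    g.add((agg, ORE.aggregates, URIRef(f"http://example.org/cens/study-42/{part}")))

print(g.serialize(format="turtle"))
```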

    Report of the Stanford Linked Data Workshop

    The Stanford University Libraries and Academic Information Resources (SULAIR), with the Council on Library and Information Resources (CLIR), conducted a week-long workshop on the prospects for a large-scale, multi-national, multi-institutional prototype of a Linked Data environment for discovery of and navigation among the rapidly, chaotically expanding array of academic information resources. As preparation for the workshop, CLIR sponsored a survey by Jerry Persons, Chief Information Architect emeritus of SULAIR, that was published originally for workshop participants as background and is now publicly available. The original intention of the workshop was to devise a plan for such a prototype. However, such was the diversity of knowledge, experience, and views of the potential of Linked Data approaches that the workshop participants turned to two more fundamental goals: building common understanding and enthusiasm on the one hand, and identifying opportunities and challenges to be confronted in the preparation of the intended prototype and its operation on the other. In pursuit of those objectives, the workshop participants produced:
    1. a value statement addressing the question of why a Linked Data approach is worth prototyping;
    2. a manifesto for Linked Libraries (and Museums and Archives and …);
    3. an outline of the phases in a life cycle of Linked Data approaches;
    4. a prioritized list of known issues in generating, harvesting & using Linked Data;
    5. a workflow with notes for converting library bibliographic records and other academic metadata to URIs (a toy sketch of this step follows below);
    6. examples of potential "killer apps" using Linked Data; and
    7. a list of next steps and potential projects.
    This report includes a summary of the workshop agenda, a chart showing the use of Linked Data in cultural heritage venues, and short biographies and statements from each of the participants.
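    As a toy sketch of the record-to-URI conversion hinted at in item 5 (the base URI, record fields, and identifier preference order are all invented, not the workshop's actual workflow):

```python
# Toy sketch of minting stable HTTP URIs from bibliographic identifiers.
# Base URI, field names, and preference order are invented for illustration.
from urllib.parse import quote

BASE = "http://example.edu/id/bib/"

def mint_uri(record: dict) -> str:
    # Prefer a globally scoped identifier over a locally scoped catalog key.
    for key in ("oclc", "lccn", "local_id"):
        if record.get(key):
            return BASE + quote(f"{key}/{record[key]}")
    raise ValueError("record has no usable identifier")

print(mint_uri({"oclc": "12345", "title": "Linked Data"}))
# http://example.edu/id/bib/oclc/12345
```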

    PROV-N: The Provenance Notation

    To provide examples of the PROV data model, the PROV notation (PROV-N) is presented. PROV-N is aimed at human consumption and allows serializations of PROV instances to be created in a compact manner. It facilitates the mapping of the PROV data model to a concrete syntax, and it is used as the basis for a formal semantics of PROV. The purpose of this document is to define the PROV-N notation.
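    A small example of what the notation looks like, generated here with the third-party Python "prov" package (identifiers invented); the commented output is the compact, human-oriented PROV-N form this document defines.

```python
# Build a tiny provenance record and emit it in PROV-N.
# Uses the third-party "prov" package; identifiers are invented.
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace("ex", "http://example.org/")

doc.entity("ex:article")
doc.agent("ex:alice")
doc.wasAttributedTo("ex:article", "ex:alice")

print(doc.get_provn())
# document
#   prefix ex <http://example.org/>
#   entity(ex:article)
#   agent(ex:alice)
#   wasAttributedTo(ex:article, ex:alice)
# endDocument
```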

    Linked Research on the Decentralised Web

    This thesis is about research communication in the context of the Web. I analyse literature which reveals how researchers are making use of Web technologies for knowledge dissemination, as well as how individuals are disempowered by the centralisation of certain systems, such as academic publishing platforms and social media. I share my findings on the feasibility of a decentralised and interoperable information space where researchers can control their identifiers whilst fulfilling the core functions of scientific communication: registration, awareness, certification, and archiving.
    The contemporary research communication paradigm operates under a diverse set of sociotechnical constraints, which influence how units of research information and personal data are created and exchanged. Economic forces and non-interoperable system designs mean that researcher identifiers and research contributions are largely shaped and controlled by third-party entities; participation requires the use of proprietary systems. From a technical standpoint, this thesis takes a deep look at the semantic structure of research artifacts, and how they can be stored, linked, and shared in a way that is controlled by individual researchers, or delegated to trusted parties. Further, I find that the ecosystem was lacking a technical Web standard able to fulfill the awareness function of research communication. Thus, I contribute a new communication protocol, Linked Data Notifications (published as a W3C Recommendation), which enables decentralised notifications on the Web, and provide implementations pertinent to the academic publishing use case. So far we have seen decentralised notifications applied in research dissemination or collaboration scenarios, as well as for archival activities and scientific experiments. Another core contribution of this work is a Web standards-based implementation of a client-side tool, dokieli, for decentralised article publishing, annotations, and social interactions. dokieli can be used to fulfill the scholarly functions of registration, awareness, certification, and archiving, all in a decentralised manner, returning control of research contributions and discourse to individual researchers.
    The overarching conclusion of the thesis is that Web technologies can be used to create a fully functioning ecosystem for research communication. Using the framework of Web architecture, and loosely coupling the four functions, an accessible and inclusive ecosystem can be realised whereby users are able to use and switch between interoperable applications without interfering with existing data. Technical solutions alone do not suffice, of course, so this thesis also takes into account the need for a change in the traditional mode of thinking amongst scholars, and presents the Linked Research initiative as an ongoing effort toward researcher autonomy in a social system, and universal access to human- and machine-readable information. Outcomes of this outreach work so far include an increase in the number of individuals self-hosting their research artifacts, workshops publishing accessible proceedings on the Web, in-the-wild experiments with open and public peer-review, and semantic graphs of contributions to conference proceedings and journals (the Linked Open Research Cloud).
    Some of the future challenges include: addressing the social implications of decentralised Web publishing, as well as the design of ethically grounded interoperable mechanisms; cultivating privacy-aware information spaces; personal or community-controlled on-demand archiving services; and further design of decentralised applications that are aware of the core functions of scientific communication.
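    As a sketch of how Linked Data Notifications works in practice (invented URLs; the requests and rdflib libraries assumed; not dokieli's implementation): a sender discovers the receiver's inbox via the ldp:inbox relation, advertised either in an HTTP Link header or in the RDF body of the target resource, then POSTs a JSON-LD notification to that inbox.

```python
# Sketch of the two LDN steps: inbox discovery, then notification delivery.
# URLs are invented; assumes the requests and rdflib packages.
import requests
from rdflib import Graph, Namespace, URIRef

LDP = Namespace("http://www.w3.org/ns/ldp#")

def discover_inbox(target: str) -> str:
    resp = requests.get(target, headers={"Accept": "text/turtle"})
    # LDN allows inbox advertisement in an HTTP Link header...
    link = resp.links.get("http://www.w3.org/ns/ldp#inbox")
    if link:
        return link["url"]
    # ...or as an ldp:inbox triple in the RDF body of the target.
    g = Graph().parse(data=resp.text, format="turtle")
    return str(g.value(URIRef(target), LDP.inbox))

def send_notification(inbox: str, payload: dict) -> int:
    resp = requests.post(inbox, json=payload,
                         headers={"Content-Type": "application/ld+json"})
    return resp.status_code  # a conforming receiver answers 201 Created

inbox = discover_inbox("https://example.org/article")
status = send_notification(inbox, {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Announce",
    "actor": "https://example.org/alice",
    "object": "https://example.org/annotation/1",
})
```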