
    Critique of Architectures for Long-Term Digital Preservation

    Evolving technology and fading human memory threaten the long-term intelligibility of many kinds of documents. Furthermore, some records are susceptible to improper alterations that make them untrustworthy. Trusted Digital Repositories (TDRs) and Trustworthy Digital Objects (TDOs) seem to be the only broadly applicable digital preservation methodologies proposed. We argue that the TDR approach has shortfalls as a method for long-term digital preservation of sensitive information. Comparing the TDR and TDO methodologies suggests differentiating near-term preservation measures from what is needed for the long term. TDO methodology addresses these long-term needs, making digital documents durably intelligible. It uses EDP standards for a few file formats and XML structures for text documents. For other information formats, intelligibility is assured by using a virtual computer. To protect sensitive information, that is, content whose inappropriate alteration might mislead its readers, the integrity and authenticity of each TDO are made testable by embedded public-key cryptographic message digests and signatures. Key authenticity is protected recursively in a social hierarchy. The proper focus for long-term preservation technology is signed packages that each combine a record collection with its metadata and that also bind context: Trustworthy Digital Objects.
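
    As a rough illustration of the embedded integrity mechanism, the Python sketch below hashes a record collection together with its metadata and signs the resulting digest, so that any later reader holding the public key can test whether the package was altered. It uses the third-party cryptography package; all names are illustrative rather than the paper's actual TDO format, and the recursive key-certification hierarchy is omitted.

        import hashlib

        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        def package_digest(records: list[bytes], metadata: bytes) -> bytes:
            """Message digest binding a record collection to its metadata."""
            h = hashlib.sha256()
            for record in records:
                h.update(hashlib.sha256(record).digest())  # per-record digest
            h.update(metadata)
            return h.digest()

        # The certifying party signs the combined digest when the package is sealed.
        signing_key = Ed25519PrivateKey.generate()
        records = [b"scanned page 1", b"scanned page 2"]
        metadata = b'{"title": "example collection", "context": "..."}'
        signature = signing_key.sign(package_digest(records, metadata))

        # Any later reader with the public key can test integrity and authenticity.
        try:
            signing_key.public_key().verify(signature, package_digest(records, metadata))
            print("package intact")
        except InvalidSignature:
            print("package or metadata was altered")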

    The Semantic Grid: A future e-Science infrastructure

    e-Science offers a promising vision of how computer and communication technology can support and enhance the scientific process. It does this by enabling scientists to generate, analyse, share and discuss their insights, experiments and results in an effective manner. The underlying computer infrastructure that provides these facilities is commonly referred to as the Grid. At this time, there are a number of grid applications being developed and there is a whole raft of computer technologies that provide fragments of the necessary functionality. However, there is currently a major gap between these endeavours and the vision of e-Science in which there is a high degree of easy-to-use and seamless automation and in which there are flexible collaborations and computations on a global scale. To bridge this practice–aspiration divide, this paper presents a research agenda whose aim is to move from the current state of the art in e-Science infrastructure to the future infrastructure that is needed to support the full richness of the e-Science vision. Here the future e-Science research infrastructure is termed the Semantic Grid (the relationship of the Semantic Grid to the Grid is meant to mirror that of the Semantic Web to the Web). In particular, we present a conceptual architecture for the Semantic Grid. This architecture adopts a service-oriented perspective in which distinct stakeholders in the scientific process, represented as software agents, provide services to one another, under various service level agreements, in various forms of marketplace. We then focus predominantly on the issues concerned with the way that knowledge is acquired and used in such environments, since we believe this is the key differentiator between current grid endeavours and those envisioned for the Semantic Grid.
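
    As a minimal sketch of this service-oriented view (our own illustration in Python; the paper defines the architecture conceptually, and none of these names come from it), agents advertise services in a marketplace and consume them under negotiated service level agreements:

        from dataclasses import dataclass

        @dataclass
        class ServiceLevelAgreement:
            provider: str         # agent supplying the service
            consumer: str         # agent using the service
            service: str          # e.g. "protein-structure-analysis"
            max_latency_s: float  # an agreed quality-of-service bound
            cost_per_call: float

        class Marketplace:
            """Registry in which agents advertise services and negotiate SLAs."""

            def __init__(self) -> None:
                self.offers: dict[str, list[str]] = {}  # service name -> providers

            def advertise(self, provider: str, service: str) -> None:
                self.offers.setdefault(service, []).append(provider)

            def negotiate(self, consumer: str, service: str) -> ServiceLevelAgreement:
                provider = self.offers[service][0]  # deliberately trivial selection policy
                return ServiceLevelAgreement(provider, consumer, service,
                                             max_latency_s=1.0, cost_per_call=0.05)

        market = Marketplace()
        market.advertise("simulation-agent", "protein-folding")
        print(market.negotiate("analysis-agent", "protein-folding"))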

    Developing front-end Web 2.0 technologies to access services, content and things in the future Internet

    The future Internet is expected to be composed of a mesh of interoperable web services accessible from all over the web. This approach has not yet caught on since global user-service interaction is still an open issue. This paper presents one vision with regard to next-generation front-end Web 2.0 technology that will enable integrated access to services, content and things in the future Internet. In this paper, we illustrate how front-ends that wrap traditional services and resources can be tailored to the needs of end users, converting end users into prosumers (creators and consumers of service-based applications). To do this, we propose an architecture that end users without programming skills can use to create front-ends, consult catalogues of resources tailored to their needs, easily integrate and coordinate front-ends, and create composite applications to orchestrate services in their back-end. The paper includes a case study illustrating that current user-centred web development tools are at a very early stage of evolution. We provide statistical data on how the proposed architecture improves these tools. This paper is based on research conducted by the Service Front End (SFE) Open Alliance initiative.
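
    The following Python sketch illustrates the prosumer idea under our own simplified assumptions: front-end components wrap back-end services, and a composite application is assembled by wiring them together, the kind of wiring a catalogue-driven visual tool might generate for a user who never programs the services themselves.

        from typing import Callable

        class FrontEnd:
            """A user-facing component wrapping one back-end service."""

            def __init__(self, name: str, service: Callable[[str], str]) -> None:
                self.name = name
                self.service = service

            def run(self, user_input: str) -> str:
                return self.service(user_input)

        def compose(*front_ends: FrontEnd) -> FrontEnd:
            """Chain front-ends so each one's output feeds the next."""
            def pipeline(user_input: str) -> str:
                for fe in front_ends:
                    user_input = fe.run(user_input)
                return user_input
            return FrontEnd(" -> ".join(fe.name for fe in front_ends), pipeline)

        search = FrontEnd("search", lambda query: f"results for {query!r}")
        summarise = FrontEnd("summarise", lambda text: text.upper())
        composite = compose(search, summarise)  # composed by the user, not programmed
        print(composite.run("future internet"))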

    Standardisation of Provenance Systems in Service Oriented Architectures --- White Paper

    This White Paper presents provenance in computer systems as a mechanism by which business and e-science can undertake compliance validation and analysis of their past processes. We discuss an open approach that can bring benefits to application owners, IT providers, auditors and reviewers. In order to capitalise on such benefits, we make specific recommendations to move forward a standardisation activity in this domain.
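
    As a minimal sketch of provenance-based compliance validation (the data model here is our own placeholder, not the standard the White Paper calls for), actors document each step they perform in an append-only store that auditors can later query:

        from dataclasses import dataclass, field
        from datetime import datetime, timezone

        @dataclass
        class ProvenanceAssertion:
            actor: str      # who performed the step
            activity: str   # what was done
            inputs: list[str]
            outputs: list[str]
            recorded_at: datetime = field(
                default_factory=lambda: datetime.now(timezone.utc))

        class ProvenanceStore:
            """Append-only record that auditors and reviewers query after the fact."""

            def __init__(self) -> None:
                self._log: list[ProvenanceAssertion] = []

            def record(self, assertion: ProvenanceAssertion) -> None:
                self._log.append(assertion)

            def history_of(self, artifact: str) -> list[ProvenanceAssertion]:
                """Every documented step that produced or consumed an artifact."""
                return [a for a in self._log
                        if artifact in a.inputs or artifact in a.outputs]

        store = ProvenanceStore()
        store.record(ProvenanceAssertion("lab-service", "run-assay",
                                         inputs=["sample-42"], outputs=["result-7"]))
        print(store.history_of("result-7"))  # the auditor's view of the past process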

    B2B Infrastructures in the Process of Drug Discovery and Healthcare

    In this paper we describe a demonstration of an innovative B2B infrastructure that can be used to support collaborations in the pharmaceutical industry towards the goal of drug discovery. Based on experience gained in a wide range of collaborative projects in the areas of grid technology, semantics and data management, we outline future work and new topics in B2B infrastructures that arise when the use of patient records in drug discovery and in healthcare applications is considered.

    LO-FAT: Low-Overhead Control Flow ATtestation in Hardware

    Attacks targeting software on embedded systems are becoming increasingly prevalent. Remote attestation is a mechanism that allows establishing trust in embedded devices. However, existing attestation schemes are either static and cannot detect control-flow attacks, or they require software instrumentation that incurs high performance overheads. To overcome these limitations, we present LO-FAT, the first practical hardware-based approach to control-flow attestation. By leveraging existing processor hardware features and commonly used IP blocks, our approach enables efficient control-flow attestation without requiring software instrumentation. We show that our proof-of-concept implementation based on a RISC-V SoC incurs no processor stalls and requires reasonable area overhead. (Authors' pre-print version, to appear in the DAC 2017 proceedings.)
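
    At its core, control-flow attestation has the prover accumulate a running hash over the control-flow path actually executed and report it to a verifier, who compares it against the measurement expected for a legitimate path. The toy Python model below illustrates only this principle; LO-FAT itself performs the measurement in hardware, in parallel with the RISC-V pipeline.

        import hashlib

        def measure_path(branch_targets: list[int]) -> bytes:
            """Running hash over the executed control-flow path (branch targets)."""
            digest = hashlib.sha256()
            for target in branch_targets:
                digest.update(target.to_bytes(8, "little"))
            return digest.digest()

        # Verifier's reference measurement for a known-legitimate execution.
        legitimate_path = [0x1000, 0x1040, 0x1080, 0x10C0]
        expected = measure_path(legitimate_path)

        # A control-flow attack diverts execution to an unexpected address.
        hijacked_path = [0x1000, 0x1040, 0xDEAD, 0x10C0]

        assert measure_path(legitimate_path) == expected  # attestation passes
        assert measure_path(hijacked_path) != expected    # hijack is detected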

    Architecture for Provenance Systems

    This document covers the logical and process architectures of provenance systems. The logical architecture identifies key roles and their interactions, whereas the process architecture discusses distribution and security. A fundamental aspect of our presentation is its technology-independent nature, which makes it reusable: the principles that are exposed in this document may be applied to different technologies.
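
    A small sketch of what the technology-independent role separation can look like in practice (Python here, but the point is that the roles are interfaces any concrete technology may realise; the names are ours, not the document's vocabulary):

        from typing import Protocol

        class ProvenanceStore(Protocol):
            """Role: persists assertions; the backing technology is left open."""
            def record(self, assertion: dict) -> None: ...

        class InMemoryStore:
            """One possible realisation of the store role."""
            def __init__(self) -> None:
                self.log: list[dict] = []

            def record(self, assertion: dict) -> None:
                self.log.append(assertion)

        class Actor:
            """Role: performs process steps and documents them in some store."""
            def __init__(self, name: str, store: ProvenanceStore) -> None:
                self.name, self.store = name, store

            def perform(self, activity: str) -> None:
                # ... carry out the activity, then document the interaction
                self.store.record({"actor": self.name, "activity": activity})

        actor = Actor("experiment-service", InMemoryStore())
        actor.perform("run-simulation")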