
    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems so as to better understand their goals and methodology, which helps evaluate their applicability to similar problems. The taxonomy also provides a "gap analysis" of the area, through which researchers can identify new issues for investigation. Finally, we hope that the proposed taxonomy and mapping provide an easy way for new practitioners to understand this complex area of research.
    Comment: 46 pages, 16 figures, Technical Report
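
    As a minimal illustration of one leaf of such a taxonomy, the sketch below selects among replicas of a dataset using a greedy bandwidth heuristic; all names and figures are hypothetical, and real Data Grid brokers weigh latency, load and policy as well.

        # Hypothetical replica selection: prefer the copy reachable at the
        # highest advertised bandwidth (illustrative values only).
        from dataclasses import dataclass

        @dataclass
        class Replica:
            site: str              # storage site holding a copy
            bandwidth_mbps: float  # advertised transfer rate from this site

        def select_replica(replicas):
            return max(replicas, key=lambda r: r.bandwidth_mbps)

        print(select_replica([Replica("site-a", 120.0),
                              Replica("site-b", 450.0)]).site)  # -> site-b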

    Maintaining consistency in distributed systems

    In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs and is controlled using mutual exclusion constructs such as semaphores and monitors. Where data is persistent and/or sets of operations are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group-oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems, often within the same application. This leads us to propose an integrated approach that permits applications to combine virtual synchrony with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.
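
    A single-process analogue of the linearizability style discussed above, sketched under the assumption that mutual exclusion around every operation is enough to make operations appear atomic in some total order; distributed linearizability and virtual synchrony additionally require replication protocols not shown here.

        import threading

        class Counter:
            # Mutual exclusion makes each operation appear to take effect
            # atomically, giving linearizable behaviour within one process.
            def __init__(self):
                self._lock = threading.Lock()
                self._value = 0

            def increment(self):
                with self._lock:
                    self._value += 1

            def read(self):
                with self._lock:
                    return self._value

        c = Counter()
        workers = [threading.Thread(target=c.increment) for _ in range(4)]
        for w in workers: w.start()
        for w in workers: w.join()
        print(c.read())  # -> 4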

    Enabling the Autonomic Management of Federated Identity Providers

    The autonomic management of federated authorization infrastructures (federations) is seen as a means for improving the monitoring and use of a service provider’s resources. However, federations comprise independent management domains with varying scopes of control and data ownership. The focus of this paper is the autonomic management of federated identity providers by service providers located in other domains, when the identity providers have been diagnosed as the source of abuse. In particular, we describe how an autonomic controller, external to the domain of the identity provider, exercises control over the issuing of privilege attributes. The paper presents a conceptual design and implementation of an effector for an identity provider that is capable of enabling cross-domain autonomic management. The implementation of an effector for a SimpleSAMLphp identity provider is evaluated by demonstrating how an autonomic controller, together with the effector, is capable of responding to malicious abuse.
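
    The paper's effector wraps SimpleSAMLphp, which is a PHP code base; purely as an interface sketch, the hypothetical Python service below shows the general shape of such an effector: an external autonomic controller POSTs a management action, and the effector translates it into local identity-provider state. The endpoint path and payload are invented for illustration.

        from http.server import BaseHTTPRequestHandler, HTTPServer
        import json

        SUSPENDED = set()  # users whose privilege attributes are withheld

        class EffectorHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                # Hypothetical action: stop issuing privilege attributes
                # for a user the controller has diagnosed as abusive.
                length = int(self.headers["Content-Length"])
                body = json.loads(self.rfile.read(length))
                if self.path == "/effector/suspend-attributes":
                    SUSPENDED.add(body["user"])
                    self.send_response(200)
                else:
                    self.send_response(404)
                self.end_headers()

        HTTPServer(("localhost", 8443), EffectorHandler).serve_forever()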

    BlogForever D3.2: Interoperability Prospects

    This report evaluates the interoperability prospects of the BlogForever platform. To this end, existing interoperability models are reviewed; a Delphi study is conducted to identify crucial aspects of interoperability for web archives and digital libraries; technical interoperability standards and protocols are assessed for their relevance to BlogForever; a simple approach for considering interoperability in specific usage scenarios is proposed; and a tangible approach is presented for developing a succession plan that would allow a reliable transfer of content from the current digital archive to other digital repositories.
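
    One concrete protocol that reviews of repository interoperability standards typically cover is OAI-PMH; as a sketch, the request below harvests Dublin Core records from an archive. The verb and metadataPrefix are standard protocol parameters, while the base URL is hypothetical.

        # Minimal OAI-PMH ListRecords harvest (endpoint URL invented).
        from urllib.request import urlopen
        from urllib.parse import urlencode

        params = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
        with urlopen("https://archive.example.org/oai?" + params) as resp:
            print(resp.read()[:200])  # XML envelope containing the records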

    Rethinking De-Perimeterisation: Problem Analysis And Solutions

    For businesses, the traditional security approach is the hard-shell model: an organisation secures all its assets using a fixed security border, trusting the inside and distrusting the outside. However, as technologies and business processes change, this model loses its attractiveness. In a networked world, “inside” and “outside” can no longer be clearly distinguished. The Jericho Forum, an industry consortium within the Open Group, coined the term de-perimeterisation for this process and suggested an approach aimed at securing data rather than complete systems and infrastructures. We do not question the reality of de-perimeterisation; we believe, however, that both the existing analysis of the problem and the proposed solutions fall short. First, there is no linear process of blurring boundaries in which security mechanisms are placed at lower and lower levels until they only surround data. On the contrary, we observe a cyclic process of connecting and disconnecting systems: as conditions change, the basic trade-off between accountability and business opportunities is made, and should be made, anew each time. Moreover, data-level security has several limitations to begin with, and there is considerable potential for solving security problems differently: by rearranging the responsibilities between businesses and individuals. The results of this analysis can be useful for security professionals who need to trade off different security mechanisms for their organisations and their information systems.
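
    As an illustration of what securing the data rather than the perimeter can mean, the sketch below encrypts a document so that its protection travels with it. It assumes the third-party Python package "cryptography" and is a sketch of the general idea, not the Jericho Forum's prescribed mechanism.

        # Data-level protection: the ciphertext, not a network perimeter,
        # carries the security; only holders of the key can read it.
        from cryptography.fernet import Fernet

        key = Fernet.generate_key()                        # shared with authorised parties only
        token = Fernet(key).encrypt(b"quarterly figures")  # safe to store or send anywhere
        print(Fernet(key).decrypt(token))                  # -> b'quarterly figures'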