3,407 research outputs found

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems in order to better understand their goals and methodology, which helps evaluate their applicability to similar problems. The taxonomy also provides a "gap analysis" of the area, through which researchers can identify new issues for investigation. We hope the proposed taxonomy and mapping also give new practitioners an easy way to understand this complex area of research. Comment: 46 pages, 16 figures, Technical Report
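
    The mapping exercise described in this abstract can be pictured as a small data structure. Below is a minimal, hypothetical sketch (the taxonomy dimension names, the example system and its attribute values are illustrative assumptions, not taken from the paper) of how a Data Grid system might be profiled against such a taxonomy and how a simple "gap analysis" could fall out of it:

```python
from dataclasses import dataclass

# Hypothetical, simplified taxonomy dimensions inspired by the abstract's
# four areas: architecture, data transport, data replication, and scheduling.
@dataclass
class DataGridProfile:
    name: str
    architecture: str   # e.g. "hierarchical", "federated"
    transport: str      # e.g. "parallel bulk transfer"
    replication: str    # e.g. "static", "dynamic"
    scheduling: str     # e.g. "data-aware", "compute-centric"

# Mapping a (hypothetical) system onto the taxonomy, analogous to how the
# paper maps real Data Grid projects to validate its categories.
example = DataGridProfile(
    name="ExampleGrid",
    architecture="hierarchical",
    transport="parallel bulk transfer",
    replication="dynamic",
    scheduling="data-aware",
)

def gap_analysis(profiles, dimension, known_values):
    """Return taxonomy values not covered by any profiled system --
    a toy version of the 'gap analysis' idea mentioned in the abstract."""
    covered = {getattr(p, dimension) for p in profiles}
    return sorted(set(known_values) - covered)

if __name__ == "__main__":
    # No system in our toy list uses static replication -> ['static']
    print(gap_analysis([example], "replication", ["static", "dynamic"]))
```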

    Basis Token Consistency: A Practical Mechanism for Strong Web Cache Consistency

    With web caching and cache-related services like CDNs and edge services playing an increasingly significant role in the modern internet, the weak consistency and coherence provisions of current web protocols are becoming an increasingly significant problem and are drawing the attention of the standards community [LCD01]. Toward this end, we present definitions of consistency and coherence for web-like environments, that is, distributed client-server information systems where the semantics of interactions with resources are more general than the read/write operations found in memory hierarchies and distributed file systems. We then briefly review proposed mechanisms for strengthening the consistency of caches in the web, focusing on their conceptual contributions and their weaknesses in real-world practice. These insights motivate a new mechanism, which we call "Basis Token Consistency" (BTC); implemented at the server, this mechanism allows any client, independent of the presence and conformity of any intermediaries, to maintain a self-consistent view of the server's state. This is accomplished by annotating responses with additional per-resource application information that allows client caches to recognize the obsolescence of currently cached entities and to identify responses from other caches that are already stale in light of what has already been seen. The mechanism requires no deviation from the existing client-server communication model and does not require servers to maintain any additional per-client state. We discuss how the mechanism could be integrated into a fragment-assembling Content Management System (CMS), and present a simulation-driven performance comparison between the BTC algorithm and the Time-To-Live (TTL) heuristic. National Science Foundation (ANI-9986397, ANI-0095988)
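
    To make the client-side idea concrete, here is a minimal, hypothetical sketch of the bookkeeping the abstract describes: responses are annotated with per-resource tokens, and a client cache refuses to store or serve anything whose tokens are older than the newest tokens it has already seen. The integer version numbers and the class API are illustrative assumptions, not the actual BTC wire format:

```python
class BasisTokenCache:
    """Toy client cache keyed by URL; each entry remembers the basis tokens
    (per-resource version numbers) its response was annotated with."""

    def __init__(self):
        self.entries = {}        # url -> (body, {resource_id: version})
        self.newest_seen = {}    # resource_id -> highest version observed

    def _consistent(self, tokens):
        # Usable only if no token is older than something already observed.
        return all(v >= self.newest_seen.get(r, v) for r, v in tokens.items())

    def observe(self, url, body, tokens):
        """Record a response together with its basis tokens."""
        for resource, version in tokens.items():
            if version > self.newest_seen.get(resource, -1):
                self.newest_seen[resource] = version
        if self._consistent(tokens):
            self.entries[url] = (body, tokens)
        else:
            # The response (e.g. from an intermediary cache) is already stale
            # in light of what this client has seen; do not keep it.
            self.entries.pop(url, None)

    def get(self, url):
        """Return a cached body only if it is still self-consistent."""
        entry = self.entries.get(url)
        if entry and self._consistent(entry[1]):
            return entry[0]
        self.entries.pop(url, None)   # obsolete: newer tokens have been seen
        return None


if __name__ == "__main__":
    cache = BasisTokenCache()
    cache.observe("/page", "page v2", {"article:42": 2})
    cache.observe("/fragment", "stale fragment", {"article:42": 1})
    assert cache.get("/page") == "page v2"
    assert cache.get("/fragment") is None   # rejected: behind the server
```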

    A review of cyber vigilance tasks for network defense

    The need to sustain attention to virtual threat landscapes has led cyber security to emerge as a novel domain for vigilance research. However, unlike classic domains such as driving, air traffic control and baggage security, very few vigilance tasks exist for the cyber security domain. This review of the platforms that exist in the literature extracts four essential challenges that must be overcome in the development of a modern, validated cyber vigilance task. Firstly, it can be difficult for researchers to access confidential cyber security systems and personnel. Secondly, network defense is vastly more complex and difficult to emulate than classic vigilance domains such as driving. Thirdly, there exists no single, common software console in cyber security on which a cyber vigilance task could be based. Finally, the rapid pace of technological evolution in network defense means that cyber vigilance tasks can become obsolete just as quickly. Understanding these challenges is imperative for advancing human factors research in cyber security.

    Proceedings of the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications

    The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, held July 23 through 25, 1991 at the NASA/Goddard Space Flight Center, are presented. The program included a keynote address, invited technical papers, and selected technical presentations, providing a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

    Managing IT Operations in a Cloud-driven Enterprise: Case Studies

    Enterprise IT needs a new approach to managing processes, applications and infrastructure that are distributed across a mix of environments. Traditionally, a request to deliver an application to the business could take weeks or months because of the decision-making functions, multiple approval bodies and processes that exist within IT departments. These delays in delivering a requested service can lead to dissatisfaction, with the result that the line-of-business group may seek alternative sources of IT capabilities. The complex IT infrastructure of these enterprises also cannot keep up with the demand for new applications and services from an increasingly dispersed and mobile workforce, which results in slower rollout of critical applications and services, limited resources, and poor operational visibility and control. In such scenarios it is better to adopt cloud services for new application deployment; otherwise, most Enterprise IT organizations face the risk of losing 'market share' to the public cloud. Using a cloud model, organizations can increase ROI, lower TCO and operate with seamless IT operations. It also helps counter shadow IT and the practice of over- or under-provisioning resources. In this research paper we present two case studies in which we migrated two Enterprise IT applications to public clouds to lower TCO and raise ROI. By migrating, the IT organizations improved IT agility and gained enterprise-class software for performance, security and control. We also discuss the advantages and challenges of adopting cloud services.
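
    As a back-of-the-envelope illustration of the TCO/ROI comparison that drives such a migration decision, here is a small sketch; every figure below is a made-up placeholder, not a number from the case studies:

```python
# Purely hypothetical, illustrative cost assumptions (USD per year).
YEARS = 3
on_prem = {"hardware": 40_000, "licenses": 25_000, "ops_staff": 60_000}
cloud   = {"subscription": 55_000, "ops_staff": 30_000}
migration_cost = 20_000  # one-time migration spend

# Total cost of ownership over the comparison period.
tco_on_prem = YEARS * sum(on_prem.values())          # 375,000
tco_cloud = YEARS * sum(cloud.values()) + migration_cost  # 275,000

savings = tco_on_prem - tco_cloud                    # 100,000
roi = savings / migration_cost                       # return per migration dollar

print(f"3-year on-prem TCO: {tco_on_prem:,}")
print(f"3-year cloud TCO:   {tco_cloud:,}")
print(f"Savings: {savings:,}  ROI on migration spend: {roi:.1f}x")
```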

    2012 PWST Workshop Summary

    No abstract available.

    Proceedings of the 12th International Conference on Digital Preservation

    The 12th International Conference on Digital Preservation (iPRES) was held on November 2-6, 2015 in Chapel Hill, North Carolina, USA. There were 327 delegates from 22 countries. The program included 12 long papers, 15 short papers, 33 posters, 3 demos, 6 workshops, 3 tutorials and 5 panels, as well as several interactive sessions and a Digital Preservation Showcase.