
    Low cost management of replicated data in fault-tolerant distributed systems

    Many distributed systems replicate data for fault tolerance or availability. In such systems, a logical update on a data item results in a physical update on a number of copies. The synchronization and communication required to keep the copies of replicated data consistent introduce a delay when operations are performed. A technique is described that relaxes the usual degree of synchronization, permitting replicated data items to be updated concurrently with other operations, while at the same time ensuring that correctness is not violated. The additional concurrency thus obtained results in better response time when performing operations on replicated data. How this technique performs in conjunction with a roll-back and a roll-forward failure recovery mechanism is also discussed.
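The core idea of the abstract above — answering a logical update as soon as the local copy is written and propagating physical updates to the other copies asynchronously — can be sketched as follows. This is an illustrative sketch under assumed names (`Primary`, `Replica`), not the paper's actual algorithm; version numbers stand in for whatever ordering mechanism preserves correctness.

```python
# Illustrative sketch: a primary applies a logical update locally and
# propagates physical updates to copies asynchronously, so the caller
# is not delayed by synchronization with every replica.
from collections import deque

class Replica:
    def __init__(self):
        self.value = 0
        self.version = 0

    def apply(self, value, version):
        # Apply only newer updates, so a late-arriving message
        # cannot roll the copy backwards.
        if version > self.version:
            self.value, self.version = value, version

class Primary:
    def __init__(self, peers):
        self.replica = Replica()   # local copy
        self.peers = peers         # remote copies
        self.pending = deque()     # updates awaiting propagation

    def update(self, value):
        # The logical update returns as soon as the local copy is written.
        self.replica.apply(value, self.replica.version + 1)
        self.pending.append((value, self.replica.version))

    def propagate(self):
        # Would run in the background in a real system; drained on demand here.
        while self.pending:
            value, version = self.pending.popleft()
            for peer in self.peers:
                peer.apply(value, version)

peers = [Replica(), Replica()]
primary = Primary(peers)
primary.update(7)    # returns immediately
primary.update(9)    # concurrent with propagation of the first update
primary.propagate()  # copies converge later
```

The version check in `apply` is what keeps the relaxed synchronization from violating correctness in this sketch: copies may lag, but they never regress.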

    An architecture of internet based data processing based on multicast and anycast protocols

    Most current web-based application systems suffer from poor performance and costly heterogeneous access. Distributed or replicated strategies can alleviate the problem to some degree, but such models still face issues such as data synchronization and load balancing. In this paper, we propose a novel architecture for Internet-based data processing based on multicast and anycast protocols. The proposed architecture breaks the functionality of an existing data processing system, in particular the database functionality, into several agents. These agents communicate with each other using multicast and anycast mechanisms. We show that the proposed architecture provides better scalability, robustness, automatic load balancing, and performance than current distributed architectures for Internet-based data processing.
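The two delivery modes the architecture relies on can be illustrated with a small simulation: multicast delivers a message to every agent in a group, while anycast routes it to a single "best" member. The agent names and the least-loaded selection metric below are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of multicast vs. anycast delivery among agents.
class Agent:
    def __init__(self, name):
        self.name = name
        self.load = 0
        self.inbox = []

    def deliver(self, msg):
        self.inbox.append(msg)
        self.load += 1

def multicast(group, msg):
    # Every member receives the message (e.g., a synchronization notice).
    for agent in group:
        agent.deliver(msg)

def anycast(group, msg):
    # Exactly one member receives it; routing to the least-loaded agent
    # yields the automatic load balancing described above.
    target = min(group, key=lambda a: a.load)
    target.deliver(msg)
    return target

db_agents = [Agent("db1"), Agent("db2"), Agent("db3")]
anycast(db_agents, "query-1")  # goes to one agent
anycast(db_agents, "query-2")  # goes to a different, less-loaded agent
multicast(db_agents, "sync")   # goes to all agents
```

In a real deployment the anycast decision is made by the network rather than by application code, but the effect — queries spread across equivalent agents, synchronization reaching all of them — is the same.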

    Asynchronous replication of metadata across multi-master servers in distributed data storage systems

    In recent years, scientific applications have become increasingly data intensive. The growth in the size of data generated by scientific applications necessitates collaboration and data sharing among the nation's education and research institutions. To address this, distributed storage systems spanning multiple institutions over wide area networks have been developed. One important feature of distributed storage systems is a global unified namespace across all participating institutions, which enables easy data sharing without knowledge of the actual physical location of the data. This feature depends on the "location metadata" of all data sets in the system being available to all participating institutions, which introduces new challenges. In this thesis, we study different metadata server layouts in terms of high availability, scalability, and performance. A central metadata server is a single point of failure, leading to low availability. Ensuring high availability requires replication of metadata servers. A synchronously replicated metadata server layout introduces synchronization overhead that degrades the performance of data operations. We propose an asynchronously replicated multi-master metadata server layout that ensures high availability and scalability and provides better performance. We discuss the implications of asynchronously replicated multi-master metadata servers for metadata consistency and conflict resolution. Further, we design and implement our own asynchronous multi-master replication tool, deploy it in the state-wide distributed data storage system PetaShare, and compare the performance of all three metadata server layouts: a central metadata server, synchronously replicated multi-master metadata servers, and asynchronously replicated multi-master metadata servers.
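A minimal sketch of the asynchronous multi-master scheme described above: each master accepts metadata writes locally and ships its update log to peers later. Conflicting writes to the same path are resolved here by last-writer-wins on a timestamp, which is one plausible resolution policy; the thesis's actual conflict-resolution rules may differ, and the server names and paths are illustrative.

```python
# Sketch: asynchronous multi-master metadata replication with a
# last-writer-wins conflict rule (an assumed policy, for illustration).
class MetadataServer:
    def __init__(self, name):
        self.name = name
        self.store = {}  # path -> (timestamp, physical location)
        self.log = []    # local updates not yet shipped to peers

    def put(self, path, location, ts):
        # Masters accept writes locally and replicate them later,
        # so the client is not delayed by remote synchronization.
        self.store[path] = (ts, location)
        self.log.append((path, ts, location))

    def sync_to(self, peer):
        # Asynchronous shipment of the local log; a conflicting update
        # to the same path is resolved by keeping the newer timestamp.
        for path, ts, location in self.log:
            current = peer.store.get(path)
            if current is None or ts > current[0]:
                peer.store[path] = (ts, location)
        self.log.clear()

a = MetadataServer("site-a")
b = MetadataServer("site-b")
a.put("/data/run1", "site-a:/disk3", ts=1)
b.put("/data/run1", "site-b:/disk7", ts=2)  # concurrent conflicting write
a.sync_to(b)
b.sync_to(a)
# Both masters now agree on the newer location entry.
```

The trade-off this sketch makes visible is exactly the one the thesis weighs: writes return fast, but the two masters briefly disagree until the logs are exchanged.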

    Consistency types for replicated data in a higher-order distributed programming language

    Distributed systems address the increasing demand for fast access to resources and fault tolerance for data. However, due to scalability requirements, software developers need to trade consistency for performance. For certain data, consistency guarantees may be weakened if application correctness is unaffected. In contrast, data flow from data with weak consistency guarantees to data with strong consistency requirements is problematic, since application correctness may be broken. In this paper, we propose lattice-based consistency types for replicated data (CTRD), a higher-order static consistency-typed language with replicated data types. The type system of CTRD supports data shared among multiple clients, and statically enforces noninterference between data types with weaker consistency and data types with stronger consistency. The language can be applied to many distributed applications and guarantees that updates of weakly-consistent data can never affect strongly-consistent data. We also extend the basic CTRD with an optimization that reduces the synchronization needed to generate reference graphs.
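The noninterference property described above can be illustrated with a small dynamic check: each replicated value carries a consistency label, and information may flow from stronger to weaker labels but never the other way. Note that CTRD enforces this statically in its type system; the runtime check below (with assumed names `Replicated`, `assign`) is only a sketch of the flow rule itself.

```python
# Sketch of the weak-to-strong noninterference rule, checked dynamically
# for illustration (CTRD checks it at compile time).
WEAK, STRONG = 0, 1

class Replicated:
    def __init__(self, value, level):
        self.value = value
        self.level = level  # consistency label: WEAK or STRONG

def assign(target, source):
    # Information may flow from strong to weak, never weak to strong,
    # so weakly-consistent updates can never affect strong data.
    if source.level < target.level:
        raise TypeError("cannot flow weakly-consistent data "
                        "into strongly-consistent data")
    target.value = source.value

counter = Replicated(0, WEAK)      # e.g., an eventually-consistent counter
balance = Replicated(100, STRONG)  # e.g., a linearizable account balance

assign(counter, balance)           # allowed: strong -> weak
rejected = False
try:
    assign(balance, counter)       # rejected: weak -> strong
except TypeError:
    rejected = True
```

Lifting this check into a type system, as the paper does, turns the runtime error into a compile-time one, so a forbidden flow can never ship.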

    Object level physics data replication in the Grid

    To support distributed physics analysis at the scale foreseen by the LHC experiments, 'Grid' systems are needed that manage and streamline data distribution, replication, and synchronization. We report on the development of a tool that allows large physics datasets to be managed and replicated at the granularity of single objects. Efficient and convenient support for data extraction and replication at the level of individual objects and events will enable types of interactive data analysis that would be too inconvenient or costly to perform with tools that work at the file level only. Our tool development effort is intended both as a demonstrator project for various types of existing Grid technology and as a research effort to develop Grid technology further. The basic use case supported by our tool is one in which a physicist repeatedly selects some physics objects located at a central repository and replicates them to a local site. The selection can be done using 'tag' or 'ntuple' analysis at the local site. The tool replicates the selected objects and merges all replicated objects into a single coherent 'virtual' dataset. This allows all objects to be used together seamlessly, even if they were replicated at different times or from different locations. The version of the tool reported on in this paper replicates ORCA-based physics data created by CMS in its ongoing high-level trigger design studies. The basic capabilities and limitations of the tool are discussed, together with some performance results. Some tool internals are also presented. Finally, we report on experiences so far and on future plans.
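The basic use case above — repeated object-level selections from a central repository, merged into one coherent local dataset — can be sketched as follows. The repository contents, object ids, and id-based merge are illustrative assumptions; the actual tool operates on ORCA object stores rather than Python dictionaries.

```python
# Sketch: repeated object-level replication merged into a 'virtual' dataset.
central = {1: "event-a", 2: "event-b", 3: "event-c", 4: "event-d"}

class VirtualDataset:
    def __init__(self):
        self.objects = {}

    def replicate(self, repository, selected_ids):
        # Copy only the selected objects; an object replicated in an
        # earlier selection is reused rather than copied again, so
        # replicas from different runs merge seamlessly.
        for oid in selected_ids:
            if oid not in self.objects:
                self.objects[oid] = repository[oid]

local = VirtualDataset()
local.replicate(central, [1, 3])  # first tag-based selection
local.replicate(central, [2, 3])  # later selection; object 3 not re-copied
```

The merge-by-identity step is what distinguishes this from file-level replication: overlapping selections cost only the objects not yet present locally.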