22,220 research outputs found

    Optimistic Replication and Resolution

    Data replication places physical copies of a shared logical item onto different sites. Optimistic replication (OR) allows a program at some site to read or update the local replica at any time. An update is tentative because it may conflict with a remote update. Such conflicts are resolved after the fact, in the background. Replicas may diverge occasionally but are expected to converge eventually.
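
    To make the idea concrete, here is a minimal sketch, not taken from the paper: the Replica class and its last-writer-wins rule are illustrative assumptions, showing tentative local updates that converge after background merging.

        import time

        class Replica:
            """One site's local copy of a shared logical item. Reads and writes
            always succeed locally; divergence is repaired in the background."""

            def __init__(self, site_id):
                self.site_id = site_id
                self.value = None
                self.stamp = (0.0, site_id)   # (wall clock, site id) breaks ties

            def write(self, value):
                # Tentative update: applied immediately, but it may later lose
                # to a concurrent remote update during reconciliation.
                self.stamp = (time.time(), self.site_id)
                self.value = value

            def merge(self, other):
                # Background resolution: deterministic last-writer-wins.
                if other.stamp > self.stamp:
                    self.value, self.stamp = other.value, other.stamp

        a, b = Replica("A"), Replica("B")
        a.write("x"); b.write("y")      # concurrent tentative updates diverge
        a.merge(b); b.merge(a)          # ...and converge after the exchange
        assert a.value == b.value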

    Survey of data replication in P2P systems

    Large-scale distributed collaborative applications are becoming common as a result of rapid progress in distributed technologies (grid, peer-to-peer, and mobile computing). Peer-to-peer (P2P) systems are particularly interesting for collaborative applications as they can scale without the need for powerful servers. In P2P systems, data storage and processing are distributed across autonomous peers, which can join and leave the network at any time. To provide high data availability in spite of such dynamic behavior, P2P systems rely on data replication. Some replication approaches assume static, read-only data (e.g. music files). Other solutions deal with updates, but they simplify replica management by assuming no update conflicts or single-master replication (i.e. only one copy of the replicated data accepts write operations). Advanced P2P applications, which must deal with semantically rich data (e.g. XML documents, relational tables, etc.) using a high-level SQL-like query language, are likely to need more sophisticated capabilities such as multi-master replication (i.e. all replicas accept write operations) and update conflict resolution. These issues are addressed by optimistic replication. Optimistic replication allows asynchronous updating of replicas, so that applications can make progress even though some nodes are disconnected or have failed. As a result, users can collaborate asynchronously. However, concurrent updates may cause replica divergence and conflicts, which should be reconciled. In this survey, we present an overview of data replication, focusing on the optimistic approach, which provides good properties for dynamic environments. We also introduce P2P systems and the replication solutions they implement. In particular, we show that current P2P systems do not provide eventual consistency among replicas in the presence of updates, apart from the APPA system, a P2P data management system that we are building.
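
    As a sketch of the conflict detection that multi-master replication requires, the version-vector comparison below is a standard technique rather than a construct from the survey; all names are hypothetical.

        def dominates(va, vb):
            """True if version vector va reflects every update counted in vb."""
            return all(va.get(site, 0) >= n for site, n in vb.items())

        def compare(va, vb):
            a_ge_b, b_ge_a = dominates(va, vb), dominates(vb, va)
            if a_ge_b and b_ge_a:
                return "equal"
            if a_ge_b:
                return "a is newer"
            if b_ge_a:
                return "b is newer"
            return "concurrent"  # true conflict: needs application reconciliation

        # Two masters accepted writes independently: neither history
        # contains the other, so the updates are concurrent.
        print(compare({"p1": 2, "p2": 0}, {"p1": 1, "p2": 3}))  # -> concurrent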

    Effects of baryons on weak lensing peak statistics

    Upcoming weak-lensing surveys have the potential to become leading cosmological probes, provided all systematic effects are under control. Recently, the ejection of gas due to feedback energy from active galactic nuclei (AGN) has been identified as a major source of uncertainty, challenging the success of future weak-lensing probes in terms of cosmology. In this paper we investigate the effects of baryons on the number of weak-lensing peaks in the convergence field. Our analysis is based on full-sky convergence maps constructed via light-cones from N-body simulations, and we rely on the baryonic correction model of Schneider et al. (2019) to model the baryonic effects on the density field. As a result, we find that the baryonic effects strongly depend on the Gaussian smoothing applied to the convergence map. For a DES-like survey setup, a smoothing of θ_k ≳ 8 arcmin is sufficient to keep the baryon signal below the expected statistical error. Smaller smoothing scales lead to a significant suppression of high peaks (with signal-to-noise above 2), while lower peaks are not affected. The situation is more severe for a Euclid-like setup, where a smoothing of θ_k ≳ 16 arcmin is required to keep the baryonic suppression signal below the statistical error. Smaller smoothing scales require a full modelling of baryonic effects, since both low and high peaks are strongly affected by baryonic feedback. Comment: 22 pages, 11 figures, accepted by JCAP.
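
    For intuition only, here is a sketch of the kind of Gaussian smoothing and peak counting the abstract describes, applied to a synthetic flat-sky map; the pixel scale, the use of scipy, and the reading of θ_k as the Gaussian width are assumptions, not the paper's pipeline.

        import numpy as np
        from scipy.ndimage import gaussian_filter, maximum_filter

        # Hypothetical flat-sky stand-in for a convergence map, 1 arcmin/pixel.
        pixel_arcmin = 1.0
        kappa = np.random.normal(0.0, 0.02, size=(512, 512))

        # Smooth at theta_k = 8 arcmin (the DES-like scale quoted above),
        # treating theta_k as the Gaussian sigma; the paper may define it
        # differently (e.g. as a FWHM).
        theta_k = 8.0
        kappa_s = gaussian_filter(kappa, sigma=theta_k / pixel_arcmin)

        # Count "high peaks": local maxima of the smoothed map with S/N > 2.
        is_peak = kappa_s == maximum_filter(kappa_s, size=3)
        snr = kappa_s / kappa_s.std()
        print("high peaks:", int(np.sum(is_peak & (snr > 2))))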

    MDCC: Multi-Data Center Consistency

    Replicating data across multiple data centers not only allows moving the data closer to the user, thus reducing latency for applications, but also increases availability in the event of a data center failure. It is therefore not surprising that companies like Google, Yahoo, and Netflix already replicate user data across geographically different regions. However, replication across data centers is expensive. Inter-data center network delays are in the hundreds of milliseconds and vary significantly. Synchronous wide-area replication is therefore considered infeasible with strong consistency, and current solutions either settle for asynchronous replication, which implies the risk of losing data in the event of failures, restrict consistency to small partitions, or give up consistency entirely. With MDCC (Multi-Data Center Consistency), we describe the first optimistic commit protocol that requires neither a master nor partitioning and is strongly consistent at a cost similar to eventually consistent protocols. MDCC can commit transactions in a single round trip across data centers in the normal operational case. We further propose a new programming model which empowers the application developer to handle the longer and unpredictable latencies caused by inter-data center communication. Our evaluation using the TPC-W benchmark, with MDCC deployed across 5 geographically diverse data centers, shows that MDCC achieves throughput and latency similar to eventually consistent quorum protocols, and that MDCC can sustain a data center outage without a significant impact on response times while guaranteeing strong consistency.
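
    The single-round-trip idea can be sketched as a parallel quorum commit; this is a deliberate simplification, not MDCC's actual Paxos-based protocol, and every name below is hypothetical.

        import concurrent.futures

        class DataCenter:
            """Toy stand-in for a replica site; a real one would validate
            the transaction and persist it durably before acknowledging."""
            def __init__(self, name):
                self.name = name
                self.log = []

            def accept(self, txn):
                self.log.append(txn)
                return True

        def try_commit(txn, datacenters, quorum):
            # Send the transaction to every data center in parallel and
            # commit as soon as a quorum accepts, so the failure-free commit
            # latency is a single wide-area round trip.
            with concurrent.futures.ThreadPoolExecutor() as pool:
                futures = [pool.submit(dc.accept, txn) for dc in datacenters]
                acks = 0
                for f in concurrent.futures.as_completed(futures):
                    acks += bool(f.result())
                    if acks >= quorum:
                        return True
            return False

        dcs = [DataCenter(f"dc{i}") for i in range(5)]
        assert try_commit("txn-1", dcs, quorum=3)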

    Scalable XML Collaborative Editing with Undo (short paper)

    Commutative Replicated Data-Type (CRDT) is a new class of algorithms that ensures scalable consistency of replicated data. It has been successfully applied to collaborative text editing without complex concurrency control. In this paper, we present a CRDT to edit XML data. Compared to existing approaches for XML collaborative editing, our approach is more scalable and handles all aspects of XML editing: elements, contents, attributes, and undo. Indeed, undo is recognized as an important feature for collaborative editing, as it allows users to cope with system complexity through error recovery or collaborative conflict resolution.
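
    The core CRDT trick of tagging insertions with globally unique identifiers, so that concurrent edits commute, can be shown on a structure much simpler than the paper's XML tree; this observed-remove set is a standard illustration, not the paper's algorithm.

        import uuid

        class ORSet:
            """Observed-Remove set CRDT: concurrent add/remove of the same
            element commute, because a remove only cancels the add-tags it
            has actually observed."""

            def __init__(self):
                self.adds = {}     # element -> set of unique tags
                self.removes = {}  # element -> set of observed, removed tags

            def add(self, element):
                self.adds.setdefault(element, set()).add(uuid.uuid4().hex)

            def remove(self, element):
                observed = self.adds.get(element, set())
                self.removes.setdefault(element, set()).update(observed)

            def contains(self, element):
                live = self.adds.get(element, set()) - self.removes.get(element, set())
                return bool(live)

            def merge(self, other):
                for elt, tags in other.adds.items():
                    self.adds.setdefault(elt, set()).update(tags)
                for elt, tags in other.removes.items():
                    self.removes.setdefault(elt, set()).update(tags)

        s1, s2 = ORSet(), ORSet()
        s1.add("node")              # one replica inserts an element...
        s2.merge(s1)
        s2.remove("node")           # ...another removes it after observing it
        s1.add("node")              # a concurrent re-insertion gets a fresh tag
        s1.merge(s2); s2.merge(s1)
        assert s1.contains("node") and s2.contains("node")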

    CRDTs: Consistency without concurrency control

    A CRDT is a data type whose operations commute when they are concurrent. Replicas of a CRDT eventually converge without any complex concurrency control. As an existence proof, we exhibit a non-trivial CRDT: a shared edit buffer called Treedoc. We outline the design, implementation, and performance of Treedoc. We discuss how the CRDT concept can be generalised, and what its limitations are.
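
    The smallest example of operations that commute when concurrent is a grow-only counter, far simpler than Treedoc, which is a sequence CRDT; the sketch below is illustrative, not from the paper.

        class GCounter:
            """Grow-only counter CRDT: increments at different replicas
            commute, so replicas converge under an elementwise-max merge."""

            def __init__(self, replica_id):
                self.replica_id = replica_id
                self.counts = {}   # replica id -> increments seen there

            def increment(self, n=1):
                self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

            def value(self):
                return sum(self.counts.values())

            def merge(self, other):
                for rid, c in other.counts.items():
                    self.counts[rid] = max(self.counts.get(rid, 0), c)

        a, b = GCounter("A"), GCounter("B")
        a.increment(); b.increment(2)   # concurrent increments at two replicas
        a.merge(b); b.merge(a)          # merge in either order...
        assert a.value() == b.value() == 3   # ...yields the same state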

    The Role of Gravity in Determining Physics at High as Well as Low Energies

    It is noted that in the context of a supersymmetric preonic approach to unification, gravity, though weak, can play an essential role in determining some crucial aspects of low-energy physics. These include: (i) SUSY-breaking, (ii) electroweak symmetry-breaking, and (iii) generation of the masses of quarks and leptons, all of which would vanish if we turned off gravity. Such a role of gravity has its roots in the Witten index theorem, which would forbid SUSY-breaking, within the class of theories under consideration, in the absence of gravity. Comment: 14 pages, 2 figures, Plain TeX.

    A Protocol for the Atomic Capture of Multiple Molecules at Large Scale

    With the rise of service-oriented computing, applications are more and more based on the coordination of autonomous services. Envisioned over largely distributed and highly dynamic platforms, expressing this coordination calls for alternative programming models. The chemical programming paradigm, which models applications as chemical solutions where molecules, representing the digital entities involved in the computation, react together to produce a result, has recently been shown to provide the abstractions needed for autonomic coordination of services. However, the execution of such programs over large-scale platforms raises several problems that hinder this paradigm from actually being leveraged. Among them, the atomic capture of molecules participating in concurrent reactions is one of the most significant. In this paper, we propose a protocol for the atomic capture of these molecules, distributed and evolving over a large-scale platform. As the density of possible reactions is crucial for the liveness and efficiency of such a capture, the proposed protocol is made up of two sub-protocols, each aimed at addressing a different level of density of potential reactions in the solution. While the decision to choose one or the other is local to each node participating in a program's execution, a globally coherent behaviour is obtained. A proof of liveness is given, as well as extensive simulation results showing the efficiency and limited overhead of the protocol. Comment: 13th International Conference on Distributed Computing and Networking (2012).
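
    As shared-memory intuition for the atomic-capture problem, locking molecules in a single global order guarantees atomic, deadlock-free capture; the paper's protocol is distributed and has two density-adaptive sub-protocols, which this sketch does not model, and all names are hypothetical.

        import threading

        class Molecule:
            """A digital entity in the chemical solution, guarded by a lock."""
            def __init__(self, name):
                self.name = name
                self.lock = threading.Lock()

        def capture(molecules):
            # Atomically grab every molecule a reaction needs, or block.
            # Acquiring locks in one global order (here, by name) ensures two
            # reactions competing for overlapping molecules cannot deadlock.
            ordered = sorted(molecules, key=lambda m: m.name)
            for m in ordered:
                m.lock.acquire()
            return ordered

        def release(captured):
            for m in captured:
                m.lock.release()

        a, b = Molecule("a"), Molecule("b")
        captured = capture([b, a])   # argument order does not matter
        try:
            pass                     # ... perform the reaction ...
        finally:
            release(captured)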