
    On consistency maintenance in service discovery

    Communication and node failures degrade the ability of a service discovery protocol to ensure that users receive correct service information when the service changes. We propose that service discovery protocols employ a set of recovery techniques to recover from failures and regain consistency. We use simulations to show that the type of recovery technique a protocol uses significantly impacts performance. We benchmark our own service discovery protocol, FRODO, against first-generation service discovery protocols, Jini and UPnP, under increasing communication and node failures. The results show that FRODO has the best overall consistency maintenance performance.
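    The abstract describes recovery techniques that re-synchronize cached service information after communication or node failures. A minimal sketch of one such technique, a client-side re-poll loop that repairs a stale registry cache, is shown below; the registry/cache interfaces and the version field are assumptions for illustration, not FRODO's actual mechanism.

```python
import time

def recover_consistency(cache, registry, service_name, poll_interval=2.0, max_attempts=5):
    """Re-poll the registry until the cached service record is refreshed,
    backing off after communication failures. Hypothetical recovery sketch;
    `registry.lookup` is assumed to return a record with a `version` field."""
    for _ in range(max_attempts):
        try:
            fresh = registry.lookup(service_name)    # may raise on node/communication failure
        except ConnectionError:
            time.sleep(poll_interval)                # back off, then retry the recovery
            continue
        cached = cache.get(service_name)
        if cached is None or cached.version < fresh.version:
            cache[service_name] = fresh              # repair the stale or missing entry
        return True                                  # cache now matches the registry
    return False                                     # recovery failed; inconsistency persists
```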

    Deterministic Object Management in Large Distributed Systems

    Caching is a widely used technique to improve the scalability of distributed systems. A central issue with caching is maintaining object replicas consistent with their master copies. Large distributed systems, such as the Web, typically deploy heuristic-based consistency mechanisms, which increase delay and place extra load on the servers, while not providing guarantees that cached copies served to clients are up-to-date. Server-driven invalidation has been proposed as an approach to strong cache consistency, but it requires servers to keep track of which objects are cached by which clients. We propose an alternative approach to strong cache consistency, called MONARCH, which does not require servers to maintain per-client state. Our approach builds on a few key observations. Large and popular sites, which attract the majority of the traffic, construct their pages from distinct components with various characteristics. Components may have different content types, change characteristics, and semantics. These components are merged together to produce a monolithic page, and the information about their uniqueness is lost. In our view, pages should serve as containers holding distinct objects with heterogeneous type and change characteristics while preserving the boundaries between these objects. Servers compile object characteristics and information about relationships between containers and embedded objects into explicit object management commands. Servers piggyback these commands onto existing request/response traffic so that client caches can use them to make object management decisions. The use of explicit content control commands is a deterministic, rather than heuristic, object management mechanism that gives content providers more control over their content. The deterministic object management with strong cache consistency offered by MONARCH allows content providers to make more of their content cacheable. Furthermore, MONARCH enables content providers to expose the internal structure of their pages to clients. We evaluated MONARCH using simulations with content collected from real Web sites. The results show that MONARCH provides strong cache consistency for all objects, even for unpredictably changing ones, and incurs smaller byte and message overhead than heuristic policies. The results also show that, as the request arrival rate or the number of clients increases, the amount of server state maintained by MONARCH remains the same, while the amount of server state incurred by server-driven invalidation mechanisms grows.
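    The mechanism described above, in which servers piggyback explicit object-management commands and client caches apply them deterministically, can be pictured with a short sketch. The command verbs and structures below are assumptions for illustration, not MONARCH's actual wire format.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CachedObject:
    body: bytes
    volatile: bool = False       # must be revalidated with its container before reuse

def apply_commands(cache: Dict[str, CachedObject], commands: List[dict]) -> None:
    """Apply piggybacked object-management commands to a client cache.
    The verbs (invalidate / mark-volatile / update) are illustrative stand-ins
    for MONARCH's explicit content-control commands."""
    for cmd in commands:
        url = cmd["url"]
        if cmd["op"] == "invalidate":
            cache.pop(url, None)                         # object changed; drop the replica
        elif cmd["op"] == "mark-volatile":
            if url in cache:
                cache[url].volatile = True               # serve only after checking the container
        elif cmd["op"] == "update":
            cache[url] = CachedObject(body=cmd["body"])  # server pushed the new version
```

    Because the commands travel on existing request/response traffic, the server keeps no per-client state; it only has to know how its container pages relate to their embedded objects.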

    Banking in Brazil: Structure, Performance, Drivers, and Policy Implications

    The objective of this paper is to analyze the industry structure of banking services in Brazil in order to shed light on financial performance and its drivers at a disaggregated level. The study illustrates how differences across market segments -- which tend to be averaged out in aggregate analysis -- need to be taken into account when analyzing performance and designing public policy for the banking sector. In particular, retail banking is found to be less sensitive to price competition and to exhibit considerably higher returns than corporate banking. The authors identify and discuss the factors underlying revenues, costs, and risks in each market segment, and conclude with policy implications.
    Keywords: Brazil; banking; competition; industry structure; performance.

    ORLease: Optimistically Replicated Lease Using Lease Version Vector For Higher Replica Consistency in Optimistic Replication Systems

    There is a tradeoff between the availability and consistency properties of any distributed replication system. Optimistic replication favors high availability over strong consistency so that the replication system can support disconnected replicas as well as high network latency between replicas. Optimistic replication improves the availability of these systems by allowing data updates to be committed at their originating replicas first, before they are asynchronously replicated out and committed later at the rest of the replicas. As a result, the whole system suffers from relaxed data consistency, because there is no locking mechanism to synchronize access to the replicated data resources and ensure mutual exclusion. When consistency is relaxed, there is a potential for reading stale data, as well as for data conflicts introduced by concurrent updates at different replicas. These issues could be ameliorated if the optimistic replication system aggressively propagated data updates at times of good network connectivity between replicas. However, aggressive propagation of data updates does not scale well in write-intensive environments and incurs communication overhead to keep all replicas in sync. To mitigate the relaxed-consistency drawback, a new technique has been developed that improves the consistency of optimistic replication systems without sacrificing their availability and with minimal communication overhead. This methodology applies the concurrency control technique of leasing in an optimistic way. The optimistic lease technique is built on top of a replication framework that prioritizes metadata replication over data replication. The framework treats lease requests as replication metadata updates and replicates them aggressively in order to optimistically acquire leases on replicated data resources. The technique provides best-effort, semi-locking semantics that improve overall system consistency while avoiding the locking issues that could arise in optimistic replication systems.
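    A lease version vector can be sketched as a per-replica counter map, with a lease claim considered superseded when another replicated claim dominates it component-wise. The structures and dominance rule below follow standard version-vector semantics and are an illustrative assumption, not the paper's exact algorithm.

```python
from typing import Dict, List

VersionVector = Dict[str, int]   # replica id -> lease version observed at that replica

def dominates(a: VersionVector, b: VersionVector) -> bool:
    """True if vector `a` has seen at least everything `b` has, component-wise."""
    return all(a.get(replica, 0) >= version for replica, version in b.items())

def try_acquire_lease(local: VersionVector, remote_claims: List[VersionVector],
                      replica_id: str) -> bool:
    """Optimistically claim a lease: bump our own component, then yield if a
    concurrently replicated claim already dominates ours. Best-effort,
    semi-locking semantics - conflicting claims can still race, as expected
    in an optimistic replication system."""
    local[replica_id] = local.get(replica_id, 0) + 1
    for claim in remote_claims:
        if claim != local and dominates(claim, local):
            return False          # another replica already holds a newer lease claim
    return True                   # proceed; the claim is propagated aggressively as metadata
```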

    Hierarchical Replication Control

    We present a hierarchical locking algorithm that dynamically elects a primary server in a replicated file system at various granularities. We introduce two lock types: shallow locks that control a single file or directory, and deep locks that lock everything in the subtree rooted at a directory. Experimental results show that for typical use cases, deep locks can make the overhead of replication control negligible, even when replication servers are widely distributed.
    http://deepblue.lib.umich.edu/bitstream/2027.42/107961/1/citi-tr-06-3.pd
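    The shallow/deep distinction can be illustrated by how lock coverage is checked on a path: a shallow lock covers exactly one file or directory, while a deep lock at a directory covers its entire subtree. The sketch below uses hypothetical names and is not the paper's implementation.

```python
from pathlib import PurePosixPath

class LockTable:
    def __init__(self):
        self.shallow = set()    # paths holding shallow locks (single file or directory)
        self.deep = set()       # paths holding deep locks (whole subtree)

    def is_locked(self, path: str) -> bool:
        """A path is covered if it is shallow- or deep-locked itself, or if
        any ancestor directory holds a deep lock over its subtree."""
        p = PurePosixPath(path)
        if str(p) in self.shallow or str(p) in self.deep:
            return True
        return any(str(ancestor) in self.deep for ancestor in p.parents)

# One deep lock at /home/alice covers everything beneath it, which is why deep
# locks can make replication control overhead negligible for whole subtrees.
locks = LockTable()
locks.deep.add("/home/alice")
assert locks.is_locked("/home/alice/src/main.c")
```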

    LAND AND AGRARIAN REFORM IN THE KYRGYZ REPUBLIC

    This report presents LTC's findings and recommendations on the land tenure transition. The information contained in this report has been used to prepare a second document, Land and Agrarian Reform in the Kyrgyz Republic: Consolidation Plan, that proposes a set of actions to ensure that the reforms are completed and produce a viable, market-oriented agricultural sector. Chapter 1 offers baseline geographic information on the Kyrgyz Republic (KR), an account of the macroeconomic environment in which reforms are taking place, a brief project history, and a description of the research methods. Chapter 2 chronicles the legal and regulatory changes that have driven land and agrarian reform in the KR since 1991 and evaluates this legislation for its legal consistency, underlying economic assumptions, and broad policy implications. Chapter 3 employs national land statistics to describe changes in the agrarian structure that have resulted from the legal and regulatory evolution during 1991-1995, including the number and size of farms, land use, and related indicators of land tenure change and agrarian reform. Chapter 4 reviews the structure, function, and efficacy of administrative bodies that set and enforce land and land reform policy, recommends administrative adjustments, and identifies land administration tasks the state can eliminate, and others it will need to bolster, as the KR completes its transition from command structures to market principles in agriculture. Finally, Chapter 5 uses data obtained in structured surveys and case studies conducted by LTC on a 10 percent sample of former state and collective farms to describe, at the farm level, the successes and shortcomings of reform measures to date; the chapter also makes recommendations for new or altered land reform policies and procedures.
    Keywords: Agrarian structure--Kyrgyzstan; Land administration--Kyrgyzstan; Land reform--Kyrgyzstan; Land tenure--Government policy--Kyrgyzstan; Land tenure--Kyrgyzstan; Land titles--Registration and transfer--Kyrgyzstan; International Development; Land Economics/Use.

    Assise: Performance and Availability via NVM Colocation in a Distributed File System

    The adoption of very low latency persistent memory modules (PMMs) upends the long-established model of disaggregated file system access. Instead, by colocating computation and PMM storage, we can provide applications much higher I/O performance, sub-second application failover, and strong consistency. To demonstrate this, we built the Assise distributed file system, based on a persistent, replicated coherence protocol that manages a set of server-colocated PMMs as a fast, crash-recoverable cache between applications and slower disaggregated storage, such as SSDs. Unlike disaggregated file systems, Assise maximizes locality for all file I/O by carrying out I/O on colocated PMM whenever possible, and minimizes coherence overhead by maintaining consistency at I/O operation granularity rather than at fixed block sizes. We compare Assise to Ceph/Bluestore, NFS, and Octopus on a cluster with Intel Optane DC PMMs and SSDs, using common cloud applications and benchmarks such as LevelDB, Postfix, and FileBench. We find that Assise improves write latency by up to 22x, throughput by up to 56x, and fail-over time by up to 103x, and scales up to 6x better than its counterparts, while providing stronger consistency semantics. Assise promises to beat the MinuteSort world record by 1.5x.
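    Assise's locality principle, carrying out I/O on colocated PMM whenever possible and falling back to slower layers otherwise, can be pictured as a simple tiered read path. The layer names and interfaces below are assumptions for illustration, not Assise's actual API.

```python
def read_block(path, offset, length, local_pmm_cache, remote_replica, cold_storage):
    """Serve a read from the fastest layer that holds the data: the server-colocated
    PMM cache first, then a replica's PMM over the network, then disaggregated
    storage such as SSDs (illustrative hierarchy, hypothetical interfaces)."""
    data = local_pmm_cache.get(path, offset, length)
    if data is not None:
        return data                                     # hit in colocated persistent memory
    data = remote_replica.fetch(path, offset, length)
    if data is None:
        data = cold_storage.read(path, offset, length)  # slowest path: SSD-backed storage
    local_pmm_cache.put(path, offset, data)             # install locally for future locality
    return data
```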

    Crux: Locality-Preserving Distributed Services

    Distributed systems achieve scalability by distributing load across many machines, but wide-area deployments can introduce worst-case response latencies proportional to the network's diameter. Crux is a general framework for building locality-preserving distributed systems: it transforms an existing scalable distributed algorithm A into a new locality-preserving algorithm ALP, which guarantees for any two clients u and v interacting via ALP that their interactions exhibit worst-case response latencies proportional to the network latency between u and v. Crux builds on compact-routing theory, but generalizes these techniques beyond routing applications. Crux provides weak and strong consistency flavors, and shows latency improvements for localized interactions in both cases, up to several orders of magnitude for weakly consistent Crux (from roughly 900 ms to 1 ms). We deployed locality-preserving versions of a Memcached distributed cache, a Bamboo distributed hash table, and a Redis publish/subscribe service on PlanetLab. Our results indicate that Crux is effective and applicable to a variety of existing distributed algorithms.
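    The locality-preserving transformation can be pictured as routing each pair of interacting clients to an instance of the underlying algorithm that serves a cluster containing both of them, so their response latency tracks their mutual distance rather than the network diameter. The landmark-style clusters below are an assumption inspired by compact routing, not Crux's exact construction.

```python
def pick_instance(u, v, clusters):
    """Choose the smallest-radius cluster whose membership contains both clients,
    falling back to the global instance otherwise. Smaller clusters group nearby
    nodes, so nearby clients interact through a nearby instance (illustrative of
    the ALP latency guarantee, not Crux's actual algorithm)."""
    candidates = [c for c in clusters if u in c["members"] and v in c["members"]]
    if not candidates:
        return "global-instance"                      # worst case: diameter-bounded latency
    return min(candidates, key=lambda c: c["radius"])["instance"]

# Example with hypothetical clusters of a Memcached deployment:
clusters = [
    {"instance": "memcached-eu", "radius": 20, "members": {"client-paris", "client-berlin"}},
    {"instance": "memcached-global", "radius": 300,
     "members": {"client-paris", "client-berlin", "client-tokyo"}},
]
print(pick_instance("client-paris", "client-berlin", clusters))   # -> memcached-eu
```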