About the efficiency of partial replication to implement Distributed Shared Memory
The Distributed Shared Memory (DSM) abstraction is traditionally realized through a distributed memory consistency system (MCS) on top of a message-passing system. In this paper we analyze the impossibility of an efficient partial-replication implementation of causally consistent DSM. Efficiency is discussed in terms of the control information that processes have to propagate to maintain consistency. We introduce the notions of share graph and hoop to model variable distribution, and the concept of dependency chain to characterize processes that have to manage information about a variable even though they hold no local copy of it. We then weaken causal consistency in search of new consistency criteria that are weak enough to allow efficient partial-replication implementations yet strong enough to solve interesting problems. Finally, we prove that PRAM is such a criterion, and illustrate its power with the Bellman-Ford shortest-path algorithm.
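A minimal sketch of why Bellman-Ford is a good fit for the weaker PRAM criterion: relaxation is monotone (distances only decrease) and idempotent, so a process that reads a stale distance value merely delays convergence rather than breaking it. The code below is our illustration of the relaxation step, not the paper's implementation.

```python
# Hypothetical sketch: Bellman-Ford relaxation. Under PRAM consistency a
# process may observe stale distance values written by other processes;
# because relaxation only ever lowers distances, re-relaxing with a stale
# value is harmless and the algorithm still reaches the same fixed point.

def bellman_ford(n, edges, source):
    """n vertices (0..n-1); edges given as (u, v, w) triples."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    # At most n-1 rounds of relaxation are needed to reach the fixed point.
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:
            # A stale dist[u] here can only delay, never corrupt, the result.
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    return dist
```

In a partially replicated DSM each process would relax only the edges whose endpoints it replicates, but the tolerance to stale reads is the same.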
Parallel Deferred Update Replication
Deferred update replication (DUR) is an established approach to implementing
highly efficient and available storage. While the throughput of read-only
transactions scales linearly with the number of deployed replicas in DUR, the
throughput of update transactions experiences limited improvements as replicas
are added. This paper presents Parallel Deferred Update Replication (P-DUR), a
variation of classical DUR that scales both read-only and update transactions
with the number of cores available in a replica. In addition to introducing the
new approach, we describe its full implementation and compare its performance
to classical DUR and to Berkeley DB, a well-known standalone database.
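The core of deferred update replication is that transactions execute optimistically at a single replica and are certified against concurrently committed transactions before their write-sets are applied everywhere. The following sketch of that certification test is our illustration under assumed data structures, not P-DUR's actual code.

```python
# Hedged sketch of DUR certification: a transaction commits only if no
# transaction that committed after it started wrote an item it read.
# The log layout and names here are our own assumptions for illustration.

def certify(tx_readset, tx_start, committed_log):
    """tx_readset: set of item keys the transaction read.
    tx_start: log position when the transaction began executing.
    committed_log: list of (commit_order, writeset) pairs, oldest first."""
    for order, writeset in committed_log:
        if order > tx_start and writeset & tx_readset:
            return False   # read-write conflict: abort and retry
    return True            # certified: apply the write-set on every replica
```

P-DUR's contribution is to parallelize this pipeline across the cores of each replica, so that certification and execution of update transactions no longer serialize on a single thread.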
TOFEC: Achieving Optimal Throughput-Delay Trade-off of Cloud Storage Using Erasure Codes
Our paper presents solutions that combine erasure coding, parallel connections to the storage cloud, and limited chunking (i.e., dividing the object into a few smaller segments) to significantly improve the delay performance of uploading data to and downloading data from cloud storage.
TOFEC is a strategy that helps a front-end proxy adapt to the level of workload by treating scalable cloud storage (e.g., Amazon S3) as a shared resource requiring admission control. Under light workloads, TOFEC creates more, smaller chunks and uses more parallel connections per file, minimizing service delay. Under heavy
workloads, TOFEC automatically reduces the level of chunking (fewer chunks with
increased size) and uses fewer parallel connections to reduce overhead,
resulting in higher throughput and preventing queueing delay. Our trace-driven
simulation results show that TOFEC's adaptation mechanism converges to an
appropriate code that provides the optimal delay-throughput trade-off without
reducing system capacity. Compared to a non-adaptive strategy optimized for
throughput, TOFEC delivers 2.5x lower latency under light workloads; compared
to a non-adaptive strategy optimized for latency, TOFEC can scale to support
over 3x as many requests.
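The adaptation rule described above can be sketched as a simple mapping from observed load to a chunking level. The candidate levels and thresholds below are invented for illustration; the paper derives the actual operating points from measured delay-throughput profiles.

```python
# Hedged sketch of TOFEC-style adaptation: choose the number of
# erasure-coded chunks per file (and hence parallel connections) from the
# observed queue length. Light load -> many small chunks for low delay;
# heavy load -> few large chunks for low overhead and high throughput.
# Levels and thresholds here are illustrative assumptions, not TOFEC's.

def pick_chunking(queue_len, levels=(8, 4, 2, 1), thresholds=(2, 8, 32)):
    """Return the chunking level for the current queue length."""
    for level, bound in zip(levels, thresholds):
        if queue_len < bound:
            return level
    return levels[-1]   # heaviest load: no chunking, one connection
```

An admission-control loop would call this on each request, so the system drifts toward the code that matches the current workload.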