7 research outputs found

    Fisheye Consistency: Keeping Data in Synch in a Georeplicated World

    Over the last thirty years, numerous consistency conditions for replicated data have been proposed and implemented. Popular examples of such conditions include linearizability (or atomicity), sequential consistency, causal consistency, and eventual consistency. These consistency conditions are usually defined independently of the computing entities (nodes) that manipulate the replicated data; i.e., they do not take into account how computing entities might be linked to one another, or geographically distributed. To address this gap, as a first contribution, this paper introduces the notion of a proximity graph between computing nodes. If two nodes are connected in this graph, their operations must satisfy a strong consistency condition, while the operations invoked by other nodes are allowed to satisfy a weaker condition. The second contribution is the use of such a graph to provide a generic approach to the hybridization of data consistency conditions within the same system. We illustrate this approach with sequential consistency and causal consistency, and present a model in which all data operations are causally consistent, while operations by neighboring processes in the proximity graph are sequentially consistent. The third contribution of the paper is the design and proof of a distributed algorithm based on this proximity graph, which combines sequential consistency and causal consistency (the resulting condition is called fisheye consistency). In doing so, the paper not only extends the domain of consistency conditions, but also provides a generic, provably correct solution of direct relevance to modern georeplicated systems.
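
    The proximity graph can be read as a plain undirected graph whose edges mark the node pairs whose operations must be mutually ordered under the strong condition. The sketch below is a minimal illustration of that classification rule only, not the paper's algorithm; the class and method names are assumptions made for this example.

```python
class ProximityGraph:
    """Toy model: edges mark node pairs requiring strong consistency."""

    def __init__(self):
        self._edges = set()

    def connect(self, a, b):
        # Declare a and b neighbours; the graph is undirected.
        self._edges.add(frozenset((a, b)))

    def required_consistency(self, a, b):
        # Neighbours (and a node with itself) must order each other's
        # operations strongly; all other pairs only need the weaker
        # condition (causal consistency, in the paper's instantiation).
        if a == b or frozenset((a, b)) in self._edges:
            return "sequential"
        return "causal"


g = ProximityGraph()
g.connect("paris-1", "paris-2")                       # e.g. nearby sites
print(g.required_consistency("paris-1", "paris-2"))   # sequential
print(g.required_consistency("paris-1", "tokyo-1"))   # causal
```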

    Towards quality-of-service driven consistency for Big Data management

    With the advent of Cloud Computing, Big Data management has become a fundamental challenge in the deployment and operation of distributed, highly available, and fault-tolerant storage systems such as the HBase extensible record store. These systems can provide support for geo-replication, which raises the issue of data consistency among distributed sites. In order to offer a best-in-class service to applications, one wants to maximise performance while minimising latency. In terms of data replication, this means incurring as little latency as possible when moving data between distant data centres. Traditional consistency models pose a significant problem for systems architects, especially when large amounts of data need to be replicated across wide-area networks. In such scenarios it may be suitable to use eventual consistency: although not always convenient, latency can be partly reduced by trading away consistency guarantees so that data transfers do not impact performance. In contrast, this work proposes a broader range of consistency semantics that prioritise critical data, at the cost of a minimal latency overhead on the remaining, non-critical updates. Finally, we show how these semantics can help in finding an optimal data replication strategy that achieves just the required level of data consistency with low latency and more efficient network bandwidth utilisation.
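
    One way to picture the proposed semantics is a write path that branches on how critical an update is: critical writes pay the wide-area round trips needed for strong guarantees, while non-critical writes return immediately and propagate in the background. The sketch below is an illustrative assumption, not the paper's system; GeoReplicator, Site, and the critical flag are invented names.

```python
import queue
import threading

class Site:
    """A remote data centre holding a replica (toy stand-in)."""
    def __init__(self, name):
        self.name, self.store = name, {}
    def apply(self, key, value):
        self.store[key] = value

class GeoReplicator:
    def __init__(self, remote_sites):
        self.remote_sites = remote_sites
        self._async_queue = queue.Queue()
        # Background thread drains non-critical updates (eventual path).
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, key, value, critical=False):
        if critical:
            # Strong path: apply at every distant site before returning,
            # paying the wide-area latency up front.
            for site in self.remote_sites:
                site.apply(key, value)
        else:
            # Eventual path: enqueue and return with minimal latency.
            self._async_queue.put((key, value))

    def _drain(self):
        while True:
            key, value = self._async_queue.get()
            for site in self.remote_sites:
                site.apply(key, value)

r = GeoReplicator([Site("eu-west"), Site("ap-east")])
r.write("balance:42", 100, critical=True)   # consistency over latency
r.write("last-seen:42", "12:00")            # latency over consistency
```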

    Correctness and Progress Verification of Non-Blocking Programs

    The progression of multi-core processors has inspired the development of concurrency libraries that guarantee safety and liveness properties of multiprocessor applications. The difficulty of reasoning about safety and liveness in a concurrent environment has led to tools that verify whether a concurrent data structure meets a correctness condition or progress guarantee. However, these tools fall short when it comes to verifying a composition of data structure operations. Additionally, verification techniques for transactional memory evaluate correctness based on low-level read/write histories, which is not applicable to transactional data structures that use high-level semantic conflict detection. In my dissertation, I present tools for checking the correctness of multiprocessor programs that overcome the limitations of previous correctness verification techniques. Correctness Condition Specification (CCSpec) is the first tool that automatically checks the correctness of a composition of concurrent multi-container operations performed in a non-atomic manner. Transactional Correctness tool for Abstract Data Types (TxC-ADT) is the first tool that can check the correctness of transactional data structures. TxC-ADT elevates the standard definitions of transactional correctness to be in terms of an abstract data type, an essential aspect for checking the correctness of transactions that synchronize only on high-level semantic conflicts. Many practical concurrent data structures, transactional data structures, and algorithms for non-blocking programming incorporate helping schemes to ensure that an operation comprising multiple atomic steps is completed according to the progress guarantee. A helping scheme introduces additional interference from the active threads in the system in order to achieve the designed progress guarantee. Previous progress verification techniques do not accommodate loops whose termination depends on complex behaviors of the interfering threads, making those approaches unsuitable. My dissertation presents the first progress verification technique for non-blocking algorithms that depend on descriptor-based helping mechanisms.
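
    To make the helping idea concrete, here is a toy descriptor-based scheme, invented for this summary rather than taken from CCSpec or TxC-ADT: a two-slot write publishes a descriptor with a CAS, and any thread that encounters the descriptor finishes the remaining steps instead of waiting. The lock-based AtomicRef stands in for hardware compare-and-swap, and a real non-blocking algorithm would also have to handle the ABA problem that this toy ignores.

```python
import threading

class AtomicRef:
    """CAS simulated with a lock (CPython exposes no user-level CAS)."""
    def __init__(self, value):
        self._value, self._lock = value, threading.Lock()
    def get(self):
        return self._value
    def compare_and_set(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

class Descriptor:
    """A pending two-slot write, published in slot_a so that any
    thread encountering it can complete the remaining steps."""
    def __init__(self, slot_b, old_b, new_a, new_b):
        self.slot_b, self.old_b = slot_b, old_b
        self.new_a, self.new_b = new_a, new_b

def help_complete(slot_a, d):
    # Any thread may run this; each CAS succeeds for exactly one
    # helper, so racing helpers cannot corrupt the final state.
    d.slot_b.compare_and_set(d.old_b, d.new_b)   # step 2: second slot
    slot_a.compare_and_set(d, d.new_a)           # step 3: retire descriptor

def write_pair(slot_a, slot_b, new_a, new_b):
    while True:
        old_a, old_b = slot_a.get(), slot_b.get()
        if isinstance(old_a, Descriptor):    # another op is mid-flight:
            help_complete(slot_a, old_a)     # help it finish, then retry
            continue
        d = Descriptor(slot_b, old_b, new_a, new_b)
        if slot_a.compare_and_set(old_a, d):     # step 1: announce
            help_complete(slot_a, d)
            return

def read_a(slot_a):
    v = slot_a.get()
    while isinstance(v, Descriptor):   # readers help rather than block
        help_complete(slot_a, v)
        v = slot_a.get()
    return v

a, b = AtomicRef(0), AtomicRef(0)
write_pair(a, b, 1, 2)
print(read_a(a), b.get())   # 1 2
```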

    La cohérence en oeil de poisson : maintenir la synchronisation des données dans un monde géo-répliqué

    Over the last thirty years, numerous consistency conditions for replicated data have been proposed and implemented. Popular examples of such conditions include linearizability (or atomicity), sequential consistency, causal consistency, and eventual consistency. These consistency conditions are usually defined independently of the computing entities (nodes) that manipulate the replicated data; i.e., they do not take into account how computing entities might be linked to one another, or geographically distributed. To address this gap, as a first contribution, this paper introduces the notion of a proximity graph between computing nodes. If two nodes are connected in this graph, their operations must satisfy a strong consistency condition, while the operations invoked by other nodes are allowed to satisfy a weaker condition. The second contribution is the use of such a graph to provide a generic approach to the hybridization of data consistency conditions within the same system. We illustrate this approach with sequential consistency and causal consistency, and present a model in which all data operations are causally consistent, while operations by neighboring processes in the proximity graph are sequentially consistent. The third contribution of the paper is the design and proof of a distributed algorithm based on this proximity graph, which combines sequential consistency and causal consistency (the resulting condition is called fisheye consistency). In doing so, the paper not only extends the domain of consistency conditions, but also provides a generic, provably correct solution of direct relevance to modern georeplicated systems.

    Smuggling in theories and practices of contemporary visual culture

    The term smuggling has, for the most part, functioned in critical theory and visual culture only as an arch-metaphor. It conveniently carries discourse, unproblematically and invisibly, across impasses and between bodies of incompatible work. Alternatively, it is all too visible and taken for granted as a romantic stereotype. In this thesis, contraband and smuggling are examined for their complexity, beyond these omissions and over-determinations in their theorization and circulation in literary and visual cultures. The secrecies, emergences, and partial visibilities of smuggling are considered for how they disrupt dominant modes of vision, such as the scopic geometry of border checkpoints and the simplistic representative mappings of territory that assign fixed cultural identities and positionalities. The thesis proposes that contraband subjectivities produce new ways of being-in-the-world, critical perspectives, and modes of mobility, as well as providing a toolbox for examining the ways that art practice negotiates between its visibility and its constitutive secrecy. The simplistic, unimpeded scopic structuring of the border drama between smuggler and customs/Law, which often becomes ensnared in systematic psychoanalytic and socio-anthropological readings, is contested and instead proposed as a site of variability: of partial visibilities, knowledges, and meanings. Smuggling, rarely considered in postcolonial theory, is put forward as a mediating installation and subjective occupation of a space that began to be opened up through the oscillating veil theorized by, amongst others, Frantz Fanon. The argument attempts to move beyond the screening of contraband towards another form of mobility, one most subtly expressed through the baroque notion of the fold theorized by Gilles Deleuze (after Leibniz), which suggests forms of dissimulation that go beyond surface towards productive secrecy. A case study examining a very overt, literal form of smuggling in Colombia suggests that secrecy must be built back into conceptions of contrabanding for it to be, at least in part, visually comprehensible. New ways of thinking contraband, for instance in alliance with law and as public secrecy, are examined for how they form relational counter-cartographies and singular fields of operation that might be taken up by art practices. The capacity of critical theorists to get close to the affective contraband milieu through visual material becomes a measure of how they themselves perform as smugglers.

    Consistency Without Borders

    Distributed consistency is a perennial research topic; in recent years it has become an urgent practical matter as well. The research literature has focused on enforcing various flavors of consistency at the I/O layer, such as linearizability of read/write registers. For practitioners, strong I/O consistency is often impractical at scale, while looser forms of I/O consistency are difficult to map to application-level concerns. Instead, it is common for developers to take matters of distributed consistency into their own hands, leading to application-specific solutions that are tricky to write, test and maintain. In this paper, we agitate for the technical community to shift its attention to approaches that lie between the extremes of I/O-level and application-level consistency. We ground our discussion in early work in the area, including our own experiences building programmer tools and languages that help developers guarantee distributed consistency at the application level. Much remains to be done, and we highlight some of the challenges that we feel deserve more attention.