
    Update Consistency for Wait-free Concurrent Objects

    In large-scale systems such as the Internet, replicating data is essential for providing availability and fault tolerance. Attiya and Welch proved that strong consistency criteria such as atomicity are costly: each operation may need an execution time linear in the latency of the communication network. Weaker criteria like causal consistency and PRAM consistency do not ensure convergence; the different replicas are not guaranteed to converge towards a unique state. Eventual consistency guarantees that all replicas eventually converge once the participants stop updating, but it fails to fully specify the semantics of the operations on shared objects and requires additional non-intuitive and error-prone distributed specification techniques. This paper introduces and formalizes a new consistency criterion, called update consistency, that requires the state of a replicated object to be consistent with a linearization of all the updates. In other words, whereas atomicity imposes a linearization of all of the operations, this criterion imposes a linearization only of the updates; consequently, some read operations may return outdated values. Update consistency is stronger than eventual consistency, so update-consistent objects can replace eventually consistent ones in any program. Finally, we prove that update consistency is universal, in the sense that any object can be implemented under this criterion in a distributed system where any number of nodes may crash. (Appears in the International Parallel and Distributed Processing Symposium, May 2015, Hyderabad, India.)
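    To make the criterion concrete, here is a minimal sketch of one standard way to obtain update consistency, inferred from the description above rather than taken from the paper's exact construction: tag every update with a (Lamport clock, replica id) pair, gossip update sets between replicas, and have reads replay all known updates in timestamp order. All class and method names are hypothetical.

        # Sketch of an update-consistent register. Reads may return stale
        # values, but once updates stop, every replica converges to the
        # result of the same linearization of all updates.
        class UpdateConsistentRegister:
            def __init__(self, replica_id, initial=None):
                self.replica_id = replica_id
                self.clock = 0            # Lamport clock
                self.updates = set()      # {(timestamp, replica_id, value)}
                self.initial = initial

            def write(self, value):
                """Record a local update; a real system would broadcast it."""
                self.clock += 1
                self.updates.add((self.clock, self.replica_id, value))

            def merge(self, other_updates):
                """Absorb updates received from another replica."""
                self.updates |= other_updates
                self.clock = max([self.clock] + [t for t, _, _ in self.updates])

            def read(self):
                """Replay all known updates in (timestamp, replica_id) order."""
                state = self.initial
                for _, _, value in sorted(self.updates):
                    state = value         # register semantics: last write wins
                return state

        # Two replicas update concurrently, then exchange updates and converge.
        a = UpdateConsistentRegister("A")
        b = UpdateConsistentRegister("B")
        a.write(1); b.write(2)
        a.merge(b.updates); b.merge(a.updates)
        assert a.read() == b.read()       # both agree on the same linearization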

    The perceived public value of social media in Queensland local Councils

    Although many enterprises have pursued a digital strategy to facilitate larger and more diverse ecosystems in recent years, there are few successful examples of conglomerating digital ecosystems by aligning or combining diverse ecosystems. Hybrid organizing is a sound guiding theory, which we adopt to examine the nature of the conglomeration of digital ecosystems. We construct a theoretical lens that aligns hybridization approaches with forms of ecosystems through an organization design based on IT/IS capabilities. Guided by this lens, we conduct an in-depth case study of a successful company in China. The study reveals a process model for the conglomeration of digital ecosystems, consisting of dismissing, separating, and cumulating phases. Our findings contribute to the existing body of literature in the fields of digital ecosystems, hybrid organizing, and IT/IS capabilities. Core firms of ecosystems can use the model to design and develop digital ecosystems with rational deliberation and planning

    Permission-based fault tolerant mutual exclusion algorithm for mobile Ad Hoc networks

    This study focuses on resolving the problem of mutual exclusion in mobile ad hoc networks. A Mobile Ad Hoc Network (MANET) is a wireless network without fixed infrastructure: nodes are mobile, and the topology of a MANET changes frequently and unpredictably. Due to these limitations, conventional mutual exclusion algorithms designed for distributed systems (DS) are not applicable to MANETs unless they are coupled with a mechanism that handles dynamic changes in topology. Mutual exclusion algorithms for DS fall into two main classes: token-based and permission-based. Token-based algorithms depend on the circulation of a specific message known as a token; the owner of the token has priority for entering the critical section. The token may be lost during communication because of a link failure or a failure of the token's host, and the processes for token-loss detection and token regeneration are complicated and time-consuming. Token-based algorithms are generally not fault-tolerant (although some mechanisms are used to increase their level of fault tolerance) because the single token is a single point of failure. Permission-based algorithms, in contrast, use the permission of multiple nodes to guarantee mutual exclusion, which produces high traffic when the number of nodes is large; moreover, the number of message transmissions and the energy consumption in a MANET grow with the number of mobile nodes involved in every decision-making cycle. The purpose of this study is to introduce a method of managing the critical section, named Ancestral, that is more fault-tolerant than token-based algorithms and incurs fewer message transmissions and less traffic than permission-based ones. The method is a trade-off between the two classes: it uses no token, like permission-based algorithms, yet the last node that held the critical section influences which node enters it next, like token-based algorithms. The algorithm based on Ancestral, named DAD, increases the availability of a fully connected network by 2.86% to 59.83% and decreases the number of message transmissions from 4j-2 to 3j messages (with j the number of nodes in a partition). This method is then used as the basis of a dynamic ancestral mutual exclusion algorithm for MANETs, named MDA, which is presented and evaluated under different scenarios of node mobility, failure, load, and number of nodes. The results show that the MDA algorithm guarantees mutual exclusion, deadlock freedom, and starvation freedom. Compared to another permission-based algorithm, it improves the availability of the critical section by at least 154.94% and 113.36% for low and high loads of CS requests respectively, and improves response time by up to 90.69% for high load and 75.21% for low load of CS requests. It reduces the number of messages from n to 2 in the best case and from 3n/2 to n in the worst case. The MDA algorithm is resilient to the transient partitioning of the network that normally occurs due to failures of nodes or links
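    As a point of reference for the permission-based class the abstract contrasts itself with, the classic scheme is that of Ricart and Agrawala (1981): a node enters the critical section only after every other node grants permission, at a cost of 2(n-1) messages per entry. The sketch below is a minimal single-process simulation of that logic, not the paper's DAD or MDA algorithm; method calls stand in for network messages, and all names are hypothetical.

        # Permission-based mutual exclusion in the style of Ricart-Agrawala.
        class Node:
            def __init__(self, node_id, peers):
                self.id = node_id
                self.peers = peers          # shared list of all nodes
                self.clock = 0              # Lamport clock
                self.requesting = False
                self.request_ts = None
                self.deferred = []          # replies withheld until we exit
                self.replies = 0

            def request_cs(self):
                self.clock += 1
                self.requesting, self.request_ts, self.replies = True, self.clock, 0
                for peer in self.peers:
                    if peer is not self:
                        peer.on_request(self, self.request_ts)
                # Entry rule: permission from all n-1 peers. In this
                # synchronous simulation, all replies arrive immediately.
                assert self.replies == len(self.peers) - 1

            def on_request(self, sender, ts):
                self.clock = max(self.clock, ts) + 1
                mine = (self.request_ts, self.id)
                # Defer the reply if our own pending request has priority.
                if self.requesting and mine < (ts, sender.id):
                    self.deferred.append(sender)
                else:
                    sender.on_reply()

            def on_reply(self):
                self.replies += 1

            def release_cs(self):
                self.requesting = False
                for peer in self.deferred:  # grant everything we deferred
                    peer.on_reply()
                self.deferred = []

        nodes = []
        for i in range(3):
            nodes.append(Node(i, nodes))    # peers list is shared and grows
        nodes[0].request_cs()               # 2(n-1) messages per entry
        nodes[0].release_cs()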

    On the nature of progress

    15th International Conference, OPODIS 2011, Toulouse, France, December 13-16, 2011. Proceedings.
    We identify a simple relationship that unifies seemingly unrelated progress conditions, ranging from the deadlock-free and starvation-free properties common to lock-based systems to non-blocking conditions such as obstruction-freedom, lock-freedom, and wait-freedom. Properties can be classified along two dimensions based on the demands they make on the operating system scheduler. A gap in the classification reveals a new non-blocking progress condition, weaker than obstruction-freedom, which we call clash-freedom. The classification provides an intuitively appealing explanation of why programmers continue to devise data structures that mix blocking and non-blocking progress conditions. It also explains why the wait-free property is a natural basis for the consensus hierarchy: a theory of shared-memory computation requires an independent progress condition, not one that makes demands of the operating system scheduler
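    The distinction between these conditions is easiest to see in code. The sketch below contrasts a blocking counter, whose progress depends on the scheduler eventually resuming the lock holder, with a lock-free one, whose retry loop guarantees system-wide progress regardless of which threads the scheduler suspends. Python exposes no hardware compare-and-swap, so a toy SimCAS class (hypothetical, lock-backed) simulates one purely to show the structure of the retry loop.

        import threading

        class SimCAS:
            """Toy atomic cell with compare-and-swap semantics (simulated)."""
            def __init__(self, value):
                self._value = value
                self._lock = threading.Lock()

            def load(self):
                return self._value

            def compare_and_swap(self, expected, new):
                with self._lock:
                    if self._value == expected:
                        self._value = new
                        return True
                    return False

        class BlockingCounter:
            """Deadlock-free under a fair scheduler, but a lock holder that
            is descheduled blocks every other thread."""
            def __init__(self):
                self._lock = threading.Lock()
                self._count = 0

            def increment(self):
                with self._lock:
                    self._count += 1

        class LockFreeCounter:
            """Lock-free: a CAS fails only because another thread's CAS
            succeeded, so some thread always makes progress, no matter
            what the scheduler does. (Not wait-free: one thread can starve.)"""
            def __init__(self):
                self._cell = SimCAS(0)

            def increment(self):
                while True:                 # retry loop
                    old = self._cell.load()
                    if self._cell.compare_and_swap(old, old + 1):
                        return

        c = LockFreeCounter()
        threads = [threading.Thread(target=c.increment) for _ in range(4)]
        for t in threads: t.start()
        for t in threads: t.join()
        assert c._cell.load() == 4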

    Average Case Analysis of a Shared Register Emulation Algorithm

    Distributed algorithms are important for managing systems with multiple networked components, where each component operates independently but coordinates with the others to achieve a common goal. Previous theoretical research has produced numerous distributed algorithms but relies primarily on mathematical proofs, yielding theoretical results such as worst-case runtime complexity; less research has been done on how these algorithms behave in practice, such as their average-case runtime. This paper describes the empirical behavior of the distributed algorithm CCReg in a realistic environment, using the language DistAlgo to implement the algorithm. CCReg emulates a shared read/write register on top of a message-passing system. In particular, CCReg allows the underlying message-passing system to experience continuous changes to the set of components present, and tolerates crash failures of components. When the rate of component change and the fraction of crash failures are bounded, CCReg is proven to work correctly: the original paper specifies, and proves, bounds on both that are guaranteed to work. However, these bounds are restrictive and do not allow much component change or many crash failures, so the goal of our implementation is to determine whether CCReg's theoretical bounds can be relaxed in practice. We focus on CCReg's safety and liveness conditions: the algorithm eventually terminates, and a consistency condition called linearizability is maintained. We use a general method developed by Gibbons and Korach to determine whether any ordering of the operations satisfies linearizability. We find that, for executions in which operations are invoked at random, the algorithm does not exhibit any adverse behavior: every execution we tested terminates in finite time and has an order of operations satisfying linearizability. We discuss these findings, as well as future approaches and methodology for testing the theoretical bounds in practice
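    The check the abstract relies on, whether some linearization of a read/write history exists, can be sketched as an executable definition. Gibbons and Korach study efficient algorithms for this problem; the version below is a deliberately naive exhaustive search over operation orders, and all record layouts and names are hypothetical.

        # Brute-force linearizability check for one read/write register history.
        from itertools import permutations

        # An operation is (kind, value, invoke_time, response_time); e.g. a
        # completed write of 1 over the interval [0, 2] is ("write", 1, 0, 2).

        def respects_real_time(order):
            """a must precede b whenever a responded before b was invoked."""
            for i, a in enumerate(order):
                for b in order[i + 1:]:
                    if b[3] < a[2]:         # b finished before a started
                        return False
            return True

        def legal_register_run(order, initial=None):
            """Each read must return the most recently written value."""
            current = initial
            for kind, value, _, _ in order:
                if kind == "write":
                    current = value
                elif value != current:      # a read returned a stale value
                    return False
            return True

        def linearizable(history):
            return any(respects_real_time(o) and legal_register_run(o)
                       for o in permutations(history))

        # write(1) and write(2) overlap in real time, so a later read of 1 is
        # still linearizable: the writes may take effect in either order.
        history = [("write", 1, 0, 3), ("write", 2, 1, 4), ("read", 1, 5, 6)]
        assert linearizable(history)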