Brief Announcement: On the Impossibility of Detecting Concurrency
We identify a general principle of distributed computing: one cannot force two processes running in parallel to see each other. This principle is formally stated in the context of asynchronous processes communicating through shared objects, using trace-based semantics. We prove that it holds in a reasonable computational model, and then study the class of concurrent specifications which satisfy this property. This allows us to derive a Galois connection theorem for different variants of linearizability.
Toward Linearizability Testing for Multi-Word Persistent Synchronization Primitives
Persistent memory makes it possible to recover in-memory data structures following a failure instead of rebuilding them from state saved in slow secondary storage. Implementing such recoverable data structures correctly is challenging as their underlying algorithms must deal with both parallelism and failures, which makes them especially susceptible to programming errors. Traditional proofs of correctness should therefore be combined with other methods, such as model checking or software testing, to minimize the likelihood of uncaught defects. This research focuses specifically on the algorithmic principles of software testing, particularly linearizability analysis, for multi-word persistent synchronization primitives such as conditional swap operations. We describe an efficient decision procedure for linearizability in this context, and discuss its practical applications in detecting previously unknown bugs in implementations of multi-word persistent primitives.
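To make the notion of linearizability analysis concrete, here is a minimal brute-force sketch (not the paper's efficient decision procedure) that checks whether a history of register operations is linearizable: it searches for a permutation of the operations that respects real-time order and matches the sequential register specification. The history format and function name are illustrative assumptions.

```python
from itertools import permutations

# Each operation: (name, arg, result, invoke_time, response_time).
# Sequential spec: an integer register; "write" stores, "read" returns
# the most recently stored value.

def is_linearizable(history, init=0):
    """Brute-force check: does some permutation of the history that
    respects real-time order satisfy the sequential register spec?"""
    n = len(history)
    for perm in permutations(range(n)):
        # Real-time order: if op a responded before op b was invoked,
        # a must precede b in any linearization.
        ok = all(not (history[perm[j]][4] < history[perm[i]][3])
                 for i in range(n) for j in range(i + 1, n))
        if not ok:
            continue
        value = init
        legal = True
        for idx in perm:
            name, arg, result, _, _ = history[idx]
            if name == "write":
                value = arg
            elif name == "read" and result != value:
                legal = False  # read returned a value the spec forbids
                break
        if legal:
            return True
    return False
```

For example, a read that returns 0 after a write of 1 has already completed is not linearizable, while a read returning 1 is. The factorial search is only practical for small histories, which is why an efficient decision procedure such as the one described above matters.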
Two-Bit Messages are Sufficient to Implement Atomic Read/Write Registers in Crash-prone Systems
Atomic registers are certainly the most basic objects of computing science. Their implementation on top of an n-process asynchronous message-passing system has received a lot of attention. It has been shown that t < n/2 (where t is the maximal number of processes that may crash) is a necessary and sufficient requirement to build an atomic register on top of a crash-prone asynchronous message-passing system. Considering such a context, this paper presents an algorithm which implements a single-writer multi-reader atomic register with four message types only, and where no message needs to carry control information in addition to its type. Hence, two bits are sufficient to capture all the control information carried by all the implementation messages. Moreover, the messages of two types need to carry a data value while the messages of the two other types carry no value at all. As far as we know, this algorithm is the first with such an optimality property on the size of control information carried by messages. It is also particularly efficient from a time complexity point of view.
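The claim that two bits capture all control information can be illustrated with a sketch of a message encoding. The four type names below are hypothetical (the abstract does not name them); what matters is that four types fit in two bits, with two types carrying a payload and two carrying none, as the abstract states.

```python
# Hypothetical two-bit encoding of the four message types. The names
# are illustrative, not taken from the paper's algorithm.
MSG_TYPES = {
    0b00: "WRITE",      # carries a data value
    0b01: "ACK_WRITE",  # carries no value
    0b10: "READ",       # carries no value
    0b11: "VALUE",      # carries a data value
}

def encode(type_bits, value=None):
    """Pack a message: two control bits plus an optional data payload."""
    assert type_bits in MSG_TYPES
    return (type_bits, value)

def decode(msg):
    """Unpack a message into (type name, payload)."""
    bits, value = msg
    return MSG_TYPES[bits], value
```

The point of the encoding is that no sequence numbers, round counters, or process identifiers travel with the messages: the two type bits are the entire control payload.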
Maintain the Consistency of Auditing Cloud
Cloud storage has become an essential part of everyday life, and businesses increasingly rely on it because of its many advantages, including the ability to access data from anywhere in the world. A cloud service provider (CSP) maintains multiple replicas of each piece of data on geographically distributed servers. The main problem is that achieving strong consistency across these replicas on a worldwide scale is very costly. In this paper we present a novel consistency service model comprising a large data cloud and multiple small audit clouds. In this model, the data cloud is maintained by the CSP, while a group of users constitutes an audit cloud that can verify whether the data cloud provides the promised level of consistency. We propose a two-level auditing architecture, which requires only a loosely synchronized clock within the audit cloud. We then design algorithms to quantify the severity of violations with two metrics: the commonality of violations and the staleness of the value of a read. Finally, an Analytical Auditing Strategy (AAS) is devised to reveal as many violations as possible. The system was evaluated using a combination of simulations and real cloud deployments to validate AAS.
DOI: 10.17762/ijritcc2321-8169.15012
Overview of Auditing Cloud Consistency
Cloud storage services have become very popular due to their many advantages. To provide always-on access, a cloud service provider (CSP) maintains multiple copies of each piece of data on geographically distributed servers. A major disadvantage of this technique is that it is very expensive to achieve strong consistency on a worldwide scale. In this system, a novel consistency as a service (CaaS) model is presented, which involves a large data cloud and many small audit clouds. In the CaaS model, a data cloud is maintained by a CSP, and a group of users that constitute an audit cloud can verify whether the data cloud provides the promised level of consistency. The system proposes a two-level auditing architecture, which requires only a loosely synchronized clock in the audit cloud. Algorithms are then designed to measure the severity of violations with two metrics: the commonality of violations, and the staleness of the value of a read. Finally, a heuristic auditing strategy (HAS) is devised to find as many violations as possible. Experiments were performed using a combination of simulations and a real cloud deployment to validate HAS.
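The two metrics can be illustrated with a simplified sketch of auditing over a merged operation log with loosely synchronized timestamps. The record format and the exact violation rule below are assumptions for illustration, not the paper's algorithms.

```python
from dataclasses import dataclass

@dataclass
class Op:
    kind: str    # "read" or "write"
    key: str
    value: int
    time: float  # loosely synchronized local timestamp

def audit(ops):
    """Simplified consistency audit over a merged operation log.
    A read is counted as a violation if it returns a value older than
    the latest write that completed before it. Returns
    (violation_count, staleness_list): rough analogues of the
    'commonality of violations' and 'staleness' metrics."""
    ops = sorted(ops, key=lambda o: o.time)
    last_write = {}   # key -> (value, write time) of latest write seen
    violations = 0
    staleness = []
    for op in ops:
        if op.kind == "write":
            last_write[op.key] = (op.value, op.time)
        elif op.key in last_write:
            latest_val, wtime = last_write[op.key]
            if op.value != latest_val:
                violations += 1
                staleness.append(op.time - wtime)  # how stale the read was
    return violations, staleness
```

For example, a read of the old value 1 at time 3.0, after a write of 2 completed at time 2.0, counts as one violation with staleness 1.0. A heuristic strategy like HAS would additionally inject extra reads to surface violations the natural workload misses.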
DOI: 10.17762/ijritcc2321-8169.15011
Providing Consistency in Cloud Using Read after Write Technique to Endusers
ABSTRACT: A cloud service provider maintains multiple replicas for each piece of data on geographically distributed servers. In the existing system, conflicts can arise on shared files: if one member of a group opens a file and edits it while another member of the group opens and modifies the same file, a conflict occurs. The existing system uses a heuristic auditing strategy (HAS); in the proposed system, we instead provide read-after-write consistency, which allows distributed systems to be built with lower latency. Without read-after-write consistency, some kind of delay must be incorporated to ensure that data just written will be visible to the other parts of the system.
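The difference read-after-write consistency makes can be shown with a toy model of a replicated store. The class below is a hypothetical sketch, not the paper's implementation: a primary copy plus a lagging replica, where read-after-write mode serves reads from the primary so a client always sees its own completed writes.

```python
class ReplicatedStore:
    """Toy model: a primary copy plus an eventually consistent replica.
    With read_after_write=True, reads are served from the primary, so a
    client is guaranteed to see its own latest write immediately."""

    def __init__(self, read_after_write=True):
        self.primary = {}
        self.replica = {}  # lags behind until sync() runs
        self.read_after_write = read_after_write

    def write(self, key, value):
        self.primary[key] = value  # replica is not yet updated

    def read(self, key):
        if self.read_after_write:
            return self.primary.get(key)  # always fresh
        return self.replica.get(key)      # may be stale

    def sync(self):
        """Background replication catching up with the primary."""
        self.replica = dict(self.primary)
```

Without read-after-write consistency, a client that writes a file and immediately reads it back may see stale data until `sync()` runs, which is exactly the delay the abstract says must otherwise be incorporated.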