Survey on replication techniques for distributed system
Distributed systems mainly provide access to large amounts of data and computational resources through a wide range of interfaces. Beyond their dynamic nature, which means that resources may enter and leave the environment at any time, many distributed applications run in environments where faults are increasingly likely due to ever-growing scale and complexity. Given these diverse fault and failure conditions, fault tolerance has become a critical element of distributed computing, allowing a system to perform its function correctly even in the presence of faults. Replication techniques primarily address fault tolerance in two ways: masking failures and reconfiguring the system in response to them. This paper presents a brief survey of replication techniques, including Read One Write All (ROWA), Quorum Consensus (QC), the Tree Quorum (TQ) Protocol, the Grid Configuration (GC) Protocol, Two-Replica Distribution Techniques (TRDT), Neighbour Replica Triangular Grid (NRTG) and Neighbour Replication Distributed Techniques (NRDT). Each technique has its own redeeming features and shortcomings, which form the subject matter of this survey.
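The quorum-based protocols listed above all rest on the same intersection requirement. A minimal sketch, with illustrative function names and replica representation (not taken from the surveyed papers):

```python
# Sketch of the Quorum Consensus (QC) intersection rules. The function
# names and the dict-based replica format are illustrative assumptions.

def quorums_valid(n_replicas: int, read_quorum: int, write_quorum: int) -> bool:
    """Check the two QC intersection rules:
    - every read quorum overlaps every write quorum (R + W > N),
    - any two write quorums overlap (2W > N)."""
    return (read_quorum + write_quorum > n_replicas
            and 2 * write_quorum > n_replicas)

def read_latest(quorum_replies):
    """Read from a quorum and return the value with the highest version;
    the intersection rule guarantees the latest write is in the quorum."""
    return max(quorum_replies, key=lambda r: r["version"])["value"]

# ROWA is the degenerate case: read quorum = 1, write quorum = N.
assert quorums_valid(5, 1, 5)      # ROWA
assert quorums_valid(5, 3, 3)      # majority quorums
assert not quorums_valid(5, 2, 3)  # quorums may fail to intersect
```

With R + W > N, `read_latest` is guaranteed to see at least one replica that participated in the most recent write, which is why the version comparison suffices.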
IEC 61499 replication for fault-tolerant systems
IEC 61499 was developed with the new generation of distributed control and automation systems in mind. It provides essential resources for developing distributed systems, such as encapsulation, portability and reconfiguration. In this context, and to ensure dependable operation, fault tolerance techniques should be implemented to deal with hardware failures and software errors on the nodes where the distributed application runs. In this paper, we propose an approach to fault tolerance in distributed systems, based on a software/hardware replication model as a means to achieve dependable operation.
Optimistic replication
Data replication is a key technology in distributed data sharing systems, enabling higher availability and performance. This paper surveys optimistic replication algorithms that allow replica contents to diverge in the short term, in order to support concurrent work practices and to tolerate failures in low-quality communication links. The importance of such techniques is increasing as collaboration through wide-area and mobile networks becomes popular. Optimistic replication techniques are different from traditional "pessimistic" ones. Instead of synchronous replica coordination, an optimistic algorithm propagates changes in the background, discovers conflicts after they happen and reaches agreement on the final contents incrementally. We explore the solution space for optimistic replication algorithms. This paper identifies key challenges facing optimistic replication systems — ordering operations, detecting and resolving conflicts, propagating changes efficiently, and bounding replica divergence — and provides a comprehensive survey of techniques developed for addressing these challenges.
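One common technique for the conflict-detection challenge described above is the version vector; the sketch below is a minimal illustration, and the replica IDs and update flow are assumptions rather than any specific surveyed algorithm:

```python
# Conflict detection with version vectors: each replica counts the
# updates it has seen from every replica. Concurrent (conflicting)
# updates are those where neither vector dominates the other.

def dominates(a: dict, b: dict) -> bool:
    """True if vector `a` has seen every update recorded in `b`."""
    return all(a.get(replica, 0) >= count for replica, count in b.items())

def compare(a: dict, b: dict) -> str:
    if dominates(a, b) and dominates(b, a):
        return "equal"
    if dominates(a, b):
        return "a-newer"
    if dominates(b, a):
        return "b-newer"
    return "conflict"  # concurrent updates: resolution is needed

# Replicas A and B update independently from a common ancestor.
ancestor = {"A": 1, "B": 1}
at_a = {"A": 2, "B": 1}   # A applied one local update
at_b = {"A": 1, "B": 2}   # B applied one local update

assert compare(at_a, ancestor) == "a-newer"
assert compare(at_a, at_b) == "conflict"
```

This matches the optimistic pattern in the abstract: updates are applied locally first, and the conflict is discovered only after the vectors are exchanged in the background.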
Techniques for building highly available distributed file systems
This paper analyzes recent research in the field of distributed file systems, with a particular emphasis on the problem of high availability. Several of the techniques involved in building such a system are discussed individually: naming, replication, multiple versions, caching, stashing, and logging. These techniques range from extensions of ideas used in centralized file systems, through new notions already in use, to radical ideas that have not yet been implemented. A number of working and proposed systems are described in conjunction with the analysis of each technique. The paper concludes that a low degree of replication, a liberal use of client and server caching, and optimistic behavior in the face of network partition are all necessary to ensure high availability.
Epcast: Controlled Dissemination in Human-based Wireless Networks by means of Epidemic Spreading Models
Epidemics-inspired techniques have received huge attention in recent years from the distributed systems and networking communities. These algorithms and protocols rely on probabilistic message replication and redundancy to ensure reliable communication. Moreover, they have been successfully exploited to support group communication in distributed systems, broadcasting, multicasting and information dissemination in fixed and mobile networks. However, in most of the existing work, the probability of infection is determined heuristically, without relying on any analytical model. This often leads to unnecessarily high transmission overheads.
In this paper we show that models of epidemic spreading in complex networks can be applied to the problem of tuning and controlling the dissemination of information in wireless ad hoc networks composed of devices carried by individuals, i.e., human-based networks. The novelty of our idea resides in the evaluation and exploitation of the structure of the underlying human network for the automatic tuning of the dissemination process in order to improve the protocol performance. We evaluate the results using synthetic mobility models and real human contact traces.
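The probabilistic message replication described above can be sketched as a simple gossip simulation on a contact graph. The graph, infection probability, and round structure below are illustrative assumptions, not the paper's actual model:

```python
# Illustrative epidemic dissemination on a contact graph: in each round,
# every "infected" (informed) node passes the message to each neighbour
# with probability beta. Tuning beta to the graph structure is exactly
# what an analytical spreading model would inform.
import random

def epidemic_round(graph, infected, beta, rng):
    """One gossip round of probabilistic message replication."""
    newly = set()
    for node in infected:
        for nbr in graph[node]:
            if nbr not in infected and rng.random() < beta:
                newly.add(nbr)
    return infected | newly

def disseminate(graph, seed, beta, rounds=20, rng=None):
    rng = rng or random.Random(42)
    infected = {seed}
    for _ in range(rounds):
        infected = epidemic_round(graph, infected, beta, rng)
    return infected

# Tiny ring-shaped contact graph of 8 devices (illustrative).
graph = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}

# With beta = 1.0 (deterministic flooding) every node is reached.
assert disseminate(graph, seed=0, beta=1.0) == set(range(8))
```

Lowering `beta` below 1.0 trades coverage against transmission overhead; the paper's contribution is choosing that probability from an epidemic model of the human contact structure rather than heuristically.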