Algorithms for Replica Placement in High-Availability Storage
A new model of causal failure is presented and used to solve a novel replica
placement problem in data centers. The model describes dependencies among
system components as a directed graph. A replica placement is defined as a
subset of vertices in such a graph. A criterion for optimizing replica
placements is formalized and explained. In this work, the optimization goal is
to avoid choosing placements in which a single failure event is likely to wipe
out multiple replicas. Using this criterion, a fast algorithm is given for the
scenario in which the dependency model is a tree. The main contribution of the
paper is a dynamic programming algorithm for placing
replicas on a tree. This algorithm exhibits the
interesting property that only two subproblems need to be recursively
considered at each stage. A greedy algorithm is also briefly
reported.
Comment: 22 pages, 7 figures, 4 algorithm listings
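The correlated-failure criterion on a tree can be illustrated with a short sketch. The assumptions here (internal nodes as failure events, leaves as servers, a failing node wiping out its whole subtree, and a round-robin spread across sibling subtrees) are illustrative stand-ins, not the paper's dynamic program:

```python
# Hypothetical sketch: internal tree nodes are failure events, leaves are
# servers, and a failing node destroys every replica in its subtree.
# Interleaving replicas across sibling subtrees keeps any single failure
# from wiping out several replicas at once.

def place_replicas(tree, root, k):
    """Pick up to k leaves under `root`, round-robin across child subtrees
    so a correlated failure hits as few replicas as possible."""
    children = tree.get(root, [])
    if not children:               # a leaf is a physical server
        return [root][:k]
    # Spread within each subtree recursively, then interleave across them.
    per_child = [place_replicas(tree, c, k) for c in children]
    placement, i = [], 0
    while len(placement) < k and any(per_child):
        bucket = per_child[i % len(per_child)]
        if bucket:
            placement.append(bucket.pop(0))
        i += 1
    return placement
```

For example, with two racks each holding two servers, two replicas land in different racks, so neither single rack failure destroys both.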
Tolerating Correlated Failures in Massively Parallel Stream Processing Engines
Fault-tolerance techniques for stream processing engines can be categorized
into passive and active approaches. A typical passive approach periodically
checkpoints a processing task's runtime states and can recover a failed task by
restoring its runtime state using its latest checkpoint. On the other hand, an
active approach usually employs backup nodes to run replicated tasks. Upon
failure, the active replica can take over the processing of the failed task
with minimal latency. However, both approaches have their own inadequacies in
Massively Parallel Stream Processing Engines (MPSPE). The passive approach
incurs a long recovery latency especially when a number of correlated nodes
fail simultaneously, while the active approach requires extra replication
resources. In this paper, we propose a new fault-tolerance framework, which is
Passive and Partially Active (PPA). In a PPA scheme, the passive approach is
applied to all tasks while only a selected set of tasks will be actively
replicated. The number of actively replicated tasks depends on the available
resources. If tasks without active replicas fail, tentative outputs will be
generated before the completion of the recovery process. We also propose
effective and efficient algorithms to optimize a partially active replication
plan to maximize the quality of tentative outputs. We implemented PPA on top of
Storm, an open-source MPSPE, and conducted extensive experiments using both
real and synthetic datasets to verify the effectiveness of our approach.
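The core budgeting decision in a partially active scheme can be sketched with a simple greedy selection. The per-task quality gains, costs, and the density ordering below are illustrative assumptions, not the paper's optimization algorithm:

```python
# Hypothetical sketch of choosing which tasks receive an active replica
# under a replication-resource budget, greedily maximizing the tentative
# output quality gained per unit of cost. All other tasks fall back to
# passive checkpointing.

def pick_active_replicas(tasks, budget):
    """tasks: list of (name, quality_gain, cost) tuples.
    Returns the names of tasks chosen for active replication."""
    chosen = []
    # Highest gain density first; take a task whenever the budget allows.
    for name, gain, cost in sorted(tasks, key=lambda t: t[1] / t[2],
                                   reverse=True):
        if cost <= budget:
            chosen.append(name)
            budget -= cost
    return chosen
```

A 0/1 greedy like this is not optimal in general (the selection is knapsack-like), but it conveys how the number of actively replicated tasks scales with the available resources.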
Adaptive Replication in Distributed Content Delivery Networks
We address the problem of content replication in large distributed content
delivery networks, composed of a data center assisted by many small servers
with limited capabilities and located at the edge of the network. The objective
is to optimize the placement of contents on the servers to offload as much as
possible the data center. We model the system constituted by the small servers
as a loss network, each loss corresponding to a request to the data center.
Based on large system / storage behavior, we obtain an asymptotic formula for
the optimal replication of contents and propose adaptive schemes related to
those encountered in cache networks but reacting here to loss events, and
faster algorithms generating virtual events at higher rate while keeping the
same target replication. We show through simulations that our adaptive schemes
significantly outperform standard replication strategies in terms of both loss
rates and adaptation speed.
Comment: 10 pages, 5 figures
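The loss-driven adaptation loop can be sketched as follows. The capacity model, the random server choice, and the eviction rule are illustrative assumptions, not the schemes analyzed above:

```python
import random

# Hypothetical sketch of a loss-driven adaptive replication loop: whenever a
# request for an item cannot be served at the edge (a "loss", i.e. a hit on
# the data center), the item gains a replica on some small server, possibly
# evicting a resident item. Popular items thus accumulate replicas over time.

def on_loss(servers, capacity, item, rng=random):
    """servers: list of sets of items held by each edge server."""
    s = rng.choice(servers)
    if item in s:
        return                     # already replicated here
    if len(s) >= capacity:
        s.discard(rng.choice(sorted(s)))   # evict an arbitrary resident item
    s.add(item)
```

Generating extra "virtual" loss events, as the faster algorithms above do, would amount to invoking this update more often per real loss while leaving the stationary replication target unchanged.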
Data Replication in Distributed Systems Using Olympiad Optimization Algorithm
Achieving timely access to data objects is a major challenge in big distributed systems like Internet of Things (IoT) platforms. Minimizing data read and write times in distributed systems has therefore become a high priority for system designers. Replication, together with the placement of replicas on the most accessible data servers, is an NP-complete optimization problem. The key objectives of the current study are minimizing data access time, reducing the number of replicas, and improving data availability. This paper employs the Olympiad Optimization Algorithm (OOA), a novel population-based discrete heuristic, to solve the replica placement problem; the algorithm is also applicable to other fields such as mechanical and computer engineering design problems. It is inspired by the learning process of student groups preparing for Olympiad exams. The proposed algorithm, which is divide-and-conquer-based with local and global search strategies, was used to solve the replica placement problem in a standard simulated distributed system. The European Union Database (EUData), which contains 28 server nodes connected in a complete graph, was employed to evaluate the proposed algorithm. The proposed technique reduces data access time by 39% with around six replicas, which is vastly superior to earlier methods. Moreover, the standard deviation across the algorithm's different executions is approximately 0.0062, lower than that of the other techniques in the same experiments.
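A population-based discrete search of this general flavor can be sketched briefly. This is not the OOA from the paper: the cost function, move rules, and parameters below are illustrative, showing only the shared skeleton of candidates improving via local moves and imitation of the best member:

```python
import random

# Heavily hedged sketch of a population-based discrete search for replica
# placement. A candidate is a bit vector over the nodes (1 = place a replica
# there). Each round, every candidate tries a local move (flip one bit) plus
# a global move (copy one bit from the current best) and keeps the trial
# only if it lowers the cost.

def search(num_nodes, cost, pop_size=20, iters=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(num_nodes)]
           for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(iters):
        for cand in pop:
            trial = cand[:]
            trial[rng.randrange(num_nodes)] ^= 1       # local: flip a bit
            j = rng.randrange(num_nodes)
            trial[j] = best[j]                         # global: copy leader
            if cost(trial) < cost(cand):
                cand[:] = trial
        best = min(pop + [best], key=cost)
    return best
```

In a real setting `cost` would estimate data access time and replica count over the 28-node topology; any such scoring plugs in unchanged.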