Resilient Backhaul Network Design Using Hybrid Radio/Free-Space Optical Technology
Radio-frequency (RF) technology is a scalable solution for backhaul
planning, but its performance is limited in terms of data rate and latency.
Free-space optical (FSO) backhaul, on the other hand, offers a higher
data rate but is sensitive to weather conditions. To combine the advantages of
RF and FSO backhauls, this paper proposes a cost-efficient backhaul network
using the hybrid RF/FSO technology. To ensure a resilient backhaul, the paper
imposes a given degree of redundancy by connecting each node through
link-disjoint paths so as to cope with potential link failures. Hence, the
network planning problem considered in this paper is the one of minimizing the
total deployment cost by choosing the appropriate link type, i.e., either
hybrid RF/FSO or optical fiber (OF), between each pair of base stations while
guaranteeing link-disjoint connections, a data rate target, and a
reliability threshold. The paper solves the problem using graph theory
techniques. It reformulates the problem as a maximum weight clique problem in
the planning graph, under a specified realistic assumption about the cost of OF
and hybrid RF/FSO links. Simulation results show the costs of the different
planning strategies and suggest that the proposed heuristic solution achieves
close-to-optimal performance with a significant gain in computational complexity.
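The paper reformulates the planning problem as a maximum weight clique problem, which is NP-hard in general and typically tackled heuristically. The sketch below is a generic greedy heuristic for maximum weight clique, not the paper's actual algorithm; the node names, weights, and compatibility relation are illustrative assumptions (in the paper's setting, nodes would encode candidate link deployments, edges mutual compatibility, and weights would reflect deployment cost).

```python
def greedy_max_weight_clique(nodes, weight, adjacent):
    """Greedy heuristic for maximum weight clique: scan nodes in
    decreasing weight order, adding each node that is adjacent to
    every node already in the clique."""
    clique = []
    for v in sorted(nodes, key=weight, reverse=True):
        if all(adjacent(v, u) for u in clique):
            clique.append(v)
    return clique

# Toy instance (illustrative, not the paper's construction): nodes are
# candidate link choices, edges connect mutually compatible choices.
nodes = ["a", "b", "c", "d"]
weights = {"a": 4, "b": 3, "c": 2, "d": 1}
edges = {frozenset(e) for e in [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]}

def adjacent(u, v):
    return frozenset((u, v)) in edges

clique = greedy_max_weight_clique(nodes, weights.get, adjacent)
```

A greedy pass like this runs in O(n^2) adjacency checks, which is the kind of complexity gain over exact clique search that the abstract alludes to, at the price of losing optimality guarantees.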
A survey and classification of storage deduplication systems
The automatic elimination of duplicate data in a storage system, commonly known as deduplication, is increasingly accepted as an effective technique to reduce storage costs. Thus, it has been applied to different storage types, including archives and backups, primary storage, within solid state disks, and even to random access memory. Although the general approach to deduplication is shared by all storage types, each poses specific challenges and leads to different trade-offs and solutions. This diversity is often misunderstood, leading the relevance of new research and development to be underestimated.
The first contribution of this paper is a classification of deduplication systems according to six criteria that correspond to key design decisions: granularity, locality, timing, indexing, technique, and scope.
This classification identifies and describes the different approaches used for each of them. As a second contribution, we describe which combinations of these design decisions have been proposed and found more useful for challenges in each storage type. Finally, outstanding research challenges and unexplored design points are identified and discussed. This work is funded by the European Regional Development Fund (ERDF) through the COMPETE Programme (Operational Programme for Competitiveness) and by national funds through the Fundação para a Ciência e a Tecnologia (FCT, Portuguese Foundation for Science and Technology) within project RED FCOMP-01-0124-FEDER-010156, and by FCT PhD scholarship SFRH-BD-71372-2010.
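Two of the classification's axes, granularity (how data is split into units) and indexing (how fingerprints are looked up), can be illustrated with a toy fixed-size-chunk store. This is a minimal sketch under assumed design choices (fixed-size chunking, full in-memory SHA-256 index), not a description of any system surveyed in the paper.

```python
import hashlib

class DedupStore:
    """Toy deduplication store: fixed-size chunking (granularity) with a
    full in-memory fingerprint index (indexing). Illustrative only."""

    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.index = {}  # SHA-256 digest -> chunk bytes; each chunk stored once

    def write(self, data: bytes):
        """Store data, returning a 'recipe' of chunk fingerprints."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.index.setdefault(digest, chunk)  # skip duplicate chunks
            recipe.append(digest)
        return recipe

    def read(self, recipe):
        """Reassemble data from a recipe of fingerprints."""
        return b"".join(self.index[d] for d in recipe)

store = DedupStore(chunk_size=4)
recipe = store.write(b"abcdabcdxyz1")  # two identical chunks, one unique
```

Varying the design decisions the paper classifies, e.g. content-defined instead of fixed-size chunking, or a partial index with locality-based prefetching instead of a full index, changes the trade-off between deduplication ratio and memory or throughput cost.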
Alpha Entanglement Codes: Practical Erasure Codes to Archive Data in Unreliable Environments
Data centres that use consumer-grade disk drives and distributed
peer-to-peer systems are unreliable environments to archive data without enough
redundancy. Most redundancy schemes are not completely effective for providing
high availability, durability and integrity in the long-term. We propose alpha
entanglement codes, a mechanism that creates a virtual layer of highly
interconnected storage devices to propagate redundant information across a
large scale storage system. Our motivation is to design flexible and practical
erasure codes with high fault-tolerance to improve data durability and
availability even in catastrophic scenarios. By flexible and practical, we mean
code settings that can be adapted to future requirements and practical
implementations with reasonable trade-offs between security, resource usage and
performance. The codes have three parameters. Alpha increases storage overhead
linearly but increases the possible paths to recover data exponentially. Two
other parameters increase fault-tolerance even further without the need for
additional storage. As a result, an entangled storage system can provide high
availability, durability and offer additional integrity: it is more difficult
to modify data undetectably. We evaluate how several redundancy schemes perform
in unreliable environments and show that alpha entanglement codes are flexible
and practical codes. Remarkably, they excel at code locality; hence, they
reduce repair costs and become less dependent on storage locations with poor
availability. Our solution outperforms Reed-Solomon codes in many disaster
recovery scenarios.
Comment: 12 pages, 13 figures. This work was partially supported by Swiss
National Science Foundation SNSF Doc.Mobility 162014. 2018 48th Annual
IEEE/IFIP International Conference on Dependable Systems and Networks (DSN
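The core intuition behind entanglement, combining each new block with previously written redundant data so that recovery paths accumulate, can be shown with a simple XOR chain. This is an illustrative simplification, not the paper's alpha entanglement codes (which use more richly connected lattices governed by the three parameters); the function names are assumptions for this sketch.

```python
def entangle_chain(blocks):
    """Simple XOR entanglement chain (illustrative, NOT the paper's alpha
    entanglement codes): each stored parity XORs the new block with the
    previous parity, so redundant information propagates down the chain."""
    parity = bytes(len(blocks[0]))  # all-zero seed
    parities = []
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
        parities.append(parity)
    return parities

def recover_block(i, parities):
    """Recover lost block i from two adjacent parities: b_i = p_{i-1} XOR p_i."""
    prev = parities[i - 1] if i > 0 else bytes(len(parities[0]))
    return bytes(x ^ y for x, y in zip(prev, parities[i]))

parities = entangle_chain([b"aa", b"bb", b"cc"])
```

In this chain, any single lost block is repairable from its two neighbouring parities, which hints at the code locality the abstract highlights: repairs read only nearby redundant data rather than contacting many distant devices, as Reed-Solomon repair typically must.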
- …