
    A Case for Redundant Arrays of Hybrid Disks (RAHD)

    The Hybrid Hard Disk Drive, originally conceived by Samsung, incorporates Flash memory into a magnetic disk. The combination of the ultra-high density of magnetic storage with the low power consumption and fast read access of NAND technology inspires us to construct Redundant Arrays of Hybrid Disks (RAHD) as a possible alternative to today’s Redundant Arrays of Independent Disks (RAIDs) and/or Massive Arrays of Idle Disks (MAIDs). We first design an internal management system (including Energy-Efficient Control) for hybrid disks. Three traces collected from real systems, as well as a synthetic trace, are then used to evaluate the RAHD arrays. The trace-driven experimental results show that, in the high-speed mode, a RAHD outperforms purely-magnetic-disk-based RAIDs by a factor of 2.4–4; in the energy-efficient mode, a RAHD4/5 can save up to 89% of energy with little performance degradation.
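    The abstract does not describe the internal management system in detail; the toy sketch below is only a guess at what an energy-efficient request router for a hybrid disk could look like. All class and method names are invented for illustration, not the authors' design: reads of hot data are served from flash and writes are deferred while the spindle is down.

```python
# Illustrative toy only: a hypothetical request router for a hybrid disk,
# loosely inspired by the two operating modes mentioned in the abstract.
# Class and method names are invented, not taken from the paper.

class HybridDisk:
    def __init__(self, mode="energy_efficient"):
        self.mode = mode                       # "high_speed" or "energy_efficient"
        self.spindle_up = (mode == "high_speed")
        self.flash = {}                        # NAND region: fast reads, low power
        self.write_log = []                    # writes deferred while platters idle

    def read(self, block):
        if block in self.flash:                # hot data served from NAND
            return self.flash[block]
        self._spin_up()                        # cold data forces spindle activity
        return f"<block {block} from magnetic media>"

    def write(self, block, data):
        if self.mode == "energy_efficient" and not self.spindle_up:
            self.write_log.append((block, data))  # defer to keep platters idle
        else:
            self._spin_up()                       # write through to magnetic media
        self.flash[block] = data                  # cache in NAND either way

    def _spin_up(self):
        if not self.spindle_up:
            self.spindle_up = True             # energy cost is paid here
```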

    EVENODD: An Efficient Scheme for Tolerating Double Disk Failures in RAID Architectures

    We present a novel method, which we call EVENODD, for tolerating up to two disk failures in RAID architectures. EVENODD employs the addition of only two redundant disks and consists of simple exclusive-OR computations. This redundant storage is optimal, in the sense that two failed disks cannot be recovered with fewer than two redundant disks. A major advantage of EVENODD is that it requires only parity hardware, which is typically present in standard RAID-5 controllers. Hence, EVENODD can be implemented on standard RAID-5 controllers without any hardware changes. The most commonly used scheme that employs optimal redundant storage (i.e., two extra disks) is based on Reed-Solomon (RS) error-correcting codes. This scheme requires computation over finite fields and results in a more complex implementation. For example, we show that the complexity of implementing EVENODD in a disk array with 15 disks is about 50% of that required when using the RS scheme. The new scheme is not limited to RAID architectures: it can be used in any system requiring large symbols and relatively short codes, for instance, in multitrack magnetic recording. To this end, we also present a decoding algorithm for one column (track) in error.
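    As a concrete illustration of the XOR-only encoding summarized above, the sketch below computes the two parity columns of the (p-1) x (p+2) EVENODD array following the published construction; the function name and demo data are ours, symbols are represented as integers XORed bitwise, and the indexing should be checked against the paper before any real use.

```python
# Minimal sketch of EVENODD encoding for a (p-1) x (p+2) array, p prime:
# column p holds row parity, column p+1 holds diagonal parity adjusted by
# the "S" term. Only XOR is used; decoding/recovery is not shown.
from functools import reduce

def evenodd_encode(data, p):
    """data: (p-1) rows x p columns of integers (bit patterns).
    Returns (row_parity, diag_parity), each a list of length p-1."""
    assert len(data) == p - 1 and all(len(row) == p for row in data)

    def a(i, j):
        return 0 if i == p - 1 else data[i][j]   # imaginary all-zero row p-1

    # Row parity: a(l, p) = XOR_t a(l, t)
    row_parity = [reduce(lambda x, y: x ^ y, (a(l, t) for t in range(p)))
                  for l in range(p - 1)]

    # Adjuster S: parity of the diagonal through the imaginary row
    S = reduce(lambda x, y: x ^ y, (a(p - 1 - t, t) for t in range(1, p)))

    # Diagonal parity: a(l, p+1) = S XOR (XOR_t a((l - t) mod p, t))
    diag_parity = [S ^ reduce(lambda x, y: x ^ y,
                              (a((l - t) % p, t) for t in range(p)))
                   for l in range(p - 1)]
    return row_parity, diag_parity

# Example: p = 5, i.e. four rows of five data symbols plus two parity columns
rows, cols = 4, 5
demo = [[(i * cols + j) & 0xFF for j in range(cols)] for i in range(rows)]
print(evenodd_encode(demo, 5))
```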

    Redundancy and Aging of Efficient Multidimensional MDS-Parity Protected Distributed Storage Systems

    The effect of redundancy on the aging of an efficient Maximum Distance Separable (MDS) parity-protected distributed storage system that consists of multidimensional arrays of storage units is explored. In light of experimental evidence and survey data, this paper develops generalized expressions for the reliability of array storage systems based on more realistic time-to-failure distributions such as the Weibull distribution. For instance, a distributed disk array system is considered in which the array components are disseminated across the network and are subject to independent failure rates. On this basis, generalized closed-form hazard rate expressions are derived. These expressions are extended to estimate the asymptotic reliability behavior of large-scale storage networks equipped with MDS parity-based protection. Unlike previous studies, a generic hazard rate function is assumed, a generic MDS code is used for parity generation, and the implications of an adjustable redundancy level for an efficient distributed storage system are evaluated. The results of this study are applicable to any erasure-correcting code as long as it is accompanied by a suitable structure and an appropriate encoding/decoding algorithm such that the MDS property is maintained.
    Comment: 11 pages, 6 figures. Accepted for publication in IEEE Transactions on Device and Materials Reliability (TDMR), Nov. 201
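    The abstract does not reproduce the reliability expressions themselves; as a rough worked illustration, the sketch below evaluates the survival probability and a numerical hazard rate for an (n, k) MDS-protected group under the simplifying assumptions of independent, identically distributed Weibull lifetimes and no repair. The parameter values are invented for the example and are not taken from the paper.

```python
# Illustrative only: survival probability of an (n, k) MDS-protected group,
# assuming i.i.d. Weibull unit lifetimes and no repair. The paper derives more
# general hazard-rate expressions; the parameters below are made up.
import math
from math import comb, exp

def weibull_reliability(t, eta, beta):
    """P(unit still alive at time t) for a Weibull(eta, beta) lifetime."""
    return exp(-(t / eta) ** beta)

def mds_group_reliability(t, n, k, eta, beta):
    """An (n, k) MDS code survives as long as at most n - k units have failed."""
    r = weibull_reliability(t, eta, beta)
    return sum(comb(n, i) * (1 - r) ** i * r ** (n - i) for i in range(n - k + 1))

def hazard_rate(t, n, k, eta, beta, dt=1e-3):
    """Numerical hazard rate h(t) = -d/dt ln R(t) for the protected group."""
    r0 = mds_group_reliability(t, n, k, eta, beta)
    r1 = mds_group_reliability(t + dt, n, k, eta, beta)
    return -(math.log(r1) - math.log(r0)) / dt

# Example: 14 units (10 data + 4 parity), characteristic life 10 years, beta = 1.2
t_years = 4.0
print(mds_group_reliability(t_years, n=14, k=10, eta=10.0, beta=1.2))
print(hazard_rate(t_years, n=14, k=10, eta=10.0, beta=1.2))
```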

    Alpha Entanglement Codes: Practical Erasure Codes to Archive Data in Unreliable Environments

    Data centres that use consumer-grade disk drives, and distributed peer-to-peer systems, are unreliable environments in which to archive data without enough redundancy. Most redundancy schemes are not completely effective at providing high availability, durability and integrity in the long term. We propose alpha entanglement codes, a mechanism that creates a virtual layer of highly interconnected storage devices to propagate redundant information across a large-scale storage system. Our motivation is to design flexible and practical erasure codes with high fault tolerance to improve data durability and availability even in catastrophic scenarios. By flexible and practical, we mean code settings that can be adapted to future requirements and practical implementations with reasonable trade-offs between security, resource usage and performance. The codes have three parameters. Alpha increases storage overhead linearly but increases the possible paths to recover data exponentially. The two other parameters increase fault tolerance even further without the need for additional storage. As a result, an entangled storage system can provide high availability and durability and offer additional integrity: it is more difficult to modify data undetectably. We evaluate how several redundancy schemes perform in unreliable environments and show that alpha entanglement codes are flexible and practical codes. Remarkably, they excel at code locality; hence, they reduce repair costs and become less dependent on storage locations with poor availability. Our solution outperforms Reed-Solomon codes in many disaster recovery scenarios.
    Comment: The publication has 12 pages and 13 figures. This work was partially supported by Swiss National Science Foundation SNSF Doc.Mobility 162014. 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)

    Low-Complexity Codes for Random and Clustered High-Order Failures in Storage Arrays

    RC (Random/Clustered) codes are a new efficient array-code family for recovering from 4-erasures. RC codes correct most 4-erasures, and essentially all 4-erasures that are clustered. Clustered erasures are introduced as a new erasure model for storage arrays. This model draws its motivation from correlated device failures caused by the physical proximity of devices, or by the age proximity of endurance-limited solid-state drives. The reliability of storage arrays that employ RC codes is analyzed and compared to known codes. The new RC code is significantly more efficient, in all practical implementation factors, than the best known 4-erasure-correcting MDS code. These factors include small-write update complexity, full-device update complexity, decoding complexity and the number of supported devices in the array.

    Self-Repairing Disk Arrays

    As the prices of magnetic storage continue to decrease, the cost of replacing failed disks becomes increasingly dominated by the cost of the service call itself. We propose to eliminate these calls by building disk arrays that contain enough spare disks to operate without any human intervention during their whole lifetime. To evaluate the feasibility of this approach, we have simulated the behavior of two-dimensional disk arrays with n parity disks and n(n-1)/2 data disks under realistic failure and repair assumptions. Our conclusion is that having n(n+1)/2 spare disks is more than enough to achieve a 99.999 percent probability of not losing data over four years. We observe that the same objectives cannot be reached with RAID level 6 organizations and would instead require RAID stripes that tolerate triple disk failures.
    Comment: Part of ADAPT Workshop proceedings, 2015 (arXiv:1412.2347)
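    The authors' simulator is not included in the abstract; the sketch below is a deliberately crude Monte Carlo of one ingredient of the argument, namely how often a pool of n(n+1)/2 spares would be exhausted within four years. It assumes exponential disk lifetimes with a made-up MTTF, instant replacement from the spare pool, and ignores the array's two-dimensional parity recovery rules entirely.

```python
# Deliberately simplified Monte Carlo, not the authors' simulator: it only
# estimates how often n(n+1)/2 spares run out within the mission time, under
# exponential lifetimes with an invented MTTF and instant in-place replacement.
import random

def spares_exhausted_probability(n, mttf_hours=1_000_000, years=4, trials=100_000):
    active = n + n * (n - 1) // 2          # n parity disks + n(n-1)/2 data disks
    spares = n * (n + 1) // 2              # proposed on-board spare pool
    horizon = years * 365 * 24             # mission time in hours
    rate = 1.0 / mttf_hours                # exponential failure rate per disk

    exhausted = 0
    for _ in range(trials):
        clock, pool = 0.0, spares
        while True:
            # time to the next failure among all currently active disks
            clock += random.expovariate(active * rate)
            if clock > horizon:
                break
            pool -= 1                      # failed disk swapped for a spare
            if pool < 0:
                exhausted += 1
                break
    return exhausted / trials

# Example for n = 8: 36 active disks backed by 36 spares
print(spares_exhausted_probability(n=8))
```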

    Graph Theoretic Modeling: Case Studies In Redundant Arrays Of Independent Disks And Network Defense

    Graph-theoretic modeling has served as an invaluable tool for solving a variety of problems since its introduction in Euler's paper on the Bridges of Königsberg in 1736. Two of contemporary interest amongst them are the modeling of Redundant Arrays of Inexpensive Disks (RAID) and the identification of network attacks. While the former is vital to the protection and uninterrupted availability of data, the latter is crucial to the integrity of the systems comprising networks. Both are of practical importance due to the continuing growth of data and the demand for it at an increasing number of geographically distributed locations through the use of networks such as the Internet.
    The popularity of RAID has soared because of the enhanced I/O bandwidths and large capacities it offers at low cost. However, the demand for larger capacities has led to the use of larger arrays with an increased probability of random disk failures. This has motivated the need for RAID systems that tolerate two or more disk failures without sacrificing performance or storage space. To this end, we shall first perform a comparative study of the existing techniques that achieve this objective. Next, we shall devise novel graph-theoretic algorithms for placing data and parity in arrays of n disks (n ≥ 3) that can recover from two random disk failures, for n = p - 1, n = p and n = 2p - 2, where p is a prime number. Each shall be shown to utilize an optimal ratio of space for storing parity. We shall also show how to extend the algorithms to arrays with an arbitrary number of disks, albeit with non-optimal values for the aforementioned ratio.
    The growth of the Internet has led to the increased proliferation of malignant applications seeking to breach the security of networked systems. Hence, considerable effort has been focused on detecting and predicting the attacks they perpetrate. However, the enormity of the Internet poses a challenge to representing and analyzing them with scalable models. Furthermore, forecasting the systems they are likely to exploit in the future is difficult due to the unavailability of complete information on network vulnerabilities. We shall present a technique that identifies attacks on large networks using a scalable model, while filtering for false positives and negatives. Furthermore, it also forecasts the propagation over time of the security failures proliferated by attacks, and their likely targets in the future.