Self-Repairing Disk Arrays
As the prices of magnetic storage continue to decrease, the cost of replacing
failed disks becomes increasingly dominated by the cost of the service call
itself. We propose to eliminate these calls by building disk arrays that
contain enough spare disks to operate without any human intervention during
their whole lifetime. To evaluate the feasibility of this approach, we have
simulated the behavior of two-dimensional disk arrays with n parity disks and
n(n-1)/2 data disks under realistic failure and repair assumptions. Our
conclusion is that having n(n+1)/2 spare disks is more than enough to achieve a
99.999 percent probability of not losing data over four years. We observe that
the same objectives cannot be reached with RAID level 6 organizations and would
require RAID stripes that could tolerate triple disk failures.
Comment: Part of ADAPT Workshop proceedings, 2015 (arXiv:1412.2347)
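As a rough feasibility check on this sizing, a minimal Monte Carlo sketch (my own, not the authors' simulator) can estimate how often a pool of n(n+1)/2 spares would be exhausted within four years under independent exponential disk lifetimes. It ignores the two-dimensional parity layout, repair windows, and failures of replacement disks, so it tracks spare consumption rather than data loss itself; the MTTF value is an assumed placeholder.

    import random

    def spare_exhaustion_prob(n, mttf_hours=1_000_000, years=4, trials=100_000):
        active = n + n * (n - 1) // 2   # n parity disks + n(n-1)/2 data disks
        spares = n * (n + 1) // 2       # spare pool sized as in the paper
        horizon = years * 365 * 24      # mission length in hours
        rate = 1.0 / mttf_hours         # exponential failure rate per disk
        exhausted = 0
        for _ in range(trials):
            # Count disks whose memoryless lifetime ends inside the mission.
            failures = sum(1 for _ in range(active)
                           if random.expovariate(rate) < horizon)
            if failures > spares:       # crude: each failure consumes one spare
                exhausted += 1
        return exhausted / trials

    print(spare_exhaustion_prob(n=5))   # 5 parity, 10 data, 15 spare disks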
Redundant Arrays of IDE Drives
The next generation of high-energy physics experiments is expected to gather
prodigious amounts of data. New methods must be developed to handle this data
and make analysis at universities possible. We examine some techniques that use
recent developments in commodity hardware. We test redundant arrays of
integrated drive electronics (IDE) disk drives for use in offline high-energy
physics data analysis. The cost per terabyte of IDE redundant arrays of inexpensive disks (RAID) now equals that of million-dollar tape robots! The arrays can be scaled to sizes affordable to institutions without robots and used when fast random access at low cost is important. We also explore three methods of moving data between sites: Internet transfers, hot-pluggable IDE disks in FireWire cases, and writable digital video disks (DVD-R).
Comment: Submitted to IEEE Transactions On Nuclear Science, for the 2001 IEEE Nuclear Science Symposium and Medical Imaging Conference, 8 pages, 1 figure, uses IEEEtran.cls. Revised March 19, 2002 and published August 2002
Redundancy and Aging of Efficient Multidimensional MDS-Parity Protected Distributed Storage Systems
The effect of redundancy on the aging of an efficient Maximum Distance Separable (MDS) parity-protected distributed storage system that consists of multidimensional arrays of storage units is explored. In light of experimental evidence and survey data, this paper develops generalized expressions for the reliability of array storage systems based on more realistic time-to-failure distributions such as the Weibull. For instance, a distributed disk array system is considered in which the array components are disseminated across the network and are subject to independent failures. Based on this model, generalized closed-form hazard rate expressions are derived. These expressions are extended to estimate the asymptotic reliability behavior of large-scale storage networks equipped with MDS parity-based protection. Unlike previous studies, a generic hazard rate function is assumed, a generic MDS code is used for parity generation, and the implications of an adjustable redundancy level for an efficient distributed storage system are evaluated. The results of this study apply to any erasure correction code as long as it is accompanied by a suitable structure and an appropriate encoding/decoding algorithm such that the MDS property is maintained.
Comment: 11 pages, 6 figures, Accepted for publication in IEEE Transactions on Device and Materials Reliability (TDMR), Nov. 201
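For intuition, the reliability of an (n, k) MDS-protected array of units with independent, identically distributed Weibull lifetimes is the probability that at most n - k units fail by time t. The sketch below (mine; the parameter values are assumed, and it omits the paper's repair and network aspects) evaluates that basic formula.

    from math import comb, exp

    def weibull_reliability(t, scale, shape):
        """Component survival function R(t) = exp(-(t/scale)^shape)."""
        return exp(-((t / scale) ** shape))

    def mds_system_reliability(t, n, k, scale, shape):
        """An (n, k) MDS-protected array survives while at most n - k
        of its i.i.d. Weibull units have failed (no repair assumed)."""
        r = weibull_reliability(t, scale, shape)
        f = 1.0 - r
        return sum(comb(n, i) * f**i * r**(n - i) for i in range(n - k + 1))

    # Assumed example: 12 units tolerating 2 failures, evaluated at
    # t = 4 years with Weibull scale 10 years and shape 1.2.
    print(mds_system_reliability(4.0, n=12, k=10, scale=10.0, shape=1.2))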
Multi-Terabyte EIDE Disk Arrays running Linux RAID5
High-energy physics experiments are currently recording large amounts of data and will be recording prodigious quantities within a few years. New methods must be developed to handle this data and make analysis at universities
possible. Grid Computing is one method; however, the data must be cached at the
various Grid nodes. We examine some storage techniques that exploit recent
developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5)
include both parity and striping. The striping improves access speed. The
parity protects data in the event of a single disk failure, but not in the case
of multiple disk failures.
We report on tests of dual-processor Linux Software RAID-5 arrays and
Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with
250 and 300 GB disks, for use in offline high-energy physics data analysis. The
price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be
scaled to sizes affordable to small institutions and used when fast random
access at low cost is important.
Comment: Talk from the 2004 Computing in High Energy and Nuclear Physics (CHEP04), Interlaken, Switzerland, 27th September - 1st October 2004, 4 pages, LaTeX, uses CHEP2004.cls. ID 47, Poster Session 2, Track
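The single-failure protection described above rests on XOR parity: the parity block of each stripe is the XOR of its data blocks, so any one lost block equals the XOR of the survivors. A toy illustration (mine; it reflects neither the 3ware firmware nor the Linux md implementation):

    from functools import reduce

    def xor_blocks(blocks):
        """XOR equal-length blocks byte by byte."""
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    stripe = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]  # data blocks of one stripe
    parity = xor_blocks(stripe)                       # parity block

    # Lose the second disk, then rebuild its block from survivors + parity.
    rebuilt = xor_blocks([stripe[0], stripe[2], parity])
    assert rebuilt == stripe[1]
    print("rebuilt block:", rebuilt.hex())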
Low-Complexity Codes for Random and Clustered High-Order Failures in Storage Arrays
RC (Random/Clustered) codes are a new efficient array-code family for recovering from 4-erasures. RC codes correct most 4-erasures, and essentially all 4-erasures that are clustered. Clustered erasures are introduced as a new erasure model for storage arrays. This model draws its motivation from correlated device failures caused by the physical proximity of devices or by the age proximity of endurance-limited solid-state drives. The reliability of storage arrays that employ RC codes is analyzed and compared to known codes. The new RC code is significantly more efficient, in all practical implementation factors, than the best known 4-erasure-correcting MDS code. These factors include small-write update complexity, full-device update complexity, decoding complexity, and the number of supported devices in the array.
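To make the clustered-erasure model concrete, the following rough simulation (my own reading; the paper's formal definition of a cluster may differ) treats a 4-erasure as clustered when all four failed devices fit within a window of four adjacent slots, and estimates how often a uniformly random 4-erasure happens to be clustered in an array of m devices.

    import random

    def is_clustered(positions, span=4):
        """A 4-erasure is 'clustered' if it fits in `span` adjacent slots."""
        return max(positions) - min(positions) < span

    def clustered_fraction(m=32, trials=100_000):
        hits = 0
        for _ in range(trials):
            erased = random.sample(range(m), 4)  # uniformly random 4-erasure
            if is_clustered(erased):
                hits += 1
        return hits / trials

    print(clustered_fraction())  # small: random 4-erasures are rarely adjacent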