
    A RAID reconfiguration scheme for gracefully degraded operations

    One distinct advantage of Redundant Array of Independent Disks (RAID) is fault tolerance. However, the performance of a disk array in degraded mode is so poor that the array is of little practical use after a failure. Continuous operation of RAID in degraded mode is very important in many real-time applications, which cannot be interrupted while providing continuous service. In this paper, we propose an efficient architectural reconfiguration scheme, called reconfigurable RAID-5, to enhance the performance of RAID-5 in degraded mode. It reconfigures RAID-5 to RPTD-0 in degraded mode. Using this scheme, the reconstruction of the failed disk's data and the generation of parity when writing new data to the failed disk can be reduced. It also alleviates the small-write problem of RAID-5 in degraded mode. We use the phase parallel model to analyze the total execution time of conventional RAID-5 and of the reconfigurable RAID-5. Through theoretical analysis and benchmark tests, we find that the performance of the reconfigurable RAID-5 can be up to 200 times better than that of conventional RAID-5.
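
    For context, the sketch below shows the baseline cost that degraded-mode RAID-5 pays and that such a reconfiguration scheme aims to avoid: every read of a failed disk's block must XOR all surviving blocks of the stripe. This is a minimal illustration with invented names and layout, not the paper's RPTD-0 scheme.

```python
# Minimal sketch: degraded-mode read in RAID-5, i.e., rebuilding one
# stripe unit of a failed disk by XOR-ing the surviving units (data
# plus parity) of the same stripe. Layout and names are illustrative.
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def degraded_read(disks, failed, stripe):
    """Read one stripe unit from the failed disk in degraded mode.

    disks  -- list of per-disk block lists (parity already interleaved)
    failed -- index of the failed disk
    stripe -- stripe number to read
    """
    survivors = [d[stripe] for i, d in enumerate(disks) if i != failed]
    return xor_blocks(survivors)  # N-1 reads plus an XOR for one block

if __name__ == "__main__":
    data = [bytes([1, 2]), bytes([3, 4]), bytes([5, 6])]
    parity = xor_blocks(data)
    disks = [[b] for b in data + [parity]]  # 3 data disks + 1 parity disk
    assert degraded_read(disks, failed=1, stripe=0) == data[1]
```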

    Data allocation in disk arrays with multiple RAID levels

    There has been an explosion in the amount of generated data, which has to be stored reliably because it is not easily reproducible. Some datasets require frequent read and write access, as in online transaction processing applications. Others just need to be stored safely and read once in a while, as in data mining. These different access requirements can be met by using the RAID (redundant array of inexpensive disks) paradigm, i.e., RAID1 for the first situation and RAID5 for the second. Furthermore, rather than providing two disk arrays with RAID1 and RAID5 capabilities, a single controller can be postulated to emulate both. This is referred to as a heterogeneous disk array (HDA). Dedicating a subset of disks to RAID1 results in poor disk utilization, since RAID1 versus RAID5 capacity and bandwidth requirements are not known a priori. Balancing disk loads when disk space is shared among allocation requests, referred to as virtual arrays (VAs), poses a difficult problem. RAID1 disk arrays have a higher access rate per gigabyte than RAID5 disk arrays. Allocating more VAs while keeping disk utilizations balanced and within acceptable bounds is the goal of this study. Given its size and access rate, a VA's width, i.e., the number of its virtual disks (VDs), is determined. VD allocations on physical disks using vector-packing heuristics, with disk capacity and bandwidth as the two dimensions, are shown to be the best. An allocation is acceptable if it does not exceed disk capacity or overload disks, even in the presence of disk failures. When disk bandwidth rather than capacity is the bottleneck, the clustered RAID paradigm is applied, which offers a tradeoff between disk space and bandwidth. Another scenario is also considered, where the RAID level is determined by a classification algorithm utilizing the access characteristics of the VA, i.e., the fractions of small versus large accesses and of write versus read accesses. The effect of RAID1 organization on its reliability and performance is studied as well. The effect of disk failures on the X-code two-disk-failure-tolerant array is analyzed, and it is shown that the load across disks is highly unbalanced unless, in an N×N array, groups of N stripes are randomly rotated.
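
    To make the two-dimensional packing idea concrete, here is a minimal best-fit sketch that treats remaining capacity and bandwidth as the two vector components. The function name, the dictionary fields, and the balance cost are assumptions for illustration, not the dissertation's actual heuristic.

```python
# Minimal sketch of two-dimensional (capacity, bandwidth) best-fit
# allocation in the spirit of vector-packing heuristics. All names and
# the cost function are hypothetical.

def place_vd(disks, cap_need, bw_need):
    """Place one virtual disk on the physical disk that stays most balanced.

    disks -- list of dicts with remaining 'cap'/'bw' and totals 'cap0'/'bw0'
    Returns the index of the chosen disk, or None if nothing fits.
    """
    best, best_cost = None, float("inf")
    for i, d in enumerate(disks):
        if d["cap"] < cap_need or d["bw"] < bw_need:
            continue  # an allocation must not exceed capacity or overload the disk
        # cost: the larger of the two utilizations after placement
        cost = max(1 - (d["cap"] - cap_need) / d["cap0"],
                   1 - (d["bw"] - bw_need) / d["bw0"])
        if cost < best_cost:
            best, best_cost = i, cost
    return best

disks = [{"cap": 1000, "bw": 100, "cap0": 1000, "bw0": 100} for _ in range(4)]
idx = place_vd(disks, cap_need=200, bw_need=30)
if idx is not None:                      # commit the allocation
    disks[idx]["cap"] -= 200
    disks[idx]["bw"] -= 30
```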

    A Tutorial on RAID Storage Systems

    RAID storage systems have been in use since the early 1990s. Recently, however, as the demand for huge amounts of online storage has increased, RAID has once again come into focus. This report reviews the history of RAID, as well as where and how RAID systems fit in the storage hierarchy of an Enterprise Computing System. We describe the known RAID configurations and the advantages and disadvantages of each. Since the focus of our research is on the performance of RAID systems, we devote a section to the various factors which affect RAID performance. Modelling RAID systems for performance analysis is the topic of the next section, where we report on the issues and briefly describe one simulator, RAIDframe, which has been developed. We conclude with a section that describes the current open research questions in the area.

    Scheduling policies for disks and disk arrays

    Recent rapid advances in magnetic recording technology have enabled substantial increases in disk capacity, yet there has been less than 10% annual improvement in the random access time to small data blocks on the disk. Such accesses are very common in OLTP applications, which tend to have stringent response time requirements. Scheduling of disk requests is intended to improve their response time, reduce disk service time, and increase disk access bandwidth with respect to the default FCFS scheduling policy. The Shortest Access Time First (SATF) policy has been shown to outperform other classical disk scheduling policies in numerous studies. Before verifying this conclusion, this dissertation develops an empirical analysis of the SATF policy, and produces a valuable by-product, expressed as x[m] = mp, during the study. Classical scheduling policies and some well-known variations of the SATF policy are re-evaluated, and three extensions are proposed. The performance evaluation uses self-developed simulators containing detailed disk information. The simulators, driven with both synthetic and trace workloads, report measurements of requests, such as the mean and the 95th percentile of response times, as well as measurements of the system, such as the maximum throughput. A comprehensive arrangement of routing and scheduling schemes is presented for mirrored disk systems, or RAID1. The performance evaluation is based on a two-dimensional configuration classification: independent queues (i.e., a router sends requests to one of the disks as soon as they arrive) versus a shared queue (i.e., requests are held in a common queue at the router and scheduled to be served); and normal data layout versus transposed data layout (i.e., the data stored on the inner cylinders of one disk is duplicated on the outer cylinders of the mirrored disk). The availability of non-volatile storage (NVS), which allows the processing of write requests to be deferred, is also investigated. Finally, various strategies of mirrored disk declustering are compared against basic disk mirroring. Their load balancing ability and their reliability are examined in both normal and degraded mode.
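
    The core SATF idea is a greedy pick of the queued request with the smallest estimated positioning time. The toy sketch below uses a linear seek model and a fixed rotation rate as placeholder assumptions; the dissertation's simulators use far more detailed disk information.

```python
# Toy illustration of Shortest Access Time First (SATF): serve the
# pending request with the smallest estimated seek-plus-rotation time.
# The linear seek model and parameters are placeholders.

def positioning_time(head_cyl, head_angle, req, seek_per_cyl=0.0001, rpm=10000):
    rotation = 60.0 / rpm                       # seconds per revolution
    seek = abs(req["cyl"] - head_cyl) * seek_per_cyl
    # rotational latency: wait until the target sector passes under the head
    angle_after_seek = (head_angle + seek / rotation) % 1.0
    wait = (req["angle"] - angle_after_seek) % 1.0   # fraction of a revolution
    return seek + wait * rotation

def satf_pick(queue, head_cyl, head_angle):
    return min(queue, key=lambda r: positioning_time(head_cyl, head_angle, r))

queue = [{"cyl": 120, "angle": 0.30}, {"cyl": 100, "angle": 0.90}]
print(satf_pick(queue, head_cyl=100, head_angle=0.25))
```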

    Efficient data mappings for parity-declustered data layouts

    The joint demands of high performance and fault tolerance in a large array of disks can be satisfied by a parity-declustered data layout. Such a data layout is generated by partitioning the data on the disks into stripes and choosing a part of each stripe to hold redundant information. Thus the data layout can be represented as a table of stripes. The data mapping problem is the problem of translating a data address into a disk identifier and an offset on that disk. Recent work has yielded mappings that compute disks and offsets directly from data addresses without the need to store tables. In this paper, we show that parity-declustered data layouts based on commutative rings yield mappings with improved computational efficiency and wider applicability.
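
    The sketch below illustrates the data mapping problem itself in its table-based form: a tiny, invented stripe table is consulted to turn a logical address into a (disk, offset) pair. The paper's contribution is precisely to replace such a stored table with a direct computation from commutative-ring structure, which this sketch does not reproduce.

```python
# Table-based parity-declustered mapping: data address -> (disk, offset).
# The 4-row layout is invented for illustration only.

# LAYOUT[s] lists the disks holding stripe s's units; the last entry is parity.
LAYOUT = [
    (0, 1, 2),   # stripe 0: data on disks 0,1; parity on disk 2
    (1, 2, 3),
    (2, 3, 0),
    (3, 0, 1),
]
DATA_PER_STRIPE = 2  # data units per stripe (stripe width minus parity)

def map_address(addr):
    stripe, unit = divmod(addr, DATA_PER_STRIPE)
    table_row = stripe % len(LAYOUT)
    disk = LAYOUT[table_row][unit]
    # offset: how many earlier stripes placed a unit (data or parity) on this disk
    per_pass = sum(disk in row for row in LAYOUT)
    prior = sum(disk in LAYOUT[r] for r in range(table_row))
    offset = (stripe // len(LAYOUT)) * per_pass + prior
    return disk, offset

print(map_address(0))   # (0, 0)
print(map_address(5))   # (3, 1)
```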

    Studies of disk arrays tolerating two disk failures and a proposal for a heterogeneous disk array

    There has been an explosion in the amount of generated data in the past decade. Online access to these data is made possible by large disk arrays, especially in the RAID (Redundant Array of Independent Disks) paradigm. According to the RAID level, a disk array can tolerate one or more disk failures, so that the storage subsystem can continue operating with disk failure(s). RAID5 is a single disk failure tolerant array which dedicates the capacity of one disk to parity information. The content of the failed disk can be reconstructed on demand and written onto a spare disk. However, RAID5 does not provide enough protection for data, since data loss may occur when there is a media failure (unreadable sectors) or a second disk failure during the rebuild process. Due to the high cost of downtime in many applications, two disk failure tolerant arrays, such as RAID6 and EVENODD, have become popular. These schemes use 2/N of the capacity of the array for redundant information in order to tolerate two disk failures. RM2 is another scheme that can tolerate two disk failures, with a slightly higher redundancy ratio. However, the performance of these two disk failure tolerant RAID schemes is impaired, since there are two check disks to be updated for each write request. Therefore, their performance, especially when there are disk failure(s), is of interest.

    In the first part of the dissertation, the operations for the RAID5, RAID6, EVENODD and RM2 schemes are described. A cost model is developed for these RAID schemes by analyzing the operations in various operating modes. This cost model offers a measure of the volume of data being transmitted and provides a device-independent comparison of the efficiency of these RAID schemes. Based on this cost model, the maximum throughput of a RAID scheme can be obtained, given detailed disk characteristics and the RAID configuration. Utilizing an M/G/1 queuing model and other favorable modeling assumptions, a queuing analysis to obtain the mean read response time is described. Simulation is used to validate analytic results, as well as to evaluate the RAID systems in analytically intractable cases.

    The second part of this dissertation describes a new disk array architecture, namely the Heterogeneous Disk Array (HDA). The HDA is motivated by a few observations of the trends in storage technology. The HDA architecture allows a disk array to have two forms of heterogeneity: (1) device heterogeneity, i.e., disks of different types can be incorporated in a single HDA; and (2) RAID level heterogeneity, i.e., various RAID schemes can coexist in the same array. The goal of this architecture is (1) utilizing the extra resources (i.e., bandwidth and capacity) introduced by new disk drives in an automated and efficient way; and (2) using appropriate RAID levels to meet the varying availability requirements of different applications. In HDA, each new object is associated with an appropriate RAID level, and the allocation is carried out in a way that keeps disk bandwidth and capacity utilizations balanced. Design considerations for the data structures of HDA metadata are described, followed by the actual design of the data structures and flowcharts for the most frequent operations. Then a data allocation algorithm is described in detail. Finally, the HDA architecture is prototyped based on the DASim simulation toolkit developed at NJIT, and simulation results of an HDA with two RAID levels (RAID1 and RAID5) are presented.
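
    A back-of-the-envelope version of such a cost model can be expressed as block transfers per small write: the standard read-modify-write analysis gives four transfers for RAID5 (read old data and parity, write new data and parity) and six for a two-check-block scheme such as RAID6. The sketch below uses that simplification; it is not the dissertation's full model.

```python
# Count block transfers per small (single-block) write in normal mode,
# then bound write throughput if disk bandwidth is the only bottleneck.
# "Cost = blocks moved" is the simplifying assumption here.

SMALL_WRITE_BLOCKS = {
    # reads (old data + old check blocks) + writes (new data + new check blocks)
    "RAID5": (1 + 1) + (1 + 1),   # one parity block      -> 4 transfers
    "RAID6": (1 + 2) + (1 + 2),   # two check blocks (P,Q) -> 6 transfers
}

def max_small_write_throughput(blocks_per_sec_per_disk, n_disks, level):
    """Upper bound on small writes/sec for the whole array."""
    total = blocks_per_sec_per_disk * n_disks
    return total / SMALL_WRITE_BLOCKS[level]

print(max_small_write_throughput(200, 8, "RAID5"))  # 400.0 writes/sec
print(max_small_write_throughput(200, 8, "RAID6"))  # ~266.7 writes/sec
```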

    Graph Theoretic Modeling: Case Studies In Redundant Arrays Of Independent Disks And Network Defense

    Graph theoretic modeling has served as an invaluable tool for solving a variety of problems since its introduction in Euler's paper on the Bridges of Königsberg in 1736. Two problems of contemporary interest are the modeling of Redundant Arrays of Inexpensive Disks (RAID) and the identification of network attacks. While the former is vital to the protection and uninterrupted availability of data, the latter is crucial to the integrity of systems comprising networks. Both are of practical importance due to the continuing growth of data and its demand at increasing numbers of geographically distributed locations through the use of networks such as the Internet. The popularity of RAID has soared because of the enhanced I/O bandwidths and large capacities they offer at low cost. However, the demand for bigger capacities has led to the use of larger arrays with increased probability of random disk failures. This has motivated the need for RAID systems to tolerate two or more disk failures, without sacrificing performance or storage space. To this end, we shall first perform a comparative study of the existing techniques that achieve this objective. Next, we shall devise novel graph-theoretic algorithms for placing data and parity in arrays of n disks (n ≥ 3) that can recover from two random disk failures, for n = p - 1, n = p and n = 2p - 2, where p is a prime number. Each shall be shown to utilize an optimal ratio of space for storing parity. We shall also show how to extend the algorithms to arrays with an arbitrary number of disks, albeit with non-optimal values for the aforementioned ratio. The growth of the Internet has led to the increased proliferation of malignant applications seeking to breach the security of networked systems. Hence, considerable effort has been focused on detecting and predicting the attacks they perpetrate. However, the enormity of the Internet poses a challenge to representing and analyzing them with scalable models. Furthermore, forecasting the systems that they are likely to exploit in the future is difficult due to the unavailability of complete information on network vulnerabilities. We shall present a technique that identifies attacks on large networks using a scalable model, while filtering for false positives and negatives. Furthermore, it also forecasts the propagation of security failures proliferated by attacks over time, and their likely targets in the future.
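
    One way to sanity-check a candidate data-and-parity placement for two-failure tolerance is brute force: erase every pair of disks and verify that iterative "peeling" over the XOR parity groups recovers all lost symbols. The checker below does this on a toy mirrored layout; it is a generic verification sketch, not one of the dissertation's constructions for n = p - 1, p, or 2p - 2.

```python
# Brute-force check that a placement of XOR parity groups tolerates any
# two disk failures. Symbols are (disk, unit) pairs; each group's XOR is 0.
from itertools import combinations

def recovers_all(groups, erased):
    """Peeling decoder: repeatedly solve any group with exactly one unknown."""
    missing = {s for s in set().union(*groups) if s[0] in erased}
    progress = True
    while missing and progress:
        progress = False
        for g in groups:
            lost = [s for s in g if s in missing]
            if len(lost) == 1:        # one unknown symbol: solvable by XOR
                missing.remove(lost[0])
                progress = True
    return not missing

def two_fault_tolerant(groups, n_disks):
    return all(recovers_all(groups, set(pair))
               for pair in combinations(range(n_disks), 2))

# toy layout: one data symbol on disk 0, mirrored on disks 1 and 2
groups = [{(0, 0), (1, 0)}, {(0, 0), (2, 0)}]
print(two_fault_tolerant(groups, n_disks=3))  # True
```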

    Redundancy schemes for high availability computer clusters

    The primary goal of computer clusters is to improve computing performance by taking advantage of the parallelism they intrinsically provide. Moreover, their use of redundant hardware components enables them to offer highly available services. In this paper, we present an analytical model for analyzing redundancy schemes and their impact on the cluster's overall performance. Furthermore, several cluster redundancy techniques are analyzed, with an emphasis on hardware and data redundancy, from which we derive an applicable redundancy scheme design. Our solution also provides a disaster recovery mechanism that improves the cluster's availability. In the case of data redundancy, we present improvements to the replication and parity data replication techniques, for which we investigate the availability of the cluster under several scenarios that take into account, among other things, the number of replicated nodes, the number of CPUs that hold parity data, and the relation between primary and replicated data. For this purpose, we developed a simulator that analyzes the impact of a redundancy scheme on the processing rate of the cluster. We also studied the performance of two well-known schemes according to the usage rate of the CPUs. We found that two important aspects influencing the performance of a transaction-oriented cluster were the cluster's failover and data redundancy schemes. We simulated several data redundancy schemes and found that data replication offered higher cluster availability than the parity model.
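
    A minimal Monte Carlo sketch of the kind of replication-versus-parity comparison described above appears below. The failure probabilities, the pairing of mirrored nodes, and the "data still available" criteria are illustrative assumptions, not the paper's analytical model or simulator.

```python
# Estimate cluster data availability under independent node failures for
# pairwise replication versus a single-parity scheme. All parameters are
# hypothetical.
import random

def available(scheme, n_nodes, failed):
    if scheme == "replication":     # node i mirrored on partner i^1
        return all(not (i in failed and (i ^ 1) in failed)
                   for i in range(0, n_nodes, 2))
    if scheme == "parity":          # one parity node: survives one failure
        return len(failed) <= 1
    raise ValueError(scheme)

def availability(scheme, n_nodes, p_fail, trials=100_000):
    ok = 0
    for _ in range(trials):
        failed = {i for i in range(n_nodes) if random.random() < p_fail}
        ok += available(scheme, n_nodes, failed)
    return ok / trials

for s in ("replication", "parity"):
    print(s, availability(s, n_nodes=8, p_fail=0.05))
# With these assumptions replication comes out more available (~0.99 vs
# ~0.94), consistent with the paper's qualitative finding.
```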