    Studies of disk arrays tolerating two disk failures and a proposal for a heterogeneous disk array

    There has been an explosion in the amount of generated data in the past decade. Online access to these data is made possible by large disk arrays, especially in the RAID (Redundant Array of Independent Disks) paradigm. Depending on the RAID level, a disk array can tolerate one or more disk failures, so that the storage subsystem can continue operating despite disk failure(s). RAID 5 is a single-disk-failure-tolerant array which dedicates the capacity of one disk to parity information. The content of the failed disk can be reconstructed on demand and written onto a spare disk. However, RAID 5 does not provide enough protection for data, since data loss may occur when there is a media failure (unreadable sectors) or a second disk failure during the rebuild process. Due to the high cost of downtime in many applications, two-disk-failure-tolerant arrays, such as RAID 6 and EVENODD, have become popular. These schemes use 2/N of the capacity of the array for redundant information in order to tolerate two disk failures. RM2 is another scheme that can tolerate two disk failures, with a slightly higher redundancy ratio. However, the performance of these two-disk-failure-tolerant RAID schemes is impaired, since there are two check disks to be updated for each write request. Therefore, their performance, especially when there are disk failure(s), is of interest. In the first part of the dissertation, the operations for the RAID 5, RAID 6, EVENODD, and RM2 schemes are described. A cost model is developed for these RAID schemes by analyzing their operations in various operating modes. This cost model offers a measure of the volume of data being transmitted and provides a device-independent comparison of the efficiency of these RAID schemes. Based on this cost model, the maximum throughput of a RAID scheme can be obtained given detailed disk characteristics and the RAID configuration. Utilizing an M/G/1 queuing model and other favorable modeling assumptions, a queuing analysis to obtain the mean read response time is described. Simulation is used to validate the analytic results, as well as to evaluate the RAID systems in analytically intractable cases. The second part of this dissertation describes a new disk array architecture, namely the Heterogeneous Disk Array (HDA). The HDA is motivated by a few observations of trends in storage technology. The HDA architecture allows a disk array to have two forms of heterogeneity: (1) device heterogeneity, i.e., disks of different types can be incorporated in a single HDA; and (2) RAID level heterogeneity, i.e., various RAID schemes can coexist in the same array. The goals of this architecture are (1) utilizing the extra resources (i.e., bandwidth and capacity) introduced by new disk drives in an automated and efficient way, and (2) using appropriate RAID levels to meet the varying availability requirements of different applications. In HDA, each new object is associated with an appropriate RAID level, and allocation is carried out so as to keep disk bandwidth and capacity utilizations balanced. Design considerations for the data structures of the HDA metadata are described, followed by the actual design of the data structures and flowcharts for the most frequent operations. A data allocation algorithm is then described in detail. Finally, the HDA architecture is prototyped based on the DASim simulation toolkit developed at NJIT, and simulation results of an HDA with two RAID levels (RAID 1 and RAID 5) are presented.
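
    As a rough illustration of the kind of accounting such a cost model performs, the Python sketch below (not taken from the dissertation; its function names and parameters are assumptions) counts the disk accesses of a small write under RAID 5 versus a two-check-disk scheme such as RAID 6, and rebuilds a lost RAID 5 block as the XOR of the surviving blocks in its stripe.

```python
# Minimal sketch, not the dissertation's cost model: standard read-modify-write
# accounting for small writes, and RAID 5 rebuild as an XOR of surviving blocks.
# Names and parameters are illustrative assumptions.

from functools import reduce

def small_write_disk_accesses(check_disks: int) -> int:
    """Disk accesses for one small (single-block) write in normal mode:
    read old data + old check blocks, then write new data + new check blocks."""
    return 2 * (1 + check_disks)   # RAID 5 -> 4, RAID 6 / EVENODD -> 6

def rebuild_block(surviving_blocks: list[bytes]) -> bytes:
    """RAID 5 reconstruction: the lost block is the XOR of all surviving
    blocks in the same stripe (data blocks plus the parity block)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                  surviving_blocks)

if __name__ == "__main__":
    print("RAID 5 small write:", small_write_disk_accesses(1), "accesses")
    print("RAID 6 small write:", small_write_disk_accesses(2), "accesses")
    d0, d1 = b"\x0f\x0f", b"\xf0\x01"
    parity = bytes(a ^ b for a, b in zip(d0, d1))
    assert rebuild_block([d1, parity]) == d0   # recover d0 from the survivors
```

    The gap in small-write accesses (4 versus 6 in this simplified count) is the reason the write performance of two-disk-failure-tolerant schemes is impaired.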

    Data partitioning and load balancing in parallel disk systems

    Parallel disk systems provide opportunities for exploiting I/O parallelism in two possible ways, namely via inter-request and intra-request parallelism. In this paper we discuss the main issues in performance tuning of such systems, namely striping and load balancing, and show their relationship to response time and throughput. We outline the main components of an intelligent file system that optimizes striping by taking into account the requirements of the applications, and performs load balancing by judicious file allocation and dynamic redistribution of the data when access patterns change. Our system uses simple but effective heuristics that incur only a small overhead. We present performance experiments based on synthetic workloads and real-life traces.
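
    The sketch below is a generic illustration, not the paper's file system (all names and parameters are assumptions): a round-robin striping map from logical blocks to disks, and a greedy file-allocation heuristic that places new files on the currently least-loaded disk.

```python
# Minimal sketch of striping and load balancing; illustrative only.

def block_to_disk(block: int, stripe_unit_blocks: int, num_disks: int) -> int:
    """Round-robin striping: consecutive stripe units go to consecutive disks,
    giving intra-request parallelism for large requests."""
    return (block // stripe_unit_blocks) % num_disks

def allocate_file(load_per_disk: list[float], expected_load: float) -> int:
    """Greedy load balancing: put the new file on the currently coolest disk."""
    disk = min(range(len(load_per_disk)), key=load_per_disk.__getitem__)
    load_per_disk[disk] += expected_load
    return disk

if __name__ == "__main__":
    print([block_to_disk(b, stripe_unit_blocks=4, num_disks=3) for b in range(12)])
    loads = [0.4, 0.1, 0.3]
    print("new file placed on disk", allocate_file(loads, expected_load=0.2))
```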

    Selected Papers from the First International Symposium on Future ICT (Future-ICT 2019) in Conjunction with 4th International Symposium on Mobile Internet Security (MobiSec 2019)

    The International Symposium on Future ICT (Future-ICT 2019), held in conjunction with the 4th International Symposium on Mobile Internet Security (MobiSec 2019), took place on 17–19 October 2019 in Taichung, Taiwan. The symposium provided academic and industry professionals with an opportunity to discuss the latest issues and progress in advancing smart applications based on future ICT and the related security. The symposium aimed to publish high-quality papers strictly related to the various theories and practical applications concerning advanced smart applications, future ICT, and related communications and networks. It was expected that the symposium and its publications would be a trigger for further related research and technology improvements in this field.

    Gollach: configuration of a cluster-based Linux virtual server

    This thesis describes the Gollach cluster. The Gollach is an eight-machine computing cluster that is aimed at being a general-purpose computing resource for research purposes, including image processing and simulations. The main aim of this project is to create a cluster server that gives increased computational power and a unified system image (at several levels) without requiring users to learn specialised tricks. At the same time, the cluster must not be taxing to administer.

    Scalability in extensible and heterogeneous storage systems

    The evolution of computer systems has brought an exponential growth in data volumes, which pushes the capabilities of current storage architectures to organize and access this information effectively: as the unending creation of and demand for computer-generated data grows at an estimated rate of 40-60% per year, storage infrastructures need increasingly scalable data distribution layouts that are able to adapt to this growth with adequate performance. In order to provide the required performance and reliability, large-scale storage systems have traditionally relied on multiple RAID-5 or RAID-6 storage arrays, interconnected with high-speed networks like FibreChannel or SAS. Unfortunately, the performance of the most commonly used current storage technology, the magnetic disk drive, cannot improve fast enough to sustain this explosive growth. Moreover, storage architectures based on solid-state devices (the successors of current magnetic drives) do not seem poised to replace HDD-based storage for the next 5-10 years, at least in data centers. Though the performance of SSDs significantly exceeds that of hard drives, it would cost the NAND industry hundreds of billions of dollars to build enough manufacturing plants to satisfy the forecasted demand. Besides the problems derived from technological and mechanical limitations, the massive data growth poses more challenges: to build a storage infrastructure, the most flexible approach is to use pools of storage devices that can be expanded as needed by adding new devices or replacing older ones, thus seamlessly increasing the system's performance and capacity. This approach, however, needs data layouts that can adapt to these topology changes and also exploit the potential performance offered by the hardware. Such strategies should be able to rebuild the data layout to accommodate the new devices in the infrastructure, extracting the utmost performance from the hardware and offering a balanced workload distribution. An inadequate data layout might not effectively use the enlarged capacity or better performance provided by newer devices, leading to imbalance problems such as bottlenecks or resource underuse. Besides, massive storage systems will inevitably be composed of a collection of heterogeneous hardware: as capacity and performance requirements grow, new storage devices must be added to cope with demand, but it is unlikely that these devices will have the same capacity or performance as those already installed. Moreover, upon failure, disks are most commonly replaced by faster and larger ones, since it is not always easy (or cheap) to find a particular model of drive. In the long run, any large-scale storage system will have to cope with a myriad of devices. The title of this dissertation, "Scalability in Extensible and Heterogeneous Storage Systems", refers to the main focus of our contributions: scalable data distributions that can adapt to increasing volumes of data. Our first contribution is the design of a scalable data layout that can adapt to hardware changes while redistributing only the minimum amount of data needed to keep a balanced workload. With the second contribution, we perform a comparative study of the influence of pseudo-random number generators on the performance and distribution quality of randomized layouts, and prove that a badly chosen generator can degrade the quality of the strategy.
Our third contribution is an analysis of long-term data access patterns in several real-world traces to determine whether it is possible to offer high performance and a balanced load with less than minimal data rebalancing. In our final contribution, we apply the knowledge learnt about long-term access patterns to design an extensible RAID architecture that can adapt to changes in the number of disks without migrating large amounts of data, and prove that it can be competitive with current RAID arrays with an overhead of at most 1.28% of the storage capacity.
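
A minimal sketch of the randomized-placement idea behind these contributions (not the dissertation's algorithm; the weighting scheme and names are assumptions): each block's disk is chosen pseudo-randomly, weighted by disk capacity, so heterogeneous devices fill up proportionally, and the balance quality of the resulting layout depends on the pseudo-random generator that backs it, which is the effect the second contribution studies.

```python
# Minimal sketch of capacity-weighted randomized placement; illustrative only.

import random
from collections import Counter

def place_block(block_id: int, capacities: list[int], seed_salt: int = 0) -> int:
    """Choose a disk for a block with probability proportional to its capacity.
    The choice is a deterministic function of the block id, so lookups need no
    central allocation table; swap the generator here to compare layout quality."""
    rng = random.Random(block_id * 2654435761 + seed_salt)
    return rng.choices(range(len(capacities)), weights=capacities, k=1)[0]

if __name__ == "__main__":
    capacities = [1, 1, 2]          # the third disk is twice as large
    hist = Counter(place_block(b, capacities) for b in range(100_000))
    print(hist)                     # expect roughly 25k / 25k / 50k blocks
```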

    Designing Reliable High-Performance Storage Systems for HPC Environments

    Advances in processing capability have far outpaced advances in I/O throughput and latency. Distributed file system based storage systems help to address this performance discrepancy in high-performance computing (HPC) environments; however, they can be difficult to deploy and challenging to maintain. This thesis explores the design considerations as well as the pitfalls faced when deploying high-performance storage systems. It includes best practices in identifying system requirements, techniques for generating I/O profiles of applications, and recommendations for disk subsystem configuration and maintenance based upon a number of recent papers addressing latent sector errors and unrecoverable read errors.
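
    As one sketch of what generating an I/O profile can look like (the trace format and field names below are assumptions, not the thesis's tooling), a block-level trace can be reduced to a few summary statistics that guide disk subsystem configuration:

```python
# Minimal sketch: reduce a block-level trace of (op, offset_bytes, size_bytes)
# records to a coarse I/O profile (read/write mix, request sizes, sequentiality).

from collections import Counter

def io_profile(trace: list[tuple[str, int, int]]) -> dict:
    reads = sum(1 for op, _, _ in trace if op == "R")
    sizes = Counter(size for _, _, size in trace)
    sequential = sum(
        1 for (_, off_a, size_a), (_, off_b, _) in zip(trace, trace[1:])
        if off_a + size_a == off_b          # next request starts where this one ended
    )
    return {
        "requests": len(trace),
        "read_fraction": reads / len(trace),
        "size_histogram": dict(sizes),
        "sequential_fraction": sequential / max(len(trace) - 1, 1),
    }

if __name__ == "__main__":
    trace = [("R", 0, 4096), ("R", 4096, 4096), ("W", 1 << 20, 65536)]
    print(io_profile(trace))
```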

    Goddard Conference on Mass Storage Systems and Technologies, Volume 1

    Copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies held in September 1992 are included. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems (data ingestion rates now approach the order of terabytes per day). Discussion topics include the IEEE Mass Storage System Reference Model, data archiving standards, high-performance storage devices, magnetic and magneto-optic storage systems, magnetic and optical recording technologies, high-performance helical-scan recording systems, and low-end helical-scan tape drives. Additional topics addressed the evolution of the identifiable unit for processing purposes as data ingestion rates increase dramatically, and the present state of the art in mass storage technology.

    ClusterRAID: Architecture and Prototype of a Distributed Fault-Tolerant Mass Storage System for Clusters

    During the past few years, clusters built from commodity off-the-shelf (COTS) components have emerged as the predominant supercomputer architecture. Typically comprising a collection of standard PCs or workstations and an interconnection network, they have replaced the traditionally used integrated systems due to their better price/performance ratio. As paradigms shift from merely compute-intensive to I/O-intensive applications, mass storage solutions for cluster installations become a more and more crucial aspect of these systems. The inherent unreliability of the underlying components is one of the reasons why no system has yet been established as a standard storage solution for clusters. This thesis sets out the architecture and prototype implementation of a novel distributed mass storage system for commodity off-the-shelf clusters and addresses the issue of the unreliable constituent components. The key concept of the presented system is the conversion of the local hard disk drive of a cluster node into a reliable device while preserving the block device interface. By deploying sophisticated erasure-correcting codes, the system allows the adjustment of the number of tolerable failures and thus the overall reliability. In addition, the applied data layout considers the access behaviour of a broad range of applications and minimizes the number of required network transactions. Extensive measurements and functionality tests of the prototype, both stand-alone and in conjunction with local or distributed file systems, show the validity of the concept.
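
    The sketch below illustrates the underlying erasure-coding idea with the simplest possible code, a single XOR check block; it is a stand-in rather than the ClusterRAID implementation, which uses stronger codes so that the number of tolerable failures can be adjusted beyond one.

```python
# Minimal sketch: a single-parity XOR code over a group of equally sized blocks.
# With one check block the group survives the loss of any single block; real
# erasure-correcting codes add m > 1 check blocks to tolerate m failures.

def xor_blocks(blocks: list[bytes]) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def encode(data_blocks: list[bytes]) -> bytes:
    """Compute the single check block for a group of data blocks."""
    return xor_blocks(data_blocks)

def rebuild(surviving: list[bytes]) -> bytes:
    """The missing block (data or check) is the XOR of all surviving blocks."""
    return xor_blocks(surviving)

if __name__ == "__main__":
    d = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
    check = encode(d)
    assert rebuild([d[0], d[2], check]) == d[1]   # recover the lost second block
```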

    The Third NASA Goddard Conference on Mass Storage Systems and Technologies

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies held in October 1993. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems involved. Discussion topics include the necessary use of computers in solving today's increasingly complex problems, the need for greatly increased storage densities in both optical and magnetic recording media, currently popular storage media and magnetic media storage risk factors, and data archiving standards, including a talk on the current status of the IEEE Storage Systems Reference Model (RM). Additional topics addressed system performance, data storage system concepts, communications technologies, data distribution systems, data compression, and error detection and correction.