16,859 research outputs found

    How migrating 0.0001% of address space saves 12% of energy in hybrid storage

    We present a simple, operating-system-independent method to reduce the number of seek operations and consequently reduce the energy consumption of a hybrid storage device consisting of a hard disk and a flash memory. Trace-driven simulations show that migrating a tiny amount of the address space (0.0001%) from disk to flash already results in a significant storage energy reduction (12%) at virtually no extra cost. We show that the amount of energy saving depends on which part of the address space is migrated, and we present two indicators for this, namely sequentiality and request frequency. Our simulations show that both are suitable as criteria for energy-saving file placement methods in hybrid storage. We address potential wear problems in the flash subsystem by presenting a simple way to prolong its expected lifetime.
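    A minimal sketch of the kind of trace-driven placement heuristic described above, under stated assumptions: the address space is split into fixed-size regions, each region is scored from a block trace by request frequency and lack of sequentiality, and the top 0.0001% of the space is chosen for migration to flash. The region size, scoring formula, and trace format are illustrative choices, not the paper's method.

```python
from collections import defaultdict

REGION_SIZE = 4096        # blocks per region (assumed granularity)
MIGRATE_FRACTION = 1e-6   # 0.0001% of the address space

def rank_regions(trace, total_blocks):
    """trace: iterable of (lba, length) block requests, in arrival order."""
    freq = defaultdict(int)   # requests touching each region
    seq = defaultdict(int)    # requests that continue the previous request
    last_end = None
    for lba, length in trace:
        region = lba // REGION_SIZE
        freq[region] += 1
        if last_end is not None and lba == last_end:
            seq[region] += 1  # sequential continuation: cheap on a hard disk
        last_end = lba + length

    # Regions that are hit often but rarely sequentially cause the most seeks,
    # so they gain the most from being served by flash.
    def score(region):
        return freq[region] * (1.0 - seq[region] / freq[region])

    budget = max(int(total_blocks * MIGRATE_FRACTION) // REGION_SIZE, 1)
    return sorted(freq, key=score, reverse=True)[:budget]

# e.g. flash_set = rank_regions(parsed_trace, total_blocks=2 ** 32)
```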

    CRAID: Online RAID upgrades using dynamic hot data reorganization

    Current algorithms used to upgrade RAID arrays typically require large amounts of data to be migrated, even those that move only the minimum amount of data required to keep a balanced data load. This paper presents CRAID, a self-optimizing RAID array that performs an online block reorganization of frequently used, long-term accessed data in order to reduce this migration even further. To achieve this objective, CRAID tracks frequently used, long-term data blocks and copies them to a dedicated partition spread across all the disks in the array. When new disks are added, CRAID only needs to extend this process to the new devices to redistribute this partition, thus greatly reducing the overhead of the upgrade process. In addition, the reorganized access patterns within this partition improve the array’s performance, amortizing the copy overhead and allowing CRAID to offer performance competitive with traditional RAIDs. We describe CRAID’s motivation and design and evaluate it by replaying seven real-world workloads, including a file server, a web server, and a user share. Our experiments show that CRAID can successfully detect hot-data variations and begin using new disks as soon as they are added to the array. Also, the use of a dedicated partition improves the sequentiality of relevant data accesses, which amortizes the cost of reorganizations. Finally, we prove that a full-HDD CRAID array with a small distributed partition (<1.28% per disk) can compete in performance with an ideally restriped RAID-5 and a hybrid RAID-5 with a small SSD cache.
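    The reorganization idea can be sketched with a toy model (assumed data structures, not CRAID's implementation): long-term access counts identify hot blocks, copies of the hottest blocks live in a small partition striped over all disks, and adding a disk only re-stripes that partition.

```python
from collections import Counter

class HotPartitionArray:
    """Toy model of a hot-data partition striped over all disks."""

    def __init__(self, n_disks, hot_capacity):
        self.n_disks = n_disks
        self.hot_capacity = hot_capacity   # blocks the hot partition can hold
        self.access = Counter()            # long-term access counts per block
        self.hot = {}                      # block id -> disk holding its copy

    def on_access(self, block):
        self.access[block] += 1
        if block not in self.hot and len(self.hot) < self.hot_capacity:
            self.hot[block] = self._place(block)
        # (a real policy would also evict blocks whose counts have cooled down)

    def _place(self, block):
        return block % self.n_disks        # round-robin stripe over all disks

    def add_disk(self):
        # Upgrade path: only the hot partition is redistributed, so the data
        # moved is bounded by its (small) size rather than by array capacity.
        self.n_disks += 1
        for block in self.hot:
            self.hot[block] = self._place(block)
```

    Because the partition is bounded (under 1.28% per disk in the evaluation above), upgrade traffic scales with the hot set rather than with the array's capacity.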

    Isolating and Quantifying the Role of Developmental Noise in Generating Phenotypic Variation

    Genotypic variation, environmental variation, and their interaction may produce variation in the developmental process and cause phenotypic differences among individuals. Developmental noise, which arises during development from stochasticity in cellular and molecular processes when genotype and environment are fixed, also contributes to phenotypic variation. While evolutionary biology has long focused on teasing apart the relative contributions of genes and environment to phenotypic variation, our understanding of the role of developmental noise has lagged due to technical difficulties in directly measuring its contribution. The influence of developmental noise is likely underestimated in studies of phenotypic variation due to intrinsic mechanisms within organisms that stabilize phenotypes and decrease variation. Since we are just beginning to appreciate the extent to which phenotypic variation due to stochasticity is potentially adaptive, the contribution of developmental noise to phenotypic variation must be separated and measured to fully understand its role in evolution. Here, we show that variation in the component of the developmental process corresponding to environmental and genetic factors (here treated together as a unit called the LALI-type), versus the contribution of developmental noise, can be distinguished for leopard gecko (Eublepharis macularius) head color patterns using mathematical simulations that model the role of random variation (corresponding to developmental noise) in patterning. Specifically, we modified the parameters of simulations corresponding to variation in the LALI-type to generate the full range of phenotypic variation in color pattern seen on the heads of eight leopard geckos. We observed that, over the range of these parameters, variation in color pattern due to LALI-type variation exceeds that due to developmental noise in the studied gecko cohort. However, the effect of developmental noise on patterning is also substantial. Our approach addresses one of the major goals of evolutionary biology: to quantify the role of stochasticity in shaping phenotypic variation.
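    The separation the study aims at can be illustrated with a generic stochastic reaction-diffusion sketch: holding the model parameters fixed (standing in for genotype and environment, i.e., the LALI-type component) while varying only the random seed yields pattern differences attributable to developmental noise alone. The Gray-Scott equations, parameter values, and Gaussian noise term below are illustrative assumptions, not the simulations used in the study.

```python
import numpy as np

def simulate(n=128, steps=5000, Du=0.16, Dv=0.08, f=0.035, k=0.060,
             noise=0.0, seed=0):
    """Explicit-Euler Gray-Scott reaction-diffusion on a periodic grid."""
    rng = np.random.default_rng(seed)
    U = np.ones((n, n))                                # substrate
    V = np.zeros((n, n))                               # pattern-forming species
    V[n // 2 - 5:n // 2 + 5, n // 2 - 5:n // 2 + 5] = 1.0
    lap = lambda Z: (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                     np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)
    for _ in range(steps):
        R = U * V * V
        U += Du * lap(U) - R + f * (1 - U)
        V += Dv * lap(V) + R - (f + k) * V
        if noise:
            V += rng.normal(0.0, noise, V.shape)       # developmental noise
    return V

# Fixed parameters (genotype and environment held constant), different seeds:
# the differences between these patterns are attributable to noise alone.
patterns = [simulate(noise=1e-3, seed=s) for s in range(3)]
```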

    Real-time computer data system for the 40- by 80-foot wind tunnel facility at Ames Research Center

    The background material and operational concepts of a computer-based system for an operating wind tunnel are described. An on-line real-time computer system was installed in a wind tunnel facility to gather static and dynamic data. The computer system monitored aerodynamic forces and moments of periodic and quasi-periodic functions, and displayed and plotted computed results in real time. The total system comprises several off-the-shelf, interconnected subsystems that are linked to a large data processing center. The system includes a central processor unit with 32,000 24-bit words of core memory, a number of standard peripherals, and several special processors, namely a dynamic analysis subsystem, a 256-channel PCM-data subsystem and ground station, a 60-channel high-speed data acquisition subsystem, a communication link, and static force and pressure subsystems. The role of the test engineer as a vital link in the system is also described.

    Studies of disk arrays tolerating two disk failures and a proposal for a heterogeneous disk array

    There has been an explosion in the amount of generated data in the past decade. Online access to these data is made possible by large disk arrays, especially in the RAID (Redundant Array of Independent Disks) paradigm. Depending on the RAID level, a disk array can tolerate one or more disk failures, so that the storage subsystem can continue operating despite disk failure(s). RAID 5 is a single-disk-failure-tolerant array that dedicates the capacity of one disk to parity information. The content of the failed disk can be reconstructed on demand and written onto a spare disk. However, RAID 5 does not provide enough protection, since data loss may occur when there is a media failure (unreadable sectors) or a second disk failure during the rebuild process. Due to the high cost of downtime in many applications, two-disk-failure-tolerant arrays, such as RAID 6 and EVENODD, have become popular. These schemes use 2/N of the capacity of the array for redundant information in order to tolerate two disk failures. RM2 is another scheme that can tolerate two disk failures, with a slightly higher redundancy ratio. However, the performance of these two-disk-failure-tolerant RAID schemes is impaired, since there are two check disks to be updated for each write request. Therefore, their performance, especially when there are disk failure(s), is of interest. In the first part of the dissertation, the operations for the RAID 5, RAID 6, EVENODD, and RM2 schemes are described. A cost model is developed for these RAID schemes by analyzing the operations in various operating modes. This cost model offers a measure of the volume of data being transmitted and provides a device-independent comparison of the efficiency of these RAID schemes. Based on this cost model, the maximum throughput of a RAID scheme can be obtained given detailed disk characteristics and the RAID configuration. Utilizing an M/G/1 queuing model and other favorable modeling assumptions, a queuing analysis to obtain the mean read response time is described. Simulation is used to validate analytic results, as well as to evaluate the RAID systems in analytically intractable cases. The second part of this dissertation describes a new disk array architecture, namely the Heterogeneous Disk Array (HDA). The HDA is motivated by a few observations of the trends in storage technology. The HDA architecture allows a disk array to have two forms of heterogeneity: (1) device heterogeneity, i.e., disks of different types can be incorporated in a single HDA; and (2) RAID level heterogeneity, i.e., various RAID schemes can coexist in the same array. The goals of this architecture are (1) to utilize the extra resources (i.e., bandwidth and capacity) introduced by new disk drives in an automated and efficient way, and (2) to use appropriate RAID levels to meet the varying availability requirements of different applications. In HDA, each new object is associated with an appropriate RAID level, and allocation is carried out so as to keep disk bandwidth and capacity utilizations balanced. Design considerations for the data structures of HDA metadata are described, followed by the actual design of the data structures and flowcharts for the most frequent operations. Then a data allocation algorithm is described in detail. Finally, the HDA architecture is prototyped based on the DASim simulation toolkit developed at NJIT, and simulation results of an HDA with two RAID levels (RAID 1 and RAID 5) are presented.
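    As a rough illustration of the device-independent accounting such a cost model performs, the sketch below counts the blocks transferred to service one logical request under textbook read-modify-write rules (RAID 5 small write: two reads plus two writes; two-disk-failure-tolerant codes: three reads plus three writes). These are standard simplified figures, not the dissertation's actual model.

```python
def blocks_moved(scheme, op, n_disks, degraded=False):
    """Blocks read + written to service one single-block logical request."""
    if op == "read":
        if not degraded:
            return 1
        # Reading a block on a failed disk means reconstructing it from the
        # surviving blocks of its stripe.
        return n_disks - 1 if scheme == "raid5" else n_disks - 2
    if op == "write" and not degraded:
        if scheme == "raid5":
            return 4   # read old data + old parity, write new data + new parity
        if scheme in ("raid6", "evenodd", "rm2"):
            return 6   # two check blocks must be read and updated as well
    raise ValueError((scheme, op, degraded))

# Example: small-write penalty of a two-failure-tolerant code relative to
# RAID 5 on an 8-disk array.
print(blocks_moved("raid6", "write", 8) / blocks_moved("raid5", "write", 8))  # 1.5
```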

    NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 1

    Papers and viewgraphs from the conference are presented. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disks and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.