
    Elevating commodity storage with the SALSA host translation layer

    To satisfy increasing storage demands in both capacity and performance, industry has turned to multiple storage technologies, including Flash SSDs and SMR disks. These devices employ a translation layer that conceals the idiosyncrasies of their media and enables random access. Device translation layers are, however, inherently constrained: resources on the drive are scarce, they cannot be adapted to application requirements, and they lack visibility across multiple devices. As a result, the performance and durability of many storage devices are severely degraded. In this paper, we present SALSA: a translation layer that executes on the host and allows unmodified applications to better utilize commodity storage. SALSA supports a wide range of single- and multi-device optimizations and, because it is implemented in software, can adapt to specific workloads. We describe SALSA's design, and demonstrate its significant benefits using microbenchmarks and case studies based on three applications: MySQL, the Swift object store, and a video server. Comment: Presented at the 2018 IEEE 26th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS 2018).
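    The core mechanism described above is a host-side indirection between logical and physical block addresses. The sketch below illustrates that idea in miniature, assuming a log-structured layout in which every write is appended to the next free physical block so the device only ever sees sequential writes; the class and method names are illustrative, not SALSA's actual interface, which also handles segments, garbage collection, and multi-device placement.

```python
# Minimal sketch of a host-side, log-structured translation layer (illustrative only).
class LogStructuredTranslation:
    def __init__(self, capacity_blocks: int):
        self.l2p = {}          # logical block -> physical block
        self.next_free = 0     # head of the append-only log
        self.capacity = capacity_blocks

    def write(self, logical_block: int) -> int:
        """Map a (possibly random) logical write to the next sequential physical slot."""
        if self.next_free >= self.capacity:
            raise RuntimeError("log full: a real layer would garbage-collect here")
        physical = self.next_free
        self.next_free += 1
        self.l2p[logical_block] = physical   # any older mapping becomes garbage
        return physical

    def read(self, logical_block: int) -> int:
        """Look up where the latest version of a logical block lives."""
        return self.l2p[logical_block]

tl = LogStructuredTranslation(capacity_blocks=1024)
for lba in (7, 3, 7, 1):                  # random logical writes...
    print(lba, "->", tl.write(lba))       # ...land on sequential physical blocks 0..3
print("read 7 ->", tl.read(7))            # returns 2, the latest copy of block 7
```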

    Robo-line storage: Low latency, high capacity storage systems over geographically distributed networks

    Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are of the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communications links; latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log Structured File System. Finally, we will construct a prototype high capacity file server, suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.
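    The interleaving goal above amounts to striping data across several devices so a single transfer can draw bandwidth from all of them at once. The toy mapping below shows that idea under assumed parameters (four devices, 1 MiB stripe units); it is a sketch of the general technique, not the project's actual layout.

```python
# Toy round-robin striping: map a byte offset in an object to (device, offset on device).
STRIPE_BYTES = 1 << 20   # 1 MiB stripe unit (illustrative)
NUM_DEVICES = 4          # illustrative device count

def stripe_map(offset: int) -> tuple[int, int]:
    stripe = offset // STRIPE_BYTES          # which stripe unit the byte falls in
    device = stripe % NUM_DEVICES            # stripes rotate round-robin over devices
    local_stripe = stripe // NUM_DEVICES     # how many stripes this device already holds
    return device, local_stripe * STRIPE_BYTES + offset % STRIPE_BYTES

for off in (0, STRIPE_BYTES, 5 * STRIPE_BYTES + 123):
    print(off, "->", stripe_map(off))
```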

    Scheduling policies for disks and disk arrays

    Recent rapid advances in magnetic recording technology have enabled substantial increases in disk capacity, while the random access time to small data blocks on the disk has improved by less than 10% annually. Such accesses are very common in OLTP applications, which tend to have stringent response time requirements. Scheduling of disk requests is intended to improve their response time, reduce disk service time, and increase disk access bandwidth with respect to the default FCFS scheduling policy. The Shortest Access Time First (SATF) policy has been shown to outperform other classical disk scheduling policies in numerous studies. Before verifying this conclusion, this dissertation develops an empirical analysis of the SATF policy, which produces a valuable by-product expressed as x[m] = m^p. Classical scheduling policies and some well-known variations of the SATF policy are re-evaluated, and three extensions are proposed. The performance evaluation uses self-developed simulators containing detailed disk information. The simulators, driven with both synthetic and trace workloads, report per-request measurements, such as the mean and the 95th percentile of response times, as well as system-level measurements, such as the maximum throughput. A comprehensive arrangement of routing and scheduling schemes is presented for mirrored disk systems, or RAID1. The performance evaluation is based on a two-dimensional configuration classification: independent queues (i.e., a router sends requests to one of the disks as soon as they arrive) versus a shared queue (i.e., requests are held in a common queue at the router and scheduled from there); and normal data layout versus transposed data layout (i.e., the data stored on the inner cylinders of one disk is duplicated on the outer cylinders of the mirrored disk). The availability of non-volatile storage (NVS), which allows the processing of write requests to be deferred, is also investigated. Finally, various strategies of mirrored disk declustering are compared against basic disk mirroring. Their load-balancing capability and reliability are examined in both normal mode and degraded mode.
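    SATF as described above picks, among all pending requests, the one with the smallest positioning time (seek plus rotational latency) from the current head position. The following is a minimal sketch of that selection rule, assuming a toy linear seek model and made-up device parameters; it is not the dissertation's simulator.

```python
from dataclasses import dataclass

@dataclass
class Request:
    cylinder: int   # target cylinder
    angle: float    # target angular position in [0, 1) revolutions

# Illustrative device parameters (assumptions, not from the dissertation).
CYLINDERS = 10_000
FULL_SEEK_MS = 8.0      # seek across the whole cylinder range
REV_MS = 6.0            # one full revolution (10k RPM)

def seek_ms(from_cyl: int, to_cyl: int) -> float:
    """Toy linear seek-time model."""
    return FULL_SEEK_MS * abs(to_cyl - from_cyl) / CYLINDERS

def positioning_ms(head_cyl: int, head_angle: float, req: Request) -> float:
    """Seek time plus the rotational latency remaining after the seek completes."""
    s = seek_ms(head_cyl, req.cylinder)
    angle_after_seek = (head_angle + s / REV_MS) % 1.0
    rot = ((req.angle - angle_after_seek) % 1.0) * REV_MS
    return s + rot

def satf_pick(head_cyl: int, head_angle: float, queue: list[Request]) -> Request:
    """Shortest Access Time First: serve the cheapest-to-reach pending request."""
    return min(queue, key=lambda r: positioning_ms(head_cyl, head_angle, r))

queue = [Request(120, 0.8), Request(9000, 0.1), Request(150, 0.05)]
print(satf_pick(head_cyl=100, head_angle=0.0, queue=queue))  # nearby cylinder, short rotation wins
```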

    NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 1

    Papers and viewgraphs from the conference are presented. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disks and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

    Simple I/O-efficient flow accumulation on grid terrains

    The flow accumulation problem for grid terrains takes as input a matrix of flow directions, that specifies for each cell of the grid to which of its eight neighbours any incoming water would flow. The problem is to compute, for each cell c, from how many cells of the terrain water would reach c. We show that this problem can be solved in O(scan(N)) I/Os for a terrain of N cells. Taking constant factors in the I/O-efficiency into account, our algorithm may be an order of magnitude faster than the previously known algorithm that is based on time-forward processing and needs O(sort(N)) I/Os. Comment: This paper is an exact copy of the paper that appeared in the abstract collection of the Workshop on Massive Data Algorithms, Aarhus, 200
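    The computation is easy to state when the whole grid fits in memory: treat the flow directions as a graph with at most one outgoing edge per cell and push accumulated counts downstream in topological order. The sketch below shows that plain in-memory version (here each cell counts itself as one contributing cell); the paper's contribution is obtaining the same result with O(scan(N)) I/Os when the terrain is far larger than main memory. The direction encoding is an assumption made for illustration.

```python
from collections import deque

def flow_accumulation(directions):
    """directions[r][c] is (drow, dcol) toward one of the 8 neighbours, or None for a sink."""
    rows, cols = len(directions), len(directions[0])
    acc = [[1] * cols for _ in range(rows)]     # each cell contributes itself
    indeg = [[0] * cols for _ in range(rows)]

    def downstream(r, c):
        d = directions[r][c]
        if d is None:
            return None
        nr, nc = r + d[0], c + d[1]
        return (nr, nc) if 0 <= nr < rows and 0 <= nc < cols else None  # else flows off the grid

    for r in range(rows):
        for c in range(cols):
            ds = downstream(r, c)
            if ds:
                indeg[ds[0]][ds[1]] += 1

    # Process cells in topological order of the flow graph, pushing counts downstream.
    q = deque((r, c) for r in range(rows) for c in range(cols) if indeg[r][c] == 0)
    while q:
        r, c = q.popleft()
        ds = downstream(r, c)
        if ds:
            nr, nc = ds
            acc[nr][nc] += acc[r][c]
            indeg[nr][nc] -= 1
            if indeg[nr][nc] == 0:
                q.append((nr, nc))
    return acc

# Tiny example: every cell drains east; the rightmost column is a sink.
dirs = [[(0, 1), (0, 1), None],
        [(0, 1), (0, 1), None]]
print(flow_accumulation(dirs))   # [[1, 2, 3], [1, 2, 3]]
```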

    Proceedings of the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications

    The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, held July 23 through 25, 1991 at the NASA/Goddard Space Flight Center, are presented. The program includes a keynote address, invited technical papers, and selected technical presentations to provide a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disks and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

    Early Observations on Performance of Google Compute Engine for Scientific Computing

    Although Cloud computing emerged for business applications in industry, public Cloud services have been widely accepted and encouraged for scientific computing in academia. The recently available Google Compute Engine (GCE) is claimed to support high-performance and computationally intensive tasks, yet few evaluation studies have examined GCE's capabilities for scientific workloads. Considering that fundamental performance benchmarking is the strategy for early-stage evaluation of new Cloud services, we followed the Cloud Evaluation Experiment Methodology (CEEM) to benchmark GCE and also compare it with Amazon EC2, to help understand the elementary capability of GCE for dealing with scientific problems. The experimental results and analyses show both potential advantages of, and possible threats to, applying GCE to scientific computing. For example, compared to Amazon's EC2 service, GCE may better suit applications that require frequent disk operations, while it may not be ready yet for single VM-based parallel computing. Following the same evaluation methodology, different evaluators can replicate and/or supplement this fundamental evaluation of GCE. Based on the fundamental evaluation results, suitable GCE environments can be further established for case studies of solving real science problems. Comment: Proceedings of the 5th International Conference on Cloud Computing Technologies and Science (CloudCom 2013), pp. 1-8, Bristol, UK, December 2-5, 2013
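    As a rough illustration of what a "frequent disk operations" comparison entails, the sketch below times sequential writes and reads of a temporary file and reports throughput. It is an ad-hoc example with arbitrary file and block sizes, not the benchmark suite or CEEM procedure actually used in the paper.

```python
import os, tempfile, time

BLOCK = 1 << 20          # 1 MiB block size (arbitrary)
BLOCKS = 256             # 256 MiB total (arbitrary)

def disk_throughput():
    """Write then read a temporary file sequentially and report MiB/s for each phase."""
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
    try:
        buf = os.urandom(BLOCK)
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(BLOCKS):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())          # force data to the device, not just the page cache
        write_s = time.perf_counter() - start

        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(BLOCK):
                pass
        read_s = time.perf_counter() - start

        mib = BLOCK * BLOCKS / (1 << 20)
        print(f"write: {mib / write_s:.1f} MiB/s, read: {mib / read_s:.1f} MiB/s")
    finally:
        os.remove(path)

if __name__ == "__main__":
    disk_throughput()
```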