
    A differentiated proposal of three dimension i/o performance characterization model focusing on storage environments

    The I/O bottleneck remains a central issue in high-performance environments. Cloud computing, high-performance computing (HPC), and big data environments share many underlying difficulties in delivering data at the rate requested by high-performance applications. This increases the possibility of bottlenecks throughout the application feeding process caused by the underlying hardware devices located in the storage system layer. In recent years, many researchers have proposed solutions to improve the I/O architecture through different approaches. Some take advantage of hardware devices, while others focus on sophisticated software. However, due to the complexity of dealing with high-performance environments, creating solutions that improve I/O performance in both software and hardware is challenging and offers researchers many opportunities. Classifying these improvements along different dimensions allows researchers to understand how they have been built over the years and how the field has progressed. It also allows future efforts to be directed toward research topics that have developed at a lower rate, balancing the overall development process. This research presents a three-dimensional characterization model for classifying research on I/O performance improvements for large-scale storage computing facilities. The classification model can also be used as a guideline framework to summarize research, providing an overview of the current scenario. We also used the proposed model to perform a systematic literature mapping covering ten years of research on I/O performance improvements in storage environments. This study classified hundreds of distinct works, identifying which hardware, software, and storage systems received the most attention over the years, which elements were most frequently proposed, and where these elements were evaluated. To justify the importance of this model and of solutions that target I/O performance improvements, we evaluated a subset of these improvements using a real and complete experimentation environment, Grid5000. Analysis over different scenarios using a synthetic I/O benchmark demonstrates how throughput and latency behave when performing different I/O operations using distinct storage technologies and approaches.
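    For context on what such a synthetic measurement can look like, the following is a minimal sketch of an I/O micro-benchmark in Python (not the authors' tool, and not tied to Grid5000): it writes fixed-size blocks to a file and reports throughput and per-operation latency. The file path, block size, and block count are illustrative assumptions.

```python
# Minimal sketch of a synthetic I/O micro-benchmark (illustrative only):
# writes fixed-size blocks to a file and reports throughput and per-operation
# latency. File path, block size, and block count are assumed values.
import os
import time

def measure_sequential_write(path="/tmp/io_bench.dat", block_size=4 * 1024 * 1024, blocks=64):
    payload = os.urandom(block_size)
    latencies = []
    with open(path, "wb") as f:
        for _ in range(blocks):
            start = time.perf_counter()
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())          # force the write down to the device
            latencies.append(time.perf_counter() - start)
    total_bytes = block_size * blocks
    total_time = sum(latencies)
    print(f"throughput: {total_bytes / total_time / 1e6:.1f} MB/s")
    print(f"mean latency: {1000 * total_time / blocks:.2f} ms per {block_size // 1024} KiB block")
    os.remove(path)

if __name__ == "__main__":
    measure_sequential_write()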

    Reliability of SSD Storage Systems

    Solid-state drives (SSDs) are attractive storage components due to their many desirable properties; however, concerns about their reliability remain, and this delays their wider deployment. Many protection schemes have been proposed to improve the reliability of SSDs. For example, techniques such as error correction codes (ECC), log-like writing in the flash translation layer (FTL), garbage collection, and wear leveling improve the reliability of an SSD at the device level. Composing an array of SSDs and employing system-level parity protection is one of the popular protection schemes at the system level. Enterprise-class (high-end) SSDs are faster and more resilient than client-class (low-end) SSDs, but they are too expensive to deploy in large-scale storage systems. It is an attractive and practical alternative to use high-end SSDs as a cache and low-end SSDs as main storage. A high-end SSD cache in front of a low-end SSD array improves latency and reduces the write count of the SSD storage system at the same time. This work analyzes the effectiveness of protection schemes originally designed for HDDs but applied to SSD storage systems. We find that the different characteristics of HDDs and SSDs make integrating those solutions into SSD storage systems not so straightforward. This work first analyzes the effectiveness of device-level protection schemes such as ECC and scrubbing. A Markov model based analysis of the protection schemes is presented. Our model considers the time-varying nature of flash memory reliability as well as the write amplification of various device-level protection schemes. Our study shows that write amplification from these sources can significantly reduce the benefit of protection schemes in improving lifetime. Based on the results of our analysis, we propose that bit errors within an SSD page be left uncorrected until a threshold number of errors accumulates. We show that such an approach can improve lifetimes by up to 40%. This work also analyzes the effectiveness of parity protection over SSD arrays, a widely used system-level protection scheme for SSD arrays. Parity protection is typically employed to compose reliable storage systems. However, careful consideration is required when SSD-based systems employ parity protection: additional writes are required for parity updates, and parity consumes space on the device, which results in write amplification from less efficient garbage collection at higher space utilization. We present a Markov model to estimate the lifetime of SSD-based RAID systems in different environments. In a small array, our results show that parity protection provides a benefit only at considerably low space utilizations and low data access rates. In a large system, however, RAID improves data lifetime even when write amplification is taken into account. This work also explores how to optimize a mixed SSD array in terms of performance and lifetime. We show that a simple integration of different classes of SSDs under traditional caching policies results in poor reliability. We also reveal that caching policies with static workload classifiers are not always efficient. We propose a sampling-based adaptive approach that achieves a fair workload distribution across the cache and the storage. The proposed algorithm enables fine-grained control of the workload distribution, which minimizes latency over the lifetime of mixed SSD arrays. We show that our adaptive algorithm is very effective in improving the latency-over-lifetime metric, on average by up to 2.36 times over LRU, across a number of workloads.
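    To illustrate the kind of trade-off described above, here is a toy Monte Carlo sketch (the assumptions are mine, not the paper's Markov model): it compares an immediate-correction policy, which rewrites a page on every bit error and therefore adds wear, with a delayed-correction policy that tolerates errors up to a threshold below the ECC limit. All constants are illustrative.

```python
# Toy Monte Carlo sketch (assumed constants, not the paper's model): a page
# fails when it wears out or holds more raw bit errors than ECC can correct.
# Scrubbing (rewriting) clears errors but consumes endurance, so delaying the
# scrub until more errors accumulate can extend lifetime.
import random

ENDURANCE = 500        # P/E cycles the page can sustain (assumed)
ECC_LIMIT = 8          # bit errors the ECC can still correct (assumed)
HOST_WRITE_P = 0.05    # per-step probability of a normal host write (assumed)
BASE_ERROR_P = 0.02    # per-step raw bit-error probability at zero wear (assumed)
WEAR_SLOPE = 1e-3      # error probability grows linearly with wear (assumed)

def mean_lifetime(scrub_threshold, trials=300, seed=1):
    """Average steps until the page wears out or exceeds the ECC limit."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        errors = wear = steps = 0
        while wear < ENDURANCE and errors <= ECC_LIMIT:
            steps += 1
            if rng.random() < HOST_WRITE_P:
                wear += 1                      # normal host write consumes endurance
            if rng.random() < BASE_ERROR_P * (1 + WEAR_SLOPE * wear):
                errors += 1                    # a new raw bit error appears
            if errors >= scrub_threshold:
                errors = 0                     # scrub: rewrite corrects the page...
                wear += 1                      # ...but the extra write adds wear
        total += steps
    return total / trials

if __name__ == "__main__":
    print("scrub on every error    :", mean_lifetime(scrub_threshold=1))
    print("scrub near the ECC limit:", mean_lifetime(scrub_threshold=ECC_LIMIT - 1))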

    Analysis of SSD’s Performance in Database Servers

    Data storage is needed in virtually every type of device, and although the storage mechanism varies from device to device, the data has traditionally ended up on a magnetic drive, stored in digital form. One predominant storage device is the hard disk drive (HDD). Hard disk drives are used in a wide range of systems such as computers, laptops, and netbooks; they use magnetic platters for read and write operations. (Hard disk drive, n.d.) Emerging technologies and the modularization of web application architectures have created a need for different kinds of operating systems and system architectures depending on functionality. A server where files are placed should be designed to be good at input and output (I/O) operations. (How does a hard drive work?, 2018) A server that stores and streams video should be good at asynchronous streaming. To store structured or unstructured data belonging to an educational institution or an organization, we can use a database server that keeps the data in tables. In general, hard disk drives are used to store data in all of these servers; only the system architecture changes. HDD-based storage has remained essentially unchanged for the past 20 years. There has been significant growth in the architectural design of the operating systems used to host database servers, but HDDs have been the storage medium for many years. With the need for speed and faster storage operations, solid-state drives (SSDs) come into the picture. (SSD Advantage, n.d.) They have a different architecture from HDDs. This paper discusses the idea of using SSDs instead of HDDs in database servers. We created multiple database instances on SSDs and HDDs, built multiple web applications in JAVA, and connected them to each of these database servers to access data via REST APIs. We ran multiple tests to compare the load times of the different database instances and generated visual analytics of how each behaves when a series of GET operations is made on the database through the REST API. This analysis helps determine whether there are any anomalies in behavior as the throughput of read and write operations increases.
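    As a rough illustration of the measurement described above, the following Python sketch (not the paper's JAVA harness) times a series of GET requests against two hypothetical REST endpoints, one assumed to be backed by an SSD-hosted database and one by an HDD-hosted database, and compares their mean load times. The URLs and request count are placeholders.

```python
# Minimal sketch of a load-time comparison over REST (illustrative only):
# issues sequential GET requests to two hypothetical endpoints and reports the
# mean response time for each. Endpoint URLs are placeholders, not real services.
import time
import urllib.request

ENDPOINTS = {
    "ssd": "http://localhost:8080/api/records",   # hypothetical SSD-backed service
    "hdd": "http://localhost:8081/api/records",   # hypothetical HDD-backed service
}

def mean_load_time(url, requests=100):
    """Average wall-clock time of `requests` sequential GET calls, in milliseconds."""
    elapsed = 0.0
    for _ in range(requests):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()                            # consume the full response body
        elapsed += time.perf_counter() - start
    return 1000 * elapsed / requests

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        print(f"{name}: {mean_load_time(url):.1f} ms per GET")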

    NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 1

    Papers and viewgraphs from the conference are presented. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disks and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

    Proceedings of the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications

    The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, held July 23 through 25, 1991, at the NASA/Goddard Space Flight Center, are presented. The program includes a keynote address, invited technical papers, and selected technical presentations that provide a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

    Shingled Magnetic Recording disks for Mass Storage Systems

    Disk drives have seen a dramatic increase in storage density over the last five decades, but continuing this growth seems difficult, if not impossible, because of physical limitations. One way to increase storage density is to use a shingled magnetic recording (SMR) disk. Shingled writing is a promising technique that trades off the inability to update in place for narrower tracks and thus a much higher data density. It is particularly appealing as it can be adopted while utilizing essentially the same physical recording mechanisms currently in use. Because of its manner of writing, an SMR disk cannot update a written track without overwriting neighboring tracks, potentially requiring the rewrite of all tracks to the end of a band, where the end of a band is an area left unwritten so that the final track is not overlapped. Random reads are still possible on such devices, but the handling of writes becomes particularly critical. In this manuscript, we first look at a variety of potential workloads, drawn from real-world traces, and evaluate their impact on SMR disk models. Later, we evaluate the behavior of SMR disks when used in an array configuration or when faced with heavily interleaved workloads. Specifically, we demonstrate the dramatically different effects that different workloads can have upon the opposing approaches of remapping and restoring blocks, and how write-heavy workloads can (under the right conditions, and contrary to intuition) result in a performance advantage for an SMR disk.
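    As a back-of-the-envelope illustration of why the write-handling policy matters, the following toy sketch (the assumptions are mine, not the authors' simulator) counts the track writes induced by random block updates under two simplified policies: restoring the rest of the band in place versus remapping the update to a log. The band size and update count are illustrative.

```python
# Toy sketch of SMR write amplification (assumed parameters, illustrative only):
# "restore" rewrites every track from the updated one to the end of its band;
# "remap" appends the update to a log and defers cleanup, so each update costs
# a single track write before garbage collection.
import random

TRACKS_PER_BAND = 64      # assumed band size, in tracks
UPDATES = 100_000         # number of random block updates to simulate

def rewritten_tracks(policy, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(UPDATES):
        track = rng.randrange(TRACKS_PER_BAND)     # position of the update within its band
        if policy == "restore":
            total += TRACKS_PER_BAND - track       # rewrite through the end of the band
        else:                                      # "remap": append-only write
            total += 1
    return total

if __name__ == "__main__":
    print("restore policy:", rewritten_tracks("restore"), "track writes")
    print("remap policy  :", rewritten_tracks("remap"), "track writes (before garbage collection)")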