
    A Case for Redundant Arrays of Hybrid Disks (RAHD)

    Get PDF
    The Hybrid Hard Disk Drive, originally conceived by Samsung, incorporates Flash memory into a magnetic disk. The combined ultra-high density of magnetic storage and the low power and fast read access of NAND technology inspire us to construct Redundant Arrays of Hybrid Disks (RAHD) as a possible alternative to today’s Redundant Arrays of Independent Disks (RAIDs) and/or Massive Arrays of Idle Disks (MAIDs). We first design an internal management system (including Energy-Efficient Control) for hybrid disks. Three traces collected from real systems, as well as a synthetic trace, are then used to evaluate the RAHD arrays. The trace-driven experimental results show that, in the high-speed mode, a RAHD outperforms purely-magnetic-disk-based RAIDs by a factor of 2.4–4; in the energy-efficient mode, a RAHD4/5 can save up to 89% of energy with little performance degradation.
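
    The abstract describes two operating modes but not the control logic itself, so the following is a minimal Python sketch of what a switch between a high-speed and an energy-efficient policy for a hybrid-disk array could look like. All names (HybridDisk, RAHDController), the striping rule, and the flash handling are illustrative assumptions, not the paper's design.

        # Hypothetical RAHD-style controller sketch: in energy-efficient mode, reads
        # that hit the flash cache let the magnetic platters stay spun down; in
        # high-speed mode every member disk is kept spinning. Names and behaviour
        # are assumptions for illustration only.

        class HybridDisk:
            def __init__(self, disk_id):
                self.disk_id = disk_id
                self.flash = set()        # block numbers currently cached in NAND
                self.spinning = True

            def read(self, block):
                if block in self.flash:
                    return "flash"        # served without touching the platters
                if not self.spinning:
                    self.spinning = True  # a spin-up latency penalty is paid here
                return "magnetic"

        class RAHDController:
            def __init__(self, disks, mode="high_speed"):
                self.disks = disks
                self.set_mode(mode)

            def set_mode(self, mode):
                self.mode = mode
                # energy-efficient mode lets members idle; high-speed keeps them spinning
                for d in self.disks:
                    d.spinning = (mode == "high_speed")

            def read(self, block):
                return self.disks[block % len(self.disks)].read(block)  # simple striping

        array = RAHDController([HybridDisk(i) for i in range(4)], mode="energy_efficient")
        print(array.read(42))   # -> "magnetic" (flash miss), forcing one member to spin up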

    How migrating 0.0001% of address space saves 12% of energy in hybrid storage

    Get PDF
    We present a simple, operating-system-independent method to reduce the number of seek operations and consequently the energy consumption of a hybrid storage device consisting of a hard disk and a flash memory. Trace-driven simulations show that migrating a tiny amount of the address space (0.0001%) from disk to flash already results in a significant storage energy reduction (12%) at virtually no extra cost. We show that the amount of energy saving depends on which part of the address space is migrated, and we present two indicators for this, namely sequentiality and request frequency. Our simulations show that both are suitable as criteria for energy-saving file placement methods in hybrid storage. We address potential wear problems in the flash subsystem by presenting a simple way to prolong its expected lifetime.
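
    The paper's two indicators are sequentiality and request frequency; as a loose illustration of the second, the sketch below scores fixed-size address regions by request count over a trace and selects the hottest regions up to a tiny migration budget. The region size, the budget arithmetic, and the pick_regions function are assumptions, not the authors' method.

        # Hypothetical sketch: choose a tiny, frequently requested slice of the
        # address space to migrate from disk to flash. Region size and the budget
        # interpretation are illustrative assumptions; sequentiality, the paper's
        # other indicator, is not modelled here.

        from collections import Counter

        REGION_BLOCKS = 256                      # group addresses into fixed regions

        def pick_regions(trace, total_blocks, migrate_fraction=1e-6):
            """trace: iterable of logical block addresses (read/write requests)."""
            freq = Counter(lba // REGION_BLOCKS for lba in trace)
            budget = max(1, int(total_blocks * migrate_fraction) // REGION_BLOCKS)
            return [region for region, _ in freq.most_common(budget)]

        # toy usage: a trace that repeatedly hits one small region
        trace = [7, 8, 9, 7, 8, 4096, 7, 9, 8, 1_000_000]
        print(pick_regions(trace, total_blocks=10_000_000_000))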

    In-memory preprocessing of streaming sensory data - a partitioned relational database approach

    Get PDF
    In this paper we present a database architecture, an application area, and a method in detail. As sensory data stream into the database, it is efficient to preprocess them in memory before integrating them into the repository, in order to save storage I/O cost. After the data are integrated, it is important to allow efficient querying based on the data retrieval profile. This can also be supported by the presented database architecture by partitioning the database on different criteria. It is mandatory to hide the internal partitioned architectural details from higher layers, so options for transparent querying are also presented. We have implemented a test system, and experimental results are also given.
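
    Purely as an illustration of the in-memory preprocessing step described above, the sketch below buffers streaming sensor readings, pre-aggregates them per sensor and minute, and flushes batches into an SQLite table, so the repository sees far fewer writes. The schema, the (sensor_id, minute) grouping, and the flush threshold are assumptions rather than the paper's architecture.

        # Hypothetical sketch: accumulate raw sensor samples in memory, aggregate,
        # and insert in batches to reduce storage I/O. Schema and thresholds are
        # illustrative assumptions.

        import sqlite3
        from collections import defaultdict

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE readings (sensor_id INTEGER, minute INTEGER, avg_value REAL)")

        pending = defaultdict(list)          # (sensor_id, minute) -> raw samples
        FLUSH_THRESHOLD = 1000               # flush once this many samples are buffered

        def ingest(sensor_id, timestamp, value):
            pending[(sensor_id, timestamp // 60)].append(value)
            if sum(len(v) for v in pending.values()) >= FLUSH_THRESHOLD:
                flush()

        def flush():
            rows = [(sid, minute, sum(vals) / len(vals))
                    for (sid, minute), vals in pending.items()]
            conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
            conn.commit()
            pending.clear()

        for t in range(3000):                # toy stream: 3000 samples from two sensors
            ingest(t % 2, t, float(t))
        flush()
        print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0])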

    Analysis of Trade-Off Between Power Saving and Response Time in Disk Storage Systems

    Get PDF
    It is anticipated that in the near future disk storage systems will surpass application servers to become the primary consumers of power in data centers. Shutting down inactive disks is one of the most widespread approaches to reducing the power consumption of disk systems. It involves spinning down or completely shutting off disks that exhibit long periods of inactivity and placing them in standby mode. A request for a file on a disk in standby mode incurs an I/O cost penalty, since it takes time to spin the disk up before it can serve the file. In this paper, we address the problem of designing and implementing file allocation strategies on disk storage that save energy while meeting the performance requirements of file retrievals. We present an algorithm for solving this problem with guaranteed bounds from the optimal solution. Our algorithm runs in O(n log n) time, where n is the number of files allocated. Detailed simulation results and experiments with real-life workloads are also presented.
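
    The abstract promises an O(n log n) allocation algorithm with bounds from the optimum, which is not reproduced here; instead, the sketch below shows a much simpler greedy heuristic in the same spirit, packing the most frequently requested files onto a small set of always-on disks so the remaining disks can stay in standby. The allocate function, its inputs, and the capacity model are assumptions for illustration.

        # Hypothetical greedy sketch (not the paper's bounded algorithm): keep hot
        # files on a few always-spinning disks; everything else can live on disks
        # that are allowed to spin down. Capacities and access rates are made up.

        def allocate(files, disk_capacity, active_disks):
            """files: list of (name, size, requests_per_hour) -> {disk: [names]}."""
            ordered = sorted(files, key=lambda f: f[2], reverse=True)  # hottest first, O(n log n)
            layout = {d: [] for d in range(active_disks)}
            free = {d: disk_capacity for d in range(active_disks)}
            standby = []
            for name, size, _rate in ordered:
                target = max(free, key=free.get)      # active disk with most free space
                if free[target] >= size:
                    layout[target].append(name)
                    free[target] -= size
                else:
                    standby.append(name)              # lands on a disk kept in standby
            layout["standby"] = standby
            return layout

        files = [("log.bin", 40, 120), ("archive.tar", 80, 1), ("db.idx", 30, 300)]
        print(allocate(files, disk_capacity=100, active_disks=1))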

    High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Full text link
    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers -- 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document, and are presented along with introductory material.

    GREEN IT – DATA CENTER ENERGY EFFICIENCY

    Get PDF
    The competitive market in which organizations operate demands constant adaptation to new technologies, cost reduction, and performance improvement. Green Information Technology (Green IT) calls for a global view of IT and business operations so that, with careful analysis, it can support companies in the areas already mentioned as well as in issues related to energy consumption efficiency: provisioning and sizing resources, establishing controls over operations, and caring for the environment. Some companies already address this subject, since in addition to their concern for preserving natural resources, they benefit from more efficient IT operations.

    Archiving in a playout system

    Get PDF
    The purpose of this thesis was to design and implement an application that transfers video material from a Windows Server 2012 R2 server to a DAC ALTO server. The application is part of a playout system, where its purpose is to archive video material that will no longer be needed in the near future. The application transfers files automatically within defined time periods, as well as manually between dates specified by the user. A graphical user interface was implemented that shows the files being transferred, the estimated transfer speed, and the progress of the transfer. The application was written mainly in C#, and its user interface is based on the Windows Forms library. In addition to the implementation of the application, the thesis also examined the interaction of the components relevant to archiving in the client's production playout system, and walked through the life cycle of the video material step by step, from delivery to archiving, describing how each stage is implemented. The completed application was found to work and met the technical requirements set by the client. It will be further developed according to the client's needs.
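
    The tool itself is a C# / Windows Forms application; as a language-neutral sketch of the date-window selection and progress reporting described above (using Python, like the other examples here), the snippet below copies files whose modification time falls inside a window and prints rough progress and throughput. The paths, the 30-day window, and the archive function are assumptions, not the thesis implementation.

        # Hypothetical sketch of the archiving step: select files modified within a
        # date window, copy them to the archive location, and report rough progress.
        # Paths and the window are illustrative assumptions.

        import shutil
        import time
        from pathlib import Path

        def archive(src_dir, dst_dir, start_ts, end_ts):
            src, dst = Path(src_dir), Path(dst_dir)
            dst.mkdir(parents=True, exist_ok=True)
            candidates = [p for p in src.iterdir()
                          if p.is_file() and start_ts <= p.stat().st_mtime <= end_ts]
            total = sum(p.stat().st_size for p in candidates) or 1
            copied, t0 = 0, time.time()
            for p in candidates:
                shutil.copy2(p, dst / p.name)
                copied += p.stat().st_size
                rate = copied / max(time.time() - t0, 1e-6) / 1e6   # rough MB/s
                print(f"{p.name}: {100 * copied / total:.0f}% done, ~{rate:.1f} MB/s")

        # example: archive everything modified during the last 30 days
        # archive(r"\\playout\media", r"\\dacalto\archive", time.time() - 30*86400, time.time())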