
    Data Management Strategies for Relative Quality of Service in Virtualised Storage Systems

    The amount of data managed by organisations continues to grow relentlessly. Driven by the high costs of maintaining multiple local storage systems, there is a well-established trend towards storage consolidation using multi-tier Virtualised Storage Systems (VSSs). At the same time, storage infrastructures are increasingly subject to stringent Quality of Service (QoS) demands. Within a VSS, it is challenging to match desired QoS with delivered QoS, given that the latter can vary dramatically both across and within tiers. Manual efforts to achieve this match require extensive and ongoing human intervention, while automated efforts are based on workload analysis, which ignores the business importance of infrequently accessed data. This thesis presents our design, implementation and evaluation of data maintenance strategies in an enhanced version of the popular Linux Extended 3 Filesystem, which features support for the elegant specification of QoS metadata while maintaining compatibility with stock kernels. Users and applications specify QoS requirements using a chmod-like interface. System administrators are provided with a character-device kernel interface that allows for profiling of the QoS delivered by the underlying storage. We propose a novel score-based metric, together with associated visualisation resources, to evaluate the degree of QoS matching achieved by any given data layout. We also design and implement new inode and datablock allocation and migration strategies which exploit this metric in seeking to match the QoS attributes set by users and/or applications on files and directories with the QoS actually delivered by each of the filesystem's block groups. To create realistic test filesystems, we have added QoS metadata support to the Impressions benchmarking framework. The effectiveness of the resulting data layout in terms of QoS matching is evaluated using a special kernel module capable of inspecting detailed filesystem data on the fly. We show that our implementations of the proposed inode and datablock allocation strategies dramatically improve data placement with respect to QoS requirements when compared to the default allocators.
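
    The abstract leaves the details of the chmod-like interface unspecified. As a purely illustrative sketch, one plausible realisation is a small utility that attaches a QoS specification to a file through a Linux extended attribute; the attribute name user.qos and the specification format below are assumptions, not details taken from the thesis.

        /* Hypothetical illustration: the thesis's actual interface is not
         * described in the abstract. This models a "chmod-like" QoS call
         * as an extended-attribute write, one way to attach per-file QoS
         * metadata without breaking stock-kernel compatibility. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/xattr.h>

        int main(int argc, char *argv[])
        {
            if (argc != 3) {
                fprintf(stderr, "usage: %s <qos-spec> <file>\n", argv[0]);
                return EXIT_FAILURE;
            }
            /* argv[1] might read "performance=high,reliability=medium";
             * "user.qos" is an assumed attribute name. */
            if (setxattr(argv[2], "user.qos", argv[1], strlen(argv[1]), 0) != 0) {
                perror("setxattr");
                return EXIT_FAILURE;
            }
            return EXIT_SUCCESS;
        }

    Invoked as, say, ./qosctl "performance=high" /data/report.db, this mirrors the spirit of chmod: a one-line, per-file policy change that an enhanced allocator could later consult.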

    Goddard Conference on Mass Storage Systems and Technologies, Volume 1

    Copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies, held in September 1992, are included. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems (data ingestion rates now approach the order of terabytes per day). Discussion topics include the IEEE Mass Storage System Reference Model, data archiving standards, high-performance storage devices, magnetic and magneto-optic storage systems, magnetic and optical recording technologies, high-performance helical-scan recording systems, and low-end helical-scan tape drives. Additional topics addressed the evolution of the identifiable unit for processing purposes as data ingestion rates increase dramatically, and the present state of the art in mass storage technology.

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems that have emerged particularly within the last five years. It introduces the most prominent reliability concerns from today's point of view and briefly recapitulates the progress made by the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book deals with reliability challenges across different levels, starting from the physical level all the way up to the system level (cross-layer approaches). It aims to demonstrate how new hardware/software co-design solutions can effectively mitigate reliability degradation caused by transistor aging, process variation, temperature effects, soft errors, etc. The book provides readers with the latest insights into novel cross-layer methods and models for the dependability of embedded systems; describes cross-layer approaches that improve reliability through techniques pro-actively designed with respect to techniques at other layers; and explains run-time adaptation and concepts of self-organization for achieving error resiliency in complex, future many-core systems.
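
    As a concrete taste of the kind of software-level mitigation this theme covers, the following minimal sketch shows classic triple modular redundancy (TMR) against transient soft errors; it is a generic textbook technique, not an example taken from the book, and all names are illustrative.

        /* Minimal TMR sketch: run the computation three times and
         * majority-vote the results, so a single transient fault in
         * one run is masked. */
        #include <stdio.h>

        static int vote(int a, int b, int c)
        {
            /* Majority vote: any two agreeing copies win. */
            if (a == b || a == c)
                return a;
            return b; /* a disagrees with both; b == c in the single-fault case */
        }

        static int critical_computation(int x)
        {
            return x * x + 1; /* stand-in for a fault-prone computation */
        }

        int main(void)
        {
            int x = 7;
            int r = vote(critical_computation(x),
                         critical_computation(x),
                         critical_computation(x));
            printf("voted result: %d\n", r);
            return 0;
        }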

    Fault Tolerant Task Mapping in Many-Core Systems

    The advent of many-core systems, networks-on-chip containing hundreds or thousands of homogeneous processor cores, presents new challenges in managing the cores effectively in response to processing demands, hardware faults and the need for heat management. The continually diminishing feature size of devices increases the probability of fabrication defects and the variability of performance of individual transistors. In many-core systems this can result in the failure of individual processing cores, routing nodes or communication links, which requires the use of fault-tolerance mechanisms. Diminishing feature size also increases the power density of devices, giving rise to the concept of dark silicon, where only a portion of the functionality available on a chip can be active at any one time. Core fault tolerance and management of dark silicon can both be achieved by allocating a percentage of cores to be idle at any one time. Idle cores can be used as dark silicon, to evenly distribute the heat generated by active cores, and also as spare cores to implement fault tolerance. Both can be achieved by dynamically allocating tasks to cores in response to changes in the status of hardware resources and the demands placed on the system, which in turn requires real-time task mapping. This research proposes the use of a continuous fault/recovery cycle, implementing graceful degradation and amelioration, to provide real-time fault tolerance. Objective measures for core fault tolerance, link fault tolerance, network power and excess traffic have been developed for use by a multi-objective evolutionary algorithm that uses knowledge of the processing demands and hardware status to identify optimal task mappings. The fault/recovery cycle is shown to be effective in maintaining a high level of performance of a many-core array when presented with a series of hardware faults.
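
    The abstract does not give the objective formulations themselves. The sketch below shows, under assumed simplifications, how one such objective might be evaluated for a candidate task mapping on a 2D mesh network-on-chip; hop-count-weighted traffic is used here as a common proxy for network power, and the mesh size, task count and toy workload are all illustrative.

        /* Hypothetical objective evaluation for one candidate mapping.
         * mapping[t] gives the core index of task t; traffic[a][b] is
         * the data volume task a sends to task b. */
        #include <stdio.h>
        #include <stdlib.h>

        #define MESH_W 4
        #define MESH_H 4
        #define NTASKS 8

        static int hops(int c1, int c2)
        {
            int x1 = c1 % MESH_W, y1 = c1 / MESH_W;
            int x2 = c2 % MESH_W, y2 = c2 / MESH_W;
            return abs(x1 - x2) + abs(y1 - y2); /* XY-routing Manhattan distance */
        }

        static long comm_cost(const int mapping[NTASKS],
                              const int traffic[NTASKS][NTASKS])
        {
            long cost = 0;
            for (int a = 0; a < NTASKS; a++)
                for (int b = 0; b < NTASKS; b++)
                    cost += (long)traffic[a][b] * hops(mapping[a], mapping[b]);
            return cost;
        }

        int main(void)
        {
            int mapping[NTASKS] = {0, 1, 2, 3, 4, 5, 6, 7};
            int traffic[NTASKS][NTASKS] = {{0}};
            traffic[0][1] = 10; /* toy workload: task 0 sends to task 1 */
            traffic[1][2] = 5;
            printf("communication cost: %ld\n", comm_cost(mapping, traffic));
            return 0;
        }

    A multi-objective evolutionary algorithm would evaluate several such measures per candidate mapping and search for Pareto-optimal trade-offs among them.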

    The Third NASA Goddard Conference on Mass Storage Systems and Technologies

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies held in October 1993. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems involved. Discussion topics include the necessary use of computers in the solution of today's complex problems, the need for greatly increased storage densities in both optical and magnetic recording media, currently popular storage media and the risk factors of magnetic media storage, and data archiving standards, including a talk on the current status of the IEEE Storage Systems Reference Model (RM). Additional topics addressed system performance, data storage system concepts, communications technologies, data distribution systems, data compression, and error detection and correction.

    Process query systems: advanced technologies for process detection and tracking

    Virtually everything that happens around us is inherently process-oriented. It is therefore not surprising that the mental picture people form of their environment is based on processes. When we perceive something and subsequently recognise it, this means that we understand the observations, can group them together, and can predict which other observations will soon follow. Take, for example, a room with a television. As we enter the room we hear sounds, perhaps voices, perhaps music. When we look around we soon see, visually, the television. Because we know the "process" of a TV well, we can mentally associate the sounds with the image of the television. We also know that the television is on, and we therefore expect more sounds to follow. When we pick up the remote control and switch the television off, we expect the picture to disappear and the sounds to stop. If this does not happen, we notice it immediately: we did not succeed in changing the state of the "TV process". In general, if our observations do not fit a known process, we are surprised, interested, or even frightened. This is a good example of how people view their environment: based on processes, we classify all our observations and are able to predict which observations are to come. Computers have traditionally been unable to perform recognition in the same way. Computer processing of signals is often based on simple "signatures", i.e. single features that are searched for directly. These systems are often highly specific and can make only very limited predictions about the observed environment. This dissertation introduces a general method in which descriptions of the environment are supplied as processes: a new class of data-processing systems called Process Query Systems (PQS). A PQS enables the user to build, quickly and efficiently, a robust environment-aware system capable of detecting and tracking multiple processes and multiple instances of processes. Using PQS, a variety of systems are presented, as diverse as securing large computer networks and tracking fish in a fish tank. The only difference between all these systems is the process models that were fed into the PQS. This technology is a new and promising field that has the potential to become highly successful in all forms of digital signal processing.
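
    To make the idea concrete, the following toy sketch tracks the abstract's own television example as a finite-state process model and flags observations that fit no transition. It illustrates the general concept only; the dissertation's actual PQS engine and model formalism are not described in the abstract, and all names here are invented.

        /* Toy process-tracking sketch: a process model as a finite-state
         * machine; observations either advance the model or are flagged
         * as not fitting any known transition. */
        #include <stdio.h>

        enum tv_state { TV_OFF, TV_ON };
        enum event    { EV_POWER_BUTTON, EV_SOUND };

        static int step(enum tv_state *s, enum event e)
        {
            switch (*s) {
            case TV_OFF:
                if (e == EV_POWER_BUTTON) { *s = TV_ON; return 1; }
                return 0;                  /* sound from an off TV: anomaly */
            case TV_ON:
                if (e == EV_POWER_BUTTON) { *s = TV_OFF; return 1; }
                if (e == EV_SOUND)        return 1;  /* expected observation */
                return 0;
            }
            return 0;
        }

        int main(void)
        {
            enum tv_state s = TV_OFF;
            enum event stream[] = {EV_POWER_BUTTON, EV_SOUND,
                                   EV_POWER_BUTTON, EV_SOUND};
            for (unsigned i = 0; i < sizeof stream / sizeof stream[0]; i++)
                if (!step(&s, stream[i]))
                    printf("event %u does not fit the TV process model\n", i);
            return 0;
        }

    A full PQS would track many such models, and many instances of each, against a shared observation stream, scoring how well each hypothesis explains the observations.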

    Seventh International Workshop on Simulation, 21-25 May, 2013, Department of Statistical Sciences, Unit of Rimini, University of Bologna, Italy. Book of Abstracts


    Proceedings of the Third International Mobile Satellite Conference (IMSC 1993)

    Satellite-based mobile communications systems provide voice and data communications to users over a vast geographic area. The users may communicate via mobile or hand-held terminals, which may also provide access to terrestrial cellular communications services. While the first and second International Mobile Satellite Conferences (IMSC) concentrated mostly on technical advances, this third IMSC also focuses on the increasing worldwide commercial activities in Mobile Satellite Services. Because of the large service areas provided by such systems, it is important to consider political and regulatory issues in addition to technical and user-requirement issues. Topics covered include: the direct broadcast of audio programming from satellites; spacecraft technology; regulatory and policy considerations; advanced system concepts and analysis; propagation; and user requirements and applications.

    Technology 2003: The Fourth National Technology Transfer Conference and Exposition, volume 2

    Get PDF
    Proceedings from the symposia of the Technology 2003 Conference and Exposition, held December 7-9, 1993, in Anaheim, CA, are presented. Volume 2 features papers on artificial intelligence, CAD&E, computer hardware, computer software, information management, photonics, robotics, test and measurement, video and imaging, and virtual reality/simulation.