
    A Survey on the Integration of NAND Flash Storage in the Design of File Systems and the Host Storage Software Stack

    With the ever-increasing amount of data generated in the world, estimated to reach over 200 zettabytes by 2025, the pressure on efficient data storage systems is intensifying. The shift from HDDs to flash-based SSDs marks one of the most fundamental transitions in storage technology, significantly increasing performance capabilities. However, flash storage has markedly different characteristics from prior HDD technology, and existing storage software was ill-suited to leveraging its capabilities. As a result, a plethora of storage systems have been designed to better integrate with flash storage and align with its characteristics. In this literature study we evaluate the effect the introduction of flash storage has had on the design of file systems, which provide one of the most essential mechanisms for managing persistent storage. We analyze the mechanisms for effectively managing flash storage, controlling the overheads of the design requirements it introduces, and leveraging its capabilities. Numerous methods have been adopted in file systems, but they prominently revolve around similar design decisions: adhering to flash hardware constraints and limiting software intervention. The future design of storage software remains an active concern as flash-based storage devices and interfaces continue to grow, offering ever more opportunities to enhance flash integration in the host storage software stack.
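
    To make the central design constraint concrete: NAND flash cannot be updated in place (pages must be erased in large blocks before they can be reprogrammed), so flash-aware file systems typically write out of place and maintain a logical-to-physical mapping, reclaiming stale pages later through garbage collection. A minimal Python sketch of that pattern, with all names hypothetical rather than taken from any surveyed system:

        # Out-of-place updates over a logical-to-physical map: the core
        # pattern flash constraints impose on file system design (sketch).
        class OutOfPlaceStore:
            def __init__(self, num_pages):
                self.free = list(range(num_pages))  # programmable physical pages
                self.l2p = {}                       # logical page -> physical page
                self.stale = set()                  # pages awaiting block erase (GC)

            def write(self, logical, data, flash):
                phys = self.free.pop(0)             # never overwrite: program a fresh page
                flash[phys] = data
                old = self.l2p.get(logical)
                if old is not None:
                    self.stale.add(old)             # the old copy becomes garbage
                self.l2p[logical] = phys

            def read(self, logical, flash):
                return flash[self.l2p[logical]]

        flash = {}
        store = OutOfPlaceStore(num_pages=8)
        store.write(0, b"v1", flash)
        store.write(0, b"v2", flash)                # update lands on a new physical page
        assert store.read(0, flash) == b"v2" and store.stale == {0}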

    Mitigating Interference During Virtual Machine Live Migration through Storage Offloading

    Today's cloud landscape has evolved computing infrastructure into a dynamic, high-utilization, service-oriented paradigm. This shift has enabled the commoditization of large-scale storage and distributed computation, allowing engineers to tackle previously untenable problems without large upfront investment. A key enabler of flexibility in the cloud is the ability to transfer running virtual machines across subnets or even datacenters using live migration. However, live migration can be a costly process, one that has the potential to interfere with applications not involved in the migration. This work investigates storage interference through experimentation with real-world systems and well-established benchmarks. To address migration interference in general, a buffering technique is presented that offloads the migration's reads, eliminating interference in the majority of scenarios.
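
    The buffering idea can be pictured as reading each block of the source storage at most once into a dedicated offload buffer, so that throttled sends and retransmissions are served from the buffer instead of competing with application I/O on the busy device. A schematic sketch under that reading of the abstract, with hypothetical interfaces:

        # Migration reads are staged through an offload buffer: the contended
        # source storage is read once per block, and any retransmission is
        # served from the buffer (schematic sketch, hypothetical names).
        def migrate(read_block, send, num_blocks):
            buffer = {}                             # offload buffer, e.g. spare memory
            for blk in range(num_blocks):
                if blk not in buffer:
                    buffer[blk] = read_block(blk)   # the only read hitting storage
                while not send(blk, buffer[blk]):   # retry from the buffer, never
                    pass                            # re-reading the source device
                del buffer[blk]                     # bound the buffer's footprint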

    Finding Morton-Like Layouts for Multi-Dimensional Arrays Using Evolutionary Algorithms

    The layout of multi-dimensional data can have a significant impact on the efficacy of hardware caches and, by extension, on the performance of applications. Common multi-dimensional layouts include the canonical row-major and column-major layouts as well as the Morton curve layout. In this paper, we describe how the Morton layout can be generalized to a very large family of multi-dimensional data layouts with widely varying performance characteristics. We posit that this design space can be efficiently explored using a combinatorial evolutionary methodology based on genetic algorithms. To this end, we propose a chromosomal representation for such layouts as well as a methodology for estimating the fitness of array layouts using cache simulation. We show that our fitness function correlates with kernel running time on real hardware, and that our evolutionary strategy allows us to find candidates with favorable simulated cache properties in four out of the eight real-world applications under consideration within a small number of generations. Finally, we demonstrate that the array layouts found using our evolutionary method perform well not only in simulated environments but can also effect significant performance gains -- up to a factor of ten in extreme cases -- in real hardware.
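
    One way to picture the generalized family (an assumption on our part; the abstract does not spell out the encoding): let a string over dimension names dictate which dimension contributes each successive bit of the flat index. Row-major, column-major, and the classic Morton curve then all become points in one search space, which is what makes a genetic-algorithm search over such "chromosomes" natural:

        # Morton-like index for a 16x16 array described by a bit-interleaving
        # pattern; each character names the dimension supplying the next
        # least-significant bit of the flat index (illustrative sketch).
        def morton_like_index(coords, pattern):
            taken = {d: 0 for d in set(pattern)}   # bits consumed per dimension
            index = 0
            for bit, dim in enumerate(pattern):    # LSB first
                index |= ((coords[dim] >> taken[dim]) & 1) << bit
                taken[dim] += 1
            return index

        # "xyxyxyxy" is the classic Morton curve; "xxxxyyyy" is row-major.
        assert morton_like_index({"x": 3, "y": 5}, "xyxyxyxy") == 0b00100111
        assert morton_like_index({"x": 3, "y": 5}, "xxxxyyyy") == 5 * 16 + 3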

    OrderlessChain: Do Permissioned Blockchains Need Total Global Order of Transactions?

    Existing permissioned blockchains often rely on coordination-based consensus protocols to ensure the safe execution of applications in a Byzantine environment. Furthermore, these protocols serialize transactions by arranging them into a total global order. This serializability preserves the correctness of the application state stored on the blockchain. However, using coordination-based protocols to attain a global order of transactions can limit throughput and induce high latency. In contrast, there exist application-level correctness requirements that do not depend on the order of transactions, known as invariant confluence (I-confluence). I-confluent applications can execute in a coordination-free manner, benefiting from improved performance compared to coordination-based approaches. The safety and liveness of I-confluent applications have been studied in non-Byzantine environments, but the correct execution of such applications remains a challenge in Byzantine coordination-free environments. This work introduces OrderlessChain, a coordination-free permissioned blockchain for the safe and live execution of I-confluent applications in a Byzantine environment. We implemented a prototype of our system, and our evaluation results demonstrate that our coordination-free approach performs better than coordination-based blockchains.
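
    The I-confluence condition, in a nutshell: if every transaction preserves an invariant and the merge of any two invariant-satisfying replica states still satisfies it, replicas may apply transactions in any order without coordination. A toy illustration (not OrderlessChain's actual data model):

        # A grow-only set with the invariant "all ticket IDs are positive" is
        # I-confluent under union-merge: replicas converge without any global
        # order. A cardinality cap, by contrast, is not I-confluent.
        def apply_tx(state, ticket_id):
            assert ticket_id > 0                   # application-level invariant
            return state | {ticket_id}

        def merge(a, b):                           # coordination-free convergence
            return a | b

        r1 = apply_tx(apply_tx(set(), 1), 3)       # replica 1: tx 1, then tx 3
        r2 = apply_tx(apply_tx(set(), 3), 2)       # replica 2: tx 3, then tx 2
        assert merge(r1, r2) == merge(r2, r1) == {1, 2, 3}

        # "At most 2 tickets" holds on each replica, yet the merge violates it:
        assert len(r1) <= 2 and len(r2) <= 2 and len(merge(r1, r2)) == 3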

    Kernel-Level Cooperative Caches for Virtualized Environments

    With the advent of cloud architectures, virtualization has become a key mechanism for ensuring isolation and flexibility. However, a drawback of using virtual machines (VMs) is the fragmentation of physical resources. As operating systems leverage free memory for I/O caching, memory fragmentation is particularly problematic for I/O-intensive applications, which suffer a significant performance drop. In this context, providing the ability to dynamically adjust the resources allocated among VMs is a primary concern. To address this issue, this thesis proposes a distributed cache mechanism called Puma. Puma pools together the free memory left unused by VMs: it enables a VM to entrust clean page-cache pages to other VMs. Puma extends the Linux kernel page cache and thus remains transparent to both applications and the rest of the operating system. Unlike existing solutions, this kernel-level approach lets Puma work with unmodified applications and without a dedicated file system. Puma adjusts itself dynamically to the caching activity of a VM, which it evaluates by means of metrics derived from existing Linux kernel memory-management mechanisms. Our experiments show that Puma significantly improves the performance of I/O-intensive applications and that it adapts well to dynamically changing conditions.
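
    Puma's lookup path can be pictured as a two-level cache: on a local page-cache miss, a VM probes the pages it has lent to peers before falling back to disk, and only clean pages are ever lent, so losing a peer costs a re-read rather than data. A schematic sketch with hypothetical names, not Puma's actual kernel interfaces:

        # Cooperative page cache in the spirit of Puma (sketch): clean pages
        # evicted locally are lent to a peer VM with spare memory; misses
        # probe the peer before falling back to storage.
        class CoopCache:
            def __init__(self, capacity, peer, disk):
                self.local, self.capacity = {}, capacity
                self.peer, self.disk = peer, disk   # peer holds lent pages

            def get(self, page_id):
                if page_id in self.local:
                    return self.local[page_id]      # local hit
                if page_id in self.peer:
                    data = self.peer.pop(page_id)   # remote hit: cheaper than disk
                else:
                    data = self.disk[page_id]       # fall back to storage
                self._insert(page_id, data)
                return data

            def _insert(self, page_id, data):
                if len(self.local) >= self.capacity:
                    victim, vdata = self.local.popitem()
                    self.peer[victim] = vdata       # lending is safe: pages are clean
                self.local[page_id] = data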

    Leveraging disaggregated accelerators and non-volatile memories to improve the efficiency of modern datacenters

    Traditional data centers consist of computing nodes that have all their resources physically attached. When greater demand must be met, the solution has been either to add more nodes (scaling out) or to increase the capacity of existing ones (scaling up). Workload requirements are traditionally fulfilled by selecting compute platforms from pools that best satisfy their average or maximum resource requirements, depending on the price the user is willing to pay. The amount of processor, memory, storage, and network bandwidth of a selected platform needs to meet or exceed the platform requirements of the workload. Resources beyond those explicitly required by the workload are considered stranded resources (if not used) or bonus resources (if used).

    Meanwhile, workloads in all market segments have evolved significantly during the last decades. Today, workloads have a larger variety of requirements in terms of computing-platform characteristics, including new technologies such as GPUs, FPGAs, and NVMe devices. These technologies are more expensive and thus scarcer. It is no longer feasible to provision resources for potential peak demands, as this significantly raises the total cost of ownership. Software-Defined Infrastructures (SDI), a new concept for data center architecture, is being developed to address these issues. The main SDI proposition is to disaggregate all resources over the fabric to enable the required flexibility. With SDI, instead of pools of computational nodes, the pools consist of individual units of resources (CPU, memory, FPGA, NVMe, GPU, etc.). When an application needs to be executed, SDI identifies its computational requirements and assembles all the required resources, creating a composite node.

    Resource disaggregation brings new challenges and opportunities that this thesis explores. The thesis demonstrates that resource disaggregation can increase the efficiency of modern data centers: sharing a single resource can increase workloads' performance, so fewer resources are needed to achieve similar results, and conversely, disaggregation allows resources to be aggregated to boost a single workload's performance. However, to take maximum advantage of these characteristics and this flexibility, orchestrators must be aware of them. This thesis demonstrates how workload-aware techniques applied at the resource-management level improve quality of service by leveraging resource disaggregation: enabling disaggregation reduces missed deadlines by up to 49% compared to a traditional scheme, and the reduction rises to 100% when workload awareness is enabled. Moreover, the thesis demonstrates that GPU partitioning combined with disaggregation further enhances data center flexibility, achieving the same results with half the resources: a single physical GPU, partitioned and disaggregated, matches the results of two disaggregated but unpartitioned GPUs. Finally, the thesis demonstrates that resource fragmentation becomes key when a limited set of heterogeneous resources, namely NVMe and GPU, is in unexpectedly high demand; for that situation, it proposes a fragmentation-minimizing technique that reduces missed deadlines by up to 86% compared to a disaggregation-aware policy.
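
    The fragmentation point can be illustrated with a simple placement choice: when composing nodes out of partitioned, disaggregated GPUs, a best-fit policy packs small requests onto already-partial devices, keeping whole GPUs free for later large requests, whereas scattering requests strands capacity. A toy sketch, far simpler than the thesis's actual policy:

        # Best-fit placement of GPU-slice requests across disaggregated GPUs
        # to limit fragmentation (toy sketch; gpus = free slices per GPU).
        def place(request_slices, gpus):
            fits = [i for i, free in enumerate(gpus) if free >= request_slices]
            if not fits:
                return None                        # demand exceeds every device
            # Tightest fit keeps whole GPUs free for future large requests.
            return min(fits, key=lambda i: gpus[i] - request_slices)

        gpus = [4, 4]                              # two GPUs, 4 slices each
        for req in (2, 1, 1):                      # small requests pack onto GPU 0
            gpus[place(req, gpus)] -= req
        assert gpus == [0, 4]                      # a 4-slice job still fits
        assert place(4, gpus) == 1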