23 research outputs found

    Operating System Support for High-Performance Solid State Drives

    uFLIP: Understanding the Energy Consumption of Flash Devices

    Understanding the energy consumption of flash devices is important for two reasons. First, energy is emerging as a key metric for data management systems. It is thus important to understand how we can reason about the energy consumption of flash devices beyond their approximate aggregate consumption (low power consumption in idle mode, average wattage from the data sheets). Second, when measured at a sufficiently fine granularity, the energy consumption of a given device might complement the performance characteristics derived from its response time profile. Indeed, background work that is not directly observable in a response time profile appears clearly when energy is used as a metric. In this paper, we discuss the results from the uFLIP benchmark applied to four different SSD devices, using both response time and energy as metrics.
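    The fine-grained measurement the abstract refers to can be illustrated with a small sketch (not part of the paper's tooling): given a trace of timestamped power samples from an external meter, the energy spent in any time window, e.g., the window spanning one benchmarked I/O pattern, is the integral of power over that window, approximated below with the trapezoidal rule. The sample layout, the energy_in_window helper, and the example trace are hypothetical.

```c
/* Sketch: attribute energy to a time window from (time, power) samples.
 * Assumes samples are sorted by timestamp; names and values are illustrative. */
#include <stddef.h>
#include <stdio.h>

struct power_sample {
    double t_s;   /* timestamp in seconds */
    double p_w;   /* instantaneous power in watts */
};

/* Energy (joules) consumed between t0 and t1, trapezoidal rule over samples. */
static double energy_in_window(const struct power_sample *s, size_t n,
                               double t0, double t1)
{
    double e = 0.0;
    for (size_t i = 0; i + 1 < n; i++) {
        double a = s[i].t_s, b = s[i + 1].t_s;
        if (b <= a || b <= t0 || a >= t1)
            continue;                          /* segment outside the window */
        double lo = a > t0 ? a : t0;           /* clip segment to [t0, t1]   */
        double hi = b < t1 ? b : t1;
        /* linear interpolation of power at the clipped endpoints */
        double pa = s[i].p_w + (s[i + 1].p_w - s[i].p_w) * (lo - a) / (b - a);
        double pb = s[i].p_w + (s[i + 1].p_w - s[i].p_w) * (hi - a) / (b - a);
        e += 0.5 * (pa + pb) * (hi - lo);
    }
    return e;
}

int main(void)
{
    /* Hypothetical 1 kHz power trace around a short write burst. */
    struct power_sample trace[] = {
        { 0.000, 0.9 }, { 0.001, 2.1 }, { 0.002, 2.3 },
        { 0.003, 2.2 }, { 0.004, 2.4 }, { 0.005, 0.9 },
    };
    double e = energy_in_window(trace, sizeof(trace) / sizeof(trace[0]),
                                0.001, 0.005);
    printf("energy in window: %.6f J\n", e);
    return 0;
}
```

    Comparing such per-window energy figures with the corresponding response times is what makes background work (e.g., garbage collection triggered by a burst of writes) visible even when the response time profile alone does not show it.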

    LightNVM: The Linux Open-Channel SSD Subsystem

    Performance Characterization of NVMe Flash Devices with Zoned Namespaces (ZNS)

    The recent emergence of NVMe flash devices with Zoned Namespace support, ZNS SSDs, represents a significant new advancement in flash storage. ZNS SSDs introduce a new storage abstraction of append-only zones with a set of new I/O (i.e., append) and management (zone state machine transition) commands. With the new abstraction and commands, ZNS SSDs offer more control to the host software stack than a non-zoned SSD for flash management, which is known to be complex (because of garbage collection, scheduling, block allocation, parallelism management, and overprovisioning). ZNS SSDs are, consequently, gaining adoption in a variety of applications (e.g., file systems, key-value stores, and databases), particularly latency-sensitive big-data applications. Despite this enthusiasm, there has yet to be a systematic characterization of ZNS SSD performance with its zoned storage model abstractions and I/O operations. This work addresses this crucial shortcoming. We report on the performance features of a commercially available ZNS SSD (13 key observations), explain how these features can be incorporated into publicly available state-of-the-art ZNS emulators, and recommend guidelines for ZNS SSD application developers. All artifacts (code and data sets) of this study are publicly available at https://github.com/stonet-research/NVMeBenchmarks. (Paper to appear; see https://clustercomp.org/2023/program.)
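    As a concrete illustration of the zone abstraction and management commands mentioned above, the sketch below uses the Linux zoned block device ioctls (BLKREPORTZONE, BLKRESETZONE) to inspect a zone's write pointer, reset the zone, and write sequentially at the write pointer. This is not the paper's benchmark code; the device path and I/O sizes are assumptions, and the zone append command itself is not shown (from user space it typically requires NVMe passthrough rather than these generic ioctls).

```c
/* Sketch: report and reset the first zone of a zoned block device, then write
 * sequentially at its write pointer. Device path and sizes are assumptions. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/blkzoned.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    const char *dev = "/dev/nvme0n2";            /* hypothetical ZNS namespace */
    int fd = open(dev, O_RDWR | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* Zone report: start, length, write pointer and condition of zone 0. */
    struct blk_zone_report *rep =
        calloc(1, sizeof(*rep) + sizeof(struct blk_zone));
    if (!rep) { perror("calloc"); return 1; }
    rep->sector = 0;
    rep->nr_zones = 1;
    if (ioctl(fd, BLKREPORTZONE, rep) < 0) { perror("BLKREPORTZONE"); return 1; }
    if (rep->nr_zones < 1) { fprintf(stderr, "no zones reported\n"); return 1; }
    struct blk_zone z = rep->zones[0];
    printf("zone 0: start=%llu len=%llu wp=%llu cond=%u\n",
           (unsigned long long)z.start, (unsigned long long)z.len,
           (unsigned long long)z.wp, (unsigned)z.cond);

    /* Zone management: reset the zone, rewinding its write pointer. */
    struct blk_zone_range rng = { .sector = z.start, .nr_sectors = z.len };
    if (ioctl(fd, BLKRESETZONE, &rng) < 0) { perror("BLKRESETZONE"); return 1; }

    /* Writes into a sequential-write-required zone must land at the write
     * pointer, which after the reset is the zone start (sectors are 512 B). */
    size_t len = 64 * 1024;                      /* assumed, block-aligned */
    void *buf;
    if (posix_memalign(&buf, 4096, len)) { perror("posix_memalign"); return 1; }
    memset(buf, 0xA5, len);
    if (pwrite(fd, buf, len, (off_t)z.start * 512) < 0) { perror("pwrite"); return 1; }

    free(buf);
    free(rep);
    close(fd);
    return 0;
}
```

    The reset/sequential-write pair is the essential contract of a sequential-write-required zone; the append command relaxes it by letting the device pick the intra-zone write location and report it back to the host.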

    uFLIP-OC: Understanding Flash I/O Patterns on Open-Channel Solid State Drives

    Solid-State Drives (SSDs) have gained acceptance by providing the same block device abstraction as magnetic hard drives, at the cost of suboptimal resource utilisation and unpredictable performance. Recently, Open-Channel SSDs have emerged as a means to obtain predictably high performance, based on a clean break from the block device abstraction. Open-channel SSDs embed a minimal flash translation layer (FTL) and expose their internals to the host. The Linux open-channel SSD subsystem, LightNVM, lets kernel modules as well as user-space applications control data placement and I/O scheduling. This way, it is the host that is responsible for SSD management. But what kind of performance model should the host rely on to guide the way it manages data placement and I/O scheduling? To address this question, we have defined uFLIP-OC, a benchmark designed to identify the I/O patterns that are best suited for a given open-channel SSD. Our experiments on a Dragon-Fire Card (DFC) SSD, equipped with the OX controller, illustrate the performance impact of media characteristics and parallelism. We discuss how uFLIP-OC can be used to guide the design of host-based data systems on open-channel SSDs.
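    The response time profiling that uFLIP-style benchmarks build on can be sketched as follows: issue one I/O pattern at a time against the raw device with direct I/O and record per-request latency, so that the latency distribution, not only its mean, can be inspected per pattern. This is a generic illustration rather than the uFLIP-OC harness; the device path, request size, and request count are assumptions, and on an open-channel device the pattern would additionally be mapped onto specific channels and LUNs through the LightNVM interfaces.

```c
/* Sketch: per-request latency of a sequential-write pattern on a raw device.
 * Generic illustration of response time profiling; not the uFLIP-OC code. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define REQ_SIZE  (16 * 1024)   /* assumed request size  */
#define REQ_COUNT 1024          /* assumed request count */

static double now_s(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    const char *dev = "/dev/nvme0n1";            /* hypothetical test device */
    int fd = open(dev, O_WRONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, 4096, REQ_SIZE)) { perror("posix_memalign"); return 1; }
    memset(buf, 0x5A, REQ_SIZE);

    /* One latency sample per request; the distribution, not just the mean,
     * is what reveals background work such as garbage collection. */
    for (int i = 0; i < REQ_COUNT; i++) {
        off_t off = (off_t)i * REQ_SIZE;         /* sequential pattern */
        double t0 = now_s();
        if (pwrite(fd, buf, REQ_SIZE, off) != REQ_SIZE) { perror("pwrite"); break; }
        printf("%d %.9f\n", i, now_s() - t0);
    }

    free(buf);
    close(fd);
    return 0;
}
```

    Sweeping the pattern (sequential vs. random offsets, request size, number of concurrent streams) and comparing the resulting latency distributions is what lets a benchmark such as uFLIP-OC expose which patterns a given device serves predictably.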

    The Necessary Death of the Block Device Interface.

    Solid State Drives (SSDs) are replacing magnetic disks as secondary storage for database management, as they offer orders of magnitude improvement in terms of bandwidth and latency. In terms of system design, the advent of SSDs raises considerable challenges. First, the storage chips, which are the basic component of an SSD, have widely different characteristics, e.g., copy-on-write, erase-before-write and page-addressability for flash chips vs. in-place update and byte-addressability for PCM chips. Second, SSDs are no longer a bottleneck in terms of I/O latency, forcing streamlined execution throughout the I/O stack. Finally, SSDs provide a high degree of parallelism that must be leveraged to reach nominal bandwidth. This evolution puts database system researchers at a crossroad. The first option is to hang on to the current architecture, where secondary storage is encapsulated behind a block device interface. This is the mainstream option both in industry and academia. It leaves the storage and OS communities with the responsibility to deal with the complexity introduced by SSDs, in the hope that they will provide us with a robust, yet simple, performance model. In this paper, we show that this option amounts to building on quicksand. We illustrate our point by debunking some popular myths about flash devices and by pointing out mistakes in the papers we have published throughout the years. The second option is to abandon the simple abstraction of the block device interface and reconsider how database storage managers, operating system drivers and SSD controllers interact. We give our vision of how modern database systems should interact with secondary storage. This approach requires a deep re-design of the database system architecture, which is the only viable option for database system researchers to avoid becoming irrelevant.