138 research outputs found

    Exploiting Fine-Grained Spatial Optimization for Hybrid File System Space

    Get PDF
    For decades, the I/O optimizations implemented in legacy file systems have concentrated on reducing HDD overheads such as seek time. As the SSD (Solid-State Drive) becomes the main storage medium in I/O subsystems, file systems that integrate SSDs should take a different approach to I/O optimization, because SSDs exhibit device characteristics that HDDs do not, such as erasure overhead on flash blocks and the absence of seek time when positioning data. In this paper, we present HP-hybrid (High Performance-hybrid), a file system that provides a single hybrid file system space by combining HDD and SSD partitions. HP-hybrid aims to optimize I/O while considering the strengths and weaknesses of the two partitions, so that large amounts of data can be stored cost-effectively. In particular, HP-hybrid proposes spatial optimizations that are executed at a hierarchical, fine-grained I/O unit to address the limited SSD storage resources. We conducted several performance experiments to verify the effectiveness of HP-hybrid, comparing it to ext2, ext4 and xfs mounted on both SSD and HDD.
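
    As a rough illustration of the kind of placement decision such a hybrid space implies, the sketch below routes small or frequently accessed extents to the SSD partition and everything else to the HDD partition. The capacity, thresholds and hotness heuristic are assumptions made for illustration, not the actual HP-hybrid spatial optimizations.

```python
# Hypothetical sketch of a hybrid placement policy: small or frequently
# accessed extents go to the SSD partition, large or cold extents to HDD.
# All thresholds and the hotness heuristic are illustrative assumptions,
# not the HP-hybrid algorithm itself.

SSD_CAPACITY = 64 * 1024**3          # assumed SSD partition size (bytes)
HOT_ACCESS_THRESHOLD = 8             # accesses before an extent counts as "hot"
SMALL_EXTENT_BYTES = 256 * 1024      # extents at or below this size favor the SSD

class HybridPlacer:
    def __init__(self):
        self.ssd_used = 0
        self.access_counts = {}       # extent_id -> number of accesses seen

    def record_access(self, extent_id):
        self.access_counts[extent_id] = self.access_counts.get(extent_id, 0) + 1

    def choose_partition(self, extent_id, size):
        hot = self.access_counts.get(extent_id, 0) >= HOT_ACCESS_THRESHOLD
        fits = self.ssd_used + size <= SSD_CAPACITY
        if fits and (hot or size <= SMALL_EXTENT_BYTES):
            self.ssd_used += size
            return "ssd"
        return "hdd"

placer = HybridPlacer()
for _ in range(10):
    placer.record_access("inode42:0")
print(placer.choose_partition("inode42:0", 128 * 1024))   # -> "ssd"
print(placer.choose_partition("inode99:0", 8 * 1024**2))  # -> "hdd"
```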

    Study On Endurance Of Flash Memory Ssds

    Get PDF
    Flash memory promises to revolutionize storage systems because of its massive performance gains, ruggedness, and greatly reduced power usage and physical space requirements, but it is not a direct replacement for magnetic hard disks. Flash memory possesses fundamentally different characteristics, and in order to fully utilize its positive aspects, we must engineer around its unique limitations. The primary limitations are the lack of in-place updates, the asymmetry between the sizes of the write and erase operations, and the limited endurance of flash memory cells. This leads to the need for efficient methods for block cleaning, combating write amplification and performing wear leveling. These are fundamental attributes of flash memory and will always need to be understood and efficiently managed to produce an efficient, high-performance storage system. Our goal in this work is to provide analysis and algorithms for efficiently managing data storage for endurance in flash memory. We present update codes, a class of floating codes, which encode data updates as flash memory cell increments, reducing block erases and extending the lifespan of flash memory, and we provide a new algorithm for constructing optimal floating codes. We also analyze the theoretically possible limits of write amplification reduction and minimization using offline workloads. We give an estimate of the minimal write amplification via a workload decomposition algorithm and find that write amplification can be pushed to zero with relatively low over-provisioning. Additionally, we give simple, efficient and practical algorithms that are effective in reducing write amplification and performing wear leveling. Finally, we present a quantitative model of wear levels in flash memory by constructing a difference equation that gives the erase count of a block, with the workload, wear-leveling strategy and SSD configuration as parameters.
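
    Two of the quantities discussed above are easy to make concrete. The sketch below computes write amplification as the ratio of bytes physically programmed to flash to bytes written by the host, and picks a cleaning victim with a simple greedy rule that also penalizes heavily erased blocks. Both the cost model and the policy are illustrative simplifications, not the update codes or wear-leveling algorithms of this thesis.

```python
# Toy illustration of two ideas from the abstract: measuring write
# amplification and choosing an erase victim with erase counts in mind.
# The greedy victim policy is an illustrative simplification.

def write_amplification(host_bytes_written, flash_bytes_written):
    # WA = bytes physically programmed to flash / bytes the host asked to write
    return flash_bytes_written / host_bytes_written

# Per-block state: how many valid pages it holds and its lifetime erase count.
blocks = [
    {"id": 0, "valid_pages": 10, "erase_count": 120},
    {"id": 1, "valid_pages": 2,  "erase_count": 300},
    {"id": 2, "valid_pages": 3,  "erase_count": 90},
]

def pick_victim(blocks, wear_weight=0.1):
    # Prefer blocks with few valid pages (cheap to clean), but penalize
    # blocks that have already been erased many times (wear leveling).
    return min(blocks, key=lambda b: b["valid_pages"] + wear_weight * b["erase_count"])

print(write_amplification(host_bytes_written=1_000_000,
                          flash_bytes_written=1_450_000))   # -> 1.45
print(pick_victim(blocks)["id"])  # -> 2: few valid pages and a low erase count
```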

    A New I/O Scheduler for Solid State Devices

    Get PDF
    Since the emergence of solid state devices onto the storage scene, improvements in capacity and price have brought them to the point where they are becoming a viable alternative to traditional magnetic storage for some applications. Current file system and device-level I/O scheduler designs are optimized for rotational magnetic hard disk drives. Since solid state devices have drastically different properties and structure, we may need to rethink the design of some aspects of the file system and scheduler levels of the I/O subsystem. In this thesis, we consider the current approach to I/O scheduling and show that the current scheduler design may not be ideally suited to solid state devices. We also present a framework for extracting some device parameters of solid state drives. Using the information from the parameter extraction, we present a new I/O scheduler design that utilizes the structure of solid state devices to efficiently schedule writes. The new scheduler, implemented in the Linux 2.6 kernel, shows up to a 25% improvement for common workloads.
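
    A minimal sketch of the underlying idea of structure-aware write scheduling, assuming a fixed 256 KiB erase-block size obtained from parameter extraction, might group queued writes by the erase block they fall in and dispatch them block by block. The grouping policy below is illustrative and is not the scheduler implemented in the thesis.

```python
# Hypothetical sketch of an SSD-aware write scheduler: queued writes are
# grouped by erase block and dispatched one full group at a time, with
# writes kept sequential within each block. The 256 KiB block size is an
# assumed value, standing in for a parameter-extraction result.

from collections import defaultdict

ERASE_BLOCK_BYTES = 256 * 1024   # assumed erase-block size

def schedule_writes(requests):
    """requests: list of (offset_bytes, length_bytes) write requests."""
    groups = defaultdict(list)
    for off, length in requests:
        groups[off // ERASE_BLOCK_BYTES].append((off, length))
    ordered = []
    for block in sorted(groups):                 # visit erase blocks in order
        ordered.extend(sorted(groups[block]))    # sequential within a block
    return ordered

reqs = [(700_000, 4096), (10_000, 4096), (300_000, 4096), (20_000, 4096)]
print(schedule_writes(reqs))
# -> [(10000, 4096), (20000, 4096), (300000, 4096), (700000, 4096)]
```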

    Research reports: 1990 NASA/ASEE Summer Faculty Fellowship Program

    Get PDF
    Reports on the research projects performed under the NASA/ASEE Summer Faculty Fellowship Program are presented. The program was conducted by The University of Alabama and MSFC during the period from June 4, 1990 through August 10, 1990. Some of the topics covered include: (1) Space Shuttles; (2) Space Station Freedom; (3) information systems; (4) materials and processes; (5) the Space Shuttle main engine; (6) aerospace sciences; (7) mathematical models; (8) mission operations; (9) systems analysis and integration; (10) systems control; (11) structures and dynamics; (12) aerospace safety; and (13) remote sensing.

    Towards Successful Application of Phase Change Memories: Addressing Challenges from Write Operations

    Get PDF
    The emerging Phase Change Memory (PCM) technology is drawing increasing attention due to its advantages in non-volatility, byte-addressability and scalability. It is regarded as a promising candidate for future main memory. However, PCM's write operation has limitations that pose challenges to its use as memory: long write latency, high write power and limited write endurance. In this thesis, I present my effort towards the successful application of PCM memory. My research consists of several optimization techniques at both the circuit and architecture levels. First, at the circuit level, I propose Differential Write to remove unnecessary bit changes in PCM writes. This benefits not only endurance but also the energy and latency of writes. Second, I propose two memory scheduling enhancements (AWP and RAWP) for a non-blocking bank design. These enhancements exploit the intra-bank parallelism provided by the non-blocking bank design and achieve significant throughput improvement. Third, I propose Bit Level Power Budgeting (BPB), a fine-grained power budgeting technique that leverages the information from Differential Write to achieve even higher memory throughput under the same power budget. Fourth, I propose techniques to improve the QoS tuning ability of high-priority applications when running on PCM memory. In summary, the techniques I propose effectively address the challenges of PCM's write operations. In addition, I present the experimental infrastructure used in this work and my visions of potential future research topics, which could be helpful to other researchers in the area.
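
    The intuition behind Differential Write is that only the bit positions that actually change need to be programmed: read the old contents, XOR them with the new data, and skip the rest. The sketch below captures that intuition only; the word width and the bit counting are illustrative assumptions, not the circuit-level implementation from the thesis.

```python
# Minimal sketch of the Differential Write idea: compare the old and new
# words and count how many bit positions actually need to be programmed.
# WORD_BITS is an assumed word width for illustration.

WORD_BITS = 64

def differential_write(old_word, new_word):
    changed = old_word ^ new_word                  # bit positions that differ
    bits_to_program = bin(changed).count("1")
    bits_skipped = WORD_BITS - bits_to_program
    return bits_to_program, bits_skipped

old = 0b1010_1100
new = 0b1010_0110
programmed, skipped = differential_write(old, new)
print(programmed, skipped)   # 2 bits programmed, 62 cell writes avoided
```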

    TACKLING PERFORMANCE AND SECURITY ISSUES FOR CLOUD STORAGE SYSTEMS

    Get PDF
    Building data-intensive applications and emerging computing paradigms (e.g., Machine Learning (ML), Artificial Intelligence (AI), and the Internet of Things (IoT)) in cloud computing environments is becoming the norm, given the many advantages in scalability, reliability, security and performance. However, under rapid changes in applications, system middleware and underlying storage devices, service providers face new challenges in delivering performance and security isolation in the context of resources shared among multiple tenants. The gap between the decades-old storage abstraction and modern storage devices keeps widening, calling for software/hardware co-designs that enable more effective performance and security protocols. This dissertation rethinks the storage subsystem from the device level to the system level and proposes new designs at different levels to tackle performance and security issues for cloud storage systems. In the first part, we present an event-based SSD (Solid State Drive) simulator that models modern protocols, firmware and storage backends in detail. The proposed simulator can capture the nuances of SSD internal states under various I/O workloads, which helps researchers understand the impact of various SSD designs and workload characteristics on end-to-end performance. In the second part, we study the security challenges of shared in-storage computing infrastructures. Many cloud providers offer isolation at multiple levels to secure data and instances; however, security measures in emerging in-storage computing infrastructures have not been studied. We first investigate the attacks that could be conducted by offloaded in-storage programs in a multi-tenant cloud environment. To defend against these attacks, we build a lightweight Trusted Execution Environment, IceClave, to enable security isolation between in-storage programs and internal flash management functions. We show that while enforcing security isolation in the SSD controller with minimal hardware cost, IceClave still keeps the performance benefit of in-storage computing by delivering up to 2.4x better performance than the conventional host-based trusted computing approach. In the third part, we investigate the performance interference problem caused by other tenants' I/O flows. We demonstrate that I/O resource sharing can often lead to performance degradation and instability. The block device abstraction fails to expose SSD parallelism and to pass application requirements to the device. To this end, we propose a software/hardware co-design that enforces performance isolation by bridging this semantic gap. Our design can significantly improve QoS (Quality of Service) by reducing throughput penalties and tail latency spikes. Lastly, we explore more effective I/O control to address contention in the storage software stack. We illustrate that the state-of-the-art resource control mechanism, Linux cgroups, is insufficient for controlling I/O resources. Inappropriate cgroup configurations may even hurt the performance of co-located workloads under memory-intensive scenarios. We add kernel support for limiting page cache usage per cgroup and achieving I/O proportionality.
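
    At its core, an event-based simulator of the kind described in the first part revolves around a timestamp-ordered event queue whose handlers may schedule further events. The minimal loop below, with a single assumed flash-program latency and one channel, shows only that skeleton; it is not the protocol, firmware or backend model of the proposed simulator.

```python
# Bare-bones event-driven simulation loop: events are ordered by
# timestamp in a priority queue, and serving an arrival schedules its
# completion. The single fixed program latency and the one-channel
# model are illustrative assumptions.

import heapq

PROGRAM_LATENCY_US = 500.0    # assumed flash-program latency per request

def simulate(arrivals):
    """arrivals: list of (arrival_time_us, request_id)."""
    events = [(t, "arrive", rid) for t, rid in arrivals]
    heapq.heapify(events)
    channel_free_at = 0.0
    completions = {}
    while events:
        now, kind, rid = heapq.heappop(events)
        if kind == "arrive":
            start = max(now, channel_free_at)          # wait for the channel
            channel_free_at = start + PROGRAM_LATENCY_US
            heapq.heappush(events, (channel_free_at, "complete", rid))
        else:
            completions[rid] = now
    return completions

print(simulate([(0.0, "w0"), (100.0, "w1"), (1500.0, "w2")]))
# -> {'w0': 500.0, 'w1': 1000.0, 'w2': 2000.0}
```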

    Cal Poly Supermileage Electric Vehicle Drivetrain and Motor Control Design

    Get PDF
    The Cal Poly Supermileage Vehicle team is a multidisciplinary club that designs and builds high-efficiency vehicles to compete internationally at the Shell Eco-Marathon (SEM). The Cal Poly Supermileage Club has been competing in the internal combustion engine (ICE) category of the competition since 2007. The club has decided it is time to expand its competition goals and enter its first battery-electric prototype vehicle. To this end, a yearlong senior design project was presented to this team of engineers, giving us the opportunity to design an electric powertrain with a custom motor controller. This system has been integrated into Ventus, the 2017 Supermileage competition car, bringing it back to life as E-Ventus for future competitions. The scope of this project includes sizing a motor, designing the drivetrain, programming the motor driver, building a custom motor controller, and finally mounting all of these components in the chassis. The main considerations in this design are energy efficiency, measured as distance traveled per unit of energy used (mi/kWh), and whole-system reliability. Drivetrain system reliability has been defined as the car starting on the first attempt every time and completing two competition runs of 6.3 miles each without mechanical or electrical failure. The drivetrain weight target was less than 25 pounds, and the finished system came in at 20 lbs 4 oz. Due to the design difficulties of the custom controller, only three iterations could be produced by the end of this project, and further iterations will be needed to complete the controller. Because of these difficulties, our sponsor, Will Sirski, and club advisor, Dr. Mello, have agreed that providing the club with a working mechanical powertrain, powertrain data from the club chassis dynamometer using the programmed TI evaluation motor controller board, and the board layout for the third-iteration design of the custom controller satisfies their requirements for this project.

    Performance Evaluation of In-storage Processing Architectures for Diverse Applications and Benchmarks

    Get PDF
    University of Minnesota Ph.D. dissertation. June 2018. Major: Electrical/Computer Engineering. Advisor: David Lilja. 1 computer file (PDF); vii, 100 pages.
    As we inch towards the future, the storage needs of the world are going to be massive and diversified. To meet the needs of the next generation, storage systems must be studied and require innovative solutions. These solutions need to address a multitude of issues, including the high power consumption of traditional systems, manageability, easy scale-out, and integration into existing systems. Therefore, we need to rethink the new technologies from the ground up. To keep the energy signature under control, we devised a new architecture called the Storage Processing Unit (SPU). To model this architecture, we incorporate a processing element inside the storage medium to limit data movement between the storage device and the host processor. This resulted in a hierarchical architecture that required an extensive design space exploration along with an in-depth study of the applications. We found this new architecture to provide energy savings of 11-423X and performance gains of 4-66X for applications including k-means, Sparse BLAS, and others. Moreover, to understand the diverse nature of applications and newer technologies, we applied the concept of in-storage processing to unstructured data. This type of data is showing enormous growth and will continue to do so. Seagate's new class of drives, Kinetic Drives, addresses the rise of unstructured data. These drives have a processing element inside the disk drive that executes LevelDB, a key-value store. We evaluated this off-the-shelf device using micro and macro benchmarks for in-depth throughput and latency benchmarking. We observed a sequential write throughput of 63 MB/sec and a sequential read throughput of 78 MB/sec for 1 MB value sizes. We also tested several unique features, including the P2P transfer supported by Kinetic Drives. This new class of drives outperformed traditional servers for several test cases. Finally, a large number of these devices is needed for huge amounts of data. To demonstrate that Kinetic Drives reduce the management complexity of large-scale deployments, we conducted a study in which we allocated large amounts of data on Kinetic Drives and then evaluated the performance of the system when migrating data among drives. Previously developed key-indexing schemes were evaluated, giving important insights into their performance differences. Based on this study, we conclude that an efficient mapping of key-value pairs to drives can be obtained. This led to an understanding of the trade-offs between the number of empty drives and the mapping of different key ranges to different drives. In conclusion, in-storage processing architectures move processing closer to the data. This is a paradigm shift that often results in major software and hardware architectural changes. Furthermore, the new architectures have the potential to perform better than traditional systems, but they need to integrate easily with existing systems.
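
    One simple way to map key-value pairs to a set of drives, as in the mapping study described above, is range partitioning over the key space. The sketch below uses made-up split points and drive names; it is not one of the previously developed indexing schemes evaluated in the dissertation.

```python
# Illustrative range-partitioned mapping of keys to Kinetic Drives.
# Split points and drive names are assumptions for illustration only.

import bisect

SPLIT_POINTS = ["g", "n", "t"]                  # assumed key-range boundaries
DRIVES = ["drive0", "drive1", "drive2", "drive3"]

def drive_for_key(key):
    # Keys < "g" -> drive0, ["g","n") -> drive1, ["n","t") -> drive2, rest -> drive3
    return DRIVES[bisect.bisect_right(SPLIT_POINTS, key)]

for k in ["apple", "grape", "pear", "zebra"]:
    print(k, "->", drive_for_key(k))
# apple -> drive0, grape -> drive1, pear -> drive2, zebra -> drive3
```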