533 research outputs found

    Enabling Intra-Plane Parallel Block Erase to Alleviate the Impact of Garbage Collection

    Garbage collection (GC) in NAND flash can significantly degrade I/O performance in SSDs: it copies valid data to other locations, blocking incoming I/O requests. To help improve performance, NAND flash provides various advanced commands that increase internal parallelism. Currently, these commands only parallelize operations across channels, chips, dies, and planes, neglecting the block level and below due to structural bottlenecks along the data path and the risk of disturbances that can compromise valid data by inducing errors. However, owing to the triple-well structure of the NAND flash plane architecture and the erase procedure, it is possible to erase multiple blocks within a plane in parallel without being restricted by structural limitations or diminishing the integrity of the valid data. The number of page movements caused by multiple block erases can be restrained so as to bound the overhead per GC. Moreover, more capacity is reclaimed per GC, which delays future GCs and effectively reduces their frequency. Such an Intra-Plane Parallel Block Erase (IPPBE) in turn diminishes the impact of GC on incoming requests, improving their response times. Experimental results show that IPPBE can reduce the time spent performing GC by up to 50.7% (33.6% on average), read/write response times by up to 47.0%/45.4% (16.5%/14.8% on average), page movements by up to 52.2% (26.6% on average), and blocks erased by up to 14.2% (3.6% on average). An energy analysis indicates that, by reducing the number of page copies and block erases, the energy cost of garbage collection can be reduced by up to 44.1% (19.3% on average).
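    To make the mechanism concrete, below is a minimal Python sketch of a GC pass that picks several victim blocks from one plane under a bounded copy budget and reclaims them with a single parallel erase. The geometry, victim-selection policy, and copy budget are illustrative assumptions, not the paper's implementation.

    import random

    PAGES_PER_BLOCK = 64
    COPY_BUDGET = 32                 # bound on valid-page copies per GC pass

    class Block:
        def __init__(self):
            self.valid_pages = random.randint(0, PAGES_PER_BLOCK)

    class Plane:
        def __init__(self, num_blocks=16):
            self.blocks = [Block() for _ in range(num_blocks)]
            self.page_copies = self.erase_ops = 0

        def garbage_collect(self):
            # Greedily select low-valid-count victims from the same plane
            # until the per-GC copy budget would be exceeded.
            victims, budget = [], COPY_BUDGET
            for blk in sorted(self.blocks, key=lambda b: b.valid_pages):
                if blk.valid_pages > budget:
                    break
                victims.append(blk)
                budget -= blk.valid_pages
            # Relocate valid pages, then reclaim every victim with ONE erase:
            # same-plane blocks can be erased together without disturbing
            # valid data elsewhere.
            self.page_copies += sum(b.valid_pages for b in victims)
            for blk in victims:
                blk.valid_pages = 0
            self.erase_ops += 1
            return len(victims)

    plane = Plane()
    print(plane.garbage_collect(), "blocks reclaimed,",
          plane.page_copies, "page copies,", plane.erase_ops, "erase op")

    Reclaiming several blocks per erase both lowers the erase count and recovers more free capacity per invocation, which is what delays and thins out future GCs.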

    Exploiting intrinsic flash properties to enhance modern storage systems

    The longstanding goals of storage system design have been to provide simple abstractions for applications to access data efficiently while ensuring data durability and security on a hardware device. The traditional storage system, designed for slow hard disk drives with little parallelism, is a poor fit for newer storage technologies such as faster flash memory with high internal parallelism. The gap between the storage system software and the flash device causes both resource inefficiency and sub-optimal performance. This dissertation rethinks storage system design for flash memory with a holistic approach, from the system level to the device level, and revisits several critical aspects of storage system design: storage performance, performance isolation, energy efficiency, and data security. The traditional storage system lacks full performance isolation between applications sharing the device because it does not make the software aware of the underlying flash properties and constraints. This dissertation proposes FlashBlox, a storage virtualization system that utilizes flash parallelism to provide hardware isolation between applications by assigning them to dedicated chips. FlashBlox reduces the tail latency of storage operations dramatically compared with existing software-based isolation techniques while achieving uniform lifetime for the flash device. As the underlying flash device latency is reduced significantly compared to the conventional hard disk drive, storage software overhead has become the major bottleneck. This dissertation presents FlashMap, a holistic flash-based storage stack that combines memory, storage, and device-level indirections into a unified layer. By combining these layers, FlashMap reduces critical-path latency for accessing data in the flash device and significantly improves DRAM caching efficiency for flash management. Traditional storage software incurs energy-intensive storage operations due to the need to maintain durability and security for personal data, which has become a significant challenge for resource-constrained devices such as mobiles and wearables. This dissertation proposes WearDrive, a fast and energy-efficient storage system for wearables. WearDrive treats battery-backed DRAM as non-volatile memory to store personal data and trades the connected phone's battery for the wearable's by performing large, energy-intensive tasks on the phone while performing small, energy-efficient tasks locally using battery-backed DRAM. WearDrive improves the wearable's battery life significantly with negligible impact on the phone's battery life. Storage software, though developed over decades, is still vulnerable to malware attacks. One example is encryption ransomware: malicious software that stealthily encrypts user files and demands a ransom to restore access to them. Prior solutions such as ransomware detection and data backups have been proposed to defend against encryption ransomware. Unfortunately, by the time the ransomware is detected, some files have already been encrypted and the user is still required to pay a ransom to access them. Furthermore, ransomware variants can obtain kernel privileges to terminate or destroy these software-based defense systems. This dissertation presents FlashGuard, a ransomware-tolerant SSD with a firmware-level recovery system that allows effective data recovery from encryption ransomware. FlashGuard leverages intrinsic flash properties to defend against encryption ransomware and adds minimal overhead to regular storage operations.
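    As a conceptual illustration of why flash lends itself to this defense, the Python sketch below models an FTL that writes out of place, so the pre-overwrite version of a page survives until garbage collection; versions that were read shortly before being overwritten (a common read-then-encrypt ransomware pattern) are retained for recovery. This is a simplification for illustration, not FlashGuard's actual firmware; all names and the retention heuristic are assumptions.

    class SketchFTL:
        def __init__(self):
            self.current = {}     # logical page -> index of live version
            self.versions = {}    # logical page -> list of [data, was_read]

        def read(self, lpn):
            ver = self.versions[lpn][self.current[lpn]]
            ver[1] = True         # live version was read before any overwrite
            return ver[0]

        def write(self, lpn, data):
            # Out-of-place write: the old physical page is unmapped, not
            # erased, so its contents remain available until GC reclaims it.
            self.versions.setdefault(lpn, []).append([data, False])
            self.current[lpn] = len(self.versions[lpn]) - 1

        def recover(self, lpn):
            """Return the newest retained version that was read before being
            overwritten -- the suspected pre-encryption content."""
            for data, was_read in reversed(self.versions[lpn][:-1]):
                if was_read:
                    return data
            return None

    ftl = SketchFTL()
    ftl.write(0, b"plaintext")
    ftl.read(0)                   # ransomware reads the file...
    ftl.write(0, b"ciphertext")   # ...and overwrites it with ciphertext
    print(ftl.recover(0))         # b'plaintext' is still recoverable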

    Letter from the Special Issue Editor

    Editorial work for DEBULL on a special issue covering data management on Storage Class Memory (SCM) technologies.

    TACKLING PERFORMANCE AND SECURITY ISSUES FOR CLOUD STORAGE SYSTEMS

    Building data-intensive applications and emerging computing paradigms (e.g., Machine Learning (ML), Artificial Intelligence (AI), and the Internet of Things (IoT)) in cloud computing environments is becoming the norm, given the many advantages in scalability, reliability, security, and performance. However, amid rapid changes in applications, system middleware, and underlying storage devices, service providers face new challenges in delivering performance and security isolation in the context of resources shared among multiple tenants. The gap between the decades-old storage abstraction and modern storage devices keeps widening, calling for software/hardware co-designs that provide more effective performance and security guarantees. This dissertation rethinks the storage subsystem from the device level to the system level and proposes new designs at different levels to tackle performance and security issues for cloud storage systems. In the first part, we present an event-based SSD (Solid State Drive) simulator that models modern protocols, firmware, and the storage backend in detail. The proposed simulator can capture the nuances of SSD internal state under various I/O workloads, helping researchers understand the impact of various SSD designs and workload characteristics on end-to-end performance. In the second part, we study the security challenges of shared in-storage computing infrastructures. Many cloud providers offer isolation at multiple levels to secure data and instances; however, security measures in emerging in-storage computing infrastructures have not been studied. We first investigate the attacks that could be conducted by offloaded in-storage programs in a multi-tenant cloud environment. To defend against these attacks, we build a lightweight Trusted Execution Environment, IceClave, to enable security isolation between in-storage programs and internal flash management functions. We show that while enforcing security isolation in the SSD controller with minimal hardware cost, IceClave retains the performance benefit of in-storage computing, delivering up to 2.4x better performance than the conventional host-based trusted computing approach. In the third part, we investigate the performance interference caused by other tenants' I/O flows. We demonstrate that I/O resource sharing can often lead to performance degradation and instability. The block device abstraction fails to expose SSD parallelism or to pass application requirements to the device. To this end, we propose a software/hardware co-design that enforces performance isolation by bridging this semantic gap. Our design significantly improves QoS (Quality of Service) by reducing throughput penalties and tail latency spikes. Lastly, we explore more effective I/O control to address contention in the storage software stack. We show that the state-of-the-art resource control mechanism, Linux cgroups, is insufficient for controlling I/O resources: inappropriate cgroup configurations may even hurt the performance of co-located workloads under memory-intensive scenarios. We add kernel support for limiting page cache usage per cgroup and achieving I/O proportionality.
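    For readers unfamiliar with the baseline being critiqued, the Python sketch below configures the standard cgroup v2 I/O controls (io.weight for proportional sharing, io.max for hard caps) by writing the usual control files. Paths, device numbers, and limits are illustrative assumptions, and the per-cgroup page cache accounting the dissertation adds is kernel work not shown here.

    from pathlib import Path

    CG = Path("/sys/fs/cgroup/tenant-a")    # hypothetical tenant cgroup

    def setup_io_limits():
        CG.mkdir(exist_ok=True)
        # Proportional weight (1..10000): relative I/O share under contention.
        (CG / "io.weight").write_text("default 200\n")
        # Hard caps for block device 259:0: bytes/s and IOPS, read and write.
        (CG / "io.max").write_text(
            "259:0 rbps=104857600 wbps=52428800 riops=8000 wiops=4000\n")
        # memory.high throttles all of the cgroup's memory, page cache
        # included -- the coarse control the dissertation refines.
        (CG / "memory.high").write_text(str(512 * 1024 * 1024))

    if __name__ == "__main__":
        setup_io_limits()        # requires root and a cgroup v2 hierarchy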

    High-Density Solid-State Memory Devices and Technologies

    This Special Issue aims to examine high-density solid-state memory devices and technologies from various standpoints in an attempt to foster their continuous success in the future. Considering that a broadening range of applications will likely offer different types of solid-state memories their chance in the spotlight, the Special Issue is not focused on a specific storage solution but rather embraces all the most relevant solid-state memory devices and technologies currently on stage. The subjects dealt with are correspondingly wide-ranging: from process and design issues and innovations to experimental and theoretical analyses of device operation, and from the performance and reliability of memory devices and arrays to the exploitation of solid-state memories in pursuing new computing paradigms.

    Advances in Solid State Circuit Technologies

    This book brings together contributions from experts to describe the current status of important topics in solid-state circuit technologies. It consists of 20 chapters grouped under the following categories: general information, circuits and devices, materials, and characterization techniques. The chapters have been written by renowned experts in their respective fields, making this book valuable to the integrated circuits and materials science communities. It is intended for a diverse readership, including electrical engineers and materials scientists in industry and academic institutions. Readers will be able to familiarize themselves with the latest technologies in the various fields.

    Remote aerial data acquisition and capture project (RADAC)

    The RADAC Project encompasses the design and prototype implementation of a system for low-cost aerial sensor data acquisition. It includes a Ground Transponder Unit (GTU) and an Aerial Interrogation System (AIS) mounted under an aircraft. The GTU captures and transmits water-meter readings; the AIS initiates measurements and processes and displays the results. The proposed system is based on RF devices conforming to IEEE 802.15.4 (IEEE 2006) in association with a small, low-cost, single-chip camera and microcontrollers. During a consultancy to a large Queensland government authority with approximately 8000 water meters in regional and remote parts of the state, it was realised that considerable savings could be made in managing the water resources and the human resources needed to read these meters at three-month intervals. The project calculates the RF subsystem performance in terms of gain, beamwidth, return loss, bandwidth, and matching of the antenna into the RF transceiver device; designs the executive data acquisition system; designs and implements the interrogation microcontroller; designs the PCB for the RF transceiver; calculates power usage and designs the power supply; and designs and calculates the high-gain helical antenna. Figure 1 shows the block diagram of the Ground Transponder Unit and a symbolic AIS. The potential saving in maintenance costs to industry from taking measurements remotely is significant enough to warrant further investigation with industry, and there is potential for adoption in other resource sectors.
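    As a rough illustration of the antenna figures involved, the Python sketch below applies Kraus's classical approximations for an axial-mode helical antenna. The 2.45 GHz operating frequency (the band commonly used by IEEE 802.15.4) and the helix geometry are assumptions for illustration, not the project's actual design values.

    import math

    C_LIGHT = 3.0e8   # speed of light, m/s

    def helix_estimates(freq_hz, turns, circ_wl=1.0, spacing_wl=0.25):
        """Kraus approximations for an axial-mode helix.
        circ_wl, spacing_wl: circumference and turn spacing in wavelengths."""
        wavelength = C_LIGHT / freq_hz
        gain = 15 * circ_wl**2 * turns * spacing_wl            # power ratio
        hpbw = 52 / (circ_wl * math.sqrt(turns * spacing_wl))  # degrees
        feed_r = 140 * circ_wl   # nominal feed resistance, ohms
        return {
            "wavelength_m": round(wavelength, 4),
            "axial_length_m": round(turns * spacing_wl * wavelength, 3),
            "gain_dBi": round(10 * math.log10(gain), 1),
            "half_power_beamwidth_deg": round(hpbw, 1),
            "feed_resistance_ohm": feed_r,
        }

    # A 12-turn helix at 2.45 GHz: ~16.5 dBi gain, ~30 degree beamwidth, and
    # a ~140 ohm feed that must be matched to the transceiver's 50 ohms.
    print(helix_estimates(freq_hz=2.45e9, turns=12))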

    Designs for increasing reliability while reducing energy and increasing lifetime

    In recent decades, computing technology has experienced tremendous developments. For instance, transistor feature sizes have halved roughly every two years, consistently following Moore's law since it was first stated; consequently, the number of transistors and the core count per chip double with each generation. Similarly, petascale systems capable of processing more than one quadrillion (10^15) calculations per second have been developed, and exascale systems are predicted to be available by the year 2020. However, these developments in computer systems face a reliability wall. Transistor feature sizes are becoming so small that it is easier for high-energy particles to temporarily flip the state of a memory cell from 1 to 0 or from 0 to 1. Even if we assume that the fault rate per transistor stays constant with scaling, the increase in total transistor and core count per chip will significantly increase the number of faults in future desktop and exascale systems. Moreover, circuit ageing is exacerbated by increased manufacturing variability and thermal stresses; therefore, the lifetimes of processor structures are becoming shorter. At the same time, the limited power budget of computer systems such as mobile devices makes it attractive to scale down the supply voltage; however, when the voltage scales beyond the safe margin, especially to ultra-low levels, the error rate increases drastically. Furthermore, new memory technologies such as NAND flash offer only a limited nominal lifetime, beyond which they cannot guarantee correct data storage, leading to data retention problems. Because of these issues, reliability has become a first-class design constraint for contemporary computing, alongside power and performance. Reliability plays an increasingly important role when computer systems process sensitive and life-critical information such as health records, financial information, power regulation, and transportation. In this thesis, we present several reliability designs for detecting and correcting errors that occur in processor pipelines, L1 caches, and non-volatile NAND flash memories for various reasons. Our reliability solutions serve three main purposes. First, we improve the reliability of computer systems by detecting and correcting random, unpredictable errors such as bit flips or ageing errors. Second, we reduce the energy consumption of computer systems by allowing them to operate reliably at ultra-low voltage levels. Third, we increase the lifetime of new memory technologies by implementing efficient, low-cost reliability schemes.
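    To ground the error detection and correction discussion, the Python sketch below implements a Hamming(7,4) code, the classic single-error-correcting building block behind memory ECC. Real cache and NAND protection uses stronger codes (e.g., SEC-DED, BCH, or LDPC), so this is a minimal illustration rather than any of the thesis's designs.

    def encode(d):
        """d: list of 4 data bits -> 7-bit codeword (positions 1..7)."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4          # parity over positions 1, 3, 5, 7
        p2 = d1 ^ d3 ^ d4          # parity over positions 2, 3, 6, 7
        p3 = d2 ^ d3 ^ d4          # parity over positions 4, 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]

    def decode(c):
        """Correct up to one flipped bit and return the 4 data bits."""
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flip
        if syndrome:
            c = c[:]                      # don't mutate the caller's list
            c[syndrome - 1] ^= 1          # flip the erroneous bit back
        return [c[2], c[4], c[5], c[6]]

    word = encode([1, 0, 1, 1])
    word[4] ^= 1                          # a particle strike flips one bit
    assert decode(word) == [1, 0, 1, 1]   # the flip is detected and fixed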

    Sleep studies in mice - open and closed loop devices for untethered recording and stimulation

    Sleep is an important biological process that has been studied extensively to date. Sleep research typically involves mouse experiments that use heavy benchtop equipment or basic neural loggers to record ECoG/EMG signals, which are then processed offline on workstations. These systems limit experiments to simple open-loop recordings, due either to the tethered setup, which restricts animal movement, or to the lack of devices offering more advanced features without compromising portability. With rising interest in additional physiological features that can affect sleep, such as temperature, whose importance has been highlighted in several papers [1][2][3], and with advances in optogenetic stimulation allowing precise temporal and spatial neural control, there is now unprecedented demand for experimental setups supporting new closed-loop paradigms. To address this, this thesis presents compact, lightweight neural logging devices that are capable not only of measuring ECoG and EMG signals for core sleep analysis but also of taking high-resolution temperature recordings and delivering optogenetic stimuli with fully adjustable parameters. Together with an embedded on-board automatic sleep-stage scoring algorithm, the device will allow researchers for the first time to quickly uncover the role a neural circuit plays in sleep regulation through selective neural stimulation while the animal is in the target sleep vigilance state. Original contributions include: the development of two novel multichannel neural logging devices, one for core sleep analysis and another for closed-loop experimentation; the development and implementation of a lightweight, fast, and highly accurate automatic on-line sleep-stage scoring algorithm; and the development of a custom optogenetic coupler compatible with most current optogenetic setups for LED-optical fibre coupling.
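    For a sense of how on-line sleep-stage scoring can work on such a device, the Python sketch below classifies fixed epochs from EMG power and ECoG band ratios, a common threshold-based approach. The sampling rate, frequency bands, and thresholds are illustrative assumptions, not the thesis's algorithm.

    import numpy as np

    FS = 256                      # sampling rate in Hz, assumed

    def band_power(x, lo, hi, fs=FS):
        """Mean spectral power of signal x in the [lo, hi] Hz band."""
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2
        return psd[(freqs >= lo) & (freqs <= hi)].mean()

    def score_epoch(ecog, emg, emg_thresh=1.0, delta_ratio_thresh=0.5):
        """Score one epoch as 'wake', 'nrem', or 'rem'."""
        emg_power = np.mean(emg ** 2)
        delta = band_power(ecog, 0.5, 4.0)    # slow-wave activity
        theta = band_power(ecog, 6.0, 9.0)    # prominent in REM in mice
        total = band_power(ecog, 0.5, 30.0)
        if emg_power > emg_thresh:
            return "wake"                     # muscle tone -> awake
        if delta / total > delta_ratio_thresh:
            return "nrem"                     # atonia plus slow waves
        return "rem" if theta > delta else "nrem"

    # Example: a 4 s epoch of synthetic slow-wave ECoG with silent EMG
    t = np.arange(0, 4, 1 / FS)
    ecog = np.sin(2 * np.pi * 2 * t)          # 2 Hz delta oscillation
    emg = 0.01 * np.random.randn(t.size)
    print(score_epoch(ecog, emg))             # -> 'nrem'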