
    Toward a Unified Performance and Power Consumption NAND Flash Memory Model of Embedded and Solid State Secondary Storage Systems

    This paper presents a set of models that describe the structure, functions, performance, and power consumption behavior of a flash storage subsystem. These models cover a large range of today's NAND flash memory applications. They are designed to be implemented in simulation tools for estimating and comparing the performance and power consumption of I/O requests on flash-based storage systems. Such tools can also help in designing and validating new flash storage systems and management mechanisms. This work is part of a global project that aims to build a framework for simulating complex flash storage hierarchies for performance and power consumption analysis. The tool will be highly configurable and modular, with several levels of usage complexity according to the intended aim: from the software user's point of view, for simulating storage systems, to the developer's point of view, for designing, testing, and validating new flash storage management systems.
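
    A minimal sketch of the kind of per-operation model such a simulator might implement. All geometry, timing, and current figures below are illustrative placeholders, not values from the paper, and real models would also account for FTL overheads, channel parallelism, and queuing.

```python
# Sketch: estimating latency and energy of an I/O request from
# per-operation NAND flash costs. All constants are assumptions.

PAGE_SIZE = 4096          # bytes per flash page (assumed geometry)
PAGES_PER_BLOCK = 64      # pages per erase block (assumed geometry)

# (latency in microseconds, supply current in mA) per basic operation
OP_COST = {
    "read":    (25.0,   25.0),   # page read
    "program": (200.0,  35.0),   # page program
    "erase":   (1500.0, 30.0),   # block erase
}

VDD = 3.3  # supply voltage in volts (assumption)

def op_energy_uj(op: str) -> float:
    """Energy of one operation in microjoules: E = V * I * t."""
    latency_us, current_ma = OP_COST[op]
    return VDD * (current_ma / 1000.0) * latency_us  # V * A * us = uJ

def estimate_request(num_bytes: int, write: bool) -> tuple[float, float]:
    """Estimate (latency_us, energy_uj) of a sequential I/O request."""
    pages = -(-num_bytes // PAGE_SIZE)  # ceiling division
    op = "program" if write else "read"
    latency = pages * OP_COST[op][0]
    energy = pages * op_energy_uj(op)
    if write:
        # Crude assumption: one block erase amortized per block written.
        blocks = -(-pages // PAGES_PER_BLOCK)
        latency += blocks * OP_COST["erase"][0]
        energy += blocks * op_energy_uj("erase")
    return latency, energy

print(estimate_request(64 * 1024, write=True))  # 16 pages + 1 erase
```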

    Elevating commodity storage with the SALSA host translation layer

    To satisfy increasing storage demands in both capacity and performance, industry has turned to multiple storage technologies, including Flash SSDs and SMR disks. These devices employ a translation layer that conceals the idiosyncrasies of their media and enables random access. Device translation layers are, however, inherently constrained: resources on the drive are scarce, they cannot be adapted to application requirements, and they lack visibility across multiple devices. As a result, the performance and durability of many storage devices are severely degraded. In this paper, we present SALSA: a translation layer that executes on the host and allows unmodified applications to better utilize commodity storage. SALSA supports a wide range of single- and multi-device optimizations and, because it is implemented in software, can adapt to specific workloads. We describe SALSA's design and demonstrate its significant benefits using microbenchmarks and case studies based on three applications: MySQL, the Swift object store, and a video server. (Presented at the 2018 IEEE 26th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, MASCOTS.)
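
    The core mechanism behind any such translation layer is an indirection map that turns random writes into sequential, log-structured writes on the medium. The sketch below illustrates that general technique only; it is not SALSA's actual design, and all names are invented for the example.

```python
# Sketch: a log-structured map from logical block addresses (LBAs)
# to physical block addresses (PBAs), maintained on the host.

class HostTranslationLayer:
    def __init__(self, num_physical_blocks: int):
        self.capacity = num_physical_blocks
        self.l2p = {}          # logical address -> physical address
        self.write_head = 0    # next free slot in the append-only log
        self.live = set()      # physical slots holding valid data

    def write(self, lba: int, data: bytes) -> int:
        """Append data at the log head and remap the LBA."""
        if self.write_head >= self.capacity:
            raise RuntimeError("log full: garbage collection needed")
        old = self.l2p.get(lba)
        if old is not None:
            self.live.discard(old)   # the old copy becomes garbage
        pba = self.write_head
        self.write_head += 1
        self.l2p[lba] = pba
        self.live.add(pba)
        # A real layer would issue device_write(pba, data) here.
        return pba

    def read(self, lba: int):
        """Resolve an LBA to its current physical location."""
        return self.l2p.get(lba)

ftl = HostTranslationLayer(num_physical_blocks=1024)
ftl.write(7, b"hello")
ftl.write(7, b"hello again")   # remaps LBA 7; slot 0 is now garbage
print(ftl.read(7))             # -> 1
```

    Running the translation on the host rather than on the drive is what lets the mapping policy see application behavior and span multiple devices, which device-resident layers cannot do.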

    Procedures and tools for acquisition and analysis of volatile memory on android smartphones

    Mobile phone forensics has become more prominent as mobile phones have become ubiquitous in both personal and business use. Android smartphones show tremendous growth in global market share. Much prior work covers procedures and techniques for acquiring and analysing the non-volatile memory in mobile phones. The physical memory (RAM) of a smartphone, however, may retain incriminating evidence that an examiner can acquire and analyse. This study describes a proper procedure for acquiring the volatile memory of an Android smartphone and discusses the use of Linux Memory Extraction (LiME) for dumping it. The study also discusses analysing the resulting memory image with Volatility 2.3, in particular the tool's analysis capabilities. Despite their maturity, both tools raise two major concerns. First, the examiner has to gain root privileges before executing LiME. Second, neither tool offers a generic solution or approach. On the other hand, there is currently no other tool or option that gives the same results as LiME and Volatility 2.3.
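
    A sketch of this acquisition workflow, driven from a host workstation over adb. The module filename and the Volatility profile are placeholders: LiME must be compiled against the exact device kernel, and a matching Linux profile must be built for Volatility. Root access on the handset is required, as the abstract notes.

```python
# Sketch: LiME acquisition followed by Volatility 2.x analysis.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Push the device-specific LiME module and load it as root.
#    'path' and 'format' are LiME's actual module parameters.
run(["adb", "push", "lime-device.ko", "/sdcard/lime.ko"])
run(["adb", "shell", "su", "-c",
     "insmod /sdcard/lime.ko 'path=/sdcard/ram.lime format=lime'"])

# 2. Pull the memory image back to the workstation.
run(["adb", "pull", "/sdcard/ram.lime", "ram.lime"])

# 3. List running processes with Volatility 2.x (Python 2 based);
#    the profile name below is hypothetical and kernel-specific.
run(["python2", "vol.py", "-f", "ram.lime",
     "--profile=LinuxGoldfish-3_4ARM", "linux_pslist"])
```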

    Letter from the Special Issue Editor

    Editorial work for DEBULL on a special issue on data management on Storage Class Memory (SCM) technologies.

    Solid State Drive

    This project documents the design and implementation of a solid state drive (SSD). SSDs are non-volatile memory storage devices that compete with hard disk drives. SSDs rely on flash memory, a type of non-volatile memory that is electrically erased and programmed. The appeal of SSDs lies in the fact that they offer fast, reliable, and durable storage. The goal of this project is a working external SSD built from scratch.
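
    The "electrically erased and programmed" property has a concrete consequence any SSD design must handle: pages cannot be overwritten in place, and erasure happens a whole block at a time. A toy illustration of those semantics, with geometry simplified for the example:

```python
# Sketch: NAND flash erase-before-program semantics. Erasing sets all
# cells to 1; programming can only clear bits (1 -> 0), one page at a
# time, and a page cannot be reprogrammed until its block is erased.

PAGES_PER_BLOCK = 4
PAGE_BYTES = 4

class FlashBlock:
    def __init__(self):
        self.pages = [bytes([0xFF] * PAGE_BYTES)   # erased = all ones
                      for _ in range(PAGES_PER_BLOCK)]
        self.programmed = [False] * PAGES_PER_BLOCK

    def erase(self):
        """Erase the whole block: every cell returns to 1."""
        self.__init__()

    def program(self, page: int, data: bytes):
        """Program a page once; in-place overwrite is not allowed."""
        if self.programmed[page]:
            raise RuntimeError("page already programmed: erase first")
        # Programming can only pull bits from 1 toward 0.
        self.pages[page] = bytes(a & b for a, b in
                                 zip(self.pages[page], data))
        self.programmed[page] = True

blk = FlashBlock()
blk.program(0, b"\x12\x34\x56\x78")
blk.erase()                      # required before rewriting page 0
blk.program(0, b"\x9a\xbc\xde\xf0")
```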

    HMC-Based Accelerator Design For Compressed Deep Neural Networks

    Deep Neural Networks (DNNs) offer remarkable classification and regression performance in many high-dimensional problems and have been widely deployed in real-world cognitive applications. The high computational cost of DNNs greatly hinders their deployment in resource-constrained applications, real-time systems, and edge computing platforms. Moreover, the energy and performance cost of moving data between the memory hierarchy and the computational units is higher than that of the computation itself. To overcome this memory bottleneck, accelerator designs improve data locality and temporal data reuse. In a further attempt to improve data locality, memory manufacturers have introduced 3D-stacked memory, in which multiple layers of memory arrays are stacked on top of each other. Inheriting the concept of Processing-In-Memory (PIM), some 3D-stacked memory architectures also include a logic layer that integrates general-purpose computational logic directly within main memory to exploit the high internal bandwidth during computation. This dissertation investigates hardware/software co-design for a neural network accelerator. Specifically, it introduces a two-phase filter pruning framework for model compression and an accelerator tailored for efficient DNN execution on the Hybrid Memory Cube (HMC), which can dynamically offload primitives and functions to the PIM logic layer through a latency-aware scheduling controller. The compression framework formulates filter pruning as an optimization problem and proposes a filter selection criterion measured by conditional entropy. The key idea is to establish a quantitative connection between filters and model accuracy, defined as the conditional entropy over the filters in a convolutional layer, i.e., the distribution of entropy conditioned on the network loss. Based on this definition, the pruning efficiencies of global and layer-wise pruning strategies are compared, and a two-phase pruning method is proposed. The proposed method removes 88% of the filters and reduces inference time by 46% on VGG16 within 2% accuracy degradation.
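
    The mechanics of score-based filter pruning can be shown in a few lines. The dissertation's criterion is conditional entropy of filters given the network loss; the sketch below substitutes a plain activation-entropy score as a simplified stand-in, just to illustrate the score-then-drop procedure.

```python
# Sketch: score every filter in a convolutional layer, then prune the
# lowest-scoring fraction. The entropy score here is a stand-in, not
# the dissertation's conditional-entropy criterion.
import numpy as np

def activation_entropy(acts: np.ndarray, bins: int = 32) -> float:
    """Shannon entropy of one filter's activation histogram."""
    hist, _ = np.histogram(acts, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_filters_to_prune(layer_acts: np.ndarray,
                            prune_ratio: float) -> np.ndarray:
    """layer_acts: (num_filters, num_samples) activations for one
    layer. Returns indices of the filters to remove."""
    scores = np.array([activation_entropy(f) for f in layer_acts])
    k = int(prune_ratio * len(scores))
    return np.argsort(scores)[:k]   # lowest-scoring filters first

rng = np.random.default_rng(0)
acts = rng.standard_normal((64, 10_000))   # toy layer: 64 filters
print(select_filters_to_prune(acts, prune_ratio=0.5))
```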