
    High Performance and Secure Execution Environments for Emerging Architectures

    Energy efficiency and performance have long been the driving forces behind system architecture and design. Given the diversity of workloads and the significant performance and power improvements gained by running workloads on customized processing elements, system vendors are shifting towards new system architectures (e.g., FAM or HMM). Such architectures are designed to improve system performance, allow easier data sharing, and reduce overall power consumption. Additionally, current computing systems expose a very wide attack surface, largely because they comprise tens to hundreds of subsystems that may be manufactured by different vendors. Vulnerabilities, backdoors, and potentially hardware Trojans injected anywhere in the system pose a serious risk to the confidentiality and integrity of data in computing systems. Thus, adding security features is becoming an essential requirement in modern systems. To achieve these performance improvements and power reductions, emerging non-volatile memories (NVMs) are a very appealing option as the main memory building block, or as part of it. However, integrating NVMs into the memory system raises several challenges. First, if the NVM is used as the sole main memory, incorporating security measures can stress the NVM's limited write endurance and reduce its lifetime. Second, integrating NVM as part of main memory, as in DRAM-NVM hybrid memory systems, can impose higher performance overheads on persistent applications. Third, integrating NVM as a memory extension, as in fabric-attached memory (FAM) architectures, can cause high contention over the security metadata cache; in addition, memory sharing in FAM architectures can lead to security metadata coherence problems. In this dissertation, we study these problems and propose novel solutions that enable secure and efficient integration of NVMs in these emerging architectures.
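    The write-endurance and metadata-cache concerns above stem from the security metadata (e.g., encryption counters) that must be updated and persisted alongside every data write. Below is a minimal sketch, assuming counter-mode memory encryption at 64-byte line granularity with a toy hash-derived pad; the EncryptedNVM class, the line size, and the pad derivation are illustrative assumptions, not details of the dissertation's design.

```python
# Minimal sketch: counter-mode encryption of NVM lines and the extra
# metadata writes it implies. All parameters here are illustrative.
from hashlib import sha512

LINE = 64  # bytes per memory line (assumed)

class EncryptedNVM:
    def __init__(self):
        self.lines = {}           # line address -> ciphertext
        self.counters = {}        # line address -> per-line write counter
        self.data_writes = 0      # NVM writes for data lines
        self.metadata_writes = 0  # NVM writes for counter (metadata) blocks

    def _pad(self, addr: int, counter: int) -> bytes:
        # One-time pad derived from (key, address, counter), as in counter mode.
        return sha512(f"key|{addr}|{counter}".encode()).digest()[:LINE]

    def write_line(self, addr: int, plaintext: bytes) -> None:
        # Every data write bumps the per-line counter so a pad is never reused.
        ctr = self.counters.get(addr, 0) + 1
        self.counters[addr] = ctr
        pad = self._pad(addr, ctr)
        self.lines[addr] = bytes(p ^ q for p, q in zip(plaintext.ljust(LINE, b"\0"), pad))
        self.data_writes += 1
        # Without a write-back metadata cache, the counter block holding this
        # line's counter must be persisted too -- roughly doubling NVM writes.
        self.metadata_writes += 1

    def read_line(self, addr: int) -> bytes:
        pad = self._pad(addr, self.counters[addr])
        return bytes(p ^ q for p, q in zip(self.lines[addr], pad))

if __name__ == "__main__":
    nvm = EncryptedNVM()
    for i in range(1000):
        nvm.write_line(i % 8, b"hot data")       # a few hot lines, written repeatedly
    assert nvm.read_line(0).rstrip(b"\0") == b"hot data"
    print(nvm.data_writes, nvm.metadata_writes)  # 1000 data writes + 1000 metadata writes
```

    A write-back metadata cache absorbs much of this counter-block traffic, but in a fabric-attached memory setting that cache becomes a shared, contended resource, and sharing the same memory across nodes raises the metadata coherence issue noted above.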

    Improving Data Management and Data Movement Efficiency in Hybrid Storage Systems

    University of Minnesota Ph.D. dissertation. July 2017. Major: Computer Science. Advisor: David Du. 1 computer file (PDF); ix, 116 pages.
    In the big data era, the large volumes of data being continuously generated drive the emergence of high-performance, large-capacity storage systems. To reduce the total cost of ownership, storage systems are built in a more composite way from many different types of emerging storage technologies and devices, including Storage Class Memory (SCM), Solid State Drives (SSDs), Shingled Magnetic Recording (SMR) drives, Hard Disk Drives (HDDs), and even off-premise cloud storage. To make better use of each type of storage, industry has adopted multi-tier storage that dynamically places hot data in faster tiers and cold data in slower tiers. Data movement happens both among devices within a single system and between devices connected via various networks. Toward improving data management and data movement efficiency in such hybrid storage systems, this work makes the following contributions:
    To bridge the large semantic gap between applications and modern storage systems, passing small pieces of useful information (I/O access hints) from upper layers down to the block storage layer can greatly improve application performance or ease data management in heterogeneous storage systems. We present and develop a generic, flexible framework, called HintStor, to execute and evaluate various I/O access hints on heterogeneous storage systems with only minor modifications to the kernel and applications. The design of HintStor comprises a new application/user-level interface, a file system plugin, and a block storage data manager. With HintStor, storage systems composed of various storage devices can perform pre-devised data placement, space reallocation, and data migration policies assisted by the added access hints.
    Each storage device/technology has its own price-performance tradeoffs and idiosyncrasies with respect to the workload characteristics it best supports. To explore internal access patterns and thus efficiently place data on storage systems with fully connected differential pools (each pool consists of storage devices of a particular type, and data can move from any device to any other device instead of moving tier by tier), we propose a chunk-level, storage-aware workload analyzer framework, abbreviated as ChewAnalyzer. With ChewAnalyzer, the storage manager can adequately distribute and move data chunks across the different storage pools.
    To reduce the duplicate content transferred between local storage devices and devices in remote data centers, an inline Network Redundancy Elimination (NRE) process with a Content-Defined Chunking (CDC) policy can obtain a higher Redundancy Elimination (RE) ratio, but at a considerably higher computational cost than fixed-size chunking. We build an inline NRE appliance that incorporates an improved FPGA-based scheme to speed up CDC processing. To utilize the hardware resources efficiently, the whole NRE process is handled by a Virtualized NRE (VNRE) controller. The uniqueness of the VNRE we developed lies in its ability to exploit the redundancy patterns of different TCP flows and customize the chunking process to achieve a higher RE ratio.
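    To make the fixed-size versus content-defined chunking tradeoff concrete, here is a minimal CDC sketch built on a simple polynomial rolling hash. The window size, boundary mask, chunk-size bounds, and the chunk_boundaries function are illustrative assumptions, not parameters of the VNRE appliance or its FPGA implementation.

```python
# Minimal content-defined chunking (CDC) sketch using a polynomial rolling hash.
# All constants below are illustrative choices, not values from the VNRE work.

WINDOW = 48            # bytes in the rolling window
MASK = (1 << 13) - 1   # cut when the low 13 hash bits are zero (~8 KiB between cuts)
MIN_CHUNK = 2 * 1024   # lower bound on chunk size, in bytes
MAX_CHUNK = 64 * 1024  # upper bound on chunk size, in bytes
PRIME = 31

def chunk_boundaries(data: bytes):
    """Yield (start, end) offsets of content-defined chunks in `data`."""
    start = 0                      # beginning of the current chunk
    h = 0                          # rolling hash of the bytes in the window
    power = PRIME ** (WINDOW - 1)  # weight of the byte that leaves the window
    for i, b in enumerate(data):
        if i - start >= WINDOW:
            # Slide the window: drop the contribution of the outgoing byte.
            h -= data[i - WINDOW] * power
        h = h * PRIME + b
        size = i - start + 1
        # Cut where the content-derived hash matches the mask (within size bounds),
        # so boundaries depend on the data itself rather than absolute offsets.
        if (size >= MIN_CHUNK and (h & MASK) == 0) or size >= MAX_CHUNK:
            yield (start, i + 1)
            start, h = i + 1, 0
    if start < len(data):
        yield (start, len(data))

if __name__ == "__main__":
    import os
    blob = os.urandom(256 * 1024)
    shifted = b"edit" + blob  # a small insertion near the front
    chunks = {blob[s:e] for s, e in chunk_boundaries(blob)}
    chunks_shifted = {shifted[s:e] for s, e in chunk_boundaries(shifted)}
    # Only the chunks near the insertion change; later boundaries realign.
    print(f"shared chunks after a 4-byte insert: {len(chunks & chunks_shifted)} of {len(chunks)}")
```

    Because cut points are declared wherever the content-derived hash matches the mask, an insertion only shifts nearby boundaries and downstream chunks still match previously seen ones, which is what yields the higher RE ratio; the price is a rolling-hash update on every byte, which is the kind of per-byte computation the appliance offloads to the FPGA.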