17 research outputs found

    Simurgh: a fully decentralized and secure NVMM user space file system

    The availability of non-volatile main memory (NVMM) has started a new era for storage systems, and NVMM-specific file systems can support the extremely high data and metadata rates required by many HPC and data-intensive applications. Scaling metadata performance within NVMM file systems is nevertheless often restricted by the Linux kernel storage stack, while simply moving metadata management to user space can compromise security or flexibility. This paper introduces Simurgh, a hardware-assisted user space file system with decentralized metadata management that allows secure metadata updates from within user space. Simurgh guarantees consistency, durability, and ordering of updates without sacrificing scalability. Security is enforced by only allowing NVMM access from protected user space functions, which can be implemented through two proposed instructions. Comparisons with other NVMM file systems show that Simurgh improves metadata performance by up to 18x and application performance by up to 89% compared to the second-fastest file system. This work has been supported by the European Commission's BigStorage project H2020-MSCA-ITN2014-642963. It is also supported by the Big Data in Atmospheric Physics (BINARY) project, funded by the Carl Zeiss Foundation under Grant No. P2018-02-003.
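
    The core idea above, confining raw NVMM access to protected user-space entry points, can be illustrated with a minimal sketch. The Python below only simulates the concept: the class and decorator names (NVMMRegion, protected_update) are hypothetical, and the entry/exit bookkeeping merely stands in for the two hardware instructions the paper proposes; it is not Simurgh's actual API.

    import functools

    class NVMMRegion:
        """Stand-in for a byte-addressable NVMM mapping (hypothetical, not Simurgh's)."""
        def __init__(self, size):
            self._buf = bytearray(size)
            self._protected_depth = 0            # >0 only while inside a protected function

        def write(self, offset, data):
            if self._protected_depth == 0:       # refuse raw writes from arbitrary code
                raise PermissionError("NVMM metadata write outside a protected function")
            self._buf[offset:offset + len(data)] = data

    def protected_update(region):
        """Mark a function as a sanctioned metadata-update entry point."""
        def wrap(fn):
            @functools.wraps(fn)
            def inner(*args, **kwargs):
                region._protected_depth += 1     # stands in for a protected-entry instruction
                try:
                    return fn(*args, **kwargs)
                finally:
                    region._protected_depth -= 1 # stands in for a protected-exit instruction
            return inner
        return wrap

    nvmm = NVMMRegion(4096)

    @protected_update(nvmm)
    def create_inode(slot, name):
        nvmm.write(slot * 64, name.encode().ljust(64, b"\0"))

    create_inode(0, "file.txt")                  # allowed: entered through the protected path
    # nvmm.write(0, b"rogue")                    # would raise PermissionError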

    A Survey on the Integration of NAND Flash Storage in the Design of File Systems and the Host Storage Software Stack

    With the ever-increasing amount of data generated in the world, estimated to reach over 200 zettabytes by 2025, pressure on efficient data storage systems is intensifying. The shift from HDDs to flash-based SSDs is one of the most fundamental changes in storage technology, increasing performance capabilities significantly. However, flash storage has different characteristics from earlier HDD technology, so existing storage software was unsuited to leveraging its capabilities. As a result, a plethora of storage applications have been designed to integrate better with flash storage and align with its characteristics. In this literature study we evaluate the effect the introduction of flash storage has had on the design of file systems, which provide one of the most essential mechanisms for managing persistent storage. We analyze the mechanisms for effectively managing flash storage, managing the overheads of the introduced design requirements, and leveraging the capabilities of flash storage. Numerous methods have been adopted in file systems; they prominently revolve around similar design decisions, namely adhering to flash hardware constraints and limiting software intervention. The design of future storage software remains important given the constant growth of flash-based storage devices and interfaces, which provides increasing opportunities to enhance flash integration in the host storage software stack.
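
    One recurring design decision the survey points to, adhering to flash constraints by avoiding in-place updates, can be sketched as a toy log-structured page store. The Python below is a simplified illustration under assumed names (FlashLog, l2p); it models only out-of-place writes and logical-to-physical remapping, not any particular file system, and it omits garbage collection of the stale pages such updates leave behind.

    PAGES_PER_BLOCK = 4

    class FlashLog:
        """Toy out-of-place page store; names and layout are illustrative only."""
        def __init__(self, num_blocks):
            self.flash = [[None] * PAGES_PER_BLOCK for _ in range(num_blocks)]
            self.l2p = {}                      # logical page -> (block, page) mapping
            self.write_ptr = (0, 0)            # next clean (block, page) to program

        def write(self, logical_page, data):
            block, page = self.write_ptr
            self.flash[block][page] = data           # program a clean page; never overwrite
            self.l2p[logical_page] = (block, page)   # old copy becomes stale (garbage)
            page += 1
            if page == PAGES_PER_BLOCK:              # move on; blocks are erased as a whole
                block, page = block + 1, 0
            self.write_ptr = (block, page)

        def read(self, logical_page):
            block, page = self.l2p[logical_page]
            return self.flash[block][page]

    log = FlashLog(num_blocks=8)
    log.write(0, b"v1")
    log.write(0, b"v2")                        # the update lands on a new page
    assert log.read(0) == b"v2"                # reads follow the remapped location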

    Securing Cloud File Systems using Shielded Execution

    Cloud file systems offer organizations a scalable and reliable file storage solution. However, cloud file systems have become prime targets for adversaries, and traditional designs are not equipped to protect organizations against the myriad of attacks that may be initiated by a malicious cloud provider, co-tenant, or end-client. Recently proposed designs leveraging cryptographic techniques and trusted execution environments (TEEs) still force organizations to make undesirable trade-offs, consequently leading to either security, functional, or performance limitations. In this paper, we introduce TFS, a cloud file system that leverages the security capabilities provided by TEEs to bootstrap new security protocols that meet real-world security, functional, and performance requirements. Through extensive security and performance analyses, we show that TFS can ensure stronger security guarantees while still providing practical utility and performance w.r.t. state-of-the-art systems; compared to the widely-used NFS, TFS achieves up to 2.1X speedups across micro-benchmarks and incurs <1X overhead for most macro-benchmark workloads. TFS demonstrates that organizations need not sacrifice file system security to embrace the functional and performance advantages of outsourcing.

    Rapid Recovery of Program Execution Under Power Failures for Embedded Systems with NVM

    After power is restored, restarting an interrupted program from its initial state can have a negative impact, and some programs are not recoverable at all. To enable rapid recovery of program execution after power failures in embedded systems with NVM, the execution state at checkpoints is backed up to NVM. However, frequent checkpoints shorten the lifetime of the NVM and incur significant write overhead. In this paper, a technique that triggers checkpoints on function calls is proposed to reduce writes to NVM. The evaluation results show average reductions of 99.8% and 80.5% in NVM backup size for stack backup, compared to the log-based and step-based methods, respectively. To achieve this more effectively, we also propose pseudo-function calls that add backup points to reduce recovery costs, and an exponential incremental call-based backup method to reduce backup costs in loops. To further prevent the NVM contents from becoming cluttered and exhausting the NVM, a method for cleaning NVM contents that are useless for restoration is proposed. Building on the aforementioned problems and techniques, the recovery scheme is presented, and a case study is used to analyze how to recover rapidly under different power failures. Comment: This paper has been accepted for publication in Microprocessors and Microsystems on March 15, 202
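
    The call-triggered checkpointing policy described above can be sketched in a few lines. The Python below is only an illustration under assumed names (checkpoint_on_call, recover, checkpoint.nvm): it emulates NVM with a file and shows backups being taken at function-call boundaries rather than at every step; it is not the paper's embedded implementation, and it omits the pseudo-function calls and loop optimizations.

    import functools, os, pickle

    NVM_FILE = "checkpoint.nvm"                # a file stands in for persistent NVM

    def checkpoint_on_call(fn):
        """Back up the passed-in state whenever the decorated function is entered."""
        @functools.wraps(fn)
        def wrapper(state, *args, **kwargs):
            with open(NVM_FILE, "wb") as f:    # one small backup per call boundary
                pickle.dump({"resume_at": fn.__name__, "state": dict(state)}, f)
            return fn(state, *args, **kwargs)
        return wrapper

    def recover():
        """After power-up, reload the most recent checkpoint instead of restarting."""
        if os.path.exists(NVM_FILE):
            with open(NVM_FILE, "rb") as f:
                return pickle.load(f)
        return None                            # no checkpoint: fall back to a cold start

    @checkpoint_on_call
    def stage_two(state):
        state["result"] = state["x"] * 2
        return state

    @checkpoint_on_call
    def stage_one(state):
        state["x"] = 21
        return stage_two(state)

    print(stage_one({}))                       # runs to completion, checkpointing on each call
    print(recover())                           # what a restart after a power failure would load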

    Mobile IP movement detection optimisations in 802.11 wireless LANs

    The IEEE 802.11 standard was developed to support the establishment of highly flexible wireless local area networks (wireless LANs). However, when an 802.11 mobile node moves from a wireless LAN on one IP network to a wireless LAN on a different network, an IP layer handoff occurs. During the handoff, the mobile node's IP settings must be updated in order to re-establish its IP connectivity at the new point of attachment. The Mobile IP protocol allows a mobile node to perform an IP handoff without breaking its active upper-layer sessions. Unfortunately, these handoffs introduce large latencies into a mobile node's traffic, during which packets are lost. As a result, the mobile node's upper-layer sessions and applications suffer significant disruptions due to this handoff latency. One of the main components of a Mobile IP handoff is the movement detection process, whereby a mobile node senses that it is attached to a new IP network. This procedure contributes significantly to the total Mobile IP handover latency and resulting disruption. This study investigates different mechanisms that aim to lower movement detection delays and thereby improve Mobile IP performance. These mechanisms are considered specifically within the context of 802.11 wireless LANs. In general, a mobile node detects attachment to a new network when a periodic IP level broadcast (advertisement) is received from that network. It will be shown that the elimination of this dependence on periodic advertisements, and the reliance instead on external information from the 802.11 link layer, results in both faster and more efficient movement detection. Furthermore, a hybrid system is proposed that incorporates several techniques to ensure that movement detection performs reliably within a variety of different network configurations. An evaluation framework is designed and implemented that supports the assessment of a wide range of movement detection mechanisms. This test bed allows Mobile IP handoffs to be analysed in detail, with specific focus on the movement detection process. The performance of several movement detection optimisations is compared using handoff latency and packet loss as metrics. The evaluation framework also supports real-time Voice over IP (VoIP) traffic. This is used to ascertain the effects that different movement detection techniques have on the output voice quality. These evaluations not only provide a quantitative performance analysis of these movement detection mechanisms, but also a qualitative assessment based on a VoIP application.
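
    The link-layer-assisted movement detection idea can be illustrated with a small event-driven sketch. The Python below uses hypothetical names (MobileNode, on_link_event, solicit) and a fabricated association event: the point being shown is reacting to an 802.11 (re)association immediately instead of waiting for the next periodic advertisement, not the thesis's actual hybrid mechanism or test bed.

    class MobileNode:
        """Toy mobile node; event and method names are hypothetical."""
        def __init__(self, home_network):
            self.current_network = home_network

        def solicit(self, access_point):
            # Placeholder for sending an agent/router solicitation through this AP;
            # here the "reply" simply names the AP's IP network.
            return access_point["network"]

        def on_link_event(self, event, access_point):
            # Link-layer trigger: probe on (re)association instead of waiting for
            # the next periodic IP-level advertisement.
            if event in ("ASSOCIATED", "REASSOCIATED"):
                advertised = self.solicit(access_point)
                if advertised != self.current_network:
                    self.handoff(advertised)

        def handoff(self, new_network):
            print(f"movement detected: {self.current_network} -> {new_network}")
            self.current_network = new_network   # re-registration with the new network would follow

    node = MobileNode(home_network="10.0.1.0/24")
    node.on_link_event("REASSOCIATED", {"bssid": "aa:bb:cc:dd:ee:02", "network": "10.0.2.0/24"})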

    Software tools for real-time simulation and control

    The objective of this thesis is to design and simulate a multi-agent-based energy management system for a shipboard power system in a hard real-time environment. The automatic reconfiguration of shipboard power systems is essential to improve survivability. Multi-agent technology is used in designing the reconfigurable energy management system using a self-stabilizing maximum flow algorithm. The agent-based energy management system is designed in a Matlab/Simulink environment. Reconfiguration is performed for several situations, including start-up, loss of an agent, limited available power, and distribution to priority-ranked loads. The number of steps taken to reach the global solution and the time taken are very promising. With the growing importance of timing accuracy in simulating control systems during design and development, there is an increased need for these simulations to run in a real-time environment. This research further focuses on software tools that support a hard real-time environment for running real-time simulations. A detailed survey has been conducted on freely available real-time operating systems and other software tools to set up a desktop PC supporting a real-time environment. Matlab/Simulink/RTW-RTAI was selected as the real-time computer-aided control design software for demonstrating real-time simulation of the agent-based energy management system. The timing accuracy of these simulations has been verified successfully.
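
    The reconfiguration problem described above can be posed as a maximum-flow computation from generation to loads. The Python below is a generic, centralized Edmonds-Karp sketch over a made-up topology (names such as gen1, load_hi, and load_lo are placeholders); it is not the self-stabilizing distributed algorithm or the Matlab/Simulink agents from the thesis, and load priorities are only reflected in the example topology, not enforced by the solver.

    from collections import deque, defaultdict

    def add_edge(cap, adj, u, v, c):
        cap[(u, v)] += c
        adj[u].append(v)
        adj[v].append(u)                      # reverse edge (capacity 0) for residual flow

    def max_flow(cap, adj, s, t):
        """Edmonds-Karp: repeatedly augment along shortest residual paths."""
        flow = defaultdict(int)
        total = 0
        while True:
            parent = {s: None}
            q = deque([s])
            while q and t not in parent:      # BFS for an augmenting path
                u = q.popleft()
                for v in adj[u]:
                    if v not in parent and cap[(u, v)] - flow[(u, v)] > 0:
                        parent[v] = u
                        q.append(v)
            if t not in parent:               # no augmenting path left
                return total, flow
            path, v = [], t                   # recover the path and its bottleneck
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            push = min(cap[e] - flow[e] for e in path)
            for u, v in path:
                flow[(u, v)] += push
                flow[(v, u)] -= push
            total += push

    cap, adj = defaultdict(int), defaultdict(list)
    add_edge(cap, adj, "S", "gen1", 60)       # limited available generation
    add_edge(cap, adj, "gen1", "load_hi", 60)
    add_edge(cap, adj, "gen1", "load_lo", 60)
    add_edge(cap, adj, "load_hi", "T", 40)    # high-priority load demand
    add_edge(cap, adj, "load_lo", "T", 40)    # lower-priority load demand

    served, flow = max_flow(cap, adj, "S", "T")
    print("served:", served, "high:", flow[("load_hi", "T")], "low:", flow[("load_lo", "T")])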