
    Evaluating Crash Consistency for PM Software using Intel Pin

    Ongoing advances in non-volatile memory technologies such as NVDIMMs have driven major progress in persistent memory (PM). PM is fast, byte addressable, and retains its contents without power. Unlike hard disks and SSDs, it allows direct manipulation of data in memory, and it avoids the file-system overhead that otherwise burdens applications when handling crashes. A persistent program must correctly implement crash-consistency mechanisms such as undo and redo logging so that it can recover to a consistent state after a failure. Because of volatile caching and reordering of writes within the memory hierarchy, crash-consistent software must carefully manage the order in which writes become persistent. Persistent-memory applications ensure the consistency of persistent data by inserting ordering points between writes to PM, allowing the construction of higher-level transaction mechanisms. PM systems provide new instructions, such as CLWB and SFENCE on x86 and DC CVAP on ARM, to enforce ordering, and high-level transactional libraries to ensure persistence. The crash-consistency guarantee requires that a program return to a consistent state and resume execution after a failure, so a testing tool must detect inconsistencies across the entire procedure of execution, recovery, and resumption. We therefore propose a new method that logs all I/O writes using the Intel Pin tool, replays them, and checks the consistency of the program by comparing the initial and final images. We check post-failure consistency by emulating a failure, dropping some of the logged writes during replay, and verifying whether the program can recover after the crash.
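    As a rough illustration of the tracing half of such an approach, the sketch below uses Pin's public instrumentation API to log every memory write (instruction pointer, address, size) to a trace file. The trace format and output file name are assumptions for illustration; this is not the paper's actual tool, and the replay and consistency-checking side is not shown.

```cpp
// Minimal Pin tool sketch: log each memory write so the writes can later be
// replayed against a copy of the initial PM image, with a suffix of the trace
// dropped to emulate a crash. API calls follow the public Pin interface; the
// trace format and "pmtrace.out" file name are illustrative assumptions.
#include <cstdio>
#include "pin.H"

static FILE *trace;

// Analysis routine: called before every instruction that writes memory.
static VOID RecordWrite(VOID *ip, VOID *addr, UINT32 size) {
    fprintf(trace, "W %p %p %u\n", ip, addr, size);
}

// Instrumentation callback: attach the analysis routine to memory writes.
static VOID Instruction(INS ins, VOID *v) {
    if (INS_IsMemoryWrite(ins)) {
        INS_InsertPredicatedCall(ins, IPOINT_BEFORE, (AFUNPTR)RecordWrite,
                                 IARG_INST_PTR,
                                 IARG_MEMORYWRITE_EA,
                                 IARG_MEMORYWRITE_SIZE,
                                 IARG_END);
    }
}

static VOID Fini(INT32 code, VOID *v) { fclose(trace); }

int main(int argc, char *argv[]) {
    if (PIN_Init(argc, argv)) return 1;
    trace = fopen("pmtrace.out", "w");
    INS_AddInstrumentFunction(Instruction, 0);
    PIN_AddFiniFunction(Fini, 0);
    PIN_StartProgram();  // never returns
    return 0;
}
```

    A replayer could then reapply a prefix of this trace to the initial image and invoke the program's recovery code, comparing the result against the expected final image to flag inconsistencies.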

    Improving the Performance and Endurance of Persistent Memory with Loose-Ordering Consistency

    Persistent memory provides high-performance data persistence at main memory. Memory writes need to be performed in strict order to satisfy storage consistency requirements and enable correct recovery from system crashes. Unfortunately, adhering to such a strict order significantly degrades system performance and persistent memory endurance. This paper introduces a new mechanism, Loose-Ordering Consistency (LOC), that satisfies the ordering requirements at significantly lower performance and endurance cost. LOC consists of two key techniques. First, Eager Commit eliminates the need to perform a persistent commit record write within a transaction. We do so by ensuring that the status of all committed transactions can be determined during recovery, by storing the necessary metadata statically with the blocks of data written to memory. Second, Speculative Persistence relaxes the write ordering between transactions by allowing writes to be speculatively written to persistent memory. A speculative write is made visible to software only after its associated transaction commits. To enable this, our mechanism supports the tracking of committed transaction IDs and multi-versioning in the CPU cache. Our evaluations show that LOC reduces the average performance overhead of memory persistence from 66.9% to 34.9% and the memory write traffic overhead from 17.1% to 3.4% on a variety of workloads. Comment: This paper has been accepted by IEEE Transactions on Parallel and Distributed Systems.
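    For context, the sketch below shows the kind of strict software ordering that LOC is measured against: an undo-log entry, the data write, and a separate commit record, each made durable with a cache-line write-back and a fence before the next step. LOC itself is a hardware mechanism; the structure, names, and use of CLWB/SFENCE intrinsics here are illustrative assumptions, and Eager Commit's point is precisely to remove the third step.

```cpp
// Strict-ordering baseline sketch (not LOC itself): every step is flushed and
// fenced before the next, which is the ordering overhead LOC reduces.
// Requires a CPU and compiler with CLWB support (e.g. -mclwb).
#include <immintrin.h>
#include <cstdint>

static inline void persist(const void *p, size_t len) {
    // Write back each cache line covering [p, p+len), then order with SFENCE.
    const uintptr_t line = 64;
    uintptr_t start = (uintptr_t)p & ~(line - 1);
    for (uintptr_t a = start; a < (uintptr_t)p + len; a += line)
        _mm_clwb((void *)a);
    _mm_sfence();
}

struct LogEntry { void *addr; uint64_t old_val; };

void tx_write(LogEntry *log, uint64_t *commit_rec,
              uint64_t *dst, uint64_t val) {
    // 1. Persist the undo-log entry before the data it protects.
    log->addr = dst;
    log->old_val = *dst;
    persist(log, sizeof(*log));
    // 2. Persist the data write.
    *dst = val;
    persist(dst, sizeof(*dst));
    // 3. Persist a separate commit record -- the write that Eager Commit
    //    eliminates by co-locating commit metadata with the data blocks.
    *commit_rec = 1;
    persist(commit_rec, sizeof(*commit_rec));
}
```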

    Exploring Optimization Opportunities in Non-Volatile Memory Systems

    Modern storage systems utilize Non-Volatile Memories (NVMs) to reduce the performance and density gap between memory and storage. NVMs are a broad class of storage technologies, including flash-based SSDs, Phase Change Memory (PCM), and Spin-Transfer Torque RAM (STT-RAM). These devices offer low latency, fast I/O, persistent writes, and large storage capacity compared to volatile DRAM. However, building systems that fully leverage these NVMs to deliver low latency and high throughput to applications remains an open problem. Conventional systems were designed to persist data on hard drives, which have far higher latency than NVM devices. Hence, in this work we explore opportunities to improve performance and reliability in NVM-based systems. One class of NVM devices placed on the memory bus is Persistent Memory (PM); examples of PM technologies include 3D XPoint and NVDIMMs. Applications must be modified to use PM devices, which requires substantial human effort and can introduce programming errors, so reliability support is also needed when building systems that utilize PM. Moreover, since persisted data is expected to be recoverable after a crash, PM applications are responsible for providing that reliability support at the application level instead of relying on the file system. In this work, we evaluate the performance of RocksDB, a popular key-value store optimized for flash storage, as well as the reliability guarantees provided by recent work that offers testing frameworks for detecting crash-consistency bugs in PM systems. Based on this analysis, we also present opportunities to optimize performance and reliability in NVM systems.
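    The following sketch, using PMDK's libpmem, illustrates why such modification is needed: mapping the device, updating data in place, and explicitly flushing it to the persistence domain are all handled by the application rather than the file system. The file path and payload are placeholders.

```cpp
// Minimal libpmem usage sketch: the application itself maps the PM file,
// writes data in place, and makes the update durable. Path and contents
// are illustrative assumptions.
#include <libpmem.h>
#include <cstdio>
#include <cstring>

int main() {
    size_t mapped_len;
    int is_pmem;
    // Map (creating if necessary) a 4 KiB persistent-memory file.
    char *pmem = (char *)pmem_map_file("/mnt/pmem/example", 4096,
                                       PMEM_FILE_CREATE, 0666,
                                       &mapped_len, &is_pmem);
    if (pmem == NULL) { perror("pmem_map_file"); return 1; }

    // Update the data in place, then explicitly make it durable.
    strcpy(pmem, "hello, persistent world");
    if (is_pmem)
        pmem_persist(pmem, mapped_len);   // cache-line flush + fence
    else
        pmem_msync(pmem, mapped_len);     // fall back to msync on non-PM media

    pmem_unmap(pmem, mapped_len);
    return 0;
}
```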

    Defining and Verifying Durable Opacity: Correctness for Persistent Software Transactional Memory

    Non-volatile memory (NVM), aka persistent memory, is a new paradigm for memory that preserves its contents even after power loss. The expected ubiquity of NVM has stimulated interest in the design of novel concepts ensuring the correctness of concurrent programming abstractions in the face of persistency. So far, this has led to the design of a number of persistent concurrent data structures, built to satisfy an associated notion of correctness: durable linearizability. In this paper, we transfer the principle of durable concurrent correctness to the area of software transactional memory (STM). Software transactional memory algorithms allow for concurrent access to shared state. Like linearizability for concurrent data structures, opacity is the established notion of correctness for STMs. First, we provide a novel definition of durable opacity, extending opacity to handle crashes and recovery in the context of NVM. Second, we develop a durably opaque version of an existing STM algorithm, namely the Transactional Mutex Lock (TML). Third, we design a proof technique for durable opacity based on refinement between TML and an operational characterisation of durable opacity obtained by adapting the TMS2 specification. Finally, we apply this proof technique to show that the durable version of TML is indeed durably opaque. The correctness proof is mechanized within Isabelle. Comment: This is the full version of the paper that is to appear in FORTE 2020 (https://www.discotec.org/2020/forte).
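    For orientation, the sketch below compresses the original, volatile TML algorithm that the paper extends: a single global counter acts as a versioned lock, with even meaning unlocked and odd meaning a writer is active. The comments only mark where a durable variant would add persist barriers; the paper's actual durable construction and recovery procedure are not reproduced here.

```cpp
// Compressed sketch of volatile TML (Transactional Mutex Lock). Comments mark
// where a durable variant would insert persist barriers (e.g. CLWB + SFENCE).
#include <atomic>
#include <cstdint>

static std::atomic<uint64_t> glb{0};   // global version/lock
thread_local uint64_t loc;             // version observed at transaction begin

bool tx_begin() {
    loc = glb.load();
    return (loc & 1) == 0;             // caller retries while a writer is active
}

bool tx_read(const uint64_t *addr, uint64_t *out) {
    *out = *addr;
    return glb.load() == loc;          // abort if a writer intervened
}

bool tx_write(uint64_t *addr, uint64_t val) {
    if ((loc & 1) == 0) {              // first write: acquire the lock
        uint64_t expected = loc;
        if (!glb.compare_exchange_strong(expected, loc + 1)) return false;
        loc = loc + 1;                 // loc is now odd: we hold the lock
    }
    *addr = val;                       // durable variant: persist *addr here
    return true;
}

void tx_commit() {
    if (loc & 1)                       // writer: release, making writes visible
        glb.store(loc + 1);            // durable variant: persist before release
}
```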