
    Cache-Oblivious Persistence

    Partial persistence is a general transformation that takes a data structure and allows queries to be executed on any past state of the structure. The cache-oblivious model is the leading model of a modern multi-level memory hierarchy. We present the first general transformation for making cache-oblivious data structures partially persistent.
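
    As a rough illustration of what partial persistence means (a toy version-stamped field, not the paper's cache-oblivious transformation), the following Python sketch records every update with a version number so that reads can target any past version while only the newest version is mutable:

        import bisect

        class PartiallyPersistentField:
            def __init__(self, initial, version=0):
                # Parallel lists of version stamps and values, sorted by version.
                self._versions = [version]
                self._values = [initial]

            def write(self, value, version):
                # Only the newest version may be modified (partial persistence).
                assert version >= self._versions[-1]
                self._versions.append(version)
                self._values.append(value)

            def read(self, version):
                # Return the value that was current at the requested version.
                i = bisect.bisect_right(self._versions, version) - 1
                return self._values[i]

        # Usage: any past state remains queryable after later updates.
        f = PartiallyPersistentField("a", version=0)
        f.write("b", version=3)
        f.write("c", version=7)
        assert (f.read(0), f.read(5), f.read(7)) == ("a", "b", "c")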

    An aspect-oriented framework for orthogonal persistence

    The life cycle of software applications is generally very short, with extremely volatile requirements. Under these conditions, programmers need development tools and techniques that offer a very high level of productivity. We consider code reuse the most promising approach to this problem. Our proposal uses the advantages of Aspect-Oriented Programming to build a reusable framework that makes both the programmer and the application oblivious to data persistence, avoiding the need to write any code for that concern. Besides the productivity benefits, software quality increases. This paper describes the current state of the art and identifies the main challenge in building a complete and reusable framework for Orthogonal Persistence in concurrent environments with support for transactions. The present work also includes a successfully developed prototype of that framework, capable of freeing the programmer from implementing any read or write data operations. This prototype is backed by an object-oriented database and, in the future, will also use a relational database and support transactions.
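
    The following Python sketch is only a rough analogue of that idea, with a class decorator standing in for an aspect; the persistent decorator and the objects.db shelve store are illustrative assumptions, not the framework's API. The domain class stays oblivious to storage while every field write is transparently persisted:

        import shelve

        def persistent(cls):
            # The "aspect": intercept attribute writes and save the object,
            # so the domain class contains no persistence code at all.
            original_setattr = cls.__setattr__

            def tracked_setattr(self, name, value):
                original_setattr(self, name, value)
                with shelve.open("objects.db") as db:
                    db[f"{cls.__name__}:{id(self)}"] = dict(self.__dict__)

            cls.__setattr__ = tracked_setattr
            return cls

        @persistent
        class Customer:
            # No read/write persistence code here: the class is "oblivious".
            def __init__(self, name, balance):
                self.name = name
                self.balance = balance

        c = Customer("Ada", 100)
        c.balance = 120   # transparently written through to objects.db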

    Anti-Persistence on Persistent Storage: History-Independent Sparse Tables and Dictionaries

    We present history-independent alternatives to a B-tree, the primary indexing data structure used in databases. A data structure is history independent (HI) if it is impossible to deduce any information from its bit representation that is not already available through its API. We show how to build a history-independent cache-oblivious B-tree and a history-independent external-memory skip list. One of the main contributions is a data structure we build along the way: a history-independent packed-memory array (PMA). The PMA supports efficient range queries, one of the most important operations for answering database queries. Our HI PMA matches the asymptotic bounds of prior non-HI packed-memory arrays and sparse tables. Specifically, a PMA maintains a dynamic set of elements in sorted order in a linear-sized array. Inserts and deletes take amortized O(log^2 N) element moves with high probability. Simple experiments with our implementation of HI PMAs corroborate our theoretical analysis. Comparisons to regular PMAs give preliminary indications that the practical cost of adding history independence is not too large. Our HI cache-oblivious B-tree bounds match those of prior non-HI cache-oblivious B-trees. Searches take O(log_B N) I/Os; inserts and deletes take O((log^2 N)/B + log_B N) amortized I/Os with high probability; and range queries returning k elements take O(log_B N + k/B) I/Os. Our HI external-memory skip list achieves optimal bounds with high probability, analogous to in-memory skip lists: O(log_B N) I/Os for point queries and amortized O(log_B N) I/Os for inserts/deletes. Range queries returning k elements run in O(log_B N + k/B) I/Os. In contrast, the best possible high-probability bound for inserting into the folklore B-skip list, which promotes elements with probability 1/B, is just Θ(log N) I/Os. This is no better than the bound one gets from running an in-memory skip list in external memory.
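
    As a rough sketch of the layout the abstract describes (sorted elements in a linear-sized array with gaps), the toy packed-memory array below shifts elements into the nearest gap on insert and respreads when the array gets dense. It is not history independent and does not reproduce the paper's randomized rebalancing or its O(log^2 N) bound:

        class TinyPMA:
            """Sorted elements stored in an array with gaps (None slots)."""

            def __init__(self, capacity=16):
                self.slots = [None] * capacity
                self.count = 0

            def insert(self, key):
                # Grow and respread when the array is more than half full.
                if 2 * (self.count + 1) > len(self.slots):
                    self._respread(2 * len(self.slots))
                # Index of the first occupied slot holding a value >= key.
                pos = len(self.slots)
                for i, x in enumerate(self.slots):
                    if x is not None and x >= key:
                        pos = i
                        break
                if pos > 0 and self.slots[pos - 1] is None:
                    self.slots[pos - 1] = key   # a free slot is already in place
                else:
                    gap = pos                   # shift the run right into the nearest gap
                    while gap < len(self.slots) and self.slots[gap] is not None:
                        gap += 1
                    if gap == len(self.slots):
                        self._respread(2 * len(self.slots))
                        self.insert(key)
                        return
                    for i in range(gap, pos, -1):
                        self.slots[i] = self.slots[i - 1]
                    self.slots[pos] = key
                self.count += 1

            def _respread(self, capacity):
                # Rebuild with the elements spread out evenly, leaving gaps between them.
                elems = [x for x in self.slots if x is not None]
                self.slots = [None] * capacity
                for i, x in enumerate(elems):
                    self.slots[(i * capacity) // max(1, len(elems))] = x

            def range_query(self, lo, hi):
                # Range queries scan the occupied slots, which are already in order.
                return [x for x in self.slots if x is not None and lo <= x <= hi]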

    Efficient Management of Short-Lived Data

    Motivated by the increasing prominence of loosely-coupled systems, such as mobile and sensor networks, which are characterised by intermittent connectivity and volatile data, we study the tagging of data with so-called expiration times. More specifically, when data are inserted into a database, they may be tagged with time values indicating when they expire, i.e., when they are regarded as stale or invalid and thus are no longer considered part of the database. In a number of applications, expiration times are known and can be assigned at insertion time. We present data structures and algorithms for online management of data tagged with expiration times. The algorithms are based on fully functional, persistent treaps, which are a combination of binary search trees with respect to a primary attribute and heaps with respect to a secondary attribute. The primary attribute implements primary keys, and the secondary attribute stores expiration times in a minimum heap, thus keeping a priority queue of tuples to expire. A detailed and comprehensive experimental study demonstrates the well-behavedness and scalability of the approach as well as its efficiency with respect to a number of competitors.
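
    As a rough sketch of the underlying structure (not the paper's implementation), the purely functional treap below keeps binary-search-tree order on the primary key and min-heap order on the expiration time, so the tuple that expires next is always at the root; path copying keeps old versions intact:

        from collections import namedtuple

        Node = namedtuple("Node", "key exp left right")   # exp = expiration time

        def rotate_to_heap(n):
            # Restore the min-heap property on expiration times after an insert.
            if n.left and n.left.exp < n.exp:
                l = n.left
                return Node(l.key, l.exp, l.left, Node(n.key, n.exp, l.right, n.right))
            if n.right and n.right.exp < n.exp:
                r = n.right
                return Node(r.key, r.exp, Node(n.key, n.exp, n.left, r.left), r.right)
            return n

        def insert(n, key, exp):
            # BST insert by primary key with path copying, then a heap rotation.
            if n is None:
                return Node(key, exp, None, None)
            if key < n.key:
                return rotate_to_heap(Node(n.key, n.exp, insert(n.left, key, exp), n.right))
            if key > n.key:
                return rotate_to_heap(Node(n.key, n.exp, n.left, insert(n.right, key, exp)))
            return n                    # duplicate keys ignored in this sketch

        def merge(a, b):
            # Merge two treaps where every key in a is smaller than every key in b.
            if a is None:
                return b
            if b is None:
                return a
            if a.exp < b.exp:
                return Node(a.key, a.exp, a.left, merge(a.right, b))
            return Node(b.key, b.exp, merge(a, b.left), b.right)

        def expire(n, now):
            # Pop every tuple whose expiration time has passed; expired tuples
            # sit at the top because the root holds the minimum expiration time.
            while n is not None and n.exp <= now:
                n = merge(n.left, n.right)
            return n

        # Usage: expiring stale tuples yields a new version; the old one survives.
        t1 = insert(insert(insert(None, 10, 5), 3, 9), 7, 2)
        t2 = expire(t1, now=5)          # drops keys 7 (exp 2) and 10 (exp 5)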

    The Parallel Persistent Memory Model

    We consider a parallel computational model that consists of P processors, each with a fast local ephemeral memory of limited size, sharing a large persistent memory. The model allows each processor to fault with bounded probability and possibly restart. On faulting, all processor state and local ephemeral memory are lost, but the persistent memory remains. This model is motivated by upcoming non-volatile memories that are as fast as existing random access memory, are accessible at the granularity of cache lines, and can survive power outages. It is further motivated by the observation that in large parallel systems, failure of processors and their caches is not unusual. Within the model we develop a framework for designing locality-efficient parallel algorithms that are resilient to failures. There are several challenges, including the need to recover from failures, the desire to do this in an asynchronous setting (i.e., not blocking other processors when one fails), and the need for synchronization primitives that are robust to failures. We describe approaches to solving these challenges based on breaking computations into what we call capsules, which have certain properties, and on developing a work-stealing scheduler that functions properly in the presence of failures. The scheduler guarantees an expected time bound of O(W/P_A + D(P/P_A)⌈log_{1/f} W⌉), where W and D are the work and depth of the computation (in the absence of failures), P_A is the average number of processors available during the computation, and f ≤ 1/2 is the probability that a capsule fails. Within the model, and using the proposed methods, we develop efficient algorithms for parallel sorting and other primitives.
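
    A toy, single-processor simulation of the capsule idea under my own assumptions (not the paper's framework): persistent memory survives a fault, ephemeral work does not, and a capsule is simply re-executed from its persisted inputs until its result is committed, which makes re-execution after a fault safe:

        import random

        random.seed(1)
        pmem = {"input": list(range(8)), "done": {}}   # survives simulated faults

        def run_capsule(name, body):
            # Results committed before an earlier fault are never recomputed.
            if name in pmem["done"]:
                return pmem["done"][name]
            while True:
                try:
                    result = body()                    # ephemeral work, may be lost
                    if random.random() < 0.3:          # simulated processor fault
                        raise RuntimeError("fault: ephemeral state lost")
                    pmem["done"][name] = result        # commit to persistent memory
                    return result
                except RuntimeError:
                    continue                           # restart the capsule from its inputs

        # The computation is expressed as capsules bounded by persistent memory.
        squares = run_capsule("square", lambda: [x * x for x in pmem["input"]])
        total = run_capsule("sum", lambda: sum(pmem["done"]["square"]))
        print(total)                                   # 140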

    EHAP-ORAM: Efficient Hardware-Assisted Persistent ORAM System for Non-volatile Memory

    Oblivious RAM (ORAM) protection of access patterns is essential for secure NVM. In an ORAM system, data and PosMap metadata are accessed in pairs to perform secure accesses, so we focus on the problem of crash consistency in the ORAM system. Unfortunately, traditional software-based support for ORAM crash consistency is not only expensive but can also leak information. At present, there is no research on a crash consistency mechanism specifically supporting ORAM systems. To support crash consistency without weakening ORAM security or compromising performance, we propose EHAP-ORAM. First, we analyze the access steps of basic ORAM to derive the requirements for supporting crash consistency in an ORAM system. Second, we improve the ORAM controller. Third, for the improved hardware, we propose several persistence protocols that support ORAM crash consistency. Finally, we compare our persistent ORAM with a system without crash consistency support: non-recursive and recursive EHAP-ORAM incur only 3.36% and 3.65% performance overhead, respectively. The results show that EHAP-ORAM supports effective crash consistency with minimal performance and hardware overhead and is also friendly to NVM lifetime.
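
    A hedged sketch of the consistency requirement only, not EHAP-ORAM's hardware protocols: a data block and its PosMap entry must become durable together, which the toy redo log below (with NVM simulated by Python dictionaries) guarantees by replaying only committed pairs after a crash:

        nvm_data, nvm_posmap, nvm_log = {}, {}, []

        def oram_write(block_id, payload, new_leaf):
            # 1. Append both halves of the update plus a commit marker to the log.
            nvm_log.append(("data", block_id, payload))
            nvm_log.append(("posmap", block_id, new_leaf))
            nvm_log.append(("commit", block_id, None))
            # 2. Apply in place; a crash at any point is repaired by recover().
            nvm_data[block_id] = payload
            nvm_posmap[block_id] = new_leaf
            nvm_log.clear()

        def recover():
            # Replay only fully committed updates so data and PosMap stay paired.
            committed = {b for kind, b, _ in nvm_log if kind == "commit"}
            for kind, block_id, value in nvm_log:
                if block_id in committed and kind == "data":
                    nvm_data[block_id] = value
                elif block_id in committed and kind == "posmap":
                    nvm_posmap[block_id] = value
            nvm_log.clear()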