
    Inductive benchmarking for purely functional data structures

    Every designer of a new data structure wants to know how well it performs in comparison with others. But finding, coding and testing applications as benchmarks can be tedious and time-consuming. Besides, how a benchmark uses a data structure may considerably affect its apparent efficiency, so the choice of applications may bias the results. We address these problems by developing a tool for inductive benchmarking. This tool, Auburn, can generate benchmarks across a wide distribution of uses. We precisely define 'the use of a data structure', upon which we build the core algorithms of Auburn: how to generate a benchmark from a description of use, and how to extract a description of use from an application. We then apply inductive classification techniques to obtain decision trees for the choice between competing data structures. We test Auburn by benchmarking several implementations of three common data structures: queues, random-access lists and heaps. These and other results show Auburn to be a useful and accurate tool, but they also reveal some limitations of the approach.
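    To make the idea of generating a benchmark from a description of use concrete, here is a minimal Haskell sketch: a usage profile assigns a weight to each operation of a simple queue API, and a random operation sequence is drawn from it. The profile representation, operation names and generator below are invented for illustration; Auburn's actual notion of use, and the benchmarks it produces, are considerably richer.

```haskell
import Control.Monad (replicateM)
import System.Random (randomRIO)

-- Hypothetical queue operations and a usage profile giving each a
-- relative weight.  These names are illustrative, not Auburn's API.
data QueueOp = Snoc | Tail | Head deriving (Show, Eq)

type Profile = [(QueueOp, Int)]

-- Draw one operation with probability proportional to its weight.
pickOp :: Profile -> IO QueueOp
pickOp profile = do
  let total = sum (map snd profile)
  r <- randomRIO (1, total)
  pure (select r profile)
  where
    select _ []            = error "empty profile"
    select r ((op, w) : ps)
      | r <= w    = op
      | otherwise = select (r - w) ps

-- A "benchmark" here is just a random sequence of n operations drawn
-- from the profile; running and timing it against competing queue
-- implementations is left out of the sketch.
genBenchmark :: Int -> Profile -> IO [QueueOp]
genBenchmark n profile = replicateM n (pickOp profile)
```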

    Quad Ropes: Immutable, Declarative Arrays with Parallelizable Operations


    (Co-)Inductive semantics for Constraint Handling Rules

    In this paper, we address the problem of defining a fixpoint semantics for Constraint Handling Rules (CHR) that captures the behavior of both simplification and propagation rules in a sound and complete way with respect to their declarative semantics. Firstly, we show that the logical reading of states with respect to a set of simplification rules can be characterized by a least fixpoint over the transition system generated by the abstract operational semantics of CHR. Similarly, we demonstrate that the logical reading of states with respect to a set of propagation rules can be characterized by a greatest fixpoint. Then, in order to take advantage of both types of rules without losing the fixpoint characterization, we present an operational semantics with persistent constraints. We finally establish that this semantics can be characterized by two nested fixpoints, and we show that the resulting language is an elegant framework for programming with coinductive reasoning.
    Comment: 17 pages
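    As a purely illustrative aside (not the CHR semantics itself), the two characterizations above can be pictured as Knaster-Tarski-style iterations of a monotone operator over a finite lattice of state sets: the least fixpoint is reached by iterating up from the empty set, the greatest fixpoint by iterating down from the full universe. A minimal Haskell sketch, assuming finite sets and a monotone operator:

```haskell
import qualified Data.Set as Set

-- Iterate a monotone operator from the empty set until it stabilizes:
-- on a finite lattice this yields the least fixpoint (the flavour of
-- characterization used for simplification rules).
lfp :: Ord a => (Set.Set a -> Set.Set a) -> Set.Set a
lfp f = go Set.empty
  where go x = let x' = f x in if x' == x then x else go x'

-- Iterate down from a finite universe until stable: the greatest
-- fixpoint (the flavour used for propagation rules).
gfp :: Ord a => Set.Set a -> (Set.Set a -> Set.Set a) -> Set.Set a
gfp universe f = go universe
  where go x = let x' = f x in if x' == x then x else go x'
```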

    I/O-Efficient Dynamic Planar Range Skyline Queries

    We present the first fully dynamic worst case I/O-efficient data structures that support planar orthogonal 3-sided range skyline reporting queries in $O(\log_{2B^\epsilon} n + \frac{t}{B^{1-\epsilon}})$ I/Os and updates in $O(\log_{2B^\epsilon} n)$ I/Os, using $O(\frac{n}{B^{1-\epsilon}})$ blocks of space, for $n$ input planar points, $t$ reported points, and parameter $0 \leq \epsilon \leq 1$. We obtain the result by extending Sundar's priority queues with attrition to support the operations DeleteMin and CatenateAndAttrite in $O(1)$ worst case I/Os, and in $O(1/B)$ amortized I/Os given that a constant number of blocks is already loaded in main memory. Finally, we show that any pointer-based static data structure that supports dominated maxima reporting queries, namely the difficult special case of 4-sided skyline queries, in $O(\log^{O(1)} n + t)$ worst case time must occupy $\Omega(n \frac{\log n}{\log \log n})$ space, by adapting a similar lower bounding argument for planar 4-sided range reporting queries.
    Comment: Submitted to SODA 201
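    For intuition about the building block named above, here is a toy, in-memory Haskell sketch of a priority queue with attrition. It has none of the paper's I/O-efficiency, and it assumes a common PQA semantics in which catenation discards from the first queue every element that is not smaller than the minimum of the second; both the representation and that semantics are assumptions for illustration, not the paper's external-memory construction.

```haskell
-- Toy priority queue with attrition: the stored sequence is kept in
-- strictly increasing order, so the head is always the minimum.
newtype PQA a = PQA [a] deriving Show

empty :: PQA a
empty = PQA []

singleton :: a -> PQA a
singleton x = PQA [x]

-- Remove and return the minimum, if the queue is non-empty.
deleteMin :: PQA a -> Maybe (a, PQA a)
deleteMin (PQA [])       = Nothing
deleteMin (PQA (x : xs)) = Just (x, PQA xs)

-- Catenate q2 after q1, attriting from q1 every element that is not
-- smaller than the minimum of q2 (assumed semantics).  Because both
-- inputs are increasing, the result is increasing again.
catenateAndAttrite :: Ord a => PQA a -> PQA a -> PQA a
catenateAndAttrite q1 (PQA [])               = q1
catenateAndAttrite (PQA xs) (PQA ys@(m : _)) =
  PQA (takeWhile (< m) xs ++ ys)
```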

    Fast Functional Lists, Hash-Lists, Deques and Variable Length Arrays

    Since its inception, functional programming (J. McCarthy) has almost universally used the linked list as its underpinning data structure. This paper introduces a new data structure, the VList, that is compact, thread safe and significantly faster to use than linked lists for nearly all list operations. Space usage can be reduced by 50% to 90%, and speed in typical list operations is improved by factors ranging from 4 to 20 or more. Some important operations, such as indexing and length, are typically changed from O(N) to O(1) and O(lg N) respectively. A language interpreter, Visp, using a dialect of Common Lisp, has been implemented using VLists, and a benchmark comparison with OCaml is reported. It is also shown how to adapt the structure to create variable-length arrays, persistent deques and functional hash tables. The VArray requires no resize copying and has an average O(1) random access time. Comparisons are made with previous resizable one-dimensional arrays: Hash Array Trees (HAT), Sitarski [1996], and Brodnik, Carlsson, Demaine, Munro and Sedgewick [1999].
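    A much-simplified, purely functional Haskell sketch of the block layout behind these bounds, assuming blocks of geometrically growing size stored as immutable arrays. This is only an approximation for intuition; Bagwell's actual VList shares and fills blocks in place, which is what yields its O(1) average indexing and constant-time extension.

```haskell
import Data.Array (Array, listArray, (!))

-- Blocks of geometrically growing size, each paired with its length.
newtype VList a = VList [(Int, Array Int a)]

fromList :: [a] -> VList a
fromList = VList . go 1
  where
    go _ [] = []
    go n xs =
      let (blk, rest) = splitAt n xs
          k           = length blk
      in (k, listArray (0, k - 1) blk) : go (2 * n) rest

-- Walk the O(log n) blocks, then do a single O(1) array lookup.
index :: VList a -> Int -> Maybe a
index (VList blocks) = go blocks
  where
    go [] _ = Nothing
    go ((k, b) : bs) i
      | i < 0     = Nothing
      | i < k     = Just (b ! i)
      | otherwise = go bs (i - k)

-- Length is a sum over O(log n) block sizes, not an O(n) walk.
len :: VList a -> Int
len (VList blocks) = sum (map fst blocks)
```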

    Chronological trees: methods of representing them in memory

    Chronological data structures are used to store all historical states of a given data structure and to provide fast access to them. This paper considers how to efficiently make chronological such a practically important data structure as the 'left child - right sibling' tree. Relational tables of a special form are proposed for storing such chronological trees, and an algorithm is given that reconstructs a chronological tree from these tables in a single pass.
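    For reference, the 'left child - right sibling' representation mentioned above can be written in Haskell as follows; the chronological (versioned) encoding into relational tables and the one-pass reconstruction algorithm are the paper's contribution and are not reproduced in this sketch.

```haskell
-- A left child - right sibling tree: every node stores only its first
-- child and its next sibling, so a tree of arbitrary arity needs just
-- two references per node.
data LcrsTree a
  = Leaf
  | Node a (LcrsTree a) (LcrsTree a)   -- value, leftmost child, next sibling
  deriving Show

-- The children of a node, recovered by following the sibling chain of
-- its leftmost child.
children :: LcrsTree a -> [LcrsTree a]
children Leaf             = []
children (Node _ child _) = siblings child
  where
    siblings Leaf              = []
    siblings t@(Node _ _ next) = t : siblings next
```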