
    Transforming Databases with Recursive Data Structures

    This thesis examines the problems of performing structural transformations on databases involving complex data-structures and object-identities, and proposes an approach to specifying and implementing such transformations. We start by looking at various applications of such database transformations, and at some of the more significant work in these areas. In particular we will look at work on transformations in the area of database integration, which has been one of the major motivating areas for this work. We will also look at various notions of correctness that have been proposed for database transformations, and show that the utility of such notions is limited by the dependence of transformations on certain implicit database constraints. We draw attention to the limitations of existing work on transformations, and argue that there is a need for a more general formalism for reasoning about database transformations and constraints. We will also argue that, in order to ensure that database transformations are well-defined and meaningful, it is necessary to understand the information capacity of the data-models being transformed. To this end we give a thorough analysis of the information capacity of data-models supporting object identity, and will show that this is dependent on the operations supported by a query language for comparing object identities. We introduce a declarative language, WOL, based on Horn-clause logic, for specifying database transformations and constraints. We also propose a method of implementing transformations specified in this language, by manipulating their clauses into a normal form which can then be translated into an underlying database programming language. Finally we will present a number of optimizations and techniques necessary in order to build a practical implementation based on these proposals, and will discuss the results of some of the trials that were carried out using a prototype of such a system.
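
    As a rough illustration of how a Horn-clause transformation rule relates a source database to a target database, the sketch below (plain Python, not WOL syntax; the relations and the single rule are invented for the example) materializes the facts derived by one such rule:

        # Minimal sketch of applying a Horn-clause-style transformation rule
        # (illustrative only; not WOL syntax -- relation names are invented):
        #   citizen(Name, Country) :- person(Name, City), city(City, Country).

        source = {
            ("person", "alice", "london"),
            ("person", "bob", "paris"),
            ("city", "london", "uk"),
            ("city", "paris", "france"),
        }

        def transform(facts):
            """Derive target-database facts from source-database facts."""
            target = set()
            for rel1, name, city in facts:
                if rel1 != "person":
                    continue
                for rel2, city2, country in facts:
                    if rel2 == "city" and city2 == city:
                        target.add(("citizen", name, country))
            return target

        print(sorted(transform(source)))
        # [('citizen', 'alice', 'uk'), ('citizen', 'bob', 'france')]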

    Compressed Data Structures for Recursive Flow Classification

    High-speed packet classification is crucial to the implementation of several advanced network services and protocols; many QoS implementations, active networking platforms, and security devices (such as firewalls and intrusion-detection systems) require it. But performing classification on multiple fields, at the speed of modern networks, is known to be a difficult problem. The Recursive Flow Classification (RFC) algorithm described by Gupta and McKeown performs classification very quickly, but can require excessive storage when using thousands of rules. This paper studies a compressed representation for the tables used in RFC, trading some memory accesses for space. The compression’s efficiency can be improved by rearranging rows and columns of the tables. Finding a near-optimal rearrangement can be cast as a Traveling Salesman Problem, to which known approximation algorithms can be applied. Also, in evaluating the compressed representation of tables, we study the effects of choosing different reduction trees in RFC. We evaluate these methods using a real-world filter database with 159 rules. Results show a reduction in the size of the cross-product tables by 61.6% in the median case; in some cases their size is reduced by 87% or more. Furthermore, experimental evidence suggests larger databases may be more compressible.
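
    As an illustration of the space/access trade-off described above (a minimal Python sketch under the assumption that the tables contain many identical rows; this is not the paper's implementation), duplicate rows can be stored once and reached through an indirection array, so each lookup pays one extra memory access:

        # Illustrative row-deduplication of a classification table (not the paper's code).
        # Identical rows are stored once; lookups go through one extra indirection.

        def compress_rows(table):
            """table: list of rows (lists of ints). Returns (row_index, unique_rows)."""
            unique_rows = []
            seen = {}          # row tuple -> index into unique_rows
            row_index = []     # original row number -> index into unique_rows
            for row in table:
                key = tuple(row)
                if key not in seen:
                    seen[key] = len(unique_rows)
                    unique_rows.append(list(row))
                row_index.append(seen[key])
            return row_index, unique_rows

        def lookup(row_index, unique_rows, r, c):
            """Equivalent to table[r][c], at the cost of one extra memory access."""
            return unique_rows[row_index[r]][c]

        table = [[0, 1, 1], [0, 1, 1], [2, 2, 0], [0, 1, 1]]
        row_index, unique_rows = compress_rows(table)
        assert all(lookup(row_index, unique_rows, r, c) == table[r][c]
                   for r in range(len(table)) for c in range(len(table[0])))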

    The interpretation of multiple embedded genitive constructions by Wapichana and English speakers

    While research on Amazonian languages shows controversial data about the universality of recursive structures, researchers in language acquisition with Indo-European and East Asian languages have shown that complex recursive constructions are acquired very early by children. This study contributes to both debates about recursive structures in indigenous languages and the acquisition of recursion by children. We tested the comprehension of multiple embedded genitive constructions in Wapichana and English to answer two distinct questions: (1) Does the Wapichana grammar accept recursive genitives? (2) If yes, do Wapichana children acquire the multiple embedded genitives at a similar rate to English-speaking children? Our data show that the interpretation of recursive genitives in English and Wapichana by adult speakers is exactly the same. Moreover, we show that both groups of children acquire multiple embedded genitives very early, but only achieve adult performance after the age of seven.

    Recursive Neural Networks Can Learn Logical Semantics

    Tree-structured recursive neural networks (TreeRNNs) for sentence meaning have been successful for many applications, but it remains an open question whether the fixed-length representations that they learn can support tasks as demanding as logical deduction. We pursue this question by evaluating whether two such models, plain TreeRNNs and tree-structured neural tensor networks (TreeRNTNs), can correctly learn to identify logical relationships such as entailment and contradiction using these representations. In our first set of experiments, we generate artificial data from a logical grammar and use it to evaluate the models' ability to learn to handle basic relational reasoning, recursive structures, and quantification. We then evaluate the models on the more natural SICK challenge data. Both models perform competitively on the SICK data and generalize well in all three experiments on simulated data, suggesting that they can learn suitable representations for logical inference in natural language.
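
    To make the composition step concrete, the following is a minimal sketch of a plain TreeRNN (NumPy, with an untrained random weight matrix and an invented toy vocabulary; not the authors' trained models): each node's fixed-length vector is a nonlinearity applied to a learned linear map of its children's concatenated vectors:

        # Minimal TreeRNN-style composition sketch (illustrative; untrained random weights).
        import numpy as np

        d = 4                                        # embedding dimension (arbitrary here)
        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.1, size=(d, 2 * d))   # composition matrix
        b = np.zeros(d)
        vocab = {w: rng.normal(size=d) for w in ["all", "dogs", "bark"]}

        def compose(tree):
            """tree is either a word (str) or a pair (left_subtree, right_subtree)."""
            if isinstance(tree, str):
                return vocab[tree]
            left, right = tree
            children = np.concatenate([compose(left), compose(right)])
            return np.tanh(W @ children + b)         # plain TreeRNN composition step

        sentence = (("all", "dogs"), "bark")
        print(compose(sentence))                     # fixed-length sentence representation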

    Simple DFS on the Complement of a Graph and on Partially Complemented Digraphs

    A complementation operation on a vertex of a digraph changes all outgoing arcs into non-arcs, and outgoing non-arcs into arcs. A partially complemented digraph $\widetilde{G}$ is a digraph obtained from a sequence of vertex complement operations on $G$. Dahlhaus et al. showed that, given an adjacency-list representation of $\widetilde{G}$, depth-first search (DFS) on $G$ can be performed in $O(n + \widetilde{m})$ time, where $n$ is the number of vertices and $\widetilde{m}$ is the number of edges in $\widetilde{G}$. To achieve this bound, their algorithm makes use of a somewhat complicated stack-like data structure to simulate the recursion stack, instead of implementing it directly as a recursive algorithm. We give a recursive $O(n + \widetilde{m})$ algorithm that uses no complicated data-structures.
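
    The flavour of the problem can be seen in the simple recursive DFS on the complement of an ordinary graph below, written as a hedged Python sketch: it keeps the set of unvisited vertices and, at each step, descends to an unvisited vertex that is not a neighbour of the current vertex in $G$. This naive version does not achieve the $O(n + \widetilde{m})$ bound discussed above; the cited algorithms rely on a more careful unvisited-set structure:

        # Illustrative recursive DFS on the complement of a graph (simple version;
        # not the algorithm from the paper, and not within its asymptotic bound).

        def dfs_complement(adj, start):
            """adj: dict vertex -> set of neighbours in G. Traverses the complement of G."""
            unvisited = set(adj) - {start}
            order = []

            def visit(u):
                order.append(u)
                while True:
                    # In the complement, u is adjacent to every unvisited vertex
                    # that is NOT a neighbour of u in G.
                    candidates = unvisited - adj[u]
                    if not candidates:
                        return
                    v = candidates.pop()
                    unvisited.discard(v)
                    visit(v)

            visit(start)
            return order

        adj = {1: {2}, 2: {1, 3}, 3: {2}, 4: set()}
        print(dfs_complement(adj, 1))   # a valid DFS order on the complement, e.g. [1, 3, 4, 2]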

    Algorithms and Data Structures for Multi-Adaptive Time-Stepping

    Multi-adaptive Galerkin methods are extensions of the standard continuous and discontinuous Galerkin methods for the numerical solution of initial value problems for ordinary or partial differential equations. In particular, the multi-adaptive methods allow individual and adaptive time steps to be used for different components or in different regions of space. We present algorithms for efficient multi-adaptive time-stepping, including the recursive construction of time slabs and adaptive time step selection. We also present data structures for efficient storage and interpolation of the multi-adaptive solution. The efficiency of the proposed algorithms and data structures is demonstrated for a series of benchmark problems. Comment: ACM Transactions on Mathematical Software 35(3), 24 pages (2008).
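
    The recursive time-slab idea can be sketched roughly as follows (illustrative Python with an invented grouping rule and made-up component time steps; it is not the authors' algorithm or data structures): components whose desired time step covers the current slab take a single step across it, and the remaining components are advanced recursively on sub-slabs:

        # Illustrative recursive construction of time slabs (simplified; the grouping
        # criterion and binary subdivision here are invented for the sketch).

        def build_slab(components, t0, t1, depth=0):
            """components: dict name -> preferred time step. Prints one element
            (component, interval) per step taken inside the slab [t0, t1]."""
            length = t1 - t0
            large = {c: k for c, k in components.items() if k >= length / 2}
            small = {c: k for c, k in components.items() if k < length / 2}

            # Components with large time steps take one step across the whole slab.
            for c in large:
                print("  " * depth + f"{c}: [{t0:.3f}, {t1:.3f}]")

            if not small:
                return
            # Remaining components are advanced recursively over two half-slabs.
            tm = 0.5 * (t0 + t1)
            build_slab(small, t0, tm, depth + 1)
            build_slab(small, tm, t1, depth + 1)

        build_slab({"u0": 1.0, "u1": 0.4, "u2": 0.1}, 0.0, 1.0)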

    On the contraction method with degenerate limit equation

    A class of random recursive sequences $(Y_n)$ with slowly varying variances, as arising for parameters of random trees or recursive algorithms, leads after normalization to degenerate limit equations of the form $X \stackrel{L}{=} X$. For nondegenerate limit equations the contraction method is a main tool to establish convergence of the scaled sequence to the "unique" solution of the limit equation. In this paper we develop an extension of the contraction method which allows us to derive limit theorems for parameters of algorithms and data structures with degenerate limit equation. In particular, we establish some new tools and a general convergence scheme, which transfers information on mean and variance into a central limit law (with normal limit). We also obtain a convergence rate result. For the proof we use self-decomposability properties of the limit normal distribution, which allow us to mimic the recursive sequence by an accompanying sequence in normal variables. Comment: Published by the Institute of Mathematical Statistics (http://www.imstat.org) in the Annals of Probability (http://www.imstat.org/aop/) at http://dx.doi.org/10.1214/00911790400000017
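
    For orientation, the recursions handled by the contraction method, and the way the degenerate case arises, can be written schematically as follows (a generic textbook-style schematic, not notation taken from the paper):

        % Schematic only; coefficients and index variables are generic.
        \[
            Y_n \stackrel{d}{=} \sum_{r=1}^{K} A_r(n)\, Y^{(r)}_{I_r(n)} + b_n
            \quad\longrightarrow\quad
            X \stackrel{d}{=} \sum_{r=1}^{K} A_r^{*}\, X^{(r)} + b^{*},
        \]
        % where the Y^{(r)} are independent copies of (Y_n). In the degenerate case
        % essentially one coefficient A_r^{*} equals 1, the others vanish, and
        % b^{*} = 0, so the limit equation collapses to X \stackrel{L}{=} X and no
        % longer identifies the limit distribution.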

    On Automated Lemma Generation for Separation Logic with Inductive Definitions

    Separation Logic with inductive definitions is a well-known approach for deductive verification of programs that manipulate dynamic data structures. Deciding verification conditions in this context is usually based on user-provided lemmas relating the inductive definitions. We propose a novel approach for generating these lemmas automatically, based on simple syntactic criteria and deterministic strategies for applying them. Our approach focuses on iterative programs, although it can be applied to recursive programs as well, and on specifications that describe not only the shape of the data structures but also their content or their size. Empirically, we find that our approach is powerful enough to deal with sophisticated benchmarks, e.g., iterative procedures for searching, inserting, or deleting elements in sorted lists, binary search trees, red-black trees, and AVL trees, very efficiently.
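
    As a concrete example of the kind of lemma relating inductive definitions that such verification conditions rely on, consider list segments; the definition and lemma below are a generic illustration of the idea, not taken from the paper's benchmarks:

        % A standard inductive definition of list segments:
        %   lseg(x, y) ::= (x = y /\ emp) \/ (exists z. x |-> z * lseg(z, y))
        % A typical composition lemma, of the sort that must be provided by the
        % user or generated automatically, states that adjacent segments compose:
        \[
            \mathsf{lseg}(x, y) * \mathsf{lseg}(y, \mathsf{nil})
            \models \mathsf{lseg}(x, \mathsf{nil}).
        \]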