
    Logical Algorithms meets CHR: A meta-complexity result for Constraint Handling Rules with rule priorities

    This paper investigates the relationship between the Logical Algorithms language (LA) of Ganzinger and McAllester and Constraint Handling Rules (CHR). We present a translation schema from LA to CHR-rp (CHR with rule priorities) and show that the meta-complexity theorem for LA can be applied to a subset of CHR-rp via inverse translation. Inspired by the high-level implementation proposal for Logical Algorithms by Ganzinger and McAllester, and based on a new scheduling algorithm, we propose an alternative implementation for CHR-rp that gives strong complexity guarantees and results in a new and accurate meta-complexity theorem for CHR-rp. It is furthermore shown that the translation from Logical Algorithms to CHR-rp, combined with the new CHR-rp implementation, satisfies the complexity requirements for the Logical Algorithms meta-complexity result to hold.
    Comment: To appear in Theory and Practice of Logic Programming (TPLP)
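    To give a flavour of prioritised rule application, the following minimal Python sketch fires, at each step, an applicable rule instance of highest priority (lowest number), in the spirit of CHR-rp's semantics. The toy rule set and the naive scheduler are hypothetical illustrations, not the paper's translation schema or its scheduling algorithm.

```python
# Illustrative sketch of priority-driven rule application in the spirit of
# CHR with rule priorities (CHR-rp): among all applicable rule instances,
# one with the highest priority (lowest number) fires first. The toy
# "keep the minimum" rule below is a made-up example, not from the paper.

def applicable(store, rules):
    """Yield (priority, matched_instance, body) for every matching rule instance."""
    for prio, match, body in rules:
        for inst in match(store):
            yield prio, inst, body

def run(store, rules):
    """Repeatedly fire a highest-priority applicable instance until none remains."""
    while True:
        instances = sorted(applicable(store, rules), key=lambda t: t[0])
        if not instances:
            return store
        _, inst, body = instances[0]
        store = body(store, inst)

# Toy rule at priority 1: if the store holds x and y with x < y, drop y.
# Running it to a fixpoint keeps only the minimum element.
rules = [
    (1,
     lambda s: [(x, y) for x in s for y in s if x < y],   # match
     lambda s, inst: s - {inst[1]}),                      # body
]

print(run({7, 3, 9, 5}, rules))   # -> {3}
```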

    Reordering Rows for Better Compression: Beyond the Lexicographic Order

    Sorting database tables before compressing them improves the compression rate. Can we do better than the lexicographic order? For minimizing the number of runs in a run-length encoding compression scheme, the best row-ordering approaches are derived from traveling salesman heuristics, although there is a significant trade-off between running time and compression. A new heuristic, Multiple Lists, a variant of Nearest Neighbor that trades some compression for a major running-time speedup, is a good option for very large tables. However, for some compression schemes it is more important to generate long runs than merely few runs. For this case, another novel heuristic, Vortex, is promising. We find that we can improve run-length encoding by up to a factor of 3, whereas we can improve prefix coding by up to 80%; these gains are on top of the gains due to lexicographically sorting the table. In a few cases, we prove that the new row reordering is within 10% of optimal at minimizing the runs of identical values within columns.
    Comment: to appear in ACM TOD
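    To make the objective concrete, here is a minimal Python sketch, with a made-up toy table, of the quantity being minimized: the total number of runs of identical values within columns, before and after lexicographic sorting. It is illustrative only and does not implement the Multiple Lists or Vortex heuristics.

```python
# Count runs of identical values per column; fewer (or longer) runs mean
# better run-length-encoding compression. Lexicographic row sorting already
# reduces the count; the paper's heuristics aim to do better.

def column_runs(rows):
    """Total number of runs of identical values, summed over all columns."""
    if not rows:
        return 0
    total = 0
    for col in range(len(rows[0])):
        runs = 1
        for prev, cur in zip(rows, rows[1:]):
            if prev[col] != cur[col]:
                runs += 1
        total += runs
    return total

table = [
    ("ca", "red",  1),
    ("ny", "blue", 2),
    ("ca", "red",  2),
    ("ny", "red",  1),
    ("ca", "blue", 1),
]

print("unsorted runs:  ", column_runs(table))           # 12
print("lex-sorted runs:", column_runs(sorted(table)))   # 9
```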

    Dynamic Ordered Sets with Exponential Search Trees

    We introduce exponential search trees as a novel technique for converting static polynomial-space search structures for ordered sets into fully-dynamic linear-space data structures. This leads to an optimal bound of O(sqrt(log n/loglog n)) for searching and updating a dynamic set of n integer keys in linear space. Here searching for an integer y means finding the maximum key in the set that is smaller than or equal to y. This problem is equivalent to the standard textbook problem of maintaining an ordered set (see, e.g., Cormen, Leiserson, Rivest, and Stein: Introduction to Algorithms, 2nd ed., MIT Press, 2001). The best previous deterministic linear-space bound was O(log n/loglog n), due to Fredman and Willard from STOC 1990. No better deterministic search bound was known using polynomial space. We also get the following worst-case linear-space trade-offs between the number of keys n, the word length w, and the maximal key U < 2^w: O(min{loglog n + log n/log w, (loglog n)(loglog U)/(logloglog U)}). These trade-offs are, however, not likely to be optimal. Our results are generalized to finger searching and string searching, providing optimal results for both in terms of n.
    Comment: Revision corrects some typos and states things better for applications in subsequent paper
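    For readers unfamiliar with the query, the sketch below illustrates what "searching for an integer y" means here, using a plain comparison-based sorted list. It only achieves O(log n) search and O(n) insertion; the exponential search trees of the paper attain the far stronger word-RAM bounds quoted above.

```python
# Predecessor search on a dynamic set of integer keys: find the maximum
# key <= y. A sorted list with binary search shows the operation itself;
# it is a didactic stand-in, not the paper's data structure.
import bisect

class OrderedSet:
    def __init__(self):
        self.keys = []                        # kept sorted

    def insert(self, x):
        i = bisect.bisect_left(self.keys, x)
        if i == len(self.keys) or self.keys[i] != x:
            self.keys.insert(i, x)            # O(n) shift; fine for a sketch

    def search(self, y):
        """Maximum key <= y, or None if every key exceeds y."""
        i = bisect.bisect_right(self.keys, y)
        return self.keys[i - 1] if i else None

s = OrderedSet()
for k in (17, 3, 42, 8):
    s.insert(k)
print(s.search(10))   # -> 8
print(s.search(2))    # -> None
```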

    Substring filtering for low-cost linked data interfaces

    Recently, Triple Pattern Fragments (TPF) was introduced as a low-cost server-side interface for situations where high numbers of clients need to evaluate SPARQL queries. Scalability is achieved by moving part of the query execution to the client, at the cost of elevated query times. Since the TPF interface purposely does not support complex constructs such as SPARQL filters, queries that use them need to be executed mostly on the client, resulting in long execution times. We therefore investigated the impact of adding a literal substring matching feature to the TPF interface, with the goal of improving query performance while maintaining low server cost. In this paper, we discuss the client/server setup and compare the performance of SPARQL queries on multiple implementations, including Elasticsearch and a case-insensitive FM-index. Our evaluations indicate that these improvements allow for faster query execution without significantly increasing the load on the server. Offering the substring feature on TPF servers allows users to obtain faster responses for filter-based SPARQL queries. Furthermore, substring matching can be used to support other filters such as complete regular expressions or range queries.
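    The sketch below illustrates the general idea of server-side substring filtering using a hypothetical in-memory triple store: rather than shipping every pattern-matching triple to the client and evaluating FILTER(CONTAINS(...)) there, the server keeps only literals containing a given substring. The paper's servers instead back the feature with structures such as a case-insensitive FM-index or Elasticsearch.

```python
# Hypothetical TPF-style fragment lookup with an optional substring filter
# on object literals. The triples and the fragment() helper are made up for
# illustration; they are not the paper's interface or implementation.

TRIPLES = [
    ("ex:b1", "dc:title", "Linked Data Basics"),
    ("ex:b2", "dc:title", "SPARQL in Practice"),
    ("ex:b3", "dc:title", "Introduction to Databases"),
]

def fragment(pattern, substring=None):
    """Return triples matching an (s, p, o) pattern; None acts as a wildcard.
    If `substring` is given, keep only triples whose object contains it
    (case-insensitively), so the client receives a smaller fragment."""
    out = []
    for s, p, o in TRIPLES:
        if all(q is None or q == v for q, v in zip(pattern, (s, p, o))):
            if substring is None or substring.lower() in o.lower():
                out.append((s, p, o))
    return out

# Client asks only for titles containing "data":
for triple in fragment((None, "dc:title", None), substring="data"):
    print(triple)
# -> ('ex:b1', 'dc:title', 'Linked Data Basics')
#    ('ex:b3', 'dc:title', 'Introduction to Databases')
```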

    On the Necessary Memory to Compute the Plurality in Multi-Agent Systems

    We consider the Relative-Majority Problem (also known as Plurality), in which, given a multi-agent system where each agent is initially provided an input value out of a set of k possible ones, each agent is required to eventually compute the input value with the highest frequency in the initial configuration. We consider the problem in the general Population Protocols model in which, given an underlying undirected connected graph whose nodes represent the agents, edges are selected by a globally fair scheduler. The state complexity required for solving the Plurality Problem (i.e., the minimum number of memory states each agent needs in order to solve the problem) has been a long-standing open problem. The best protocol so far for the general multi-valued case requires polynomial memory: Salehkaleybar et al. (2015) devised a protocol that solves the problem by employing O(k 2^k) states per agent, and they conjectured their upper bound to be optimal. On the other hand, under the strong assumption that agents initially agree on a total ordering of the initial input values, Gasieniec et al. (2017) provided an elegant logarithmic-memory plurality protocol. In this work, we refute Salehkaleybar et al.'s conjecture by providing a plurality protocol which employs O(k^11) states per agent. Central to our result is an ordering protocol, of independent interest, which allows us to leverage the plurality protocol by Gasieniec et al. We also provide an Omega(k^2)-state lower bound on the memory necessary to solve the problem, proving that the Plurality Problem cannot be solved within the mere memory necessary to encode the output.
    Comment: 14 pages, accepted at CIAC 201
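    As a point of reference for the model, here is a small simulation sketch of a population protocol. It runs the classic 3-state approximate-majority protocol for k = 2 opinions (Angluin, Aspnes, and Eisenstat), not the paper's plurality protocol, and it uses a uniformly random scheduler on a complete graph rather than an arbitrary globally fair scheduler.

```python
# Tiny population-protocol simulation to make the model concrete.
# 3-state approximate majority for two opinions A and B (NOT the paper's
# O(k^11)-state plurality protocol); it may err when the initial margin is
# small, and the random pairwise scheduler on a clique is a simplification.
import random

RULES = {
    ("A", "B"): ("A", "U"),   # disagreeing agents: responder becomes undecided
    ("B", "A"): ("B", "U"),
    ("A", "U"): ("A", "A"),   # decided initiator recruits an undecided responder
    ("B", "U"): ("B", "B"),
}

def simulate(n_a, n_b, steps=200_000, seed=0):
    rng = random.Random(seed)
    agents = ["A"] * n_a + ["B"] * n_b
    for _ in range(steps):
        i, j = rng.sample(range(len(agents)), 2)   # scheduler picks an ordered pair
        pair = (agents[i], agents[j])
        if pair in RULES:
            agents[i], agents[j] = RULES[pair]
    return {s: agents.count(s) for s in "ABU"}

print(simulate(60, 40))   # with high probability, all agents end in state "A"
```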