17 research outputs found

    A linear lower bound for incrementing a space-optimal integer representation in the bit-probe model

    We present the first linear lower bound on the number of bits that must be accessed in the worst case to increment an integer in an arbitrary space-optimal binary representation. The best previously known lower bound was logarithmic. It is known that a logarithmic number of read bits in the worst case suffices to increment some integer representations that use one bit of redundancy; we therefore exhibit an exponential gap between space-optimal and redundant counters. Our proof is based on viewing the increment procedure of a space-optimal counter as a permutation and computing its parity. For every space-optimal counter, this permutation must be odd, and implementing an odd permutation requires reading at least half of the bits in the worst case. The combination of these two observations explains why the worst-case space-optimal problem differs substantially both from the average-case setting, where a constant expected number of reads suffices, and from almost space-optimal representations with a logarithmic number of reads in the worst case. Comment: 12 pages, 4 figures.
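
    The parity observation at the heart of the proof can be checked directly for small counters. The sketch below is an illustration added here, not code from the paper: it computes the parity of the increment map x -> (x + 1) mod 2^n, which is a single cycle on 2^n elements and therefore an odd permutation for every n >= 1.

```python
# Illustrative check of the parity observation: the increment permutation of a
# space-optimal n-bit counter is a single cycle of length 2^n, hence odd.

def permutation_parity(perm):
    """Return 'even' or 'odd' for a permutation given as a list where
    perm[i] is the image of i.  Parity = (-1)^(n - number_of_cycles)."""
    n = len(perm)
    seen = [False] * n
    cycles = 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return "even" if (n - cycles) % 2 == 0 else "odd"

for bits in range(1, 6):
    size = 1 << bits
    increment = [(x + 1) % size for x in range(size)]
    print(bits, permutation_parity(increment))   # prints 'odd' for every n >= 1
```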

    Dynamic Complexity of Formal Languages

    The paper investigates the power of the dynamic complexity classes DynFO, DynQF and DynPROP over string languages. The latter two classes contain problems that can be maintained using quantifier-free first-order updates, with and without auxiliary functions, respectively. It is shown that the languages maintainable in DynPROP are exactly the regular languages, even when arbitrary precomputation is allowed. This yields lower bounds for DynPROP and separates DynPROP from DynQF and DynFO. Further, it is shown that every context-free language can be maintained in DynFO and that a number of specific context-free languages, for example all Dyck languages, are maintainable in DynQF. Furthermore, the dynamic complexity of regular tree languages is investigated and some results concerning arbitrary structures are obtained: there exist first-order definable properties which are not maintainable in DynPROP. On the other hand, any existential first-order property can be maintained in DynQF when precomputation is allowed. Comment: Contains the material presented at STACS 2009, extended with proofs and examples which were omitted due to lack of space.
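
    To make the flavour of dynamic maintenance concrete, the sketch below is an added illustration, not the construction from the paper: it maintains membership of a word in a fixed regular language under single-letter substitutions by storing, for every interval [i, j], the state transformation induced by w[i..j]. After an update, each affected interval map is recomputed by composing two stored maps around the new letter, echoing the quantifier-free, precomputation-friendly updates of DynPROP/DynQF. All identifier names are illustrative.

```python
# Illustrative sketch: dynamic membership in a regular language under
# single-letter substitutions, via per-interval state-transformation maps.

class DynamicRegularMembership:
    def __init__(self, delta, start, accepting, word):
        self.delta = delta                    # delta[q][a] -> next state
        self.start = start
        self.accepting = set(accepting)
        self.w = list(word)
        self.states = list(delta)
        n = len(self.w)
        # f[i][j][q] = state reached from q by reading w[i..j]
        self.f = [[None] * n for _ in range(n)]
        for i in range(n):
            for j in range(i, n):
                self.f[i][j] = {q: self._scan(q, i, j) for q in self.states}

    def _scan(self, q, i, j):
        for p in range(i, j + 1):
            q = self.delta[q][self.w[p]]
        return q

    def _segment(self, q, i, j):
        # State reached from q on w[i..j]; an empty interval leaves q unchanged.
        return q if i > j else self.f[i][j][q]

    def substitute(self, p, a):
        """Replace w[p] by the letter a and refresh every interval containing p."""
        self.w[p] = a
        n = len(self.w)
        for i in range(p + 1):
            for j in range(p, n):
                self.f[i][j] = {
                    q: self._segment(self.delta[self._segment(q, i, p - 1)][a],
                                     p + 1, j)
                    for q in self.states}

    def member(self):
        return self._segment(self.start, 0, len(self.w) - 1) in self.accepting

# Example: the language (ab)* over {a, b}, with state 2 acting as a sink.
delta = {0: {"a": 1, "b": 2}, 1: {"a": 2, "b": 0}, 2: {"a": 2, "b": 2}}
d = DynamicRegularMembership(delta, start=0, accepting=[0], word="abab")
print(d.member())        # True:  "abab" is in (ab)*
d.substitute(1, "a")     # the word becomes "aaab"
print(d.member())        # False
```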

    Searching Constant Width Mazes Captures the AC0 Hierarchy

    We show that searching a width-k maze is complete for Pi_k, i.e., for the k-th level of the AC0 hierarchy. Equivalently, st-connectivity for width-k grid graphs is complete for Pi_k. As an application, we show that there is a data structure solving dynamic st-connectivity for constant-width grid graphs with time bound O(log log n) per operation on a random access machine. The dynamic algorithm is derived from the parallel one in an indirect way using algebraic tools.
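
    The following sketch is a simplified illustration added here, not the paper's doubly logarithmic data structure: it restricts attention to a layered, left-to-right directed grid of width k, where reachability across a block of columns is a k x k Boolean matrix and blocks compose by Boolean matrix multiplication. A segment tree over the per-column matrices then supports edge updates and end-to-end reachability queries in O(k^3 log n) time; the paper's algebraic machinery is what brings such bounds down to O(log log n) per operation for the undirected constant-width case.

```python
# Simplified sketch: dynamic reachability in a layered width-K directed grid
# by composing per-column Boolean reachability matrices in a segment tree.

K = 3  # width of the grid

def compose(A, B):
    """Boolean K x K matrix product: reachability across two adjacent blocks."""
    return [[any(A[i][m] and B[m][j] for m in range(K)) for j in range(K)]
            for i in range(K)]

IDENTITY = [[i == j for j in range(K)] for i in range(K)]

class SegmentTree:
    def __init__(self, mats):
        self.n = len(mats)
        self.size = 1
        while self.size < self.n:
            self.size *= 2
        self.tree = [IDENTITY] * (2 * self.size)
        for i, m in enumerate(mats):
            self.tree[self.size + i] = m
        for i in range(self.size - 1, 0, -1):
            self.tree[i] = compose(self.tree[2 * i], self.tree[2 * i + 1])

    def update(self, pos, mat):
        i = self.size + pos
        self.tree[i] = mat
        i //= 2
        while i:
            self.tree[i] = compose(self.tree[2 * i], self.tree[2 * i + 1])
            i //= 2

    def whole(self):
        return self.tree[1]

# cols[c][i][j] is True iff row i of column c reaches row j of column c + 1;
# here every column starts with straight horizontal edges only.
cols = [[[i == j for j in range(K)] for i in range(K)] for _ in range(8)]
st = SegmentTree(cols)
print(st.whole()[0][0])          # True: row 0 reaches row 0 straight across
cols[4][0][0] = False            # cut the horizontal edge at column 4, row 0
st.update(4, cols[4])
print(st.whole()[0][0])          # False: row 0 no longer reaches the end
```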

    Towards the Efficient Generation of Gray Codes in the Bitprobe Model

    We examine the problem of representing integers modulo L so that both increment and decrement operations can be performed efficiently. This problem is studied in the bitprobe model, where the complexity of the underlying problem is measured by the number of bit operations performed on the data structure. In this thesis, we are primarily interested in constructing space-optimal data structures. That is, we would like to use exactly n bits to represent integers modulo 2^n. Brodal et al. gave such a data structure, which requires n-1 bit reads and 3 bit writes, in the worst case, to perform increment and decrement operations. We provide several improvements to their data structure. First, we give a data structure that requires n-1 bit reads and 2 bit writes, in the worst case, to perform increment and decrement operations. Then, we refine this result to obtain a data structure that requires n-1 bit reads and a single bit write to perform both operations. This disproves the conjecture that if a space-optimal data structure uses only 1 bit write to perform these operations, then every bit in the data structure must be inspected in the worst case.
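
    For contrast with these space-optimal structures, the classical binary reflected Gray code already achieves a single bit write per increment, but the standard increment rule sketched below reads all n bits in the worst case, since it needs the parity of the number of ones. This sketch is the textbook Gray code, added for illustration; it is not one of the thesis's data structures, which bring the worst-case reads down to n-1.

```python
# Classical binary reflected Gray code increment: writes exactly one bit,
# but reads every bit (to determine the parity of the ones).

def gray_increment(bits):
    """Increment a Gray-code word given as a list of 0/1 with index 0 = LSB."""
    ones = sum(bits)                      # reads every bit
    if ones % 2 == 0:
        bits[0] ^= 1                      # even parity: flip the LSB
    else:
        i = bits.index(1)                 # position of the lowest set bit
        if i + 1 < len(bits):
            bits[i + 1] ^= 1              # flip the bit just above it
        else:
            bits[i] ^= 1                  # wrap around from the last codeword
    return bits

# Walk through the 3-bit Gray code starting from 000.
word = [0, 0, 0]
for _ in range(8):
    print(word)
    gray_increment(word)
```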

    Dynamic Data Structures for Parameterized String Problems

    We revisit classic string problems considered in the area of parameterized complexity, and study them through the lens of dynamic data structures. That is, instead of asking for a static algorithm that solves the given instance efficiently, our goal is to design a data structure that efficiently maintains a solution, or reports a lack thereof, upon updates in the instance. We first consider the Closest String problem, for which we design randomized dynamic data structures with amortized update times d^O(d) and |Σ|^O(d), respectively, where Σ is the alphabet and d is the assumed bound on the maximum distance. These are obtained by combining known static approaches to Closest String with color-coding. Next, we note that from a result of Frandsen et al. [J. ACM'97] one can easily infer a meta-theorem that provides dynamic data structures for parameterized string problems with worst-case update time of the form O(log log n), where k is the parameter in question and n is the length of the string. We showcase the utility of this meta-theorem by giving such data structures for the problems Disjoint Factors and Edit Distance. We also give explicit data structures for these problems, with worst-case update times O(k 2^k log log n) and O(k^2 log log n), respectively. Finally, we discuss how a lower bound methodology introduced by Amarilli et al. [ICALP'21] can be used to show that obtaining update time O(f(k)) for Disjoint Factors and Edit Distance is unlikely already for a constant value of the parameter k. Comment: 28 pages.
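
    The routine below sketches the kind of known static d^O(d)-time branching approach to Closest String that, per the abstract, is combined with color-coding to obtain the dynamic data structure. It is only the classical static algorithm, added here for illustration; the dynamic structures are not reproduced, and all identifier names are illustrative.

```python
# Static branching algorithm for Closest String: find a string within Hamming
# distance d of every input string, or report that none exists.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def closest_string(strings, d, candidate=None, budget=None):
    """Return a center string within distance d of all inputs, or None."""
    if candidate is None:
        candidate, budget = list(strings[0]), d
    for s in strings:
        if hamming(candidate, s) > d:
            if budget == 0:
                return None
            # Branch on at most d + 1 positions where candidate and s differ:
            # any valid center must agree with s on one of them.
            positions = [i for i in range(len(s))
                         if candidate[i] != s[i]][:d + 1]
            for p in positions:
                child = candidate[:]
                child[p] = s[p]
                result = closest_string(strings, d, child, budget - 1)
                if result is not None:
                    return result
            return None
    return "".join(candidate)

print(closest_string(["ACCA", "ACTA", "AGTA"], d=1))   # "ACTA"
print(closest_string(["AAAA", "TTTT"], d=1))           # None
```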

    Marked Ancestor Problems (Preliminary Version)

    Consider a rooted tree whose nodes can be marked or unmarked. Given a node, we want to find its nearest marked ancestor. This generalises the well-known predecessor problem, where the tree is a path. We show tight upper and lower bounds for this problem. The lower bounds are proved in the cell probe model, and the upper bounds run on a unit-cost RAM. As easy corollaries, we prove (often optimal) lower bounds on a number of problems, including planar range searching (in particular the existential or emptiness variant), priority search trees, static tree union-find, and several problems from dynamic computational geometry, such as intersection problems, proximity problems, and ray shooting. Our upper bounds improve a number of algorithms from various fields, including dynamic dictionary matching and coloured ancestor problems.
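
    A minimal baseline makes the problem concrete: marks toggle in constant time, and a query simply walks from the node toward the root, costing time proportional to the depth. The sketch below is only this naive baseline, added for illustration; it is not the paper's data structure, whose point is to replace the walk with far faster operations.

```python
# Naive marked-ancestor structure: O(1) mark/unmark, O(depth) query.

class NaiveMarkedAncestor:
    def __init__(self, parent):
        self.parent = parent          # parent[v] = parent of v; the root maps to None
        self.marked = set()

    def mark(self, v):
        self.marked.add(v)

    def unmark(self, v):
        self.marked.discard(v)

    def nearest_marked_ancestor(self, v):
        """Return the nearest marked ancestor of v (v itself included), or None."""
        while v is not None:
            if v in self.marked:
                return v
            v = self.parent[v]
        return None

# A path 0 - 1 - 2 - 3 rooted at 0: exactly the predecessor problem from the
# abstract, with marked nodes playing the role of the stored keys.
t = NaiveMarkedAncestor(parent={0: None, 1: 0, 2: 1, 3: 2})
t.mark(1)
print(t.nearest_marked_ancestor(3))   # 1
t.mark(2)
print(t.nearest_marked_ancestor(3))   # 2
```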

    Partial Sums on the Ultra-Wide Word RAM

    We consider the classic partial sums problem on the ultra-wide word RAM model of computation. This model extends the classic w-bit word RAM model with special ultrawords of length w^2 bits that support standard arithmetic and Boolean operations, as well as scattered memory access operations that can access w (non-contiguous) locations in memory. The ultra-wide word RAM model captures (and idealizes) modern vector processor architectures. Our main result is a new in-place data structure for the partial sums problem that stores only a constant number of ultrawords in addition to the input and supports operations in doubly logarithmic time. This matches the best known time bounds for the problem (among polynomial-space data structures) while improving the space from superlinear to a constant number of ultrawords. Our results are based on a simple and elegant in-place word RAM data structure, known as the Fenwick tree. Our main technical contribution is a new efficient parallel ultra-wide word RAM implementation of the Fenwick tree, which is likely of independent interest. Comment: Extended abstract appeared at TAMC 202
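
    The abstract's starting point, the Fenwick tree, is a standard word RAM structure; its sequential version is sketched below for reference. The paper's contribution, a parallel ultra-wide word RAM implementation, is not reproduced here.

```python
# Classic sequential Fenwick (binary indexed) tree: an in-place structure
# supporting point updates and prefix sums in O(log n) time.

class FenwickTree:
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)      # 1-indexed; tree[i] covers i - lowbit(i) + 1 .. i

    def update(self, i, delta):
        """Add delta to element i (1-indexed)."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)              # jump to the next node covering i

    def prefix_sum(self, i):
        """Return the sum of elements 1..i."""
        total = 0
        while i > 0:
            total += self.tree[i]
            i -= i & (-i)              # drop the lowest set bit
        return total

ft = FenwickTree(8)
ft.update(3, 5)
ft.update(7, 2)
print(ft.prefix_sum(6))   # 5
print(ft.prefix_sum(8))   # 7
```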

    Validation of a Scale to Assess the Reversibility of Thought in Verbal Arithmetic Problems

    Background: Reversibility is a key concept for the understanding and development of mathematical thinking. There is agreement that problem-solving is a fundamental part of mathematical competence, and some authors regard reversible thinking as a requirement for it. Objectives: We want to validate an instrument that assesses the reversibility of thought when solving verbal arithmetic problems (word problems) involving various operations, semantic-mathematical structures and proximity of situational information. Design: A qualitative study was carried out on the data obtained from experts, and a quantitative study was carried out to determine the validity and reliability of the instrument. Setting and Participants: 318 students from different Spanish schools attending primary education (ages 6 to 12) participated. Data collection and analysis: Participants performed 180 mathematical tasks distributed over three theoretical scales, two operations, and four semantic configurations. Results: To determine the consistency of the data, a reliability analysis was performed globally and on each of the scales, with all values greater than 0.90. Exploratory factor analysis resulted in three factors that explained more than 70% of the variance. To analyse the validity of the instrument, confirmatory factor analysis was performed, and its indices showed an adequate fit of the models. Conclusions: We consider the designed instrument sufficiently robust to assess the reversibility of the basic addition and subtraction operations and, in addition, to analyse the discrimination of word problems according to their semantic-mathematical structure and situational context.