    Binary Jumbled String Matching for Highly Run-Length Compressible Texts

    The Binary Jumbled String Matching problem is defined as follows: given a string $s$ over $\{a,b\}$ of length $n$ and a query $(x,y)$, with $x,y$ non-negative integers, decide whether $s$ has a substring $t$ with exactly $x$ $a$'s and $y$ $b$'s. Previous solutions created an index of size $O(n)$ in a pre-processing step, which was then used to answer queries in constant time. The fastest algorithms for constructing this index run in time $O(n^2/\log n)$ [Burcsi et al., FUN 2010; Moosa and Rahman, IPL 2010], or $O(n^2/\log^2 n)$ in the word-RAM model [Moosa and Rahman, JDA 2012]. We propose an index constructed directly from the run-length encoding of $s$. The construction time of our index is $O(n + \rho^2 \log \rho)$, where $O(n)$ is the time for computing the run-length encoding of $s$ and $\rho$ is the length of this encoding. This is no worse than previous solutions if $\rho = O(n/\log n)$, and better if $\rho = o(n/\log n)$. Our index $L$ can be queried in $O(\log \rho)$ time. While $|L| = O(\min(n, \rho^2))$ in the worst case, preliminary investigations have indicated that $|L|$ may often be close to $\rho$. Furthermore, the algorithm for constructing the index is conceptually simple and easy to implement. In an attempt to shed light on the structure and size of our index, we characterize it in terms of the prefix normal forms of $s$ introduced in [Fici and Lipták, DLT 2011].

    Comment: v2: only small cosmetic changes; v3: new title, weakened conjectures on the size of the Corner Index (we no longer conjecture it to be always linear in the size of the RLE); removed experimental part on random strings (valid, but limited in predictive power w.r.t. general strings); v3 published in IPL
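
    For reference, the classic linear-size index mentioned above can be built naively as follows: over a binary alphabet, the set of $a$-counts attainable by substrings of a fixed length is an interval (sliding a window one position changes the count by at most one), so storing the minimum and maximum $a$-count per length answers any query in constant time. The Python sketch below uses this fact; it illustrates the query structure only, with a quadratic construction, not the paper's $O(n + \rho^2 \log \rho)$ run-length-based construction. Function names are illustrative.

        def build_index(s):
            # prefix_a[i] = number of a's in s[:i]
            n = len(s)
            prefix_a = [0] * (n + 1)
            for i, c in enumerate(s):
                prefix_a[i + 1] = prefix_a[i] + (c == 'a')
            lo, hi = [0] * (n + 1), [0] * (n + 1)
            for ell in range(1, n + 1):
                counts = [prefix_a[i + ell] - prefix_a[i] for i in range(n - ell + 1)]
                lo[ell], hi[ell] = min(counts), max(counts)
            return lo, hi

        def query(lo, hi, x, y):
            # is there a substring with exactly x a's and y b's?
            ell = x + y
            if ell == 0:
                return True
            if ell > len(lo) - 1:
                return False
            return lo[ell] <= x <= hi[ell]

        # lo, hi = build_index('aabba'); query(lo, hi, 2, 1) -> True ('aab')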

    Algorithms to Compute the Lyndon Array

    We first describe three algorithms for computing the Lyndon array that have been suggested in the literature, but for which no structured exposition has been given. Two of these algorithms execute in quadratic time in the worst case; the third achieves linear time, but at the expense of prior computation of both the suffix array and the inverse suffix array of $x$. We then go on to describe two variants of a new algorithm that avoids prior computation of global data structures and executes in worst-case $O(n \log n)$ time. Experimental evidence suggests that all but one of these five algorithms require only linear execution time in practice, with the two new algorithms faster by a small factor. We conjecture that there exists a fast and worst-case linear-time algorithm to compute the Lyndon array that is also elementary (making no use of global data structures such as the suffix array).
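
    One simple quadratic approach, close in spirit to the algorithms surveyed in the paper, exploits the fact that the Lyndon array value at position $i$ equals the distance to the next lexicographically smaller suffix (or to the end of the string). A minimal Python sketch, using direct suffix comparisons and therefore nowhere near the linear-time goal discussed above:

        def lyndon_array(s):
            # lam[i] = length of the longest Lyndon word starting at i,
            # computed as (index of the next smaller suffix) - i
            n = len(s)
            lam = [0] * n
            for i in range(n):
                j = i + 1
                while j < n and s[j:] > s[i:]:
                    j += 1
                lam[i] = j - i
            return lam

        # lyndon_array('abaab') -> [2, 1, 3, 2, 1]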

    On the maximal sum of exponents of runs in a string

    A run is an inclusion-maximal occurrence in a string (as a subinterval) of a repetition $v$ with a period $p$ such that $2p \le |v|$. The exponent of a run is defined as $|v|/p$ and is $\ge 2$. We show new bounds on the maximal sum of exponents of runs in a string of length $n$. Our upper bound of $4.1n$ is better than the best previously known proven bound of $5.6n$ by Crochemore & Ilie (2008). The lower bound of $2.035n$, obtained using a family of binary words, contradicts the conjecture of Kolpakov & Kucherov (1999) that the maximal sum of exponents of runs in a string of length $n$ is smaller than $2n$.

    Comment: 7 pages, 1 figure
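
    To make the quantity being bounded concrete, here is a brute-force Python sketch that enumerates runs and sums their exponents: for each start position and candidate period, it extends the periodicity as far as possible, keeping only maximal occurrences together with their smallest period. It is cubic-time and meant purely as an executable definition; names are illustrative.

        def sum_of_run_exponents(s):
            n = len(s)
            best = {}  # (start, end) -> smallest period seen for this interval
            for i in range(n):
                for p in range(1, (n - i) // 2 + 1):
                    if i > 0 and s[i - 1] == s[i - 1 + p]:
                        continue  # extendable to the left: not maximal
                    j = i + p
                    while j < n and s[j] == s[j - p]:
                        j += 1    # extend the periodicity to the right
                    if j - i >= 2 * p:  # exponent |v|/p is at least 2
                        if (i, j) not in best or p < best[(i, j)]:
                            best[(i, j)] = p
            return sum((j - i) / p for (i, j), p in best.items())

        # sum_of_run_exponents('aabaabaa') -> about 8.67
        # (runs 'aa', 'aa', 'aa' with exponent 2, and 'aabaabaa' with exponent 8/3)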

    Algorithms for Longest Common Abelian Factors

    In this paper we consider the problem of computing the longest common abelian factor (LCAF) of two given strings. We present a simple $O(\sigma n^2)$-time algorithm, where $n$ is the length of the strings and $\sigma$ is the alphabet size, and a sub-quadratic-time solution for the binary string case, both with linear space requirements. Furthermore, we present a modified algorithm applying some interesting tricks and experimentally show that the resulting algorithm runs faster.

    Comment: 13 pages, 4 figures
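
    A hedged sketch of a simple quadratic solution of the kind the abstract refers to: for each candidate length, collect the Parikh vectors (symbol-count vectors) of all windows of that length in both strings with a sliding window, and report the largest length at which the two sets intersect. This is only an illustration of the problem, not necessarily the paper's exact algorithm; names are illustrative.

        from collections import Counter

        def lcaf_length(s, t):
            for ell in range(min(len(s), len(t)), 0, -1):
                if windows(s, ell) & windows(t, ell):
                    return ell  # longest length with a common Parikh vector
            return 0

        def windows(u, ell):
            # Parikh vectors of all length-ell substrings, via a sliding window
            c = Counter(u[:ell])
            seen = {frozenset(c.items())}
            for i in range(ell, len(u)):
                c[u[i]] += 1
                c[u[i - ell]] -= 1
                if c[u[i - ell]] == 0:
                    del c[u[i - ell]]
                seen.add(frozenset(c.items()))
            return seen

        # lcaf_length('aabb', 'baba') -> 4 (the strings are abelian-equivalent)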

    Faster subsequence recognition in compressed strings

    Computation on compressed strings is one of the key approaches to processing massive data sets. We consider local subsequence recognition problems on strings compressed by straight-line programs (SLPs), a formalism closely related to Lempel-Ziv compression. For an SLP-compressed text of length $\bar m$ and an uncompressed pattern of length $n$, Cégielski et al. gave an algorithm for local subsequence recognition running in time $O(\bar m n^2 \log n)$. We improve the running time to $O(\bar m n^{1.5})$. Our algorithm can also be used to compute the longest common subsequence between a compressed text and an uncompressed pattern in time $O(\bar m n^{1.5})$; the same problem with a compressed pattern is known to be NP-hard.
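
    To make the SLP setting concrete, here is a sketch of the standard tabulation idea for global (not local) subsequence matching on an SLP: for every nonterminal, tabulate the function mapping "q pattern characters already matched" to the state reached after scanning that nonterminal's expansion; binary rules compose the functions of their children. This runs in $O(|\mathrm{SLP}| \cdot n)$ time and is not the paper's $O(\bar m n^{1.5})$ local-recognition algorithm; the rule encoding below is an assumption for illustration.

        def subsequence_in_slp(pattern, rules, start):
            # rules: nonterminal -> terminal character, or pair (Y, Z)
            n = len(pattern)
            table = {}

            def f(x):  # state-transition function of nonterminal x
                if x not in table:
                    rhs = rules[x]
                    if isinstance(rhs, str):   # terminal rule X -> c
                        table[x] = [q + (q < n and pattern[q] == rhs)
                                    for q in range(n + 1)]
                    else:                      # binary rule X -> Y Z
                        fy, fz = f(rhs[0]), f(rhs[1])
                        table[x] = [fz[fy[q]] for q in range(n + 1)]
                return table[x]

            return f(start)[0] == n

        # rules = {'A': 'a', 'B': 'b', 'X': ('A', 'B'), 'Y': ('X', 'X')}
        # 'Y' expands to 'abab'; subsequence_in_slp('aab', rules, 'Y') -> True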