
    Algorithms for recognizing knots and 3-manifolds

    This is a survey paper on algorithms for solving problems in 3-dimensional topology. In particular, it discusses Haken's approach to the recognition of the unknot and recent variations. (Comment: 17 pages, 7 figures; to appear in Chaos, Solitons and Fractals.)

    Compressing High-Dimensional Data Spaces Using Non-Differential Augmented Vector Quantization

    Large databases suffer from long query processing times and heavy space requirements. Database compression has been shown to alleviate the I/O bottleneck, reduce disk space, improve disk access speed, speed up queries, reduce overall retrieval time, and increase the effective I/O bandwidth. However, random access to individual tuples in a compressed database is very difficult to achieve with most available compression techniques. We propose a lossless compression technique called non-differential augmented vector quantization, a close variant of the novel augmented vector quantization. The technique is applicable to a collection of tuples and is especially effective for tuples with many low- to medium-cardinality fields. In addition, the technique supports standard database operations and permits very fast random access and atomic decompression of tuples in large collections. The technique maps a database relation into a static bitmap index cached access structure. Consequently, we were able to achieve substantial savings in space by storing each database tuple as a bit value in memory. Important distinguishing characteristics of our technique are that (a) individual tuples can be compressed and decompressed, rather than a full page or an entire relation at a time, and (b) the information needed for tuple compression and decompression can reside in memory or, at worst, in a single page. Promising application domains include decision support systems, statistical databases, and life databases with low-cardinality fields and possibly no text fields.
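    The abstract leaves the codec's internals unspecified, but the properties it claims (per-column dictionaries over low- to medium-cardinality fields, with each tuple encoded independently so it can be decompressed in isolation) suggest the following minimal sketch. The class TupleCodec and its methods are hypothetical names invented for illustration, not the paper's structure, and the real technique additionally wraps the codes in a static bitmap index.

```python
# Sketch of dictionary-style tuple compression with per-tuple random
# access, in the spirit of the properties claimed above. Illustrative
# only: names and structure are assumptions, not the paper's design.

class TupleCodec:
    """Compress tuples over low-cardinality fields into small integer codes."""

    def __init__(self, relation):
        # One value dictionary per column (cheap when cardinality is low).
        self.dicts, self.inverse = [], []
        for c in range(len(relation[0])):
            values = sorted({row[c] for row in relation})
            self.dicts.append({v: i for i, v in enumerate(values)})
            self.inverse.append(values)
        # Each tuple is encoded independently, so any single tuple can be
        # decompressed atomically without touching its neighbours.
        self.codes = [self.encode(row) for row in relation]

    def encode(self, row):
        # Mixed-radix packing: each field costs ~log2(cardinality) bits.
        code = 0
        for c, value in enumerate(row):
            code = code * len(self.inverse[c]) + self.dicts[c][value]
        return code

    def decode(self, i):
        # Random access: decompress tuple i alone in O(#columns) time.
        code, row = self.codes[i], []
        for c in range(len(self.inverse) - 1, -1, -1):
            code, digit = divmod(code, len(self.inverse[c]))
            row.append(self.inverse[c][digit])
        return tuple(reversed(row))


rel = [("NY", "red", 1), ("CA", "blue", 2), ("NY", "blue", 1)]
codec = TupleCodec(rel)
assert codec.decode(2) == ("NY", "blue", 1)
```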

    On optimally partitioning a text to improve its compression

    In this paper we investigate the problem of partitioning an input string T in such a way that compressing each part individually via a base compressor C yields a compressed output shorter than applying C to the entire T at once. This problem was introduced in the context of table compression and was later elaborated and extended to strings and trees. Unfortunately, the literature offers only poor solutions: namely, a cubic-time algorithm for computing the optimal partition based on dynamic programming; a few heuristics that do not guarantee any bounds on the efficacy of their computed partition; or algorithms that are efficient but work only in specific scenarios (such as the Burrows-Wheeler Transform) and achieve compression performance that might be worse than the optimal partitioning by an Ω(√(log n)) factor. Therefore, efficiently computing the optimal solution is still open. In this paper we provide the first algorithm that is guaranteed to compute, in O(n log_{1+ε} n) time, a partition of T whose compressed output is guaranteed to be no more than (1+ε) times worse than the optimal one, where ε may be any positive constant.
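    For concreteness, the cubic-time baseline mentioned above is a standard dynamic program: opt[j] is the cheapest total compressed size over all partitions of the prefix T[0..j), computed by trying every possible last piece T[i..j). The sketch below implements only this exact baseline, with zlib standing in for the base compressor C; the paper's actual contribution is approximating its optimum to within (1+ε) in O(n log_{1+ε} n) time.

```python
# Exact (cubic-time) optimal-partitioning baseline via dynamic programming.
# opt[j] = min over i < j of opt[i] + |C(T[i:j])|, where C is the base
# compressor. zlib is a stand-in choice, not mandated by the paper.
import zlib

def optimal_partition(T, compress=lambda s: zlib.compress(s.encode())):
    n = len(T)
    opt = [float("inf")] * (n + 1)
    cut = [0] * (n + 1)
    opt[0] = 0
    for j in range(1, n + 1):
        for i in range(j):  # candidate last piece T[i:j]
            cost = opt[i] + len(compress(T[i:j]))
            if cost < opt[j]:
                opt[j], cut[j] = cost, i
    # Walk the cut points backwards to recover the optimal partition.
    parts, j = [], n
    while j > 0:
        parts.append(T[cut[j]:j])
        j = cut[j]
    return opt[n], parts[::-1]

size, parts = optimal_partition("aaaaabbbbbaaaaa")
```

    With a linear-time compressor, each of the O(n^2) candidate pieces costs up to O(n) to compress, which is exactly where the cubic running time of this baseline comes from.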

    Polynomial Kernels for Weighted Problems

    Kernelization is a formalization of efficient preprocessing for NP-hard problems using the framework of parameterized complexity. Among open problems in kernelization, it has been asked many times whether there are deterministic polynomial kernelizations for Subset Sum and Knapsack when parameterized by the number n of items. We answer both questions affirmatively by using an algorithm for compressing numbers due to Frank and Tardos (Combinatorica 1987). This result was first used in the context of kernelization by Marx and Végh (ICALP 2013). We further illustrate its applicability by giving polynomial kernels also for weighted versions of several well-studied parameterized problems. Furthermore, when parameterized by the different item sizes, we obtain a polynomial kernelization for Subset Sum and an exponential kernelization for Knapsack. Finally, we also obtain kernelization results for polynomial integer programs.
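    Frank and Tardos' number-compression algorithm itself is beyond a short sketch, but the guarantee it must deliver is easy to state concretely: the replacement weights must leave the set of solutions unchanged, i.e., exactly the same index sets must hit the (replaced) target. The brute-force check below only makes that equivalence notion tangible on a toy instance with huge weights; it is not the kernelization algorithm.

```python
# Illustration of solution-preserving weight replacement for Subset Sum.
# Not Frank-Tardos: just a brute-force verifier of the equivalence that
# any sound weight compression must preserve.
from itertools import combinations

def solutions(weights, target):
    """All index sets whose weights sum to target (brute force, tiny n only)."""
    sols = set()
    for k in range(len(weights) + 1):
        for S in combinations(range(len(weights)), k):
            if sum(weights[i] for i in S) == target:
                sols.add(S)
    return sols

# A huge-weight instance and a small-weight one with identical solutions:
big   = solutions([10**12, 2 * 10**12, 3 * 10**12], 3 * 10**12)
small = solutions([1, 2, 3], 3)
assert big == small  # both are {(2,), (0, 1)}
```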