
    Relating Graph Thickness to Planar Layers and Bend Complexity

    The thickness of a graph $G=(V,E)$ with $n$ vertices is the minimum number of planar subgraphs of $G$ whose union is $G$. A polyline drawing of $G$ in $\mathbb{R}^2$ is a drawing $\Gamma$ of $G$, where each vertex is mapped to a point and each edge is mapped to a polygonal chain. Bend and layer complexities are two important aesthetics of such a drawing. The bend complexity of $\Gamma$ is the maximum number of bends per edge in $\Gamma$, and the layer complexity of $\Gamma$ is the minimum integer $r$ such that the set of polygonal chains in $\Gamma$ can be partitioned into $r$ disjoint sets, where each set corresponds to a planar polyline drawing. Let $G$ be a graph of thickness $t$. By Fáry's theorem, if $t=1$, then $G$ can be drawn on a single layer with bend complexity $0$. A few extensions to higher thickness are known, e.g., if $t=2$ (resp., $t>2$), then $G$ can be drawn on $t$ layers with bend complexity $2$ (resp., $3n+O(1)$). However, allowing a higher number of layers may reduce the bend complexity, e.g., complete graphs require $\Theta(n)$ layers to be drawn using $0$ bends per edge. In this paper we present an elegant extension of Fáry's theorem to draw graphs of thickness $t>2$. We first prove that thickness-$t$ graphs can be drawn on $t$ layers with $2.25n+O(1)$ bends per edge. We then develop another technique to draw thickness-$t$ graphs on $t$ layers with bend complexity $O(\sqrt{2}^{t} \cdot n^{1-(1/\beta)})$, where $\beta = 2^{\lceil (t-2)/2 \rceil}$. Previously, the bend complexity was not known to be sublinear for $t>2$. Finally, we show that graphs with linear arboricity $k$ can be drawn on $k$ layers with bend complexity $\frac{3(k-1)n}{4k-2}$.
    Comment: A preliminary version appeared at the 43rd International Colloquium on Automata, Languages and Programming (ICALP 2016).
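    For intuition about the thickness parameter, Euler's formula gives a quick lower bound: a simple planar graph on $n \ge 3$ vertices has at most $3n-6$ edges, so a graph with $m$ edges needs at least $\lceil m/(3n-6) \rceil$ planar layers. A minimal sketch (not from the paper, just the standard counting argument):

```python
from math import ceil, comb

def thickness_lower_bound(n: int, m: int) -> int:
    """Euler-formula lower bound on thickness: each planar layer
    can carry at most 3n - 6 edges (for n >= 3)."""
    return ceil(m / (3 * n - 6))

# The complete graph K_n has comb(n, 2) edges; its exact thickness is
# known to be floor((n + 7) / 6) except for n = 9, 10 (where it is 3).
for n in (5, 9, 11):
    print(n, thickness_lower_bound(n, comb(n, 2)))
```

This lower bound is tight for $K_5$ (thickness 2) but, e.g., not for $K_9$, whose thickness is 3.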

    LRM-Trees: Compressed Indices, Adaptive Sorting, and Compressed Permutations

    LRM-Trees are an elegant way to partition a sequence of values into sorted consecutive blocks, and to express the relative position of the first element of each block within a previous block. They were used to encode ordinal trees and to index integer arrays in order to support range minimum queries on them. We describe how they yield many other convenient results in a variety of areas, from data structures to algorithms: some compressed succinct indices for range minimum queries; a new adaptive sorting algorithm; and a compressed succinct data structure for permutations supporting direct and inverse application, in time all the shorter as the permutation is compressible.
    Comment: 13 pages, 1 figure
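    The tree shape itself is simple to build: in an LRM-tree, the parent of position $i$ is the nearest position $j < i$ holding a strictly smaller value. A minimal stack-based $O(n)$ sketch of just that step (the compressed encodings are the paper's contribution and are not reproduced here):

```python
def lrm_parents(a):
    """Parent of position i in the LRM-tree: the nearest j < i with
    a[j] < a[i]. Positions with no smaller predecessor hang off a
    virtual root, encoded here as parent -1. Runs in O(n): the stack
    holds indices whose values are strictly increasing bottom-to-top."""
    parents = []
    stack = []
    for i, v in enumerate(a):
        while stack and a[stack[-1]] >= v:
            stack.pop()              # cannot be an ancestor of i
        parents.append(stack[-1] if stack else -1)
        stack.append(i)
    return parents

print(lrm_parents([3, 1, 4, 1, 5, 9, 2, 6]))
# -> [-1, -1, 1, -1, 3, 4, 3, 6]
```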

    $(\min,+)$ Matrix and Vector Products for Inputs Decomposable into Few Monotone Subsequences

    We study the time complexity of computing the $(\min,+)$ matrix product of two $n\times n$ integer matrices in terms of $n$ and the number of monotone subsequences the rows of the first matrix and the columns of the second matrix can be decomposed into. In particular, we show that if each row of the first matrix can be decomposed into at most $m_1$ monotone subsequences and each column of the second matrix can be decomposed into at most $m_2$ monotone subsequences, such that all the subsequences are non-decreasing or all of them are non-increasing, then the $(\min,+)$ product of the matrices can be computed in $O(m_1m_2n^{2.569})$ time. On the other hand, we observe that if all the rows of the first matrix are non-decreasing and all columns of the second matrix are non-increasing, or {\em vice versa}, then this case is as hard as the general one. Similarly, we also study the time complexity of computing the $(\min,+)$ convolution of two $n$-dimensional integer vectors in terms of $n$ and the number of monotone subsequences the two vectors can be decomposed into. We show that if the first vector can be decomposed into at most $m_1$ monotone subsequences and the second vector can be decomposed into at most $m_2$ subsequences, such that all the subsequences of the first vector are non-decreasing and all the subsequences of the second vector are non-increasing, or {\em vice versa}, then their $(\min,+)$ convolution can be computed in $\tilde{O}(m_1m_2n^{1.5})$ time. On the other hand, the case when both vectors are non-decreasing or both of them are non-increasing is as hard as the general case.
    Comment: 16 pages, accepted by COCOON 202
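    For reference, the definition itself yields a cubic-time baseline that the stated $O(m_1m_2n^{2.569})$ bound improves on for small $m_1, m_2$; a minimal sketch of the naive $(\min,+)$ product (the paper's decomposition-based speedups are not reproduced here):

```python
def min_plus_product(A, B):
    """Naive (min,+) matrix product: C[i][j] = min_k (A[i][k] + B[k][j]),
    computed directly from the definition in O(n^3) time."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 2], [3, 1]]
B = [[1, 5], [2, 0]]
print(min_plus_product(A, B))  # -> [[1, 2], [3, 1]]
```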

    Estimating the Longest Increasing Subsequence in Nearly Optimal Time

    Longest Increasing Subsequence (LIS) is a fundamental statistic of a sequence, and has been studied for decades. While the LIS of a sequence of length $n$ can be computed exactly in time $O(n\log n)$, the complexity of estimating the (length of the) LIS in sublinear time, especially when LIS $\ll n$, is still open. We show that for any integer $n$ and any $\lambda = o(1)$, there exists a (randomized) non-adaptive algorithm that, given a sequence of length $n$ with LIS $\ge \lambda n$, approximates the LIS up to a factor of $1/\lambda^{o(1)}$ in $n^{o(1)}/\lambda$ time. Our algorithm improves upon prior work substantially in terms of both approximation and run-time: (i) we provide the first sub-polynomial approximation for LIS in sub-linear time; and (ii) our run-time complexity essentially matches the trivial sample complexity lower bound of $\Omega(1/\lambda)$, which is required to obtain any non-trivial approximation of the LIS. As part of our solution, we develop two novel ideas which may be of independent interest: First, we define a new Genuine-LIS problem, where each sequence element may either be genuine or corrupted. In this model, the user receives unrestricted access to the actual sequence, but does not know a priori which elements are genuine. The goal is to estimate the LIS using genuine elements only, with the minimal number of "genuineness tests". The second idea, Precision Forest, enables accurate estimates for compositions of general functions from "coarse" (sub-)estimates. Precision Forest essentially generalizes classical precision sampling, which works only for summations. As a central tool, the Precision Forest is initially pre-processed on a set of samples, which thereafter is repeatedly reused by multiple sub-parts of the algorithm, improving their amortized complexity.
    Comment: Full version of FOCS 2022 paper
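    The exact $O(n\log n)$ baseline mentioned above is classical patience sorting; a minimal sketch of the strictly-increasing variant (the sublinear estimator itself is far more involved and is not reproduced here):

```python
from bisect import bisect_left

def lis_length(seq):
    """Exact LIS length in O(n log n): tails[k] holds the smallest
    possible tail value of an increasing subsequence of length k + 1."""
    tails = []
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)        # extend the longest subsequence
        else:
            tails[i] = x           # found a smaller tail for length i + 1
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # -> 4 (e.g. 1, 4, 5, 9)
```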

    A linear time approximation algorithm for permutation flow shop scheduling

    In the last 40 years, the permutation flow shop scheduling (PFS) problem with makespan minimization has been a central problem, known for its intractability, that has been well studied from both theoretical and practical aspects. The currently best performance ratio of a deterministic approximation algorithm for the PFS was recently presented by Nagarajan and Sviridenko, using a connection between the PFS and the longest increasing subsequence problem. In a different and independent way, this paper employs monotone subsequences in the approximation analysis techniques. To do this, an extension of the Erdős–Szekeres theorem to weighted monotone subsequences is presented. The result is a simple deterministic algorithm for the PFS with a similar approximation guarantee, but a much lower time complexity.
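    The unweighted Erdős–Szekeres theorem underlying the analysis can be checked on small inputs: in the Dilworth-style form, every length-$n$ sequence satisfies $\mathrm{lis} \cdot \mathrm{lds} \ge n$. A minimal sketch (not the paper's weighted extension):

```python
def longest_monotone(seq, before):
    """O(n^2) DP: length of the longest subsequence in which
    consecutive elements satisfy the ordering predicate `before`."""
    best = [1] * len(seq)
    for i in range(len(seq)):
        for j in range(i):
            if before(seq[j], seq[i]):
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

seq = [4, 8, 1, 5, 9, 2, 6, 3, 7]
lis = longest_monotone(seq, lambda a, b: a < b)   # longest increasing
lds = longest_monotone(seq, lambda a, b: a > b)   # longest decreasing
print(lis, lds, lis * lds >= len(seq))            # -> 4 3 True
```

The product bound follows from Seidenberg's argument: the pairs (LIS ending at $i$, LDS ending at $i$) are distinct across positions.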

    Partition into heapable sequences, heap tableaux and a multiset extension of Hammersley's process

    We investigate partitioning of integer sequences into heapable subsequences (previously defined and studied by Mitzenmacher et al.). We show that an extension of patience sorting computes the decomposition into a minimal number of heapable subsequences (MHS). We connect this parameter to an interactive particle system, a multiset extension of Hammersley's process, and investigate its expected value on a random permutation. In contrast with the (well studied) case of the longest increasing subsequence, we bring experimental evidence that the correct asymptotic scaling is $\frac{1+\sqrt{5}}{2}\cdot \ln(n)$. Finally we give a heap-based extension of Young tableaux, prove a hook inequality and an extension of the Robinson–Schensted correspondence.
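    The patience-like greedy from this literature can be sketched compactly: each element placed into a (binary min-)heap opens two child slots, and a new value claims the largest open slot value not exceeding it, starting a fresh heap when no slot fits. Whether this matches the paper's formulation exactly is an assumption; the sketch below is only the natural greedy:

```python
from bisect import bisect_right, insort

def min_heapable_subsequences(seq):
    """Greedy partition into heapable subsequences: `slots` is the
    sorted multiset of open child-slot values; a new value x claims
    the largest slot <= x (else opens a new heap) and then exposes
    two child slots of value x."""
    slots = []
    heaps = 0
    for x in seq:
        i = bisect_right(slots, x)
        if i:
            slots.pop(i - 1)   # claim the largest slot value <= x
        else:
            heaps += 1         # no slot fits: x roots a new heap
        insort(slots, x)       # x's two child slots
        insort(slots, x)
    return heaps

print(min_heapable_subsequences([3, 1, 2]))     # -> 2 ([3] and [1, 2])
print(min_heapable_subsequences([1, 3, 2, 4]))  # -> 1
```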

    Linear time ordering of bins using a conveyor system

    A local food wholesaler company uses an automated commissioning system, which brings the bins containing the appropriate product to the commissioning counter, where the worker picks the needed amounts into 12 bins corresponding to the same number of orders. To minimize the number of bins to pick from, picking is done for several different spreading tours at once, so the order in which the bins containing the picked products leave the commissioning counter can be considered random in this sense. Recently, the number of bins containing the picked orders grew beyond the available storage space, and it was necessary to find a new way of storing the bins and ordering them into spreading tours. We developed a conveyor system which (after a preprocessing step) can order the bins in linear space and time.
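    The paper's conveyor mechanism is not detailed in this abstract, but ordering randomly arriving bins by spreading-tour index in linear time is essentially a stable bucket pass; a minimal sketch with hypothetical bin records (the `(bin_id, tour)` shape is an illustration, not the paper's data model):

```python
def order_bins(bins, num_tours):
    """Stable bucket pass: group bins by spreading-tour index in
    O(n + k) time and space for n bins and k tours, preserving
    arrival order within each tour."""
    buckets = [[] for _ in range(num_tours)]
    for bin_id, tour in bins:          # bins arrive in random order
        buckets[tour].append(bin_id)
    return [b for bucket in buckets for b in bucket]

print(order_bins([("b1", 2), ("b2", 0), ("b3", 2), ("b4", 1)], 3))
# -> ['b2', 'b4', 'b1', 'b3']
```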