466 research outputs found
Relating Graph Thickness to Planar Layers and Bend Complexity
The thickness of a graph with vertices is the minimum number of
planar subgraphs of whose union is . A polyline drawing of in
is a drawing of , where each vertex is mapped to a
point and each edge is mapped to a polygonal chain. Bend and layer complexities
are two important aesthetics of such a drawing. The bend complexity of
is the maximum number of bends per edge in , and the layer complexity
of is the minimum integer such that the set of polygonal chains in
can be partitioned into disjoint sets, where each set corresponds
to a planar polyline drawing. Let be a graph of thickness . By
F\'{a}ry's theorem, if , then can be drawn on a single layer with bend
complexity . A few extensions to higher thickness are known, e.g., if
(resp., ), then can be drawn on layers with bend complexity 2
(resp., ). However, allowing a higher number of layers may reduce the
bend complexity, e.g., complete graphs require layers to be drawn
using 0 bends per edge.
In this paper we present an elegant extension of F\'{a}ry's theorem to draw
graphs of thickness . We first prove that thickness- graphs can be
drawn on layers with bends per edge. We then develop another
technique to draw thickness- graphs on layers with bend complexity,
i.e., , where . Previously, the bend complexity was not known to be sublinear for
. Finally, we show that graphs with linear arboricity can be drawn on
layers with bend complexity .Comment: A preliminary version appeared at the 43rd International Colloquium
on Automata, Languages and Programming (ICALP 2016
LRM-Trees: Compressed Indices, Adaptive Sorting, and Compressed Permutations
LRM-Trees are an elegant way to partition a sequence of values into sorted
consecutive blocks, and to express the relative position of the first element
of each block within a previous block. They were used to encode ordinal trees
and to index integer arrays in order to support range minimum queries on them.
We describe how they yield many other convenient results in a variety of areas,
from data structures to algorithms: some compressed succinct indices for range
minimum queries; a new adaptive sorting algorithm; and a compressed succinct
data structure for permutations supporting direct and indirect application in
time all the shortest as the permutation is compressible.Comment: 13 pages, 1 figur
Matrix and Vector Products for Inputs Decomposable into Few Monotone Subsequences
We study the time complexity of computing the matrix
product of two integer matrices in terms of and the
number of monotone subsequences the rows of the first matrix and the
columns of the second matrix can be decomposed into. In particular,
we show that if each row of the first matrix can be decomposed into
at most monotone subsequences and each column of the second
matrix can be decomposed into at most monotone subsequences
such that all the subsequences are non-decreasing or all of them are
non-increasing then the product of the matrices can be
computed in time. On the other hand, we observe
that if all the rows of the first matrix are non-decreasing and all
columns of the second matrix are non-increasing or {\em vice versa}
then this case is as hard as the general one.
Similarly, we also study the time complexity of computing the
convolution of two -dimensional integer vectors in
terms of and the number of monotone subsequences the two vectors
can be decomposed into. We show that if the first vector can be
decomposed into at most monotone subsequences and the second
vector can be decomposed into at most subsequences such that
all the subsequences of the first vector are non-decreasing and all
the subsequences of the second vector are non-increasing or {\em
vice versa} then their convolution can be computed in
time. On the other, the case when both
vectors are non-decreasing or both of them are non-increasing is as
hard as the general case.Comment: 16 pages, accepted by COCOON 202
Estimating the Longest Increasing Subsequence in Nearly Optimal Time
Longest Increasing Subsequence (LIS) is a fundamental statistic of a
sequence, and has been studied for decades. While the LIS of a sequence of
length can be computed exactly in time , the complexity of
estimating the (length of the) LIS in sublinear time, especially when LIS , is still open.
We show that for any integer and any , there exists a
(randomized) non-adaptive algorithm that, given a sequence of length with
LIS , approximates the LIS up to a factor of
in time.
Our algorithm improves upon prior work substantially in terms of both
approximation and run-time: (i) we provide the first sub-polynomial
approximation for LIS in sub-linear time; and (ii) our run-time complexity
essentially matches the trivial sample complexity lower bound of
, which is required to obtain any non-trivial approximation
of the LIS.
As part of our solution, we develop two novel ideas which may be of
independent interest: First, we define a new Genuine-LIS problem, where each
sequence element may either be genuine or corrupted. In this model, the user
receives unrestricted access to actual sequence, but does not know apriori
which elements are genuine. The goal is to estimate the LIS using genuine
elements only, with the minimal number of "genuiness tests". The second idea,
Precision Forest, enables accurate estimations for composition of general
functions from "coarse" (sub-)estimates. Precision Forest essentially
generalizes classical precision sampling, which works only for summations. As a
central tool, the Precision Forest is initially pre-processed on a set of
samples, which thereafter is repeatedly reused by multiple sub-parts of the
algorithm, improving their amortized complexity.Comment: Full version of FOCS 2022 pape
A linear time approximation algorithm for permutation flow shop scheduling
AbstractIn the last 40 years, the permutation flow shop scheduling (PFS) problem with makespan minimization has been a central problem, known for its intractability, that has been well studied from both theoretical and practical aspects. The currently best performance ratio of a deterministic approximation algorithm for the PFS was recently presented by Nagarajan and Sviridenko, using a connection between the PFS and the longest increasing subsequence problem. In a different and independent way, this paper employs monotone subsequences in the approximation analysis techniques. To do this, an extension of the Erdös–Szekeres theorem to weighted monotone subsequences is presented. The result is a simple deterministic algorithm for the PFS with a similar approximation guarantee, but a much lower time complexity
Partition into heapable sequences, heap tableaux and a multiset extension of Hammersley's process
We investigate partitioning of integer sequences into heapable subsequences
(previously defined and established by Mitzenmacher et al). We show that an
extension of patience sorting computes the decomposition into a minimal number
of heapable subsequences (MHS). We connect this parameter to an interactive
particle system, a multiset extension of Hammersley's process, and investigate
its expected value on a random permutation. In contrast with the (well studied)
case of the longest increasing subsequence, we bring experimental evidence that
the correct asymptotic scaling is . Finally
we give a heap-based extension of Young tableaux, prove a hook inequality and
an extension of the Robinson-Schensted correspondence
Linear time ordering of bins using a conveyor system
A local food wholesaler company is using an automated commissioning system, which brings the bins containing the appropriate product to the commissioning counter, where the worker picks the needed amounts to 12 bins corresponding to the same number of orders. To minimize the number of bins to pick from, they pick for several different spreading tours, so the order of bins containing the picked products coming from the commissioning counter can be considered random in this sense. Recently, the number of bins containing the picked orders increased over the available storage space, and it was necessary to find a new way of storing and ordering the bins to spreading tours. We developed a conveyor system which (after a preprocessing step) can order the bins in linear space and time
- …