316 research outputs found
Solving k-SUM using few linear queries
The k-SUM problem is given n input real numbers to determine whether any k
of them sum to zero. The problem is of tremendous importance in the
emerging field of complexity theory within P, and it is in particular open
whether it admits an algorithm of complexity O(n^c) with c < d, where d is
the ceiling of k/2. Inspired by an algorithm due to Meiser (1993), we show
that there exist linear decision trees and algebraic computation trees of
depth O(n^3 log^2 n) solving k-SUM. Furthermore, we show that there exists
a randomized algorithm that runs in ~O(n^{d+8}) time, and performs
O(n^3 log^2 n) linear queries on the input. Thus, we show that it is
possible to have an algorithm with a runtime almost identical (up to the
+8) to the best known algorithm but for the first time also with the
number of queries on the input a polynomial that is independent of k. The
O(n^3 log^2 n) bound on the number of linear queries is also a tighter bound
than any known algorithm solving k-SUM, even allowing unlimited total time
outside of the queries. By simultaneously achieving few queries to the input
without significantly sacrificing runtime vis-\`{a}-vis known algorithms, we
deepen the understanding of this canonical problem which is a cornerstone of
complexity-within-P.
We also consider a range of tradeoffs between the number of terms involved in
the queries and the depth of the decision tree. In particular, we prove that
there exist o(n)-linear decision trees of depth ~O(n^3) for the k-SUM problem.
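For intuition about the ~n^{ceil(k/2)} runtime that the abstract's query bound is compared against, the standard meet-in-the-middle approach enumerates all sums of k/2-element subsets and looks for two disjoint halves that cancel. A minimal sketch for even k (function name hypothetical, not the paper's algorithm):

```python
from itertools import combinations

def k_sum_meet_in_middle(nums, k=4):
    """Decide whether any k of nums sum to zero (k even).

    Builds all sums of (k/2)-element subsets, then looks for a pair of
    index-disjoint half-subsets whose sums cancel: ~O(n^(k/2)) time.
    """
    half = k // 2
    # Map each half-sum to the index sets achieving it.
    sums = {}
    for idx in combinations(range(len(nums)), half):
        s = sum(nums[i] for i in idx)
        sums.setdefault(s, []).append(set(idx))
    for s, index_sets in sums.items():
        for a in index_sets:
            for b in sums.get(-s, []):
                if a.isdisjoint(b):  # the k chosen elements must be distinct
                    return True
    return False
```

Each comparison here is a linear query on the input; the point of the paper is that far fewer (polynomially many, independent of k) queries suffice.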
Detecting all regular polygons in a point set
In this paper, we analyze the time complexity of finding regular polygons in
a set of n points. We combine two different approaches to find regular
polygons, depending on their number of edges. Our result depends on the
parameter alpha, which has been used to bound the maximum number of isosceles
triangles that can be formed by n points. This bound has been expressed as
O(n^{2+2alpha+epsilon}), and the current best value for alpha is ~0.068.
Our algorithm finds polygons with O(n^alpha) edges by sweeping a line through
the set of points, while larger polygons are found by random sampling. We can
find all regular polygons with high probability in O(n^{2+alpha+epsilon})
expected time for every positive epsilon. This compares well to the
O(n^{2+2alpha+epsilon}) deterministic algorithm of Brass. Comment: 11 pages, 4 figures
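As a baseline for the simplest regular polygon, all squares (regular 4-gons, not necessarily axis-aligned) in a point set can be enumerated with O(n^2) candidate checks: each point pair, treated as a side, determines the other two corners by a 90-degree rotation. A hedged sketch, not the paper's sweep-plus-sampling algorithm (function name hypothetical):

```python
from itertools import combinations

def find_squares(points):
    """Enumerate all squares whose four corners lie in `points`.

    For each pair (p, q) treated as one side, the remaining two corners
    are obtained by rotating the segment by 90 degrees in both directions;
    membership is an O(1) set lookup, so the loop is O(n^2) overall.
    """
    pts = set(points)
    squares = set()
    for (px, py), (qx, qy) in combinations(points, 2):
        dx, dy = qx - px, qy - py
        for sx, sy in ((-dy, dx), (dy, -dx)):  # both rotation directions
            r = (px + sx, py + sy)
            s = (qx + sx, qy + sy)
            if r in pts and s in pts:
                # frozenset dedupes the same square found via different sides
                squares.add(frozenset([(px, py), (qx, qy), r, s]))
    return squares
```

For general regular m-gons the analogous pairwise check gives the sweep-based part of the tradeoff the abstract describes.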
Modular Subset Sum, Dynamic Strings, and Zero-Sum Sets
The modular subset sum problem consists of deciding, given a modulus m, a
multiset S of n integers in {0, ..., m-1}, and a target integer t, whether
there exists a subset of S with elements summing to t (mod m), and to
report such a set if it exists. We give a simple O(m log m)-time with high
probability (w.h.p.) algorithm for the modular subset sum problem. This builds
on and improves on a previous w.h.p. algorithm from Axiotis,
Backurs, Jin, Tzamos, and Wu (SODA 19). Our method utilizes the ADT of the
dynamic strings structure of Gawrychowski et al. (SODA~18). However, as this
structure is rather complicated we present a much simpler alternative which we
call the Data Dependent Tree. As an application, we consider the computational
version of a fundamental theorem in zero-sum Ramsey theory. The
Erd\H{o}s-Ginzburg-Ziv Theorem states that a multiset of 2n-1 integers
always contains a subset of cardinality exactly n whose values sum to a
multiple of n. We give an algorithm for finding such a subset in
O(n log n) time w.h.p. which improves on an algorithm due to Del Lungo,
Marini, and Mori (Disc. Math. 09). Comment: To appear at the SIAM Symposium
on Simplicity in Algorithms (SOSA 21)
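For comparison with the O(m log m) bound, the textbook dynamic program for modular subset sum runs in O(nm) time. A minimal sketch with witness reconstruction (this is the classical baseline, not the paper's dynamic-strings method):

```python
def modular_subset_sum(items, m, t):
    """Return a subset of `items` summing to t mod m, or None.

    Classic O(n*m) dynamic program over residues: `parent[r]` records
    how residue r was first reached, so a witness can be reconstructed.
    """
    parent = {0: None}  # residue -> (previous residue, item used)
    for x in items:
        # Snapshot the keys so each item is used at most once.
        for r in list(parent):
            nr = (r + x) % m
            if nr not in parent:
                parent[nr] = (r, x)
    if t % m not in parent:
        return None
    subset, r = [], t % m
    while parent[r] is not None:
        prev, x = parent[r]
        subset.append(x)
        r = prev
    return subset
```

Because each item is processed once and only newly reached residues are recorded, the reconstructed witness uses pairwise distinct items, as the multiset semantics requires.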
Worst-Case Efficient Dynamic Geometric Independent Set
We consider the problem of maintaining an approximate maximum independent set of geometric objects under insertions and deletions. We present a data structure that maintains a constant-factor approximate maximum independent set for broad classes of fat objects in d dimensions, where d is assumed to be a constant, in sublinear worst-case update time. This gives the first results for dynamic independent set in a wide variety of geometric settings, such as disks, fat polygons, and their high-dimensional equivalents. For axis-aligned squares and hypercubes, our result improves upon all (recently announced) previous works. We obtain, in particular, a dynamic (4+epsilon)-approximation for squares, with polylogarithmic worst-case update time.
Our result is obtained via a two-level approach. First, we develop a dynamic data structure which stores all objects and provides an approximate independent set when queried, with output-sensitive running time. We show that via standard methods such a structure can be used to obtain a dynamic algorithm with amortized update time bounds. Then, to obtain worst-case update time algorithms, we develop a generic deamortization scheme that with each insertion/deletion keeps (i) the update time bounded and (ii) the number of changes in the independent set constant. We show that such a scheme is applicable to fat objects by showing an appropriate generalization of a separator theorem.
Interestingly, we show that our deamortization scheme is also necessary in order to obtain worst-case update bounds: If for a class of objects our scheme is not applicable, then no constant-factor approximation with sublinear worst-case update time is possible. We show that such a lower bound applies even for seemingly simple classes of geometric objects including axis-aligned rectangles in the plane.
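For context on the constant-factor target, even in the static setting fat objects admit a simple constant-factor approximation: pick objects smallest-first, keeping each one that is disjoint from everything already chosen. A hedged sketch for axis-aligned squares (a static baseline, not the paper's dynamic data structure; function name hypothetical):

```python
def greedy_independent_squares(squares):
    """Greedy constant-factor approximate maximum independent set.

    `squares` is a list of (x, y, side) axis-aligned squares. A square of
    side s can be intersected by at most 4 pairwise-disjoint squares of
    side >= s, so picking smallest-first gives an O(1) approximation for
    this fat-object class. O(n^2) time as written.
    """
    def intersects(a, b):
        ax, ay, asz = a
        bx, by, bsz = b
        return ax < bx + bsz and bx < ax + asz and ay < by + bsz and by < ay + asz

    chosen = []
    for sq in sorted(squares, key=lambda s: s[2]):  # smallest side first
        if all(not intersects(sq, c) for c in chosen):
            chosen.append(sq)
    return chosen
```

The difficulty the abstract addresses is making such a guarantee survive insertions and deletions with sublinear worst-case update time and few changes to the reported set.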
Approximability of (Simultaneous) Class Cover for Boxes
Bereg et al. (2012) introduced the Boxes Class Cover problem, which has its
roots in classification and clustering applications: Given a set of n points in
the plane, each colored red or blue, find the smallest cardinality set of
axis-aligned boxes whose union covers the red points without covering any blue
point. In this paper we give an alternative proof of APX-hardness for this
problem, which also yields an explicit lower bound on its approximability. Our
proof also directly applies when restricted to sets of points in general
position and to the case where so-called half-strips are considered instead of
boxes, which is a new result.
We also introduce a symmetric variant of this problem, which we call
Simultaneous Boxes Class Cover and can be stated as follows: Given a set S of n
points in the plane, each colored red or blue, find the smallest cardinality
set of axis-aligned boxes which together cover S such that all boxes cover only
points of the same color and no box covering a red point intersects a box
covering a blue point. We show that this problem is also APX-hard and give a
polynomial-time constant-factor approximation algorithm.
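For contrast with these constant-factor and APX-hardness results, generic greedy set cover already yields an O(log n)-approximation for Boxes Class Cover, using the standard discretization to boxes spanned by red coordinates. A hedged sketch (function name hypothetical, not the paper's algorithm):

```python
from itertools import product

def greedy_boxes_class_cover(red, blue):
    """O(log n)-approximate Boxes Class Cover via greedy set cover.

    Candidate boxes are spanned by pairs of red x- and y-coordinates (any
    box avoiding the blue points can be shrunk to one of these without
    losing red points). Boxes containing a blue point are discarded; we
    then repeatedly pick the box covering the most uncovered red points.
    """
    xs = sorted({p[0] for p in red})
    ys = sorted({p[1] for p in red})

    def covered(box, pt):
        (x1, x2, y1, y2), (px, py) = box, pt
        return x1 <= px <= x2 and y1 <= py <= y2

    candidates = []
    for x1, x2 in product(xs, xs):
        for y1, y2 in product(ys, ys):
            if x1 <= x2 and y1 <= y2:
                box = (x1, x2, y1, y2)
                if not any(covered(box, b) for b in blue):
                    candidates.append(box)

    uncovered, cover = set(red), []
    while uncovered:
        best = max(candidates, key=lambda b: sum(covered(b, p) for p in uncovered))
        cover.append(best)
        uncovered -= {p for p in uncovered if covered(best, p)}
    return cover
```

The candidate set has size O(n^4), so this is polynomial but far from practical; the interest of the problem lies in beating the generic logarithmic factor, and in the hardness of doing much better.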