Improved Bounds for 3SUM, k-SUM, and Linear Degeneracy
Given a set of real numbers, the 3SUM problem is to decide whether there
are three of them that sum to zero. Until a recent breakthrough by Grønlund
and Pettie [FOCS'14], a simple Θ(n^2)-time deterministic algorithm for
this problem was conjectured to be optimal. Over the years many algorithmic
problems have been shown to be reducible from the 3SUM problem or its variants,
including the more generalized forms of the problem, such as k-SUM and
k-variate linear degeneracy testing (k-LDT). The conjectured hardness of
these problems has become an extremely popular basis for conditional lower
bounds for numerous algorithmic problems in P.
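The conjectured-optimal quadratic algorithm mentioned above can be sketched as follows. This is a minimal illustration of the classical sort-and-scan approach, not the decision-tree constructions studied in the paper:

```python
def three_sum(nums):
    """Decide whether three (distinct-index) elements of nums sum to zero.

    Classical O(n^2) approach: sort the input, then for each fixed
    element scan the remaining suffix with two pointers.
    """
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1  # total too small: advance the left pointer
            else:
                hi -= 1  # total too large: retreat the right pointer
    return False
```

The two-pointer scan works because the array is sorted: once a candidate sum overshoots zero, only moving the right pointer inward can reduce it.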
In this paper, we show that the randomized 4-linear decision tree
complexity of 3SUM is O(n^{3/2}), and that the randomized (2k-2)-linear
decision tree complexity of k-SUM and k-LDT is O(n^{k/2}), for any odd
k >= 3. These bounds improve (albeit randomized) the corresponding
O(n^{3/2} sqrt(log n)) and O(n^{k/2} sqrt(log n)) decision tree bounds
obtained by Grønlund and Pettie. Our technique includes a specialized
randomized variant of the fractional cascading data structure. Additionally, we
give another deterministic algorithm for 3SUM that runs in
O(n^2 log log n / log n) time. The latter bound matches a recent independent bound by Freund
[Algorithmica 2017], but our algorithm is somewhat simpler, due to a better use
of the word-RAM model.
Space Efficient Algorithms for Breadth-Depth Search
Continuing the recent trend, in this article we design several
space-efficient algorithms for two well-known graph search methods. Both these
search methods share the same name {\it breadth-depth search} (henceforth {\sf
BDS}), although they work entirely in different fashion. The classical
implementation for these graph search methods takes time and bits of space in the standard word RAM model (with word size being
bits), where and denotes the number of edges and
vertices of the input graph respectively. Our goal here is to beat the space
bound of the classical implementations, and design space
algorithms for these search methods by paying little to no penalty in the
running time. Note that our space bounds (i.e., with bits of
space) do not even allow us to explicitly store the required information to
implement the classical algorithms, yet our algorithms visits and reports all
the vertices of the input graph in correct order.Comment: 12 pages, This work will appear in FCT 201
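BDS comes in more than one variant; the following is a plain-memory sketch of one common version, in which all unvisited neighbors of the current vertex are pushed at once (breadth-like expansion) but the most recently pushed vertex is explored next (depth-like order). The space-efficient algorithms in the paper avoid the Θ(n lg n)-bit stack used here; this sketch only illustrates the visiting order:

```python
def breadth_depth_search(adj, start):
    """One common variant of breadth-depth search (BDS): on visiting a
    vertex, push *all* of its unvisited neighbors onto a stack at once,
    then continue from the most recently pushed vertex.

    adj: dict mapping each vertex to a list of neighbors.
    Returns the order in which vertices are visited.
    """
    visited = {start}
    stack = [start]
    order = []
    while stack:
        v = stack.pop()
        order.append(v)
        for w in adj[v]:
            if w not in visited:
                visited.add(w)   # mark on push, so each vertex enters once
                stack.append(w)
    return order
```

Note the contrast with DFS: here the neighbors of v are all enqueued before any of them is explored, so a vertex pushed early may be visited long after its siblings.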
Data Structures Meet Cryptography: 3SUM with Preprocessing
This paper shows several connections between data structure problems and
cryptography against preprocessing attacks. Our results span data structure
upper bounds, cryptographic applications, and data structure lower bounds, as
summarized next.
First, we apply Fiat–Naor inversion, a technique with cryptographic origins,
to obtain a data structure upper bound. In particular, our technique yields a
suite of algorithms with space S and (online) time T for a preprocessing
version of the N-input 3SUM problem where S^3 · T = Õ(N^6).
This disproves a strong conjecture (Goldstein et al., WADS 2017) that there is
no data structure that solves this problem for S = N^{2-δ} and T = N^{1-δ} for any constant δ > 0.
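To make the space/time tradeoff concrete, here is the trivial extreme point of the curve, phrased in the common "3SUM-Indexing" style (preprocess the numbers; a query asks whether a target is a sum of two of them). This corner uses ~N^2 space for O(1) query time; the Fiat–Naor-based algorithms in the paper trade far less space against more online time:

```python
class ThreeSumIndexing:
    """Trivial corner of the 3SUM-with-preprocessing tradeoff:
    space ~N^2 words, constant online time per query.
    (Illustrative only; not the paper's Fiat--Naor construction.)"""

    def __init__(self, nums):
        # Preprocessing phase: store every pairwise sum.
        self.pair_sums = set()
        for i, a in enumerate(nums):
            for b in nums[i + 1:]:
                self.pair_sums.add(a + b)

    def query(self, z):
        # Online phase: one hash-set lookup.
        return z in self.pair_sums
```

The conjecture cited above asserted, roughly, that no data structure could beat both N^2 space and linear query time simultaneously; the paper's upper bound shows the full tradeoff curve S^3 · T = Õ(N^6) is achievable.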
Secondly, we show an equivalence between lower bounds for a broad class of
(static) data structure problems and one-way functions in the random oracle
model that resist a very strong form of preprocessing attack. Concretely, given
a random function F: [N] -> [N] (accessed as an oracle), we show how to
compile it into a function G^F: [N^2] -> [N^2] which resists S-bit
preprocessing attacks that run in query time T where S·T = N^{2-ε}
(assuming a corresponding data structure lower bound
on 3SUM). In contrast, a classical result of Hellman tells us that F itself
can be more easily inverted, say with N^{2/3}-bit preprocessing in N^{2/3}
time. We also show that much stronger lower bounds follow from the hardness of
kSUM. Our results can be equivalently interpreted as security against
adversaries that are very non-uniform, or have large auxiliary input, or as
security in the face of a powerfully backdoored random oracle.
Thirdly, we give non-adaptive lower bounds for 3SUM and a range of geometric
problems which match the best known lower bounds for static data structure
problems.
SETH-Based Lower Bounds for Subset Sum and Bicriteria Path
Subset-Sum and k-SAT are two of the most extensively studied problems in computer science, and conjectures about their hardness are among the cornerstones of fine-grained complexity. One of the most intriguing open problems in this area is to base the hardness of one of these problems on the other.
Our main result is a tight reduction from k-SAT to Subset-Sum on dense instances, proving that Bellman's 1962 pseudo-polynomial O(T·n)-time algorithm for Subset-Sum on n numbers and target T cannot be improved to time T^{1-ε} · 2^{o(n)} for any ε > 0, unless the Strong Exponential Time Hypothesis (SETH) fails. This is one of the strongest known connections between any two of the core problems of fine-grained complexity.
As a corollary, we prove a "Direct-OR" theorem for Subset-Sum under SETH, offering a new tool for proving conditional lower bounds: It is now possible to assume that deciding whether one out of N given instances of Subset-Sum is a YES instance requires time (N·T)^{1-o(1)}.
As an application of this corollary, we prove a tight SETH-based lower bound for the classical Bicriteria s,t-Path problem, which is extensively studied in Operations Research. We separate its complexity from that of Subset-Sum: On graphs with m edges and edge lengths bounded by L, we show that the O(L·m) pseudo-polynomial time algorithm by Joksch from 1966 cannot be improved to Õ(L+m), in contrast to a recent improvement for Subset-Sum (Bringmann, SODA 2017).
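Bellman's pseudo-polynomial algorithm referenced above is the standard dynamic program over achievable sums; a minimal sketch:

```python
def subset_sum(nums, target):
    """Bellman's pseudo-polynomial dynamic program for Subset-Sum,
    running in O(n * target) time and O(target) space.

    reachable[s] is True iff some subset of the numbers processed so far
    sums to exactly s.  Iterating s downward lets each number be used at
    most once.
    """
    reachable = [False] * (target + 1)
    reachable[0] = True  # the empty subset sums to 0
    for x in nums:
        for s in range(target, x - 1, -1):
            if reachable[s - x]:
                reachable[s] = True
    return reachable[target]
```

The paper's lower bound says exactly this T·n behavior cannot be improved to T^{1-ε} · 2^{o(n)} under SETH.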
Deterministic 3SUM-Hardness
As one of the three main pillars of fine-grained complexity theory, the 3SUM
problem explains the hardness of many diverse polynomial-time problems via
fine-grained reductions. Many of these reductions are either directly based on
or heavily inspired by Pătrașcu's framework involving additive hashing
and are thus randomized. Some selected reductions were derandomized in previous
work [Chan, He; SOSA'20], but the current techniques are limited and a major
fraction of the reductions remains randomized.
In this work we gather a toolkit aimed to derandomize reductions based on
additive hashing. Using this toolkit, we manage to derandomize almost all known
3SUM-hardness reductions. As technical highlights we derandomize the hardness
reductions to (offline) Set Disjointness, (offline) Set Intersection and
Triangle Listing -- these questions were explicitly left open in previous work
[Kopelowitz, Pettie, Porat; SODA'16]. The few exceptions to our work fall into
a special category of recent reductions based on structure-versus-randomness
dichotomies.
We expect that our toolkit can be readily applied to derandomize future
reductions as well. As a conceptual innovation, our work thereby promotes the
theory of deterministic 3SUM-hardness.
As our second contribution, we prove that there is a deterministic universe
reduction for 3SUM. Specifically, using additive hashing it is a standard trick
to assume that the numbers in 3SUM have size at most O(n^3). We prove that this
assumption is similarly valid for deterministic algorithms.
Comment: To appear at ITCS 202
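The randomized universe reduction that the paper derandomizes can be sketched as follows: map each input to its residue modulo a random value p of size Θ(n^3), solve over the small universe, and verify any candidate triple exactly. A true zero-sum triple survives hashing since a + b + c = 0 implies (a + b + c) ≡ 0 (mod p). This sketch uses a brute-force candidate search purely for illustration:

```python
import itertools
import random

def three_sum_via_hashing(nums):
    """Sketch of the randomized universe reduction for 3SUM (the paper's
    contribution is making this deterministic).  Residues live in a
    universe of size Theta(n^3); candidate triples that sum to 0 mod p
    are verified exactly over the integers, so the answer is always
    correct."""
    n = max(len(nums), 2)
    p = random.randrange(n ** 3, 2 * n ** 3)  # random modulus of size Θ(n^3)
    res = [x % p for x in nums]
    for i, j, k in itertools.combinations(range(len(nums)), 3):
        if (res[i] + res[j] + res[k]) % p == 0:   # candidate in small universe
            if nums[i] + nums[j] + nums[k] == 0:  # exact verification
                return True
    return False
```

The point of the random modulus is that a non-triple rarely hashes to a false candidate, keeping the verification overhead small in expectation.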
Opacity and Structural Resilience in Cyberphysical Systems
Cyberphysical systems (CPSs) integrate communication, control, and computation with physical processes. Examples include power systems, water distribution networks, and on a smaller scale, medical devices and home control systems. Since these systems are often controlled over a network, the sharing of information among systems and across geographies makes them vulnerable to attacks carried out (possibly remotely) by malicious adversaries. An attack could be carried out on the physical system, on the computer(s) controlling the system, or on the communication links between the system and the computer. Thus, significant material damage can be caused by an attacker who is able to gain access to the system, and such attacks will often have the consequence of causing widespread disruption to everyday life. Therefore, ensuring the safety of information critical to nominal operation of the system is of utmost importance. This dissertation addresses two problems in the broad area of the Control and Security of Cyberphysical Systems.
First, we present a framework for opacity in CPSs modeled as a discrete-time linear time-invariant (DT-LTI) system. The current state-of-the-art in this field studies opacity for discrete event systems (DESs) described by regular languages. However, the states in a DES are discrete; in many practical systems, it is common for states (and other system variables) to take continuous values. We define a notion of opacity called k-initial state opacity (k-ISO) for such systems. A set of secret states is said to be k-ISO with respect to a set of nonsecret states if the output at time k of every trajectory starting from the set of secret states is indistinguishable from the output at time k of some trajectory starting from the set of nonsecret states. Necessary and sufficient conditions to establish k-ISO are presented in terms of sets of reachable states. Opacity of a given DT-LTI system is shown to be equivalent to the output controllability of a system obeying the same dynamics, but with different initial conditions.
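The k-ISO definition can be made concrete with a toy brute-force check for finitely many candidate initial states of a small autonomous system x_{t+1} = A x_t, y_t = C x_t. The dissertation characterizes k-ISO via reachable sets for continuous state spaces; this finite sketch only illustrates the indistinguishability condition itself:

```python
def output_at_k(A, C, x0, k):
    """Output y_k = C A^k x0 of the autonomous 2-state system
    x_{t+1} = A x_t, y_t = C x_t, for 2x2 A and 1x2 C."""
    x = list(x0)
    for _ in range(k):
        x = [A[0][0] * x[0] + A[0][1] * x[1],
             A[1][0] * x[0] + A[1][1] * x[1]]
    return C[0] * x[0] + C[1] * x[1]

def is_k_iso(A, C, secret, nonsecret, k):
    """Toy k-ISO check over finite candidate sets of initial states:
    every secret initial state must yield an output at time k that some
    nonsecret initial state also yields."""
    nonsecret_outputs = {output_at_k(A, C, x0, k) for x0 in nonsecret}
    return all(output_at_k(A, C, x0, k) in nonsecret_outputs for x0 in secret)
```

With A swapping the two state components and C reading only the first component, two initial states are indistinguishable at k = 1 exactly when their second components agree.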
We then study the case where there is more than one adversarial observer, and define several notions of decentralized opacity. These notions of decentralized opacity will depend on whether there is a centralized coordinator or not, and the presence or absence of collusion among the adversaries. We establish conditions for decentralized opacity in terms of sets of reachable states. In the case of colluding adversaries, we present a condition for non-opacity in terms of the structure of the communication graph.
We extend this work to formulate notions of opacity for discrete-time switched linear systems. A switched system consists of a finite number of subsystems and a rule that orchestrates switching among them. We distinguish between the cases when the secret is specified as a set of initial modes, a set of initial states, or a combination of the two. The novelty of our schemes is in the fact that we place restrictions on: i) the allowed transitions between modes (specified by a directed graph), ii) the number of allowed changes of modes (specified by lengths of paths in the directed graph), and iii) the dwell times in each mode. Each notion of opacity is characterized in terms of allowed switching sequences and sets of reachable states and/or modes. Finally, we present algorithmic procedures to verify these notions, and provide bounds on their computational complexity.
Second, we study the resilience of CPSs to denial-of-service (DoS) and integrity attacks. The CPS is modeled as a linear structured system, and its resilience to an attack is interpreted in a graph-theoretic framework. The structural systems approach presumes knowledge of only the positions of zero and nonzero entries in the system matrices to infer system properties. This approach is attractive due to the fact that these properties will hold for almost every admissible numerical realization of the system. The structural resilience of the system is characterized in terms of unmatched vertices in maximum matchings of the bipartite graph and connected components of directed graph representations of the system under attack. Further, we establish a condition based on the zero structure of an input matrix that will ensure that the system is structurally resilient to a state feedback integrity attack if it is also resilient to a DoS attack.
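Since the structural-resilience characterization above hinges on unmatched vertices in maximum matchings of a bipartite graph, a small matching routine makes the object concrete. This is a generic augmenting-path (Kuhn-style) maximum matching, not code from the dissertation; the bipartite graph it takes as input would, in that setting, be derived from the zero/nonzero pattern of the system matrices under attack:

```python
def maximum_matching(left, adj):
    """Kuhn's augmenting-path maximum matching on a bipartite graph.

    left: list of left-side vertices.
    adj:  dict mapping each left vertex to its right-side neighbors.
    Returns (match_right, unmatched_left), where match_right maps each
    matched right vertex to its partner and unmatched_left lists the
    left vertices no maximum matching can saturate from this run.
    """
    match_right = {}

    def try_augment(u, seen):
        # Search for an augmenting path from left vertex u.
        for v in adj.get(u, []):
            if v in seen:
                continue
            seen.add(v)
            if v not in match_right or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    matched_left = set()
    for u in left:
        if try_augment(u, set()):
            matched_left.add(u)
    unmatched_left = [u for u in left if u not in matched_left]
    return match_right, unmatched_left
```

In the structured-systems framework, nonempty unmatched vertex sets of this kind signal a loss of generic rank, which is what the resilience conditions in the text detect.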
Finally, we formulate an extension to the case of switched structured systems, and derive conditions for such systems to be structurally resilient to a DoS attack.