Alternative parameterizations of Metric Dimension
A set of vertices $W$ of a graph $G$ is called resolving if for any two
distinct $x, y \in V(G)$, there is $v \in W$ such that $\mathrm{dist}_G(v,x) \neq \mathrm{dist}_G(v,y)$, where $\mathrm{dist}_G(u,v)$ denotes the length of a shortest path
between $u$ and $v$ in the graph $G$. The metric dimension $\mathrm{md}(G)$ of $G$
is the minimum cardinality of a resolving set. The Metric Dimension problem,
i.e. deciding whether $\mathrm{md}(G) \le k$, is NP-complete even for interval
graphs (Foucaud et al., 2017). We study Metric Dimension (for arbitrary graphs)
from the lens of parameterized complexity. The problem parameterized by $k$ was
proved to be W[2]-hard by Hartung and Nichterlein (2013), and we study the
dual parameterization, i.e., the problem of whether $\mathrm{md}(G) \le n - k$,
where $n$ is the order of $G$. We prove that the dual parameterization admits
(a) a kernel with at most $3k^4$ vertices and (b) an algorithm of runtime $O^*(4^{k+o(k)})$.
Hartung and Nichterlein (2013) also observed that Metric
Dimension is fixed-parameter tractable when parameterized by the vertex cover
number of the input graph. We complement this observation by showing
that it does not admit a polynomial kernel even when parameterized by the
vertex cover number. Our reduction also gives evidence for the non-existence of
polynomial Turing kernels.
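To make the definition above concrete, here is a minimal Python sketch (illustrative only, not from the paper) that checks whether a candidate set is resolving by comparing distance vectors computed with breadth-first search:

```python
# Check whether a vertex set W resolves a graph, straight from the definition:
# W is resolving iff distinct vertices receive distinct distance vectors to W.
from collections import deque

def bfs_distances(adj, source):
    """Shortest-path distances from source in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_resolving(adj, W):
    dists = {w: bfs_distances(adj, w) for w in W}
    seen = set()
    for v in adj:
        vec = tuple(dists[w].get(v) for w in W)  # None marks "unreachable"
        if vec in seen:
            return False   # two vertices share a distance vector
        seen.add(vec)
    return True

# The path on 4 vertices has metric dimension 1: an endpoint resolves it.
P4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert is_resolving(P4, [0])      # distances 0,1,2,3 are pairwise distinct
assert not is_resolving(P4, [1])  # vertices 0 and 2 are both at distance 1
```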
Cryptography from Information Loss
Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is “lossy” reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into “useful” hardness, namely cryptography.

Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction $C$ is $t$-lossy if, for any distribution $X$ over its inputs, the mutual information $I(X; C(X)) \le t$. Our treatment generalizes a variety of seemingly related but distinct notions such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006).

We then proceed to show several consequences of lossy reductions:

1. We say that a language $L$ has an $f$-reduction to a language $L'$ for a Boolean function $f$ if there is a (randomized) polynomial-time algorithm $C$ that takes an $m$-tuple of strings $X = (x_1, \ldots, x_m)$, with each $x_i \in \{0,1\}^n$, and outputs a string $z$ such that with high probability, $L'(z) = f(L(x_1), L(x_2), \ldots, L(x_m))$. Suppose a language $L$ has an $f$-reduction $C$ to $L'$ that is $t$-lossy. Our first result is that one-way functions exist if $L$ is worst-case hard and one of the following conditions holds:
- $f$ is the OR function, $t \le m/100$, and $L'$ is the same as $L$;
- $f$ is the Majority function, and $t \le m/100$;
- $f$ is the OR function, $t \le O(m \log n)$, and the reduction has no error.
This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which result in auxiliary-input one-way functions.

2. Our second result is about the stronger notion of $t$-compressing $f$-reductions: reductions that only output $t$ bits. We show that if there is an average-case hard language $L$ that has a $t$-compressing Majority reduction to some language for $t = m/100$, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, FOCS 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses).

Along the way, we define a non-standard one-sided notion of average-case hardness, which is the notion of hardness used in the second result above, that may be of independent interest.
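As a toy numeric illustration of the definition (ours to this summary, not the paper's): for a deterministic map $C$, $I(X; C(X)) = H(C(X))$, so the information loss can be computed exactly on a small input distribution. The sketch below measures how lossy the map that outputs the OR of the instance bits is on uniform inputs; its output carries well under one bit, comfortably inside a $t \le m/100$ style regime once $m$ is large.

```python
# For deterministic C, I(X; C(X)) = H(C(X)) since H(C(X) | X) = 0.
# Here C(X) = OR of m uniform input bits, so the loss is nearly total.
from itertools import product
from math import log2

def entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

m = 8
out_dist = {}
for x in product([0, 1], repeat=m):   # X uniform over {0,1}^m
    z = int(any(x))                   # C(X) = OR(x_1, ..., x_m)
    out_dist[z] = out_dist.get(z, 0.0) + 2.0 ** -m

print(f"I(X; C(X)) = H(C(X)) = {entropy(out_dist):.4f} bits (m = {m})")
# -> roughly 0.037 bits: C reveals almost nothing about X.
```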
Dynamic planar embedding is in DynFO
A planar embedding is a drawing of a graph in the plane such that the edges do not intersect each other except at the vertices. Testing the planarity of a graph and computing an embedding (if one exists) can be done efficiently, both sequentially [John E. Hopcroft and Robert Endre Tarjan, 1974] and in parallel [Vijaya Ramachandran and John H. Reif, 1994], when the entire graph is presented as input.
In the dynamic setting, the input graph changes one edge at a time through insertions and deletions, and planarity testing/embedding has to be updated after every change. By storing auxiliary information we can improve the complexity of dynamic planarity testing/embedding over the obvious recomputation from scratch. In the sequential dynamic setting, there has been a series of works [David Eppstein et al., 1996; Giuseppe F. Italiano et al., 1993; Jacob Holm et al., 2018; Jacob Holm and Eva Rotenberg, 2020], culminating in the breakthrough amortized polylog(n)-time planarity testing algorithm of Holm and Rotenberg [Jacob Holm and Eva Rotenberg, 2020].
In this paper we study planar embedding through the lens of DynFO, a parallel dynamic complexity class introduced by Patnaik and Immerman [Sushant Patnaik and Neil Immerman, 1997] (see also [Guozhu Dong et al., 1995]). We show that it is possible to maintain, in DynFO, whether an edge can be inserted into a planar graph without causing non-planarity. We extend this to show how to maintain an embedding of a planar graph under both edge insertions and deletions, while rejecting edge insertions that violate planarity.
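For contrast, the "obvious recomputation from scratch" that dynamic algorithms improve on is easy to write down; the Python sketch below (assuming the networkx library, whose check_planarity is a static test) re-runs a full planarity check on every update, exactly the cost that DynFO-style maintenance avoids:

```python
# Naive baseline: recompute planarity/embedding from scratch per update,
# rejecting edge insertions that would make the graph non-planar.
from itertools import combinations
import networkx as nx

class NaivePlanarMaintainer:
    def __init__(self):
        self.G = nx.Graph()
        self.embedding = nx.check_planarity(self.G)[1]

    def try_insert(self, u, v):
        """Insert edge (u, v) iff the graph stays planar; report success."""
        self.G.add_edge(u, v)
        ok, emb = nx.check_planarity(self.G)
        if not ok:
            self.G.remove_edge(u, v)   # reject: would violate planarity
            return False
        self.embedding = emb           # embedding recomputed from scratch
        return True

    def delete(self, u, v):
        """Deletions preserve planarity; just refresh the embedding."""
        self.G.remove_edge(u, v)
        self.embedding = nx.check_planarity(self.G)[1]

# All 10 edges of K5: the first 9 succeed (K5 minus an edge is planar),
# the 10th is rejected.
M = NaivePlanarMaintainer()
print([M.try_insert(u, v) for u, v in combinations(range(5), 2)])
```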
Our main idea is to maintain embeddings of only the triconnected components, together with a special two-colouring of separating pairs that enables us to side-step the cascading flips that arise when the embedding of a biconnected planar graph changes, a major issue for sequential dynamic algorithms [Jacob Holm and Eva Rotenberg, 2020].
Towards Multiparty Computation Withstanding Coercion of All Parties
Incoercible multi-party computation (Canetti-Gennaro ’96) allows parties to engage in secure computation with the additional guarantee that the public transcript of the computation cannot be used by a coercive outsider to verify representations made by the parties regarding their inputs, outputs, and local random choices. That is, it is guaranteed that the only deductions regarding the truthfulness of such representations, made by an outsider who has witnessed the communication among the parties, are the ones that can be drawn just from the represented inputs and outputs alone.
To date, all incoercible secure computation protocols withstand coercion of only a fraction of the parties, or else assume that all parties use an execution environment that makes some crucial parts of their local states physically inaccessible even to themselves.
We consider, for the first time, the setting where all parties are coerced, and the coercer expects to see the entire history of the computation. We allow both protocol participants and external attackers to access a common reference string which is generated once and for all by an incorruptible trusted party. In this setting we construct:
- A general multi-party function evaluation protocol, for any number of parties, that withstands coercion of all parties, as long as all parties use the prescribed "faking algorithm" upon coercion. This holds even if the inputs and outputs represented by coerced parties are globally inconsistent with the evaluated function.
- A general two-party function evaluation protocol that withstands even the "mixed" case where only some of the coerced parties follow the prescribed faking algorithm. (For instance, the remaining parties might collude with the coercer and disclose their true local states.) This protocol is limited to functions where the input of at least one of the parties is taken from a small (poly-size) domain. It uses fully deniable encryption with public deniability for one of the parties; when instantiated using the fully deniable encryption of Canetti, Park, and Poburinnaya (Crypto '20), it takes 3 rounds of communication.
Both protocols operate in the common reference string model, and use fully bideniable encryption (Canetti, Park, and Poburinnaya, Crypto '20) and sub-exponential indistinguishability obfuscation. Finally, we show that protocols with a certain communication pattern cannot be incoercible, even in a weaker setting where only some parties are coerced.
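As a warm-up for what a "faking algorithm" buys, here is the classic textbook example (unrelated to the protocols constructed in this work): one-time-pad encryption is perfectly deniable, because a coerced sender can exhibit a fake key under which the public ciphertext decrypts to any claimed message, so the transcript verifies nothing.

```python
# One-time pad: for every claimed plaintext there is a key consistent
# with the observed ciphertext, so coercion cannot distinguish truth
# from a well-chosen lie.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

msg = b"attack"                # true input
key = os.urandom(len(msg))     # true local randomness
ct = xor(msg, key)             # public transcript seen by the coercer

fake_msg = b"picnic"           # representation made under coercion
fake_key = xor(ct, fake_msg)   # faking algorithm: key matching the claim

assert xor(ct, fake_key) == fake_msg  # the coercer's check passes on the lie
assert xor(ct, key) == msg            # ...exactly as it does on the truth
```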
New Approximation Bounds for Small-Set Vertex Expansion
The vertex expansion of a graph is a fundamental graph parameter. Given a
graph $G = (V, E)$ and a parameter $\delta \in (0, 1/2]$, its $\delta$-Small-Set
Vertex Expansion (SSVE) is defined as $\min_{|S| = \delta |V|} |N(S)|/|S|$, where $N(S)$ is
the vertex boundary of a set $S$. The SSVE problem, in addition to being of
independent interest as a natural graph partitioning problem, is also of
interest due to its connections to the Strong Unique Games problem. We give a
randomized algorithm running in time $n^{\mathrm{poly}(1/\delta)}$, which outputs
a set of size $O(\delta n)$ having vertex expansion at most $O(\sqrt{\mathrm{OPT} \log d})$, where $d$ is the largest
vertex degree of the graph and $\mathrm{OPT}$ is the optimal $\delta$-SSVE. The
previous best-known guarantees for this were the bi-criteria bounds of
$\tilde{O}(1/\delta)\sqrt{\mathrm{OPT} \log d}$ and $\tilde{O}(1/\delta)\,\mathrm{OPT}\sqrt{\log n}$ due to Louis-Makarychev [TOC'16].
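For intuition about the parameter itself, the following brute-force Python evaluation computes $\delta$-SSVE straight from the definition (exponential time, and it assumes the $|N(S)|/|S|$ form of the definition as reconstructed above; it is in no way the paper's algorithm):

```python
# Brute-force delta-SSVE: minimise |N(S)|/|S| over all sets of size delta*n.
from itertools import combinations

def vertex_boundary(adj, S):
    """Vertices outside S with at least one neighbour inside S."""
    S = set(S)
    return {v for u in S for v in adj[u]} - S

def ssve(adj, delta):
    k = int(delta * len(adj))          # candidate sets of size delta*n
    return min(len(vertex_boundary(adj, S)) / k
               for S in combinations(adj, k))

# 6-cycle: a contiguous arc of 3 vertices has vertex boundary of size 2.
C6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(ssve(C6, 0.5))  # -> 0.666..., attained by e.g. S = {0, 1, 2}
```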
Our algorithm uses the basic SDP relaxation of the problem augmented with
$\mathrm{poly}(1/\delta)$ rounds of the Lasserre/SoS hierarchy. Our rounding
algorithm is a combination of the rounding algorithms of Raghavendra-Tan
[SODA'12] and Austrin-Benabbas-Georgiou [SODA'13]. A key component of our
analysis is a novel Gaussian rounding lemma for hyperedges, which might be of
independent interest.
Approximation Algorithms and Hardness for $n$-Pairs Shortest Paths and All-Nodes Shortest Cycles
We study the approximability of two related problems on graphs with $n$ nodes
and $m$ edges: $n$-Pairs Shortest Paths ($n$-PSP), where the goal is to find a
shortest path between $n$ prespecified pairs of nodes, and All Node Shortest Cycles
(ANSC), where the goal is to find the shortest cycle passing through each node.
Approximate $n$-PSP has been previously studied, mostly in the context of
distance oracles. We ask the question of whether approximate $n$-PSP can be
solved faster than by using distance oracles or All Pair Shortest Paths (APSP).
ANSC has also been studied previously, but only in terms of exact algorithms,
rather than approximation. We provide a thorough study of the approximability
of $n$-PSP and ANSC, providing a wide array of algorithms and conditional lower
bounds that trade off between running time and approximation ratio.
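As a point of reference for these trade-offs, the exact baseline for $n$-PSP is one shortest-path computation per distinct source in the pair list (the sketch below assumes the networkx library); with $\Theta(n)$ distinct sources this costs $\Theta(nm)$ time on unweighted graphs, which is the kind of bound the approximation algorithms aim to beat.

```python
# Exact n-PSP baseline: one BFS per distinct source among the given pairs.
import networkx as nx

def n_psp_exact(G, pairs):
    """Shortest-path distance for each prespecified (source, target) pair."""
    sources = {s for s, _ in pairs}
    dist = {s: nx.single_source_shortest_path_length(G, s) for s in sources}
    return [dist[s].get(t, float("inf")) for s, t in pairs]

G = nx.cycle_graph(8)
print(n_psp_exact(G, [(0, 4), (1, 5), (2, 3)]))  # -> [4, 4, 1]
```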
A highlight of our conditional lower bound results is that for any integer
$k \ge 2$, under the combinatorial $4k$-clique hypothesis, there is no
combinatorial algorithm for unweighted undirected $n$-PSP with approximation
ratio better than $1 + 1/k$ that runs in $O(m^{2 - 1/k - \epsilon})$
time. This nearly matches an upper bound implied by the result of Agarwal
(2014).
A highlight of our algorithmic results is that one can solve both $n$-PSP and
ANSC in $\tilde{O}(m + n^{3/2 + \epsilon})$ time with approximation factor $2 + \epsilon$
(and additive error that is a function of $\epsilon$), for any
constant $\epsilon > 0$. For $n$-PSP, our conditional lower bounds imply that
this approximation ratio is nearly optimal for any subquadratic-time
combinatorial algorithm. We further extend these algorithms for $n$-PSP and
ANSC to obtain a time/accuracy trade-off that includes near-linear time
algorithms.
Quantum Garbled Circuits
We present a garbling scheme for quantum circuits, thus achieving a
decomposable randomized encoding scheme for quantum computation. Specifically,
we show how to compute an encoding of a given quantum circuit and quantum
input, from which it is possible to derive the output of the computation and
nothing else. In the classical setting, garbled circuits (and randomized
encodings in general) are a versatile cryptographic tool with many applications
such as secure multiparty computation, delegated computation, depth-reduction
of cryptographic primitives, complexity lower-bounds, and more. However, a
quantum analogue for garbling general circuits was not known prior to this
work. We hope that our quantum randomized encoding scheme can similarly be
useful for applications in quantum computing and cryptography.
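For readers unfamiliar with the classical object being generalized, below is a single-gate classical garbled circuit in Python (the standard textbook construction, with a simplified zero-padding marker in place of the usual point-and-permute optimization; the paper's quantum scheme is not shown here). Given one label per input wire, the evaluator learns the output label and nothing else.

```python
# Garble one AND gate: each wire gets a random label per bit value; the
# truth table is encrypted row by row under the matching input labels.
import hashlib, os, random

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def garble_and_gate():
    labels = {w: {b: os.urandom(16) for b in (0, 1)} for w in "abc"}
    table = []
    for x in (0, 1):
        for y in (0, 1):
            out = labels["c"][x & y]                 # AND of the two inputs
            pad = H(labels["a"][x], labels["b"][y])  # hash used as one-time pad
            table.append(bytes(p ^ o for p, o in zip(pad, out + b"\x00" * 16)))
    random.shuffle(table)                            # hide the row order
    return labels, table

def evaluate(table, ka, kb):
    """Recover the output label from one label per input wire."""
    pad = H(ka, kb)
    for ct in table:
        pt = bytes(p ^ c for p, c in zip(pad, ct))
        if pt.endswith(b"\x00" * 16):                # marker: the right row
            return pt[:16]
    raise ValueError("no row decrypted")

labels, table = garble_and_gate()
for x in (0, 1):
    for y in (0, 1):
        assert evaluate(table, labels["a"][x], labels["b"][y]) == labels["c"][x & y]
```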
To illustrate the usefulness of quantum randomized encoding, we use it to
design a conceptually-simple zero-knowledge (ZK) proof system for the
complexity class QMA. Our protocol has the so-called $\Sigma$ format
with a single-bit challenge, and allows the inputs to be delayed to the last
round. The only previously-known ZK $\Sigma$-protocol for QMA is due
to Broadbent and Grilo (FOCS 2020), which does not have the aforementioned
properties.